Month: June 2011

Mirroring – Things to Consider

DB Mirroring! A fancy term we have all been hearing since SQL Server 2005. Your DR server is often called a "Hot Standby" (real-time synch) when you are using DB mirroring, whereas it is considered a "Warm Standby" when your DR database is log shipped!
Gotcha – if you are implementing asynchronous mirroring mode, can you really consider this a Hot Standby? I would say our all-time favorite answer: "It Depends". Yes, it really depends, and I mean it.
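A quick way to check which mode each mirrored database is actually running in is to query sys.database_mirroring; a safety level of FULL means synchronous (a true hot standby), OFF means asynchronous:

```sql
-- List mirrored databases with their role, state, and operating mode.
-- mirroring_safety_level_desc: FULL = synchronous, OFF = asynchronous.
SELECT DB_NAME(database_id)          AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM   sys.database_mirroring
WHERE  mirroring_guid IS NOT NULL;   -- only databases actually mirrored
```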

PS I'm not going to explain how to configure DB mirroring, the modes of mirroring, or failover options in this blog post. That will be covered in another blog post under the same label, "Mirroring".

So…….
First and foremost, are the Principal and Mirror servers exactly identical in every aspect of Windows and SQL Server? This is a must to avoid surprises when a real disaster hits your data center and everyone expects your DR database to behave exactly the same as the production database in the event of a Prod DB failure (disaster).
There's a well-known, so-called best practice saying that mirroring works best for OLTP databases and log shipping works best for OLAP databases. I strongly disagree with this statement.
So, what would I really need to consider in terms of the database(s)?
Instead of differentiating OLAP/OLTP, try to understand the log record generation rate (in other words, your transaction log growth) and the size of your individual transactions. Yes, very frequent huge bulk imports will lead to many issues with database mirroring.
IMHO, this is something which, as the DBA for your application, you must know; it also covers effective T-log maintenance. (Important – no matter how good a network you are on, if your LDF is not properly maintained, your mirroring session won't work as you expect and your principal (Prod) database will be in jeopardy!)

Network:
DB mirroring is highly dependent on the network (it works over TCP endpoints), so you should be on a high-bandwidth, low-latency network. The lower the latency between your Prod and DR data centers, the more efficiently your mirroring works.
LAN vs MAN vs WAN: obviously a LAN provides the best performance in terms of latency.
Typical RTTs (round-trip times) are as follows:
MAN – 2 ms to 20 ms
WAN – more than 200 ms

Monitor: 
You have to keep a constant eye on your mirroring state changes, and you should get notified via whatever alerting tool(s) you have in place in your enterprise. You can also create manual alerts using a simple T-SQL script via SQL Agent (you need a Database Mail profile enabled on your SQL Server).
You also have to keep an eye on the number of VLFs (Virtual Log Files) in your principal database; in other words, keep an eye on your transaction log. The VLF concept is beyond the scope of this blog post and will be explained in later posts.
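As a rough sketch, the undocumented DBCC LOGINFO returns one row per VLF, so the row count tells you how many VLFs the log has (database name here is illustrative):

```sql
-- Undocumented: returns one result row per VLF in the given database.
DBCC LOGINFO ('YourPrincipalDB');
-- A row count in the thousands usually means the log grew in many
-- small increments and could use some maintenance.
```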
Extremely important: SQL Server will not truncate your transaction log if log records have not been sent to your mirror database, even if you are taking frequent log backups on your principal database. Log records will not be copied to your DR database as long as your mirroring session is SUSPENDED/PAUSED or DISCONNECTED, which can grow your transaction log to the point where the drive has no room left for it, and users start screaming, "I am not able to do any inserts/updates/deletes; it's an outage for our application. What's happening?"
So... again, I want to stress: always keep a special eye on databases participating in DB mirroring. How and where to identify that mirroring is preventing log truncation will be covered in a later blog. Quick tip – run SELECT name, log_reuse_wait_desc FROM sys.databases. If you see anything saying "MIRRORING" in the result set beside your database, that's it 🙂
Always monitor the unsent portion of the log on the principal and the redo portion of the log on the mirror. You can set up all kinds of cool thresholds from the Database Mirroring Monitor in SSMS and configure your own alerts accordingly.
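For the SQL Agent alert idea mentioned above, a minimal sketch could look like this (the mail profile and recipient are illustrative, and Database Mail must already be configured):

```sql
-- Alert if any mirroring session has left a healthy state.
IF EXISTS (SELECT 1
           FROM sys.database_mirroring
           WHERE mirroring_guid IS NOT NULL
             AND mirroring_state_desc NOT IN ('SYNCHRONIZED', 'SYNCHRONIZING'))
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'DBA_Mail',                 -- illustrative profile name
         @recipients   = 'dba-team@yourcompany.com', -- illustrative recipient
         @subject      = 'DB Mirroring state alert',
         @body         = 'One or more mirrored databases are suspended, paused, or disconnected.';
END
```

Schedule it every few minutes as a SQL Agent job and you have a poor man's mirroring monitor.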

Last but definitely not least – practice your DR scenario as many times as you can. Convince your management and your application partners to hold frequent DR (Disaster Recovery) drills, aka BCP (Business Continuity Plan) exercises.

SSMS – Tweaks

SQL Server Management Studio – the outstanding UI provided by Microsoft for managing, administering, and developing code in (and for) SQL Server environments.
As a DBA, how much time would you spend in SSMS on a daily basis? I would say 90-95%! Well, we sometimes use Configuration Manager as needed to troubleshoot a few things (will be covered in another post).
As a SQL Server developer, how much time would you spend in SSMS on a daily basis? I would say close to 100%.
Note: I've also seen a few application teams still using TOAD to develop T-SQL. I'm not covering TOAD because I've never used it (why would/should I, if I'm already using the fabulous SSMS? 🙂)
PS BI (Business Intelligence) developers spend more time in BIDS (Business Intelligence Development Studio).
A quick question – how many of you have explored all the features (or at least a few of the available features) in SSMS?
Will the default settings of an SSMS installation suffice for our daily requirements? Of course "yes" in most cases. But I'm pretty sure that after reading this blog completely, you'll definitely try exploring these options (if you haven't already).
Okay! Now it’s time to explore…..
I'm assuming you've already got SSMS 2008 installed on your machine(s) and have a basic knowledge of why and how SSMS is used. I'll explain a few scenarios most of us will find interesting.
Scenario 1: Selecting records using the GUI.
Most of the time we use the right-click "Select Top 1000 Rows" and "Edit Top 200 Rows" options just to get a sample result set. So do we really need 1000 rows? How about changing it to 100 or even 10 records? Cool, huh!
How to edit that?
Go to Tools – Options – SQL Server Object Explorer – Commands. Change the default values in the right-hand pane to the desired values. That's all we have to do!
In this example, I've changed the Select value from 1000 to 10.



Scenario 2: Have you ever wished for IF NOT EXISTS logic to be included automatically whenever you script any object out of your database? SSMS doesn't do that by default, but we can change that setting as shown below.
Now if we script an object in SSMS, it looks something like the code snippet below.
USE [AdventureWorks]
GO
/****** Object:  Table [Person].[Address]    Script Date: 06/26/2011 12:43:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Person].[Address]') AND type IN (N'U'))
BEGIN
CREATE TABLE [Person].[Address](
                [AddressID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
                [AddressLine1] [nvarchar](60) NOT NULL,
                [AddressLine2] [nvarchar](60) NULL,
                [City] [nvarchar](30) NOT NULL,
                [StateProvinceID] [int] NOT NULL,
                [PostalCode] [nvarchar](15) NOT NULL,
                [rowguid] … bla bla
Play around and get familiar with the rest of the scripting options on that page.
Scenario 3:
Editing your tables using SSMS.
Let's say you are adding a new column to the middle of your table. (Useless in most cases, since all that matters to SQL is the order in which you pull your columns in your SELECT statement; by default a new column is added at the end of the table.) We do that by right-clicking the table and selecting Design. But SSMS won't let us; it throws an error because this change generally requires the table to be dropped and re-created. So what to do?
Just un-check "Prevent saving changes that require table re-creation" and that's it! Check Books Online to find all the changes that require table re-creation in SQL Server.
Caution: for huge tables it may time out, and I've seen a few cases where SSMS crashed. I recommend you script out the changes (the Generate Change Script button at top left while you are making any change) and review the code to see what SSMS is really doing for you behind the scenes, instead of just clicking Next-Next-OK to make table changes. Believe me, you'll never regret doing so.
Scenario 4: Shortcut keys in your SSMS.
Most of us DBAs use a set of commands on a very regular basis: for example sp_who2, the wait-stats DMV, the dm_exec_requests DMV, and so on. So how about saving those regularly used statements as keyboard shortcuts instead of typing the entire query manually over and over?
You can define your queries as keyboard shortcuts in SSMS as shown below.
PS sp_who is the default for Ctrl+1.
Scenario 5: Annoying tab width in the query pane.
How many query panes can you see in your SSMS at once? 4, 5, or 6... at most I'd say 7 if you get rid of Object Explorer, Registered Servers, etc. The rest of the query panes appear in a drop-down menu at top right, which is really annoying if you are dealing with many things at once (which most of us do as part of our job).
See below for what I mean...

Correction – that's just the SPID, not the SPID number, in the above screenshot. Forgive me, I'm not going to edit that screenshot now.

So, how do we decrease the tab width by removing all the unnecessary details SSMS includes by default?
Once I clicked OK, you can see my SSMS accommodating a greater number of tabs.
These are a few options you can tweak in your SSMS today. There are many more cool features and options hiding in SSMS that may have escaped your notice. It's time to explore all those features MSFT has added for us, making our lives easy 🙂

PerfMon

Today I'm going to give a brief overview of collecting performance metrics using Perfmon. (For a better picture of what exactly is happening with your SQL Server, you may need to use SQL Profiler along with Perfmon, and of course you should consider using DMOs (Dynamic Management Objects), available starting with SQL Server 2005.)
Enough intro 🙂
As a DBA you must (and will) use Perfmon if you want to gather real metrics on your server (both the physical server and SQL Server). But if you are new to Perfmon, I'm sure you'll be confused by the number of counters and metrics it offers. Make sure to familiarize yourself with at least the important counters we as DBAs need to capture server health. Consider opening Perfmon from your client machine, or choose a DEV server (ideally sitting in the same data center), and capture your production server metrics from there.
Familiarize yourself with scheduling counter logs and analyzing what they capture.

What format should you choose for your counter logs?
You can leave it at the default (BLG) and make use of the PAL tool (a free and fabulous tool for analyzing your performance logs). Please note Log Parser is a prerequisite for installing PAL. Again, don't install these tools on your servers; make use of your client machine(s).
or
CSV – if you are addicted to Excel, you can save to a CSV file, open it with Excel, and slice and dice the results accordingly (you can get as fancy as you like with cool graphs, color coding, and what not).
Get familiar with Windows Server 2003/2008/2008 R2, as most shops out there are still running Windows Server 2003.

Okay... enough blabbering; let's get straight to the point.

Counters we must monitor for SQL Server, at a minimum:
Processor – % Processor Time
System – Processor Queue Length
Memory – Available MBytes – ideally should always be > 1 GB (at least 750 MB)
Memory – Page Faults/sec – look for constantly high numbers; that's something we don't want.
Paging File – % Usage
SQLServer: Buffer Manager – Page life expectancy – the higher the number, the better.
SQLServer: Buffer Manager – Page reads/sec and Page writes/sec. Note – these are physical reads, not logical reads.
SQLServer: Memory Manager – Memory Grants Pending – sessions waiting for memory to be granted. This should ideally always be close to zero.
SQLServer: SQL Statistics – Batch Requests/sec
SQLServer: SQL Statistics – SQL Compilations/sec
SQLServer: SQL Statistics – SQL Re-Compilations/sec
Physical Disk – Avg. Disk sec/Read – the average time, in seconds, of a read of data from the disk
Physical Disk – Avg. Disk sec/Write – together with sec/Read, ideally < 8 ms on non-cached disks and < 1 ms on cached disks
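Many of the SQLServer:* counters above can also be read from inside SQL Server, which is handy for a quick spot check (2005 and later):

```sql
-- Spot-check a few SQL Server Perfmon counters from T-SQL.
SELECT object_name, counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name IN ('Page life expectancy',
                        'Batch Requests/sec',
                        'Memory Grants Pending');
-- Note: the */sec counters are cumulative raw values in this DMV;
-- sample twice and take the difference to get an actual rate.
```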
I would say disk I/O metrics are a little tricky and sometimes misleading (it really depends on what storage you are on; I'll get into more specifics below).
For example, your Perfmon says you have waits for disk reads/writes. Does this always mean you need faster or more disks?
A big screaming Noooo! It could be paging, in which case the real culprit is memory, not your disks.
Also, are your disks DAS or SAN? Let me make something very clear: Windows designed the Perfmon disk metrics for your direct-attached storage, not for your SAN storage.
For example, let's consider the Disk Queue Length metric:
Disk queue length will vary... it actually depends on how old or new your drives are, and do we really even know how many disks are present in our array?
All modern drives come with read-ahead/read-behind, which increases disk queue length, and that is really not a problem. If we don't have any regular long-duration I/O stalls in SQL Server (we can find out using the virtual file stats DMO), we are fine!
Read-ahead/read-behind:
When executing a read from the disk, the disk arm moves the read/write head to (or near) the correct track, and after some settling time the head begins to pick up bits. Usually, the first sectors to be read are not the ones that have been requested by the operating system. The disk's embedded computer typically saves these unrequested sectors in the disk buffer, in case the operating system requests them later.
Disk Buffers:
Disk controllers: the disk buffer is usually quite small, from 2 to 32 MB for most of the drives out there on the market.
Disk array controllers, as opposed to disk controllers, usually have cache memory of around 0.5 to 8 GB.
Typically, the Perfmon disk counters are designed for DASDs, not for SAN disks. For a SAN, ideally we should go with the vendor-provided monitoring tool.
I personally feel that monitoring the sec/read and sec/write and reads/sec and writes/sec counters in Perfmon, plus the I/O stalls from the virtual file stats DMO in SQL Server, gives us a much better picture of the disks (especially in the present days of SAN).
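The virtual file stats check I keep mentioning is a simple DMF query; this sketch ranks database files by total I/O stall time:

```sql
-- I/O stalls per database file (2005+; NULL, NULL = all databases, all files).
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms,
       vfs.num_of_reads,
       vfs.num_of_writes
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN   sys.master_files AS mf
       ON  mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

Files that consistently top this list with large stall numbers are the ones worth chasing, whatever Perfmon says.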
For Newbies
DAS – Direct Attached Storage
SAN – Storage Area Network

Simulate 824 IO Error

Simulating an 824 I/O error in SQL Server 🙂. Sounds cool, huh! Not really. Trust me, you will never say 823/824 (and the more often ignored informational Sev 10 825 message) is cool once you encounter it on your production database! Okay... enough scaring you people.
I tried to play a little bit with my test databases on my test server simulating these errors and I would like to share it with my peers.
As I posted in my previous post, I learnt this (in fact, most of the SQL internals I know) from Paul Randal's presentations and demos. Again... trust me, if you want to dig into the real internals of SQL Server, Paul Randal is the one for you. I will post my personal favorites in the SQL Server world in a future blog (no disrespect to the many other fabulous MCMs and MVPs around the world). I admire Paul the most... that's it!
Note: you should not try this on any server anywhere near your domain. Try this on your own personal laptop, just to get familiar with these errors instead of being surprised on your servers. I have to warn you all: sometimes it even takes reinstalling SQL Server!!
Prereqs/Links/References:
HEX EDITOR XVI32 – you can also use your favorite editor for messing with your MDF/NDF/LDF files.
Ok, It’s time to corrupt your Database (yes, you heard it right ;p)
Let's choose the "AdventureWorksLT" database as the victim for our experiment. You can download that database from codeplex.com.
Let's choose the Product table and the PK_PRODUCT_PRODUCTID clustered index. Before anything else, we have to identify which underlying pages store that clustered index. For that we can use DBCC IND.
DBCC IND('AdventureWorksLT', 'SalesLT.Product', 1) -- using 1 since I'm targeting the clustered index. From the results pane, choose any page you want.
I chose PagePID = 500 as my target page. As I mentioned in my last post under the DBCC label, you can use DBCC PAGE to look at the underlying contents of that page. You also need to find the starting byte offset of the page you've chosen (in my case, poor page 500 ;p) to change/edit/damage it using the editor. That comes to 4,096,000 for me (since each page is 8 KB: 500 × 8192). Use your own math for your page!
Let's do that, just as an FYI. (I'm still learning how to exactly interpret the entire DBCC PAGE output.)
Now, in order to access the database file(s), you have to take the respective database offline; otherwise SQL Server holds a lock on the file and you can't edit it successfully with the hex editor.
It’s time to corrupt your Page:
Take the DB offline. (Your DB might not go offline if it still has open connections in some cases.) Open your MDF file with the hex editor.
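A minimal sketch of the offline/online steps (run from master; ROLLBACK IMMEDIATE kills any open connections that would otherwise block the offline):

```sql
-- Take the test database offline so the MDF can be opened in a hex editor.
ALTER DATABASE AdventureWorksLT SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- ... edit the file with XVI32, save, then bring it back:
ALTER DATABASE AdventureWorksLT SET ONLINE;
```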
Search for your offset value, in my case 4096000. I changed my values from 0 0 0 0 to x x x x (FYI, I changed columns 12 to 16 in the highlighted row), saved the file, and brought the database online! Voilaaaa... that's it. You've introduced corruption into your database.
Note: don't choose page 9 of the first file in your database. That's the boot page, and it is the most important piece of your database. The DB may not come online if you mess around with page 9 of file 1.
Now It’s time to Test your work:
Run this query on your database; it's nothing but hitting that index with a SELECT statement.
SELECT TOP 1000 [ProductID],[Name],[ProductNumber],[Color],[StandardCost],[ListPrice],[Size]
      ,[Weight],[ProductCategoryID],[ProductModelID],[SellStartDate],[SellEndDate],[DiscontinuedDate]
      ,[ThumbNailPhoto],[ThumbnailPhotoFileName],[rowguid],[ModifiedDate]
  FROM [AdventureWorksLT].[SalesLT].[Product]
Hohooooooooo... in the results pane I'm seeing:
Msg 824, Level 24, State 2, Line 2
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x3be14ad4; actual: 0x071b76e8).
 It occurred during a read of page (1:500) in database ID 12 at offset 0x000000003e8000 in file
 ‘C:\Program Files\Microsoft SQL Server\MSSQL10_50.PROD\MSSQL\DATA\AdventureWorksLT_Data.mdf’.
 Additional messages in the SQL Server error log or system event log may provide more detail.
 This is a severe error condition that threatens database integrity and must be corrected immediately.
 Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors;
 for more information, see SQL Server Books Online.

Let’s run CHECKDB on this Database:
DBCC CHECKDB ('AdventureWorksLT') WITH NO_INFOMSGS, ALL_ERRORMSGS
Results:
Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 229575856, index ID 1, partition ID 72057594039828480, alloc unit ID 72057594041270272 (type In-row data). Page (1:499) is missing a reference from previous page (1:500). Possible chain linkage problem.
Msg 8928, Level 16, State 1, Line 1
Object ID 229575856, index ID 1, partition ID 72057594039828480, alloc unit ID 72057594041270272 (type In-row data): Page (1:500) could not be processed.  See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 229575856, index ID 1, partition ID 72057594039828480, alloc unit ID 72057594041270272 (type In-row data), page (1:500). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 12716041 and -4.
Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 229575856, index ID 1, partition ID 72057594039828480, alloc unit ID 72057594041270272 (type In-row data). Page (1:500) was not seen in the scan although its parent (1:182) and previous (1:501) refer to it. Check any previous errors.
CHECKDB found 0 allocation errors and 4 consistency errors in table ‘SalesLT.Product’ (object ID 229575856).
CHECKDB found 0 allocation errors and 4 consistency errors in database ‘AdventureWorksLT’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (AdventureWorksLT).

And of course your error log will scream with a whole bunch of stack dumps, as you can see below.

That's it, folks! Pretty simple, huh!!
Play with the DBCC IND and DBCC PAGE commands and get familiar with them if you want to become a geek.
Note: these are undocumented commands – use at your own risk! Just to remind you again: you should not practice this on any server in your domain. In other words, if you care about that server/database, don't play with it using the above demo.

MSDTC- SQL2005/Win2003 Cluster

Let me share one of the experiences I had a couple of weeks ago with my 4-node Active-Active cluster. Our non-prod environment is a 4-node A-A cluster with SQL Server 2005 SP3 (don't ask me why we are still on SQL 2005, and even more, why on SP3 ;))
Folks out there who deal with a lot of linked servers and distributed queries might already have seen this behavior with failover of MSDTC on SQL Server 2005 sitting on Windows Server 2003.
Well, we all know our Windows administrators roll out patches and hotfixes to servers monthly, or at least once every couple of months. Due to the patching they had to restart the Windows servers; I'm not sure in what order they did it, and the next day developers were standing behind me saying, "My distributed queries stopped working. What's happening?"
When I checked, the MSDTC service was running on Node 2 without any issues. I tried failing over just the MSDTC service (instead of the entire SQL group) from Node 2 to Node 1 and back to Node 2.
Well, this made the distributed transactions run from Node 1 with no issues, but not from Node 2 (in fact, not from the rest of the nodes).
When I looked at the SQL error logs on Node 2, I saw the log filled with: "SQL Server could not register with Microsoft Distributed Transaction Coordinator (MS DTC) as a resource manager for this transaction. The transaction may have been stopped by the client or the resource manager."
Everything looked pretty normal in CLUADMIN for the SQL Server and MSDTC services. So at this point I suspected the order in which the services were brought online when the Windows team restarted the servers (nodes, in this case).
So this time I made sure the DTC service started before the SQL Server service was brought online, so that SQL wouldn't fail to register itself and DTC transactions would initiate properly. Voilaaa... this made it work 🙂
Actually, I got this clue from one of the MSDN blogs while I was Binging the issue. So I'd say: don't just bang your head against a wall when you see an issue. Look at your SQL Server error logs (my favorite and first place to look for any SQL errors), or go to the Microsoft blogs and seek help. There are tons of tips and material out there for SQL Server (it's up to you whom to trust and whom not to).

CHECKDB in ErrorLogs on STARTUP

A couple of days ago, one of my colleagues noticed messages in the error logs saying "CHECKDB for database completed" for the database(s) when she restarted the SQL Server services, and she asked me how come SQL Server runs CHECKDB in a matter of seconds when the services are restarted, whereas it takes hours and hours when we run manual CHECKDBs. (FYI, a few of our databases on this instance are 450+ GB in size.)
I really had no answer at that moment.
After refreshing my mind, I recalled Paul Randal's statement that there is nothing in SQL Server which automatically runs CHECKDB for you, and that crash recovery of SQL Server won't run DBCC CHECKDB.
IMHO, Paul is the best resource (I mean it) for all your questions and doubts regarding CHECKDB (after all, he's the one who wrote that code). I follow whatever he says about SQL Server without any hesitation, and I'm a proud reader of his blog 🙂
Well, so... this made me research a little further into what's really happening behind the scenes!
Before going any further I want to say a little bit about Boot Page.
FYI, the boot page is the one that stores the most critical pieces of information about any database, and the boot page is always page 9 of the first file in your database. If your boot page is corrupted, your database is literally gone! Can you imagine SQL Server running without the master database, or with the master database corrupted? Nooooooooooo, right?? Similarly, you can't imagine any database with a corrupted boot page. Hope that was a good analogy 😉
So, why did I mention the boot page first of all? Because that's the place where you can find when the last clean DBCC ran against your database.
But how do we look at that? Using DBCC PAGE.
I'll check on my master database.

Well, it returned nothing! You have to enable trace flag 3604 to dump the results to your results pane, or 3605 to dump the results to your error log.
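With the trace flag on, dumping the boot page looks like this (both commands are undocumented, so test server only):

```sql
DBCC TRACEON (3604);            -- send DBCC output to the client connection
DBCC PAGE ('master', 1, 9, 3);  -- file 1, page 9 (the boot page), print option 3
-- Look for dbi_dbccLastKnownGood in the output.
```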




So it's saying dbi_dbccLastKnownGood = 2011-06-20 20:04:48.617.
If you are not comfortable with DBCC PAGE, you can make use of DBCC DBINFO('database'). Please note that even this requires trace flag 3604/3605 to be enabled. If you want to get the results in a tabular format, you can make use of WITH TABLERESULTS.
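The DBINFO alternative, as a sketch:

```sql
DBCC TRACEON (3604);
DBCC DBINFO ('master') WITH TABLERESULTS;
-- The row whose Field column is 'dbi_dbccLastKnownGood' holds the
-- date/time of the last clean CHECKDB.
```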
Now I've restarted my SQL Server; let's see what's in our error logs.

If you read it thoroughly, it's basically saying that DBCC was last run on the master database at 2011-06-20 20:04:48.617, not at the current time when I restarted SQL Server. SQL is pulling that information from the boot page of the respective database.

So I learnt a lesson: we have to keep a very close eye on the date and time in the error log, besides the content, which we very often ignore for informational messages!
Note: DBCC PAGE, DBCC DBINFO, and WITH TABLERESULTS are undocumented. If you are not comfortable using them, please don't use them on your SQL Servers.

Scary Saturday – DBCC UPDATEUSAGE

Signing In – 9.00 AM EDT


As I always say, always be prepared for the worst as a DBA. I had to wake up at 6:30 AM; the DR exercise was scheduled to begin at 7:00 AM and I was preparing for it 😦
Distraction – got an email with the message "Consistency Checks Job Failed" from one of my other servers. Aaaah... what a start 😦 Time not to PANIC!! Well, time to prepare mentally. I made my cup of tea and had a phone conversation with one of my friends, which of course kicks some stress out of your poor brain, especially early in the morning!
Once my DR SQL Server was up and running and all the databases were online (FYI, we have SRDF as our disaster recovery strategy for this server/application – it's a dedicated box), I went to the Consistency Checks job for error messages. Interestingly, it was yelling: "Msg 2508, Level 16, State 3, Line 1
The In-row data RSVD page count for object “XXXXX”, index ID 0, partition ID 44405715894272, alloc unit ID 44405715894272 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.
CHECKDB found 0 allocation errors and 1 consistency errors in table ‘XXXXX’ (object ID 677577452).
CHECKDB found 0 allocation errors and 1 consistency errors in database ‘XYZ’.”



My first reaction after looking at this error message was: thank God! Well, why would I thank God when my consistency checks job had failed? Because this is not a threatening condition for your database, and I was expecting far worse.
So what is this error all about?
Well, SQL Server 2000 used to update the page-space-used metadata in a way that is no longer maintained starting with SQL Server 2005 RTM, so counts carried over from 2000 can be inaccurate. If your database was migrated from SQL Server 2000, you might not see this error until you run CHECKDB on that database for the first time. Running DBCC CHECKDB won't fix the issue (in our case we are running on SQL Server 2008 R2 RTM x64); you have to run DBCC UPDATEUSAGE, as the message from CHECKDB says.

Now I remember one of my colleagues restoring a SQL 2000 database on this new server. Well, he forgot to run DBCC UPDATEUSAGE (and DBCC CHECKDB WITH DATA_PURITY) once he migrated (upgraded) the database to the new server. Umm... again reminding myself, "well, this is not a dream where you'll be the only one who touches a given SQL Server".
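The fix itself is a one-liner (database name 'XYZ' as in the CHECKDB output above; COUNT_ROWS additionally verifies row counts but is slower on large tables):

```sql
-- Correct page-count metadata for every object in the database.
DBCC UPDATEUSAGE ('XYZ');

-- Or additionally verify/repair row counts:
DBCC UPDATEUSAGE ('XYZ') WITH COUNT_ROWS;
```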
UPDATEUSAGE Results:
I've truncated the message and changed the user object names.
DBCC UPDATEUSAGE: Usage counts updated for table ‘DDD’ (index ‘XXXXLINK’, partition 1):
USED pages (In-row Data): changed from (1358) to (1354) pages.
RSVD pages (In-row Data): changed from (2726) to (1369) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘DDD’ (index ‘XYZXUSER’, partition 1):
RSVD pages (In-row Data): changed from (2312) to (2313) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘DDD’ (index ‘XACCNO’, partition 1):
USED pages (In-row Data): changed from (1580) to (1576) pages.
RSVD pages (In-row Data): changed from (3172) to (1593) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘DDD’ (index ‘XXRECID’, partition 1):
USED pages (In-row Data): changed from (1373) to (1369) pages.
RSVD pages (In-row Data): changed from (2765) to (1393) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘SYSLOG’ (index ‘SYSLOG’, partition 1):
DATA pages (In-row Data): changed from (4015) to (4016) pages.
USED pages (In-row Data): changed from (4048) to (4017) pages.
RSVD pages (In-row Data): changed from (102408) to (5137) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘SYSLOG’ (index ‘SYSDATE’, partition 1):
RSVD pages (In-row Data): changed from (1104) to (1105) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘SYSLOG’ (index ‘SYSSITEID’, partition 1):
RSVD pages (In-row Data): changed from (1528) to (1529) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘SYSLOG’ (index ‘SYSPID’, partition 1):
RSVD pages (In-row Data): changed from (1664) to (1665) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘SYSLOG’ (index ‘SYSRECID’, partition 1):
RSVD pages (In-row Data): changed from (1969) to (994) pages.
DBCC UPDATEUSAGE: Usage counts updated for table ‘EEE’ (index ‘EEE’, partition 1):
Once done, my DBCC reported no issues, and now I have a good reason to enjoy the rest of my day 🙂
BTW, my DR test is going pretty smoothly as of now 🙂 Happy vibes... Wohoooooooooo!