SQL Server – Auto update stats part 2


Last week I posted a blog about auto update stats, and this is a sequel to that post. In case you haven’t read it, you should read it first. If you tried the script from last week, you noticed that:

– the statistics didn’t update when the data was added
– the statistics didn’t update when you selected from the table, unless you recompiled the procedure
– the less accurate the statistics were, the larger the difference between the estimated and actual number of records
– the statistics were updated when you recompiled the procedure, selected from the table, and at least 20% of the data had changed
– the statistics were updated less frequently as more data was added: 20% of 100 000 records is a lot more than 20% of 10 000 records

If this had been a real-world scenario, you would probably have run into serious performance problems as the statistics became inaccurate and the procedure got a new execution plan for some reason. The execution plan may get flushed from the cache due to memory pressure, and the procedure then gets compiled with the assumption that you have fewer records than you actually have. The problem gets even worse when the data in the index is ever-increasing and you want to read the most recently added records. Let’s assume you want to get the last 19% of a table with 100 000 000 records and the statistics are out of date. You want to read 19 000 000 records, but the SQL Server statistics estimate the number of records as 1! That’s a big difference, and SQL Server will give you a plan that uses a nested loop instead of a merge or hash join when joining to a second table. The nested loop gets executed 19 000 000 times instead of the estimated 1.

Should you depend on auto update stats? Well, the short answer is “it depends”. I guess you’ve heard that before. In most cases auto update stats is a good thing, but in some cases you have to force the statistics update yourself. One good thing is that statistics are updated when you rebuild your indexes, BUT they are not updated when you reorganize the indexes. I guess some of you always rebuild all your indexes on a regular basis, and some of you use a smart script that rebuilds or reorganizes your indexes depending on the fragmentation. If you are using a smart script you can get into serious trouble, because not every index gets fragmented! An index with ever-increasing data, like an identity column, adds data to the last page and creates a new one when it’s full. And if the index doesn’t get fragmented, it will not be rebuilt by your script and the statistics will not be updated.
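To see whether you are affected, you can check when each statistics object was last updated and how many rows have changed since then. Here is a minimal sketch, assuming a hypothetical table named dbo.Orders and SQL Server 2008 R2 SP2 or later (which is when sys.dm_db_stats_properties became available):

-- List each statistics object on the table, when it was last
-- updated, and the number of row modifications since that update.
SELECT  s.name AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID('dbo.Orders');  -- hypothetical table name

A high modification_counter combined with an old last_updated date on an index that never gets fragmented is exactly the situation described above.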

If this is your case, you have a few options:

– Schedule a regular update statistics job
– Add update statistics to a procedure that you know depends on fresh statistics (see the sketch after this list)
– Add trace flag 2371 to change the threshold for auto update stats. The threshold will then scale with the amount of data in the table. You can read more about the trace flag here
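As an example of the second option, here is a minimal sketch of a procedure that refreshes the statistics before querying. The procedure, table, and column names are hypothetical:

CREATE PROCEDURE dbo.GetRecentOrders  -- hypothetical procedure
AS
BEGIN
    -- Force a statistics update before the query so the optimizer
    -- sees the newly added rows. FULLSCAN reads every row; consider
    -- the default sampling instead if the table is very large.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    SELECT TOP (1000) *
    FROM   dbo.Orders
    ORDER  BY OrderID DESC;  -- ever-increasing identity column
END;

Because the statistics update invalidates cached plans that reference the table, the query gets recompiled with fresh estimates on the next execution.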

Enable the trace flag with DBCC TRACEON(2371). Execute the code from my previous post, compare with the previous results, and you can see that the statistics are updated more frequently. I have seen some terrible issues with bad statistics, like a procedure that took about 7 hours to execute. After adding update stats in the procedure, it took only a couple of minutes.
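Note that trace flag 2371 has global scope, so the -1 argument is what you typically want when enabling it with DBCC:

-- Enable trace flag 2371 globally (the -1 argument) so the dynamic
-- auto update stats threshold applies to the whole instance.
DBCC TRACEON(2371, -1);

-- Verify that the flag is enabled and check its scope.
DBCC TRACESTATUS(2371);

To make the flag survive a restart, add -T2371 as a startup parameter instead. On SQL Server 2016 and later the dynamic threshold is the default behavior under compatibility level 130, so the flag is mainly relevant on older versions.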

If you are having performance issues with your SQL Server, call us and our SQL Server consultants will do what they can to investigate and solve your issues.