SQL Server statistics – memory request


In my previous blog post, I covered some issues caused by incorrect statistics, and in this post I’ll present another one. The optimizer uses the statistics to choose an execution plan that is as good as possible, but it also uses the row counts in the statistics to request memory for the query.

If the estimated row count is higher than the actual number of rows, the optimizer tells the SQL engine to request more memory than necessary, with possible memory pressure for other queries as a result.

But the worst case is when the estimated row count is too low, because then the SQL engine will not request enough memory to process the query. The memory grant is fixed once the query starts, so the engine cannot ask for more at runtime, and the excess data is spilled to tempdb, which is far slower than processing in memory.
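An underestimated row count usually comes from stale statistics, so refreshing them is the first thing to try. A minimal sketch (dbo.MyTable is a placeholder table name, not from any real schema):

-- dbo.MyTable is a hypothetical table; WITH FULLSCAN reads every row
-- instead of sampling, giving the most accurate row counts.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

Running it with FULLSCAN is slower than the default sampled update, so on very large tables you may prefer the default sampling first and only escalate if the estimates are still off.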

If you are running a slow query, you can investigate the memory request for that query by using the following code:

SELECT der.[session_id],
       dest.[text] AS [sql_text],
       deqp.[query_plan],
       deqmg.[requested_memory_kb],
       deqmg.[granted_memory_kb],
       deqmg.[max_used_memory_kb]
FROM sys.[dm_exec_requests] AS der
CROSS APPLY sys.[dm_exec_sql_text](der.[sql_handle]) AS dest
CROSS APPLY sys.[dm_exec_query_plan](der.[plan_handle]) AS deqp
LEFT JOIN sys.[dm_exec_query_memory_grants] AS deqmg ON
der.[session_id] = deqmg.[session_id]
AND der.[request_id] = deqmg.[request_id]


If max_used_memory_kb is much less than requested_memory_kb, the optimizer has overestimated the row count and requested more memory than needed.
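As a quick check across all current sessions, you can compare the requested and used figures directly in the memory-grants DMV. The columns below are the real DMV columns, but the 50 percent threshold is only an illustrative cutoff:

-- Sessions whose queries have used less than half of the memory they requested.
-- The divisor of 2 is an example threshold; tune it to your workload.
SELECT [session_id],
       [requested_memory_kb],
       [granted_memory_kb],
       [max_used_memory_kb]
FROM sys.[dm_exec_query_memory_grants]
WHERE [max_used_memory_kb] < [requested_memory_kb] / 2;

Note that max_used_memory_kb grows as the query runs, so a long-running query that has just started can show up here without actually over-requesting.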

You can force queries to request more memory than they otherwise would by raising the “min memory per query (KB)” server setting with this command:

EXEC sp_configure 'min memory per query (KB)', 2048
RECONFIGURE

The default is 1024 KB, but it might be worth changing if you know for sure that most of your queries require a lot more and you have plenty of RAM.
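Before changing it, you can inspect the current configured and running values. A small sketch, assuming the option has not yet been exposed (“min memory per query (KB)” is an advanced option, so it must be made visible first):

-- 'min memory per query (KB)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Called with only the option name, sp_configure shows the
-- config_value and run_value without changing anything.
EXEC sp_configure 'min memory per query (KB)';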


If you have performance issues, feel free to contact any of our SQL Server experts and we will help you.