The release cycle of SQL Azure is four times a year, and new functionality is…
I really miss a lot of the DMVs that I normally use for performance tuning, and I hope Microsoft will add more DMV support. When you build a database, you often need to compare the performance of two similar T-SQL statements or procedures to find the fastest one, and to get an honest result you need to flush the cache so you can compare them with a "cold" cache. In SQL Server 2008 you can use DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS, but in SQL Azure these statements don't exist, so you can't flush the cache. On the other hand, you shouldn't use them in production anyway!

But how should I compare the statements then? I don't have a SQL Azure development server, and even if I had one, I couldn't flush the cache to get a cold cache. You could of course develop on SQL Server 2008 and hope that you only use supported T-SQL statements. Well, I guess that's not gonna happen. If you want to compare two statements against each other in SQL Azure, you can't rely on the execution time because of the hot cache. Instead, you need to look at the execution plan and the execution stats to find out which of them uses fewer IO operations and less CPU worker time.
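For reference, this is the kind of cache flush you would run on an on-premises SQL Server 2008 development box before timing a statement. Again: none of this is available in SQL Azure, and you should never run it against a production server.

```sql
-- SQL Server 2008 (on-premises) only; not supported in SQL Azure.
-- Flush dirty pages to disk first, so DROPCLEANBUFFERS can empty the buffer pool.
CHECKPOINT;
-- Remove all clean pages from the buffer pool (cold data cache).
DBCC DROPCLEANBUFFERS;
-- Remove all cached execution plans (cold plan cache).
DBCC FREEPROCCACHE;
```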
With the following script, which I originally created for SQL Server 2005, you can get the execution stats and the execution plans for individual SQL statements. You can't get procedure-level stats like in SQL Server 2008, though, because sys.dm_exec_procedure_stats is not supported in SQL Azure.
SELECT
    QS.sql_handle,
    ROW_NUMBER() OVER(PARTITION BY QS.sql_handle ORDER BY QS.statement_start_offset) AS statement_no,
    QS.execution_count,
    QS.total_physical_reads,
    QS.total_logical_reads,
    QS.total_logical_writes,
    QS.last_execution_time,
    sql_total_worker_time_s  = CONVERT(money, QS.total_worker_time) / 1000000,
    sql_max_worker_time_s    = CONVERT(money, QS.max_worker_time) / 1000000,
    sql_last_worker_time_s   = CONVERT(money, QS.last_worker_time) / 1000000,
    sql_min_worker_time_s    = CONVERT(money, QS.min_worker_time) / 1000000,
    sql_avg_worker_time_s    = (CONVERT(money, QS.total_worker_time) / QS.execution_count) / 1000000,
    sql_total_elapsed_time_s = CONVERT(money, QS.total_elapsed_time) / 1000000,
    sql_max_elapsed_time_s   = CONVERT(money, QS.max_elapsed_time) / 1000000,
    sql_last_elapsed_time_s  = CONVERT(money, QS.last_elapsed_time) / 1000000,
    sql_min_elapsed_time_s   = CONVERT(money, QS.min_elapsed_time) / 1000000,
    sql_avg_time_s           = (CONVERT(money, QS.total_elapsed_time) / QS.execution_count) / 1000000,
    SUBSTRING(ST.text,
              (QS.statement_start_offset / 2) + 1,
              ((CASE QS.statement_end_offset
                    WHEN -1 THEN DATALENGTH(ST.text)
                    ELSE QS.statement_end_offset
                END - QS.statement_start_offset) / 2) + 1) AS statement_text,
    QP.query_plan
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST
CROSS APPLY sys.dm_exec_query_plan(QS.plan_handle) AS QP
In the result you can find the queries that are using the most CPU time (worker time), elapsed time and IO operations, together with their execution plans. If you are not used to reading execution plans, I recommend SQL Server 2008 Query Performance Tuning Distilled by Grant Fritchey. Read the next blog post from my colleague for some breaking news about insert performance in SQL Azure.
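As a sketch of how I use this when comparing two candidate statements: run each statement a few times, then filter the stats on a recognizable fragment of the query text and compare the reads and worker-time columns. The LIKE pattern below is just an illustrative placeholder; match on something unique in your own statement text.

```sql
-- A minimal sketch for comparing candidate statements in SQL Azure:
-- after running each candidate a few times, pull their per-statement stats
-- and compare logical reads and worker time instead of wall-clock time.
SELECT TOP 10
    QS.execution_count,
    QS.total_logical_reads,
    QS.total_worker_time,
    SUBSTRING(ST.text, 1, 100) AS statement_start
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST
WHERE ST.text LIKE '%CandidateQuery%'  -- placeholder: match your own query text
ORDER BY QS.total_worker_time DESC;
```

The statement with fewer total logical reads and less worker time per execution is the cheaper one, regardless of whether the cache was hot when you ran it.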