When you design a database you should have performance in mind, and your indexing strategy is one of the most important things to consider. If you create too few or too many indexes you’ll suffer from bad performance: without indexes that cover your queries, your queries will run slowly, and with too many indexes, your inserts, updates and deletes will be slow. You have to balance this. There are always tradeoffs, and no simple answers that work for everybody, but there are some guidelines you can follow.
In this series of blog posts I’ll try to give you some of these guidelines.
– Don’t create too many or too few indexes.
– Every table should have a clustered index! End of discussion! (Some might say that small tables shouldn’t have one, but I’ll cover that in a later post.)
– Choose the clustered index wisely; don’t let SQL Server Management Studio select it for you (by creating a primary key in the table designer).
– Index columns that are used in foreign key relationships.
– Test your workload with your indexes.
– Monitor your performance to find missing indexes.
– Remove indexes that are not used.
– Don’t create duplicate indexes.
– The column order in your index counts; if the first column in the index is not used in a where/join clause, you’ll end up with a scan.
– Use the right fill factor for every index; don’t count on the server’s default fill factor setting.
– Consider partitioning for performance and maintenance.
– Use data compression for data warehouse/reporting databases.
– Rebuild the indexes on a regular basis.
– Update the statistics of the indexes.
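To make a few of these points concrete, here is a small T-SQL sketch (the table, column and index names are hypothetical) showing how to pick the clustered index yourself instead of letting the table designer make the primary key clustered, how to index a foreign key column, and how to set an explicit fill factor:

```sql
-- Hypothetical Orders table: the primary key is declared NONCLUSTERED
-- so we are free to place the clustered index on a column of our choosing.
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL,
    CustomerID int NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
);

-- Choose the clustered index deliberately, rather than accepting the
-- default clustered primary key from the table designer.
CREATE CLUSTERED INDEX CIX_Orders_OrderDate
    ON dbo.Orders (OrderDate);

-- Index the foreign key column, with an explicit fill factor instead
-- of relying on the server-wide default setting.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (FILLFACTOR = 90);
```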
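One way to spot candidates for removal is the index usage DMV. The query below is a sketch: it lists nonclustered indexes in the current database that have been written to but never read since the last SQL Server restart (the usage statistics are reset on restart, so judge them over a representative period before dropping anything):

```sql
-- Nonclustered indexes with writes but no seeks, scans or lookups.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
  AND s.user_updates > 0
ORDER BY s.user_updates DESC;
```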
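Maintenance can be as simple as the statements below (again with hypothetical object names). Note that rebuilding an index also refreshes that index’s statistics, so the separate UPDATE STATISTICS call mainly matters for column statistics and for indexes you reorganize rather than rebuild:

```sql
-- Rebuild all indexes on the table, applying an explicit fill factor.
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);

-- Refresh the table's statistics with a full scan.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```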
Check back another day for more details on specific recommendations.
If you have any questions, feel free to contact any of our SQL Server consultants and we will help you.