Data-Driven Intranet

SQL Server Design Tips

Hardware Selection

The CPU should offer high performance on integer operations.

Drives should be fast SCSI; RAID is preferred.

Plenty of RAM: 128 MB minimum, with more for high-concurrency usage and/or large database systems.

Ample extra disk space for query-intensive (DSS) applications.

Excessive CPU Usage

"Excessive" defined: Performance monitor shows 90% or more CPU utilization for sustained for periods of 10 seconds or more.

Keep stored procedures short: avoid progressive swapouts associated with long-running processes.

There should be no paging of SQL Server itself. In addition, there should be enough data cache to maintain a 90% cache-hit ratio during peak processing periods.

Frequently executed queries and updates should be stored procedures, regardless of how simple they are.

Specify table locks on queries that do table scans and are known to escalate to table locks anyway.
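
A minimal sketch of an explicit table-lock hint, using a hypothetical order_history table and the older table-hint placement:

    -- This query touches every row anyway, so request a table lock up front
    -- instead of acquiring page locks and escalating later.
    SELECT type_code, COUNT(*)
    FROM order_history (TABLOCK)
    GROUP BY type_code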

Avoid making and breaking database connections too frequently. It is OK for a connection to sit idle for 15-20 minutes before terminating it. Trying to conserve user connections is a false economy, since 40 concurrent connections use only about 1 MB of reserved memory.

Slow Online Queries

For systems that must balance OLTP and DSS requirements, replicate the OLTP system to a denormalized DSS copy with additional indexing and summary tables. Denormalization should be done on a case-by-case basis after studying (and understanding) the problem.

Run UPDATE STATISTICS at least weekly on all tables. Daily for very volatile tables.
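
For example, a weekly maintenance script might simply run (table names are hypothetical):

    -- Refresh distribution statistics so the optimizer keeps choosing good plans.
    UPDATE STATISTICS customers
    UPDATE STATISTICS orders
    UPDATE STATISTICS order_detail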

Use trigger procedures to aggregate summary totals and perform other denormalization tasks.
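
A minimal sketch of such a trigger, assuming hypothetical order_detail and daily_sales tables:

    -- Keep a denormalized daily total current as detail rows are inserted.
    CREATE TRIGGER trg_order_detail_ins ON order_detail
    FOR INSERT
    AS
    UPDATE daily_sales
    SET    total_amount = daily_sales.total_amount + i.sum_amount
    FROM   daily_sales,
           (SELECT order_date, SUM(amount) AS sum_amount
            FROM   inserted
            GROUP  BY order_date) i
    WHERE  daily_sales.sale_date = i.order_date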

Avoid joining more than 4 tables in a single query.

Limit the size of online retrievals to the number of rows users will realistically want to browse. Use the FASTFIRSTROW hint on large browses to improve "perceived response time".
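
For instance (hypothetical customer table; the hint placement follows the older table-hint syntax):

    -- Return the first qualifying rows to the browse screen as soon as possible,
    -- rather than optimizing for total query time.
    SELECT customer_name, city, phone
    FROM customers (FASTFIRSTROW)
    WHERE region = 'WEST'
    ORDER BY customer_name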

"Pin" certain tables in data cache if they are small and very frequently read (such as lookup tables for codes, etc).

Make sure the server has enough RAM. Memory is especially beneficial to DSS (query-intensive) applications and when peak concurrency requirements are high.

Slow Transaction Processing

Rebuild clustered indexes on volatile tables often. Volatile tables are those that have a large percentage of inserts and deletes. Use FILLFACTOR appropriate to the volatility.
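
One way to do this, assuming a hypothetical orders table and using DBCC DBREINDEX:

    -- Rebuild all indexes on the table, leaving 25% free space per page
    -- so inserts have room before page splits start again.
    DBCC DBREINDEX ('orders', '', 75)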

Use DBCC SQLPERF(WAITSTATS) to identify where transactions are being delayed.

Turn on IRL (Insert Row Level) locking on high volatility tables.
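
In SQL Server 6.5 this is enabled per table with sp_tableoption (table name hypothetical):

    -- Allow Insert Row-Level locking so concurrent inserts do not contend
    -- for the same page lock.
    EXEC sp_tableoption 'orders', 'insert row lock', 'true'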

Defer non-time-critical transaction processing to the overnight batch window.

Limit duration of transactions that hold locks.

Never pause for user input in the middle of a transaction.

Deadlocks

Segregate OLTP processing from DSS processing, either physically through replication, or temporally through overnight transaction processing.

Limit the size of retrievals to what the user can reasonably browse.

Eliminate cycle deadlocks by adhering to a fixed order of update for multiple table transactions (e.g., alphabetical or some published order).
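
A sketch of the convention, assuming hypothetical accounts and ledger tables: every procedure that touches both always updates accounts before ledger.

    CREATE PROCEDURE post_payment
        @acct_id  int,
        @entry_id int,
        @amount   money
    AS
    BEGIN TRANSACTION
        -- 1. accounts first (alphabetically earlier table)...
        UPDATE accounts SET balance = balance - @amount WHERE account_id = @acct_id
        -- 2. ...then ledger, in the same order in every procedure that updates both.
        UPDATE ledger SET posted = 1 WHERE entry_id = @entry_id
    COMMIT TRANSACTION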

To support a large number of concurrent users, initially program for optimistic concurrency control and back off only when excessive deadlocks occur. Add a timestamp column to each table to make life easier for the Lock Manager. Remember that when deadlocks are rare, it is often cheaper to let them happen and retry them.
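
A minimal optimistic-update sketch, assuming a hypothetical customers table that has been given a timestamp column (e.g., ALTER TABLE customers ADD row_ts timestamp):

    CREATE PROCEDURE update_credit_limit
        @customer_id int,
        @new_limit   money,
        @row_ts      binary(8)    -- timestamp value read along with the row being edited
    AS
    UPDATE customers
    SET    credit_limit = @new_limit
    WHERE  customer_id = @customer_id
      AND  row_ts = @row_ts        -- succeeds only if nobody changed the row meanwhile
    IF @@ROWCOUNT = 0
        RAISERROR ('Row was changed by another user; re-read and retry.', 16, 1)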

If frequent conversion deadlocks occur, use the UPDLOCK hint on reads that are expected to be followed by updates. Use pessimistic locking with the API access methods.
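
For example (hypothetical inventory table, older hint syntax):

    BEGIN TRANSACTION
        -- Take an update lock on the read so the later UPDATE does not have to
        -- convert a share lock (the usual source of conversion deadlocks).
        SELECT qty_on_hand FROM inventory (UPDLOCK) WHERE part_no = 'A-100'
        UPDATE inventory SET qty_on_hand = qty_on_hand - 1 WHERE part_no = 'A-100'
    COMMIT TRANSACTION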

Avoid "Last Page Collisions" by putting a clustered index on every table and never on an identity (auto-increment) field.

Program the host application to automatically retry failed stored procedures when Error 1205 ("Congratulations, you have been selected as the deadlock victim!") occurs.

Remember that foreign key constraint checks put a share lock on related tables.

Low Cache-Hit Ratio

Desirable target: 90% average rate.

Ways to monitor: SET STATISTICS IO ON; Performance Monitor
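
For example (hypothetical query), SET STATISTICS IO reports logical versus physical reads per table; a high share of physical reads points to a poor cache-hit ratio:

    SET STATISTICS IO ON
    SELECT COUNT(*) FROM orders WHERE order_date >= '19980101'
    SET STATISTICS IO OFF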

Choose your clustered index carefully, selecting a column that is most frequently sorted on or accessed by value range.

Don't allocate an excessive percentage of cache to procedures. The procedure cache requirement depends on the number of concurrent users and on the size of frequently used stored procedures (another reason to keep stored procedures small). Use DBCC PROCCACHE to verify that the active cache is not too much smaller than the allocated size.

Maximize the number of indexes and data rows to a page by using small compact keys (preferably integers), and 'narrow' rows.

Use varchar for fields whose contents vary significantly in actual size.

Consider vertical segmentation on tables with long rows, putting small frequently accessed columns in one table and large infrequently needed columns in the other.
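
A sketch of the split, with hypothetical column choices:

    -- Hot, narrow columns stay in the core table...
    CREATE TABLE customer_core (
        customer_id   int          NOT NULL PRIMARY KEY,
        customer_name varchar(40)  NOT NULL,
        cust_type     char(1)      NOT NULL
    )
    -- ...long, rarely read columns move to a companion table with the same key.
    CREATE TABLE customer_detail (
        customer_id   int          NOT NULL PRIMARY KEY,
        directions    varchar(255) NULL,
        notes         varchar(255) NULL
    )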

Consider 'index coverage': supply frequently read columns in the index itself. For example, if you are reading the customer table often just to get a 1-byte customer type, add the customer type field to the primary index and save reading a large customer record into memory.
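
For the example above, a composite nonclustered index covers the lookup so only index pages need to be read (names are hypothetical):

    CREATE NONCLUSTERED INDEX ix_customer_id_type
        ON customer (customer_id, cust_type)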

Add physical memory as long as it improves cache hit ratio (while adjusting procedure cache percentage accordingly).

Unnecessary Table Scans

Make sure that frequent or large queries are fully supported by indexes. This is especially true of DSS systems.

Use SHOWPLAN to verify that indexes are being used as expected.
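
For example (SET SHOWPLAN_TEXT is the 7.0-and-later form; 6.5 uses SET SHOWPLAN ON; the query is hypothetical):

    SET SHOWPLAN_TEXT ON
    GO
    -- The plan is displayed instead of the query being executed;
    -- look for index access rather than "Table Scan" on large tables.
    SELECT o.order_id, c.customer_name
    FROM   orders o, customers c
    WHERE  o.customer_id = c.customer_id
      AND  o.order_date >= '19980101'
    GO
    SET SHOWPLAN_TEXT OFF
    GO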

Indexes with insufficient "dispersal" will not be used by the optimizer. (A search argument must eliminate 80-85% of the rows in order to prevent a table scan.)

Excessive Network Traffic

Use NT's Network Monitor to track volume.

Do as much work in stored procedures as possible. Take full advantage of T-SQL program logic to perform complete functions and return useful result sets wherever possible.

Use parameterized stored procedures to extend their usefulness.
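
A minimal example (hypothetical orders table): one parameterized procedure replaces many near-identical ad hoc query strings sent over the network.

    CREATE PROCEDURE get_orders_by_customer
        @customer_id int,
        @from_date   datetime
    AS
    SELECT order_id, order_date, total_amount
    FROM   orders
    WHERE  customer_id = @customer_id
      AND  order_date >= @from_date
    -- usage: EXEC get_orders_by_customer 1042, '19980101'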

Batch Processing Doesn't Fit in Batch Window

Switch the database to single-user mode during the batch process while the online system is down (this permits updates without lock contention).
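
A sketch using sp_dboption (the database name is hypothetical):

    EXEC sp_dboption 'salesdb', 'single user', 'true'
    -- ... run the batch jobs ...
    EXEC sp_dboption 'salesdb', 'single user', 'false'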

Add batch-friendly indexes at the beginning of batch processing and drop them before bringing the online system back up (time the index creation against the productivity gain to make sure each index is a net win).
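
For example (hypothetical names), the batch script can bracket its work with the index it needs:

    CREATE NONCLUSTERED INDEX ix_batch_status ON orders (status, order_date)
    -- ... batch processing that benefits from the index ...
    DROP INDEX orders.ix_batch_status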

Excessive Use of Tempdb

Tempdb allocation should be at least 25% of the size of the largest database.

Make sure that application-created temporary tables are dropped as soon as they are no longer needed.

Keep transactions small, down to the smallest logical unit of work.

Work tables and temporary clustered indexes are created for nested queries where the inner table must be repeatedly scanned. Avoid these by using joins rather than nested queries where possible, and by indexing adequately to avoid scans.
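
For example (hypothetical tables), the two queries below return the same customers, but the join form gives the optimizer more freedom to use indexes instead of building a work table:

    -- Nested-query form:
    SELECT customer_name
    FROM   customers c
    WHERE  c.customer_id IN (SELECT o.customer_id FROM orders o
                             WHERE  o.order_date >= '19980101')

    -- Join form:
    SELECT DISTINCT c.customer_name
    FROM   customers c, orders o
    WHERE  c.customer_id = o.customer_id
      AND  o.order_date >= '19980101'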
