When writing T-SQL queries, it’s important to reduce the volume of data as quickly as possible. Larger volumes of data mean more input/output operations (think more reads from and writes to disk), more CPU cycles (hash joins over large sets of rows cost more than over small ones), and more overhead for acquiring and releasing locks on resources in SQL Server. By shrinking large sets of data early in the query, we can greatly increase the efficiency of our operations.

Here’s a simple example. Using the standard AdventureWorks2012 database and Jonathan Kehayias’s excellent scripts to enlarge it, we can show how pre-aggregating data can reduce the amount of CPU resources required for a query.
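The original query listing isn’t shown here, so the following is a plausible reconstruction based on the tables and columns discussed below. It assumes the enlarged tables (Sales.SalesOrderHeaderEnlarged, Sales.SalesOrderDetailEnlarged) created by Kehayias’s scripts, and sums order quantities per customer name and product name:

```sql
-- Hypothetical reconstruction of the baseline query:
-- aggregate by customer name and product name, grouping
-- only after every join has completed.
SELECT
    p.FirstName,
    p.LastName,
    pr.[Name]            AS ProductName,
    SUM(sod.OrderQty)    AS TotalQty
FROM Sales.SalesOrderHeaderEnlarged AS soh
INNER JOIN Sales.SalesOrderDetailEnlarged AS sod
    ON sod.SalesOrderID = soh.SalesOrderID
INNER JOIN Sales.Customer AS c
    ON c.CustomerID = soh.CustomerID
INNER JOIN Person.Person AS p
    ON p.BusinessEntityID = c.PersonID
INNER JOIN Production.Product AS pr
    ON pr.ProductID = sod.ProductID
GROUP BY
    p.FirstName,
    p.LastName,
    pr.[Name];
```

Note that every detail row must flow through all four joins before the GROUP BY can collapse them.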

Running this through the excellent (and free) tool Plan Explorer from SQL Sentry, we see that it consumed around 8.1 seconds of total CPU time. Examining the plan a little more closely, we can see that a large number of rows flowed through until the very last step, where the Stream Aggregate operation occurred.

[Screenshot: execution plan for the original query in SQL Sentry Plan Explorer, showing the Stream Aggregate as the final operator]

Some of those operations were hash joins, which require CPU resources to compute the “buckets” that rows are placed into based on the hash key (perhaps I need to write something explaining this in more detail, but that’s for another day). The more rows that pass through a hash join, the more CPU the query execution will require. So if we can reduce the number of rows that pass through these hash join operations, we can likely reduce the amount of CPU resources required.

Looking at the query, we can see that it is grouping at the level of the customer first and last name, as well as the product name. Following the joins, we see that these attributes trace back to the columns SalesOrderHeader.CustomerID and SalesOrderDetail.ProductID. So, what would happen if we grouped as soon as those key columns were available, rather than waiting until the integer keys have been joined out to their full names? We can do this by using a derived table in the query as follows.
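As with the first listing, the original code isn’t reproduced here; this is a hedged sketch of the derived-table rewrite, using the same assumed table names. The inner query aggregates on the integer keys CustomerID and ProductID first, so only the collapsed rows participate in the remaining joins:

```sql
-- Pre-aggregate on the integer keys inside a derived table,
-- then join the (much smaller) result out to the name columns.
SELECT
    p.FirstName,
    p.LastName,
    pr.[Name]        AS ProductName,
    agg.TotalQty
FROM
(
    SELECT
        soh.CustomerID,
        sod.ProductID,
        SUM(sod.OrderQty) AS TotalQty
    FROM Sales.SalesOrderHeaderEnlarged AS soh
    INNER JOIN Sales.SalesOrderDetailEnlarged AS sod
        ON sod.SalesOrderID = soh.SalesOrderID
    GROUP BY
        soh.CustomerID,
        sod.ProductID
) AS agg
INNER JOIN Sales.Customer AS c
    ON c.CustomerID = agg.CustomerID
INNER JOIN Person.Person AS p
    ON p.BusinessEntityID = c.PersonID
INNER JOIN Production.Product AS pr
    ON pr.ProductID = agg.ProductID;
```

Because a (CustomerID, ProductID) pair maps to exactly one name pair and one product name, the two forms return the same results; only the point at which the aggregation happens changes.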

Running this query through Plan Explorer, we find that this indeed had the desired effect: the CPU time consumed dropped to just over 3 seconds (a roughly 62% decrease). In addition, examining the plan shows that the optimizer moved the aggregate operation earlier in the plan, reducing the number of rows that flowed through the hash join operations.

[Screenshot: execution plan for the derived-table query in SQL Sentry Plan Explorer, with the aggregate moved earlier in the plan]

There are many other ways this approach might help, such as limiting the amount of query memory required for sorts. The derived-table approach also may not always work; SQL is a largely declarative language, in that you tell it the data you want and it decides how to retrieve it. We can give it hints, either implicitly (as we are doing here) or explicitly using table hints (which should rarely be necessary), but ultimately it controls how it answers our request. Thus, it’s critical that we check the execution plans of our queries, and that we run them against representative sizes and compositions of data. This is the only way we can give ourselves a chance at robust, well-performing SQL code.
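If you don’t have Plan Explorer handy, SQL Server’s built-in statistics output is a quick way to measure the same effect. Wrapping a query in the following (standard T-SQL) session settings reports CPU time, elapsed time, and per-table logical reads in the Messages tab of SSMS:

```sql
SET STATISTICS TIME, IO ON;

-- Run the query under test here, then compare CPU time and
-- logical reads between the original and rewritten versions.
-- In SSMS, Ctrl+M ("Include Actual Execution Plan") will also
-- capture the plan so you can see where the aggregate landed.

SET STATISTICS TIME, IO OFF;
```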

Principles of Good SQL: Aggregate Early
