A little while back I got one of those phone calls. You know the one, the lovely 6am phone call about random performance problems. There were two problems that night: one that I'll cover in another post, and the one I want to talk about today.
The query that was having problems is dynamically generated inside a stored procedure. The basic query being run looked a lot like this:
SELECT /*A bunch of columns*/
FROM answer a
JOIN session s ON a.SessionID = s.SessionID
WHERE a.SessionID IN (4857385,5269932,5682479,6095026)
Most of the time this query ran just fine; however, there were times when it was timing out. Looking at the execution plan for a normal run of the query, everything looked fine. However, on some runs 1.2M rows were being pulled from the session table even though only 4 specific IDs were being passed in.
Looking at the properties of the index scan being performed against the session table, I could see that SQL Server had turned the predicate into WHERE s.SessionID >= 4857385 AND s.SessionID <= 6095026. This was a problem because, for some sets of IDs like this one, 1.2M rows were being returned from the session table instead of the 4 rows that should have been returned.
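In other words, based on those seek properties, the plan was effectively behaving as if the query had been written like this (my reconstruction from the plan, not anything SQL Server actually emits as text):

SELECT /*A bunch of columns*/
FROM answer a
JOIN session s ON a.SessionID = s.SessionID
WHERE s.SessionID >= 4857385 AND s.SessionID <= 6095026

Every session row between the lowest and highest ID gets read, and only afterwards does the join filter things back down to the 4 requested sessions, which is where the 1.2M rows came from.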
The fix in this case was to simply change the WHERE clause from “WHERE a.SessionID” to “WHERE s.SessionID”. Now I’m not sure why this worked from the internals point of view, but I do know that it worked. The next time the stored procedure ran it completed in milliseconds instead of timing out at 30 seconds.
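For reference, the fixed version of the generated query was just this, with nothing changed but the alias in the WHERE clause:

SELECT /*A bunch of columns*/
FROM answer a
JOIN session s ON a.SessionID = s.SessionID
WHERE s.SessionID IN (4857385,5269932,5682479,6095026)

With the IN list sitting directly on s.SessionID, the optimizer was presumably able to seek to each of the four IDs individually instead of scanning the whole range, though like I said, I can't speak to exactly what the internals are doing.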
In this case the server in question was SQL Server 2008 R2 (10.50.2796). This may or may not apply to other builds of SQL Server. I’m pretty sure this is going to be a your mileage may vary sort of thing.
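If you want to check which build you're on before deciding whether any of this might apply to you, SERVERPROPERTY will tell you:

SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel');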
This is officially the least amount of work I’ve ever done tuning a query, as all I changed was a single letter.