Performance Tuning a Spotlight for SQL Server Query

The other day I was looking at parallel query plans on a customer’s system and I noticed that the bulk of the parallel queries on the system were coming from Spotlight for SQL Server.

The query in question is used by Spotlight to figure out when the most recent full, differential, and log backups were taken for the databases on the server.  The query itself is pretty short, but it was showing a query cost of 140 on this system.  A quick index created within the msdb database solved the problem, reducing the cost of the query down to 14.  The cost dropped because a clustered index scan of the backupset table was replaced with a nonclustered index scan of a much smaller index.
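Spotlight’s exact query text isn’t important here, but a backup-history check of roughly this shape (a sketch only, not Spotlight’s actual SQL) shows the kind of work involved:

-- A sketch of a most-recent-backup check against msdb (not Spotlight's exact query)
-- type is 'D' for full, 'I' for differential, and 'L' for log backups
SELECT database_name, type, MAX(backup_finish_date) AS last_backup_finished
FROM msdb.dbo.backupset
GROUP BY database_name, type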

The index I created was:

CREATE INDEX mrdenny_databasename_type_backupfinishdate ON backupset
(database_name, type, backup_finish_date)
WITH (FILLFACTOR = 70, ONLINE = ON, DATA_COMPRESSION = PAGE)
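The column order matters here: database_name and type match what the query filters and groups on, and with backup_finish_date also in the key the most recent date can be answered straight from the index without touching the base table.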

Now if you aren’t running Enterprise Edition you’ll want to leave the online index build option off, and you may need to drop the data compression option as well, depending on the edition and version of SQL Server that you are running.
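On Standard Edition, for example, the create statement would look more like this (offline build, no compression):

CREATE INDEX mrdenny_databasename_type_backupfinishdate ON backupset
(database_name, type, backup_finish_date)
WITH (FILLFACTOR = 70)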

If you are running Spotlight for SQL Server I’d recommend adding this index, since it fixes the performance of a query that Spotlight runs against the database engine pretty frequently.  I’d add it on every SQL Server instance that Spotlight for SQL Server monitors.
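If you’re rolling this out across a lot of monitored instances, wrapping the create in an existence check (a sketch, using the same index name as above) makes the script safe to rerun on every server:

USE msdb
GO
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE object_id = OBJECT_ID('dbo.backupset')
                 AND name = 'mrdenny_databasename_type_backupfinishdate')
BEGIN
    -- Adjust the WITH options for your edition as discussed above
    CREATE INDEX mrdenny_databasename_type_backupfinishdate ON backupset
    (database_name, type, backup_finish_date)
    WITH (FILLFACTOR = 70, ONLINE = ON, DATA_COMPRESSION = PAGE)
END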

Denny

2 Responses

  1. Denny,

    You’re providing a performance solution to a query that runs, I believe, every 5 minutes by default. But in this case, I feel a policy change is more prudent. 
    We have other processes in place to let us know if our TLog backups failed, so I changed the setting to only check once a day for the FULL and DIFF backups.
    If we didn’t have those other processes, I would have set the backup check to run every 120 minutes. 
    Since we have 40+ instances, that’s 40+ indexes that we don’t have to create on the msdb database.
  2. I believe that you are correct that it runs about every 5 minutes by default.  While a change to that may be a good idea, most people don’t make a change because they buy Spotlight to be the check for everything.  Having the check run every 120 minutes may not be often enough for some people.  It all depends on the RPO that they have defined.  If the RPO is 20 minutes, but you only look for failed backups every 120 minutes, you could end up outside your RPO if there was a problem.
