The other day I was looking at parallel query plans on a customer's system, and I noticed that the bulk of the parallel queries on the system were coming from Spotlight for SQL Server.
The query in question is used by Spotlight to figure out when the most recent full, differential, and log backups were taken for each database on the server. The query itself is pretty short, but it was showing a query cost of 140 on this system. A quick index created within the msdb database reduced the cost of the query from 140 down to 14, because a clustered index scan of the backupset table was changed into a nonclustered index scan of a much smaller index.
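For context, the query is essentially a backup-history rollup over msdb. This is a hypothetical sketch of that sort of query, not Spotlight's actual query text:

CREATE INDEX mrdenny_databasename_type_backupfinishdate ON backupset
-- Sketch only: the kind of "latest backup per database, per backup type"
-- query that a monitoring tool would run against msdb.
SELECT database_name,
       type,                          -- D = full, I = differential, L = log
       MAX(backup_finish_date) AS last_backup_finished
FROM msdb.dbo.backupset
GROUP BY database_name, type;

A query shaped like this can be satisfied entirely from a narrow index on (database_name, type, backup_finish_date), which is why the index below drops the cost so dramatically.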
The index I created was:
CREATE INDEX mrdenny_databasename_type_backupfinishdate
ON dbo.backupset (database_name, type, backup_finish_date)
WITH (FILLFACTOR = 70, ONLINE = ON, DATA_COMPRESSION = PAGE);
Now if you aren't running Enterprise Edition you'll want to leave the online index building off, and you may need to leave the data compression off as well, depending on the edition and version of SQL Server that you are running (data compression wasn't available outside Enterprise Edition until SQL Server 2016 SP1).
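For those on Standard Edition (or an older version without page compression), a stripped-down version of the same index should work everywhere; the table will be briefly locked while the index builds, but backupset is usually small enough that this only takes a moment:

-- Edition-safe variant: no ONLINE rebuild, no page compression.
CREATE INDEX mrdenny_databasename_type_backupfinishdate
ON dbo.backupset (database_name, type, backup_finish_date)
WITH (FILLFACTOR = 70);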
If you are running Spotlight for SQL Server I'd recommend adding this index, as it will fix the performance of one of the queries which Spotlight runs against the database engine pretty frequently. I'd recommend adding this index to all of the SQL Servers which Spotlight for SQL Server monitors.
I believe that you are correct that it runs about every 5 minutes by default. While changing that interval may be a good idea, most people don't make a change, since they buy Spotlight to handle all of their monitoring and leave the checks at their defaults. Having the check run every 120 minutes may not be often enough for some people; it all depends on the RPO that they have defined. If the RPO is 20 minutes, but you only look for failed backups every 120 minutes, you could end up outside your RPO if there was a problem.