Working as a consultant now, I see this as something that is often ignored by DBAs. It is an easy thing to maintain, and yet so many don't know how. Keeping these in check can give you a performance boost not only at startup but also in your insert/update/delete as well as backup/restore operations. SQL Server performs better with a smaller number of right-sized virtual log files. I highly recommend you add this to your server reviews.
What is a VLF?
Every transaction log is composed of smaller segments called virtual log files. Every time a growth event occurs new segments, virtual log files, are created at the end of your transaction log file. A large number of VLFs can slow things down.
What causes High VLFs?
As transactions force growth of the log file, inappropriate log file sizing or auto-growth settings can cause a high number of VLFs to occur. Each growth event adds VLFs to the log file. The more often you grow in conjunction with smaller growth segments, the more VLFs your transaction log will have.
If you grow your log by the default 1 MB increment, you may end up with thousands of VLFs, as opposed to growing in 1 GB increments. MSDN does a great job of explaining how transaction logs work; for a deeper dive, I recommend reading it.
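As a sketch of the fix for tiny growth increments (the database and logical file names here are placeholders; substitute your own), you can set a fixed autogrowth increment so each growth event adds a reasonably sized chunk instead of the 1 MB default:

```sql
-- Placeholder names: replace with your database and the log file's logical name.
-- A fixed 1 GB increment keeps growth events, and therefore new VLFs, infrequent.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 1GB);
```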
How do I know how many VLFs my log files have?
It’s very easy to figure out how many VLFs you have in your log file.
Make sure you are in the context of the database you want to run it against, in this case TEMPDB, and run the DBCC LOGINFO command.
USE tempdb;
GO
DBCC LOGINFO;
The command returns one row per virtual log file for that database; the count of those rows is the number of VLFs you have.
Now there are many ways you can get fancy with it using TSQL, so have fun with it. Write something that rolls through all your databases and gives you the record counts for each. There are plenty of useful examples on the internet.
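As one possible sketch of rolling through all databases (this relies on the undocumented sp_MSforeachdb procedure, and the RecoveryUnitId column in the DBCC LOGINFO output only exists in SQL Server 2012 and later, so drop it on older versions):

```sql
-- Collect the VLF count for every database on the instance.
CREATE TABLE #VLFCounts (DatabaseName sysname, VLFCount int);

EXEC sp_MSforeachdb N'
USE [?];
CREATE TABLE #LogInfo (
    RecoveryUnitId int,          -- SQL Server 2012+ only
    FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(38,0));
INSERT INTO #LogInfo EXEC (''DBCC LOGINFO WITH NO_INFOMSGS'');
INSERT INTO #VLFCounts SELECT DB_NAME(), COUNT(*) FROM #LogInfo;
DROP TABLE #LogInfo;';

SELECT DatabaseName, VLFCount
FROM #VLFCounts
ORDER BY VLFCount DESC;

DROP TABLE #VLFCounts;
```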
Ideally, VLF counts should be under 100; anything above that should be addressed.
New for SQL Server 2017 is a DMV, sys.dm_db_log_stats(database_id), that gives you an even easier way to get the VLF counts.
SELECT name AS 'Database Name',
       total_vlf_count AS 'VLF count'
FROM sys.databases AS s
CROSS APPLY sys.dm_db_log_stats(s.database_id)
WHERE total_vlf_count > 100;
How do you fix it?
These transaction log files should be shrunk until there are only two VLFs, then grown in chunks back to the current size.
- Perform Shrink using DBCC SHRINKFILE
DBCC SHRINKFILE (N'Log_Logical_Name_Here', 0, TRUNCATEONLY);
- Regrow your log in an increment that makes sense for your environment. However, if your file growth is in excess of 8 GB, it is recommended to grow in 8000 MB chunks while manually regrowing the file. Your autogrowth should be set to a lower value. There is no set rule for what those values should be; it may take trial and error to figure out what is best for your environment.
USE [master];
GO
ALTER DATABASE [tempdb]
MODIFY FILE (NAME = N'tempdev', SIZE = 8192000KB);
GO
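For logs larger than 8 GB, the same ALTER DATABASE statement can simply be repeated, stepping the size up in 8000 MB chunks. A sketch with placeholder names and an assumed 24 GB target size:

```sql
-- Placeholder names and target size: adjust to your environment.
-- Each step grows the log by 8000 MB until the final size is reached.
USE [master];
GO
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8000MB);
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 16000MB);
ALTER DATABASE [YourDatabase] MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 24000MB);
GO
```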
Note: Growing out your log can cause a performance hit and block ongoing transactions, so be sure to perform this during a maintenance window.
It’s that simple. Now go take a look at your files; you may be surprised at what you find.