There is a lot of talk online about transaction log growth and what you should do about it. There are two answers to this question, and in this post I’ll give you both.
The theoretical answer is that you want to grow your transaction log in 8 gig chunks. This maximizes the size of each VLF within the transaction log file, allowing SQL Server to minimize the number of VLFs that are created, which is a performance improvement. Unfortunately there is a problem with this, which leads us to the practical answer…
The practical answer is that there is a bug in SQL Server which causes it to grow the transaction log by only 32k the first time you issue a growth that is a multiple of 4 gigs (which 8 gigs obviously is). Because of this bug, which has not yet been fixed in any released version of SQL Server, it is recommended to grow the transaction log by a slightly smaller amount, such as 8000 megs at a time. The VLFs will be almost as large, keeping log performance where you expect it to be, without SQL Server creating a 32k log growth and a 32k VLF.
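As a rough sketch, pre-growing a log in 8000 meg steps might look like the following. The database name [MyDatabase] and logical file name MyDatabase_log are placeholders; check sys.database_files for your actual logical log file name.

```sql
-- Grow the log in 8000 MB steps. Note that SIZE is the new total file size,
-- not an increment, so each statement grows the file by 8000 MB.
-- [MyDatabase] and MyDatabase_log are placeholder names.
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8000MB);
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 16000MB);
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 24000MB);
```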
With this bug there is no error or message indicating that the log was grown by 32k instead of the requested amount. You can check the log growth information by looking at the DBCC LOGINFO output.
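A quick check might look like this ([MyDatabase] is a placeholder name):

```sql
USE [MyDatabase];  -- placeholder database name
DBCC LOGINFO;      -- returns one row per VLF; the FileSize column is the VLF size in bytes
```

On SQL Server 2016 SP2 and later, the documented DMV sys.dm_db_log_info(DB_ID()) returns the same VLF information and is the supported replacement for DBCC LOGINFO.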
Now don’t forget that you don’t want to auto grow by this large a size. If you do, transactions may be killed by client-side timeouts while waiting for the log to grow, as the transaction log doesn’t support instant file initialization no matter what. You’ll therefore want to pre-grow the transaction log to the needed size ahead of time, with a smaller auto grow size that your storage can handle quickly. Personally I like to use a 1 gig or 1000 meg auto grow size for emergency auto grows. This gives me enough log space to work with, without waiting too long for the storage to write out the file.
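Setting a 1000 meg emergency auto grow increment might look like this (again, the database and logical file names are placeholders):

```sql
-- FILEGROWTH sets the auto grow increment used when the log fills unexpectedly.
-- [MyDatabase] and MyDatabase_log are placeholder names.
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 1000MB);
```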
Denny