Transaction log growth best practices

There is a lot of talk online about transaction log growth and what you should do about it. There are two answers to this question, and in this post I'll give you both.

The theoretical answer is that you want to grow your transaction log in 8 gig chunks. This maximizes the amount of space that makes up each VLF within the transaction log file, which allows SQL Server to minimize the number of VLFs that are created, and that is a performance improvement. Unfortunately there is a problem with this, which leads us to the practical answer…

The practical answer is that there is a bug in SQL Server which causes the database engine to grow the transaction log by only 32k the first time you issue a growth that is a multiple of 4 gigs (which 8 gigs obviously is). Because of this bug, which has not yet been fixed in any released version of SQL Server, it is recommended to grow the transaction log by a slightly smaller number, such as 8000 megs at a time. The VLFs will be almost as large, keeping log performance where it is expected to be, without having SQL Server create a 32k log growth and a 32k VLF.
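As a minimal sketch of growing a log in 8000 meg steps, assuming a database named `MyDB` with a log file whose logical name is `MyDB_log` (both placeholder names for this example):

```sql
-- SIZE in MODIFY FILE is the new TOTAL size of the file, not an increment,
-- so each statement below adds 8000 MB on top of the previous total.
-- 8000 MB keeps each growth just under the 8 GB (4 GB multiple) that
-- triggers the 32k-growth bug, while keeping the VLFs nearly as large.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 8000MB);

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 16000MB);

-- ...repeat, adding 8000 MB each time, until the log reaches your target size.
```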

With this bug there is no error or message telling you that the log was grown by 32k instead of the requested amount. You can check the actual VLF sizes by looking at the DBCC LOGINFO output.
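For example, run the check in the context of the database you just grew:

```sql
-- Each row of DBCC LOGINFO is one VLF; FileSize is that VLF's size in bytes.
-- If the bug fired, you'll see a VLF of only 32 KB (FileSize = 32768)
-- instead of VLFs sized from the growth you requested.
DBCC LOGINFO;
```

DBCC LOGINFO is an undocumented command; on newer versions of SQL Server the `sys.dm_db_log_info` DMV exposes similar per-VLF information.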

Now don’t forget that you don’t want to auto grow by this large a size. If you do, transactions may be killed by client side timeouts while waiting for the log to grow, as the transaction log doesn’t support instant file initialization no matter what. You’ll therefore want to pre-grow the transaction log to the needed size ahead of time, with a smaller auto grow size that your storage can accept quickly. Personally I like to use a 1 gig or 1000 meg log auto grow size for emergency auto grows. This gives me enough log space to work with, without having to wait too long for the storage to write out the file.
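Setting the smaller emergency growth increment is one statement, again assuming the placeholder names `MyDB` and `MyDB_log`:

```sql
-- Pre-grow the log to the size you expect to need, then set a modest
-- 1000 MB FILEGROWTH so any unplanned auto grow finishes quickly enough
-- to avoid client side timeouts.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 1000MB);
```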



4 Responses

  1. Hi Denny,

    I didn’t manage to reproduce this bug on SQL Server 2008 R2 Developer Edition.
    I created a new database and a new table, set log autogrowth to 9000 MB, and ran an insert operation.
    When the autogrowth operation finished I had 20 VLFs in the log file.
    Did I miss something here?

    Thank you in advance.

    Best regards,

  2. As I understand the bug as it was explained to me: create a database, then increase the log by 4 or 8 gigs manually. It should then only increase in size by 32k. It may not do it every time, as specific conditions must be met for the bug to show itself, and I don’t have a list of those conditions.

  3. This is slightly away from the topic, but on TLOG size only. As per the documentation the TLOG files are capped at 2 TB (typically you won’t be using one this big); you may refer to the MS document at All the papers say that LDF growth is limited to 2 TB, but here in this test case I’ve a log file at 2.7 TB. I believe the 2 TB limit is for MBR partitions and no holds barred for GPT. Presuming for documentation this was tested on MBR, but with GPT I’m sitting on an LDF 2.72 TB in size. If you all can please share your comments here at

