Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’.

So I hit the error in the title the other night while converting a heap to a clustered index on a production system.  After this error popped up, some other nasty stuff showed up in the error log, which I’ve included below.

Error: 9002, Severity: 17, State: 4.
The transaction log for database ‘%d’ is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 5.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 1.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 5.
During undoing of a logged operation in database %d, an error occurred at log record ID (87:9933:1). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

At this point crash recovery ran and the database started back up.

The first error simply says the transaction log filled up; the nasty part is everything that follows it.
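
If you want to see why the log space can’t be reused while this is going on, the error message itself points you at sys.databases.  A quick check looks something like this (the database name is just a placeholder):

-- Show the recovery model and why the transaction log can't be reused
-- (NOTHING means the log is free to be reused).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';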

The good news is that this is a pretty rare problem to run into.  It can only happen during an update operation against a heap, or while adding a clustered index to a heap, when you run out of transaction log space.  The reason it can happen is that updates against a heap and clustered index operations are two-phase operations: first the old rows are deleted, then the new rows are inserted.  When the insert of the new rows starts, an internal flag is set which marks the insert as a “must complete” operation.  If an unexpected error happens between the delete operation and the subsequent insert, the database is taken offline as a precautionary measure to ensure consistency and prevent concurrent operations against the records that were being modified.
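
If you want to get ahead of this, a couple of quick checks will show whether you’re exposed.  The sketch below is just that, a sketch: the first query lists the tables in the current database that are still heaps (index_id 0 in sys.indexes), and the DBCC command shows how full each transaction log is before you kick off a big heap update or index build.

-- Tables with no clustered index (heaps), the only tables exposed to this problem.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0;

-- How full each database's transaction log currently is.
DBCC SQLPERF(LOGSPACE);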

The downside to this problem is that it can show its ugly face in SQL Server 2008 and SQL Server 2008 R2, and there is no fix available for it in either version.  The only way to avoid the problem is to convert the table from a heap to a clustered index ahead of time, or to add more transaction log space.
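
Both workarounds are plain T-SQL.  The table, index, and file names below are placeholders, so adjust them to your own environment before using anything like this:

-- Option 1: convert the heap to a clustered index before the problem can bite.
CREATE CLUSTERED INDEX CIX_MyHeapTable_ID ON dbo.MyHeapTable (ID);

-- Option 2: give the transaction log more room so the operation can finish.
ALTER DATABASE MyDatabase MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 20480MB);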

I have been told that this has been fixed in SQL Server “Denali”, as “Denali” handles heap updates differently than prior versions do.  If you are on SQL Server 2005 you shouldn’t see this problem.

In my case the problem was solved by letting the database come back online and then creating the clustered index.  My database was in the simple recovery model, so after crash recovery the log was marked for reuse automatically, which meant there was enough room in the transaction log for the clustered index build to complete.
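
If you end up in the same spot, something like this will confirm the database has come back online and the log is free to be reused before you retry the index build (again, the name is a placeholder):

-- state_desc should read ONLINE and log_reuse_wait_desc should read NOTHING.
SELECT name, state_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';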

If you run into this problem, hopefully this helps explain what is happening and why.

Denny
