If You Thought Database Restores Were Slow, Try Restoring From an EMC Data Domain

Published On: 2015-07-20

Recently I did something which I haven’t had to do for a VERY long time: restore a database off of an EMC Data Domain. Thankfully I wasn’t restoring a failed production system; I was restoring to a replacement production system to get log shipping set up.

I’ve worked in plenty of shops with Data Domains before, but apparently I’ve blocked out the memories of doing restores on them, because if your backups are done the way EMC wants them done to get the most out of the Data Domain (uncompressed SQL backups, in this case), the restore process is basically unusable. The reason we were backing up the databases uncompressed was to allow the Data Domain to dedupe the backups as much as possible, so that the final backup stored on the Data Domain would be as small as possible and could be replicated to another Data Domain in another data center.

The database in this case is ~6TB in size, so it’s a big database. Running the restore off of the EMC Data Domain was painfully slow. I canceled it after about 24 hours, at which point it was at ~2% complete. Doing a little bit of math, that restore was going to take roughly 25 days. While the restore was running we called EMC support to see if there was a way to get the Data Domain to run the restore faster, and the answer was no, that’s as fast as it’ll run.
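A quick back-of-the-envelope sketch of that math, assuming the ~3 MB/sec figure that SQL Server was typically reporting during the restore (the numbers are illustrative, not exact):

```python
# Rough restore ETA: ~6 TB at the ~3 MB/sec the Data Domain sustained.
db_size_mb = 6 * 1024 * 1024            # ~6 TB expressed in MB
throughput_mb_per_sec = 3.0             # observed Data Domain restore rate
eta_seconds = db_size_mb / throughput_mb_per_sec
print(f"~{eta_seconds / 86400:.0f} days")   # ~24 days
```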

After stopping the restore, I backed up the same database to a local disk and restored it to the new server from there. This time the restore took ~8 hours to complete, a much more acceptable number.
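For a sense of the gap between the two restores, here is the rough effective throughput of each, assuming the full ~6 TB moved in the ~8 hours (illustrative numbers only):

```python
# Effective throughput of the local-disk restore vs. the Data Domain restore.
db_size_mb = 6 * 1024 * 1024                         # ~6 TB in MB
local_restore_mb_per_sec = db_size_mb / (8 * 3600)   # ~6 TB in ~8 hours
data_domain_mb_per_sec = 3.0                         # observed from the Data Domain
print(round(local_restore_mb_per_sec))               # ~218 MB/sec
print(round(local_restore_mb_per_sec / data_domain_mb_per_sec))  # ~73x faster
```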

If you are using EMC’s Data Domain (or any backup appliance), do not use that appliance as the only location for your SQL Server backups. These appliances are very efficient at ingesting backups and at replicating those backups off to another site (which is what is being done in this case), but they are horrible at rehydrating those backups so that you can actually restore them. The proof is in the throughput of the restore commands. Here’s the output of some of the restore commands that were run. These are full backups, so there’s nothing for SQL Server to process here; it’s just moving blocks from point A to point B.

RESTORE DATABASE successfully processed 931 pages in 6.044 seconds (1.203 MB/sec).
RESTORE DATABASE successfully processed 510596 pages in 1841.175 seconds (2.166 MB/sec).
RESTORE DATABASE successfully processed 157903 pages in 440.849 seconds (2.798 MB/sec).
RESTORE DATABASE successfully processed 2107959 pages in 4696.428 seconds (3.506 MB/sec).
RESTORE DATABASE successfully processed 77307682 pages in 118807.557 seconds (5.083 MB/sec).
RESTORE DATABASE successfully processed 352411 pages in 816.810 seconds (3.370 MB/sec).
RESTORE DATABASE successfully processed 8400718 pages in 23940.799 seconds (2.741 MB/sec).
RESTORE DATABASE successfully processed 51554 pages in 111.890 seconds (3.599 MB/sec).
RESTORE DATABASE successfully processed 1222431 pages in 3167.605 seconds (3.014 MB/sec).
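As a sanity check, the MB/sec figures in those messages can be recomputed from the page counts, since a SQL Server data page is 8 KB. A small sketch (the regular expression is just one way to pull the numbers out):

```python
import re

# Recompute restore throughput from a line of SQL Server RESTORE output.
line = ("RESTORE DATABASE successfully processed 77307682 pages "
        "in 118807.557 seconds (5.083 MB/sec).")
pages, seconds = re.search(r"processed (\d+) pages in ([\d.]+) seconds", line).groups()
size_mb = int(pages) * 8192 / (1024 * 1024)   # 8 KB pages -> MB
throughput = size_mb / float(seconds)
print(f"{size_mb:,.0f} MB at {throughput:.2f} MB/sec")   # 603,966 MB at 5.08 MB/sec
```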

The biggest database there was restoring at ~5 MB a second. That works out to 33 hours to restore a database which is only ~604,000 MB (~590 GB) in size. Now, before you blame the SQL Servers or the network: these are all physical servers running on Cisco UCS hardware, the network is all 10 Gig, and the storage on the new servers is a Pure Storage array. The proof that the network and storage were fine was the full restore of the database done from the backup to disk, as that was restored off of a UNC path which was still attached to the production server.

When testing these appliances, make sure that completing restores within an acceptable time window is part of your testing practice. If we had found this problem during a system-down situation, the company would probably have just gone out of business; there’s no way the business could have afforded to be down for ~25 days waiting for the database to restore.

Needless to say, as soon as this problem came up, we provisioned a huge LUN to the servers to start writing backups to. We’ll figure out how to get the backups offsite (the primary reason that the Data Domain exists in this environment) another day (and in another blog post).

How could EMC fix this?

Step 1 would be to stop telling people that it can replicate very large databases from site to site. While it technically can, doing so while still maintaining some level of restore performance doesn’t seem possible at the moment.

Step 2 would be to stop telling people to disable SQL Server backup compression to make replication work. Again, while this does make replication work, restore performance is horrible.

Step 3 would be to figure out and resolve the performance problem when reading files from the array, especially large files containing huge amounts of deduplicated blocks. This rehydration problem is the performance killer here. It has to be possible; other array vendors do it within their normal LUNs that have deduplication, and EMC does it within their arrays which hold deduplicated data, but on the Data Domain the performance sucks. Something needs to be done about this.



3 responses to “If You Thought Database Restores Were Slow, Try Restoring From an EMC Data Domain”

  1. DataDomainTech says:

    Thank you, mrdenny, for your feedback in this restore scenario.  We’d love to start a discussion about your concerns over at the ECN Forums.  There are many customers on the forums that could provide valuable insight into this matter.  We would also like to gather information about the system you were using during this restore, and your experience.  We’ll pass this information along to our engineering and development teams.  They are always interested in real-world feedback on how customers are using our products and what their experiences are.

    Please feel free to reach out to me directly at bettsl@emc.com.  I’d like to review the SR you had opened at the time of the restore.  Also, if you could generate a support bundle on the Data Domain now (so the logs don’t roll off), that would be great.  Thanks in advance.

  2. JeffVogl says:

    Hi, I’m curious, how was this backup taken?

    1) A standard SQL backup to a file share that was then compressed?

    2) The Data Domain agent running a virtual machine backup of everything?

    3) The tool that runs from within SQL routing the backup output to the Data Domain? (I think it’s called the Boost Agent for Microsoft Applications)

  3. Denny Cherry says:


    These were just standard SQL backups being written to a network share using the native Data Domain compression and dedupe.

