In a previous post, I talked about some of the available storage options that can be used to back up your databases directly to the cloud. Doing this is a really good way to get your backups off-site and replicated in a single motion, depending on the region and redundancy option you select, for minimal cost.
So how do you back up to URL? I’ll show you. Before we start, this assumes that you have a storage account in Azure set up and configured. If not, go do that first and then come back. You will also need SQL Server 2016 or higher in order to make this work.
Before you start backing up to Azure, you will need to decide whether you want to use block or page blobs for your storage. As I mentioned in my previous post, block blobs are recommended for both cost and performance. However, you can use either.
With storage in Azure, the method used to back up the database actually determines which storage type is used. In either case you must use a credential. If you use a credential with a Shared Access Signature, the backup is written to a block blob. If you use a credential with a storage account access key, it is written to a page blob.
If you look at the storage account blade in the Azure portal, you will see an option for Access Keys as well as one for the Shared Access Signature.
Azure gives you two access keys, which allows you to rotate keys as needed when using page blobs. You will need one of these keys in order to successfully back up using page blobs.
Like with any keys, you want to safeguard them. If someone has a key and knows the storage account name, they can gain access to your storage. In short, rotate your keys regularly.
Once you have the key, you can create the CREDENTIAL.
CREATE CREDENTIAL [sqlbackups] WITH IDENTITY = 'backupsdemo'
,SECRET = '<mystorageaccountaccesskey>';
The identity value is the name of your storage account. The secret is the key that you obtained from the portal. Once the credential has been created, you can then back up your database.
BACKUP DATABASE ScratchDemo
TO URL = 'https://backupsdemo.blob.core.windows.net/sqlbackups/ScratchDemo_pageblob.bak'
WITH CREDENTIAL = 'sqlbackups';
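If you want a quick sanity check that the backup landed and is readable, you can read the header straight from the URL with the same credential. This is just a sketch using the demo names above; adjust them for your environment:

```sql
-- Sanity check: read the backup header directly from the URL.
-- Uses the same [sqlbackups] access-key credential as the backup above.
RESTORE HEADERONLY
FROM URL = 'https://backupsdemo.blob.core.windows.net/sqlbackups/ScratchDemo_pageblob.bak'
WITH CREDENTIAL = 'sqlbackups';
```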
With block blobs, using the Shared Access Signature (SAS) is a little different. When you configure the signature, you specify a time frame during which access is allowed. You can also restrict access to a specific IP address or range. Once the signature has been created, you’ll see a bunch of options like what is shown below:
Keep in mind that generating the shared access signature utilizes the access keys. You can specify either key1 or key2 for this process. If you regenerate the key that the signature was created from, you will need to regenerate the SAS token and update the credential. Once the SAS token is generated, a connection string, token, and various URL endpoints will be displayed as shown below.
In order to use the signature for backups, we need to copy the SAS token. We will use this token to create the corresponding credential. One thing to note: the SAS token will begin with a ‘?’, which SQL Server does not expect. When you create the credential, remove the ‘?’ from the front of the token.
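To illustrate that trimming step (a minimal sketch, not tied to any Azure SDK; the token value below is made up), stripping the leading ‘?’ before pasting the token into the CREATE CREDENTIAL statement looks like this:

```python
def clean_sas_token(token: str) -> str:
    """Remove the leading '?' that the portal includes in a SAS token.

    SQL Server expects the secret without it when creating the credential.
    """
    return token[1:] if token.startswith("?") else token

# Example with a made-up token value:
raw = "?sv=2018-03-28&ss=b&srt=co&sp=rwdlac&sig=FAKESIG"
print(clean_sas_token(raw))  # sv=2018-03-28&ss=b&srt=co&sp=rwdlac&sig=FAKESIG
```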
CREATE CREDENTIAL [https://backupsdemo.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS_TOKEN_GOES_HERE>';
The credential name needs to be the URL that points to your storage account name and subsequent container. The “Secret” is the SAS token that we gathered from the portal after generating the shared access signature (SAS).
BACKUP DATABASE ScratchDemo
TO URL = 'https://backupsdemo.blob.core.windows.net/sqlbackups/ScratchDemo_blockblob.bak'
Note that it is entirely possible to have credentials for both storage types (block and page) created at the same time. The page blob credential can be named anything; however, the block blob credential must be named after the URL of the storage endpoint, including the container name.
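If you want to verify which credentials already exist on the instance, both kinds show up in sys.credentials; SAS-based credentials report ‘SHARED ACCESS SIGNATURE’ as the identity:

```sql
-- List server-level credentials; the URL-named one is the block blob (SAS) credential.
SELECT name, credential_identity, create_date
FROM sys.credentials;
```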
Once the database has been backed up you can see the files in the Azure portal.
Note that I have these stored in Cool storage, so unless I want to pay a penalty for early deletion, they will sit there for at least 30 days. In a future post, I’ll show you how you can remove the files using PowerShell. If you are using Ola Hallengren’s maintenance solution (which we recommend), it fully supports backing up to URL.
Ensuring your backups are safe and sound is a primary tenet of being a DBA. Backing up to URL is a great way to accomplish this. For a minimal cost, your backups can be off-site or even redundant across the country. Keep in mind, however, that even with Azure things fail. If and when your internet connectivity drops, make sure that you have a backup plan to handle temporary onsite backups until connectivity is restored.
Happy backing up!
© 2018 – 2019, John Morehouse. All rights reserved.
Azure offers a lot of features that enable IT professionals to really enhance their environment. One feature that I really like about Azure is storage accounts. Disk is relatively cheap on-premises, and that continues to hold true in the cloud. For less than $100 per month, you can get up to 5TB of storage, including redundancy to another Azure region.
Once you have created an Azure account, you will need to go to the Storage Accounts blade from the left-hand Resource menu in the Portal. If you don’t see Storage Accounts listed in the favorites already, click on All Services at the top and filter for “storage accounts”. Do not select the classic storage account. You want the new and improved storage account.
Click on Storage Accounts, then click Add.
On the following screen, the subscription should auto-populate; however, if you have multiple subscriptions you can change it to reflect the one that you want.
The resource group is a container for related assets within Azure. I tend to think of it as a bowl of things. Things like virtual machines, storage accounts, and virtual networks that are related all go into the same bowl. If you want to delete all those resources, you simply empty the bowl, and everything will be deleted for you.
In this example, I’ll create a new resource group, called “CreateStorageDemo”.
Then give the storage account a name. This name must be globally unique because it becomes part of the endpoint used to access the storage, so choose wisely. In this case, I have supplied the name “prodsqlbackupdemo”.
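Storage account names must be 3 to 24 characters long and may contain only lowercase letters and numbers. A small sketch of that validation (the names below are just my examples, not real accounts):

```python
import re

def is_valid_storage_account_name(name: str) -> bool:
    """Azure storage account names: 3-24 characters, lowercase letters and digits only."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_account_name("prodsqlbackupdemo"))  # True
print(is_valid_storage_account_name("Prod_SQL_Backup"))    # False (uppercase and underscores)
```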
Next, choose the region for the storage. Usually you want to choose a region that is closest to you to help reduce latency. If I’m in Kentucky and I choose a region in Europe, the latency across the pond is going to be greater than if I were to choose the East US 2 region. You can see the regions for Azure via this map. The East US 2 region is on the east coast of the United States.
For now, select Standard storage. This will give you the most versatility in what you can store. Premium storage currently only allows page blobs. Depending on your needs, premium storage may or may not meet them.
Leave the account kind set to StorageV2, which is general-purpose storage. As with the Standard storage selection, this will give you more versatility in the long run.
Replication. The replication option lets you choose whether, and where, your data is replicated somewhere else. Here’s a synopsis of what these options mean:
- LRS – Locally redundant storage means that your data is redundant within the data center it sits in. There can be multiple data centers within a region, so you don’t know exactly where your data is, but it will be redundant within that building. If the building goes away, your data goes away along with it.
- ZRS – Zone-redundant storage gives you greater redundancy, as your data is replicated across data centers within a region. This allows your data to survive in the event that your initial data center goes away unexpectedly.
- GRS – Geo-redundant storage provides redundancy across a large distance. Think redundancy from Atlanta, GA to Seattle, WA. This spans many miles and allows for redundancy in the event an entire region is affected by an outage, like a hurricane on the east coast. The data is not readable on the secondary.
- RA-GRS – Read-access geo-redundant storage gives you the ability to read your data from the secondary while being geographically redundant.
The more redundant your storage is, the greater the cost, but if your data is critical, it’s worth it. For this example, because I’m just showing you how to spin up an account, I’ll select LRS (locally redundant storage).
The final option is the access tier: Hot or Cool. If you need to access the data frequently, select Hot; storing data in Hot is more expensive, with similar latency to Cool but higher availability. If you can leave your data alone for at least 30 days, you can use the Cool tier. The storage cost is cheaper; however, you can get hit with a penalty if you delete the file(s) prior to 30 days. Here is a chart provided by Microsoft that shows the differences:
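To make the Cool tier’s early-deletion penalty concrete, here is a rough sketch. The per-GB monthly rate is an assumed figure for illustration only, not Microsoft’s actual pricing; the idea is that a blob deleted before 30 days is billed for the remaining days as well:

```python
def cool_tier_early_deletion_charge(size_gb, days_stored, price_per_gb_month=0.01):
    """Approximate the Cool-tier early-deletion penalty.

    Blobs deleted before the 30-day minimum are billed for the unserved days.
    price_per_gb_month is an assumed illustrative rate, not actual Azure pricing.
    """
    min_days = 30
    remaining = max(0, min_days - days_stored)
    # Prorate the monthly rate over a 30-day month for the unserved days.
    return size_gb * price_per_gb_month * remaining / min_days

# 100 GB backup deleted after 10 days: charged for the 20 unserved days.
print(round(cool_tier_early_deletion_charge(100, 10), 2))  # 0.67
```

Keep your backups in Cool for the full 30 days and the penalty never applies.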
As you can see, setting up a storage account for any purpose is fairly straightforward. With a couple of mouse clicks, you can easily have storage readily available to upload files to, whether they are family pictures or backups of your production SQL Server databases. Either way, it’s a great solution to get things out of your local data center if needed. Keep in mind that Microsoft is continually pushing updates to the portal, so don’t worry if things change slightly over time.
More and more, I am impressed by the Azure Portal. Microsoft continues to make enhancements to the user interface (UI) and add new features. One of these new features that I’ve recently become aware of is the ability to easily switch accounts.
As a consultant, I work with clients that have a presence in Azure. They can grant me access to their cloud assets in two ways:
- They can add one of my email accounts to their subscriptions.
- They can grant me an email account within their domain and then add that email to their subscriptions.
Most of my clients select the second option. This means that I can have multiple email accounts that I have to use in order to sign into the portal. With a password manager such as 1Password, this is usually not a big deal; more of an annoyance than a headache.
Within the past month or so, Microsoft has updated the portal to allow me to easily switch accounts. Previously you had to log out of the portal and then log back in.
While I haven’t tested it, I’m assuming that these entries are cached. Once you’ve logged into the portal with an account, it will appear in the drop-down list as shown above.
This really makes using the Azure Portal that much easier, especially for consultants that might have to manage multiple identities.
Still haven’t looked at Azure? You get $200 worth of credits to try out resources for 30 days. Sign up now!
As a database administrator, I am very conscious of how data is being backed up. Up until SQL Server 2016, you were limited to backing up your databases to a physical device. That device could be a tape, local disk, or network storage. After successfully creating a backup, if you’re doing it right, the backups then had to be moved off-site for disaster recovery purposes.
Today, with SQL Server 2016 or higher, you can back up directly to a URL. This URL is an endpoint into a cloud storage account, in this case, for Microsoft Azure. Furthermore, with Azure Availability Zones now available, certain regions give you extra redundancy, and data can be replicated to another region for a minimal cost. Not only does this solution offer you the ability to back up your databases off-site, but you get the added benefit of having them replicated for you.
Azure storage that can be utilized for backups comes in two different flavors, block blobs and page blobs, along with three different tiers. These tiers are hot, cool, and archive. I’ll explain more about the tiers later. Microsoft recommends using block blobs for backups rather than page blobs because they are more secure, cheaper, and offer better performance.
Block blob storage is used for objects like files. You can think of this storage as being like files on your local operating system. Since we are using this storage for backup purposes, it is a good choice. The maximum size for a single block blob is ~4.75TB.
Page blob storage is for random read/write workloads like virtual hard drives. Azure virtual machines use page blob storage for their disks under the hood. The I/O for these disks is random, so page blobs work well for them. The maximum size for a single page blob is 8TB.
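The ~4.75TB block blob ceiling falls out of the service limits at the time of writing: up to 50,000 blocks of 100MB each. A quick back-of-the-envelope check (these limits come from Microsoft’s documentation and may increase over time):

```python
# Block blob limit (at the time of writing): 50,000 blocks x 100 MiB each.
max_blocks = 50_000
block_size_mib = 100
max_block_blob_tib = max_blocks * block_size_mib / (1024 * 1024)  # MiB -> TiB
print(round(max_block_blob_tib, 2))  # 4.77
```

That works out to roughly 4.75TB, matching the figure above.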
Microsoft has a good article that explains the differences in storage types in further detail.
As I mentioned, both block and page blobs are available in three different tiers for general availability, namely Hot, Cool, and Archive. There is a fourth tier, Premium, but it is in Preview at the moment. You can use these tiers to help reduce costs (Archive is the cheapest) and improve performance. Hot is going to perform better than the other two tiers; however, it is also going to be more expensive. Here’s a breakdown of the storage that is currently available:
- Hot – Storage that is optimized for data that is accessed frequently. You could put today’s backup of your databases here so that in the event of a restore, it would be easily accessible.
- Cool – Optimized for data that is kept for at least 30 days and is accessed infrequently. As the age of a backup file increases, you could move it from Hot to Cool storage, saving on costs.
- Note: You can incur a penalty if data is removed from cool storage prior to 30 days
- Archive – Data is rarely accessed and stored for a minimum of 180 days. This storage would be used for long-term archival of backups.
- Note: With archive storage, the latency of data retrieval will be higher than Hot/Cool. Like cool storage, you can incur a penalty if data is removed prior to 180 days.
- Premium – Still in Preview, this storage provides high performance for data that is accessed often. You’ll get better performance along with a higher cost.
Microsoft provides a good comparison chart; click on the image below to go to the documentation.
In a nutshell, if you do not plan on restoring data from Azure storage often, the Cool tier offers a good way to get your backups off-site at a minimal cost. As an example, I priced out 5TB of storage for a client, including zone-redundant storage, at less than $100/month. That is a cheap cost, in my opinion, to get their backups off-site and redundant for disaster recovery.
You can use the Azure Pricing Calculator to estimate the monthly cost depending on regions, tiers, and the amount of storage space you want. Once you have chosen the appropriate options for your needs, you can start to use it to back up your databases. I’ll show how to do this in an upcoming post.
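As a rough sketch of how such an estimate comes together (the per-GB rate below is an assumption for illustration only; real pricing varies by region, tier, and redundancy, so use the calculator for actual numbers):

```python
def monthly_storage_cost(size_tb, price_per_gb_month):
    """Estimate monthly cost for blob storage at a flat per-GB rate.

    price_per_gb_month is an assumed illustrative rate, not actual Azure pricing.
    """
    return size_tb * 1024 * price_per_gb_month

# 5 TB at an assumed $0.0184/GB-month (an illustrative Cool-tier ballpark):
print(round(monthly_storage_cost(5, 0.0184), 2))  # 94.21
```

With those assumed numbers, 5TB lands just under the $100/month figure mentioned above.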