[Video] Configuring Geo-Replication to SQL DB

With the hurricane bearing down on the east coast of the United States last week, making sure you have a good disaster recovery plan in place should have been a top priority.  If you happen to be using Azure SQL DB, adding geo-replication is a simple and easy process.  On Friday of last week I posted a blog on how to configure asynchronous geo-replication; however, I do not think that post effectively conveys just how easy it is to configure.

So, I created a video.  In less than 5 minutes, I show you how to configure geo-replication for an Azure SQL DB database.  In this video, you will see me geo-replicate my database from the East Coast of the United States to the West Coast.

Keep in mind that this video shows a test database that has hardly any data in it.  Replicating a larger database would take more time, so planning ahead is important.  The sooner you start, however, the sooner you’ll have a copy of your database safely in another region.  Even with the risk of data loss in the event of a manual failover, it might be better than being completely offline for hours, days, or even weeks.

It is worth mentioning that you will incur additional costs by having another copy of the database replicated in another region, but once the need has passed, you can remove the geo-replication and the secondary database.  That is one of the beautiful things about Azure: the ability to scale things up and down as needed.  Also, if you replicate to a different country, be mindful of its data privacy regulations; once you put data there, it might not be able to leave that country.

Enjoy the video and be sure to check back for more videos to come!  You can also subscribe to the Denny Cherry & Associates YouTube channel to watch videos from the entire team.


Steps for SQL DB Geo-Replication

Unless you are living under a rock, you have most likely heard of the hurricane that is bearing down on the east coast of the United States.  Some areas are under mandatory evacuation orders, and I have seen reports of 80-plus-foot waves, so this hurricane is not a joke.

Another item that should not be a joke is a disaster recovery plan.  This is a vital, critical plan that every organization should have in place.  If you do not have one, you are leaving the door open for an event like a hurricane to force your business to close for days or even weeks on end.

However, if you are utilizing SQL DB in Azure, you can easily configure a quick-and-dirty DR plan to get you through the rough times.  While it might not give you automatic failover (I’ll blog about that later), it is better than nothing and it just might save your business.

Let’s take a look.

Azure Portal

Looking in the portal, I’ve got an Azure database, Test1, residing in the East US region.  This is just a Basic database, nothing fancy or pricey.  However, with the hurricane coming, I would want to move my data away from the East coast.


The database currently is not replicated anywhere, and other than the default backups that Microsoft takes, it is not configured for any type of failover or disaster recovery.  However, that can be easily remedied with a couple of mouse clicks.

If you have not noticed it before, there is a Geo-Replication option under Settings on the database blade.


The resulting blade will show you a map of the world along with all of the options for where a replica (or two) could reside.  The blue check mark indicates where the database currently resides.

You will also notice that geo-replication is currently not configured.

There are a couple of ways to enable geo-replication: you can click any of the green circles on the map, or you can select an appropriate region under “Target Regions” below the map.

The West US 2 region seems like it’d be a good fit for my needs.  By choosing this region, the database would be replicated from the East coast to the West coast, which should be well away from the hurricane.  If the hurricane has an effect on the West coast, we all have bigger problems to worry about.

Once I select West US 2, a Create Secondary blade will appear.

In this case, I already had a server configured in West US 2 that I can use.  If it was not already present, it would only take a few mouse clicks to configure a new target server.   You will also notice that the pricing tier is just Basic and nothing fancy.

Once you have the server configured and/or selected, simply click “OK”.  You’ll be returned to the database’s geo-replication blade, where you will see that there is now a secondary listed and its status is “Initializing”.  The blue dotted line (it is animated in the portal) indicates data movement from one region to another.  Once data has been replicated to the new region, it will turn into a solid line.
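If you would rather script this step than click through the portal, the same secondary can be created with T-SQL.  Here is a minimal sketch, using a hypothetical partner server name; the command runs in the master database of the primary server, and the target server must already exist:

```sql
-- Run in the master database of the PRIMARY logical server.
-- [sqlserver-westus2] is a hypothetical, pre-existing server in West US 2.
ALTER DATABASE [Test1]
    ADD SECONDARY ON SERVER [sqlserver-westus2]
    WITH (ALLOW_CONNECTIONS = ALL);  -- make the secondary readable
```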


Once it has finished, the status will also change to Readable as shown below.
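You do not have to watch the portal to know when that happens.  The sys.dm_geo_replication_link_status DMV reports the state of each replication link; a quick sketch (run it in the primary database itself, not in master):

```sql
-- Run in the primary database (Test1 in this example), not in master.
SELECT partner_server,
       partner_database,
       replication_state_desc,  -- PENDING, SEEDING, CATCH_UP, or SUSPENDED
       replication_lag_sec,     -- how many seconds the secondary is behind
       last_replication         -- commit time of the last replicated transaction
FROM   sys.dm_geo_replication_link_status;
```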

If you wanted to do a manual failover to West US 2 at this point, you could do so by selecting the ellipsis next to the secondary and selecting “Forced Failover”.

It is worth noting, however, that this geo-replication configuration is asynchronous, which means there is a potential for data loss in the event of a manual failover.  It is worth repeating: with a manual failover in this configuration, you run the risk of data loss.
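For what it is worth, the forced failover can also be scripted.  A sketch, with the same hypothetical server names as above; note that this statement runs in the master database of the secondary server and, as just discussed, it can lose data:

```sql
-- Run in the master database of the SECONDARY server (the West US 2 server).
-- Promotes the secondary immediately; unreplicated transactions may be lost.
ALTER DATABASE [Test1] FORCE_FAILOVER_ALLOW_DATA_LOSS;

-- If the primary is still reachable and you can wait for it to synchronize,
-- a planned failover avoids data loss instead:
-- ALTER DATABASE [Test1] FAILOVER;
```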

If you wanted automatic failover, you would need to set up a failover group.  I’ll blog about that later.

Summary

If you are using SQL DB in Azure and don’t have your critical databases geo-replicated, you are opening yourself up to potential issues in the event of a disaster.  By configuring geo-replication, you at least give yourself the ability to make a choice: manually fail over and keep the lights on, or do nothing and potentially close the doors for days or weeks.

Keep in mind that after the disaster has passed, you can stop the replication and delete the secondaries.  Yes, you will see an increase in cost while the secondary exists, but it might just be worth it to save your business.
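The teardown can be scripted as well.  A sketch with the same hypothetical server names; the first statement runs in the master database of the primary server, and the former secondary then becomes a stand-alone database that you can drop from the secondary server:

```sql
-- Run in the master database of the PRIMARY server to break the replication link.
ALTER DATABASE [Test1] REMOVE SECONDARY ON SERVER [sqlserver-westus2];

-- Then, connected to the secondary server, remove the now stand-alone copy:
-- DROP DATABASE [Test1];
```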



How the Cloud Democratizes Solutions

The grey hairs I see every now and again remind me of how long I’ve been working in IT. I’m now in my 22nd year of having a job (2 years as an intern, but I did have the ‘sys’ password), and I’ve seen a lot of things come and go. When I started working, servers cost at least tens of thousands of dollars, were the size of refrigerators, and had less processing power than the MacBook Pro I’m typing this post on. More importantly, they had service contracts that cost a rather large amount of cash per year. What this bought you, well, I’m not sure, but your hardware reps surely took you out for at least a steak dinner or two. Eventually, we moved to commodity hardware (those boxes from HP and Dell that cost a tenth of what you paid 20 years ago) and service contracts dropped to a few hundred dollars per server per year. (And then those sales reps started selling expensive storage.)

Most of my career has been spent working in large Fortune 500 enterprises, so I think about technologies that at the time were only available to organizations of that size and budget, and are now available for a few clicks and a few dollars per hour. I’m going to focus on three specific technologies that I think are cool and interesting, though as I’m writing this, I’ve already thought of three or four more.


Massively Parallel Processing

MPP is a data warehouse design pattern that allows for massive scale-out solutions that can quickly process very large amounts of data. In the past, it required buying an expensive appliance from Teradata, Oracle, Netezza, or Microsoft. I’m going to focus on Microsoft here, but there are several other cloud options for this model, like Amazon Redshift or Snowflake, amongst others. To get your foot in the door with one of these on-premises solutions, you were looking at an investment of at least $250k USD, and that is a fairly conservative estimate. In a cloud world? SQL Data Warehouse can cost as little as $6/hour, and while that can add up to a tidy sum over the course of a month, you can pause the service when you aren’t using it and pay only for the storage. This allows you to do quick proof-of-concept work and, more importantly, compare solutions to see which one best meets your needs. It also allows a smaller organization to get into the MPP game without a major investment.

Secondary and Tertiary Data Centers

Many organizations have two data centers. I’ve only worked for one that had a third. You may ask why this matters. A common question I get when teaching Always On Availability Groups is: if we are split across multiple sites, where do we put the quorum file share? The correct answer is that it should be in a third, independent data center (which virtually no organization has). However, Windows Server 2016 offers a great solution for mere pennies a month: a cloud witness, a “disk” stored in Azure Blob Storage. If you aren’t on Windows Server 2016, it may be possible to implement a similar design using Azure File Storage, but it is not natively supported. Additionally, cloud computing greatly simplifies the process of having multiple data centers. There are no worries about having staff in two locales or getting network connectivity between the two sites; that’s all done by your cloud vendor. Just build stuff, and make it reliable. And freaking test your DR. (That doesn’t change in the cloud.)

Multi-Factor Authentication

If you aren’t using multi-factor authentication for just about everything, you are doing it wrong. (This was a tell that Joey wrote this post and not the Russian mafia.) Anyway, security is more important than ever (hackers use the cloud too), and multi-factor authentication offers additional levels of security that go far beyond passwords. MFA is not a new thing; I have a collection of dead SecurID tokens dating back to the 90s that tells me that. However, implementing MFA used to require you to buy an appliance (later it was some software), possibly have ADFS, and do a lot of complicated configuration work. Now? It’s simply a setting in the Office 365 portal (if you are using AAD authentication; if you are an MS shop, learn AAD, it’s a good thing). While I complain about the portal, and how buried this setting is, it’s still far easier and cheaper (a few dollars a month) than buying hardware and configuring ADFS.

These are but a few examples of how cloud computing makes things that used to be available to only the largest enterprises available to organizations of any size. Cloud is cool, and it makes things better.


Always On Availability Groups transport has detected a missing log block…

If you are running SQL Server 2016 (especially a build before CU3), you have likely received this error:

DATE/TIME:    8/21/2017 11:24:53 AM

DESCRIPTION:    Always On Availability Groups transport has detected a missing log block for availability database "Database". LSN of last applied log block is (99939:95847:0). Log scan will be restarted to fix the issue. This is an informational message only. No user action is required.

COMMENT:    (None)

JOB RUN:    (None)


I’ve talked with the product team about it, and it’s just something that happens; it is more of an informational message. Based on some discussions I’ve had on #sqlhelp on Twitter, it may be related to not having enough network bandwidth between the nodes. But if you see these sporadically, relax, it’s nothing major.
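If you want to check whether log transport really is falling behind between your replicas, the availability group DMVs are one place to look. A quick sketch, run on the primary replica, that shows the send queue and send rate per database; a send queue that stays large points at the network between the nodes:

```sql
-- Run on the primary replica: how much log is queued to ship, and how fast.
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.log_send_queue_size, -- KB of log not yet sent to this replica
       drs.log_send_rate,       -- KB/sec currently being shipped
       drs.redo_queue_size      -- KB of log not yet redone on the secondary
FROM   sys.dm_hadr_database_replica_states AS drs
JOIN   sys.availability_replicas AS ar
       ON ar.replica_id = drs.replica_id;
```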
