Recently I was asked by a client to upgrade their SQL Server Failover Cluster from standard storage to Azure premium storage with as little downtime as possible. Because the SQL Server instance was already clustered, this turned out to be a fairly straightforward process.
The first step was to figure out which node of the cluster was the active node, so we could start with the passive node. The next step was to tell the cluster not to allow the SQL Server group to fail over. Next, we opened the SIOS DataKeeper Cluster Edition GUI and broke the mirror for the disks that we were going to upgrade. Then I logged into the Azure portal and converted the VM from a G2 to a GS2 so that premium disks could be attached. After the VM restarted (don't forget, this is the passive node, so there's no outage for the restart), the old disks were removed from the VM and the new disks were added. The new disks were added via PowerShell like this:
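The cluster-side steps and the VM resize can also be scripted. This is a sketch, not the exact commands we ran: the cluster group name, node names, cloud service name, and VM name are all placeholders, and it assumes the FailoverClusters module on a cluster node plus the classic Azure Service Management cmdlets that applied to G/GS-series VMs at the time.

```powershell
# Find which node currently owns the SQL Server cluster group
Import-Module FailoverClusters
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" |
    Select-Object Name, OwnerNode, State

# Pin the group to the active node (Node1 here) so it can't fail over
# while the passive node's disks are being swapped out
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" |
    Get-ClusterResource |
    Set-ClusterOwnerNode -Owners Node1

# Resize the passive VM to a GS-series size so premium storage can be
# attached (classic ASM cmdlets; service and VM names are placeholders)
Get-AzureVM -ServiceName "MyCloudService" -Name "ServerName2" |
    Set-AzureVMSize -InstanceSize "Standard_GS2" |
    Update-AzureVM
```

Restricting the possible owners is just one way to block failovers; the point is that the passive node must not be able to take over the group while its disks are gone.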
Get-AzureVM -ServiceName "MyCloudService" -Name "ServerName" | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "ServerName-T" -MediaLocation "http://Something.blob.core.windows.net/vhds/ServerName-t.vhd" | Update-AzureVM
Once all the disks were added, we formatted them and assigned the correct drive letters.
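Initializing and formatting the new disks can be scripted with the Windows Storage module. The disk number, drive letter, and volume label below are examples; check Get-Disk output first to identify the new raw disks.

```powershell
# Initialize the new raw disk as GPT, create a partition, and format it.
# Disk number 2 and drive letter T are examples; the 64 KB allocation
# unit size matches the common recommendation for SQL Server data volumes.
Get-Disk -Number 2 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter T -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData" -Confirm:$false
```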
Next, SIOS DataKeeper Cluster Edition is told to recreate the mirror. This forces a full sync, since there's no data on the new disks yet. This takes a long time, as we're limited to reading data at the speed of the old standard disks (about 500 IOPS per disk). Once it's done (in this case there were about 200 GB of data to replicate across three disks), the cluster can be failed over; this is the only outage in the process.
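The failover itself can be scripted with the FailoverClusters module. Again a sketch with placeholder group and node names, assuming failovers were blocked earlier by restricting the group's possible owners.

```powershell
# Move the SQL Server group to the freshly upgraded node
# (this is the single outage in the process)
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node Node2

# Re-enable failover by restoring both nodes as possible owners
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)" |
    Get-ClusterResource |
    Set-ClusterOwnerNode -Owners Node1, Node2
```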
We can then upgrade the second VM to support premium storage, change out its disks, and restart replication. It's a long process, but it works, and there's just a single outage along the way.