Getting around annoying outbound firewall rules at venues when presenting.

Presentations given at user groups, events, etc. often require demos.  Sometimes those demos are too large or too complex to run on a laptop.  To get around this problem, people will run their demos in Azure.  Depending on the demo, this requires using RDP to connect to the VM in Azure so that you can run the demo.

But what happens when the network administrator has decided that they need to secure outbound network connections and block traffic on random TCP ports?  That blocks RDP from working.

When you spin up a VM in Azure, it uses a random port number as the public port that RDP listens on, to make it harder for attackers to find your RDP endpoint.  Random port numbers are also used because every VM within a single resource group shares a public IP address, so in order for you to be able to RDP to the machines, each machine needs its RDP endpoint on a different public port number.
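
If you're curious which public port your VM's RDP endpoint was given, you don't need the portal to find it.  Here's a minimal sketch using the classic (service management) Azure PowerShell module; the service and VM names are placeholders, and on these classic endpoint objects the Port property is the public port:

# List the endpoints on a classic Azure VM; Port is the public port
Get-AzureVM -ServiceName "MyDemoService" -Name "MyDemoVM" |
    Get-AzureEndpoint |
    Select-Object Name, Protocol, Port, LocalPort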

We can use this ability to gain access to our Azure VMs and get around this annoying practice of blocking outbound connections.

Now I should be clear that this isn’t required all the time, only when the network is blocking the outbound connections.  Thankfully it doesn’t come up often, but it does happen.  I’ve been at corporate venues doing training for corporate clients where this has come up, as well as at some SQL Saturday events.  There’s nothing that the event organizer can do to solve the issue; the network admins sadly aren’t going to change anything just for us.  But thankfully they don’t need to, because we can reconfigure our Azure VMs a little so that this isn’t an issue anymore.

Even the most locked-down network is going to allow web traffic, which means TCP ports 80 and 443.  As long as that traffic is allowed, we’re good to go.  What we’re going to do is change the random public TCP port that the Azure endpoint uses for RDP access and have it use port 443 instead.

To make this change, log into the Azure portal and open the properties of the VM that you want to work with.  Select Endpoints from the VM’s settings.  It’ll look something like this (the public port number will probably be different).

[Screenshot: the VM’s Endpoints blade showing the Remote Desktop endpoint and its random public port]

Change the public port number from whatever it is to 443.  Do not change the private port number.  Then click Save at the top.  It’ll take a minute or two for the firewall to be reconfigured.  Once it is, download the new RDP connection file by clicking the Connect button on the VM’s properties blade.
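
If you’d rather script the change than click through the portal, the same edit can be made with the classic (service management) Azure PowerShell module.  A minimal sketch, assuming the endpoint still has its default name of "Remote Desktop"; the service and VM names are placeholders:

# Move the RDP endpoint's public port to 443, leaving the private port at 3389
Get-AzureVM -ServiceName "MyDemoService" -Name "MyDemoVM" |
    Set-AzureEndpoint -Name "Remote Desktop" -Protocol tcp -LocalPort 3389 -PublicPort 443 |
    Update-AzureVM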

You should now be able to connect to your VM.

If you have multiple VMs in a single resource group, you’ll only be able to set one of them to use port 443.  So pick one VM and use it as a jump box to access all the other VMs.
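
Connecting to the jump box manually is just a matter of telling the RDP client to use port 443 instead of the default.  For example (the host name here is a placeholder):

mstsc /v:mydemoservice.cloudapp.net:443

From the jump box, you can then RDP to the rest of the VMs over their internal IP addresses on the normal port 3389.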

Denny


If You Thought Database Restores Were Slow, Try Restoring From an EMC Data Domain

Recently I did something which I haven’t had to do for a VERY long time: restore a database off of an EMC Data Domain. Thankfully I wasn’t restoring a failed production system; I was restoring to a replacement production system in order to get log shipping set up.

I’ve worked in plenty of shops with Data Domains before, but apparently I’ve blocked out the memories of doing restores on them. If your backups are done the way EMC wants them to be done to get the most out of the Data Domain (uncompressed SQL backups in this case), the restore process is basically unusable. The reason we were backing up the databases uncompressed was to allow the Data Domain to dedupe the backups as much as possible, so that the final backup stored on the Data Domain would be as small as possible and could be replicated to another Data Domain in another data center.
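
For context, here’s what that tradeoff looks like at the backup command level. This is a hedged sketch using the Backup-SqlDatabase cmdlet from the SqlServer PowerShell module (instance, database, and path names are placeholders); EMC’s guidance amounts to the -CompressionOption Off variant:

# Uncompressed backup: a much larger write, but the Data Domain can dedupe it heavily
Backup-SqlDatabase -ServerInstance "PRODSQL01" -Database "BigDB" `
    -BackupFile "\\datadomain\backups\BigDB_full.bak" -CompressionOption Off

# Compressed backup: far smaller, but the compressed blocks barely dedupe at all
Backup-SqlDatabase -ServerInstance "PRODSQL01" -Database "BigDB" `
    -BackupFile "\\datadomain\backups\BigDB_full_compressed.bak" -CompressionOption On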

The database in this case is ~6TB in size, so it’s a big database. Running the restore off of the EMC Data Domain was painfully slow. I canceled it after about 24 hours, at which point it was at ~2% complete. Doing a little bit of math, that restore was going to take ~25 days. While the restore was running, we called EMC support to see if there was a way to get the Data Domain to serve the restore faster, and the answer was no, that’s as fast as it’ll run.
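
That little bit of math can be done live against the server, since sys.dm_exec_requests reports percent_complete for a running RESTORE. A hedged sketch using Invoke-Sqlcmd from the SqlServer module (the instance name is a placeholder):

# Estimate total restore time from the progress of the running RESTORE
Invoke-Sqlcmd -ServerInstance "NEWSQL01" -Query "
SELECT command,
       percent_complete,
       total_elapsed_time / 1000 / 60 AS elapsed_minutes,
       (total_elapsed_time / 1000.0 / 60 / 60 / 24)
           * (100.0 / NULLIF(percent_complete, 0)) AS estimated_total_days
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';"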

After stopping the restore, I backed up the same database to a local disk, and restored it to the new server from there. This time the restore took ~8 hours to complete. A much more acceptable number.

If you are using EMC’s Data Domain (or any backup appliance), do not use that appliance as the only location for your SQL Server backups. These appliances are very efficient at writing backups and at replicating those backups off to another site (which is what was being done in this case), but they are horrible at rehydrating those backups so that you can actually restore them. The proof of this is in the throughput of the restore commands. Here’s the output of some of the restore commands that were run. These are full backups, so there’s nothing for SQL Server to process here; it’s just moving blocks from point A to point B.

RESTORE DATABASE successfully processed 931 pages in 6.044 seconds (1.203 MB/sec).
RESTORE DATABASE successfully processed 510596 pages in 1841.175 seconds (2.166 MB/sec).
RESTORE DATABASE successfully processed 157903 pages in 440.849 seconds (2.798 MB/sec).
RESTORE DATABASE successfully processed 2107959 pages in 4696.428 seconds (3.506 MB/sec).
RESTORE DATABASE successfully processed 77307682 pages in 118807.557 seconds (5.083 MB/sec).
RESTORE DATABASE successfully processed 352411 pages in 816.810 seconds (3.370 MB/sec).
RESTORE DATABASE successfully processed 8400718 pages in 23940.799 seconds (2.741 MB/sec).
RESTORE DATABASE successfully processed 51554 pages in 111.890 seconds (3.599 MB/sec).
RESTORE DATABASE successfully processed 1222431 pages in 3167.605 seconds (3.014 MB/sec).

The biggest database there was restoring at ~5 MB per second. That works out to 33 hours to restore a database which is just ~606,816 MB (~592 GB) in size. Now before you blame the SQL Servers or the network: all of these servers are physical servers running on Cisco UCS hardware, the network is all 10 Gig, and the storage on the new servers is a Pure Storage array. The proof that the network and storage were fine was the full restore from the backup-to-disk copy, as that restore read the backup over a UNC path from a disk still attached to the production server.
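
You can sanity-check those numbers straight from the RESTORE output: SQL Server pages are 8 KB each, so pages times 8 KB divided by elapsed seconds gives the reported throughput. Checking the big one:

# 77,307,682 pages at 8 KB per page is ~603,966 MB
$mb = 77307682 * 8KB / 1MB     # 8KB and 1MB are PowerShell byte-size literals
$mb / 118807.557               # ~5.08 MB/sec, matching the reported 5.083 MB/sec
118807.557 / 3600              # ~33 hours of restore time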

When testing these appliances, make sure that doing restores within an acceptable time window is part of your testing practice. If we had found this problem during a system-down situation, the company would probably have just gone out of business. There’s no way the business could have afforded to be down for ~25 days waiting for the database to restore.
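
Timing a test restore is easy to automate, so there’s no excuse for skipping it. A minimal sketch using Restore-SqlDatabase from the SqlServer module (all names and paths are placeholders; a real test will usually also need -RelocateFile to move the data and log files):

# Time a full test restore from the appliance onto a scratch server
Measure-Command {
    Restore-SqlDatabase -ServerInstance "TESTSQL01" -Database "BigDB_RestoreTest" `
        -BackupFile "\\datadomain\backups\BigDB_full.bak" -ReplaceDatabase
}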

Needless to say, as soon as this problem came up, we provisioned a huge LUN to the servers to start writing backups to. We’ll figure out how to get the backups offsite (the primary reason that the Data Domain exists in this environment) another day (and in another blog post).

How could EMC fix this?

Step 1 would be to stop telling people that it can replicate very large databases from site to site. While it technically can, doing so while still maintaining acceptable restore performance doesn’t seem possible at the moment.

Step 2 would be to stop telling people to disable SQL Server backup compression to make replication work. Again, while this does make replication work, restore performance is horrible.

Step 3: figure out and resolve the performance problem when reading files from the array, especially large files with huge amounts of deduplicated blocks within the backup. This rehydration problem is the performance killer here. It has to be possible; other array vendors do it within their normal LUNs that have deduplication, and EMC does it within their arrays which have deduplicated data, but on the Data Domain the performance sucks. Something needs to be done about this.

Denny

Be Careful When Starting Up Azure VMs Running SQL Server

So Microsoft has done something pretty dumb with the Azure VMs running Microsoft SQL Server. By default, the front-end firewall (the one that allows or blocks traffic from the public Internet to the VMs) allows traffic to the default SQL Server port, 1433. At first this is fine, until you open that port on the Windows firewall so that the other VMs can connect to SQL Server. Now you’ve got a problem: the public firewall is open and the Windows firewall is open, so anyone who attempts to connect to SQL Server on port 1433 from the outside will have direct access to the instance.

So when creating SQL Server VMs from the default SQL Server template, you’ll need to go into the Azure portal and change the firewall endpoints. To do this, open the properties of the VM, edit the settings, and then edit the Endpoints.

If you see the “SQL Server” endpoint as shown below, and you’ve configured the Windows Firewall on the VM to allow TCP port 1433, then the entire public Internet has access to your SQL Server VM.

[Screenshot: the VM’s endpoint list showing a public “SQL Server” endpoint on port 1433]

To remove this, mouse over the SQL Server endpoint, click the menu button shown below, and then click “Delete” from the context menu that appears.

[Screenshot: the SQL Server endpoint’s context menu with the Delete option]

Repeat this for each SQL Server VM that you’ve deployed using Microsoft’s SQL VM template (or script it, as sketched below).
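
If you have more than a couple of VMs, this is scriptable with the classic (service management) Azure PowerShell module. A minimal sketch, assuming the endpoint kept its default name of “SQL Server”; the cloud service name is a placeholder:

# Remove the public "SQL Server" endpoint from every VM in a cloud service
Get-AzureVM -ServiceName "MyService" | ForEach-Object {
    $_ | Remove-AzureEndpoint -Name "SQL Server" | Update-AzureVM
}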

If you’ve set up SQL Server VMs in Azure within the last couple of months, you’ll want to go check the Azure endpoints and make sure you don’t have a firewall hole that you weren’t expecting. I’ve spoken to the Azure team at Microsoft about this, and the default template is being fixed so that it isn’t set up this way anymore, if it isn’t fixed already.
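
One way to audit for this is to list every classic VM endpoint in the subscription that exposes the default SQL Server port. A hedged sketch, again using the classic module, where the Port property on an endpoint object is its public port:

# Find any classic VM with a public endpoint on TCP 1433
Get-AzureVM | ForEach-Object {
    $vm = $_
    $vm | Get-AzureEndpoint |
        Where-Object { $_.Port -eq 1433 } |
        Select-Object @{ n = 'VM'; e = { $vm.Name } }, Name, Protocol, Port, LocalPort
}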

Denny
