5 Things You Should Know About Azure SQL

Azure SQL offers a world of benefits that you can capture if you implement it correctly.  It will not solve all of your problems, but it can solve quite a few of them. When speaking with clients, I often run into misconceptions about what Azure SQL can really do. Let us look at a few of these to help eliminate any confusion.

You can scale easier and faster

Let us face it, I am old.  I have been around the block in the IT realm for many years.  I distinctly remember the days when scaling server hardware was a multi-month process, and the new hardware was often already out of date by the time the process finished.  With cloud providers, scaling vertically or horizontally can usually be accomplished within a few clicks of the mouse.  Once initiated, the scaling process often completes within minutes instead of months.  That is multiple orders of magnitude better than having to procure hardware for such needs.

The added benefit of this scaling ability is that you can also scale down when needed to help save on costs.  Just like scaling up or out, this takes only a few mouse clicks and a few minutes of your time.
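
For example, a resize that once meant a hardware purchase is now a single command. Here is a minimal sketch using the Az PowerShell module; the resource group, server, database, and service objective names are all placeholders:

# Scale an Azure SQL Database to a new service objective (placeholder names throughout).
Set-AzSqlDatabase -ResourceGroupName "rg-production" `
    -ServerName "sql-contoso-prod" `
    -DatabaseName "SalesDb" `
    -RequestedServiceObjectiveName "S3"

Scaling back down is the same cmdlet with a smaller service objective.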

It is not going to fix your performance issues

If you currently have performance issues with your existing infrastructure, Azure SQL is not necessarily going to solve your problem.  Yes, you can hide the issue behind faster and better hardware, but the issue will still exist, and you will still need to deal with it.  In fact, moving to Azure SQL could introduce additional issues if the underlying performance problem is not addressed beforehand.  Make sure to look at your current workloads and address any performance issues you find before migrating to the cloud.  Also ensure that you understand the service tiers offered for the Azure SQL products.  By doing so, you will help guarantee that your workloads have enough compute resources to run as optimally as possible.

You still must have a DR plan

If you have ever seen me present on Azure SQL, I am quite certain you have heard me mention that one of the biggest mistakes you can make when moving to any cloud provider is not having a DR plan in place.  There are a multitude of ways to ensure you have a proper disaster recovery strategy regardless of which Azure SQL product you are using.  Platform as a Service (Azure SQL Database or SQL Managed Instance) offers automatic database backups, which solves one DR issue for you out of the gate.  PaaS also offers geo-replication and auto-failover groups as additional disaster recovery solutions that are easily implemented with a few clicks of the mouse.
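
As a rough sketch of the failover group option (server, group, and database names below are hypothetical), the Az PowerShell module makes this a two-step task:

# Create a failover group between two logical servers, then add a database to it.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "rg-production" `
    -ServerName "sql-contoso-east" -PartnerServerName "sql-contoso-west" `
    -FailoverGroupName "fg-contoso" -FailoverPolicy Automatic

Get-AzSqlDatabase -ResourceGroupName "rg-production" -ServerName "sql-contoso-east" -DatabaseName "SalesDb" |
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "rg-production" `
        -ServerName "sql-contoso-east" -FailoverGroupName "fg-contoso"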

When working with SQL Server on an Azure Virtual Machine (which is Infrastructure as a Service), you can perform database backups through native SQL Server backups or through tools like Azure Backup.
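
For the native route, a minimal backup-to-URL sketch looks like this in T-SQL; the database name and storage URL are placeholders, and it assumes a SAS-based credential named for the container already exists on the instance:

-- Back up directly to Azure Blob Storage from SQL Server on an Azure VM.
-- Assumes a SAS-based credential named for the container URL has been created.
BACKUP DATABASE [SalesDb]
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;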

Keep in mind that high availability is baked into the Azure platform at every turn.  However, high availability does not equal disaster recovery, and even cloud providers such as Azure experience outages that can affect your production workloads.  Make sure to implement a disaster recovery strategy and, furthermore, practice it.

It could save you money

When implemented correctly, Azure SQL could indeed save you money in the long run. However, it all depends on what your workloads and data volume look like. For example, thanks to the ease of scalability Azure SQL offers (even when scaling virtual machines), secondary replicas of your data can run at a lower service tier to minimize costs.  If a failover occurs, you can then scale the resource to a higher-performing service tier to ensure workload compute requirements are met. Azure SQL Database also offers a serverless tier that allows the database to be paused.  While the database is paused, you are not charged for any compute consumption.  This is a great option for unpredictable workloads.
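
As an illustration (sizing and names are hypothetical), a serverless database that pauses itself after an hour of inactivity can be created with the Az PowerShell module like so:

# Create a serverless General Purpose database that auto-pauses after 60 idle minutes.
New-AzSqlDatabase -ResourceGroupName "rg-production" -ServerName "sql-contoso-prod" `
    -DatabaseName "ReportingDb" -Edition GeneralPurpose -ComputeModel Serverless `
    -ComputeGeneration Gen5 -VCore 2 -MinimumCapacity 0.5 -AutoPauseDelayInMinutes 60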

Saving money with any cloud provider means knowing what options are available, as well as continually evaluating which options best suit your needs.

It is just SQL

Azure SQL is not magical, quite honestly.  It really is the same SQL engine you are used to from on-premises deployments.  The real difference is how you engage with the product, and sometimes that can be scary if you are not used to it.  As a self-proclaimed die-hard database administrator, I found it daunting when I started to learn how Azure SQL would fit into modern-day workloads and potentially help organizations save money.  In the end, though, it is the same product that many of us have been using for years.

Summary

In this blog post I’ve covered five things to know about Azure SQL.  It is a powerful product that can help transform your own data ecosystem into a more capable platform to serve your customers for years to come.  Cloud is definitely not a fad and is here to stay.  Make sure that you expand your horizons and look upward because that’s where the market is going.

If you aren’t looking at Azure SQL currently, what are you waiting for?  Just do it.


Azure SQL DB Extended Events and Cross-Tenancy

Creating an Extended Events session (as well as viewing events) in Azure SQL Database is slightly different than on a typical SQL Server. Since you do not have access to the file system of the server where your database lives, you need to configure a storage account target for persistence of your Extended Events sessions. You can write them to the ring buffer, but since you do not have the ability to “view live events” in SQL Server Management Studio, that is of limited benefit. You can read about what you need to do in the docs, but in a nutshell: create a storage account (or use an existing one), create a database scoped credential so you can use the storage account, and then create the Extended Events session.
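
Sketched in T-SQL, with a hypothetical storage account and a placeholder SAS token, the setup looks something like this:

-- A database master key must exist before creating the credential.
-- The credential is named for the storage container URL; the SAS secret is a placeholder.
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageaccount.blob.core.windows.net/xevents]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2019-12-12&ss=b&srt=co&sp=rwl&sig=REPLACE_ME';

-- Create and start a session writing to an event_file target in that container.
CREATE EVENT SESSION [QueryTracking] ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file
    (SET filename = 'https://mystorageaccount.blob.core.windows.net/xevents/QueryTracking.xel');

ALTER EVENT SESSION [QueryTracking] ON DATABASE STATE = START;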

The reason I am writing this post is that there is a bit of a bug here that is not fully documented. Many of us (especially those of us who are consultants) work across the scope of Azure Active Directory tenants. That means joeyd@dcac.com might manage a database in the contoso.com Azure AD tenant while still being logged in with the joeyd@dcac.com identity. Normally this is not an issue, but there are a couple of places where odd things happen with cross-tenancy. When you try to create a credential in your database, you will receive the following error, even if you are the database owner.

Started executing query at Line 1

Msg 2760, Level 16, State 1, Line 1
The specified schema name "joeyd@dcac.com" either does not exist or you do not have permission to use it.

Total execution time: 00:00:00.195

You should note the rapid execution time of that error: this is not failing when going out to a storage account to validate the credential; the code is failing in the database. I posted something about this to the Microsoft MVP distribution list, and the ever-brilliant Simon Sabin emailed me and suggested that I try creating a schema called joeyd@dcac.com and then creating the credential. Sure enough, that worked fine and I could proceed. In the customer system where this happened, we were fortunate enough to have global admin rights in AAD, so we just created a new user in their tenant and used that instead.
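
In T-SQL, the workaround is just one extra statement before creating the credential:

-- Create a schema matching the cross-tenant login name, then retry the credential.
CREATE SCHEMA [joeyd@dcac.com];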


Azure Site to Site VPN Blocking Some Traffic

I ran across an interesting issue a couple of weeks ago when working with a client. The client has several subsidiaries, each with their own vNet, and had a site to site VPN between the Azure vNets. All traffic was successfully crossing the Azure Site to Site VPN as expected. The sticking point was a software licensing server running in one of the subsidiaries' Azure environments. The software licensing software simply wasn't working.

We fired up Wireshark on the Azure VM which was running the software, as well as on the Azure VM which was running the licensing software. In Wireshark on the VM running the software, we could see the software trying to talk to the licensing server. On the licensing server, we could see the connection request come in, and we could see the response from the licensing software going back to the client. But when we looked on the VM running the software, we couldn't see the packet coming back from the licensing server. So the network traffic was simply getting blocked somewhere. We didn't have any network security groups set up, and we didn't have any software firewalls in place, so nothing should have been blocking traffic.

We looked at the response that was coming from the licensing server, and it had the DoNotFragment bit set on the response network packet. Now the really weird thing is that the packet was only around 1430 bytes in size, so it would have fit within a standard 1500 byte packet and there should have been zero chance of the packet being fragmented. (In hindsight, the likely explanation is that the IPsec encapsulation on a Site to Site VPN lowers the effective MTU of the tunnel below 1500 bytes, so a roughly 1430 byte packet that cannot be fragmented simply gets dropped.) But the bit was being set within the vendor's software, so we didn't have any way to remove that flag.

We were able to fix it by changing from a Site to Site VPN to a peered network connection between the two vNets. Changing the connection to vNet peering allowed the software licensing process to work as expected and solved the problem.
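
For reference, here is a minimal sketch of creating the peering in both directions with the Az PowerShell module; the vNet and resource group names are hypothetical:

# Look up both vNets, then peer them in each direction (names are placeholders).
$vnet1 = Get-AzVirtualNetwork -Name "vnet-subsidiary1" -ResourceGroupName "rg-sub1"
$vnet2 = Get-AzVirtualNetwork -Name "vnet-subsidiary2" -ResourceGroupName "rg-sub2"

Add-AzVirtualNetworkPeering -Name "sub1-to-sub2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "sub2-to-sub1" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id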

So if you have software which requires the DoNotFragment bit on its connections, an Azure Site to Site VPN isn't going to work for you. If you are doing everything in Azure, vNet peering can work where a Site to Site VPN doesn't.

Denny


Building a Data as a Service Platform on Azure SQL Database

One of the benefits of cloud computing is flexibility and scale: you don't need to procure hardware or licenses as you get new customers. This flexibility, and platform as a service offerings like Azure SQL Database, allow a lot of flexibility in what independent software vendors or companies selling access to their data can provide to their customers. However, a lot of work and thought goes into it. We have had success building out these solutions with customers at DCAC, so in this post I'll cover, at a high level, some of the architectural tenets we have implemented.

Authentication and Costing

The cloud has the benefit of providing detailed billing information, so you know exactly what everything costs. The downside is that the billing data is very granular and detailed and can be challenging to break down. There are a couple of options here: you can create a new subscription for each of your customers, which means you will have a single bill per customer, or you can place each of your customers into their own resource group and use tags to identify which customer is associated with that resource group. The tags appear in your Azure bill, which allows you to break down the bill by customer. While the subscription model is cleaner in terms of billing, it adds additional complexity to the deployment model and ultimately doesn't scale.
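
As a small sketch (names and tag values are hypothetical), tagging at resource group creation is all it takes for the tag to flow through to the bill:

# Tag the customer's resource group so the tag appears on the Azure bill.
New-AzResourceGroup -Name "rg-customer-fabrikam" -Location "eastus2" `
    -Tag @{ Customer = "Fabrikam"; Environment = "Production" }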

The other thing you need to think about is authenticating users and security. Fortunately, Microsoft has built a solution for this with Azure Active Directory, but you still need to think it through. Let's assume your company is called Contoso and your AAD domain is contoso.com. Assuming you are using AAD for your own business's users, you don't want to include your customers in that same AAD. The best approach is to create a new Azure Active Directory tenant for your customer-facing resources, in this case called cust.contoso.com. You would then add all of the required accounts from contoso.com to cust.contoso.com in order to manage the customer tenant. You may also need to create a few accounts directly in the target tenant, as there are a couple of Azure operations that require an admin from the home tenant.


Deployment of Resources

One of the things you need to think about is what happens when you onboard a new customer. This can mean creating a new resource group, a logical SQL server, and a database. In our case, it also means enabling a firewall rule, enabling performance data collection for the database, and a number of other configuration items. There are a few ways you can do this. You can use an Azure Resource Manager (ARM) template that contains all of your resource information, which is a good approach that I would typically recommend. In my case, there were some things that I couldn't do in the ARM template, so I resorted to using PowerShell and Azure Automation to perform deployments. Currently our deployment is semi-manual, with someone manually entering the parameters into the Azure Automation runbook, but it could easily be converted to be driven by an Azure Logic App or an Azure Function.
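
For illustration, a stripped-down sketch of those onboarding steps in Azure PowerShell might look like the following; every name, location, and IP address here is a placeholder, and a real runbook would do considerably more:

# Hypothetical onboarding: resource group, logical server, firewall rule, database.
$customer = "fabrikam"
$rg = New-AzResourceGroup -Name "rg-cust-$customer" -Location "eastus2" -Tag @{ Customer = $customer }

$admin = Get-Credential    # SQL admin login for the new logical server
$server = New-AzSqlServer -ResourceGroupName $rg.ResourceGroupName -ServerName "sql-cust-$customer" `
    -Location "eastus2" -SqlAdministratorCredentials $admin

New-AzSqlServerFirewallRule -ResourceGroupName $rg.ResourceGroupName -ServerName $server.ServerName `
    -FirewallRuleName "CustomerAppServers" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

New-AzSqlDatabase -ResourceGroupName $rg.ResourceGroupName -ServerName $server.ServerName `
    -DatabaseName "CustomerDb" -Edition GeneralPurpose -ComputeGeneration Gen5 -VCore 2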

Deployment of Data and Data Structures

When you are dealing with multiple databases across many customers, you desperately want to avoid the schema drift that can happen.  This means having a single database project for all of your databases. If you have to add a one-off table for a customer, you should still include it in all of your databases. If you are pushing data into your tables (as opposed to the data being entered by the application or users), you should drive that process from a central table (more to come about this later).

Where this gets dicey is with indexes, as you may have some indexes that are needed only for specific customer queries. In general, I'd say the write-performance overhead of having some additional indexes is worth the potential benefit on reads. How you manage this will depend on the number of customer databases you are managing: if you have ten databases, you might be able to manage each database's indexes by hand. However, as you scale to a larger number of databases, you aren't going to be able to manage this manually. Azure SQL can add and drop indexes as it sees fit, which can help with this, but it isn't a complete solution.
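
If you lean on the platform for that, automatic index tuning can be switched on per database with T-SQL (a sketch; run it in each database you want managed):

-- Let Azure SQL Database create and drop indexes based on the observed workload.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON);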

Hub Database and Performance Data Warehouse

Even if you aren't using a hub-and-spoke model for deploying your data, having a centralized repository for metadata about your client databases is invaluable. One common task is collecting performance data across your entire environment. While you can use Azure SQL Diagnostics to capture a whole lot of performance information, with one of our clients we've taken a more comprehensive approach, combining the performance data from Log Analytics, audit data that also goes to Log Analytics, and the Query Store data from each database. While Log Analytics contains data from the Query Store, there was some additional metadata that we wanted to capture that we could only get from the Query Store directly. We use Azure Data Factory pipelines (built by my co-worker Meagan Longoria (b|t)) to load all of this into a SQL database that serves as a data warehouse. I've even built some XQuery to parse execution plans and identify which tables are most frequently queried. You may not need this level of performance granularity, but it is a conversation you should have very early in your design phase. You can also use a third-party vendor tool for this, but the costs may not scale if your environment grows to be very large. I'm going to do a webinar on this in a month or so; I still need to work out the details, but stay tuned.
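
As one example of the kind of collection involved, a query like the following pulls execution counts directly from a database's Query Store (the columns and TOP clause are illustrative, not the exact queries we use):

-- Top 20 queries by execution count from one database's Query Store.
SELECT TOP (20)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions) AS executions,
    AVG(rs.avg_duration)     AS avg_duration_microseconds
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY executions DESC;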

Final Things

You want to have the ability to quickly do something across your environment, so having some PowerShell that can loop through all of your databases is really powerful. This lets you make configuration changes across your environment, or use dbatools or Invoke-Sqlcmd to run a query everywhere. You also need to get pretty comfortable with Azure PowerShell, as you don't want to have to change something in the Azure portal by hand across 30+ databases.
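
A minimal sketch of such a loop, with placeholder credentials and a trivial query, could look like this:

# Run one query against every database on every logical server in the subscription.
$query = "SELECT DB_NAME() AS database_name, COUNT(*) AS user_tables FROM sys.tables;"

foreach ($server in Get-AzSqlServer) {
    Get-AzSqlDatabase -ResourceGroupName $server.ResourceGroupName -ServerName $server.ServerName |
        Where-Object DatabaseName -ne "master" |
        ForEach-Object {
            Invoke-Sqlcmd -ServerInstance "$($server.ServerName).database.windows.net" `
                -Database $_.DatabaseName -Username "admin-login" -Password "REPLACE_ME" `
                -Query $query
        }
}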

