Is On-premises SQL Server Still Relevant?

Unequivocally, yes: on-premises SQL Server instances are still relevant.

I’m a firm believer that the cloud is not a fad and is not going away; it’s simply an extension of a tool that we are already familiar with.  The Microsoft marketing slogan is “It’s just SQL,” and for the most part that is indeed true.  However, that does not mean that every workload will benefit from being in the cloud.  There are scenarios where moving to the cloud does not make sense, so let’s take a look at a few of them.

The cloud can cost a lot

There is no such thing as a free lunch, and the cloud is no exception.  I am sure we’ve all heard horror stories of individuals leaving resources active, which in turn cost large sums of money. While the cloud offers a wide range of capabilities that aid the day-to-day life of IT professionals everywhere, it might not be cost effective for your given workload or data volumes. Compute resources, and everything associated with them, cost money.  If you need higher CPU, more money.  If you need terabytes of storage, more money.  If you need a higher CPU-to-memory ratio for that virtual machine, more money.  All of the resources the cloud offers are essentially rented, and the bigger the space, the more money it takes. Of course, all of this depends on your organizational requirements and associated workloads.
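To make the "rent" model concrete, here is a back-of-the-envelope sketch. The rates below are invented placeholders for illustration, not real Azure pricing, and real bills include many more dimensions (I/O, licensing, egress, and so on):

```python
# Back-of-the-envelope monthly cloud "rent" estimate.
# NOTE: the rates here are illustrative placeholders, NOT real Azure pricing.
VCORE_RATE = 150.00      # assumed $/vCore/month
STORAGE_RATE = 0.12      # assumed $/GB/month

def monthly_estimate(vcores: int, storage_gb: int) -> float:
    """More CPU or more storage means more money, linearly in this sketch."""
    return vcores * VCORE_RATE + storage_gb * STORAGE_RATE

small = monthly_estimate(4, 500)     # modest workload
large = monthly_estimate(16, 4000)   # bigger box, terabytes of storage

print(f"small: ${small:,.2f}/mo, large: ${large:,.2f}/mo")
```

Even this toy model shows how quickly "more CPU, more money" compounds once you scale both compute and storage.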

An on-premises environment can give you a lower cost of ownership for hardware.  That said, the cloud offers more efficient means of upgrading and scaling, which are usually limited in on-premises ecosystems, and that efficiency can actually save you money.  It’s a trade-off that organizations have to weigh to see if moving to the cloud makes sense.

You want control of all things

Most things in the cloud require that organizations relinquish control.  That is just a plain fact, and it’s not changing.  We are trading speed and agility from an infrastructure perspective for less control over certain aspects of the architecture.  For example, with Azure SQL Database (Platform as a Service), database administrators can no longer control the database backup method or frequency.  In exchange for this loss of control, though, backups are taken automatically for us. In my opinion, this is a more than fair exchange, and I sleep better knowing that a tried and vetted backup process is taking care of things without my intervention.

You have specific compliance or regulation requirements

While most of the players in the public cloud space (Azure, Amazon, Google) are certified for a multitude of compliance regulations, it’s possible that you have a very specific one that the provider is unable to meet.  These regulations could be imposed by the governing body of the organization or come from various other sources.  If that is the case, your ability to move to the cloud is limited, and the cloud may not be a viable solution for your organization.

I do suspect that as cloud technology continues to advance, regulations and compliance requirements will slowly be brought into the fold, allowing for appropriate cloud implementations.

You do not have the expertise

Put simply, you do not have the knowledge internally to successfully migrate to the cloud, nor do you have the budget to hire someone to move you there.  Shameless plug: this is one of our core competencies here at Denny Cherry & Associates Consulting.  We help organizations (big or small) get into the cloud to push their data ecosystems forward.  However, not every organization can afford to hire consultants (short or long term) for such a project.  In that case, until you can get the expertise to help, you are left with either staying on-premises or trying to figure it out on your own.  In some respects, the cloud opens new security exposures that must be accounted for when moving to it.  If they are not, severe issues could arise for the organization, so I recommend not going down the “we’ll figure it out as we go” path without some level of guidance.

Your workloads do not perform in the cloud

Even though I am a huge fan of Azure, some workloads just won’t perform well unless you break out your wallet (see the first section).  Even with proper performance tuning, comparing performance between on-premises and the cloud is never a true apples-to-apples comparison.  The infrastructure is just too vastly different to get that “exact” level of comparison.  Organizations must find the sweet spot between performance and infrastructure costs, and frankly, sometimes that sweet spot dictates remaining on on-premises hardware.


There are probably many other reasons why on-premises infrastructures will continue to be relevant.  Each organization may have unique requirements for which running SQL Server on its own hardware is the only solution.  Remember, regardless of where you deploy SQL Server, it is just SQL and it’ll behave the same (mostly).  That does not mean you should stop expanding your skill set.  Make sure to continue to learn about cloud technologies so that when your organization is ready to make the leap, you can do so in a safe and secure manner.

© 2020, John Morehouse. All rights reserved.

The post Is On-premises SQL Server Still Relevant? first appeared on John Morehouse.

Contact the Author | Contact DCAC

5 Things You Should Know About Azure SQL

Azure SQL offers a world of benefits that consumers can capture when it is implemented correctly.  It will not solve all your problems, but it can solve quite a few of them. When speaking with clients, I often run into misconceptions about what Azure SQL can really do. Let’s look at a few of these to help eliminate any confusion.

You can scale more easily and faster

Let’s face it, I am old.  I have been around the block in the IT realm for many years.  I distinctly remember the days when scaling server hardware was a multi-month process that usually meant the resulting hardware was already out of date by the time the process was finished.  With the introduction of cloud providers, scaling vertically or horizontally can usually be accomplished within a few clicks of the mouse.  Often, once initiated, the scaling process completes within minutes instead of months.  This is multiple orders of magnitude better than having to procure hardware for such needs.

The added benefit of this scaling ability is that you can also scale down when needed to help save on costs.  Just like scaling up or out, this is accomplished with a few mouse clicks and a few minutes of your time.
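The scale-up/scale-down decision itself is simple to express. Here is a minimal sketch; the tier names, vCore counts, and prices are invented placeholders, not actual Azure SQL service objectives:

```python
# Sketch of a scale-up/scale-down decision: pick the cheapest tier that
# satisfies the current vCore requirement. Tier names and prices below are
# invented placeholders, not actual Azure SQL service objectives.
TIERS = [
    ("GP_2", 2, 380),     # (name, vCores, assumed $/month), cheapest first
    ("GP_4", 4, 760),
    ("GP_8", 8, 1520),
    ("GP_16", 16, 3040),
]

def choose_tier(required_vcores: int) -> str:
    """Return the smallest (cheapest) tier that meets the vCore requirement."""
    for name, vcores, _cost in TIERS:
        if vcores >= required_vcores:
            return name
    raise ValueError("no tier large enough; consider scaling out instead")

print(choose_tier(3))   # busy period: scale up
print(choose_tier(1))   # quiet period: scale back down
```

In practice the "apply" step would be a portal click or a single CLI/PowerShell call against the chosen service objective; the point is that the decision and the change both take minutes, not months.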

It is not going to fix your performance issues

If you currently have performance issues with your existing infrastructure, Azure SQL is not necessarily going to solve them.  Yes, you can hide the issue with faster and better hardware, but the issue will still exist, and you will need to deal with it.  Furthermore, moving to Azure SQL could introduce additional issues if the underlying performance problem is not addressed beforehand.  Make sure to look at your current workloads and address any performance issues you find before migrating to the cloud.  Also ensure that you understand the service tiers offered for the Azure SQL products.  By doing so, you’ll help guarantee that your workloads have enough compute resources to run as optimally as possible.

You still must have a DR plan

If you have ever seen me present on Azure SQL, I’m quite certain you’ve heard me mention that one of the biggest mistakes you can make when moving to any cloud provider is not having a DR plan in place.  There are a multitude of ways to ensure you have a proper disaster recovery strategy regardless of which Azure SQL product you are using.  Platform as a Service (Azure SQL Database or SQL Managed Instance) offers automatic database backups, which solves one DR issue for you out of the gate.  PaaS also offers geo-replication and auto-failover groups as additional disaster recovery solutions, easily implemented with a few clicks of the mouse.

When working with SQL Server on an Azure virtual machine (which is Infrastructure as a Service), you can perform database backups through native SQL Server backups or tools like Azure Backup.

Keep in mind that high availability is baked into the Azure service at every turn.  However, high availability does not equal disaster recovery, and even cloud providers such as Azure incur outages that can affect your production workloads.  Make sure to implement a disaster recovery strategy and, furthermore, practice it.

It could save you money

When implemented correctly, Azure SQL could indeed save you money in the long run. However, it all depends on what your workloads and data volumes look like. For example, thanks to the ease of scalability Azure SQL offers (even when scaling virtual machines), secondary replicas of your data could run at a lower service tier to minimize costs.  In the event a failover occurs, you could then scale the resource to a higher-performing service tier to ensure workload compute requirements are met. Azure SQL Database also offers a serverless tier that allows the database to be paused.  While the database is paused, you are not charged for compute consumption.  This is a great option for unpredictable workloads.
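The serverless billing idea above can be sketched in a few lines: compute is billed only for the seconds the database is active, and paused time costs nothing for compute. The per-second rate here is an invented placeholder, not real pricing:

```python
# Sketch of serverless billing: compute is billed per second only while the
# database is active; paused time incurs no compute charge.
# The rate below is an invented placeholder, NOT real Azure pricing.
RATE_PER_VCORE_SECOND = 0.000145  # assumed $/vCore/second

def compute_bill(activity_periods, vcores=2):
    """activity_periods: list of (start_sec, end_sec) when the DB was active."""
    active_seconds = sum(end - start for start, end in activity_periods)
    return active_seconds * vcores * RATE_PER_VCORE_SECOND

# Active 9-to-5 (28,800 seconds) out of an 86,400-second day; paused otherwise.
day_bill = compute_bill([(9 * 3600, 17 * 3600)])
print(f"billed for {8 * 3600} active seconds: ${day_bill:.2f}")
```

For a database that is idle two-thirds of the day, the compute bill in this model is two-thirds lower than an always-on provisioned tier at the same size, which is exactly why the serverless tier suits unpredictable workloads.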

Saving costs in any cloud provider means knowing what options are available, as well as continually evaluating which of them best suit your needs.

It is just SQL

Quite honestly, Azure SQL is not magical.  It really is the same SQL engine you are used to with on-premises deployments.  The real difference is how you engage with the product, and sometimes that can be scary if you are not used to it.  As a self-proclaimed die-hard database administrator, it was daunting for me when I started to learn how Azure SQL would fit into modern-day workloads and potentially help save organizations money.  In the end, though, it’s the same product that many of us have been using for years.


In this blog post I’ve covered five things to know about Azure SQL.  It is a powerful product that can help transform your data ecosystem into a more capable platform to serve your customers for years to come.  The cloud is definitely not a fad; it is here to stay.  Make sure to expand your horizons and look upward, because that’s where the market is going.

If you aren’t looking at Azure SQL currently, what are you waiting for?  Just do it.


The post 5 Things You Should Know About Azure SQL first appeared on John Morehouse.


Building a Data as a Service Platform on Azure SQL Database

One of the benefits of cloud computing is flexibility and scale: you don’t need to procure hardware or licenses as you get new customers. This flexibility, combined with platform as a service offerings like Azure SQL Database, gives independent software vendors and companies selling data access a lot of options in what they can provide to their customers. However, a lot of work and thought goes into it. We have had success building out these solutions with customers at DCAC, so in this post I’ll cover, at a high level, some of the architectural tenets we have implemented.

Authentication and Costing

The cloud has the benefit of providing detailed billing information, so you know exactly what everything costs. The downside is that the billing data provided is very granular and can be challenging to break down. There are a couple of options here: you can create a new subscription for each of your customers, which means you will have a single bill per customer, or you can place each customer into their own resource group and use tags to identify which customer is associated with that resource group. The tags appear in your Azure bill, which allows you to break the bill down by customer. While the subscription model is cleaner in terms of billing, it adds additional complexity to the deployment model and ultimately doesn’t scale.
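The tag-based breakdown is straightforward to script once you export the bill. Here is a minimal sketch; the line-item shape is simplified (a real cost export has many more columns), and the `customer` tag key is an assumption for illustration:

```python
# Sketch of breaking an Azure bill down by a per-customer tag.
# The line-item shape is simplified; a real cost export has many more columns,
# and the "customer" tag key is an assumed convention for this example.
from collections import defaultdict

def cost_by_customer(line_items):
    """line_items: iterable of dicts with a 'cost' and a 'tags' dict."""
    totals = defaultdict(float)
    for item in line_items:
        customer = item.get("tags", {}).get("customer", "untagged")
        totals[customer] += item["cost"]
    return dict(totals)

bill = [
    {"cost": 120.50, "tags": {"customer": "contoso"}},
    {"cost": 75.25,  "tags": {"customer": "fabrikam"}},
    {"cost": 19.99,  "tags": {"customer": "contoso"}},
    {"cost": 5.00,   "tags": {}},  # a resource someone forgot to tag
]
print(cost_by_customer(bill))
```

Note the "untagged" bucket: auditing for resources that slipped through without a customer tag is just as important as the per-customer totals themselves.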

The other thing you need to think about is authenticating users and security. Fortunately, Microsoft has built a solution for this with Azure Active Directory (AAD), but you still need to think it through. Let’s assume your company is called Contoso. If you are using AAD for your own business’s users, you don’t want to include your customers in that same directory. The best approach is to create a separate Azure Active Directory tenant for your customer-facing resources. You would then add the required accounts from your corporate tenant to the customer tenant in order to manage it. You may also need to create a few accounts directly in the customer tenant, as a couple of Azure operations require an admin from the home tenant.


Deployment of Resources

One of the things you need to think about is what happens when you onboard a new customer. This can mean creating a new resource group, a logical SQL server, and a database. In our case, it also means enabling a firewall rule, enabling performance data collection for the database, and a number of other configuration items. There are a few ways to do this. You can use an Azure Resource Manager (ARM) template containing all of your resource information, which is a good approach that I would typically recommend. In my case, there were some things I couldn’t do in the ARM template, so I resorted to using PowerShell and Azure Automation to perform deployments. Currently our deployment is semi-manual (someone manually enters the parameters into the Azure Automation runbook), but it could easily be converted to be driven by an Azure Logic App or a function.
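The onboarding steps above can be sketched as a small, ordered plan. This is a hedged illustration only: the naming convention and step names are invented, and in practice each step would call Azure (an ARM deployment, Azure PowerShell, or the CLI) rather than just describing itself:

```python
# Sketch of a customer-onboarding plan mirroring the steps described above.
# The naming convention and step names are invented for illustration; in a
# real runbook each step would invoke Azure (ARM template, PowerShell, CLI).
def onboarding_plan(customer: str, office_ip: str):
    rg = f"rg-{customer}"
    server = f"sql-{customer}"
    return [
        ("create_resource_group", rg),
        ("create_logical_server", f"{server}.database.windows.net"),
        ("create_database", f"{server}/db-{customer}"),
        ("add_firewall_rule", f"{server}: allow {office_ip}"),
        ("enable_perf_collection", f"{server}/db-{customer}"),
    ]

for step, target in onboarding_plan("contoso", "203.0.113.10"):
    print(f"{step}: {target}")
```

Keeping the plan as data like this is what makes the jump from "someone types parameters into a runbook" to "a Logic App or function drives it" a small one: the same plan can be fed to an automated executor.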

Deployment of Data and Data Structures

When you are dealing with multiple databases across many customers, you desperately want to avoid the schema drift that can happen.  This means having a single database project for all of your databases. If you have to add a one-off table for a customer, you should still include it in all of your databases. If you are pushing data into your tables (as opposed to the data being entered by the application or users), you should drive that process from a central table (more on this later).
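A drift check is easy to automate once you can list each database's tables. Here is a minimal sketch with hard-coded table sets; in practice you would populate them by querying `sys.tables` in each customer database:

```python
# Sketch of a schema-drift check: every customer database should expose the
# same set of tables as the single source-of-truth project. Table names are
# hard-coded examples; in practice you'd query sys.tables in each database.
REFERENCE_SCHEMA = {"customers", "orders", "order_items", "audit_log"}

def find_drift(databases):
    """databases: {db_name: set of table names}. Returns per-db drift only."""
    drift = {}
    for name, tables in databases.items():
        missing = REFERENCE_SCHEMA - tables   # tables the db should have
        extra = tables - REFERENCE_SCHEMA     # one-offs that never made it back
        if missing or extra:
            drift[name] = {"missing": missing, "extra": extra}
    return drift

dbs = {
    "contoso_db": {"customers", "orders", "order_items", "audit_log"},
    "fabrikam_db": {"customers", "orders", "audit_log", "one_off_table"},
}
print(find_drift(dbs))
```

Running a check like this on a schedule catches both directions of drift: a deployment that skipped a database, and a one-off table that was never folded back into the shared project.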

Where this gets dicey is with indexes, as you may have some indexes that are needed only for specific customer queries. In general, I’d say the overhead on write performance of having some additional indexes is worth the potential benefit on reads. How you manage this depends on the number of customer databases you are managing: if you have ten databases, you might be able to manage each database’s indexes individually. However, as you scale to a larger number of databases, you aren’t going to be able to manage this by hand. Azure SQL can add and drop indexes as it sees fit, which can help, but it isn’t a complete solution.

Hub Database and Performance Data Warehouse

Even if you aren’t using a hub-and-spoke model for deploying your data, it pays to have a centralized repository for metadata about your client databases. One common task is collecting performance data across your entire environment. While you can use Azure SQL Diagnostics to capture a whole lot of performance information, with one of our clients we’ve taken a more comprehensive approach, combining the performance data from Log Analytics, audit data that also goes to Log Analytics, and the Query Store data from each database. While Log Analytics contains data from the Query Store, there was some additional metadata we wanted that we could only get from the Query Store directly. We use Azure Data Factory pipelines (built by my co-worker Meagan Longoria (b|t)) to load that data into a SQL Database that serves as a data warehouse. I’ve even built some XQuery to parse execution plans and identify which tables are most frequently queried. You may not need this level of performance granularity, but it is a conversation you should have very early in your design phase. You can also use a third-party vendor tool for this, but the costs may not scale if your environment grows very large. I’m going to do a webinar on this in a month or so; I still need to work out the details, but stay tuned.

Final Things

You want the ability to quickly do something across your entire environment, so having some PowerShell that can loop through all of your databases is really powerful.  Such a loop lets you make configuration changes across the environment, or use dbatools or Invoke-Sqlcmd to run a query everywhere.  You also need to get pretty comfortable with Azure PowerShell, as you don’t want to have to change something in the Azure Portal across 30+ databases by hand.
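The "run it everywhere" pattern itself is tiny. Here is a Python sketch of the same loop (the post describes doing this in PowerShell); the executor is pluggable so the pattern can be shown without a live server, and in practice it would wrap a real connection (pyodbc, Invoke-Sqlcmd, dbatools):

```python
# Sketch of the "run a query everywhere" loop. The executor callable is an
# assumption for illustration; in practice it would wrap a real connection
# (pyodbc, Invoke-Sqlcmd, dbatools) against each database's connection string.
def run_everywhere(databases, query, executor):
    results = {}
    for db in databases:
        try:
            results[db] = executor(db, query)
        except Exception as exc:  # one bad database shouldn't stop the loop
            results[db] = f"ERROR: {exc}"
    return results

# Fake executor standing in for a real connection during a dry run.
fake = lambda db, q: f"{db}: ok"
out = run_everywhere(["contoso_db", "fabrikam_db"], "SELECT 1;", fake)
print(out)
```

The per-database try/except is the important design choice: when you are fanning a change out across 30+ customer databases, you want a report of which ones failed, not a loop that dies on the first error.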


