SQL Server 2019 is Now Available on Windows Containers—Why You’re Doing It Wrong

Published On: 2019-07-05

I try to avoid writing blog posts that I like to call "hot takes"—quick, crappy opinions on the news of the day, but this is a topic I feel particularly strongly about. I'm not sure how many of you were using Microsoft's Azure Hadoop offering, HDInsight, when it debuted in the 2012-13 timeframe, but it had a unique characteristic. Unlike virtually every other Hadoop offering at the time (and this was a hot era for Hadoop), HDInsight ran on Windows Server. That meant all the assorted utilities in and around Hadoop were always trailing what was current on Linux. It also meant that when you had issues and searched for help in various online forums, you were always challenged, because you got weird error messages and your cluster used oddball file pathing, because Windows. Since 2013, Microsoft has gotten a new CEO, the stock price has shot way up, and the company has embraced open source software. SQL Server is on Linux now, which leads me to my next points.


Ever since SQL Server (codename Helsinki) debuted in Docker, I've seen the benefit of using containers with databases. When SQL Server started supporting Kubernetes, I really saw the benefit and quickly embraced it, writing and presenting on the topic and evangelizing the benefits of the Kubernetes platform (of which there are many). Before I start my rant, remember that in a container platform there is only one copy of the base operating system on a given host. Your container contains the libraries and binaries it needs but shares a kernel with the host operating system. This means the base operating system of the host must be the same as that of the container.

Much like Hadoop, Kubernetes was built from the ground up as a Linux-based platform. When Google built the Borg cluster management system that eventually became Kubernetes, it was built on Google's own customized build of Linux. This means there are a lot of assumptions about the way things work in Linux built into Kubernetes. While I know I've heard a reasonable amount of community demand for Windows containers (clearly enough that Microsoft has made an effort to build support into both Windows Server and Azure Kubernetes Service), I can't help but feel this is not a good long-term plan.

When dealing with open source software, it's good to be on a platform that is widely utilized. When you are searching for help on forums or looking for the latest patch, you want to be on the platform that most people are running. Another example I like to use for this is Oracle on Windows, which I supported in a past life. Since Oracle was most commonly run on Linux/UNIX, patches for Windows were always days or weeks behind. While I appreciate the effort of the Windows team to build container and Kubernetes support into the platform, Microsoft is going to be the only support/patching path for Kubernetes on Windows, which hampers one of the key benefits of OSS: rapid fixes.

There's another elephant in the room—in this scenario, Linux is free as in beer, and you will have to license (pay for) your Windows nodes. If you are running a supported distribution of Linux like Red Hat (now owned by IBM), you will pay about the same as you would for Windows Server licensing, but in most cases organizations running Kubernetes are doing it on a free version of the operating system.

I don't mean to slight anything Microsoft is doing (note: I'm a shareholder and currently a contractor at MS), but I feel as though if you are implementing Kubernetes on Windows, you are likely doing containers wrong. With .NET Core and SQL Server available on Linux, there are few reasons to tie your development to the Windows platform. System administration reasons like domain authentication and group policy support make some sense to me; however, I can't help but think this feels like Hadoop on Windows. Also, the lack of community support can't be overemphasized–this is a big deal, especially on a platform that isn't fully mature, like Kubernetes. By the way, Microsoft stopped offering HDInsight on Windows sometime in 2014-15. Just saying.

 

 



Adventures in Awful Application Design–Amtrak

Published On: 2019-06-10

I was going to New York last weekend from my home of Philadelphia. We were running late for the train, and for the first time ever, I had booked an award ticket on Amtrak. For reasons unbeknownst to me, you cannot make changes to an award ticket in their app (I didn't try the website). Additionally, when you call the standard Amtrak line, the customer service reps can't change an award ticket unless you have defined a PIN. This PIN is defined by telling an awards customer service rep what you want your PIN to be. (Because that's really secure.) While this is all god-awful business process exacerbated by crappy IT, it really comes down to bad business processes.


Photo by SenuScape on Pexels.com

The bad application design came into play when the awards rep tried to change my ticket and asked, "Do you have it open in our app? I can't make changes to it if you have it open." My jaw kind of dropped when this happened, but I went ahead and closed the app. Then the representative was able to make the change. We had to repeat the process when the representative had booked us into the wrong class of service. (The rep was excellent and even called me back directly.)

But let's talk about the way most mobile apps work. There are a series of front-end screens that are local to your device, and most of the interaction is handled through a series of REST API calls. The data should be cached locally on your device after the API call, so it can still be read if your device is offline. If you are a database professional, you are used to the concept of ACID (Atomicity, Consistency, Isolation, Durability), which is how we ensure things like bank balances in our databases can remain trusted. In general, a tenet of most database applications is that readers should never block writers–if a process needs to write to a record, someone reading that record should not affect the operation. There are exceptions to this rule, but these rules are generally enforced by the RDBMS in conjunction with the isolation level of the database.

Another tenet of good database development is that you should do what you are doing and get the hell out of the database. Whether you are reading or writing, you should get the data you need and then return either the data or the status of the transaction to your app. Otherwise, you are going to keep your transaction open and impact other transactions, with a generally unpredictable set of timings. That's the whole point of using the aforementioned REST API calls–they do a thing, return your data (or confirm that you updated some data), and then get the hell out.
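To make that concrete, here's a minimal sketch of what a short, well-behaved reservation read could look like on SQL Server. The database name, table, and columns are hypothetical placeholders, not anything I know about Amtrak's actual system.

```sql
-- Assumed, illustrative schema: a ReservationsDB database with a dbo.Reservations table.
-- Row versioning lets readers see a consistent copy of a row without blocking writers.
ALTER DATABASE ReservationsDB SET READ_COMMITTED_SNAPSHOT ON;
GO

-- A "get my reservation" API call should be one short read: fetch the row, hand it back
-- to the app, and hold nothing open while the passenger stares at the screen.
DECLARE @ReservationID int = 12345;

SELECT ReservationID, PassengerName, ClassOfService, DepartureTime
FROM dbo.Reservations
WHERE ReservationID = @ReservationID;
-- No explicit BEGIN TRAN: the statement autocommits, any locks are released immediately,
-- and an agent updating the same reservation is never blocked by the mobile app.
```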

What exactly is Amtrak's app doing? Obviously I don't have any backend knowledge, but based on that comment from the CS rep, opening your reservation in the mobile app opens a database transaction that doesn't close until you close the app. I can't fathom why anyone would ever design an app this way, and I can't think of a tool that would easily let you design an app like this. At some point, someone made a really bad design choice.



Let’s Talk about Backups, and How to Make Them Easier

Published On: 2019-04-26

Recently I've run into a couple of situations where customers had lost key business data due to several factors. Whether it is ransomware, a virus, or just hardware failure, it doesn't really matter how you lose your data; it just matters that your data is lost and your business is now in a really bad spot. When I first thought about writing this post this morning, I was going to tell you how important it is to back up your databases, and how the cloud is a great disaster recovery solution for those backups. The problem is, if you are reading this blog, you likely at least know that you should have backups. You probably even know how to optimize them to make them run faster, and you test your restores. You do test your restores, right?
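If you want a place to start, here is a rough sketch of what a restore test can look like in T-SQL; the file paths, database name, and logical file names below are placeholders you would swap for your own (RESTORE FILELISTONLY will tell you the logical names).

```sql
-- Quick sanity check of a backup file (better than nothing, but not proof it restores):
RESTORE VERIFYONLY
FROM DISK = 'D:\SQLBackups\YourDatabase.bak'
WITH CHECKSUM;

-- The real test: restore to a throwaway database name and inspect the result.
RESTORE DATABASE YourDatabase_RestoreTest
FROM DISK = 'D:\SQLBackups\YourDatabase.bak'
WITH MOVE 'YourDatabase'     TO 'D:\SQLData\YourDatabase_RestoreTest.mdf',
     MOVE 'YourDatabase_log' TO 'D:\SQLData\YourDatabase_RestoreTest_log.ldf',
     RECOVERY, STATS = 10;
```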

 


Photo by Anthony on Pexels.com

Then I thought a little harder, and I was reminded of a tweet that my good friend Vicky Harp (the Principal Program Manager of the SQL Server tools team at Microsoft) wrote a couple of years ago:

Backups are DBA 101, but most of the organizations having these types of issues don't have a DBA. They might not even have a dedicated IT person, or if they do, it's someone who comes by once a week to make sure the printer and wifi still work and takes care of company laptops. The current situation is that we hope they go to a SQL Saturday or user group, learn about backups, and start taking them. So I thought about what could be done to make that easier and faster. The database engine already has the technology to take backups automatically (maintenance plans) and even move them to a secondary or tertiary location (backup to URL).
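Backup to URL, for example, is just a credential and a BACKUP statement. The storage account, container, and database names in this sketch are placeholders, but the shape of it really is this simple:

```sql
-- Credential named for the blob container URL, using a shared access signature (SAS).
-- Paste the SAS token without the leading '?'.
CREATE CREDENTIAL [https://yourstorageaccount.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token>';
GO

-- Back the database up straight to Azure Blob Storage.
BACKUP DATABASE YourDatabase
TO URL = 'https://yourstorageaccount.blob.core.windows.net/sqlbackups/YourDatabase.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```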

What I'm asking for (besides asking you to upvote that User Voice item) is for Microsoft, as part of SQL Server setup, to add two new screens. The first would be called "Backup"—it would offer a dire warning to the effect of:

In order to protect the data in your databases, Microsoft strongly encourages you to take backups of your data. In the event of hardware failure, data corruption, or malicious software, Microsoft support will be unable to help you recover your data, and you will incur data loss. This box is checked by default to enable automatic daily backups of all of your databases.

The next screen would offer options on where to store your backups and how long to retain them. It would offer the option to store the backups locally or on a network share, and give you the ability to encrypt your backups. It would also allow you to back up your encryption key (and strongly encourage you to do so).
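Under the covers that would just be a certificate-encrypted backup plus a backup of the certificate itself, something like the sketch below (all names, passwords, and paths are placeholders):

```sql
USE master;
GO
-- One-time setup: a database master key and a certificate to protect backups.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';
GO

-- Back up the certificate and its private key; without these you cannot restore elsewhere.
BACKUP CERTIFICATE BackupCert
TO FILE = 'D:\SQLBackups\BackupCert.cer'
WITH PRIVATE KEY (FILE = 'D:\SQLBackups\BackupCert.pvk',
                  ENCRYPTION BY PASSWORD = '<another strong password>');
GO

-- An encrypted backup of the database itself.
BACKUP DATABASE YourDatabase
TO DISK = 'D:\SQLBackups\YourDatabase_encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
```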

The next part of that screen is where I think this could become attractive to Microsoft. It would give you the option to back up your databases to URL in Azure, and if you didn't have an Azure account, it would let you create one. Frankly, for most organizations who would be using this as their backup solution, Azure is the best option.

Arguments Against This Feature

The arguments I can see against this feature request are minimal. One could argue that you would like to use Ola's scripts or dbatools, or to change the striping or buffer count for your backups. If you are making any of those arguments, this feature isn't for you. If you've ever installed SQL Server with a configuration file, this feature isn't for you. The only valid argument I can see against doing this is that one could potentially fill up a file system with backup files. Maintenance plans do have the ability to prune old backups, so I would include that in the deployment. I might also build some alerting and warnings into the SQL Agent to notify someone by default.
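For anyone making the power-user argument above: those knobs are a one-liner anyway, which is exactly why a default setup-time backup wouldn't get in your way. A rough example of a striped, tuned backup (the paths and values are illustrative, not recommendations):

```sql
-- Stripe across two files on separate volumes, with a larger buffer count and transfer size.
BACKUP DATABASE YourDatabase
TO DISK = 'D:\SQLBackups\YourDatabase_1.bak',
   DISK = 'E:\SQLBackups\YourDatabase_2.bak'
WITH COMPRESSION, CHECKSUM,
     BUFFERCOUNT = 64,
     MAXTRANSFERSIZE = 4194304,
     STATS = 10;
```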

The other argument I see is that Microsoft offers a similar product with Azure Backup for SQL VMs, and this would cannibalize that feature. It very well might, but that product is limited to Azure, and we are aiming for the greater good here: helping more people protect their data is good for Microsoft, good for SQL Server, and good for the world.

Summary

If you are reading this, go upvote my User Voice request here. This feature isn't about you; it's about all the orgs whose IT decisions have put them at the point of data loss when they were really none the wiser. Let's make life easier for folks.



The Ransomware Breach You’re Going to Have

Published On: 2019-04-23

I don't typically blog about network engineering here. However, in the last few weeks I've seen several major companies get hit with ransomware attacks. While this isn't an uncommon thing in 2019, it is uncommon that their entire environments were taken offline because of it. So with that, let's talk about how these attacks can do this. Since the best way to deal with any sort of attack is to operate on an "assume breach" model, let's talk about the best way to defend against attacks: physical network security.

The Attack Vector: Dumb Humans

The easiest way to attack any company is via human targets. Whether it's bribing a sysadmin to get credentials, a standard phishing attack, or any other sort of malware, the best way to pwn a company is by getting an unknowing (or knowing) human to do most of the work for you. There are ways to stop these sorts of things from getting in the front door–the first is to use an email service like Office 365 or Gmail, which have built-in phishing protections and use machine learning, based on their massive volume of exposure to attacks, to protect you. You should also educate your users to avoid these scams–there is good training out there, which I've taken for a few clients that do this.

But the real approach is to take an assume-breach methodology. I'm currently working on a financial system for a Fortune 100 company. In order to reach the servers, I have to use a special locked-down laptop, have two key cards, and go through two jump hosts. Even if that laptop were to get hacked (and it wouldn't, because you can't install software on it), you couldn't do anything without my key cards and PINs.

Physical and Virtual Network Security

Can you connect to your production database servers over any port from your desktop? Or to your domain controllers? If you can do that, you have a problem, because once someone's desktop gets pwned, the malicious software that gets installed when the CEO tries to open a PDF of the new Taylor Swift album can run anywhere on your network. This is bad.


Photo by Steve Johnson on Pexels.com

The networking gospel according to IBM.

The IBM white paper linked above is the gold standard of how to build and segment your network. In a common example there are a few zones:

Black Zone: No outbound external traffic; inbound restricted to whitelisted IPs and ports from within the black and green zones. This is where your domain controllers and database servers with any sort of sensitive data should live.

Green Zone: Limited external traffic (think Windows Update, Power BI gateway, Linux package repos); can communicate with end-user networks over controlled ports. This is where most of your application servers and some management services should live.

Blue Zone: Management zone–this is where you should have your jump boxes so that you don’t have to log directly onto production boxes. This can have limited external traffic, and should be able to talk to the servers in the black zones, but only over ports that you have specified.

Yellow Zone: This is typically where your DMZ will exist. That means you are allowing inbound traffic from the internet. This is obviously a big attack vector, which means it should live on an isolated segment of your network, and the traffic to and from this zone should be locked down to the specific IPs and ports that need connectivity.

Red Zone: This is where the users live. There’s internet access, but communications from this network should nearly always stay within this network. You will have teams that want to deploy production workloads in this network. Don’t let them.

But That Sounds Hard

Good security is always hard. See my server management example above. In that scenario, when the CEO gets pwned you might have to deal with a bunch of laptops hit by ransomware, but since your servers and domain controllers aren't easily reachable from the desktop network, your company keeps moving, and you can simply re-image the pwned machines.

This is Trivial to Implement with Cloud Networks

In order to do this on-premises, you may have to buy a lot more networking gear than you already have, or at least restructure a whole lot of virtual LANs (VLANs). However, in a cloud scenario, or even in some virtual infrastructures, this kind of model is trivial to implement. Just look up network security groups in Azure (and you never have to run any cable in the cloud).

Technology, and especially enterprise technology, isn't easy, but it's more important than ever to use good security practices across your environment.

