I am very excited and lucky to be speaking at my very first international conference, SQLBits. SQLBits, the largest SQL Server conference in Europe, is being held in Manchester, England, February 27 through March 2. It is a conference for leading data professionals, with over 70 sessions from speakers all over the world. I commonly speak at user groups, SQL Saturdays, and other data conferences here in the United States and have always wanted to broaden my reach to other countries. I am very much looking forward to seeing how, or if, conferences differ in Europe. It is a huge honor to have been selected to share my performance tuning knowledge with attendees.
My session, Performance Tuning SQL Server on Crappy Hardware, will be Friday March 1st @ 4:50pm in Room 3.
Many of us must deal with hardware that doesn’t meet our standards or contributes to performance problems. This session will cover how to work around hardware issues when newer, faster, stronger, better hardware isn’t in the budget. It’s time to make that existing hardware work for us. Learn tips and tricks on how to reduce IO, relieve memory pressure, and reduce blocking. Let’s see how compression, statistics, and indexes can bring new life to your existing hardware.
Be sure to also catch my colleagues at Denny Cherry and Associates Consulting, Joey D’Antoni (B|T) and John Morehouse (B|T), who are also speaking at SQLBits.
Joey D’Antoni –
Azure Managed Instances—Your Bridge to the Cloud – Friday March 1st @ 2:25pm in Room 9
Many organizations would like to take advantage of the benefits of using a platform as a service database like Azure SQL Database. Automated backups, patching, and costs are just some of the benefits. However, Azure SQL Database is not 100% feature-compatible with SQL Server: features like SQL Agent, CLR, and Filestream are not supported. Migration to Azure SQL Database is also a challenge, as backup and restore and log shipping are not supported migration methods. Microsoft recently introduced Managed Instances, a new option that provides a bridge between on-premises or Azure VM implementations of SQL Server and Azure SQL Database.
Managed Instances provide full SQL Server surface compatibility and support database sizes up to 35 TB. In this session, you will learn about migrating your databases to Managed Instances and developing applications for them. You will also learn about the underlying high availability and disaster recovery options for the solution.
John Morehouse –
SQL Server Databaseology: Deep Dive into Database Internals – Saturday March 2nd @ 4:10pm in Room 8
Have you ever taken apart a toaster or an alarm clock just to see how it worked? Ever wondered how that database actually functions at the record level, behind the scenes? SQL Server Databaseology is the study of SQL Server databases and their structures down to the very core of the records themselves. In this session, we will explore some of the deep inner workings of a SQL Server database at the record and page level. You will walk away with a better understanding of how SQL Server stores data, knowledge that will allow you to build better and faster databases.
There is still time to register at www.sqlbits.com.
Contact the Author | Contact DCAC
Keys and secrets (AKA passwords) are an essential part of data protection management, not only on-premises but in the cloud as well. One of the many advantages of the cloud is the ability to have a secure, persisted key store. If you have used a password manager like KeePass or 1Password, you can think of Azure Key Vault as an enterprise-level password manager, and a lot more. One of the functions Azure Key Vault supports is keeping small secrets such as passwords, tokens, connection strings, and API keys, as well as encryption keys and certificates, in a safe, tightly controlled location in the cloud. It is a centralized location for storing all your management keys, removing the need for application owners to store and manage keys themselves. This in turn reduces the risk of keys being accidentally disclosed or lost.
This service allows you to manage not only your keys but also who has access to them. You can grant granular permissions on each key to only the users and applications that need access. It also allows for separation of duties.
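To make the idea of per-secret access policies concrete, here is a toy model in Python. This is not the Azure SDK, just an illustrative sketch: each secret carries its own set of principals allowed to read it, which is the essence of the granular-permission model described above.

```python
# Toy model (NOT the Azure SDK) of a vault with per-secret access policies.
class ToyVault:
    def __init__(self):
        self._secrets = {}  # name -> (value, set of allowed principals)

    def set_secret(self, name, value, allowed):
        """Store a secret along with the principals permitted to read it."""
        self._secrets[name] = (value, set(allowed))

    def get_secret(self, name, principal):
        """Return the secret only if this principal is on its access list."""
        value, allowed = self._secrets[name]
        if principal not in allowed:
            raise PermissionError(f"{principal} may not read {name}")
        return value

vault = ToyVault()
vault.set_secret("api-key", "s3cr3t", allowed={"billing-app"})
print(vault.get_secret("api-key", "billing-app"))  # authorized caller
```

In real Azure Key Vault, the equivalent of the `allowed` set is an access policy (or Azure RBAC role assignment) granted per user or application identity.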
Monitoring for compliance and audit is another crucial component of key management. Azure Key Vault also provides logging of what is accessed in your vault and by whom. When you enable logging for Key Vault, it saves data to an Azure storage account you create and stores all the information it needs for reporting, within a retention range you set. My next blog in this series will show you step by step how to set up and configure logging using Azure Log Analytics.
As with any critical component of your infrastructure, your keys and secrets should be safeguarded against failures. Thankfully, Azure gives us the ability to store these keys with geo-redundancy in case of a disaster. You no longer have to worry about where those keys are stored or about backing them up off site. However, one large caveat to storing your keys in the cloud is that you must always have internet access: storing and using keys requires the application layer to retrieve them, so redundant, reliable internet access is essential to any cloud operation.
Key Vault is also great for creating a secure login to SQL DB. My co-worker Joey D’Antoni (B|T) blogged about it recently here. In that post he also dives into automation using this secured method and gives you a great PowerShell script in which he defines a variable called password, gets it from the Key Vault, and then passes it into the –SQLAdministratorCredentials parameter of New-AzureRMSQLServer.
Lastly, part of key management is key rotation. Every company has a different rotation strategy; however, most of the time changing out these keys is a manual, time-consuming process. Azure Automation, in conjunction with Azure Key Vault, can help you with this. This link gives you all the steps you need to set it up.
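The rotation logic itself is simple, which is exactly why it lends itself to automation. Here is a minimal runnable sketch of the idea: a dict stands in for the vault (a real runbook would call the Key Vault API instead), and each rotation appends a new version rather than destroying the old one, mirroring how Key Vault versions its secrets.

```python
import secrets
from datetime import datetime, timezone

# Illustrative rotation sketch; a dict stands in for the real vault.
vault = {}  # secret name -> list of versions, newest last

def rotate_secret(name: str) -> str:
    """Generate a new random value and store it as the newest version."""
    new_value = secrets.token_urlsafe(32)
    vault.setdefault(name, []).append(
        {"value": new_value, "created": datetime.now(timezone.utc)}
    )
    return new_value

def current_secret(name: str) -> str:
    """Applications always read the newest version."""
    return vault[name][-1]["value"]

rotate_secret("sql-admin-password")
old = current_secret("sql-admin-password")
rotate_secret("sql-admin-password")  # a scheduled rotation run
assert current_secret("sql-admin-password") != old
print(f"{len(vault['sql-admin-password'])} versions kept")
```

Scheduling a small routine like this from Azure Automation is what turns a manual chore into a hands-off process.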
Azure Key Vault is definitely a service worth looking into. It is a relatively low-cost alternative for managing and storing your company’s passwords, tokens, connection strings, and API keys, as well as encryption keys and certificates. Plus, it is a great way to get your company’s footprint into the cloud.
Azure offers a lot of value-added services included in the price you already pay. One of the things I really like is Azure Advisor. This service provides information about your entire Azure landscape. It gives you a complete overview of your system needs, including possible ways to save money.
High Availability shows you ways to improve the continuity of your business-critical applications.
Security detects threats and vulnerabilities that might lead to security breaches.
Performance shows you ways to speed up your applications.
Cost gives you ways to reduce your overall Azure spending.
Advisor gives us several recommendations on how we can make improvements; let’s investigate each.
Cost is a big one we all care about when it comes to cloud computing, so let’s dive into that one first.
It tells us that we should look into buying reserved instances rather than paying as you go. Reserved instances (RIs) are instances you pre-purchase at a base cost for your virtual machines. If your overall workload is predictable, pre-paying can significantly reduce costs: according to Microsoft, up to 72 percent compared to pay-as-you-go prices, especially with one-year or three-year terms. Moreover, Microsoft states that discounts are automatically applied to new and existing VMs that have the same size and region as your reserved instances.
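The arithmetic behind that claim is worth seeing once. The sketch below uses a made-up hourly rate; only the 72 percent figure comes from Microsoft, and it is their quoted best case (a three-year term), not a guaranteed discount.

```python
# Illustrative RI vs pay-as-you-go comparison. The hourly rate is
# hypothetical; 72% is Microsoft's quoted maximum discount, not a guarantee.
payg_hourly = 0.50            # hypothetical pay-as-you-go rate, $/hour
hours_per_year = 24 * 365     # an always-on VM runs 8,760 hours a year
max_ri_discount = 0.72        # best-case discount (three-year term)

payg_annual = payg_hourly * hours_per_year
ri_annual = payg_annual * (1 - max_ri_discount)

print(f"Pay-as-you-go: ${payg_annual:,.2f}/year")
print(f"Reserved:      ${ri_annual:,.2f}/year")
print(f"Savings:       ${payg_annual - ri_annual:,.2f}/year")
```

The key assumption is the "always-on, predictable" part: an RI for a VM that only runs a few hours a day can easily cost more than paying as you go.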
Now let’s look at High Availability.
It gives us three recommendations.
The first is to enable Soft Delete to protect your blob storage. By enabling this, we preserve soft-deleted data for a predetermined amount of time, giving you a chance to make sure you don’t need it before it is gone for good. You can configure the retention period to suit your requirements.
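A small runnable sketch of the soft-delete idea (a toy model, not the storage API): a "deleted" blob is kept alongside its deletion timestamp and is only purged once the retention window has elapsed, so it can be undeleted in the meantime.

```python
from datetime import datetime, timedelta, timezone

# Toy illustration of blob soft delete; dicts stand in for the storage account.
RETENTION = timedelta(days=14)   # configurable retention period

blobs = {"report.csv": b"data"}
soft_deleted = {}                # name -> (content, deleted_at)

def delete_blob(name, now):
    """A delete only moves the blob into the soft-deleted area."""
    soft_deleted[name] = (blobs.pop(name), now)

def undelete_blob(name):
    """Within the retention window, the blob is fully recoverable."""
    content, _ = soft_deleted.pop(name)
    blobs[name] = content

def purge_expired(now):
    """Only blobs older than the retention period are permanently removed."""
    for name in list(soft_deleted):
        if now - soft_deleted[name][1] >= RETENTION:
            del soft_deleted[name]

now = datetime.now(timezone.utc)
delete_blob("report.csv", now)
undelete_blob("report.csv")      # recovered, because we were inside the window
```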
Next, Advisor tells us we should add another endpoint to the Traffic Manager profile in another region. This is a great suggestion, as it allows us to better control network traffic for failovers between different data centers.
Finally, we can see it’s telling us to create an Azure service health alert. It is vital that we stay one step ahead of a catastrophe when managing a company’s data. These alerts will let us know if there’s a problem in the underlying Azure infrastructure and are free to add.
Moving on to Performance. Managing performance is always key to saving money in Azure: if you can use fewer resources in a more optimal fashion, you reduce the cost of your cloud solution. Thus, this is key for me when looking at recommendations. Here we have just two for this subscription. Advisor is recommending that we enable Azure SQL Database recommendations (which isn’t the clearest suggestion) and that we migrate a storage account to Azure Resource Manager.
When I dive further into Follow SQL DB Azure Recommendations, I can see it’s referring to Automatic Tuning suggestions. It’s alerting us to some duplicate indexes we may need to remove. Removing unneeded indexes can reduce overall resource consumption and save us money. For more information on Automatic Tuning, you can refer to my blog here.
The last performance recommendation suggests we move to an ARM deployment model. ARM deployments allow us to take advantage of templates that optimize security and upgrade paths. They group resources such as virtual machines, storage accounts, virtual networks, web apps, databases, and database servers into resource groups, providing a consistent management layer for your environment. Here is a great link for more information on Resource Manager.
Lastly let’s look at the Security Center recommendations. This one gives us 18 different things to look at and is one of the most critical things we should evaluate. Security should always be a priority whether it is in the cloud or on premises. Because there are so many recommendations listed, I will only point out a few.
As you can see, it looks over our entire subscription, recommending things for our SQL DBs, storage accounts, endpoints, web apps, key vaults, and virtual machines, going all the way down to our subnet configuration. It recommends we apply things like disk encryption and firewalls, make web applications accessible over HTTPS only, enable auditing on SQL Server, and enable diagnostic logs. It even gives you a Secure Score Impact column that rates each recommendation’s impact on your environment.
This dashboard gives you a wealth of informative recommendations that I would strongly consider implementing. Azure has taken leaps forward in what it offers with its subscriptions. Be sure to take a look at the improvement recommendations it offers for your cloud environment.
A key part of the SQL Server Agent is the ability to schedule jobs. While you can create one schedule for each Agent job, frequently with applications like Reporting Services, users use shared schedules across multiple jobs. For instance, you can set a schedule to run at 8 am on weekdays, run every 2 hours, or pick from a predefined list of schedules that already exist in the msdb database. These are very convenient. However, if you choose to use them, be sure you are keeping track of what is running on each shared schedule. You should not have everything running at once.
This is an example of a job schedule in a Management Studio SQL Server Agent Job. You can create a new one or pick from a list of already made schedules.
There’s nothing inherently wrong with using shared schedules; some very small operations can all run at the same time. However, when you start to use them for larger operations, you can really impact the overall performance of your server.
Many times I have seen high CPU, locking, and many other performance issues due to the system being overloaded with jobs running on shared schedules, or all at the same time (midnight is a frequent choice). Not every report should run at 8 am, and not every data load needs to run every 2 hours. Even if you are not using shared schedules and have added a separate schedule per job, it is important to make sure you are not running up against other scheduled work. If you are using applications like SSRS, then you need to pay attention to when the report subscription refreshes are happening. Don’t overload your system by having everything run at once. Job and subscription schedules need to be analyzed and evaluated just like everything else you care for in your database.
To keep this from happening, consult your Agent jobs to see what else is running before scheduling additional ones. You can easily get a glimpse in Job Activity Monitor to see what’s running and when it will run next. You can also really dive in using the scripts provided in this great post by Rick Dobson: https://www.mssqltips.com/sqlservertip/5019/sql-server-agent-job-schedule-reporting/
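Whatever source you pull the schedule data from (Job Activity Monitor, or a query against msdb), a quick grouping by next run time makes pileups obvious. Here is a minimal sketch with made-up job names standing in for real output:

```python
from collections import Counter

# Hypothetical (job name, next run time) pairs, the kind of data the Job
# Activity Monitor or an msdb schedule query would give you.
next_runs = [
    ("Nightly ETL",       "00:00"),
    ("Index maintenance", "00:00"),
    ("Full backup",       "00:00"),
    ("Sales report",      "08:00"),
]

# Count jobs per start time; any time with more than one job is a pileup.
pileups = {t: n for t, n in Counter(t for _, t in next_runs).items() if n > 1}
for time, count in sorted(pileups.items()):
    print(f"{count} jobs all start at {time}; consider staggering them")
```

Staggering those midnight jobs by even 15 to 30 minutes each is often enough to flatten the spike.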
Just a quick tidbit I thought I’d share as I am seeing this way too often in environments.