SQL Server Vulnerability Assessment

Published On: 2018-12-12

SQL Server Vulnerability Assessment (VA) in SQL Server Management Studio 17.4 or later lets you scan your databases for potential security vulnerabilities, and it can be run against SQL Server 2012 or higher. If you are not on a newer version of SSMS, don’t worry, you can download it here.

Running any kind of scan against data always concerns me, as performance impacts can really ruin your day. Luckily, VA is lightweight and runs without performance impact while still giving you an in-depth view of where you could improve your SQL Server’s security. The process is designed to meet data privacy standards and compliance requirements, using knowledge base rules that look for deviations from Microsoft’s best practices.

To run an assessment, simply right-click a database and choose Tasks. There you will see Vulnerability Assessment; choose that, then Scan for Vulnerabilities. If you have run one previously, you can access it here by choosing Open Existing Scan.

It will pop up a window asking where you want the results saved. Once you click OK, the process will run.

Here you can see my results from the AdventureworksDW2016CTP3 database. It has 6 failed items and 47 passed. It lists each item and assigns an appropriate risk level.

Clicking on one of the listed items under failed gives you more details and remediation steps with scripts to fix it. Let’s look.

In this example, I chose an easy one: this database is not auditing successful and failed login attempts. You can see below that it gives us a description of the best practice rule not followed and provides a query that we can run to see the results. I really like this feature, and it’s a handy script to keep for later use when evaluating another server’s health. It even gives us a little copy button to copy out the script, and the option to open it in a query window.
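
The exact query is right there in the VA pane to copy out, but as a rough sketch of the kind of check involved (my own assumption, not the script VA ships), you can look for a server audit specification that covers login success and failure:

-- Minimal sketch: does any server audit specification cover login events?
SELECT s.name AS audit_specification,
       d.audit_action_name
FROM sys.server_audit_specifications AS s
JOIN sys.server_audit_specification_details AS d
    ON s.server_specification_id = d.server_specification_id
WHERE d.audit_action_name IN ('FAILED_LOGIN_GROUP', 'SUCCESSFUL_LOGIN_GROUP');

If this returns no rows, nothing is auditing logins at the server audit level.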

If you scroll down further, you will get to the recommended remediation steps and script. If there is no script provided, it will give you a link to the proper documentation on how to fix the issue. In my opinion, from what I have seen, VA does a good job explaining what’s needed to fix the issue. Always keep in mind that, although these are created by Microsoft, I suggest running them in test before production and taking the time to fully understand what they are doing.
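
For the login auditing example above, a remediation would typically look something like the following. This is a hedged sketch rather than the exact script VA generates; the audit name and target are hypothetical, so adjust them to your own standards.

USE master;
GO
-- Create a server audit (writing to the application log here; a file target works too).
CREATE SERVER AUDIT LoginAudit TO APPLICATION_LOG;
GO
-- Create a specification that captures successful and failed logins, and enable it.
CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
FOR SERVER AUDIT LoginAudit
    ADD (SUCCESSFUL_LOGIN_GROUP),
    ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);
GO
-- Enable the audit itself.
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);
GO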

You may have noticed in the two screenshots above that I have drawn a box around BASELINE and Approve as Baseline. A baseline allows you to customize how the results are reported, which helps reduce clutter on future scans.

By marking the result as a BASELINE, you are telling VA that this is acceptable in your environment even though it may not meet best practices or regulatory standards. Anything that matches the baseline is marked as passed in subsequent scans, with the reason for passing noted as Per Custom Baseline.

We can see this when I run another scan. You’ll note the report now shows only 5 failed items (without me fixing the issue), and the additional information column shows the baseline as the reason.

SQL Server Vulnerability Assessment is a great non-third-party starting place for evaluating data privacy, security, and compliance standards, and it is very easy to use. Give it a try and see where your servers stand.

SQL Data Discovery and Classification in SSMS 18.0

Published On: 2018-12-05

Data exploration and documentation can be cumbersome, and classifying data is one of those tedious but necessary things. With the introduction of regulations such as GDPR, there is an ever greater need to classify and label your sensitive data. Microsoft attempts to help us out with this in the new SQL Data Discovery and Classification feature in SQL Server Management Studio (SSMS). This new feature is available in Azure SQL Database and SSMS 18.0 and higher. It can be used against databases in SQL Server 2008, compatibility mode 100, and greater.
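
If you are not sure where your databases sit, a quick way to check is to query the compatibility level directly:

-- Compatibility level 100 (SQL Server 2008) or higher is what the feature expects.
SELECT name, compatibility_level
FROM sys.databases;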

This feature scans through your database tables and identifies columns that contain potentially sensitive data. It then allows you to categorize that data, and it provides a detailed report for auditing and compliance purposes.

Let’s see how it is done.

In SSMS, right-click the database you want to use, choose Tasks, then Data Discovery and Classification, and choose Classify Data. In this example, I am using the AdventureworksDW2016CTP3 database.

A results window will pop up showing how many field recommendations it has found. Click on it to view them.

When you view the data, you will see it broken down by Schema, Table, and Column. SSMS then attempts to categorize the information (information types) and estimate a sensitivity level (sensitivity label). It allows you to accept the recommendation by checking the box on the left-hand side. If it’s not quite what you wanted, you can adjust the Information Types and Sensitivity. Once you are satisfied with the category assignments, click on the blue “Accept selected recommendations” button located at the top and choose Save.

 

As shown below, I have classified 10 columns and have 64 left unclassified.

It also gives me the ability to manually add a field and classification by choosing the Add Classification button at the top.
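
Under the covers (at least for databases prior to SQL Server 2019), SSMS appears to store these classifications as extended properties on the columns. Here is a rough sketch to list what has been saved, assuming the property names sys_information_type_name and sys_sensitivity_label_name; those names are my assumption and may differ in your version, and in SQL Server 2019 and Azure SQL Database the classifications surface in sys.sensitivity_classifications instead.

-- List classified columns by reading the column-level extended properties.
SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name AS table_name,
       c.name AS column_name,
       ep.name AS property_name,
       CAST(ep.value AS nvarchar(256)) AS property_value
FROM sys.extended_properties AS ep
JOIN sys.tables AS t
    ON ep.major_id = t.object_id
JOIN sys.columns AS c
    ON ep.major_id = c.object_id AND ep.minor_id = c.column_id
WHERE ep.name IN ('sys_information_type_name', 'sys_sensitivity_label_name');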

 

As I mentioned previously, this feature provides a way to report on the data classification.

You can retrieve the report by clicking View Report, located adjacent to the Add Classification button at the top.

*Note: if your report is blank, you forgot to hit Save while classifying.

The report breaks all your data down nicely by schema, information type, and sensitivity. You can see how this can be very useful and insightful when reviewing data for compliance. It does take a little time to go through and validate the results of the classification, but while the process might be lengthy, in the long run it is well worth the time.

New Resumable Online Index Create SQL Server 2019

Published On: 2018-11-28

SQL Server 2019 brings a very exciting new feature that is long overdue. Resumable online index create is one of my favorite new things. Paired with the resumable index rebuilds introduced in SQL Server 2017, it really gives database administrators much more control over index processes.

Have you ever started to build a new index on a very large table only to have users call and complain that their process is hung, not completing, or the system is slow? That’s when you realize you’re the cause because you tried to sneak in a new index. I have, many times, because creating a new index can impact performance and can be a problematic process for users when you have little or no downtime window available. When you kill the create process, it rolls back, requiring you to start from the beginning next time. With resumable online index creation you now have the ability to pause and restart the build at the point it was paused. You can see where this can be very handy.

To use this option when creating the index, you must include RESUMABLE=ON (which also requires ONLINE=ON):

CREATE INDEX MyResumableIndex ON MyTable (MyColumn) WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 30);

Let’s say you have only two 30-minute windows available to create this new index over the next two days. You could use the MAX_DURATION option along with RESUMABLE=ON to specify the time interval for an index being built. Once the 30 minutes is up, the index build automatically gets paused if it has not completed. When you’re ready the next day, you can RESUME right where it left off, allowing you to complete the process. Very cool.
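
As a sketch of that second window (using the same placeholder index and table names as above), you can even give the RESUME its own time box:

-- Resume the paused build, again limiting it to a 30-minute window.
ALTER INDEX MyResumableIndex ON MyTable RESUME WITH (MAX_DURATION = 30 MINUTES);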

Another added benefit is managing transaction log growth. As we all know, creating indexes, especially large ones, can cause hefty log growth events and can unfortunately lead to running out of disk space. This new functionality allows us to better manage that: we can now pause the process and truncate or back up the log mid-process, building the index in chunks.

In the case where you create an index only to get complaints from users, or where you need to manage your log growth, you can simply do the below to PAUSE it and restart it when the time is better or your transaction log maintenance has completed.

You can KILL the SPID creating the index or run the below.

ALTER INDEX MyResumableIndex ON MyTable PAUSE;

To restart, run the below or simply re-execute your CREATE INDEX statement:

ALTER INDEX MyResumableIndex ON MyTable RESUME;

According to MSDN, resumable online index create supports the following scenarios:

  • Resume an index create operation after an index create failure, such as after a database failover or after running out of disk space.
  • Pause an ongoing index create operation and resume it later, temporarily freeing system resources as required.
  • Create large indexes without using as much log space or a long-running transaction that blocks other maintenance activities, allowing log truncation.

*Note: SORT_IN_TEMPDB=ON is not supported when using RESUMABLE=ON

Once you pause it, how do you know how far the index build got and how much is left? With the resumable index rebuild feature added in SQL Server 2017, we also got a new sys.index_resumable_operations system view. This view shows us the percentage complete, current state, start time, and last pause time.
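
A quick look at any paused (or in-flight) resumable operations can be as simple as:

-- Check progress and state of resumable index operations.
SELECT name,
       state_desc,
       percent_complete,
       start_time,
       last_pause_time
FROM sys.index_resumable_operations;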

I am very excited about this new index create feature. I think this is a big WIN for SQL Server 2019.

How Much Will Compression Really Gain Me?

Published On: 2018-10-31

Did you know compression can gain you more than just space on disk? It can also allow you to fit more data into the buffer pool, which means queries get faster, more efficient reads, and that can help reduce I/O. It is a performance tool I highly recommend you consider for your environments. But how do you know how much compression savings you can gain? SQL Server gives you a handy system stored procedure, sp_estimate_data_compression_savings, that will help you calculate compression gains. It’s a great tool to use when trying to decide whether compression is right for your environment and what type to use.

This procedure allows you to estimate compression gains on whole tables, indexes, or partitions. You can test indexes, indexed views, heaps, and index partitions, and starting with SQL Server 2019 you will be able to estimate savings for columnstore and columnstore archival compression. One of the great things you can do with it is compare compression types to see which one gives you the biggest gain. That’s what we will do in this blog. So, let’s see it in action!

Using AdventureWorks2014, we will look at the table WorkOrderRouting and see which compression type, if any, will give us the most gain. But first we need to get familiar with the arguments sp_estimate_data_compression_savings uses and what the result set will tell us.

Straight from MSDN here are the arguments this procedure uses.

  • [ @schema_name= ] — The name of the database schema that contains the table or indexed view. schema_name is sysname. If schema_name is NULL, the default schema of the current user is used.
  • [ @object_name= ] — The name of the table or indexed view that the index is on. object_name is sysname.
  • [ @index_id= ] — The ID of the index. index_id is int, and can be one of the following values: the ID number of an index, NULL, or 0 if object_id is a heap. To return information for all indexes for a base table or view, specify NULL. If you specify NULL, you must also specify NULL for partition_number.
  • [ @partition_number= ] — The partition number in the object. partition_number is int, and can be one of the following values: the partition number of an index or heap, NULL, or 1 for a nonpartitioned index or heap.
  • [ @data_compression= ] — The type of compression to be evaluated. data_compression can be one of the following values: NONE, ROW, PAGE, COLUMNSTORE, or COLUMNSTORE_ARCHIVE.

In the result set you will see four important values. These are the ones we want to pay attention to, as they give a before-and-after estimation.

  • size_with_current_compression_setting — Size of the requested table, index, or partition as it currently exists.
  • size_with_requested_compression_setting — Estimated size of the table, index, or partition using the requested compression setting; and, if applicable, the existing fill factor, assuming there is no fragmentation.
  • sample_size_with_current_compression_setting — Size of the sample with the current compression setting. This includes any fragmentation.
  • sample_size_with_requested_compression_setting — Size of the sample created using the requested compression setting; and, if applicable, the existing fill factor and no fragmentation.

Now let’s have some fun.

Using NONE

USE AdventureWorks2014;

GO

EXEC sp_estimate_data_compression_savings 'Production', 'WorkOrderRouting', NULL, NULL, 'NONE' ;

GO

You will note that there is a slight difference even using NONE; what this actually shows you is what a REBUILD of the index would reclaim. Also note that the fill factor of an index can play a part in these numbers. I personally expected the numbers to be exactly the same.

Using ROW

USE AdventureWorks2014;

GO

EXEC sp_estimate_data_compression_savings 'Production', 'WorkOrderRouting', NULL, NULL, 'ROW' ;

GO

Here we see a pretty good gain: a difference of 1576 KB, about 27%. Not too bad. Remember, row compression simply converts fixed-length data types into variable-length types, so you don’t normally get high levels of compression.

Using PAGE

USE AdventureWorks2014;

GO

EXEC sp_estimate_data_compression_savings 'Production', 'WorkOrderRouting', NULL, NULL, 'PAGE' ;

GO

Here we start to see some real gains. At the page level we get significant compression savings of 4184 KB; that’s almost 75% reclaimed. That’s a HUGE savings!

Compression can be a powerful tool and a real benefit to you, but there are some considerations you need to look at before deciding on which type of compression to use. While data compression was an Enterprise Edition feature, it is now included in all editions of SQL Server starting with SQL Server 2016 Service Pack 1. Although these estimates give you insight, don’t just rely on the numbers. Take a look here first. Also keep in mind that compression can cause query plans to recompile, so you may see a slight short-term performance hit on the newly compressed object. Since compressing a table or index requires a rebuild, this is generally a non-issue.
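
Once you have decided, applying the compression is just a rebuild. Here is a minimal sketch against the same table used above (run it in a test environment first):

-- Apply page compression to the table; an index can be compressed the same way
-- with ALTER INDEX ... REBUILD WITH (DATA_COMPRESSION = PAGE).
ALTER TABLE Production.WorkOrderRouting REBUILD WITH (DATA_COMPRESSION = PAGE);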

*Caution when running on tables that have migrated legacy LOB datatypes. Please read this before running.


