SQL Data Discovery and Classification in SSMS 18.0

Published On: 2018-12-05

Data exploration and documentation can be cumbersome, and classifying data is one of those tedious but necessary tasks. With the introduction of regulations such as GDPR, there is an ever greater need to classify and label your sensitive data. Microsoft attempts to help us out with this with the new SQL Data Discovery and Classification feature in SQL Server Management Studio (SSMS). This new feature is available in Azure SQL Database and SSMS 18.0 and higher, and it can be used against databases in SQL Server 2008 (compatibility mode 100) and greater.

This feature scans through your database tables and identifies columns that may contain sensitive data. It then allows you to categorize that data and provides a detailed report for auditing and compliance purposes.

Let’s see how it is done.

In SSMS, right-click the database you want to scan, choose Tasks, then Data Discovery and Classification, and then Classify Data. In this example, I am using the AdventureWorksDW2016CTP3 database.

A results window will pop up showing how many column recommendations it has found. Click on it to view them.

When you view the results, you will see your data broken down by schema, table, and column. SSMS attempts to categorize the information (information types) and estimate a sensitivity level (sensitivity label). You can accept a recommendation by checking the box on the left-hand side. If it’s not quite what you wanted, you can adjust the information type and sensitivity label. Once you are satisfied with the category assignments, click the blue “Accept selected recommendations” button at the top and choose Save.

As shown below, I have classified 10 columns and have 64 left unclassified.

It also gives me the ability to manually add a column and its classification by choosing the Add Classification button at the top.

As I mentioned previously, this feature provides a way to report on the data classification.

You can retrieve the report by clicking View Report, located adjacent to the Add Classification button at the top.

*Note: if your report is blank, you forgot to hit Save while classifying.

The report breaks all your data down nicely by schema, information type, and sensitivity label. You can see how this can be very useful and insightful when reviewing data for compliance. It does take a little time to go through and validate the results of the classification, but in the long run it is well worth the effort.
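
If you want to double-check what was saved outside of the report, SSMS stores these classifications as extended properties on the columns. The query below is a minimal sketch that lists them, assuming the property names SSMS uses for this feature (sys_information_type_name and sys_sensitivity_label_name):

-- List saved classifications via the column-level extended properties
-- SSMS writes for this feature (adjust the names if your version differs)
SELECT SCHEMA_NAME(t.schema_id) AS SchemaName,
       t.name   AS TableName,
       c.name   AS ColumnName,
       ep.name  AS PropertyName,
       ep.value AS PropertyValue
FROM sys.extended_properties ep
JOIN sys.tables t
    ON ep.major_id = t.object_id
JOIN sys.columns c
    ON ep.major_id = c.object_id
   AND ep.minor_id = c.column_id
WHERE ep.class = 1
  AND ep.name IN (N'sys_information_type_name', N'sys_sensitivity_label_name')
ORDER BY SchemaName, TableName, ColumnName;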

Who should own your SQL Agent jobs?

Published On: 2018-12-03

There’s been a lot of discussion recently about SQL Agent jobs, proxy accounts, and job ownership. I wanted to try and clarify some of the myths out there, including around job ownership and permissions.

The owner of a job determines the security context that the SQL Agent job runs under. By default, the owner will be the user that creates the job. For normal application-level jobs, this can be changed to an application-level account, or another non-privileged login which has the permissions inside SQL Server to do the work the job needs to do.
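
For example, reassigning a job to a dedicated application login is a single call to sp_update_job in msdb (the job and login names below are hypothetical):

-- Hypothetical names: reassign a job to a non-privileged application login
EXEC msdb.dbo.sp_update_job
    @job_name = N'NightlyAppImport',
    @owner_login_name = N'AppServiceLogin';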

The account that the SQL Server Agent service runs as must have sysadmin rights within the SQL Server instance, otherwise the SQL Server Agent will not be able to run. Also, maintenance jobs that rebuild indexes or update statistics are going to need at least database owner rights within the databases they touch, if not sysadmin rights within the instance.

Jobs for things like Replication and CDC should generally be left alone with sysadmin rights as that’s what they are going to need to function.

Jobs that are created by your application should run with whatever permissions they need to function.

If you want (or need) to run job steps where each step executes as its own account, then proxy accounts are going to be the way to go. You configure proxy accounts per job step in SQL Agent, so you can create a different proxy for each job step if you want or have to. These proxy accounts can have whatever permissions the job step needs to complete the task at hand. If a job step needs sysadmin rights to complete, then you can create a SQL Server proxy for that step; if a job step needs minimal rights to run, then you can safely grant the proxy just the permissions that it needs.
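
As a rough sketch of the setup (all names below are hypothetical), you create a credential that maps to a Windows account, create a proxy over that credential, and grant the proxy to the subsystem the job step uses:

-- Hypothetical names throughout. Map a Windows account to a credential.
CREATE CREDENTIAL LowPrivCredential
    WITH IDENTITY = N'DOMAIN\LowPrivAccount',
    SECRET = N'StrongPasswordHere';

-- Create a SQL Agent proxy over that credential.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'LowPrivProxy',
    @credential_name = N'LowPrivCredential',
    @enabled = 1;

-- Allow the proxy to be used by, for example, PowerShell job steps.
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'LowPrivProxy',
    @subsystem_name = N'PowerShell';

Once the proxy exists, you select it in the Run as drop-down on the job step.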

If you want to dig deeper into the security of SQL Server, then I’d highly recommend that you look over a copy of my book, Securing SQL Server, and check out my precon at the PASS Summit in 2018.

Denny


New Resumable Online Index Create in SQL Server 2019

Published On: 2018-11-28

SQL Server 2019 brings a very exciting new feature that is long overdue. Resumable online index create is one of my favorite new things. Paired with the resumable index rebuilds introduced in SQL Server 2017, this really gives database administrators much more control over index processes.

Have you ever started to build a new index on a very large table, only to have users call and complain that their process is hung, not completing, or the system is slow? That’s when you realize you’re the cause, because you tried to sneak in a new index. I have, many times, because creating a new index can impact performance and can be a problematic process for users when you have little or no downtime window available. When you kill the create process, it rolls back, requiring you to start from the beginning the next time. With resumable online index creation, you now have the ability to pause and restart the build at the point it was paused. You can see where this can be very handy.

To use this option when creating the index, you must include RESUMABLE = ON (which also requires ONLINE = ON):

CREATE INDEX MyResumableIndex ON MyTable (MyColumn) WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 30 MINUTES);

Let’s say you have only two 30-minute windows available to create this new index over the next two days. You can use the MAX_DURATION option with the new RESUMABLE = ON to specify the time interval for the index build. Once the 30 minutes are up, the index build is automatically paused if it has not completed. When you’re ready the next day, you can RESUME right where it left off, allowing you to complete the process. Very cool.

Another added benefit is managing transaction log growth. As we all know, creating indexes, especially large ones, can cause hefty log growth events and can unfortunately lead to running out of disk space. This new functionality allows us to better manage that: we can now pause the process, back up or truncate the log mid-build, and build the index in chunks.
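
A minimal sketch of that workflow, using the index from the earlier example with a hypothetical database name and backup path:

-- Pause the in-flight index build so the log can be cleared
ALTER INDEX MyResumableIndex ON MyTable PAUSE;

-- Back up the transaction log (database name and path are hypothetical)
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';

-- Pick the build back up right where it left off
ALTER INDEX MyResumableIndex ON MyTable RESUME;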

In the case where you create an index only to get complaints from users, or you need to manage your log growth, you can simply do the below to PAUSE the build and restart it when the timing is better or your transaction log maintenance has completed.

You can KILL the SPID creating the index, or run the following:

ALTER INDEX MyResumableIndex ON MyTable PAUSE;

To restart, run the following, or simply re-execute your original CREATE INDEX statement:

ALTER INDEX MyResumableIndex ON MyTable RESUME;

According to MSDN, resumable online index create supports the following scenarios:

  • Resume an index create operation after a failure, such as a database failover or running out of disk space.
  • Pause an ongoing index create operation and resume it later, temporarily freeing system resources as required.
  • Create large indexes without using as much log space or holding a long-running transaction that blocks other maintenance activities, allowing log truncation.

*Note: SORT_IN_TEMPDB = ON is not supported when using RESUMABLE = ON.

Once you pause it, how do you know how far the index build got and how much is left to do? Along with the resumable index rebuild feature added in SQL Server 2017, we also got a new sys.index_resumable_operations system view. This view shows us the percentage complete, current state, start time, and last pause time.
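
For example, a quick progress check might look like this:

-- Check the progress and state of any resumable index operations
SELECT name,
       percent_complete,
       state_desc,
       start_time,
       last_pause_time
FROM sys.index_resumable_operations;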

I am very excited about this new index create feature. I think this is a big win for SQL Server 2019.

Think a Kubernetes Cluster is too Expensive to Build?

Published On: 2018-11-26

One of the complaints that I hear about building a new Kubernetes (K8s) cluster is that a cluster for testing out K8s is too expensive for companies to spin up. These days, that just isn’t true. DCAC was able to build our brand new K8s customer lab for about $2,000 all in. Granted, our cluster is a smaller two-node cluster, and there are only 128 gigs of RAM per server (we’ll increase this if we need to). But this is a big enough cluster to show clients how K8s can be helpful to them and to work on scripts for clients (and to publish online).

Is K8s running SQL Server something that all DBAs should be able to test out, to see if this is where their company should be going in the future? Yes, for sure.

Is this something that’s too expensive for companies to try out? No, not at all. If your employer has a couple of thousand dollars for hardware, you can make a test lab happen. Even if the equipment gets thrown away six months later, it’s worth the small expense.

Needless to say, our lab for testing out customer ideas is growing again.

Denny

