Changing MaxDop For Resumable Index Create Operations

Published On: 2020-01-17

I’ll admit it, sometimes I’m wrong.

Recently I blogged about the usefulness of a newer feature that was enhanced with the release of SQL Server 2019. Resumable index operations were introduced in SQL Server 2017, but they were limited to REBUILD operations. With the release of SQL Server 2019, Microsoft has extended that ability to CREATE operations. Two options that accompany these abilities, and that I consider very versatile, are the ability to specify a maximum degree of parallelism (MAXDOP) and a maximum duration. Both give database administrators more granular control over their index operations and how much impact those operations have on the production environment.

In my previous blog, I discussed how you could resume an index create operation with a different MAXDOP value. However, after observing additional behavior related to the new system view, sys.index_resumable_operations, along with a discussion with Microsoft, it turns out that it doesn't behave the way I originally thought. According to Microsoft, the command will accept the syntax of a higher MAXDOP value, but internally it reverts to the originally specified value. Therefore, my previous blog post was slightly incorrect. While you can adjust the MAXDOP value when resuming a REBUILD operation, you cannot do the same with a resumable CREATE operation.

Let's see it in action, but first let's talk a little bit about my test environment to make sure everyone is on the same page.

In both examples, I’ll be using the AdventureWorksDWCTP3 sample database along with the dbo.FactResellerSalesXL_PageCompressed table.  This has approximately 11.6 million rows in it.

Also, since my laptop has 8 cores available to it, I will be limiting my test SQL Server instance to only 2 cores by using processor affinity.  I will also use the % Processor Time metric in Performance Monitor to view CPU utilization.

Here is what my processor affinity looks like:
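
If you prefer T-SQL over the SSMS GUI, the same affinity setting can be applied with the statement below. This is only a sketch for a test instance; it assumes the two cores you want to use are logical CPUs 0 and 1.

-- Test instance only - do not change processor affinity in production!
-- Assumes the two cores to use are logical CPUs 0 and 1
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU = 0 TO 1;
GO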

WARNING – DO NOT USE PROCESSOR AFFINITY IN PRODUCTION!! MODIFYING PROCESSOR AFFINITY CAN CAUSE SERIOUS ISSUES!!

I have also lowered the cost threshold for parallelism to 0 and adjusted the instance-level maximum degree of parallelism. I did this because I wanted to ensure that the CREATE operation would go parallel if allowed to do so, while limiting it to only two cores.
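
A rough sketch of those instance-level settings in T-SQL is shown below; the MAXDOP value of 2 is my assumption, chosen to match the two cores exposed by processor affinity.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 0;
EXEC sp_configure 'max degree of parallelism', 2;  -- assumed value; matches the two available cores
RECONFIGURE;
GO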

Alright now let’s get started.

Increasing MAXDOP

In this example, first, I'll create a new non-clustered index specifying ONLINE, RESUMABLE, and a MAXDOP of two. With the given configuration of processor affinity, a very low cost threshold for parallelism, and an appropriate maximum degree of parallelism, SQL Server will use two cores to build this index.

Here is the syntax:

CREATE NONCLUSTERED INDEX ix_factresellersalesXL_pagecompressed_OrderDate
ON dbo.FactResellerSalesXL_PageCompressed (OrderDate)
WITH (ONLINE=ON,RESUMABLE=ON,MAXDOP=2)
GO

We can see in the above video that SQL Server does indeed take the two cores available to it in order to build the index. Once the CREATE operation has completed, we can see CPU utilization drop to within normal limits.

Now, I’ll repeat that process, however, this time I will specify a MAXDOP of one.  In the middle of the CREATE operation I will pause it.  Once the process is paused, I’ll verify that it is in a resumable state and then attempt to restart it with a higher MAXDOP value of two.

CREATE NONCLUSTERED INDEX ix_factresellersalesXL_pagecompressed_OrderDate
ON dbo.FactResellerSalesXL_PageCompressed (OrderDate)
WITH (ONLINE=ON,RESUMABLE=ON,MAXDOP=1)
GO
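
The pause and resume steps themselves aren't shown above, but they look something like this (run the PAUSE from a second session while the CREATE is executing):

-- Pause the in-flight index create (from another session)
ALTER INDEX ix_factresellersalesXL_pagecompressed_OrderDate
ON dbo.FactResellerSalesXL_PageCompressed PAUSE;
GO

-- Verify it is in a resumable (paused) state
SELECT name, state_desc, percent_complete, last_max_dop_used
FROM sys.index_resumable_operations;
GO

-- Attempt to resume with a higher MAXDOP value
ALTER INDEX ix_factresellersalesXL_pagecompressed_OrderDate
ON dbo.FactResellerSalesXL_PageCompressed RESUME WITH (MAXDOP = 2);
GO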

During this process we can see that only a single core is used, even though we specified a MAXDOP of two when resuming the operation.

Decreasing MAXDOP

If you attempt to lower the MAXDOP value when you resume the operation, it will throw an error:

Msg 8622, Level 16, State 1, Line 15
Query processor could not produce a query plan because of the hints defined in this query.
Resubmit the query without specifying any hints and without using SET FORCEPLAN.
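
Continuing the first example, where the index was originally created with MAXDOP 2, the resume attempt that produces this error would look something like:

ALTER INDEX ix_factresellersalesXL_pagecompressed_OrderDate
ON dbo.FactResellerSalesXL_PageCompressed RESUME WITH (MAXDOP = 1);
GO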

Since it's a CREATE operation, it must reuse the original execution plan, and therefore it cannot adjust the MAXDOP value in either direction.

Summary

While resumable index operations are still a powerful addition to the database administrator's arsenal of tools for maintaining their environments, this is a gotcha that might sneak up on you if you aren't aware of it. If you were planning on using this feature during off hours to speed up the creation of new indexes, you might need to adjust accordingly. Remember that index rebuilds do not show this behavior, so you can adjust MAXDOP as needed when resuming those!

Make sure to subscribe so that you don’t miss out on future posts!

© 2020, John Morehouse. All rights reserved.


Memory Optimizer Advisor

Published On: 2020-01-15

Previously I wrote about In-Memory Optimized Tables; in this blog I am going to discuss how to determine which tables could benefit from being In-Memory by using a tool called the Memory Optimization Advisor (MOA). This is a tool built into SQL Server Management Studio (SSMS) that will tell you which tables could benefit from In-Memory OLTP capabilities and which have unsupported features. Once a table is identified, MOA will help you actually migrate that table and its data to be memory optimized. Let's see how it works by walking through it using a table I use for demonstrations in AdventureWorks2016CTP3. Since this is a smaller table and doesn't incur a ton of writes it is not a good use case, but for simplicity I am using it for this demo.

To start, right-click on the table and choose Memory Optimization Advisor.

Click Next in the subsequent window that appears.

Next it will validate whether your table can be migrated. It looks for things like unsupported data types, sparse columns, seeded columns, foreign keys, constraints, triggers, replication, and partitioning, just to name a few. If any item fails validation, you must make changes and/or remove those features in order to move the table to a memory-optimized table.

Next it will go over some warnings. These items, unlike the ones on the validation screen, won't stop the migration process, but they can cause the related feature to fail or behave abnormally afterwards, so be aware of them. Microsoft goes a step further here and gives us links to more information so that we can make an informed decision as to whether or not to move forward with the migration.

The next screen below is very important as it lets you choose options for your migration to a memory optimized table. I want to point out a few things in this next screen shot.

First, in the RED box, you will see a requirement for a filegroup name; a memory-optimized table must be placed in a special memory-optimized filegroup when migrating. You will also note on this screen that you can choose a new name for the original table. Renaming is required so that the original table can be kept in place while avoiding a naming conflict with the new table.
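
If your database doesn't have one yet, adding a memory-optimized filegroup by hand looks roughly like the sketch below; the filegroup name and file path are placeholders.

ALTER DATABASE AdventureWorks2016CTP3
ADD FILEGROUP AdventureWorks2016CTP3_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO

-- The FILENAME here is a container (folder) path, not a single file; adjust to your environment
ALTER DATABASE AdventureWorks2016CTP3
ADD FILE (NAME = 'AdventureWorks2016CTP3_mod', FILENAME = 'C:\Data\AdventureWorks2016CTP3_mod')
TO FILEGROUP AdventureWorks2016CTP3_mod;
GO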

Next, in the PURPLE box, you will see a checkbox to also have the data moved to the new table. If you do not check this option, your table will be created with no rows, and you will have to move your data manually.

Next, the YELLOW box is the option that controls durability, equivalent to the DURABILITY = SCHEMA_ONLY or SCHEMA_AND_DATA setting I talked about in my last blog. If you check this box, you will not have any durability, and your data will disappear after things like a restart of the SQL Server service or a reboot (this may be what you want if you are using the table as though it were a TEMP TABLE and the data is not needed). Be very aware of this option; by default it is not checked. If you are not sure which option to choose, don't check the box, which will ensure the data is durable. Click NEXT.
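
To make the durability choice concrete, here is a minimal hypothetical table (the name and columns are made up for illustration) showing what that checkbox maps to in T-SQL:

CREATE TABLE dbo.DemoDurability
(
    DemoID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    DemoValue NVARCHAR(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); -- box unchecked: schema and data survive a restart
-- DURABILITY = SCHEMA_ONLY                                 -- box checked: schema only, data is lost on restart
GO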

Remember this is making a copy of your table for migration so the new optimized table cannot have the same primary key name. This next screen assists with renaming that key as well as setting up your index and bucket counts. I’ll explain bucket counts more below.

Note in the screen above it provides a space to rename your primary key and create a new index. As we know, a primary key is an index, so you must set that up. We have two options: a NONCLUSTERED index, which is great for tables with many range queries that need a sort order, or a NONCLUSTERED HASH index, which is better for direct lookups. If you choose the latter, you also need to provide a value for the bucket count. The bucket count can dramatically impact the performance of the table, and you should read the documentation on how to properly set this value. In the case above I am leaving it at the pre-populated value and choosing NEXT.
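
As a hypothetical illustration of the two index choices (the table, index name, and bucket count below are placeholders):

CREATE TABLE dbo.DemoIndexChoices
(
    -- HASH index: best for direct (equality) lookups; requires a bucket count
    DemoID    INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    -- NONCLUSTERED index: better for range queries that need a sort order
    OrderDate DATETIME2 NOT NULL INDEX ix_DemoIndexChoices_OrderDate NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO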

This table has existing indexes, so the next step is to run through the setup of those for conversion. If you do not have any existing indexes, this part is bypassed.

Note the two index migration options on the left. This means I have two indexes to migrate.

The next screen is just a summary of all of the migration options we chose during the setup. By choosing to migrate, you will migrate your table and its data to an In-Memory optimized table, so proceed with caution. This may be a good time to hit the SCRIPT button and script the migration out for later use. Keep in mind that I already have a memory-optimized filegroup for this database, so one is not created for me. If one didn't already exist, you would see its creation on the Summary screen.

As shown below, the migration was successful. A new table was created while the old table was renamed and the data was copied over.

Let’s look at the results.

 

If I script out the new table now, you will see that it shows it is a memory-optimized table and has the appropriate bucket counts. Also note that I DID NOT check the box that would have made my table SCHEMA_ONLY durable, and you can see that reflected in DURABILITY = SCHEMA_AND_DATA below.

Summary

As you can see, the Memory Optimization Advisor makes it fairly simple to identify and migrate tables to In-Memory Optimized Tables. I highly advise testing this process before trying to convert any tables in your databases. Not all workloads are viable candidates for this feature, so be sure to do your due diligence before implementation. When you are ready to implement, this tool can make that process a lot easier for you.

 


IT Career Energizer Podcast

Published On: 2020-01-13

A couple of weeks ago I was on the IT Career Energizer Podcast. During the course of the episode, we cover some career highlights, where I think IT is going in the next few years, and all sorts of other IT career-related things.

You can subscribe to the podcast on Apple (the episode is here) or you can get the podcast from Stitcher as well.

There's a ton more information available via the show notes.

Hopefully, you’ll find the podcast useful and entertaining.

Denny

The post IT Career Energizer Podcast appeared first on SQL Server with Mr. Denny.


My SQL Saturday Chicago Precon: Leveling Up Your Azure Skills

Published On: 2020-01-08

I'm doing another precon at SQL Saturday Chicago on March 20th, 2020. The event is at Benedictine University in Lisle, IL. This time we're going to dive a little bit deeper into Azure. While last year's precon focused on basic infrastructure skills in Azure, this time we're going to focus more closely on the specifics of the Data Platform in Azure. I did a variant of this topic in India last August, but I've made a few changes based on a number of conversations with customers I had at Microsoft Ignite last year.


In this full day session, here’s what we’re going to cover:

  • VMs and Storage (like it or not this is still 80% of the cloud spend, and it’s how people migrate to the cloud)
  • Understanding the Azure services you need to manage your environment (this is going to focus heavily on Azure Automation, Azure Key Vaults, and Logic Apps)
  • Azure Data Platform–This is a Big Topic, but here’s how it breaks down
    • Understand the differences between
      • SQL Server in a VM
      • Azure SQL Database
      • Azure Managed Instance
      • Azure Synapse (nee Azure SQL Data Warehouse)
      • Cosmos DB
    • We’ll talk about the costs/benefits/performance of each of these options, and how they are incorporated into your broader architecture
    • Don’t worry, I’m a DBA, we’ll cover HA/DR here
  • We'll also talk a bit about cloud-focused application architecture
    • Why Redis in front of your database is a good idea
    • Taking advantage of dynamically scaling resources

That’s a lot of topics–it’s a full day, and the deep dive part will be on the data platform, but any and all questions are welcome. There are still a few discount tickets left–you can sign up here.

 
