Memory Optimization Advisor

Published On: 2020-01-15

Previously I wrote about In-Memory optimized tables; in this blog I am going to discuss how to determine which tables could benefit from being In-Memory by using a tool called Memory Optimization Advisor (MOA). This is a tool built into SQL Server Management Studio (SSMS) that will tell you which tables could benefit from In-Memory OLTP capabilities and which have unsupported features. Once a table is identified, MOA will help you actually migrate that table and its data to be memory optimized. Let’s see how it works by walking through it using a table I use for demonstrations in AdventureWorks2016CTP3. Since this is a smaller table and doesn’t incur a ton of writes it is not a good use case; however, for simplicity I am using it for this demo.

Once created, right-click on the table and choose Memory Optimization Advisor.

Click Next in the subsequent window that appears.

Next it will validate whether your table is able to be migrated. It looks for blockers such as unsupported data types, sparse columns, seeded columns, foreign keys, constraints, triggers, replication, and partitioning, just to name a few. If any item fails, you must make changes and/or remove those features in order to move the table to a memory optimized table.

Next it will go over some warnings. These items, unlike the ones in the validation screen, won’t stop the migration process but can cause behavior of that option to fail or act abnormally, so be aware of them. Microsoft goes a step further here and gives us links to more information so that we can make an informed decision as to whether or not to move forward with the migration.

The next screen below is very important, as it lets you choose options for your migration to a memory optimized table. I want to point out a few things in the next screenshot.

First, in the RED box, you will see a requirement for a file group name. A memory optimized table must have a special memory optimized file group when migrating. You will also note in this screen that you can choose what to rename the original table; the original table is renamed and kept in place, which avoids naming conflicts with the new table.

Next, in the PURPLE box, you will see a checkbox option to also have the data moved to the new table. If you do not check this option, your table will be created with no rows and you will have to move your data manually.

Next, in the YELLOW box, is the option equivalent to the DURABILITY = SCHEMA_ONLY or SCHEMA_AND_DATA choice I talked about in my last blog. If you check this box, you will not have any durability and your data will disappear after things like a restart of the SQL Server services or a reboot (this may be what you want if you are using this table as though it were a temp table and the data is not needed). Be very aware of this option; by default it is not checked. If you are not sure which option to choose, don’t check the box. That will ensure the data is durable. Click NEXT.

Remember this is making a copy of your table for migration so the new optimized table cannot have the same primary key name. This next screen assists with renaming that key as well as setting up your index and bucket counts. I’ll explain bucket counts more below.

Note the screen above provides a space to rename your primary key and create a new index; since a primary key is an index, you must set that up. We have two options: a NONCLUSTERED index, which is great for tables with many range queries that need a sort order, or a NONCLUSTERED HASH index, which is better for direct lookups. If you choose the latter, you also need to provide a value for the bucket count. Bucket count can dramatically impact the performance of the table, and you should read the documentation on how to properly set this value. In the case above I am leaving it at the pre-populated value and choosing NEXT. The sketch below makes the distinction between the two index styles concrete.
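
For illustration only, here is what those two index styles look like in DDL on a hypothetical table of my own (the advisor generates the real DDL for you, and the BUCKET_COUNT here is a placeholder — a common rule of thumb is one to two times the expected number of distinct key values):

-- Hypothetical table; not generated by the advisor.
CREATE TABLE dbo.MOA_IndexStyles
(
    -- HASH index: best for point lookups on equality predicates.
    OrderID   INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    -- Range (NONCLUSTERED) index: best for range scans and ordered retrieval.
    OrderDate DATETIME2 NOT NULL
        INDEX IX_MOA_IndexStyles_OrderDate NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);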

This table has existing indexes, so the next step is to run through the setup of those for conversion. If you do not have any existing indexes, this part is bypassed.

Note the two index migration options on the left. This means I have two indexes to migrate.

The next screen is just a summary of all the migration options we chose in the setup. By choosing to migrate, you will migrate your table and its data to an In-Memory optimized table, so proceed with caution. This may be a good time to hit the SCRIPT button and script this out for later use. Keep in mind that I already have a memory optimized file group for this database, so one is not created for me. If one didn’t already exist, you would see its creation in the Summary screen.

As shown below, the migration was successful. A new table was created while the old table was renamed and the data was copied over.

Let’s look at the results.


If I script out the new table now, you will see that it indicates it is a memory optimized table and has the appropriate bucket counts. Also note I DID NOT check the box that would have made my table SCHEMA_ONLY durable, and you see that reflected in DURABILITY = SCHEMA_AND_DATA below.
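
The scripted output looks roughly like the following — a hedged approximation with hypothetical table and column names; your column list, index names, and bucket counts will differ:

CREATE TABLE [dbo].[DemoTable]
(
    [DemoID]   INT          NOT NULL,
    [DemoName] NVARCHAR(50) NOT NULL,
    CONSTRAINT [DemoTable_primaryKey] PRIMARY KEY NONCLUSTERED HASH ([DemoID])
        WITH (BUCKET_COUNT = 16384)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);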

Summary

As you can see, the Memory Optimization Advisor makes it fairly simple to identify and migrate tables to In-Memory optimized tables. I highly advise testing this process before trying to convert any tables in your databases. Not all workloads are viable candidates for this feature, so be sure to do your due diligence before implementation. When you are ready to implement, this tool can help make that process a lot easier for you.



Memory Optimized Tables in SQL Server

Published On: 2020-01-08

Sometimes when I try to learn about a concept my brain blocks out everything about it. Talking about anything that uses the “In-Memory” concept tends to do this to me on occasion. It’s important to note that “In-Memory” is a marketing term for a series of features in SQL Server that have common behaviors but are not inherently related. So, in my next few blogs I am going to attempt to explain some In-Memory concepts as they relate to SQL Server, starting with a dive into memory optimized tables. I’ve already written about Columnstore, which has vastly different use cases from In-Memory OLTP, and you can find those posts here. Columnstore is a perfect example of an In-Memory concept that took me some time to wrap my head around.

What are Memory Optimized Tables?

A memory optimized table, introduced in SQL Server 2014, is simply a table that has two copies: one in active memory and one durable on disk, which may include the data or just the schema (Schema Only, which I will explain later). Since memory is flushed upon restart of the SQL Server services, SQL Server keeps a physical copy of the table that is recoverable. Even though there are two copies of the table, the memory copy is completely transparent and hidden to you.

What is the added benefit for using these in-memory tables?

That’s always something I ask when looking at SQL Server options or features. For in-memory tables, it’s the way SQL Server handles latches and locks. According to Microsoft, the engine uses an optimistic approach, meaning it does not place locks or latches on any version of updated rows of data, which is very different from normal tables. It’s this mechanism that reduces contention and allows transactions to process exponentially faster. Instead of locks, In-Memory uses row versions, keeping the original row until after the transaction is committed. Much like Read Committed Snapshot Isolation (RCSI), this allows other transactions to read the original row while the new row version is updated. The in-memory structure is “pageless” and optimized for speed inside active memory, giving a significant performance benefit depending on the workload.

SQL Server also changes its logging for these tables. Instead of full logging, the duality of on-disk and in-memory versions (row versions) of the table allows less to be logged, because SQL Server can use the before and after versions to gain information it would normally acquire from the log file. In SQL Server 2019, the same concept applies to the new Accelerated Database Recovery (ADR) approach to logging and recovery.

Finally, another added benefit is the DURABILITY option shown in the example below. Using SCHEMA_ONLY can be a great way to get around the use of #temp tables and can be a more efficient way to process temporary data, especially with larger tables. You can read more on that here.
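
As a quick illustrative sketch (the table name is mine, and it assumes the database already has a memory optimized file group, which we set up later in this post):

-- Non-durable staging table: the schema survives a restart, the rows do not.
CREATE TABLE dbo.StagingExample
(
    SessionID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload   NVARCHAR(200) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);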

Things to Consider

Now, this all sounds great and you would think everyone would add this to all their tables; however, like all SQL Server options, this is not for all environments, and there are things you need to consider before implementing in-memory tables. First and foremost is the amount of memory and the configuration of that memory. You MUST have that set up correctly in SQL Server, as well as adjust for the increased use of memory, which may mean adding more memory to your server before starting. Secondly, know that, like Columnstore indexes, these tables are not applicable for everything. These tables are optimized for high-volume writes, not, for example, a data warehouse that is mostly reads.

Let’s see how we create one

The key to making a table “In-Memory” is the keyword MEMORY_OPTIMIZED = ON in the CREATE statement when you first create the table. Note there is no ability to ALTER an existing table to make it memory optimized; you will need to recreate the table and load the data in order to take advantage of this option on an existing table. There are just a couple more settings you need to have configured to make this work, as you can see below.

The first step is to make sure you are on compatibility level 130 or higher:

SELECT d.compatibility_level
    FROM sys.databases as d
    WHERE d.name = Db_Name();

If you are not, you will need to change it:

ALTER DATABASE AdventureWorks2016CTP3
SET COMPATIBILITY_LEVEL = 130;

Next, in order to take advantage of In-Memory, you must alter your database and enable this setting:

ALTER DATABASE AdventureWorks2016CTP3
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

Lastly your database will need to have a memory optimized file group added.

ALTER DATABASE AdventureWorks2016CTP3 
ADD FILEGROUP AdventureWorks2016CTP3_mod CONTAINS MEMORY_OPTIMIZED_DATA;

The following actually creates the file in the new filegroup:

ALTER DATABASE AdventureWorks2016CTP3
ADD FILE (NAME='AdventureWorks2016CTP3_mod1',
FILENAME='c:\data\AdventureWorks2016CTP3_mod1')
TO FILEGROUP AdventureWorks2016CTP3_mod;

Now let’s create a table

USE AdventureWorks2016CTP3

CREATE TABLE dbo.InMemoryExample
(
    OrderID    INTEGER  NOT NULL IDENTITY
        PRIMARY KEY NONCLUSTERED,
    ItemNumber INTEGER  NOT NULL,
    OrderDate  DATETIME NOT NULL
)
WITH
    (MEMORY_OPTIMIZED = ON,
     DURABILITY = SCHEMA_AND_DATA);

You can see that inserting and selecting against the table is syntactically exactly the same as any other regular table; however, internally it is far different. Above and beyond table creation, its behavior is basically the same for these actions, including adding or removing a column. One caveat to these tables is that you cannot CREATE or DROP an index the same way; you must use ALTER TABLE with ADD/DROP INDEX to accomplish this (believe me, I tried), as the sketch below shows.
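
For example, a minimal sketch against the table created above (the index name is mine, not prescribed):

-- Add a range index to the memory optimized table...
ALTER TABLE dbo.InMemoryExample
    ADD INDEX IX_InMemoryExample_ItemNumber NONCLUSTERED (ItemNumber);

-- ...and drop it the same way, through ALTER TABLE.
ALTER TABLE dbo.InMemoryExample
    DROP INDEX IX_InMemoryExample_ItemNumber;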

Remember the DURABILITY option I briefly mentioned before? This is important. The example above has it set to SCHEMA_AND_DATA, which means that if the database goes offline, both the schema and data are preserved on disk. If you choose SCHEMA_ONLY, only the structure will be preserved and the data will be deleted. This is very important to note, as it can introduce data loss.

Summary

As you can see, in-memory tables are not as complicated as my brain wanted to make them. It’s a relatively simple concept that just incorporates row versioning and two copies of the table. Once you break the concept into its parts, it really becomes easier to understand. Stay tuned for more In-Memory in my upcoming blogs.


Trace Flags That Are Now Defaulted Behaviors in SQL Server

Published On: 2019-12-18

If you have ever attended one of my performance tuning sessions, you know I tend to talk about trace flags. Trace flags can help fix performance issues, and some are now default behaviors in later SQL Server versions. In my opinion, when a trace flag’s behavior becomes the default in a version, you should consider putting that flag in place in environments that do not have it implemented. Below are a few of these particular trace flags along with Microsoft’s definition of what each one does, taken straight from the Microsoft docs. I have also included a brief commentary on each one. As with any change, be sure to thoroughly test before implementing these trace flags in any production environment.
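
For reference, here is how you can check which trace flags are active and enable one globally, using the first two flags below as the example (for persistence across restarts you would instead add each flag as a -T startup parameter):

DBCC TRACESTATUS(-1);          -- list trace flags currently enabled instance-wide
DBCC TRACEON(1117, 1118, -1);  -- the -1 enables them globally on the running instance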

Trace Flag 1117: When a file in the filegroup meets the autogrowth threshold, all files in the filegroup grow. This trace flag affects all databases and is recommended only if it is safe to grow all files in a filegroup by the same amount in every database. Starting with SQL Server 2016 (13.x), this behavior is controlled by the AUTOGROW_SINGLE_FILE and AUTOGROW_ALL_FILES options of ALTER DATABASE. ME: This forces all data files to grow at the same rate when a growth event is triggered. Keeping all files the same size helps reduce contention and shares the workload evenly across the files. This is especially useful for tempdb.

Trace Flag 1118: Forces page allocations on uniform extents instead of mixed extents, reducing contention on the SGAM page. When a new object is created, by default, the first eight pages are allocated from different extents (mixed extents). Afterwards, when more pages are needed, those are allocated from the same extent (uniform extent). The SGAM page is used to track these mixed extents, so it can quickly become a bottleneck when numerous mixed page allocations are occurring. This trace flag allocates all eight pages from the same extent when creating new objects, minimizing the need to scan the SGAM page. Starting with SQL Server 2016 (13.x), this behavior is controlled by the SET MIXED_PAGE_ALLOCATION option of ALTER DATABASE, and trace flag 1118 has no effect. ME: This has to do with tempdb contention. It forces SQL Server to allocate an entire extent (eight pages) when tables are created, rather than a single page, which can reduce contention on certain tempdb internal structures (GAM and SGAM pages).

Trace Flag 2371: Changes the fixed update statistics threshold to a linear update statistics threshold. Starting with SQL Server 2016 (13.x), under database compatibility level 130 or above, this behavior is controlled by the engine and trace flag 2371 has no effect. ME: This changes the auto update statistics algorithm from a flat rate of 20% + 500 changed rows triggering an auto update stats event to a sliding scale, which drastically increases the frequency of statistics updates on large tables. You can find my blog on this one here.
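To put rough numbers to that, using the commonly documented SQRT(1000 × rows) approximation for the dynamic threshold: on a 1,000,000-row table, the old formula required about 200,500 changed rows (20% + 500) before statistics updated, while the sliding scale triggers at roughly SQRT(1000 × 1,000,000) ≈ 31,623 rows.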

Trace Flag 2453: Allows a table variable to trigger a recompile when a sufficient number of rows change. This became table variable deferred compilation in SQL Server 2019. ME: Previously, table variables had an estimated row count of 1, which caused poor execution performance. This now allows the optimizer to get a better estimate versus actual rows, resulting in a better execution plan.

Trace Flag 3427: Enables a fix for the issue when many consecutive transactions insert data into temp tables in SQL Server 2016 (13.x), where this operation consumes more CPU than in SQL Server 2014 (12.x). This trace flag applies to SQL Server 2016 (13.x) SP1 CU2 through SQL Server 2016 (13.x) SP2 CU2. Starting with SQL Server 2016 (13.x) SP2 CU3 and SQL Server 2017 (14.x), this trace flag has no effect. ME: This simply helps with tempdb contention and CPU consumption, especially for workloads with a very large number of inserts and updates.

Trace Flag 4199: Enables Query Optimizer (QO) fixes released in SQL Server Cumulative Updates and Service Packs. QO changes that are made to previous releases of SQL Server are enabled by default under the latest database compatibility level in a given product release, without trace flag 4199 being enabled. Starting with SQL Server 2016 (13.x), to accomplish this at the database level, see the QUERY_OPTIMIZER_HOTFIXES option in ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL). Starting with SQL Server 2016 (13.x) SP1, to accomplish this at the query level, add the USE HINT ‘ENABLE_QUERY_OPTIMIZER_HOTFIXES’ query hint instead of using this trace flag. ME: My blog explains more about 4199 and hotfixes here; be sure to read it for a full understanding. This one is a little different, as it is still optional and not the default, but it has been changed to a scoped configuration, which is why I am mentioning it.

Trace Flag 6498: Enables more than one large query compilation to gain access to the big gateway when there is sufficient memory available. This trace flag can be used to keep memory usage for the compilation of incoming queries under control, avoiding compilation waits for concurrent large queries. It is based on 80 percent of SQL Server target memory, and it allows for one large query compilation per 25 gigabytes (GB) of memory. Starting with SQL Server 2014 (12.x) SP2 and SQL Server 2016 (13.x), this behavior is controlled by the engine and trace flag 6498 has no effect. ME: This one was new to me, but it has to do with the RESOURCE_SEMAPHORE_QUERY_COMPILE wait type. When a large query compilation needs more memory than the threshold, this allows it to proceed if the memory is available. You can see what your memory threshold is set at by using the DBCC MEMORYSTATUS command.

Trace Flag 6532 Enables performance improvement of query operations with spatial data types in SQL Server 2012 (11.x) and SQL Server 2014 (12.x). The performance gain will vary, depending on the configuration, the types of queries, and the objects. Starting with SQL Server 2016 (13.x) this behavior is controlled by the engine and trace flag 6532 has no effect. ME: This is not one I have actually had to use since I rarely address systems with spatial data.

Trace Flag 6533 Enables performance improvement of query operations with spatial data types in SQL Server 2012 (11.x) and SQL Server 2014 (12.x). The performance gain will vary, depending on the configuration, the types of queries, and the objects. Starting with SQL Server 2016 (13.x) this behavior is controlled by the engine and trace flag 6533 has no effect. ME: This is not one I have actually had to use since I rarely address systems with spatial data.

 Trace Flag 7412 Enables the lightweight query execution statistics profiling infrastructure. The performance overhead to collect information about execution plans was reduced with the introduction of lightweight profiling. This trace flag applies to SQL Server 2016 (13.x) SP1 and higher builds. Starting with SQL Server 2019 (15.x) this trace flag has no effect because lightweight profiling is enabled by default. ME: I used this a lot. Unlike standard statistics profiling, lightweight profiling does not collect CPU runtime information. However, lightweight profiling still collects the very useful row count and I/O usage information.

Trace Flag 7752: Enables asynchronous load of Query Store. Note: Starting with SQL Server 2019 (15.x), this behavior is controlled by the engine and trace flag 7752 has no effect. ME: Queries that are executed can be held up while the Query Store loads data. This trace flag prevents that from happening. You will need this trace flag on SQL Server 2016 and 2017.

Summary

Anytime trace flags are replaced with default behaviors, we should pay attention and fix these issues in our lower environments. This blog is intended to bring these to your attention and to encourage you to take the time to look into the ones I have mentioned above. If Microsoft has decided these should be default behaviors in later versions, then “maybe” you should really consider applying the fixes in prior versions. A full list of trace flags, including these, can be found on docs.microsoft.com. Be sure to educate yourself on these if you are not already familiar, and once again, ALWAYS test.


Introduction to the performance features on SQL Server on Linux

Published On: 2019-12-04

I’ve been tinkering with SQL Server on Linux lately and noted a few things in regard to performance that I thought I would share with you. SQL Server 2019 on Linux uses the same SQL Server database engine, with many of the performance features and services you would find on Windows; there are more similarities than you might initially think. If you’re a Linux user who is new to SQL Server, I hope the following introduction to some performance features of SQL Server 2019 on Linux will be helpful.

Columnstore index

As I’ve written about before in my 3-part blog series that you can find here, a columnstore index allows you to store and query large stores of data in a columnar data format, called a columnstore. A columnstore index eliminates the need to maintain separate systems for operational and analytics workloads. The columnstore index maintains a copy of the data so that OLTP and analytics workloads run against separate copies of the data. This minimizes the impact of both workloads running simultaneously.

Let’s take a look again at creating one. You’ll note it’s exactly the same as in Windows.

Create a columnstore index

Using T-SQL to add a columnstore index to the SalesOrderDetail table:

CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_SalesOrderDetail_ColumnStore]
   ON Sales.SalesOrderDetail
   (UnitPrice, OrderQty, ProductID);

Now let’s run a query to use our new index.

SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
   SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM Sales.SalesOrderDetail
   GROUP BY ProductID
   ORDER BY ProductID

To verify that the columnstore index was used, we can view the execution plan, which we can easily do just like in Windows, or we can look up the object_id for the columnstore index and confirm that it appears in the usage stats for the SalesOrderDetail table:

SELECT * FROM sys.indexes
WHERE name = 'IX_SalesOrderDetail_ColumnStore'

SELECT * FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID('AdventureWorks')
   AND object_id = OBJECT_ID('AdventureWorks.Sales.SalesOrderDetail');

In-Memory OLTP

So, here is something I haven’t yet written about. SQL Server’s In-Memory OLTP features can significantly improve the performance of your application systems; however, it’s not for everyone. Here are a couple of examples of workloads that are particularly good candidates for In-Memory OLTP:

  • Environments that are highly transactional with large numbers of SQL INSERTs concurrently
  • Environments that process heavy calculations using TSQL

Some things In-Memory gives us, whether on Linux or Windows, are:

  • Memory-optimized tables—A memory-optimized table stores a copy of its data in active memory and a secondary copy on disk. It does this in case the data needs to be recovered after a shutdown-then-restart of the server or database. The memory-plus-disk duality is unseen and happens behind the scenes.
  • Natively compiled modules—A natively compiled module references memory-optimized tables only; it does not touch the disk-based copy. Microsoft has seen native compilation result in durations that are 1/100th of the interpreted duration. You can read more about the performance here, and a sketch of one appears below.
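
To make that concrete, here is a minimal, hedged sketch of a natively compiled procedure — the table and procedure names are mine, purely for illustration, and it assumes the memory optimized filegroup set up later in this post:

-- Hypothetical memory-optimized table for the procedure to target:
CREATE TABLE dbo.OrderStaging
(
    OrderID    INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    ItemNumber INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled procedure: note NATIVE_COMPILATION, SCHEMABINDING,
-- and the required ATOMIC block with SNAPSHOT isolation and a language.
CREATE PROCEDURE dbo.usp_InsertOrderStaging
    @ItemNumber INT,
    @OrderDate  DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.OrderStaging (ItemNumber, OrderDate)
    VALUES (@ItemNumber, @OrderDate);
END
GO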

So how do we configure and use In-Memory OLTP?

The first step is to set the database compatibility level to at least 130 (SQL Server 2016). If you don’t know your compatibility level, you can check it with the query below; if you find you need to change it, I have given you the script to do so.

Use the following query to check the current compatibility level of AdventureWorks:

USE AdventureWorks
GO
SELECT d.compatibility_level
FROM sys.databases as d
WHERE d.name = Db_Name();
GO

If necessary, update the level to 130.

ALTER DATABASE CURRENT
SET COMPATIBILITY_LEVEL = 130;
GO

When a transaction involves both a disk-based table (regular table) and a memory-optimized table, the memory-optimized portion of the transaction must operate at the transaction isolation level named SNAPSHOT. Execute the following to turn on MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to enforce this level for memory-optimized tables in a cross-container transaction:

ALTER DATABASE CURRENT 
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
GO

Now let’s create a memory optimized filegroup and a container for the data files. Remember we are using Linux, so the file path looks different, but it’s just a filegroup like in Windows.

ALTER DATABASE AdventureWorks 
ADD FILEGROUP AdventureWorks_mod CONTAINS memory_optimized_data
GO 

ALTER DATABASE AdventureWorks 
ADD FILE (NAME='AdventureWorks_mod', FILENAME='/var/opt/mssql/data/AdventureWorks_mod') 
TO FILEGROUP AdventureWorks_mod
GO

It’s that easy. Now let’s talk about one of my favorite performance tuning tools, Query Store, which is also available on Linux.

Query Store

Query Store collects detailed performance information about queries, execution plans, and runtime statistics. Query Store isn’t active by default, so you need to enable it at the database level for each database you want it on. I could write many blogs on Query Store; this is only meant to let you know it exists on Linux too, so I am only scratching the surface here.

Enable query store with ALTER DATABASE

ALTER DATABASE AdventureWorks 
SET QUERY_STORE = ON;
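
Beyond simply turning it on, you can also tune its behavior. The values below are illustrative placeholders of mine, not recommendations:

ALTER DATABASE AdventureWorks
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,                          -- actively capture data
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),   -- how long to keep history
    MAX_STORAGE_SIZE_MB = 500                             -- cap Query Store's footprint
);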

Here is a query you can run to get information about the queries and plans in the Query Store:

SELECT Txt.query_text_id, Txt.query_sql_text, Pl.plan_id, Qry.*
FROM sys.query_store_plan AS Pl
   JOIN sys.query_store_query AS Qry
      ON Pl.query_id = Qry.query_id
   JOIN sys.query_store_query_text AS Txt
      ON Qry.query_text_id = Txt.query_text_id ;

Query dynamic management views

SQL Server on Linux, as well as Windows, gives us dynamic management views that provide server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. This is no different than what we are used to in Windows.

Here is one example. Query the sys.dm_os_wait_stats dynamic management view and you can see for yourself that there is nothing different:

SELECT wait_type, wait_time_ms
FROM sys.dm_os_wait_stats;
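
A common variation is to surface the heaviest waits first; the short exclusion list here is illustrative, not exhaustive:

SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'BROKER_TO_FLUSH', N'SQLTRACE_BUFFER_FLUSH')  -- filter benign waits
ORDER BY wait_time_ms DESC;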

Here I just went over a few performance-related features of SQL Server 2019 on Linux, to shed some light on the fact that Linux and Windows offer the same tools with just a different underlying OS. Hopefully you can put some of them to use in your own environment. To learn more about what you can do with Microsoft SQL Server 2019, Microsoft has a free Packt guide, Introducing Microsoft SQL Server 2019.
