My SQL Saturday Chicago Precon: Leveling Up Your Azure Skills

Published On: 2020-01-08

I’m doing another precon at SQL Saturday Chicago on March 20th, 2020. The event is at Benedictine University in Lisle, IL. This time we’re going to dive a little bit deeper into Azure. While last year’s precon focused on basic infrastructure skills in Azure, this time we’re going to focus more deeply on the specifics of the Data Platform in Azure. I did a variant of this topic in India last August, but I’ve made a few changes based on a number of conversations I had with customers at Microsoft Ignite last year.

Photo by Nate on Pexels.com

In this full day session, here’s what we’re going to cover:

  • VMs and Storage (like it or not this is still 80% of the cloud spend, and it’s how people migrate to the cloud)
  • Understanding the Azure services you need to manage your environment (this is going to focus heavily on Azure Automation, Azure Key Vaults, and Logic Apps)
  • Azure Data Platform–This is a Big Topic, but here’s how it breaks down
    • Understand the differences between
      • SQL Server in a VM
      • Azure SQL Database
      • Azure Managed Instance
      • Azure Synapse (née Azure SQL Data Warehouse)
      • Cosmos DB
    • We’ll talk about the costs/benefits/performance of each of these options, and how they are incorporated into your broader architecture
    • Don’t worry, I’m a DBA, we’ll cover HA/DR here
  • We’ll also talk a bit about cloud-focused application architecture
    • Why Redis in front of your database is a good idea
    • Taking advantage of dynamically scaling resources

That’s a lot of topics–it’s a full day, and the deep dive part will be on the data platform, but any and all questions are welcome. There are still a few discount tickets left–you can sign up here.

 


Memory Optimized Tables in SQL Server

Sometimes when I try to learn about a concept, my brain blocks out everything about it. Anything that uses the “In Memory” label tends to do this to me on occasion. It’s important to note that “In Memory” is a marketing term for a series of features in SQL Server that have common behaviors but are not inherently related. So, in my next few blogs I am going to attempt to explain some In-Memory concepts as they relate to SQL Server, starting with a dive into Memory Optimized Tables. I’ve already written about Columnstore, which has vastly different use cases from In-Memory OLTP, and you can find those posts here. Columnstore is a perfect example of an In-Memory concept that took me some time to wrap my head around.

What are Memory Optimized Tables?

A Memory Optimized Table, introduced in SQL Server 2014, is simply a table that has two copies: one in active memory and one durable copy on disk, which holds either the schema and the data or just the schema (SCHEMA_ONLY), which I will explain later. Since memory is flushed when the SQL Server service restarts, SQL Server keeps a physical copy of the table that is recoverable. Even though there are two copies of the table, the in-memory copy is completely transparent and hidden to you.

What is the added benefit of using these in-memory tables?

That’s always something I ask when looking at SQL Server options or features. For in-memory tables, it’s the way SQL Server handles latches and locks. According to Microsoft, the engine uses an optimistic approach, meaning it does not place locks or latches on any version of updated rows of data, which is very different from normal tables. It’s this mechanism that reduces contention and allows transactions to process exponentially faster. Instead of locks, In-Memory OLTP uses row versions, keeping the original row until after the transaction is committed. Much like Read Committed Snapshot Isolation (RCSI), this allows other transactions to read the original row while the new row version is being updated. The in-memory structure is “pageless” and optimized for speed inside active memory, giving a significant performance benefit depending on the workload.

SQL Server also changes its logging for these tables. Instead of fully logging every change, this duality of on-disk and in-memory (row-versioned) copies of the table allows less to be logged. SQL Server can use the before and after versions to gain information it would normally acquire from the log file. In SQL Server 2019, the same concept applies to the new Accelerated Database Recovery (ADR) approach to logging and recovery.

Finally, another added benefit is the DURABILITY option shown in the example below. The use of SCHEMA_ONLY can be a great way to get around the use of #temp tables and adds a more efficient way to process temporary data, especially with larger tables. You can read more on that here.

Things to Consider

Now, this all sounds great, and you would think everyone would add this to all of their tables; however, like all SQL Server options, this is not for every environment. There are things you need to consider before implementing In-Memory tables. First and foremost is the amount of memory and the configuration of that memory: you MUST have that set up correctly in SQL Server, and you may need to adjust for the increased use of memory, which could mean adding more memory to your server before starting. Secondly, know that, like Columnstore indexes, these tables are not applicable to everything. These tables are optimized for high-volume WRITEs, not, for example, a data warehouse workload that is mostly reads.
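
Once you do have memory-optimized tables in place, it’s worth keeping an eye on how much memory they actually consume. Here’s a minimal sketch using the sys.dm_db_xtp_table_memory_stats DMV (run it in the database that holds the tables); treat it as a quick sanity check rather than a full memory-capacity analysis:

-- Rough look at the memory consumed by memory-optimized tables in the current database
SELECT OBJECT_NAME(tms.object_id) AS table_name,
       tms.memory_used_by_table_kb,
       tms.memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS tms
ORDER BY tms.memory_used_by_table_kb DESC;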

Let’s see how we create one

The key to making a table “In-Memory” is the use of the keyword MEMORY_OPTIMIZED in the CREATE statement when you first create the table. Note that there is no way to ALTER an existing table to make it memory optimized; you will need to recreate the table and reload the data in order to take advantage of this option on an existing table. There are just a couple more settings you need to have configured to make this work, as you can see below.

The first step is to make sure your database is at compatibility level 130 or higher.

SELECT d.compatibility_level
    FROM sys.databases as d
    WHERE d.name = Db_Name();

If you are not, you will need to change it.

ALTER DATABASE AdventureWorks2016CTP3
SET COMPATIBILITY_LEVEL = 130;

Next, you must alter your database and enable the MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT setting in order to take advantage of In-Memory OLTP.

ALTER DATABASE AdventureWorks2016CTP3
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

Lastly, your database will need to have a memory-optimized filegroup added.

ALTER DATABASE AdventureWorks2016CTP3 
ADD FILEGROUP AdventureWorks2016CTP3_mod CONTAINS MEMORY_OPTIMIZED_DATA;

The statement below actually creates the file in the new filegroup.

ALTER DATABASE AdventureWorks2016CTP3
ADD FILE (name='AdventureWorks2016CTP3_mod1',
filename='c:\data\AdventureWorks2016CTP3')
TO FILEGROUP AdventureWorks2016CTP3_mod;

Now let’s create a table

USE AdventureWorks2016CTP3;

CREATE TABLE dbo.InMemoryExample
(
    OrderID      INTEGER   NOT NULL   IDENTITY
        PRIMARY KEY NONCLUSTERED,
    ItemNumber   INTEGER   NOT NULL,
    OrderDate    DATETIME  NOT NULL
)
WITH
    (MEMORY_OPTIMIZED = ON,
     DURABILITY = SCHEMA_AND_DATA);

You can see that inserting into and selecting from the table is syntactically exactly the same as for any other regular table; however, internally it is far different. Beyond table creation, its behavior is basically the same for these actions, including adding or removing a column. One caveat to these tables is that you cannot CREATE or DROP an index the same way; you must use ALTER TABLE ... ADD/DROP INDEX to accomplish this (believe me, I tried), as sketched below.
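
Here’s a minimal sketch of both points against the table created above; the INSERT and SELECT look like they would for any other table, while the index is managed through ALTER TABLE (this works on SQL Server 2016 and later, and the index name is just an example):

-- Inserts and selects use the same syntax as a disk-based table
INSERT INTO dbo.InMemoryExample (ItemNumber, OrderDate)
VALUES (42, GETDATE());

SELECT OrderID, ItemNumber, OrderDate
FROM dbo.InMemoryExample;

-- Indexes, however, are added and dropped through ALTER TABLE
ALTER TABLE dbo.InMemoryExample
    ADD INDEX IX_InMemoryExample_OrderDate NONCLUSTERED (OrderDate);

ALTER TABLE dbo.InMemoryExample
    DROP INDEX IX_InMemoryExample_OrderDate;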

Remember the DURABILITY option I briefly mentioned before? This is important. The example above has it set to SCHEMA_AND_DATA, which means that when the database goes offline, both the schema and the data are preserved on disk. If you choose SCHEMA_ONLY, only the structure will be preserved, and the data will be deleted. This is very important to note, as it can introduce data loss.
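
As a sketch of the #temp table replacement scenario mentioned earlier, a staging table (the table and column names below are made up for illustration) could be declared with SCHEMA_ONLY durability; its definition survives a restart, but its rows do not:

-- The schema survives a service restart; the data does not
CREATE TABLE dbo.StagingExample
(
    StagingID   INTEGER        NOT NULL   IDENTITY
        PRIMARY KEY NONCLUSTERED,
    Payload     NVARCHAR(200)  NOT NULL
)
WITH
    (MEMORY_OPTIMIZED = ON,
     DURABILITY = SCHEMA_ONLY);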

Summary

As you can see, In-Memory tables are not as complicated as my brain wanted to make them. It’s a relatively simple concept that just incorporates row versioning and two copies of the table. Once you break the concept down into its parts, it really is easier to understand. Stay tuned for more In-Memory content in my upcoming blogs.


Let’s Talk About Ransomware (Again)

Published On: 2020-01-07

I’ve posted about ransomware a few times before. At DCAC, we’ve worked with a few customers whom we’ve fortunately been able to help recover from their attacks. But ransomware attacks are trending upward: one ransomware attacker has already pulled in $10 million this year (this year being 2020, all of seven days in). There have been a number of announcements about ransomware attacks in recent days, including a small company that went bankrupt and the currency exchange Travelex, which was recently attacked.

Ransomware attacks are quite different from traditional hacking. Traditionally, hackers are in pursuit of high-value data, whether it be from a government agency, a large enterprise business, or someone with a vast array of customer data like Equifax. Ransomware goes after easy targets that the attackers think, or hope, will pay. What that means is that if you work for a smaller firm, you are far more likely to be hit with ransomware than with a traditional hacking attack. Theoretically, large enterprises have more network segmentation and better security controls that make them less vulnerable to these attacks. As the last year has shown, this is definitely not always the case.

It’s All About the Network

The way ransomware typically works is to find a vulnerability, usually via user accounts, in order to run exploit software on your network. If the only thing that can be reached on your network is user PCs, that’s the only data that’s going to be encrypted. Where organizations get exploited is that their user network is either directly connected to their business network, or there is just one flat network structure. This means that if a user is attacked (typical attack vectors are emailed infected Office documents or PDFs, but others include more advanced vectors like the Pulse VPN vulnerability), the ransomware can go after file servers and domain controllers and start encrypting all of your files.

Assume Breach

One of the most important parts of this approach is that, in order to build your network structure appropriately, you need to assume that your user network is going to be breached. It’s what Microsoft does with Azure, and it’s what you should do on your network as well. This means you need to do a few things:

  • Segment your network
  • Separate accounts, and maybe even a separate domain, for all of your servers
  • Disconnected backups (ransomware can’t encrypt a safe full of tapes)
  • Frequent restore testing
  • Basically, if you can connect to a file or database server from your corporate laptop without multi-factor auth, changing credentials, and/or changing networks, you’re at the mercy of your dumbest user not launching an infected Word file

All of these controls make our jobs harder, and it sucks, but it’s what we have to do. The one benefit of modern technology is that the cloud makes all of these things approachable for even the smallest of businesses. I can build a segmented network in about 20 minutes in Azure, whereas it would take me a few hours just to find the right equipment at the right price point from a networking vendor if I were trying to do it on-premises.

I’m Just the DBA, What Can I Do?

I get it, and I’ve been there: you’re the DBA at a company where IT isn’t exactly the priority. There are a few approaches you can take. The first, and the most self-serving for me, is to engage a consulting firm like DCAC. While you know what you are talking about, your management sees you as a worker bee, and may not listen to your complaints about having an open WiFi network that your regulatory agency can log on to and see the public file servers (yes, this actually happened to me). However, when they are paying a consultant $ALot/hr, they tend to be more receptive to taking advice, even if it’s the same thing the DBA has been saying for years.

Another approach is to hire a penetration testing firm. Many large organizations are required to do this by regulation, and a good penetration testing firm will identify the vulnerabilities in your environment.

Finally, the easiest thing to execute is to PATCH YOUR SHIT. This is where you can assist the most: you can patch Windows and SQL Server (or Linux if that’s your bag), and you can work with the sysadmin teams in your organization to ensure that all OS, application, and even mouse software is patched regularly. It’s not perfect, and it doesn’t fix all of your problems, but it’s a start. Also, test your god damned restores; it’s not enough to have backups if you don’t know that they work.
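
For that last point, here’s a minimal sketch of a restore test. The backup path, database names, and logical file names are placeholders for illustration (check the real logical names with RESTORE FILELISTONLY), and note that RESTORE VERIFYONLY alone only proves the file is readable, which is why a full restore to a throwaway name follows:

-- Confirm the backup file is readable
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\YourDatabase_Full.bak';

-- Prove the backup actually restores by restoring it under a throwaway name
RESTORE DATABASE YourDatabase_RestoreTest
FROM DISK = N'D:\Backups\YourDatabase_Full.bak'
WITH MOVE 'YourDatabase'     TO 'D:\Data\YourDatabase_RestoreTest.mdf',
     MOVE 'YourDatabase_log' TO 'D:\Data\YourDatabase_RestoreTest.ldf',
     STATS = 10;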

 

 


Our EGPAF Click to Donate Campaign has wrapped up for 2019

Published On: 2020-01-06

Over the month of December (from the 1st through the 24th) we had a donation program set up with the Elizabeth Glaser Pediatric AIDS Foundation. To help us, all you had to do was click through to the page that we set up for the donation program. That simple act would cause a $1 donation to be made. If you wanted to help out more (which several people did, and we thank you), you could click through and donate more directly to EGPAF, and we’d then match those donations up to $10,000. While we were running our donation program, our marketing firm, Mayfield Consulting, was running one as well which followed along with ours.

Between the two donation programs, we had a fantastic fundraiser, and I’m thrilled to say that we were able to raise $15,662 for the Elizabeth Glaser Pediatric AIDS Foundation. Thanks again everyone for participating and for helping us make this fundraiser a great success. I’m sure that EGPAF will be able to use these donations to help save more lives.

This is a great way to finish up 2019. I hope everyone has a great New Year, and we’ll see everyone as we kick off the ’20s in just a couple of days’ time. If you haven’t had a chance to watch the video that we made with the team at EGPAF, the video is above.

Denny