Open Letter to PASS

Published On: 2020-06-19

As you know, I spoke to 12 Chapter Leaders and 4 Regional Mentors yesterday. All are very upset with PASS right now and feel slighted, and as a volunteer myself, I empathize with them. I have passed that feedback along to you.

After talking with them, here is where I believe the problem lies: communication and transparency. We received an email taking away the token of appreciation PASS gives us for a year of hard work that helps build and amplify this community. In the same email, we were asked to promote and encourage Summit registrations, among other things. You cannot take something away and then turn around and ask the leaders for more. It is like cutting off our feet and asking us to run. Of course there will be backlash, such as leaders considering removing their groups from the PASS umbrella altogether. Believe me when I say it is not about the money; it's about the principle.

So how do you fix it?
Communication and transparency. I am talking REAL communication, no more political answers. We, as volunteers and leaders, support PASS and we want it to succeed, but we really need to know the truth. Many comments I hear concern the financials and the perception that C&C, a for-profit company, is running the show and using the board and community as puppets to further its own agenda. That perception is amplified by the fact that the owner of C&C holds an executive seat on the Board of Directors (don't shoot the messenger). It is this perception that needs to be fixed; until it is, the community will continue to be disgruntled and unsupportive.

There is a feeling that the community is being swindled, for lack of a better word, and that the community aspect of PASS has been lost to a for-profit company. I challenge the board and leaders to change this through better communication and transparency. Tell us the numbers; show us a revised budget, the yearly cost per member, and the cost per Summit registration. The more we know, the more we will help. Do you have a must-have break-even number of registrants? How many do you have now? Tell us, and we will help you get there.

We really want PASS to succeed and be there in the future. I urge you to try to fix this.

Perception is key. I am an ally and one of your strongest supporters; I get it, believe me. That is the only reason I have reached out over the last few days. I sincerely hope that has been conveyed.

Contact the Author | Contact DCAC

Restoring a SQL Server Database in Docker


Last month I blogged about using Docker to run SQL Server as a quick and easy way to get SQL Server up and running. While it continues to be immensely useful, there aren't any user databases running on that instance; the only databases present are the system databases. Yes, I could manually create a database, but it would be a lot easier to have a sample database available.

How do we restore a sample database in a container that is running on a Mac laptop? Let's check it out!

Disclaimer: I am not a container expert, so there may be other ways of doing this. This is just how I've figured out how to make the magic happen.

Also, if you have not read the first blog on how to get SQL Server running in Docker on your Mac, take a few minutes to read through it.

Here are the steps that we will take to make this work:

  1. Download one of the sample database backups (I am using AdventureWorks2017.bak). I have a “mssql” directory in my local profile to make things easier
  2. Make sure the container is started.  You can issue a “docker ps” command in a terminal to see which containers are running
  3. Create a directory within the container
  4. Copy the sample database backup file into the directory you just created
  5. Restore the database onto the SQL instance that is running within the container

Sounds simple, right? Let’s see!
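As a preview, the steps above boil down to a short command transcript like the one below. This is a sketch, not something to run blindly: the container name (sql2019) and the local “mssql” directory come from my setup, so adjust them for yours. Each command requires a running Docker daemon.

```shell
# Steps 1 & 2: confirm the container (named sql2019 here) is running
docker ps

# Step 3: create a directory inside the container for backups
docker exec -it sql2019 mkdir -p /var/opt/mssql/backups

# Step 4: copy the local backup file into that directory
docker cp ./mssql/AdventureWorks2017.bak sql2019:/var/opt/mssql/backups

# Step 5: the restore itself happens via T-SQL from a query tool
```

The rest of this post walks through each of these commands in detail.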

Create a directory

To create a directory within the container (remember that the container is running Linux), we can issue a command to the container that will create the directory.  Using Terminal (you can go to Spotlight to find the Terminal program or it is under Utilities in the Applications folder), execute the following command,

docker exec -it sql2019 mkdir /var/opt/mssql/backups

Let us break that command down:

  1. docker – this indicates that we are going to be doing something with Docker
  2. exec – this tells the specified container to execute the command we pass into it
  3. -it – this allows for an interactive session with the container
  4. sql2019 – this is the name of the container. You can specify the container name when you start the container, or Docker will name it for you
  5. mkdir – this is short for “make directory”
  6. /var/opt/mssql/backups – this is the directory path that is to be created

Copy the Backup File

Now that the directory has been created, we need to get the backup file of the sample database into the container. In my case, I am using AdventureWorks2017.bak.

docker cp ./mssql/AdventureWorks2017.bak sql2019:/var/opt/mssql/backups

Here is how that command breaks down:

  1. docker – this indicates that we are going to be doing something with Docker
  2. cp – this is short for “copy”
  3. ./mssql/AdventureWorks2017.bak – this is the path of the source file being copied into the container. The “.” indicates starting from whatever working directory I am in, which is my profile directory as indicated by the “jmorehouse$” prompt
  4. sql2019 – this is the name of the container
  5. :/var/opt/mssql/backups – this is the destination directory within the container

Once the command is complete, we can check to make sure that the file was copied successfully.

docker exec -it sql2019 ls /var/opt/mssql/backups

The “ls” refers to “list”.  This is equivalent to executing a “dir” command in DOS.

Restore the Database

The backup file now resides within the container and we just need to tell SQL Server to restore it.  In this section, I will be using Azure Data Studio and native T-SQL commands.

Let us first check that SQL Server can see the file.

RESTORE FILELISTONLY FROM DISK = N'/var/opt/mssql/backups/AdventureWorks2017.bak'
GO

Excellent!  SQL Server can see the backup file, which means that we can restore it. Notice on the left-hand side that there are no user databases, just system databases. Also notice that the physical file names shown above are Windows paths. Since SQL Server is running on Linux within the container, we will have to move the physical files to a different location.

RESTORE DATABASE AdventureWorks2017 FROM DISK = N'/var/opt/mssql/backups/AdventureWorks2017.bak' WITH
MOVE 'AdventureWorks2017' TO '/var/opt/mssql/data/AdventureWorks2017.mdf',
MOVE 'AdventureWorks2017_log' TO '/var/opt/mssql/data/AdventureWorks2017_log.ldf'
GO

Above we can see that the database was restored and then subsequently upgraded to the SQL Server 2019 database version.  If you refresh the Databases branch on the left-hand side, the AdventureWorks2017 database is now present!
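If you want to confirm from T-SQL where the restored files actually landed, a quick query against the catalog views does the trick. This is a generic sanity check, not specific to containers:

```sql
-- List the restored database and the physical location of its files
SELECT d.name, mf.physical_name
FROM sys.databases d
    JOIN sys.master_files mf
        ON mf.database_id = d.database_id
WHERE d.name = 'AdventureWorks2017';
```

The physical_name column should show the /var/opt/mssql/data paths we specified in the MOVE clauses.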

Summary

Docker continues to be my “go to” when I need a quick and easy SQL Server solution to play around with.  While I absolutely recommend Azure and its offerings, utilizing Docker on my local laptop is just faster and, frankly, fun to play around with.   Now that I can easily restore databases, it is that much better of a solution for non-production uses.

Like these posts?  Make sure to subscribe to get alerted when new posts arrive!

© 2020, John Morehouse. All rights reserved.


Power BI Data Viz Makeover: From Drab to Fab

Published On: 2020-06-18

On July 11 at 3pm MDT, Rob Farley and I will be hosting a webinar on report design in Power BI. We will take a report that does not deliver insights, discuss what we think is missing from the report and how we would change it, and then share some tips from our report redesign.

Rob and I approach data visualization a bit differently, but we share a common goal of producing reports that are clear, usable, and useful. It’s easy to get caught up building shiny, useless things that show off tech at the expense of information. We want to give you real examples of how to improve your reports to provide the right information as well as a good user experience.

We’ll reserve some time to answer your questions and comments at the end. Come chat Power BI data viz with us.

You can register for the webinar at https://www.powerbidays.com/virtualevent/colorado-power-bi-days-2020-07-11/.

Come for the data viz tips, stay for the witty banter!


What is Batch Mode on Rowstore in SQL Server?

Published On: 2020-06-16

Under compatibility level 150, in both SQL Server 2019 and Azure SQL Database, you can now use batch mode for CPU-bound analytic workloads without requiring columnstore indexes. No action is needed to turn on batch mode aside from being on the proper compatibility level. You can also control it as a database scoped configuration option (as shown below), and you can hint individual queries to either use or not use batch mode (also shown below). If you recall from my earlier blogs on columnstore, it is batch mode in conjunction with compression that drastically increases query performance. This feature, Batch Mode on Rowstore, allows all operators enabled for batch mode to operate as such in SQL Server.

What does this mean? It means that query operations can process data faster and more efficiently, mimicking what makes columnstore so fast. Instead of reading row by row (row mode), the engine reads in chunks, i.e. batches of roughly 900 rows at a time. This uses the CPU much more efficiently, and the performance impact can be tremendous.

Just like columnstore, this only benefits analytic workloads and data warehouses, as mentioned above. It is meant for aggregations and joins that process thousands of rows. It will not benefit singleton lookups: if your WHERE clause does not cover a range of values and is just satisfying a point predicate, batch mode does not provide a benefit.

How does the engine know when to use batch mode? According to docs.microsoft.com, the query processor uses heuristics and makes its decision based on three checks: an initial check on table sizes, the operators used, and cardinality estimates. The optimizer then checks whether there are cheaper plans it can use; if no better alternative is available, it will choose batch mode. There are some limitations that prevent the use of batch mode, such as in-memory OLTP tables or any index other than B-trees or on-disk heaps. It will also not work for LOB columns, including sparse and XML columns.
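If you want to verify what your own database is currently set to before running the demo, you can check both the compatibility level and the scoped configuration directly. This is a read-only sanity-check query against the standard catalog views:

```sql
-- Compatibility level 150 enables Batch Mode on Rowstore by default
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

-- The database scoped configuration for the feature itself (1 = on, 0 = off)
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'BATCH_MODE_ON_ROWSTORE';
```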

You can easily decipher when batch mode was used to run a query by looking inside the operator's properties. Let's see a demo.

To demo, I first want to show you a plan NOT using Batch Mode on Rowstore, so let's turn the feature off; as I mentioned earlier, it is already enabled by default under compatibility level 150. Run the database scoped configuration script below to turn it off.

USE AdventureworksDW2016CTP3
GO
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

Now let’s run this query and make sure we capture the execution plan.

SELECT FS.[ProductKey],
       AVG([SalesAmount]),
       [UnitPrice]
FROM [dbo].[FactResellerSalesXL] FS
    JOIN dbo.DimProduct DP
        ON DP.ProductKey = FS.ProductKey
GROUP BY FS.[ProductKey],
         [UnitPrice]

Note the Table Scan. By hovering over it, you can see in the operator's properties that the Actual Execution Mode is ROW and that it processed 11,669,653 rows.

Now let's run it again in batch mode. Instead of changing the database scoped configuration back, let's just turn on the feature for this one query with a hint.

SELECT FS.[ProductKey],
       AVG([SalesAmount]),
       [UnitPrice]
FROM [dbo].[FactResellerSalesXL] FS
    JOIN dbo.DimProduct DP
        ON DP.ProductKey = FS.ProductKey
GROUP BY FS.[ProductKey],
         [UnitPrice]
OPTION(RECOMPILE, USE HINT('ALLOW_BATCH_MODE'));

You can clearly see the optimizer chose to use BATCH mode based on our hint. In addition, the query ran significantly faster: only 405 ms versus 1.119 s in row mode. In general, we've seen queries that benefit from batch mode run in almost half the time of row mode, approaching columnstore batch mode performance.
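The reverse is also possible: if batch mode ever hurts a particular query, you can opt just that query out with the companion DISALLOW_BATCH_MODE hint rather than flipping the database-wide setting. A sketch using the same demo query:

```sql
-- Force this one query back to row mode, leaving the database setting alone
SELECT FS.[ProductKey],
       AVG([SalesAmount]),
       [UnitPrice]
FROM [dbo].[FactResellerSalesXL] FS
    JOIN dbo.DimProduct DP
        ON DP.ProductKey = FS.ProductKey
GROUP BY FS.[ProductKey],
         [UnitPrice]
OPTION(USE HINT('DISALLOW_BATCH_MODE'));
```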

Let's go ahead and turn Batch Mode on Rowstore back on for our database, just to prove the query would have used batch mode without our hint. Run the script below and look at the plan.

ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;
GO

SELECT FS.[ProductKey],
       AVG([SalesAmount]),
       [UnitPrice]
FROM [dbo].[FactResellerSalesXL] FS
    JOIN dbo.DimProduct DP
        ON DP.ProductKey = FS.ProductKey
GROUP BY FS.[ProductKey],
         [UnitPrice]

BINGO! There you have it!

If you are already on compatibility level 150, you are already taking advantage of this feature and may not even realize it. For those who have not made the leap to SQL Server 2019, I highly recommend it, if only for this little gem, which is one of the many reasons to upgrade.
