Automating TempDB Configuration in Azure

One of the unique things about managing SQL Server on Azure VMs is that we use the local D: drive for TempDB. The D: drive (or /dev/sdb1 for those of you running on Linux) is a direct-attached solid-state drive on nearly all VM tiers, which offers the lowest-latency storage in Azure, making it ideal for TempDB's busy workload. There is only one catch: that temporary volume in Azure is ephemeral, meaning the data gets wiped whenever your VM is rebooted.


You may think this isn't a big deal: TempDB gets recreated every time your instance restarts, so who cares if the files get wiped? Well, it's not so much the files as the directory the files live in. You could place your files at the root of the D:\ drive, but that would require the SQL Server service account to have administrative rights. Since we like to follow security best practices, we aren't going to do that. I usually follow the process defined by Jim Donahoe (b|t) in this post.
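
The shape of that approach, roughly: a small script runs as a scheduled task at startup, recreates the folder, and then starts the SQL Server services (typically set to manual start so the task controls the order). Here's a minimal sketch, assuming a D:\TempDB path and a default instance; Jim's post has the real version:

```powershell
# Minimal sketch of the startup script (assumed path D:\TempDB and a default instance).
# Recreate the TempDB folder on the ephemeral drive, then start SQL Server.
$tempDbPath = 'D:\TempDB'

if (-not (Test-Path -Path $tempDbPath)) {
    New-Item -Path $tempDbPath -ItemType Directory | Out-Null
}

# Start the engine and the Agent once the folder exists
Start-Service -Name 'MSSQLSERVER'
Start-Service -Name 'SQLSERVERAGENT'
```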

I was teaching Azure infrastructure last week and decided it might be a good idea to do this using Desired State Configuration (DSC), which is part of Azure Automation. First, though, I had to code it up in PowerShell. You'll note I'm also calling dbatools, which greatly simplifies my TempDB configuration. With Set-DbaTempDbConfig I just need to pass in the volume size, which I can get from WMI. I'm allocating 80% of the volume to TempDB (I use 80% because it's below the free-space alert threshold of most monitoring tools), and the script does the rest.
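
Here's roughly what that looks like. The Win32_Volume query and the Set-DbaTempDbConfig parameter names reflect my reading of the dbatools docs, so treat this as a sketch rather than the exact script:

```powershell
# Sketch: size TempDB at 80% of the local D: volume with dbatools (sizes in MB).
Import-Module dbatools

# Pull the D: volume capacity (in bytes) from WMI/CIM
$volume = Get-CimInstance -ClassName Win32_Volume -Filter "DriveLetter = 'D:'"

# 80% of the volume, converted to MB, for the TempDB data files
$dataFileSizeMB = [math]::Floor(($volume.Capacity * 0.8) / 1MB)

# Assumption: -DataFileSize is the total data size in MB, spread across the data files
Set-DbaTempDbConfig -SqlInstance 'localhost' `
    -DataFileSize $dataFileSizeMB `
    -DataPath 'D:\TempDB' `
    -LogPath 'D:\TempDB'
```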

You can now import PowerShell modules into Azure Automation, which is a relatively recent change. I don't have this fully baked into DSC yet, but you can see the PowerShell to create the PowerShell script (yes, you read that correctly) and the scheduled task on GitHub.
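
The scheduled-task half is the more mundane piece. A hedged sketch of registering it (the task name and script path are placeholders; the repo has the actual code):

```powershell
# Sketch: register a task that runs the TempDB startup script at boot as SYSTEM.
# The script path and task name are placeholders; see the GitHub repo for the real thing.
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
                 -Argument '-ExecutionPolicy Bypass -File C:\Scripts\Start-TempDb.ps1'
$trigger   = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -RunLevel Highest

Register-ScheduledTask -TaskName 'Create TempDB Folder' `
    -Action $action -Trigger $trigger -Principal $principal
```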

I’d welcome any feedback and I will add a new post when I finish the DSC piece of this.
