Managing Statistics in Azure SQL Database Serverless

One of the few things a platform-as-a-service database like Azure SQL Database does not do for you is actively manage column and index statistics. While backups, patching, and even integrity checks are built into the platform, managing that metadata is not. Since Azure SQL Database lacks a SQL Server Agent for scheduling, you have to use an alternative for job scheduling. One of the more common approaches is to use Azure Automation runbooks, which are PowerShell (or Python) jobs that run from within Azure; I like to call Automation “cron for Azure”. My good friend Erin Stellato (b|t) just recorded a YouTube video that walks through the process of setting up an Azure Automation runbook for stats maintenance. There are a lot of steps to get a runbook up and running, and if I had a very small environment, I might just recommend using an Azure Logic App with a schedule trigger instead; for a handful of databases, you could be up and running in a few minutes.
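For context, a runbook like that boils down to something like the sketch below. It assumes the Automation account has a managed identity with access to the subscription (that part is an assumption on my end), and the loop over resource groups is what wraps the script shown further down.

# Minimal runbook skeleton (a sketch, not the full solution):
# assumes the Automation account's managed identity can read the subscription.
Connect-AzAccount -Identity | Out-Null

# Walk every resource group; the per-server work shown below goes inside this loop.
foreach ($rgs in (Get-AzResourceGroup).ResourceGroupName)
{
    # ...enumerate servers and databases, then run the maintenance queries...
}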

However, in my case I’m working in a customer subscription, and my solution needs to handle all of the databases in that subscription automatically. The customer has a data-as-a-service model and has been using the serverless tier of Azure SQL Database to reduce their compute costs. The serverless tier is effectively auto-close (and auto-scale) for Azure SQL Database. When you connect, you actually hit a gateway (yourdatabase.database.windows.net is a public IP that in turn routes to your actual database), and that first connection triggers Azure to resume the paused database. Resuming can take up to 45 seconds, which means the first connection will most likely fail.
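As an aside, if you only needed to wake a single database before doing real work, a simple retry loop is one way to absorb that first failed connection. This is just a sketch with placeholder connection values, not the approach I ended up using below:

# Sketch: retry a throwaway query until the serverless database has resumed.
# Server, database, and credential values are placeholders.
$attempts = 0
do
{
    try
    {
        Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' `
            -Database 'yourdatabase' -Username $adminlogin -Password $pwd `
            -Query 'SELECT @@SERVERNAME;' -ErrorAction Stop
        $resumed = $true
    }
    catch
    {
        $resumed = $false
        $attempts++
        Start-Sleep -Seconds 15   # give the database time to resume
    }
} while (-not $resumed -and $attempts -lt 4)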

I was running into those failures, so I looked at my code and made some minor modifications.

{
    # All of the SQL servers in the current resource group
    $svr = (Get-AzSqlServer -ResourceGroupName $rgs).ServerName
    #write-host 'rg:'$rgs

    foreach ($svrs in $svr)
    {
        $sql = $svrs + '.database.windows.net'
        write-host $sql

        # Every user database on the server (skip master)
        $d = Get-AzSqlDatabase -ResourceGroupName $rgs -ServerName $svrs | Where-Object {$_.DatabaseName -NE 'master'}
        $db = $d.DatabaseName
        $servicetier = $d.RequestedServiceObjectiveName
        $size = $d.maxSizeBytes

        if ([string]::IsNullOrEmpty($db) -eq $false)
        {
            # First call uses the throwaway wake-up query; it is expected to
            # fail while the serverless database resumes.
            $Params = @{
                'ServerInstance' = "$sql";
                'Database'       = "$db";
                'Username'       = "$adminlogin";
                'Password'       = "$pwd";
                'Query'          = "$query";
            }

            # write-host $Params.Query
            Invoke-Sqlcmd @Params

            # Give the database time to come online, then run the real work.
            Start-Sleep -Seconds 45
            $query = "EXEC dbo.usp_AdaptiveIndexDefrag;"

            $Params = @{
                'ServerInstance' = "$sql";
                'Database'       = "$db";
                'Username'       = "$adminlogin";
                'Password'       = "$pwd";
                'Query'          = "$query";
            }

            Invoke-Sqlcmd @Params
        }
    }
}

In this case, my first query is SELECT @@SERVERNAME, but you can issue any query, since it’s going to fail anyway; its only job is to trigger the resume. I then add a sleep in PowerShell. I’m using 45 seconds, but you could probably drop that to 30 seconds if you have a tight time window. Once the database is online, I run the AdaptiveIndexDefrag script that Pedro Lopes (t) has written.
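For completeness, the script above assumes $adminlogin, $pwd, and the initial $query were set near the top of the runbook. A sketch of that setup might look like this (the credential asset name 'SqlAdmin' is a placeholder I made up):

# Sketch: values the loop above expects, defined at the top of the runbook.
# 'SqlAdmin' is a placeholder name for an Automation credential asset.
$cred = Get-AutomationPSCredential -Name 'SqlAdmin'
$adminlogin = $cred.UserName
$pwd = $cred.GetNetworkCredential().Password

# Throwaway query whose only job is to wake the serverless database.
$query = "SELECT @@SERVERNAME;"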
