Denny Cherry & Associates Consulting Launches Vital Microsoft Azure Database for Elisabeth Glaser Pediatric AIDS Foundation

Published On: 2019-08-08

Solution Now Available in the Microsoft Azure Marketplace

Microsoft Azure customers worldwide can now access the Denny Cherry & Associates Consulting (DCAC) Cloud Migration Solution and take advantage of the scalability, reliability, and agility of Microsoft Azure to drive application development and shape business strategies.

OCEANSIDE, CA, U.S. — August 8, 2019 — Denny Cherry & Associates Consulting (DCAC) today announced the launch of a vital new Microsoft Azure SQL Database on behalf of the nationally renowned non-profit, the Elizabeth Glaser Pediatric AIDS Foundation (EGPAF). The Cloud Migration solution DCAC developed is now available in the Azure Marketplace, an online store for buying and selling cloud solutions certified to run on Azure. DCAC customers can now take advantage of the scalability, high availability, and security of Azure, with streamlined deployment and management.

The database will drive communications between EGPAF and more than 5,200 medical facilities spanning 19 countries, enabling the rapid transfer of medical information and supporting HIV prevention, diagnosis, and treatment services for pregnant mothers, children, and families. Power BI reports will let EGPAF staff see data as soon as it is imported and filter it by country or time range. Permissions and encryption are handled within the Azure SQL Database to ensure all legal and medical compliance requirements are met. Finally, the database allows the foundation to provide donors with high-quality business intelligence and analytics so they can see what their donations are accomplishing.

Architecting the Azure data warehouse solution involved informational, compliance, and financial challenges across international lines for the award-winning experts at DCAC. From an informational standpoint, EGPAF works with non-standard data from multiple African countries, each of which uses its own databases in ever-changing formats. This meant migration to the cloud would be one challenge, while ongoing data entry would be another. From a compliance perspective, the database had to convey medical data across international borders. Finally, as a non-profit, EGPAF had to keep costs under tight control, and DCAC demonstrated how an all-Azure solution could help it do so. According to EGPAF, DCAC’s innovation has the potential to save up to 300,000 administrative hours per year, a 75% reduction in time spent gathering, aggregating, and cleaning data.

Stephanie Bruno, the Senior Manager of Informatics at EGPAF, commented, “Access to data about the mothers and children we assist is literally a matter of life and death. Having clean, trustworthy data available enables faster access to services and improves medical outcomes. Thanks to DCAC and Microsoft Azure, we can provide more tailored services to health facilities to improve outcomes for children and families. Additionally, our donors are requiring detailed data so they know their donations are being used effectively. DCAC’s solution also offers us enormous administrative savings in terms of hardware and software while providing better data to save lives.”

Senior Consultant and lead architect on the project, Joey D’Antoni, commented, “This was a very challenging project. From an Azure perspective, the infrastructure is relatively straightforward; however, the variety and lack of consistency with data sources make the ETL piece of the project extremely challenging.”

DCAC’s database for EGPAF has already garnered multiple accolades at the American Business Awards, including the 2019 bronze for Technical Innovation of the Year, as well as Most Innovative Tech Company of the Year and Technical Professional of the Year (Joey D’Antoni).

Sajan Parihar, Senior Director, Microsoft Azure Platform at Microsoft Corp., said, “We’re pleased to welcome Denny Cherry & Associates Consulting and their Cloud Migration Solution to the Microsoft Azure Marketplace, which gives our partners great exposure to cloud customers around the globe. Azure Marketplace offers world-class quality experiences from global trusted partners with solutions tested to work seamlessly with Azure.”

About Denny Cherry & Associates Consulting
Award-winning Microsoft Partner and Gold Platform-certified Denny Cherry and Associates Consulting assists companies with reliably attaining IT goals such as Azure migration, high availability, scalability, and SQL Server virtualization and acceleration, while finding ways to save on costs. With clients ranging from Fortune 50 corporations to small businesses, their commitment to each is the same: to provide a deft, high-speed IT environment that maximizes every aspect of the platform, from architecture to infrastructure to network.

For more information, press only:
Kathleen Hannon, publicist, DCAC, 704-912-0209 or via email at kathleen@dcac.com.



My Preferences for SSIS Design


Lately, I have been using SSIS execution frameworks and Biml created by other people to populate data marts and data warehouses. It has taught me a few things and helped me clarify what I like and dislike compared to my usual framework. I’ve got the beginning of my preferences list started below. There are probably situations where I would want to deviate from my preferences, but I think they make a good starting point.

Populating Data

  • For self-service BI environments, use a date dimension that doesn’t go out much further than the greatest date in your data. This can be a view or stored procedure that limits and updates dates rather than a static date dimension that goes out until the end of time (a sketch of such a view follows this list).
  • Unknown values are included in normal dimension loads, not in separate scripts that must be run on deployment. This way, if an unknown value is ever left out or deleted, it will be added in the next data load rather than requiring a special execution of a script.
  • Every table should have InsertDateTime and UpdateDateTime columns. The UpdateDateTime column should be populated with the same value as the InsertDateTime column upon creation of the row, rather than being left null.
  • Whatever you use to create tables, include primary keys, foreign keys, and indexes with your table definitions. Provide explicit constraint names to simplify database comparisons. You can disable your foreign keys, but they need to be there to provide that metadata (see the table sketch after this list).
  • Separate your final dimensional/reporting tables from audit tables and staging tables. This can be done with separate schemas or even separate databases.
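
To make the date dimension preference concrete, here’s a minimal sketch of such a view. The DimDate and FactSales names are made up for illustration; FactSales simply stands in for whatever table defines the greatest date in your data.

-- Expose the date dimension as a view limited to the dates that actually appear in the data
CREATE VIEW dbo.vwDimDate
AS
SELECT d.DateKey,
       d.FullDate,
       d.CalendarYear,
       d.CalendarMonth
FROM dbo.DimDate d
WHERE d.FullDate <= (SELECT MAX(f.OrderDate) FROM dbo.FactSales f);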
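
And here’s a minimal sketch of a dimension table that follows the audit-column and explicit-constraint-name preferences; the table, column, and constraint names are made up for illustration.

-- InsertDateTime and UpdateDateTime both default to the row's creation time;
-- the load process can also set them explicitly so the two values match exactly
CREATE TABLE dbo.DimCustomer
(
    CustomerKey    INT IDENTITY(1, 1) NOT NULL,
    CustomerCode   NVARCHAR(20)       NOT NULL,
    GeographyKey   INT                NOT NULL,
    InsertDateTime DATETIME2(3)       NOT NULL
        CONSTRAINT DF_DimCustomer_InsertDateTime DEFAULT (SYSDATETIME()),
    UpdateDateTime DATETIME2(3)       NOT NULL
        CONSTRAINT DF_DimCustomer_UpdateDateTime DEFAULT (SYSDATETIME()),
    -- Explicit constraint names make database comparisons much easier
    CONSTRAINT PK_DimCustomer PRIMARY KEY CLUSTERED (CustomerKey),
    CONSTRAINT FK_DimCustomer_DimGeography
        FOREIGN KEY (GeographyKey) REFERENCES dbo.DimGeography (GeographyKey)
);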

Data Integration Process

  • There should be consistent error handling in each layer (staging, dims, facts, etc.). If you write errors to another location (flat file, database table), have a process that notifies the right people that errors occurred. The process of consuming corrected data must be built, tested, and integrated into the existing process.
  • Make your error handling process reflect what end users need to see when an error occurs. Does it make sense to have a partial load when there is an issue? Or should it be all or nothing?
  • Have smart master packages that determine which packages to run. Don’t check whether the package should run inside of the package itself – do that in the master package.
  • Master packages should execute child packages in parallel as much as possible rather than defaulting to sequential execution.
  • Have an audit log with one row per package. Include the SSIS ServerExecutionID in the audit log – not the package-specific ID but the execution ID for the entire run. If there are incremental loads, the WHERE clause used to filter the load should be captured in the audit table. Include row counts as well as package start and stop times in your audit log (a sketch of such a table follows this list).
  • Add an AuditLogID column on your dimension, fact, and staging tables so you can trace each row back to the process that populated it.
  • For dims and facts, perform change detection/deduplication of records, usually through hash values and either SSIS lookups or SQL queries with WHERE NOT EXISTS.
  • Avoid T-SQL MERGE statements. Write individual insert/update/delete statements. This avoids any bugs in MERGE and makes your SQL easier to understand and troubleshoot (a sketch of this pattern follows this list).
  • Use consistent naming of tasks, source, destinations, packages, connection managers, etc. Connection managers pointing to databases should have names that refer to the database rather than the server.
  • If you are downloading files, move the files to an archive folder once files are processed. You can have rules in place if you have retention limits. But you probably need to keep files from at least the last load for audit and troubleshooting purposes. This could change if you are importing very sensitive data.
  • Even if you need to copy all columns from a table, write a select statement for database sources that explicitly names fields rather than using SELECT * or just selecting the table or view.
  • SSIS lookups should use an explicit query rather than referencing an entire table.
  • Implement restartability at the package level for most packages (you should have single-purpose packages executed by a master package). Checkpoints are ineffective within a package. If you build your audit log table correctly, you can get the list of packages that have not run in the last X minutes/hours and feed that to your master package.
  • Send email from your scheduling tool rather than within an SSIS package.
  • Track data lineage in your tables. This can be as simple as having a table that lists all of your data sources with an ID column and including that ID value in each row of your staging, fact, and dimension tables.
  • Dims and facts are not truncated. Data should be inserted and updated (and deleted, if necessary).
  • Connection strings used in multiple packages should be project-level connection strings.
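
To make the audit log and restartability items concrete, here’s a minimal sketch of an audit table with one row per package execution, plus the kind of query a master package could use to find packages that have not completed recently. The names are illustrative, not a prescribed framework, and in practice the table would live in a separate audit schema or database per the earlier point.

-- One row per package execution
CREATE TABLE dbo.PackageAuditLog
(
    AuditLogID             INT IDENTITY(1, 1) NOT NULL,
    PackageName            NVARCHAR(260)      NOT NULL,
    ServerExecutionID      BIGINT             NOT NULL, -- the execution ID for the entire run, not the package-specific ID
    IncrementalWhereClause NVARCHAR(4000)     NULL,     -- the filter used for an incremental load, if any
    RowsInserted           INT                NULL,
    RowsUpdated            INT                NULL,
    PackageStartTime       DATETIME2(3)       NOT NULL,
    PackageEndTime         DATETIME2(3)       NULL,
    CONSTRAINT PK_PackageAuditLog PRIMARY KEY CLUSTERED (AuditLogID)
);

-- Packages that have not completed in the last 12 hours, to feed back to the master package
SELECT l.PackageName
FROM dbo.PackageAuditLog l
GROUP BY l.PackageName
HAVING MAX(l.PackageEndTime) IS NULL
    OR MAX(l.PackageEndTime) < DATEADD(HOUR, -12, SYSDATETIME());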
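
And here’s a minimal sketch of hash-based change detection written as separate INSERT and UPDATE statements with WHERE NOT EXISTS instead of MERGE. The DimProduct and stg.Product tables, their columns, and the RowHash column (assumed to be computed during staging, for example with HASHBYTES) are all made up for illustration.

DECLARE @AuditLogID INT = 0; -- in practice this would be passed in from the SSIS package

-- Insert rows from staging that don't exist in the dimension yet
INSERT INTO dbo.DimProduct (ProductCode, ProductName, RowHash, AuditLogID, InsertDateTime, UpdateDateTime)
SELECT s.ProductCode, s.ProductName, s.RowHash, @AuditLogID, SYSDATETIME(), SYSDATETIME()
FROM stg.Product s
WHERE NOT EXISTS (SELECT 1 FROM dbo.DimProduct d WHERE d.ProductCode = s.ProductCode);

-- Update rows whose hash shows the source row has changed
UPDATE d
SET    d.ProductName    = s.ProductName,
       d.RowHash        = s.RowHash,
       d.AuditLogID     = @AuditLogID,
       d.UpdateDateTime = SYSDATETIME()
FROM   dbo.DimProduct d
       INNER JOIN stg.Product s ON s.ProductCode = d.ProductCode
WHERE  d.RowHash <> s.RowHash;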

Biml Specifics

  • Understand whether you need a flexible Biml Framework or just an accelerator for a current project. If you need flexibility, don’t hardcode connection strings and other things that change when you add/change sources and destinations. If you just need to accelerate development of a simple data mart, total flexibility may be overkill and actually cause more work.
  • Have a single place where you add synthetic metadata, as much as possible. BimlScript gets messy and difficult to understand when you have some extended properties that are read in, some annotations added directly, and some variables defined in your code. This is why I like synthetic metadata stored in a database. Also, extended properties don’t exist in Azure SQL Data Warehouse, so if you need your framework to work there you can’t go that route.
  • Don’t repeat your code in multiple files. If you have some logic that gets reused, move it to a separate file and reference it from other files.

What Do You Think?

What’s on your SSIS preferences list? Do you disagree with one of my preferences and want to share your knowledge? Let’s chat in the comments.



Run a PowerShell Script Against all of Your Azure SQL Databases

Published On: 2019-08-06

I started working on this bit of code a few months ago, and it’s served me really well. Just about every command you run against a SQL Database requires you to supply the server name and the resource group name as parameters, and in order to get the list of server names you have to query each resource group.


This code is pretty simple: it looks for the Azure SQL servers in each resource group, and then for the databases other than master on each server. In this example I’m setting the storage account for Azure Threat Detection, but you could do anything you wanted in that last loop.

# Get the names of all resource groups in the current subscription
$rg = (Get-AzResourceGroup).ResourceGroupName

foreach ($rgs in $rg)
{
    # Get the Azure SQL servers in this resource group (empty if there are none)
    $svr = Get-AzSqlServer -ResourceGroupName $rgs

    foreach ($svrs in $svr)
    {
        # Pick the storage account based on the server's region
        # (the Az cmdlets return locations in their short form, e.g. westus, westus2)
        if ($svrs.Location -eq 'westus')
        {
            $stg = 'storage2'
        }
        else
        {
            $stg = 'storage1'
        }

        # Get every database on the server except master
        $dbs = Get-AzSqlDatabase -ResourceGroupName $rgs -ServerName $svrs.ServerName |
            Where-Object { $_.DatabaseName -ne 'master' }

        # Set the threat detection policy for each database
        foreach ($db in $dbs)
        {
            Set-AzSqlDatabaseThreatDetectionPolicy -ResourceGroupName $rgs `
                -ServerName $svrs.ServerName -DatabaseName $db.DatabaseName `
                -NotificationRecipientsEmails "bob@contoso.com" -EmailAdmins $True `
                -StorageAccountName $stg
        }
    }
}
The last bit of complication in this code is specifying the storage account based on the location of the Azure SQL server, which is a property of the server object.


Moving a disk in Azure that’s in Managed Storage

Published On: 2019-08-05

We’ve been using Azure for several years now at DCAC. Back when we started in Azure, there were no PaaS services for MySQL (there was one, but it was stupid expensive and from a third party). When we created VMs, we put some in US West and some in US Central. Now that we’ve been able to move all our web properties from IaaS to PaaS, we wanted to consolidate the VMs into one region so we could kill the site-to-site VPN (and save the $50 a month we were spending on keeping the VPN Gateway up). This required moving some VMs from US Central to US West, as there are a couple of VMs that we need to keep and I’m too lazy to set back up.

We ended up needing to move two VMs from US Central to US West. One has standard (unmanaged) disks and one has a managed disk. The question became how to move them. The answer was Start-AzStorageBlobCopy. The command is fairly straightforward.

# Storage account keys for the source and destination storage accounts (truncated here)
$sourcekey = "17SZrnd…Q=="
$destkey = "UUSkMu…A=="
# Build a storage context for each account
$sourceContext = New-AzStorageContext -StorageAccountKey $sourcekey -StorageAccountName SourceName
$destinationContext = New-AzStorageContext -StorageAccountKey $destkey -StorageAccountName DestinationAccountName
# Kick off a server-side copy of the VHD from the source container to the destination container
Start-AzStorageBlobCopy -Context $sourceContext -DestContext $destinationContext -SrcContainer vhds -DestContainer vhds -SrcBlob "VMdisk0.vhd" -DestBlob "VMdisk0.vhd"

Now this will nicely handle the VM whose disks are in a storage account. So how do we handle the VM that’s not in a storage account, the one with the managed disk? Pretty much the same way with Start-AzStorageBlobCopy; there’s just a different parameter to use. Let’s look at that code.

# Only the destination storage context is needed; the managed disk is referenced by its SAS URI
$destkey = "UUSkMu…A=="
$destinationContext = New-AzStorageContext -StorageAccountKey $destkey -StorageAccountName DestinationAccountName
# The SAS URI comes from the Disk Export blade described below (truncated here)
Start-AzStorageBlobCopy -AbsoluteUri "https://md-h1fl3bgpqvq3.blob.core.windows.net/f1mx3lrkhl1q/abcd?sv=2017-04-17&sr=b&si=994de0ad-04eb-46a7-8d91-62e827064bf4&sig=Q…3D" -DestContext $destinationContext -DestContainer vhds -DestBlob "VMdisk0.vhd"

Now, the first question is how do we get the URI that we need to pass in. First you’ll want to stop and delete the VM (don’t worry, this won’t delete the disk for the VM). Then in the Azure Portal find Disks and select the disk that you want to copy. On the menu that opens when you select the disk, you’ll see a “Disk Export” option. Select that. It’ll ask you how long the SAS key should be valid for. It defaults to 3600 seconds; I changed it to 7200 seconds to give it plenty of time, then clicked the Generate URL button. When that’s done it’ll give you a SAS URL (don’t close the window until you’ve copied the URL, as the portal will only show it to you once). Take that URL and drop it into the AbsoluteUri parameter of Start-AzStorageBlobCopy.

Now these commands are going to take a while to run, and we want to see how far along they are. We can do that with another simple PowerShell cmdlet.
Get-AzStorageBlobCopyState -Blob "VMdisk0.vhd" -Context $destinationContext -Container vhds -WaitForComplete

This command throws a nice message up in my PowerShell window, and the window waits for the copy of the file to complete. The really nice thing about Start-AzStorageBlobCopy is that it doesn’t download the blob; all the copying happens in Azure, so the copy is actually pretty quick.
Once this is done you have your VHD sitting in a storage account, so you need to move it back into managed storage. This is easiest to do in the portal GUI, unless you really want to do it in PowerShell.

Simply go into the Disks section of the Azure Portal and create a new disk (the plus sign in the upper left). When the next screen opens, one of the questions will be what the source of the new disk should be. One of the answers will be Storage Blob; you’ll want to select this option.
After you set the blob (there’s a GUI, use it, love it), make sure you set the OS of the disk, the correct size, and the performance tier (Standard HDD, Standard SSD, or Premium SSD), then click OK. Once that’s done, select your new disk and you can create a new VM based off of the disk. At that point your VM is up and running in the new site. (Don’t forget to delete the old disk after you ensure the new VM is working, as you don’t want to keep getting charged for it.)

Denny



