One of the challenges of cloud computing is what I like to call the “single bill problem”. While cloud platforms can be more flexible, and therefore more cost effective for companies, receiving a single bill that includes storage, networking, power, and compute can cause sticker shock for the executives who see it. While the total cost is likely about the same as it was before, seeing it all on a single bill can be a surprise. Cloud cost management usually starts in earnest after you receive that first invoice from Microsoft.
At the end of last year, I built a new training course, which I had the opportunity to deliver to a team of architects and project managers at a large enterprise. We focused on a few things: identifying the types of resources where you can have major cost overruns, surprise costs like network egress, and, most importantly, how to identify cloud costs and trace them back to business units. We also focused on building proper architecture for different types of solutions and how pricing works for various services.
While DCAC spends a lot of time working on databases like SQL Server, we also spend a lot of time doing Azure architecture work and helping customers move to the cloud. We’re experienced with a wide variety of Azure deployments involving data, networking, and more. We are happy to help you analyze and optimize your Azure costs, train your team, or perform an architectural review for your new build or cloud migration.
Database storage has evolved in many ways in recent years. Besides dramatic increases in performance from NVMe interfaces and ultra-fast SSDs, different approaches to storing data have provided increased flexibility. For example, Azure SQL Database stores its data and log files as HTTP targets in Azure Storage. This concept is known as object-based storage, whereby the storage is aware of all of the files on the system and can store additional metadata about those files. Tintri has been at the forefront of this object-based approach for years with their VMstore systems, which greatly reduced the effort to deploy and configure storage for VMware and Hyper-V virtual machines. In addition to ease of deployment, storing files as objects allows for better data collection and integration features like the ability to snapshot your files.
Tintri has recently introduced database awareness into their VMstore systems with SQL Integrated Storage. This means that instead of storing your data and log files on, say, a volume you created within your VM, you create the database files on SMB shares, where they are stored in the file system of the storage. This approach provides a number of benefits over storing your databases in the file system of your VM. The first is that the storage system can capture detailed latency and IOPS metrics in real time.
It also means you can truly control the IOPS for data or log files—this is particularly useful for problematic databases associated with operational applications (think SCOM or McAfee ePO) that are impacting your storage and the performance of mission-critical databases. Another performance benefit is that each database can have its own I/O channel, which avoids queuing in Windows, something that can occur when a device becomes overwhelmed with I/O.
Beyond capturing metrics and providing automated quality of service, you can back up your databases using crash-consistent snapshots. Snapshots can be a controversial topic amongst DBAs—clumsy solutions from some vendors have broken availability groups and, in the worst cases, even caused corruption. Since the Tintri VMstore contains the file system where the database file lives, it is not using the traditional VSS framework that requires freezing (or stunning) SQL Server’s I/O. This means you can realize all of the benefits of snapshots without disruption when performing tasks like moving production databases to lower tier environments and creating new copies for availability group replicas – in addition to traditional backup and restore operations.
The process for cloning database files can be executed either via the Tintri Global Center user interface for the VMstore system, or by using REST API calls. Tintri is releasing a set of PowerShell cmdlets in the near future, so you will be able to integrate snapshot and volume management into your automation workflows.
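Since the PowerShell cmdlets aren’t out yet and I’m not documenting Tintri’s actual REST endpoints here, take the sketch below as illustrative only: the endpoint path, field names, and share name are all hypothetical placeholders, showing only the general shape of scripting a snapshot request against a REST API.

```python
import json

def build_snapshot_request(vmstore_host, share_name, retention_minutes=1440):
    """Build the URL and JSON body for a crash-consistent snapshot call.

    NOTE: the endpoint path and payload field names below are placeholders,
    not Tintri's documented API -- consult the real REST reference before
    automating against a VMstore.
    """
    url = f"https://{vmstore_host}/api/snapshot"  # hypothetical path
    body = {
        "sourceShare": share_name,               # hypothetical field
        "consistency": "CRASH_CONSISTENT",       # hypothetical field
        "retentionMinutes": retention_minutes,   # hypothetical field
    }
    return url, json.dumps(body)

# A scheduler or CI job would POST this with an HTTP client of your choice.
url, body = build_snapshot_request("vmstore01.example.com", "sql-prod-data")
print(url)
print(body)
```

In a real workflow you would wrap the POST (and authentication) in a retry loop and log the snapshot ID the API returns, so downstream clone steps can reference it.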
As mentioned earlier, by storing the files directly on the VMstore system, Tintri SQL Integrated Storage is able to collect a great deal of metadata about the operations on your data files. This capability provides real-time insight into the number of IOPS your databases are performing as well as information on throughput, deduplication, and utilization. The closest thing SQL Server offers to that kind of insight is the dynamic management function sys.dm_io_virtual_file_stats, which provides useful data but is reset every time the instance restarts. As a result, it can be harder to analyze trends over time. Additionally, it can be harder to identify spikes in demand, because a database can be extremely busy, say during a batch process, and idle the rest of the time. Tintri SQL Integrated Storage includes an intelligent dashboard which helps you identify your busiest volumes and individual databases. Having a storage solution that is fully aware of your databases and their I/O activity is another tool in the arsenal for a DBA, enabling you to better understand and control I/O performance and take advantage of features like snapshots to provide much better service to your organization.
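To see why the since-restart counters in sys.dm_io_virtual_file_stats make trend analysis harder, note that any tooling built on it has to sample the cumulative counters periodically and diff consecutive samples to get a rate. A minimal sketch of that diffing logic, with made-up numbers standing in for the DMF’s num_of_reads and num_of_writes columns:

```python
# Each dict mimics one row from sys.dm_io_virtual_file_stats for a single
# file: reads and writes are cumulative since the instance last started.
def iops_between(sample_a, sample_b, interval_seconds):
    """Average IOPS between two cumulative samples taken
    interval_seconds apart."""
    delta_ops = ((sample_b["num_of_reads"] - sample_a["num_of_reads"])
                 + (sample_b["num_of_writes"] - sample_a["num_of_writes"]))
    return delta_ops / interval_seconds

t0 = {"num_of_reads": 1_000_000, "num_of_writes": 250_000}
t1 = {"num_of_reads": 1_012_000, "num_of_writes": 253_000}  # 60s later
print(iops_between(t0, t1, 60))  # → 250.0
```

Because the counters zero out on restart, any long-term trend you build this way has gaps at every restart, and a burst that falls between two samples averages away, which is exactly the blind spot a storage-side dashboard avoids.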
You can learn more about Tintri SQL Integrated Storage at https://tintri.com/sql.
Note: DCAC was compensated by Tintri for this post
As Microsoft MVPs and Partners, as well as VMware experts, we are called on by companies all over the world to fine-tune and problem-solve the most difficult architecture, infrastructure, and network challenges.
And sometimes we’re asked to share what we did, at events like Microsoft’s PASS Summit 2015.