Last week I was introduced to Avassa. As the presentation started I had no idea what their solution did. By the end of the presentation, I wanted to know more about how it worked and my mind was racing with the deployment options that I see from it, both for our existing clients as well as future clients.
The Avassa solution is designed to assist you in pushing docker images to the edge devices that are running within your environment. The demonstration that Avassa delivered at Tech Field Day 24 really drove home just how powerful their solution is. In their demo (which was probably 90% of their presentation) they had a fictitious movie theater company with 100 theaters all over the world. Each theater had a small docker environment that ran the backend software needed to operate the movie theater. Using the Avassa software they were able to deploy new images to all 100 theaters within seconds with just a few mouse clicks. Upgrading those docker images to a new version was just as simple. You simply tell the solution that there’s a new image available on GitLab, and the solution downloads the image and pushes it to all the edge devices.
The beauty of this solution is that network connectivity to the edge site doesn’t need to be stable. In the event that the network is unstable, the Avassa solution downloads the image and deploys it once the download completes. So if the edge device isn’t on the internet for days, weeks, or months, the command to upgrade that edge node will be processed as soon as it is back online. When I heard this I immediately thought of use cases in fields such as oil and gas, where you have oil rigs in the middle of the ocean that aren’t always able to connect to the Internet due to weather or satellite uplink issues, or the shipping/cruise industry, where you’ve got ships that aren’t in their home port for weeks or months and you want to be able to push software changes to the ships as soon as they get internet access back.
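The store-and-forward behavior described above can be sketched in a few lines. This is purely an illustration of the idea, not Avassa’s actual implementation or API; the class and method names are hypothetical.

```python
# Illustrative store-and-forward sketch (not Avassa's implementation):
# deployment commands for an offline edge site are queued and replayed
# as soon as the site reconnects.

class EdgeSite:
    def __init__(self, name):
        self.name = name
        self.online = False
        self.pending = []        # deploy commands queued while offline
        self.image = "v1"        # image currently running at the site

    def deploy(self, image):
        if self.online:
            self.image = image
        else:
            self.pending.append(image)   # hold until connectivity returns

    def reconnect(self):
        self.online = True
        while self.pending:              # replay queued upgrades in order
            self.image = self.pending.pop(0)

rig = EdgeSite("oil-rig-7")
rig.deploy("v2")        # rig is offline: the command is queued, not lost
rig.reconnect()         # uplink restored: v2 is applied automatically
```

The key point is that the upgrade command is durable: losing the uplink delays the deployment rather than failing it.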
Even edge devices that are always connected, such as studios that run spin classes, retail operations, etc., could take advantage of this by lowering the time needed to deploy software to all their locations. When retail locations need to do a software upgrade, they will either have someone from the head office handle the upgrade through a remote desktop solution or, if they need physical hands on a keyboard, they will have a local IT partner provide them. Neither of these approaches is free. But by pushing docker images to all the stores, either in a rolling fashion or all at once, that cost drops to next to nothing.
The ability to quickly and easily push changes to edge devices has always been a problem in environments with large numbers of them. As the number of edge devices increases, the management headaches involved in upgrading software increase with it.
If you’re in an environment with edge devices, I highly recommend checking out the videos from Avassa’s presentation at Tech Field Day 24. During the session, we pointed out some things to the Avassa team that we’d love to see improved in their UI, and the Avassa team’s response was basically “yes we know, it’s already in planning for a later release” which tells me that they are going in the right direction for sure, and I personally can’t wait to see what their team is able to do with their solution as it gets more mature.
Registering for one of these sessions is quick and easy. Simply register for the free conference (you’ll probably need to select the Create Account option). After you fill out all the demographic questions, select the conference pass, which will get you into the free event being presented from November 10th-November 12th.
Select the free conference, then select Monica’s session on Monday and/or Denny’s session on Tuesday (there is an additional fee to attend these day-long sessions).
Then complete your registration.
If you have already registered for the free conference, then login to your existing account, and select the Add package button on the right.
From here you can select the precon sessions that you want to add to your registration and complete your order.
The cost for the pre-con sessions is $200 USD plus any VAT that’s due based on where you live and where RedGate has operations (as RedGate is hosting the event).
Don’t forget to register and we’ll see you on the 8th or the 9th.
If you have questions about the event, check out the event’s FAQ.
Blue/Green deployments are a common development practice. They allow you to deploy the application code for the next release and swap over to it after testing. When you’ve got several servers to deploy the application to, you can push the code change to two of the servers, then swap those servers to live while leaving the other two servers on the older version of the code. If there’s a problem with the release, the application can be failed back, and all of this is done with no outage to the application.
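The swap described above can be sketched as a pair of server pools behind a load balancer. This is a minimal illustration of the pattern, assuming a simple load-balancer abstraction; the names here are hypothetical, not any specific product’s API.

```python
# Minimal blue/green sketch: deploy to the idle pool, then cut traffic
# over. Rolling back is just another swap, so a bad release causes no
# outage to the application.

class LoadBalancer:
    def __init__(self):
        self.pools = {"blue": ["server running v1.0"], "green": []}
        self.live = "blue"       # the pool currently receiving traffic

    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # Push the new release to the idle pool only; live traffic is
        # untouched while the release is tested.
        self.pools[self.idle()] = [f"server running {version}"]

    def swap(self):
        # Cut traffic over to the freshly deployed pool. The old pool
        # keeps the previous version, ready for an instant fail-back.
        self.live = self.idle()

lb = LoadBalancer()
lb.deploy("v2.0")   # lands in the idle (green) pool
lb.swap()           # green is now live; blue still holds v1.0
```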
In databases, this blue/green deployment technique is much, much harder. This is because making changes to databases takes time, and depending on the change being made, it may require blocking in the database while the change is applied.
Often the idea proposed for a Blue/Green deployment process is to break the DR link and push the changes to the DR server. The problem then becomes that any data written to the production database server isn’t being pushed to the DR server. And you can’t write schema changes to one server without pushing those changes to the other server. Stateful systems like databases just don’t allow for the level of flexibility that stateless systems (like websites) do.
We’ve had a client get around these problems by still doing the blue/green release process on their application tier, but taking a different approach on the database tier. There, they design all of their application code and stored procedures to run even if there are extra columns on tables or extra parameters on stored procedures (they do this by assigning a null default value to the new parameters), allowing the application to use either the n version or the n+1 version of the database. This way, when the old version of the application code runs the stored procedure, it runs without issue even though some parameters are “missing”.
Columns that are to be removed from the database are left in place for 1-2 versions past when the column is no longer needed. The same applies when stored procedure parameters are to be removed: they are left in the stored procedure so that when the application calls the procedure and passes those parameters, their values are simply ignored, as the parameters are no longer used.
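The client’s pattern applies to T-SQL stored procedures, but the same compatibility idea can be shown in Python terms: every new parameter gets a null default, so callers on the old application version (which never pass it) keep working unchanged. The function and field names below are hypothetical.

```python
# Hedged sketch of the n / n+1 compatibility pattern: the n+1 schema
# adds include_tax with a null (None) default, so n-version callers
# that never pass it still run without issue.

def get_order(order_id, include_tax=None):   # include_tax added in n+1
    order = {"id": order_id, "total": 100.0}
    if include_tax:          # old callers leave this as None: ignored
        order["total"] = round(order["total"] * 1.08, 2)
    return order

# n-version application code, written before include_tax existed:
old_call = get_order(42)                     # parameter "missing", still fine
# n+1 application code can opt in to the new behavior:
new_call = get_order(42, include_tax=True)
```

The removal side works the same way in reverse: the parameter stays in the signature for a version or two, its value ignored, until no deployed caller passes it.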
While this doesn’t give the database true Blue/Green deployments it is probably as close as you can come from a database perspective.
One of the big mistakes that I see when working with clients on their cloud migrations is the size of VMs in the cloud compared to their size on-prem. Typically when people are moving machines from on-premises to the cloud they pick a Virtual Machine size that is the same as (or close to) the size of the on-prem machine. For non-database servers (and even for database servers sometimes) this isn’t needed.
One thing to remember is that the cloud gives you something you didn’t have on-premises: the ability to scale without any upfront commitment. When you run your Virtual Machines on-premises, you still have to budget for and purchase the hardware. So the Virtual Machines that you create on-premises need to be sized for their peak workloads over the lifetime of the machine, so that the hardware can be correctly purchased, allocated, and not overcommitted.
In the cloud, we don’t need to “reserve” CPU or RAM when we don’t need it. In fact, sizing VMs larger in the cloud has a direct financial cost: larger VMs cost more than smaller VMs do. So sizing up a VM means that VM costs more money to run, but we don’t incur that cost until we actually scale the VM up.
This can make services in the cloud much more cost-effective than a direct CPU/RAM equivalent of the on-premises server.
Are you new to Azure Data Factory and wondering what you don't know you don't know? The learning curve with new technologies can sometimes lead to some major refactoring down the line once we realize our mistakes. Join Meagan Longoria and Kerry Tyler to learn how to set up your data factory for success. They will start by discussing naming conventions, parameterization, Key Vault usage, and deployment with Azure DevOps. Then they'll share their recommendations on pipeline hierarchies, activity dependencies, error handling, and monitoring. Watch this webinar to help your organization avoid Data Factory regrets!
Watch Denny and Joey from DCAC, and Rob Krug from Avast as they talk about enterprise security, where companies fail from a security perspective, and what small / medium companies can do to get enterprise-grade security features without breaking the bank.
As Microsoft MVPs and Partners, as well as VMware experts, we are summoned by companies all over the world to fine-tune and problem-solve the most difficult architecture, infrastructure, and network challenges.
And sometimes we’re asked to share what we did, at events like Microsoft’s PASS Summit 2015.