Blockchain is the new hot thing in IT. Basically, every company out there is trying to figure out where blockchain fits into their environment. Here’s the big secret of blockchain: your company doesn’t need it.
Blockchain is simply a write-once technology: you can append new records, but every change that was ever made is kept forever. Most systems only need some auditing to see when specific changes were made. For example, think about an order system that your company may have. You probably have auditing of some sort so that you can see when a new order comes in (it’s probably the create date field on the table), and there’s probably some sort of auditing recorded when the shipment is sent out. If the customer fixes their name, you probably aren’t keeping a record of that, because odds are you don’t care.
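The record-keeping described above can be sketched as an append-only hash chain, where each new record carries the hash of the record before it, so nothing can be overwritten without breaking the chain. This is a minimal illustration of the idea, not any particular blockchain product; the order fields are made up for the example.

```python
import hashlib
import json

def make_block(prev_hash, record):
    """Append-only block whose hash covers the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

# Every "change" to an order is a new block; nothing is ever overwritten.
chain = [make_block("0" * 64, {"order": 1, "status": "created"})]
chain.append(make_block(chain[-1]["hash"], {"order": 1, "status": "shipped"}))
chain.append(make_block(chain[-1]["hash"], {"order": 1, "name": "fixed typo"}))

# The full history of the order is still there, change by change —
# including the name fix most systems would happily throw away.
assert chain[1]["prev"] == chain[0]["hash"]
```

Note that even the trivial name fix became a permanent record; that is exactly the property most business systems don't need.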
Think about what systems you have at your company. Do you need to keep a record of every single change that happens to the data, or do you care about what happens to only some of the tables? Blockchain is a great technology for the systems that need that sort of data recording. But that’s going to be a small number of systems, and we shouldn’t be fooling ourselves into believing that every company needs a system like this.
I’m not going to argue that there are no systems that need this; there definitely are some systems that do. But those systems are going to be in the minority.
Executives are going to read about how blockchain is this great new thing, and they are going to want to implement it. The thing about blockchain is that there’s one major thing that building a system on blockchain requires, and that’s lots of drive space. Even if you want to purge data from the system after 5-6 years, deleting data from a blockchain database doesn’t reclaim any space; you aren’t actually deleting those records, so you just keep needing more drive space.
A friend of mine described blockchain as a database in full recovery mode, where you can’t ever back up (and purge) the transaction log. That’s how the database is going to grow. Remember those lovely databases that were on the Blackberry Enterprise Server back in the day? The database would be 100 Megs and the transaction log would be 1 TB in size. That’s precisely what blockchain is going to look like, but it’s going to be a lot worse because all your customers and/or employees are going to be using the application. If you have a database that’s 100 Gigs in size after a few years (which is a reasonable size for an application), the blockchain log for it could easily be 15-20 TB in size, if not 100TB. And you’ll have to keep this amount of space online and available to the system.
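To see how an append-only log reaches sizes like those, a back-of-the-envelope calculation helps. All the numbers below (average change size, change volume) are assumptions picked for illustration, not measurements from any real system:

```python
# Rough illustration: with append-only storage, the "log" grows with every
# change ever made, while the live database only holds the current rows.
row_size_kb = 2              # assumed average size of one recorded change
changes_per_day = 5_000_000  # assumed: orders, updates, status changes, ...
years = 5

log_growth_tb = row_size_kb * changes_per_day * 365 * years / 1024 ** 3
print(f"append-only log after {years} years: ~{log_growth_tb:.1f} TB")
# prints: append-only log after 5 years: ~17.0 TB
```

Modest-sounding per-change numbers land right in the 15-20 TB range, and a busier system pushes toward the 100TB figure quickly.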
So if your storage vendor likes selling you hard drives (and the nice car they get from the commissions), then blockchain is going to be great for them. If you don’t want to spend a fortune on storage for no reason, then blockchain is probably something you want to skip.
The post Why your company doesn’t need block chain appeared first on SQL Server with Mr. Denny.
Recently Intel announced some major upgrades to their Xeon CPU line. The long and short of the announcement was that Intel was releasing their 56-core CPUs to the public. That’s a massive amount of CPU power in a very small package. A dual-socket server, with two of these CPUs installed, would have 112 cores of CPU power, 224 with Hyper-Threading enabled. That’s a huge amount of CPU power. And if 112 cores aren’t enough for you, these CPUs can scale up to an eight-socket server if needed.
Each of these processors supports up to 4.5TB of RAM per socket, so a dual-socket server could have up to 9TB of RAM. (That’s 36TB of RAM for an eight-socket server if you’re keeping track.)
For something like a Hyper-V or a VMware host, these are going to be massive machines.
My guess is that we won’t see many of these machines at companies. Based on the companies that Intel had on stage at the keynote (Amazon, Microsoft, and Google), we’ll be seeing these chips showing up in the cloud platforms reasonably soon. The reason that I’m thinking this way is two-fold: 1. the power behind these chips is massive, and it makes sense that they are for a cloud play; 2. the people who were on stage at the Intel launch were executives from AWS, Azure, and GCP. By using these chips, the cloud providers will be able to get their cloud platforms probably twice as dense as they have them now. That leads to a lot of square feet being saved and reused for other servers.
As to how Intel was able to get 56 cores onto a single CPU, it’s through the same technique that they’ve used in the past. They took two dies, each with 28 cores on them, and made one socket out of that. In the olden days, we’d say that they glued two 28-core CPUs together to make one 56-core CPU. The work that Intel had to do to make this happen was definitely more complicated than that, but the thought exercise works for those of us not in the CPU industry.
These new CPUs use a shockingly small amount of power to run. The chips can use as little as 27 Watts of power, which is amazingly low, especially when you consider the number of CPU cores that we are talking about. Just a few years ago, these power numbers would have been unheard of.
The post Servers running too slow, just add all the cores! appeared first on SQL Server with Mr. Denny.
I’ve seen a couple of conversations recently about companies that want to be able to script out their database schema on a daily basis so that they have a current copy of the database, or systems where permissions within the database change frequently and a copy of those settings needs to be exported so that there’s a record of them.
My question to follow up on these sorts of situations is, why aren’t these settings in Source Control?
Pushing these changes to production requires a change control process (and the approvals that go with it). That means that you have to document the change in order to put it into the change control ticket, so why aren’t these changes pushed into your source control system?
Anything and everything that goes into your production systems should be stored in your source control system. If the server burns down, I should be able to rebuild SQL (for example) from the ground up, from source control. Instance-level settings, database properties, indexes, permissions, tables, views, and procedures should all be in your source control system. Once everything is stored in source control, the need to export the database schema regularly goes away, as does the need to export the permissions; those exports no longer serve any purpose.
Think I’m wrong, convince me in the comments.
The post If It Requires A Change Control Ticket To Change It, It Should Be In The Change Control System appeared first on SQL Server with Mr. Denny.
With Microsoft Azure now supporting Virtual Machines with NVMe storage, things get a little different when it comes to handling recoverability. Recoverability becomes very important because NVMe storage in Azure isn’t durable through reboots. This means that if you shut down the server, or there is a host problem, or the VM host has to be patched and rebooted, then anything on the NVMe drive will be gone when the server comes back up.
This means that to keep data on the VM past a shutdown you need to think about high availability and disaster recovery.
You need to have High Availability built into the solution (with Availability Sets or Availability Zones), which probably means Always On Availability Groups to protect the data. The reason that you need Availability Groups is that you need to be able to keep the data in place after a failover of the VM. When the VM comes back up, the server will be running, but it may not have any data. So what needs to be done at this point? You need to create a job on every node that automatically checks whether the databases are missing; if they are, it removes the databases from the AG, drops them, and reseeds them from the production server.
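The check-and-reseed job described above boils down to a simple decision: compare the databases the AG expects against what actually survived the reboot, and queue the remove/drop/reseed steps for anything that’s gone. The sketch below shows only that decision logic; the function names and step strings are placeholders of mine, not a real API — an actual job would run T-SQL or PowerShell against the availability group.

```python
def databases_to_reseed(expected, present):
    """Return the AG databases that vanished along with the NVMe volume."""
    return sorted(set(expected) - set(present))

def recovery_steps(expected, present):
    """For each missing database: remove from AG, drop, then reseed."""
    steps = []
    for db in databases_to_reseed(expected, present):
        steps += [f"remove {db} from AG",
                  f"drop {db}",
                  f"reseed {db} from primary"]
    return steps

# After a reboot the NVMe volume comes back empty, so everything reseeds:
print(recovery_steps(expected=["Sales", "Orders"], present=[]))
```

Because the drive comes back empty after any reboot, the job has to be safe to run on every node, every time, and do nothing when all databases are present.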
Because of the risk of losing the data that you are protecting, you probably want at least three servers in your production site so that if one server goes down, you still have redundancy of your system.
You need to have Disaster Recovery built into your solution as well as high availability. Because of the risk of losing data if a set of VMs fails, you need to plan for a failure of your production site. The servers that you have in DR may or may not need to have NVMe drives in them; it all depends on why you need NVMe drives. If you need the NVMe for reads, then you probably don’t need NVMe in DR; if you need NVMe for writes, then you probably do need NVMe in DR.
While a full failure of your production Azure site is improbable, it is possible, and you need to plan for it correctly.
If you have NVMe in DR, then you’ll want the same sort of scripts to reseed your databases in the event of a SQL Server restart.
But this is expensive
Yes, it is.
If the system is important enough to your business that you need the speed of NVMe drives, then you can probably afford the extra boxes required to run it. Not having HA and DR, then complaining that there was an emergency and the system wasn’t able to survive, won’t get a whole lot of sympathy from me. By not having HA and DR, you made the decision to have the data go away in the event of a failure. If these solutions are too expensive, then you need to decide that you don’t need this solution and should run the system on something else.
Sorry to be brutal, but that’s the way it is.
The post Azure NVMe Storage and Redundancy appeared first on SQL Server with Mr. Denny.