For those of you who know me, or have heard me talk at a Code Camp in the last year, you’ve heard me talk about a data center migration that I want to do from Rackspace in Texas to our own equipment in the LA area. Well, that day has finally come.
Our current environment has served us well, but we have outgrown the services that Rackspace can offer us, and we have purchased our own production environment. This isn’t any rinky-dink environment either. We are starting out with a fully redundant, highly available environment which can be scaled by simply deploying more VMs; and in the event that the VMware hardware is overtaxed, by simply plugging another VMware server into the mix and shifting the VMs from one node of the cluster to another.
We are very proud of our new environment, so I figured that I’d give you some of the tech specs of it (yeah, I’m totally bragging here).
On the storage side of things we’ve got an EMC CX4-240 with ~35TB of storage laid out in three tiers. This is connected via multiple 4 Gig fibre cables to a pair of Cisco fibre switches. Each fibre switch is connected to each of the SAN attached servers.
We went with Dell servers (I would have preferred HP servers, but I was overruled).
The SQL Servers and the VMware servers are identical: quad-chip, quad-core servers, each with 64 Gigs of RAM. Each pair will be clustered for high availability. The VMware servers will look a little like they puked cables out of the back. Because of all the various subnets, and to ensure that each subnet is connected to each of the redundant network switches, each of the VMware ESX servers will have 11 Ethernet cables and 2 fibre cables coming out of the back.
The VMware vCenter services are running on a little single-chip, quad-core server. This is the only part of the system which isn’t redundant, but ESX can run fine for up to 14 days without the license server running, and since this machine has a 4-hour turnaround on parts we’ll be fine if the machine dies.
The file servers, which host the screenshots, emails, etc. which have been captured by our application and will be served to the website upon request, are a pair of dual-chip, quad-core servers, also clustered for high availability.
All the servers are SAN attached via fibre, and all data will be stored on the SAN.
Our current environment is much smaller: a single SQL Server, three web servers, and two file servers. The only redundant pieces are the fibre cables from the SQL Server to the SAN, and the fact that we have three web servers. However, if the newest web server goes down in the middle of the day, the other two will choke under the load at this point.
Rackspace has been pretty good to us over the years. It just wasn’t cost effective for us to purchase this level of hardware before now, and Rackspace was able to provide us with a good service level for a reasonable price. But at this point, given the amount of hardware we were looking to move to and the amount of bandwidth we are going to be using, it simply became more cost effective for us to host the systems at a local CoLo.
The main reason that I’m telling everyone this is that if you have been trying to find me for the last two weeks or so, this is why I can’t be found. I’ve been spending pretty much every waking moment putting this together and getting it all set up so that we can migrate over to it.
Needless to say, it’s an awesome project. How many people get the chance to build a new data center and design it the way they want to from scratch? Pretty much no one. Data centers usually grow from a small config of a server or two in a sporadic way, and they are inherited from one person to the next. But this time I get to design everything the way I want to from the ground up. It’s going to be a blast.