EMC World 2011 Day 1

So today was the first official day of EMC World. I had a great time meeting some fantastic people like Chuck Hollis from EMC.

There were also some fantastic sessions. This year EMC is doing something new: putting the sessions online through an iPad app as well as on their Facebook page.

First Session

The first session I went to today was listed as a Federated Live Migration hands-on workshop, which I assumed was a VMware session on doing live migration between data centers; that sounded really cool.  It turned out to be a hands-on lab about doing live migrations from a DMX to a new VMAX array, which also sounded pretty cool, so I stuck with it.  The concept is simple: when you have an EMC DMX array and you purchase a new EMC VMAX array, you can migrate the data from the old array to the new array with no downtime.  There were about 10-15 minutes of slides, then we moved on to the lab.  Sadly, one of the two EMC DMX arrays we were using for the lab had stopped accepting logins, so half of us (including me) couldn't do the lab.  We walked through the lab process, but other than that it was kind of a bust.

Second Session

The second session I made it to, “What’s New in the VNX Operating Environment”, was a little disappointing as well; it ended up being more basic than I was personally looking for.  Since the VNX has been released and is shipping, I assumed the session would be pretty in-depth and would go over the hardware changes between the CX4 and the VNX.  That wasn't the case, however.  The session did go over some of the new features of the VNX array, such as the new EMC Storage Integrator, a Windows MMC application that allows Windows admins to provision and manage storage from the server without needing a login to the EMC VNX array.  A similar plugin is also being released for VMware's vCenter Server.

Unlike the CX line of arrays, the VNX supports being both a NAS and a SAN in a single unit.  The array is basically a SAN with a NAS bolted on.  When a LUN is created you'll be asked whether you want a block-level LUN, which would be a traditional LUN, or a file system volume, which would be a NAS mount point.  When you create the NAS mount point, a LUN is created which is then automatically mounted to the NAS, and the NAS then formats the LUN.
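To picture that provisioning flow, here's a quick illustrative sketch in Python. None of these names are EMC's actual API; it's just my mental model of the two paths.

```python
# Illustrative sketch only -- these names are made up, not EMC's API.

def provision(kind, size_gb):
    """Model the two VNX provisioning paths: block LUN vs. file system volume."""
    lun = {"size_gb": size_gb}  # a LUN gets created either way
    if kind == "block":
        return lun  # traditional LUN, presented to a host over the SAN
    # For a file system volume, the new LUN is automatically mounted to the
    # NAS side of the array, and the NAS formats it before exposing a mount point.
    print(f"NAS: mounting {size_gb} GB backing LUN")
    print("NAS: formatting backing LUN as a file system")
    return lun

provision("block", 100)  # block-level LUN
provision("file", 100)   # NAS mount point backed by an auto-created LUN
```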

They also talked about the FAST cache which is available within the VNX array.  FAST cache takes the array's flash drives, mirrors them in a RAID 10 configuration, and uses them as a second-stage cache.  As blocks are accessed, if the same block is touched three times it is loaded into the FAST cache, so the fourth request for the block is served from the cache instead of the spinning disks.  Blocks can be moved into the FAST cache because of either reads or writes, as the cache is writable.  You can add up to 2.1 TB of FAST cache per VNX array.  Sequential workloads won't benefit much from FAST cache, because the FAST cache doesn't work with pre-fetched data.
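To make the promotion behavior concrete, here's a toy simulation of the promote-on-the-third-touch policy described above. This is my own sketch of the logic, not EMC's implementation.

```python
# Toy model of FAST cache promotion as described above -- my own sketch,
# not EMC's implementation.

class FastCacheSim:
    PROMOTE_AFTER = 3  # touches before a block is copied into the flash cache

    def __init__(self):
        self.touch_counts = {}  # block id -> number of times touched
        self.cached = set()     # block ids currently in FAST cache

    def access(self, block_id):
        """Touch a block (read or write) and report where it was served from."""
        if block_id in self.cached:
            return "FAST cache"
        self.touch_counts[block_id] = self.touch_counts.get(block_id, 0) + 1
        if self.touch_counts[block_id] >= self.PROMOTE_AFTER:
            self.cached.add(block_id)  # promoted; later requests hit the cache
        return "spinning disk"

cache = FastCacheSim()
for i in range(1, 5):
    # Touches 1-3 come from disk (the block is promoted on the third touch),
    # so the fourth request is served from FAST cache.
    print(f"touch {i}: {cache.access('block-42')}")
```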

The really cool thing about FAST cache for SQL Server databases is that all FAST cache work is done in 64k blocks, the same IO size that SQL Server uses.  The downside I see is that during a reindex you might get stale data loaded into the FAST cache, as the 64k blocks are read and written back during the index rebuild, especially if an update stats is done afterwards, which would be a third operation on the block and enough to move it into the FAST cache.  It will take me getting my hands on an array and doing a lot of testing to see how this actually plays out.
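Walking that scenario through the toy FastCacheSim above shows why the third touch worries me (again, this is just my reasoning in code, not observed array behavior):

```python
# Reusing the FastCacheSim class from the sketch above. Each maintenance
# operation counts as a touch on the same 64k block.
cache = FastCacheSim()
print(cache.access("64k-block"))  # rebuild reads the block      -> spinning disk
print(cache.access("64k-block"))  # rebuild writes it back       -> spinning disk
print(cache.access("64k-block"))  # update stats reads it again  -> spinning disk, promoted
print(cache.access("64k-block"))  # next request for the block   -> FAST cache
```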

One thing I thought was really interesting was a graph showing EMC's tests of Microsoft Exchange on an iSCSI setup, an NFS setup, and a Fibre Channel setup.  In these tests iSCSI had the slowest reads of the three, followed by NFS, with Fibre Channel being the fastest.  For writes NFS was the slowest, iSCSI was next, and Fibre Channel was again the fastest.  For IOPs throughput Fibre Channel had the most bandwidth, followed by NFS, with iSCSI being the slowest.  (I don't have the config of the network speeds used in the tests.)

Check back tomorrow for more information about day 2.

Denny
