New Dell EqualLogic Arrays

Dell unveiled an update to 2 of their EqualLogic PS series array platforms today along with their first sub-$10k array. The new PS6100 and PS4100 series arrays are a refresh of their PS6000 and PS4000 units. The new boxes are being touted as having up to a 67% improvement in I/O performance. 

Here are the major new features for each:
PS4100
– shrinks down to 2U
– 24 x 2.5″ drives – up to 21.6TB
– 12 x 3.5″ drives – up to 32TB
– Now starting at under $10,000

PS6100
– 2U version with 24 x 2.5″ drives – up to 21.6TB
– New 4U design with 24 x 3.5″ drives – up to 72TB
– NEW Dedicated management port

Both arrays will ship with the latest 5.1 firmware and are certified for VMware’s vSphere 5.0 storage APIs (VASA, VAAI, etc.). The SSD options will go up to 400GB per drive, which I’m sure will push the PS4100 slightly over its $10,000 starting price.
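
If you want to verify that your hosts actually see an array’s devices as VAAI-capable once it’s racked and presented, something like the following pyVmomi sketch can report the hardware-acceleration status per device. This is just a rough illustration; the vCenter name and credentials are placeholders, and I’m reading the vStorageSupport property the vSphere API exposes per SCSI LUN.

# Rough sketch: list each host's storage devices and their VAAI (hardware acceleration) status.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; don't skip cert checks in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun:
            # vStorageSupport is vStorageSupported / vStorageUnsupported / vStorageUnknown
            status = getattr(lun, "vStorageSupport", "unknown")
            print(f"{host.name}  {lun.displayName}  VAAI: {status}")
    view.Destroy()
finally:
    Disconnect(si)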

This may sound lame, but the addition of the dedicated management port on the PS6100 is something that I’m very excited about. I never understood why there was one on the PS4000 but not the PS6000. It was maddening to lose 25% of my total network throughput on an array if I needed to attach it to a dedicated management network.

Being in the market for a Sumo (Dell’s EqualLogic monster PS6500 series array), I was hoping it would get the same refresh. Even though I knew it wasn’t going to be refreshed yet, I’m still a bit bummed that I may have to purchase one just before it gets its own upgrade.

New Role and Opportunity

For the last 4 years I’ve operated as a Windows Systems Administrator, primarily focusing on (surprise!) Microsoft technologies – patching, security, Active Directory, Group Policy, etc. When I took this position, our virtualization environment was quite small, not very complex, not in need of much love or development, and not really my job. We had about 30 virtual machines: 4 hosts running ESX 2.5, all with internal or direct-attached storage, and 3 hosts running ESX 3.5 with still more internal storage and a single-controller NetApp FAS270 with a whopping 1.25TB of iSCSI storage! Those ESX 3.5 hosts weren’t even clustered.

With demands growing much faster than our budget (centralized backup, antivirus, patching, deployment, file and print services, CMS, LMS, better-than-just-POP email), it was obvious that we could no longer afford physical servers. We had neither the budget nor the physical space, power, or cooling, and had to come up with a better plan. Virtualization was the answer, and somebody had to do it. I fell in love with the technology and jumped right in. As most of you have probably experienced, it soon became the majority of my daily work.

We quickly added one more ESX 3.5 host, consolidated 2 of the ESX 2.5 hosts into the 3.5 hosts, added a second shelf to the NetApp (now all of 3.5TB) and added a Dell PowerVault MD1000 attached to a PowerEdge 1950 running Red Hat, serving as an NFS store (another 3TB).

Sounds great. We should be set, right? Boy was I wrong. I had no idea how fast we could chew through storage and host resources. With our NetApp nearing end of life (not to mention being well out of warranty), it was time to consider new storage and another host or 2. While we loved the performance of our NetApp, we couldn’t afford a system with multiple controllers, couldn’t afford death by licensed features, and found it difficult to administer. Through a process I won’t detail here, and at a price my Dell AE swore me to protect, we decided to migrate to and standardize on EqualLogic. So we purchased a PS6000XV for primary storage (6.5TB usable) and a PS4000X for replication.

We’re now sitting with a single ESXi 4.1 cluster with 5 hosts and 3 EqualLogic arrays in two groups. We’re still using the old NetApp iSCSI and MD1000 NFS SANs as tier 2 storage and now have a grand total of 26TB of storage (96TB more coming).

With the evolution of my workload and focus, as well as a new project building a remote data center in Houston as both a multi-site cluster and DR site, I was offered the new position of Sr. Systems Administrator – Virtualization and Storage, which I gladly accepted. While this in part realigns my job title and description with what I actually do and where the datacenter and IT services field is headed, it also adds more opportunities for growth. I will be taking on the role of Scrum Master (Scrum is our internal project management framework), operating as lead/backup technician for the rest of the Sys Admin team, and taking responsibility for server/service patch management oversight.

It’s big and a little bit scary, but if I’m not a little bit scared of what I’m doing, I get complacent and don’t learn nearly as much.

Here’s to being scared.

LA VMUG – vCenter Operations

The Los Angeles VMUG was held today at the DoubleTree Hotel at LAX, and the primary topic was a product discussion and demo of vCenter Operations. Much of the time was dedicated to the needs and gaps it fills.

The dilemma now is that we have essentially 3 layers: hardware, hypervisor, and OS/app. For each of those 3 layers there is a multitude of ways to monitor capacity, get health checks and gain deep visibility into performance metrics and bottlenecks. Bringing all of that together is the goal of vCenter Operations, along with the promise of capacity planning, compliance checks and change management.

vSphere 5 Licensing – post grief post

There have been gobs of reactions to VMware’s new license model announced last week, and the vast majority of them were negative. I will admit that I took part in some of the initial backlash. We sysadmins don’t like change, especially when we’ve engineered systems to maximize a certain licensing model and then that model changes. But then I started to think it’s possible, even likely, that I’m overreacting. Maybe it won’t have any effect on us. So I started doing the math. Licenses are still purchased per CPU, at minimum one for each socket in your system, with each license carrying a vRAM entitlement. For reference, one of the best license summaries I’ve found is on Alan Renouf’s blog, http://www.virtu-al.net (the following is borrowed from his post http://www.virtu-al.net/2011/07/14/vsphere-5-license-entitlements/).

License Type – vRAM Entitlement per license
Essentials – 24GB
Essentials Plus – 24GB
Standard – 24GB
Enterprise – 32GB
Enterprise Plus – 48GB

 

    We currently have Enterprise Licensing with the following specs:

5 hosts – each with 2 CPUs, and a total of 256GB physical RAM (pRAM from here on out)

70 virtual machines with a total of 140GB of virtual RAM (vRAM) allocated. But we also have about 20 powered-off virtual machines for test/dev with an average of 4GB RAM each, so the worst-case scenario for vRAM is 220GB.

    The Math

    10 Licensed CPUs @ 32GB Entitlement per CPU  = 320GB of RAM Entitlements.

So as you can see, at the moment we’re totally fine, primarily because we have too many hosts, which I plan to fix by eliminating 2 of them (pRAM is much cheaper these days than when we started building our cluster, which allows for greater consolidation). This works out like so:

    6 Licensed CPUs @ 32GB Entitlement per CPU  = 192GB of RAM Entitlements

So I’ll still have 52GB of vRAM headroom, but I’ll have to be careful with how many test/dev servers we turn on at the same time. I’m just glad there isn’t a ‘hard stop’ when you hit your entitlement limit. Still, I’m not excited to one day tell my CIO that I have to purchase more CPU licenses than we have CPUs.
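
To sanity-check those numbers (and to re-run them as we add or retire hosts), here’s a back-of-the-napkin Python sketch. The entitlement figures and the host/VM counts are the ones from this post; it’s just the arithmetic above, not output from any VMware tool.

# Back-of-the-napkin vRAM licensing math using the numbers from this post.
# Entitlements per CPU license (GB), per the vSphere 5 licensing table above.
ENTITLEMENT_GB = {"Essentials": 24, "Essentials Plus": 24, "Standard": 24,
                  "Enterprise": 32, "Enterprise Plus": 48}

def vram_headroom(edition, cpu_licenses, allocated_vram_gb):
    """Return (pool_gb, headroom_gb) for a given edition and license count."""
    pool = cpu_licenses * ENTITLEMENT_GB[edition]
    return pool, pool - allocated_vram_gb

# Today: 5 hosts x 2 CPUs = 10 Enterprise licenses, 140GB vRAM allocated
print(vram_headroom("Enterprise", 10, 140))  # (320, 180)
# After consolidating to 3 hosts (6 licenses), powered-on VMs only
print(vram_headroom("Enterprise", 6, 140))   # (192, 52)
# Worst case with every test/dev VM powered on (~220GB)
print(vram_headroom("Enterprise", 6, 220))   # (192, -28)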

    Not All Bad

I understand where VMware was going with this new model. One of the major tenets of the cloud is ‘pay for what you need/use, not for what you don’t’. VMware’s philosophy is now ‘license only the vRAM you need, not your pRAM’. And while I’ve lamented that provisioning new virtual machines is becoming more of a business decision, this isn’t all bad. VM sprawl is very much real. Especially with RAM and CPU speeds and feeds exploding for such a minimal cost increase, we vAdmins have to think less and less about what resources our machines actually need. New Windows Server 2008 R2? Ah, just go ahead and give it 8GB off the bat. Why not?

I think it will help admins and users think critically about the resources they allocate to machines and force people to ‘right-size’ them. Really, we’ll be thankful 5-10 years down the road if/when we migrate virtualization platforms. (oh no he didn’t)

For those that are adopting or have adopted a chargeback model, this will make it much easier to manage and explain the costs of your tiered environment. You want a big beefy server with tons of RAM? Groovy. That’ll be $90 per GB of RAM per year. But you can have all the CPU you need.
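
Purely to illustrate the chargeback idea (the $90/GB/year figure above is just my made-up example rate), the math is as simple as it sounds:

# Hypothetical chargeback math; the rate is the made-up example from above.
RATE_PER_GB_YEAR = 90  # dollars per GB of vRAM per year

def annual_charge(vram_gb, rate=RATE_PER_GB_YEAR):
    return vram_gb * rate

print(annual_charge(8))   # the "just give it 8GB" server: $720/year
print(annual_charge(32))  # the big beefy request: $2,880/year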

    Left Over Beef

• VMware says they removed “two physical constraints (core and physical RAM)”, but they introduced a virtual constraint, and I never ran into either of those 2 physical ones in the first place.
    • The 8GB vRAM limit on free ESXi might be a home/test lab buster. Aren’t all the other restrictions enough?
• I think VMware could do a lot by increasing the entitlements just a bit: 48GB for Enterprise and 64GB or 96GB for Enterprise+ would silence a lot of critics (but also cut into their profit margins)

But these are just my thoughts as a sysadmin in an SMB shop. There doesn’t seem to be a huge impact on us, yet. Oh, did I mention we’re in the beginning phases of a DR site? Yeah, this will influence some discussions there now.

    vSphere 5 Fab 2

Well, the announcement came and went for vSphere 5.0 yesterday, and a lot of new technology and new capability was put out there. You may have also heard of the new licensing scheme, but I’m not going to cover that yet as I want to take more time to evaluate how it will impact me (I’m currently in stage 2 of The Five Stages of VMware Licensing Grief). Here are some quick hits on the 2 new technologies that will primarily affect me, a small shop in a small EDU:

    New vMotion (aka Storage DRS goodness)

svMotion has a new copy mechanism that allows migrating storage for guests that have snapshots or linked clones. A mirror driver is also created on the destination datastore to hold all the changes made during a copy, so when the copy is done the changes are synced from the mirror rather than requiring several extra passes back to the original datastore. This should decrease svMotion times by quite a bit.
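
To make the mirror idea concrete, here’s a toy Python model (my own illustration of the concept as I understand it, not VMware code): blocks get one bulk copy pass, writes that land mid-copy are captured in a mirror on the destination, and a single final sync applies them instead of repeated delta passes.

# Toy model of a change mirror vs. iterative copy passes. Not VMware code.
import random

def migrate_with_mirror(source, write_stream):
    """One bulk copy pass; concurrent writes are captured in a mirror and applied once."""
    dest, mirror = {}, {}
    for block in list(source):
        dest[block] = source[block]       # bulk copy pass, block by block
        if write_stream:                  # a guest write lands mid-copy...
            b, data = write_stream.pop(0)
            source[b] = data
            mirror[b] = data              # ...and is captured in the mirror
    dest.update(mirror)                   # one final sync from the mirror
    return dest

blocks = {i: f"v0-{i}" for i in range(8)}
writes = [(random.randrange(8), f"v1-{n}") for n in range(4)]
result = migrate_with_mirror(dict(blocks), list(writes))
assert result == {**blocks, **dict(writes)}  # destination matches the live source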

Expanding on the amazing DRS feature for VM/host load balancing, Storage DRS brings the same capability to storage. Although this is all wrapped up in the new and improved Storage vMotion, it could stand alone as quite the feature. And, as introduced with vSphere 4.1, if your storage vendor of choice supports VAAI (the storage acceleration APIs), this all happens on the SAN rather than over the network, bringing joy to your network admins.

    VMFS-5

    Lots of new features here. 

• Unified 1MB block size – the old choice of 1, 2, 4 and 8MB block sizes is gone
• 64TB datastores. Yes, 64. Yes, terabytes
    • Sub-blocks down to 8k from 64k. Smaller files stay small
    • Speaking of smaller files, files smaller than 1k are now kept in the file descriptor location until they’re bigger than 1k
    • 100,000 file limit up from 30,000
    • ATS (part of the locking feature of VAAI) improvements. Should lend itself to more VMs per datastore

VMFS-3 file systems can be upgraded straight to VMFS-5 while the VMs are still running. VMware is calling this an “online & non-disruptive upgrade operation”.
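
Before kicking off any upgrades, I’ll want a quick inventory of which datastores are still on VMFS-3. Here’s a rough pyVmomi sketch of how I’d pull that; the vCenter name and credentials are placeholders, so adjust for your own environment.

# Rough sketch: list each VMFS datastore with its VMFS version and block size.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; don't skip cert checks in production
si = SmartConnect(host="vcenter.example.com", user="readonly", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        vmfs = getattr(ds.info, "vmfs", None)  # only VMFS datastores carry this attribute
        if vmfs:
            print(f"{ds.name}: VMFS {vmfs.version}, {vmfs.blockSizeMb}MB blocks")
    view.Destroy()
finally:
    Disconnect(si)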

A couple of holdover limitations for a VMFS-5 datastore:

• 2TB file size limit for a single VMDK and for non-passthru RDM drives (a passthru RDM can be the full 64TB)
• Max LUNs is still 256 per host (I personally could never see hitting this, but I’m sure larger implementations can)

    More vSphere 5 posts will be coming, but these are the 2 things that got me the most excited.

        Dell Management Plug-in for vSphere

With the ever-growing complexity of our virtualization environment, it’s getting a bit unwieldy to manage all the disparate pieces (physical servers, virtual servers, storage, network, etc.). Actually, managing the pieces is getting easier. It’s managing the management pieces that’s becoming difficult. I’ve got SANHQ and Group Manager for my SAN, vCenter/Veeam for my vSphere, OpenManage for my Dell servers, and on and on. Anything that cuts down on the number of management infrastructure components is a godsend.

Enter the Dell™ Management Plug-In for VMware vCenter, which is billed as a way to “seamlessly manage both your physical and virtual infrastructure.” I’ve downloaded the trial (version 1.0.1) and will blog about my experience with it after I run it through some paces. The initial difference I see from the older version is that its download (1.0.0.40) came with the User’s Guide built into the extract, but the new one does not. I had to go find it here, along with the Quick Install Guide and the Release Notes.