vSphere 5 Licensing – post-grief post

There have been gobs of reactions to VMware’s new license model announced last week, and the vast majority of them were negative. I will admit that I took part in some of the initial backlash. We sysadmins don’t like change, especially when we’ve engineered systems to maximize a certain licensing model and then that model changes. But then I started to think it’s possible, even likely, that I’m overreacting. Maybe it won’t have any effect on us. So I started doing the math. Licenses are still purchased per CPU, at minimum one for each socket in your system, with each license carrying a vRAM entitlement. For reference, one of the best license summaries I’ve found is on Alan Renouf’s blog http://www.virtu-al.net (the following is borrowed from his post http://www.virtu-al.net/2011/07/14/vsphere-5-license-entitlements/):

vRAM entitlement per license, by license type:

• Essentials – 24GB
• Essentials Plus – 24GB
• Standard – 24GB
• Enterprise – 32GB
• Enterprise Plus – 48GB

We currently have Enterprise licensing with the following specs:

5 hosts – each with 2 CPUs, and a total of 256GB physical RAM (pRAM from here on out)

70 virtual machines with a total of 140GB of virtual RAM (vRAM) allocated. But we also have about 20 powered-off virtual machines for test/dev with an average of 4GB of RAM each, so the worst-case scenario for vRAM is 220GB.

The Math

10 licensed CPUs @ 32GB entitlement per CPU = 320GB of vRAM entitlement.

So as you can see, at the moment we’re totally fine, primarily because we have too many hosts, which I plan to fix by eliminating 2 of them (pRAM is much cheaper these days than when we started building our cluster, which allows for greater consolidation). This works out like so:

6 licensed CPUs @ 32GB entitlement per CPU = 192GB of vRAM entitlement

So I’ll still have 52GB of vRAM headroom, but I’ll have to be careful with how many test/dev servers we turn on at the same time. I’m just glad there isn’t a ‘hard stop’ when you hit your entitlement limit. I’m just not looking forward to the day I tell my CIO that we have to purchase more CPU licenses than we have CPUs.
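To sanity-check the numbers above, here’s the same arithmetic as a quick script. It’s a back-of-the-envelope sketch only; the helper function is mine, not anything from VMware.

```python
# Back-of-the-envelope vRAM math using the numbers from this post.
# The helper is just my own sketch, not a VMware tool.

ENTITLEMENT_GB = 32             # Enterprise: 32GB vRAM per CPU license
allocated_vram = 140            # GB across the 70 powered-on VMs
worst_case_vram = 140 + 20 * 4  # plus 20 test/dev VMs at ~4GB each = 220GB

def pool_gb(licensed_cpus):
    """Total pooled vRAM entitlement across all CPU licenses."""
    return licensed_cpus * ENTITLEMENT_GB

for cpus, label in [(10, "current (5 hosts)"), (6, "consolidated (3 hosts)")]:
    pool = pool_gb(cpus)
    print(f"{label}: {pool}GB pool, "
          f"{pool - allocated_vram}GB headroom today, "
          f"{pool - worst_case_vram}GB at worst case")
```

The consolidated worst case comes out 28GB in the hole, which is exactly why the test/dev power-on count matters.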

Not All Bad

I understand where VMware was going with this new model. One of the major tenets of the cloud is ‘pay for what you need/use, not for what you don’t’. VMware’s philosophy is now ‘license only the vRAM you need, not your pRAM’. As much as I lament that new virtual machines are becoming more of a business decision, this isn’t all bad. VM sprawl is very much real, especially with RAM and CPU speeds and feeds exploding for such a minimal cost increase; we vAdmins have to think less and less about what resources our machines actually need. New Windows Server 2008 R2? Ah, just go ahead and give it 8GB off the bat. Why not?

I think it will help admins and users think critically about the resources they allocate to machines and force people to ‘right-size’ them. Really, we’ll be thankful 5-10 years down the road if and when we migrate virtualization platforms. (oh no he didn’t)

For those that are adopting or have adopted a chargeback model, this will make it much easier to manage and explain the costs of your tiered environment. You want a big beefy server with tons of RAM? Groovy. That’ll be $90/1GB RAM/year. But you can have all the CPU you need.
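Sketched out, RAM-based chargeback is about as simple as billing gets. The $90/GB/year rate is the one from above; the VM names and the helper are made up for illustration.

```python
# Toy RAM-based chargeback using the $90/GB/year rate mentioned above.
# VM names and the function are illustrative, not from any real tool.

RATE_PER_GB_YEAR = 90

def annual_charge(vram_gb):
    return vram_gb * RATE_PER_GB_YEAR

for vm, vram in [("web-frontend", 4), ("db-server", 16), ("beefy-analytics", 48)]:
    print(f"{vm}: {vram}GB vRAM -> ${annual_charge(vram):,}/year")
```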

    Left Over Beef

    • VMware that says they removed “two physical constraints (core and physical RAM)”, but they introduced a virtual constraint and I never had either of those 2 previous ones.
    • The 8GB vRAM limit on free ESXi might be a home/test lab buster. Aren’t all the other restrictions enough?
    • I think VMware could do lot by increasing the entitlement just a bit: 48GB for Enterprise, 64GB or 96GB for Enterprise+ would silence a lot of critics (but also cut into their profit margins)

    But, these are just my thoughts as a sysadmin in an SMB shop. There doesn’t seem to be a huge impact us, yet. Oh, did I mention we’re in the beginning phases of a DR site? Yeah, this will influence some discussions there now. 

vSphere 5 Fab 2

Well, the announcement came and went for vSphere 5.0 yesterday, and a lot of new technology and new capability was put out there. You may have also heard of the new licensing scheme, but I’m not going to cover that yet, as I want to take more time to evaluate how it will impact me (I’m currently in stage 2 of The Five Stages of VMware Licensing Grief). Here are some quick hits on the 2 pieces of new tech that will primarily affect me, a small shop in a small EDU:

New vMotion (aka Storage DRS goodness)

svMotion has a new copy mechanism that now allows migrating storage for guests that have snapshots or linked clones. A mirror driver was also added that keeps the destination datastore current with all the changes made during a copy, so when the copy is done the changes are already synced from the mirror rather than requiring several passes back to the original datastore. This should decrease svMotion times by quite a bit.
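Here’s a toy sketch of why that single-pass approach works, as I understand it. This is a conceptual model only, not VMware’s actual implementation.

```python
# Conceptual model of mirrored writes during a storage migration; not
# VMware's actual code. Blocks are dict entries; a guest write that lands
# mid-copy is applied to BOTH datastores, so one bulk pass is enough.

def guest_write(block, value, source, dest, copy_in_progress):
    source[block] = value
    if copy_in_progress:
        dest[block] = value  # the mirror keeps the destination current

source = {i: f"data-{i}" for i in range(8)}
dest = {}

for i in sorted(source):          # single bulk copy pass
    dest[i] = source[i]
    if i == 3:                    # a guest write arrives mid-copy...
        guest_write(1, "updated", source, dest, copy_in_progress=True)

assert dest == source             # ...and no second pass is needed
print("copy complete in one pass")
```

Without the mirror, that mid-copy write would dirty a block behind the copy cursor and force another pass, which is exactly the old svMotion behavior.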

Expanding on the amazing DRS feature for VM/host load balancing, Storage DRS brings the same capability to storage. Although this is all wrapped up in the new and improved Storage vMotion, it could stand alone as quite the feature. And, as introduced with vSphere 4.1, if your storage vendor of choice supports VAAI (the storage acceleration APIs), this all happens on the SAN rather than over the network, bringing joy to your network admins.
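Conceptually, the initial-placement half boils down to something like the snippet below. It’s a free-space-only toy (real Storage DRS also weighs I/O latency), and the datastore names and numbers are made up.

```python
# Toy initial placement in the spirit of Storage DRS: pick the datastore
# with the most free space that fits the disk. Real Storage DRS also
# weighs I/O latency; names and numbers here are made up.

datastores = {
    "SAN-LUN-01": {"capacity_gb": 2000, "used_gb": 1700},
    "SAN-LUN-02": {"capacity_gb": 2000, "used_gb": 900},
    "SAN-LUN-03": {"capacity_gb": 1000, "used_gb": 400},
}

def place_disk(disk_gb):
    free = {n: d["capacity_gb"] - d["used_gb"] for n, d in datastores.items()}
    fits = {n: f for n, f in free.items() if f >= disk_gb}
    return max(fits, key=fits.get) if fits else None

print(place_disk(300))   # -> SAN-LUN-02 (1100GB free)
```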

VMFS-5

Lots of new features here.

• Unified 1MB block size – gone is the choice between 1, 2, 4, and 8MB block sizes
• 64TB datastores. Yes, 64. Yes, terabytes
• Sub-blocks down to 8k from 64k, so smaller files stay small (a quick sketch follows this list)
• Speaking of smaller files, files smaller than 1k are now kept in the file descriptor location until they grow bigger than 1k
• 100,000 file limit, up from 30,000
• ATS (part of the VAAI locking feature) improvements, which should lend themselves to more VMs per datastore
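As a rough illustration of that sub-block change, here’s the headline arithmetic, under my simplifying assumption that a file smaller than one sub-block consumes a whole sub-block; real VMFS allocation has more nuance than this.

```python
# Rough sketch of the small-file savings from 8k sub-blocks, assuming a
# file smaller than one sub-block consumes a whole sub-block. Real VMFS
# allocation has more nuance; this is just the headline arithmetic.

V3_SUB_BLOCK_KB = 64
V5_SUB_BLOCK_KB = 8

for file_kb in [2, 5, 7]:
    saved = V3_SUB_BLOCK_KB - V5_SUB_BLOCK_KB
    print(f"{file_kb}KB file: {V3_SUB_BLOCK_KB}KB on VMFS-3, "
          f"{V5_SUB_BLOCK_KB}KB on VMFS-5 ({saved}KB saved)")
```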

VMFS-3 file systems can be upgraded straight to VMFS-5 while the VMs are still running. VMware is calling this an “online & non-disruptive upgrade operation”.

A couple of holdover limitations for a VMFS-5 datastore:

• 2TB file size limit for a single VMDK and for non-passthru RDM drives (a passthru RDM can be the full 64TB)
• Max LUNs is still 256 per host (I personally could never see hitting this, but I’m sure larger implementations can)

More vSphere 5 posts will be coming, but these are the 2 things that got me the most excited.

Dell Management Plug-in for vSphere

With the ever-growing complexity of our virtualization environment, it’s getting a bit unwieldy to manage all the disparate pieces (physical servers, virtual servers, storage, network, etc.). Actually, managing the pieces is getting easier; it’s managing the management pieces that’s becoming difficult. I’ve got SANHQ and Group Manager for my SAN, vCenter/Veeam for my vSphere, OpenManage for my Dell servers, and on and on. Anything that cuts down on the number of management infrastructure components is a godsend.

Enter the Dell™ Management Plug-In for VMware vCenter, which is billed as a way to “seamlessly manage both your physical and virtual infrastructure.” I’ve downloaded the trial (version 1.0.1) and will blog about my experience with it after I run it through some paces. The initial difference I see from the older version is that the older version’s download (1.0.0.40) came with the User’s Guide built into the extract, but the new one did not; I had to go find it here, along with the Quick Install Guide and the Release Notes.