Tech Field Day 7 – Austin, TX

Just got word Tuesday that I’ll have the honor of being a delegate for Gestalt IT’s Tech Field Day 7, focusing on Datacenter IT Infrastructure. The event seeks to bring together some of the industry’s great thinkers, authors, bloggers, influencers and vendors to engage each other. You can read more about Tech Field Day at their site to get an idea of what these guys are about.

As excited as I am to get some pretty good face time with a few great vendors, I’m stoked about being able to meet some people in the IT community whom I’ve admired for quite a while. These are guys whose resources I’ve been reading for a good deal of information as I’ve built up my knowledge and experience, specifically in the virtualization and storage arenas. They are, in my mind, rock stars in the Datacenter IT world. I’m humbled to be brought in as a newer member of this event alongside some veterans. The complete list of delegates is:

The event this time will be in Austin, Texas on August 11th and 12th. The sponsors are Dell (it’s Austin, after all), Veeam, SolarWinds and Symantec – all vendors that I either currently use or have used in the past. Looking forward to our discussions, hands-on experience and feedback with them.

You can follow all the madness on Twitter with the #techfieldday hashtag, by following the delegates from the official Tech Field Day 7 List, or by keeping up with the TFD7 Links page.

Thank you to Stephen Foskett and Matt Simmons for organizing this, and to the vendors for their sponsorship and belief that this type of interaction with the community is worthwhile.

 

LA VMUG – vCenter Operations

The Los Angeles VMUG was held today at the DoubleTree Hotel at LAX, and the primary topic was a product discussion and demo of vCenter Operations. Much of the time was dedicated to what needs and gaps it fills.

 

The dilemma now is that we have essentially 3 layers: Hardware, Hypervisor, OS/App. For each of those 3 layers there are a multitude of ways to monitor capacity, get health checks and gain deep visibility into performance metrics and bottlenecks. This is the goal of vCenter Operations, along with the promise of capacity planning, compliance checks and change management.

Continue reading

vSphere 5 Licensing – post grief post

There have been gobs of reactions to VMware’s new license model announced last week, and the vast majority have been negative. I will admit that I took part in some of the initial backlash. We sysadmins don’t like change, especially when we’ve engineered systems to maximize a certain licensing model and then that model changes. But then I started to think it’s possible that I’m overreacting. Maybe it won’t have any effect on us. So I started doing the math. Licenses are still purchased per CPU, at minimum one for each socket in your system, with each license having a vRAM entitlement. For reference, one of the best license summaries I’ve found is on Alan Renouf’s blog http://www.virtu-al.net. (the following is borrowed from his blog post http://www.virtu-al.net/2011/07/14/vsphere-5-license-entitlements/)

vRAM entitlement per license, by license type:

• Essentials – 24GB
• Essentials Plus – 24GB
• Standard – 24GB
• Enterprise – 32GB
• Enterprise Plus – 48GB

 

    We currently have Enterprise Licensing with the following specs:

    5 hosts – each with 2 CPU and a total of 256GB physical RAM (pRAM here on out)

    70 Virtual Machines with a total of 140GB virtual RAM allocated (vRAM). But we also have about 20 powered-off virtual machines for test/dev with an average of 4GB RAM each, so the worst-case scenario for vRAM is 220GB.

    The Math

    10 Licensed CPUs @ 32GB Entitlement per CPU  = 320GB of RAM Entitlements.

    So as you can see, at the moment we’re totally fine, primarily because we have too many hosts, which I plan to fix by eliminating 2 hosts (pRAM is much cheaper these days than when we started building our cluster, which allows for greater consolidation). This works out like so:

    6 Licensed CPUs @ 32GB Entitlement per CPU  = 192GB of RAM Entitlements

    So I’ll still have 52GB of vRAM headroom, but I will have to be careful with how many test/dev servers we turn on at the same time. I’m just glad there isn’t a ‘hard stop’ when you hit your entitlement limit. I’m just not excited to one day tell my CIO that I have to purchase more CPU licenses than we have CPUs.
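The arithmetic above is simple enough to sketch as a quick script. The entitlement values come from the table earlier in the post; the host, CPU and VM numbers are this environment’s own, so swap in your own figures.

```python
# vRAM entitlement math for the vSphere 5 licensing model.
# Entitlements per license are from the summary table in this post.
ENTITLEMENT_GB = {
    "Essentials": 24,
    "Essentials Plus": 24,
    "Standard": 24,
    "Enterprise": 32,
    "Enterprise Plus": 48,
}

def vram_pool(licensed_cpus, edition):
    """Total pooled vRAM entitlement across all licensed CPUs."""
    return licensed_cpus * ENTITLEMENT_GB[edition]

# Current: 5 hosts x 2 sockets = 10 Enterprise licenses
current_pool = vram_pool(10, "Enterprise")   # 320GB

# Planned: 3 hosts x 2 sockets = 6 Enterprise licenses
planned_pool = vram_pool(6, "Enterprise")    # 192GB

# 140GB allocated to powered-on VMs, plus 20 test/dev VMs at 4GB each
worst_case_vram = 140 + 20 * 4               # 220GB if everything powers on
headroom = planned_pool - 140                # 52GB with test/dev powered off

print(current_pool, planned_pool, worst_case_vram, headroom)
```

Note the worst case (220GB) would actually exceed the planned 192GB pool, which is exactly why the powered-off test/dev machines need watching.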

    Not All Bad

    I understand where VMware was going with this new model. One of the major tenets of the Cloud is ‘pay for what you need/use, not what you don’t’. VMware’s philosophy is now ‘license only how much vRAM you need, not your pRAM’. While I’ve lamented that new virtual machines are becoming more of a business decision, this isn’t all bad. VM sprawl is very much real. Especially with RAM and CPU speeds and feeds exploding for such a minimal cost increase, we vAdmins have to think less and less about what resources our machines actually need. New Windows Server 2008 R2? Ah, just go ahead and give it 8GB off the bat. Why not?

    I think it will help admins and users think critically about the resources they allocate to machines and force people to ‘right-size’ them. Really, we’ll be thankful 5-10 years down the road when/if we migrate virtualization platforms. (oh no he didn’t)

    For those that are adopting or have adopted a chargeback model, this will make it much easier to manage and explain the costs of your tiered environment. You want a big beefy server with tons of RAM? Groovy. That’ll be $90/1GB RAM/year. But you can have all the CPU you need.
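As an illustration, a vRAM-based chargeback is just a per-GB rate times the allocation. The $90/GB/year figure below is the hypothetical rate from the paragraph above, not a real price.

```python
# Sketch of a vRAM-based chargeback calculation.
# The $90/GB/year rate is this post's hypothetical example, not a real price.
RATE_PER_GB_YEAR = 90

def annual_chargeback(vram_gb, rate=RATE_PER_GB_YEAR):
    """Yearly cost charged back to a group, based on allocated vRAM."""
    return vram_gb * rate

print(annual_chargeback(8))   # a right-sized 8GB VM: $720/year
print(annual_chargeback(32))  # a 'big beefy' 32GB VM: $2880/year
```

The nice part is that the bill tracks what the VM was actually given, so right-sizing shows up directly on the invoice.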

    Left Over Beef

    • VMware says they removed “two physical constraints (core and physical RAM)”, but they introduced a virtual constraint, and I never ran into either of those 2 physical ones.
    • The 8GB vRAM limit on free ESXi might be a home/test lab buster. Aren’t all the other restrictions enough?
    • I think VMware could do a lot by increasing the entitlements just a bit: 48GB for Enterprise and 64GB or 96GB for Enterprise Plus would silence a lot of critics (but would also cut into their profit margins)

    But these are just my thoughts as a sysadmin in an SMB shop. There doesn’t seem to be a huge impact on us, yet. Oh, did I mention we’re in the beginning phases of a DR site? Yeah, this will influence some discussions there now.

    vSphere 5 Fab 2

    Well, the announcement came and went for vSphere 5.0 yesterday, and a lot of new technology and new capability was put out there. You may have also heard of the new licensing scheme, but I’m not going to cover that yet as I want to take more time to evaluate how it will impact me (I’m currently in stage 2 of The Five Stages of VMware Licensing Grief). Here are some quick hits on the 2 pieces of new tech that will primarily affect me, a small shop in a small EDU:

    New vMotion (aka Storage DRS goodness)

    svMotion has a new copy mechanism that now allows for migrating storage for guests that have snapshots or linked clones. A mirror driver was also added that mirrors changes to the destination datastore during the copy, so when the copy is done the destination is already in sync rather than requiring several passes back to the original datastore. This should decrease svMotion times by quite a bit.

    Expanding on the amazing DRS feature for VM/host load balancing, Storage DRS brings the same capability to storage. Although this is all wrapped up in the new and improved Storage vMotion, it could stand alone as quite the feature. And as introduced with vSphere 4.1, if your storage vendor of choice supports VAAI (storage acceleration APIs), the data movement happens on the SAN rather than over the network, bringing joy to your network admins.

    VMFS-5

    Lots of new features here. 

    • Unified 1MB block size – gone are the 2, 4 and 8MB block sizes
    • 64TB datastores. Yes, 64. Yes, Terabytes
    • Sub-blocks down to 8k from 64k. Smaller files stay small
    • Speaking of smaller files, files smaller than 1k are now kept in the file descriptor location until they grow larger than 1k
    • 100,000 file limit, up from 30,000
    • ATS (part of the hardware-assisted locking feature of VAAI) improvements. Should lend itself to more VMs per datastore

    VMFS-3 file systems can be upgraded straight to VMFS-5 while the VMs are still running. VMware is calling this an “online & non-disruptive upgrade operation”.

    A couple of holdover limitations for a VMFS-5 datastore:

    • 2TB file size limit for a single VMDK and non-passthru RDM drives (a passthru RDM can be the full 64TB)
    • Max LUNs is still 256 per host (I personally could never see hitting this, but I’m sure larger implementations can)
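As a sanity check, those limits can be encoded in a small validator. The maximums below are VMware’s published VMFS-5 figures (64TB datastore, 2TB per VMDK or non-passthru RDM, 256 LUNs per host); the function and its name are just an illustration, not any VMware tooling.

```python
# Hypothetical validator for a proposed layout against VMFS-5 maximums:
# 64TB datastore, 2TB per VMDK / non-passthru RDM, 256 LUNs per host.
TB = 1024  # GB per TB, for readability

VMFS5_MAX_DATASTORE_GB = 64 * TB
VMFS5_MAX_VMDK_GB = 2 * TB
MAX_LUNS_PER_HOST = 256

def vmfs5_violations(vmdk_gb, datastore_gb, luns):
    """Return a list of limit violations for a proposed layout."""
    problems = []
    if vmdk_gb > VMFS5_MAX_VMDK_GB:
        problems.append("VMDK exceeds the 2TB file size limit")
    if datastore_gb > VMFS5_MAX_DATASTORE_GB:
        problems.append("datastore exceeds the 64TB limit")
    if luns > MAX_LUNS_PER_HOST:
        problems.append("more than 256 LUNs on one host")
    return problems

print(vmfs5_violations(vmdk_gb=1800, datastore_gb=10 * TB, luns=32))  # []
print(vmfs5_violations(vmdk_gb=4096, datastore_gb=10 * TB, luns=32))  # hits the 2TB VMDK limit
```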

    More vSphere 5 posts will be coming, but these are the 2 things that got me the most excited.

        Dell Management Plug-in for vSphere

        With our ever-growing complexity within our virtualization environment, it’s getting a bit unwieldy to manage all the disparate pieces (physical servers, virtual servers, storage, network, etc.). Actually, managing the pieces is getting easier. It’s managing the management pieces that’s becoming difficult. I’ve got SANHQ and Group Manager for my SAN, vCenter/Veeam for my vSphere, OpenManage for my Dell servers, and on and on. Anything that cuts down on the number of management infrastructure components is a godsend.

        Enter the Dell™ Management Plug-In for VMware vCenter, which is billed as a way to “seamlessly manage both your physical and virtual infrastructure.” I’ve downloaded the trial (version 1.0.1) and will blog about my experience with it after I run it through some paces. The initial difference I see from the older version is that the older version’s download (1.0.0.40) came with the User’s Guide built into the extract, but the new one did not. I had to go find it here along with the Quick Install Guide and the Release Notes.

         

        Oracle Hates Me (and most everyone)

        So, the bright side of Oracle hating the world (as evidenced through their arcane licensing structures) is the chance to get to do some creative technological circus acts and learn a lot in the process.

        Here’s my original configuration:
        Oracle 10g installed on a bare metal Dell PowerEdge 2850. Single Socket, Single Core, Hyperthreading turned off.

        Why? Licensing. Even with Oracle’s ‘generous’ educational discounts, we cannot afford anything more. We license per core rather than per user/connection out of cost considerations. While this isn’t a terrible setup, it allows for no other protections than backup (to tape, currently). Oracle does not allow you to bring much into the equation for data protection/redundancy/resiliency without, what? Oh yeah, more licensing. Want to replicate? License. Want to virtualize? Nope, gotta license all of your hosts’ cores. Ridiculous. Now, I’ve read a bunch of blogs and heard a bunch of users talk about ways of using cluster tweaks to fall within Oracle’s virtualization scheme, but our Oracle rep (who likes to visit us often to check for compliance) has yet to confirm that this does in fact fall within compliance.

        Continue reading

        Boats with Roger

        Ok, this is in response to a twitter thread

        Whilst at the Dell Storage Forum in Orlando, Roger Lund, Kristy Wilke and I had organized a #storagebeers and had even gotten a sponsor! (once again, thank you Data Media Solutions). It was all to go down at 8pm in Downtown Disney. Well, Roger, Kyle Murley and I decide to start walking over just before 8 to grab a spot. As we walk closer to the lake, Roger says, “Hey, let’s take the boat across. It’ll drop us off right by the pub and we won’t have to walk very far. It’ll be fast.”

        Having never been to Disney World ourselves, we put all our faith and trust in Roger. We board the boat, and we are the last 3 on. WHEW! Just made it. What a time saver. So then the boat leaves, and it starts out heading AWAY from the dock we want to go to. Roger says, “Don’t worry, I’m sure it’s taking the long way around the lake.” Well, all of a sudden we’re headed down a canal in the very opposite direction of the place we want to be. And the worst part is that this boat is going 2 freaking miles per hour!!

        At this point, it’s already 8:00. We’re late to the event we organized! And worse, we’re late for BEER!

        Turns out, there are two boats. One goes to the dock by the pub, the other goes to the freaking French Quarter! We get off at the French Quarter stop and have to hail a taxi back to Downtown Disney.

        We finally arrived around 8:30, and yes, we drank….heavily.

        Us on a boat:

        Finally Blogging

        Ok, so people have been on me to start blogging (*cough* Jonathan – @s1xth, Roger – @rogerlund, Gina – @gminks). Something about me gleaning information from other awesome blogs but not contributing anything of my own. Total jerk move. So here it is. Or at least, where it will be. Topics that I’ll try to cover (though don’t hold me to it):

        – Virtualization
        – Storage
        – General Systems Administration
        – Intersection of Community and Technology
        – Technical trouble I get myself into, and hopefully out of 

        Disclaimer: this may be extremely biased based on the technology I currently use (weird, I know).

        For now, look at my dog smile for the camera.
