IP Multipathing Setup In Nexenta

A feature added in NexentaStor 3.1.4 is the ability to configure IP Multipathing (IPMP) groups via the management console (NMV) rather than having to drop to the shell and configure it manually.

IPMP has two purposes: fault-tolerance and outbound traffic load spreading. While there’s a lot of overlap between Link Aggregation and IPMP, there are some key differences. For more on that, you can read Nicolas Droux’s great write-up:
https://blogs.oracle.com/droux/entry/link_aggregation_vs_ip_multipathing.

By default, NMV creates IPMP groups with link-based failure detection rather than probe-based. Link-based detection is lighter-weight than probe-based because it relies on the link state reported by the interface drivers rather than on probes to a test IP address.
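
For context, here is roughly what an IPMP group looks like when built by hand at the shell. This is only a sketch using the classic ifconfig-based syntax from Solaris/illumos; the interface names, group name, and addresses are made up, and on 3.1.4 you can let NMV do all of this for you.

    # Link-based IPMP: put both (already plumbed) NICs in the same group.
    # Failure detection relies on the link state reported by the drivers.
    ifconfig e1000g0 group ipmp0 up
    ifconfig e1000g1 group ipmp0 up

    # The data address lives on one member and floats to the survivor on failure.
    ifconfig e1000g0 addif 10.10.10.50 netmask 255.255.255.0 up

    # Probe-based IPMP additionally needs a non-failover test address per NIC,
    # which in.mpathd uses to actively probe the network.
    ifconfig e1000g0 addif 10.10.10.51 netmask 255.255.255.0 deprecated -failover up
    ifconfig e1000g1 addif 10.10.10.52 netmask 255.255.255.0 deprecated -failover up

    # Check group health
    ipmpstat -g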


Nexenta VAAI-NAS Beta Released, NFS Hardware Acceleration


Along with the release of NexentaStor 3.1.4, Nexenta Systems today officially released the (very) Beta VAAI-NAS plugin for VMware vSphere 5.x via the community NexentaStor.org forums. VAAI-NAS support is still not widespread in the NAS world, and of the vendors that do offer it, not all support every primitive. You can search the VMware Compatibility Guide for vendors that are VAAI-NAS certified.

VAAI, to catch you up, is the suite of primitives (instructions) that allow vSphere to offload certain VM operations to the array. For NAS Hardware Acceleration, these are:

  • Full File Clone – Enables virtual disks to be cloned by the NAS device (but not ‘hot’, the VM must be powered off).
  • Native Snapshot Support – Allows creation of virtual machine snapshots to be offloaded to the array.
  • Extended Statistics – Shows actual space usage on NAS datastores (great for thin provisioning).
  • Reserve Space – Enables creation of thick virtual disk files on NAS.

Everything you wanted to know about VAAI (but were afraid to ask)
http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-Storage-API-Array-Integration.pdf

At this point, all of the primitives are working (or are supposed to be; it’s a beta, right?) except for Native Snapshots.
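
A quick way to see whether a host has actually picked up hardware acceleration on its NFS mounts is from the ESXi shell; the output should include a Hardware Acceleration column alongside each datastore:

    # List NFS datastores and their VAAI-NAS hardware acceleration status
    esxcli storage nfs list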

Here’s a quick tutorial to install the agent in NexentaStor and the plugin in VMware vSphere.
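
Until then, the vSphere side boils down to installing a VIB on each host. Here’s a rough sketch; the bundle filename below is purely a placeholder for whatever the beta download from the forums is actually called:

    # Beta/community VIBs usually require lowering the host acceptance level
    esxcli software acceptance set --level=CommunitySupported

    # Install the plugin from the offline bundle (filename is an example only),
    # then reboot the host so the new NAS plugin module loads
    esxcli software vib install -d /tmp/nexenta-vaai-nas-plugin-offline-bundle.zip
    reboot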


HP Tech Day 2012

This week I’m excited to be flying to Ft. Collins, Colorado for an HP Tech Day that will be hosting independent bloggers to take a look at the LeftHand and 3PAR products as well as their VMware integration. I’ve been to a couple of demos, read a couple of papers, and have had some conversations with people about these products, so what makes this trip special is that we get some good ol’ fashioned hands-on lab experience. There’s a chasm of a difference between seeing a product in a slide deck and being able to kick the tires yourself.

I’m also excited to meet a group of new bloggers/storage geeks. I’ve met a few of the guys at different events (Tech Field Day, VMworld, HP Cloud Tech Day, etc.) and on Twitter, and I’m excited to meet the rest:

Alastair Cooke, @DemitasseNZ, www.demitasse.co.nz
Brian Knudtson, @bknudtson, www.knudt.net/vblog
Ray Lucchesi, @raylucchesi, www.silvertonconsulting.com/blog
Howard Marks, @DeepStorageNet, www.deepstorage.net/WP-Save
John Obeto, @johnobeto, www.absolutelywindows.com
Justin Paul, @recklessop, www.jpaul.me
Jeffery Powers, @geekazine, www.geekazine.com
Derek Schauland, @webjunkie, techhelp.cybercreations.net
Rick Schlander, @vmrick, www.vmbulletin.com
Justin Vashisht, @3cVguy, 3cvguy.blog.com

The crew will be hosted by HP Storage Guru and all around good guy Calvin Zito (@HPStorageGuy).

As is all the rage for conferences and other intimate gatherings, a live stream of the event will be attempted. Keep an eye on Twitter for the hashtags #HPTechDay and/or #HPCI for the latest information and buzz about the event.

Can’t wait.

New Dell EqualLogic Arrays

Dell unveiled an update to two of their EqualLogic PS series array platforms today, along with their first sub-$10k array. The new PS6100 and PS4100 series arrays are a refresh of the PS6000 and PS4000 units. The new boxes are being touted as having up to a 67% improvement in I/O performance.

Here are the major new features for each:
PS4100
– shrinks down to 2U
– 24 x 2.5″ drives – up to 21.6TB
– 12 x 3.5″ drives – up to 32TB
– Now starting at under $10,000

PS6100
– 2U version with 24 x 2.5″ drives – up to 21.6TB
– New 4U design with 24 x 3.5″ drives – up to 72TB
– NEW Dedicated management port


Both arrays will ship with the latest 5.1 firmware and are certified for VMware’s vSphere 5.0 storage APIs (VASA, VAAI, etc.). The SSD options will go up to 400GB per drive, which I’m sure will push the PS4100 slightly over its $10,000 starting price.
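
As an aside, once one of these arrays is presented to a vSphere 5 host, you can confirm from the ESXi shell whether the block-side primitives are actually enabled. This is just the generic check, nothing EqualLogic-specific:

    # Shows ATS, Clone, Zero, and Delete primitive status for each attached device
    esxcli storage core device vaai status get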

This may sound lame, but the addition of the dedicated management port on the PS6100 is something that I’m very excited about. I never understood why there was one on the PS4000 but not the PS6000. It was maddening to lose 25% of my total network throughput on an array if I needed to attach it to a dedicated management network.

Being in the market for a Sumo (Dell’s monster EqualLogic PS6500 series array), I was hoping those would get the same refresh. Even though I knew it wasn’t going to be refreshed yet, I’m still a bit bummed that I may have to purchase one just before it gets its own upgrade.

vSphere 5 Fab 2

Well, the announcement came and went for vSphere 5.0 yesterday, and a lot of new technology and new capability was put out there. You may have also heard of the new licensing scheme, but I’m not going to cover that yet as I want to take more time to evaluate how it will impact me (though I’m currently in stage 2 of The Five Stages of VMware Licensing Grief). Here are some quick hits on two of the new technologies that will primarily affect me, a small shop in a small EDU:

New vMotion (aka Storage DRS goodness)

svMotion has a new copy mechanism that now allows migrating storage for guests that have snapshots or linked clones. A mirror driver on the destination datastore also keeps track of changes during the copy, so writes stay in sync as the copy runs rather than requiring several passes back to the original datastore. This should decrease svMotion times by quite a bit.

Expanding on the amazing DRS feature for VM/host load balancing, Storage DRS brings the same capability to storage. Although this is all wrapped up in the new and improved Storage vMotion, it could stand alone as quite the feature. And, as introduced with vSphere 4.1, if your storage vendor of choice supports VAAI (the storage acceleration APIs), this all happens on the SAN rather than over the network, bringing joy to your network admins.

VMFS-5

Lots of new features here. 

  • Unified 1MB block size – gone is the choice between 1, 2, 4, and 8MB block sizes
  • 64TB datastores. Yes, 64. Yes, terabytes
  • Sub-blocks down to 8k from 64k. Smaller files stay small
  • Speaking of smaller files, files smaller than 1k are now kept in the file descriptor location until they’re bigger than 1k
  • 100,000 file limit up from 30,000
  • ATS (part of the locking feature of VAAI) improvements. Should lend itself to more VMs per datastore

VMFS-3 file systems can be upgraded straight to VMFS-5 while the VMs are still running. VMware is calling this an “online & non-disruptive upgrade operation”.
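
For reference, the in-place upgrade is a one-liner from the ESXi shell, and checking the filesystem version before and after is cheap insurance (the datastore name below is a placeholder):

    # Report the VMFS version and block size for a datastore
    vmkfstools -Ph /vmfs/volumes/datastore1

    # Upgrade the datastore from VMFS-3 to VMFS-5 in place
    vmkfstools -T /vmfs/volumes/datastore1

Worth noting: an upgraded datastore keeps its original VMFS-3 block size rather than getting the unified 1MB size, so a fresh format is still the cleaner option where you can manage it.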

A couple of holdover limitations for a VMFS-5 datastore:

  • 2TB file size limit for a single VMDK and non-passthru RDM drives (passthru RDMs can be the full 64TB)
  • Max LUNs is still 256 per host (I personally could never see hitting this, but I’m sure larger implementations can)

More vSphere 5 posts will be coming, but these are the two things that got me the most excited.