PowerCLI Mass Add Hard Disks to Virtual Machine

While doing some iSCSI LUN testing for a certain storage vendor, I was looking for a way to add multiple hard disks to a single VM, one across each iSCSI LUN whose name matched a certain pattern. In my case, all the LUNs I was testing against had the full LUN path in their names, so they were similar to lun1.naa.600144f0dcb8480000005142553e0001 (thanks to Alan Renouf’s post “PowerCLI: Mass provision datastore’s” for guidance on scripting datastore creation).

However, I do not have all LUNs mapped to every vSphere host. That’s easy enough to get around in PowerCLI. The following script prompts for the virtual machine name, disk size, and hard disk format, then filters the datastores by that VM’s vSphere host and our common string in the datastore name.

$vmname = read-host "VM Name to add disks to"
$vm = get-vm $vmname
$size = read-host "Disk Size (GB)"
$format = read-host "Disk Format (thin, thick, EagerZeroedThick)"

$datastores = $vm | Get-VMHost | Get-Datastore | Where-Object {$_.Name -like "lun*naa*"}

foreach ($item in $datastores){
    $datastore = $item.Name
    write-host "Adding new $size GB VMDK to $vm on datastore $datastore"
    New-HardDisk -VM $vm -CapacityGB $size -Datastore $datastore -StorageFormat $format
}

There are a lot of parameters for the New-HardDisk cmdlet that I don’t specify because the defaults were already what I wanted (e.g. Persistence, Controller, DiskType, etc.). Others, like StorageFormat, which defaults to Thick Lazy Zeroed, I wanted to control.
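To illustrate, here is a quick sketch of overriding a couple of those defaults in one call. The VM name and datastore name are placeholders, not from my environment, and it assumes you’re already connected to vCenter with Connect-VIServer:

$vm = Get-VM "TestVM"   # "TestVM" is a made-up name
# Create a 10 GB eager-zeroed disk and make it independent-persistent,
# so VM snapshots won't include it.
New-HardDisk -VM $vm -CapacityGB 10 -Datastore "datastore1" `
    -StorageFormat EagerZeroedThick -Persistence IndependentPersistent

The -Persistence values (Persistent, IndependentPersistent, IndependentNonPersistent) are worth knowing about if you ever put scratch disks on a VM you snapshot regularly.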

In another case, I wanted to add multiple disks from one datastore to a single VM.

### Get VM/Disk Count/Datastore information ###
$vmname = read-host "VM Name to add disks to"
$num_disks = read-host "number of disks to add"
$ds = read-host "Datastore to place the VMDK"
$format = read-host "Disk Format (thin, thick, EagerZeroedThick)"
$size = read-host "Disk Size (GB)"

$vm = get-vm $vmname
$datastore = get-datastore -name $ds

### Add $num_disks disks to VM ###
$x = 0
while ($x -lt [int]$num_disks){
    write-host "Adding $size GB VMDK to $vm on datastore $datastore"
    New-HardDisk -VM $vm -CapacityGB $size -Datastore $datastore -StorageFormat $format
    $x++
}
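As an aside, PowerShell’s range operator can stand in for a manual counter. This sketch assumes the same $vm, $datastore, $size, $format, and $num_disks variables gathered above:

# Same effect as a counter loop, using a range instead.
# $_ is the current disk number (1 through $num_disks).
1..[int]$num_disks | ForEach-Object {
    Write-Host "Adding disk $_ of $num_disks to $vm"
    New-HardDisk -VM $vm -CapacityGB $size -Datastore $datastore -StorageFormat $format
}

The [int] cast matters either way, since read-host returns a string.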

You can read more about the New-HardDisk cmdlet at:

vSphere 5 Fab 2

Well, the announcement came and went for vSphere 5.0 yesterday, and a lot of new technology and new capability was put out there. You may have also heard of the new licensing scheme, but I’m not going to cover that yet, as I want to take more time to evaluate how it will impact me (I’m currently in stage 2 of The Five Stages of VMware Licensing Grief). Here are some quick hits on the 2 new technologies that will primarily affect me, a small shop in a small EDU:

New vMotion (aka Storage DRS goodness)

svMotion has a new copy mechanism that now allows migrating storage for guests that have snapshots or linked clones. A mirror driver on the destination datastore also holds all the changes made during a copy, so when the copy is done, the changes are synced from the mirror rather than having to make several passes back to the original datastore. This should decrease svMotion times by quite a bit.

Expanding on the amazing DRS feature for VM/host load balancing, Storage DRS brings the same capability to storage. Although this is all wrapped up in the new and improved Storage vMotion, it could stand alone as quite the feature. As introduced with vSphere 4.1, if your storage vendor of choice supports VAAI (the storage acceleration APIs), this all happens on the SAN rather than over the network, bringing joy to your network admins.


VMFS-5

Lots of new features here.

  • 1MB block size – gone are the 1, 2, 4, and 8MB block sizes
  • 64TB datastores. Yes, 64. Yes, Terabytes
  • Sub-blocks down to 8KB from 64KB. Smaller files stay small
  • Speaking of smaller files, files smaller than 1KB are now kept in the file descriptor location until they grow beyond 1KB
  • 100,000 file limit, up from 30,000
  • ATS (part of the locking feature of VAAI) improvements, which should lend themselves to more VMs per datastore

VMFS-3 file systems can be upgraded straight to VMFS-5 while the VMs are still running. VMware is calling this an “online & non-disruptive upgrade operation”.
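If memory serves, the upgrade can be kicked off from the ESXi 5.0 shell with esxcli; treat the exact sub-command syntax and the datastore label below as assumptions to verify against your own hosts before running:

# List mounted filesystems to see which are still VMFS-3.
esxcli storage filesystem list

# Upgrade a VMFS-3 volume to VMFS-5 in place.
# "datastore1" is a placeholder volume label.
esxcli storage vmfs upgrade -l datastore1

Note the upgrade is one-way, and an upgraded datastore keeps its original block size rather than getting the new unified 1MB one, so a fresh format is still preferable where you can manage it.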

A couple of holdover limitations for a VMFS-5 datastore:

  • 2TB file size limit for a single VMDK and for non-passthrough RDMs (a passthrough RDM can be the full 64TB)
  • Max LUNs is still 256 per host (I personally could never see hitting this, but I’m sure larger implementations can)

More vSphere 5 posts will be coming, but these are the 2 things that got me the most excited.

Dell Management Plug-in for vSphere

With the ever-growing complexity of our virtualization environment, it’s getting a bit unwieldy to manage all the disparate pieces (physical servers, virtual servers, storage, network, etc.). Actually, managing the pieces is getting easier; it’s managing the management pieces that’s becoming difficult. I’ve got SANHQ and Group Manager for my SAN, vCenter and Veeam for my vSphere, OpenManage for my Dell servers, and on and on. Anything that cuts down on the number of management infrastructure components is a godsend.

Enter the Dell™ Management Plug-In for VMware vCenter, which is billed as a way to “seamlessly manage both your physical and virtual infrastructure.” I’ve downloaded the trial (version 1.0.1) and will blog about my experience with it after I run it through some paces. The initial difference I see from the older version is that its download came with the Users Guide built in to the extract, but the new one did not. I had to go find it here, along with the Quick Install Guide and the Release Notes.