Changing Windows VM Boot Volume Block Size with vSphere Converter

First things first: this is unsupported by Microsoft, VMware, Amazon, Google, AOL, Geocities, John Madden, Edgar Allan Poe, and your mother. It's also not generally a good idea.

Please sign here x_______________________________

Storage providers (NAS, SAN, HCI, cloud, etc.) typically have preferred block sizes for the volumes created on top of their file systems. Alignment within a VM is also critical, and you can read a great post about alignment on Duncan Epping’s blog here. Applications also have best practices around formatted volume cluster sizes (or allocation units, or block sizes) based on their average I/O size. For instance, Microsoft highly recommends using a block size (allocation unit, cluster size; the terms will be used interchangeably here) of 64K on any volume containing a SQL Server database. More detailed info here.
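
If you just want to see what a volume is using today, or format a new (non-boot) data volume at 64K, PowerShell can do both. This is a hedged sketch: `Win32_Volume` exposes the cluster size as `BlockSize`, `Format-Volume` assumes a newer Windows with the Storage module, and the drive letter is only an example (on older systems, `fsutil fsinfo ntfsinfo C:` shows "Bytes Per Cluster" instead).

```powershell
# Show each volume's current allocation unit size (BlockSize is in bytes; 65536 = 64K)
Get-CimInstance -ClassName Win32_Volume |
    Select-Object DriveLetter, Label, FileSystem, BlockSize

# Format an example data volume (E:) with a 64K allocation unit.
# WARNING: this erases the volume; never run it against the boot drive.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536
```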

Most often, when deploying an application, you install the binaries on the root drive (C:\) and place the data on a secondary disk. Most applications allow this; some do not. For those instances where they do not, and the application has been installed on the boot drive (C:\), you're stuck with the cluster size chosen at installation (the default is 4K).

If the application cannot keep its data on a different device/directory than its binaries, and the binaries cannot be moved, you're typically stuck unless you want to reinstall and migrate the data.

If you want to live on the edge, you might be able to convert the boot drive’s block size using VMware’s free vSphere Converter.

This is a step-by-step walkthrough of that process.

Upgrading vCenter Operations Manager OS to SLES 11 SP2

With the release of vCenter Operations Manager 5.8 (now at 5.8.1), the appliance's underlying OS also needs a bit of patching, which makes sense, since SLES 11 has been out for a while (January 2013). It's a pretty simple upgrade, but you have to do it from the OS itself, not the vC Ops admin console.

Change vCloud vApp/VM Storage Profile with PowerCLI

VMware has done a lot to open up the vCloud APIs with the 5.1 release; however, it still leaves much to be desired. One of the nicer additions is the ability to change the storage profile for a VM. However, you need to know the HREF of the storage profile you want to change to. This wasn't so easy to get (I would love to be able to use a "Get-StorageProfile" PowerCLI cmdlet), but thankfully, Jake Robinson (@jakerobinson) and the VMware Community came to the rescue:

This script uses PowerCLI for Tenants (which cannot be installed on the same box as the 'regular' PowerCLI). Taking his prompt to build an XML document from an HTTP GET against a vCloud HREF, we can retrieve the storage profiles from any Org vDC you have rights to. From this XML, we can assign a storage profile to a VM (or, in this case, every VM in a vApp) based on its name and the Org you're logged into. I modified his script a little bit, because if we pass an Org to the function, we don't get the storage profiles, but if we pass an Org vDC HREF, we automatically get the storage profiles (storage profiles are assigned to Org vDCs, not globally to an Org). This reduces the number of function calls needed.

All this script needs is your vApp name and desired Storage Profile name.

What this also addresses is the ability to migrate all vCloud VMs off of the “*Any” Storage Profile.

# This function does an HTTP GET against the vCloud 5.1 API using our current API session.
# It accepts any vCloud HREF.
function Get-vCloud51($href)
{
    $request = [System.Net.HttpWebRequest]::Create($href)
    $request.Accept = "application/*+xml;version=5.1"
    $response = $request.GetResponse()
    $streamReader = New-Object System.IO.StreamReader($response.GetResponseStream())
    $xmldata = $streamReader.ReadToEnd()
    # Cast to [xml] so callers can walk the response as an XML document
    return [xml]$xmldata
}

# This function gets an Org vDC via the 1.5 API, then the 5.1 API.
# It then returns the HREF for the storage profile matching $profileName.
function Get-storageHref($orgVdc,$profileName)
{
    $orgVdc51 = Get-vCloud51 $orgVdc.Href
    $storageProfileHref = $orgVdc51.Vdc.VdcStorageProfiles.VdcStorageProfile |
        Where-Object { $_.name -eq $profileName } |
        ForEach-Object { $_.href }
    return $storageProfileHref
}

# Get vApp, Storage Profile and OrgvDC names

$vappName = read-host "vApp name"
$profileName = read-host "Storage Profile"
$orgVdcName = read-host "Org vDC Name"

$orgVdc = get-orgvdc $orgVdcName

#Get storage profile HREF

$profileHref = Get-storageHref $orgVdc $profileName

# Change each VM's Storage Profile in the vApp

$CIvApp = Get-CIVApp $vappName
foreach ($CIVM in ($CIvApp | Get-CIVM)) {
    $newSettings = $CIVM.ExtensionData
    $newSettings.StorageProfile.name = "$profileName"
    $newSettings.StorageProfile.Href = "$profileHref"
    Write-Host "Changing the storage profile for $($CIVM.Name) to $profileName"
    # Push the modified settings back to vCloud Director
    $newSettings.UpdateServerData()
}

Solving vShield Edge Gateways Not Upgrading/Re-deploying after vSM 5.0.1 to 5.1.2 Upgrade

After upgrading from vCloud Director 1.5.1 to 5.1.2, vShield Manager 5.0.1 to 5.1.2, and vSphere 5.0 to 5.1.0, following all of the best-practices KBs for each, the time came to upgrade the vShield Edge Gateways to take advantage of some of the advanced capabilities and performance. When I attempted this via vCloud Director (right-click the Edge Gateway and choose 'Re-deploy'), I was met with this error message:

Cannot redeploy edge gateway BizDev External Network (urn:uuid:f1e69daa-7b56-4e8b-8713-549cfbe8c9f7) org.springframework.web.client.RestClientException: Redeploy failed: Edge connected to ‘dvportgroup-9622’ failed to upgrade.

Inspecting the vCloud Director debug logs revealed this:

2013-05-29 07:42:56,316 | DEBUG | nf-activity-pool-192 | LoggingRestTemplate | Created POST request for "" |

2013-05-29 07:42:56,316 | DEBUG | nf-activity-pool-192 | LoggingRestTemplate | Request::URI: method:POST |
2013-05-29 07:42:56,316 | DEBUG | nf-activity-pool-192 | LoggingRestTemplate | Request body :<none> |
2013-05-29 07:42:56,406 | WARN | nf-activity-pool-192 | LoggingRestTemplate | POST request for "" resulted in 404 (Not Found); invoking error handler |
2013-05-29 07:42:56,406 | ERROR | nf-activity-pool-192 | NetworkSecurityErrorHandler | Response error xml : <?xml version="1.0" encoding="UTF-8" standalone="yes"?><Errors><Error><code>70001</code><description>vShield Edge not installed for given networkID. Cannot proceed with the operation</description></Error></Errors> |
2013-05-29 07:42:56,407 | DEBUG | nf-activity-pool-192 | EdgeManagerSpock | Failed upgrading edge connected to dvportgroup-9622. |
com.vmware.vcloud.fabric.nsm.error.VsmException: vShield Edge not installed for given networkID. Cannot proceed with the operation

at com.vmware.vcloud.fabric.nsm.error.NetworkSecurityErrorHandler.processException(
 at com.vmware.vcloud.fabric.nsm.error.NetworkSecurityErrorHandler.handleError(
 at org.springframework.web.client.RestTemplate.handleResponseError(
 at org.springframework.web.client.RestTemplate.doExecute(
 at org.springframework.web.client.RestTemplate.execute(
 at org.springframework.web.client.RestTemplate.postForEntity(
 at java.util.concurrent.Executors$ Source)
 at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
 at Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$ Source)
 at Source)
2013-05-29 07:42:56,407 | ERROR | nf-activity-pool-192 | DeployGatewayActivity | [Activity Execution] Handle: urn:uuid:f1e69daa-7b56-4e8b-8713-549cfbe8c9f7, Current Phase:$GenerateBacking, ActivityExecutionState Parameter Names: [BACKING_SPEC, NDC, activitySupervisionRequest, com.vmware.activityEntityRecord.EntityId, REDEPLOY, DEPLOY_PARAMS] - Could not deploy gateway BizDev External Network |
org.springframework.web.client.RestClientException: Redeploy failed: Edge connected to 'dvportgroup-9622' failed to upgrade.

-- snip --
2013-05-29 07:42:56,437 | DEBUG | LocalTaskScheduler-Pool-31 | JobString | Job object - Object : BizDev External Network(com.vmware.vcloud.entity.gateway:d21b172b-b926-46e7-8e8b-07fb71843b18) operation name: NETWORK_GATEWAY_REDEPLOY | vcd=83908311-0f60-48e3-a2ec-f10f07c4f187,task=b6261962-0d14-48b0-836b-45fc0d68df65
2013-05-29 07:42:56,486 | DEBUG | LocalTaskScheduler-Pool-31 | CJob | No last pending job : [BizDev External Network(com.vmware.vcloud.entity.gateway:d21b172b-b926-46e7-8e8b-07fb71843b18)], status=[3] | vcd=83908311-0f60-48e3-a2ec-f10f07c4f187,task=b6261962-0d14-48b0-836b-45fc0d68df65
2013-05-29 07:42:56,487 | DEBUG | LocalTaskScheduler-Pool-31 | CJob | Update last job : [BizDev External Network(com.vmware.vcloud.entity.gateway:d21b172b-b926-46e7-8e8b-07fb71843b18)], status=[3], [5/29/13 7:42 AM] | vcd=83908311-0f60-48e3-a2ec-f10f07c4f187,task=b6261962-0d14-48b0-836b-45fc0d68df65
2013-05-29 07:42:56,487 | DEBUG | LocalTaskScheduler-Pool-31 | TaskServiceImpl | Cleaning busy entities for task 'b6261962-0d14-48b0-836b-45fc0d68df65' | vcd=83908311-0f60-48e3-a2ec-f10f07c4f187,task=b6261962-0d14-48b0-836b-45fc0d68df65
2013-05-29 07:42:56,488 | DEBUG | LocalTaskScheduler-Pool-31 | BusyObjectServiceImpl | Unsetting 1 busy entitie(s) for task ref NETWORK_GATEWAY_REDEPLOY(com.vmware.vcloud.entity.task:b6261962-0d14-48b0-836b-45fc0d68df65) | vcd=83908311-0f60-48e3-a2ec-f10f07c4f187,task=b6261962-0d14-48b0-836b-45fc0d68df65
2013-05-29 07:42:56,492 | DEBUG | LocalTaskScheduler-Pool-31 | TaskServiceImpl | Recorded completion of task 'NETWORK_GATEWAY_REDEPLOY(com.vmware.vcloud.entity.task:b6261962-0d14-48b0-836b-45fc0d68df65)' (retry count: 1) | vcd=83908311-0f60-48e3-a2ec-f10f07c4f187,task=b6261962-0d14-48b0-836b-45fc0d68df65
2013-05-29 07:42:56,494 | INFO | LocalTaskScheduler-Pool-31 | LocalTask | completed executing local task NETWORK_GATEWAY_REDEPLOY(com.vmware.vcloud.entity.task:b6261962-0d14-48b0-836b-45fc0d68df65) |

What I quickly realized is that this issue also affected the ability to modify any existing Edge Gateway IP/NAT/firewall/VPN settings. If it were just the upgrade that was affected, I probably would have left it for another day.

Through all my searching, I could not find anyone who had a solution that worked for me and most posts ended up saying “call VMware support”. Well, I’m a glutton for punishment and often don’t know when to give up, so I kept at it and I was able to get it working.

I shut down the new vShield Manager VM and rolled back to the snapshot I had taken of the original vShield Manager VM after the vCloud Director upgrade but before the vShield upgrade. I then started to go through the steps again in this VMware KB: Upgrading to vCloud Networking and Security 5.1.2a best practices guide, with a few deviations.

Even though I had enough space to run the main upgrade bundle, I ran the space-clearing maintenance bundle (VMware-vShield-Manager-upgrade-bundle-maintenance-5.0-939118.tar.gz) anyway. After that finished, I ran the main 5.1.2 upgrade bundle (VMware-vShield-Manager-upgrade-bundle-5.1.2-943471.tar.gz).

Before I did the backup, deploy-new-OVF, restore, maintenance-bundle-upgrade routine in the KB, I went through and upgraded each Edge Gateway (under the Edges dropdown in the vShield Manager web UI), which worked! In essence, this is a simple re-deploy of a new OVF for the gateway and a reconfiguration of the service template with the latest version from the new vShield Manager.

Then I installed the VMware-vShield-Manager-upgrade-bundle-maintenance-5.1.2-997359.tar.gz bundle. After everything was booted back up and stable, I:

  • stopped vCloud Director
  • took a backup of vSM
  • deployed the new vSM OVF
  • installed the VMware-vShield-Manager-upgrade-bundle-maintenance-5.1.2-997359.tar.gz bundle on the new install
  • restored the backup
  • re-registered vSM with vCenter
  • started vCD
  • re-registered vCD with vSM

Hope this helps someone out.

Upgrading to vCloud Director 5.1 with Existing Nested ESXi VMs

While my upgrade from vCloud Director 1.5.1 to 5.1 went on throughout the day, I started to have a sinking feeling that I wasn't going to be able to complete it with zero downtime for all of the VMs in the environment.

In our environment, a lot of training and product demos happen, and much of that relies on utilizing nested ESXi, similar to how VMware’s Hands On Labs are run at VMworld (and thankfully, now available online outside of the event).

William Lam has a great article on modifying your vCloud Director database to automatically pass the 'nested hypervisor' support flag to vCloud hosts as they're brought into vCD to be used as a resource, rather than having to modify each vSphere host's config file.

However, with vSphere 5.1, VMware changed how nested ESXi is enabled. It's now on a per-VM basis rather than a per-host basis. William's post "How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1" covers the changes and the new process quite well, so I won't cover that here.

The biggest kicker is that it requires the VM to be at VMware Hardware Version 9, which is new to vSphere 5.1. So, any current nested ESXi (or any other nested hypervisor) VM is running, at highest, Hardware Version 8.
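
For what it's worth, both pieces (the hardware upgrade and the per-VM flag) can be scripted with PowerCLI. A hedged sketch: the VM name is hypothetical, the VM must be powered off, and `vhv.enable` is the per-VM setting described in William's post.

```powershell
# Hypothetical VM name; power the VM off before making these changes
$vm = Get-VM -Name "nested-esxi-01"

# Upgrade the virtual hardware to version 9 (vSphere 5.1 and later only)
Set-VM -VM $vm -Version v9 -Confirm:$false

# Enable virtualized hardware virtualization (nested hypervisor) support;
# this adds vhv.enable = "TRUE" to the VM's configuration
New-AdvancedSetting -Entity $vm -Name "vhv.enable" -Value "TRUE" -Confirm:$false
```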

Change Virtual Machine SCSI Controller Type in NexentaStor VSA

Before I say anything, I shouldn’t need to say this, but I will. This is not supported. Now, on to the fun!

The current release of NexentaStor (v3.1.4.1) is made available as an OVA to make it easy to import into VMware environments. Currently this only "works" on full-blown vSphere hosts and not Fusion/Workstation/Player ("works" because, with some finagling, you can get it running in Fusion; I don't have access to Workstation/Player at the moment). OK, already getting off track. This OVA comes with the following hardware configuration:

  • 1 vCPU
  • 1 x 8GB hard drive (syspool), configured with a VMware Paravirtual controller
  • 1 virtual NIC, configured as a VMXNET3 device

Nexenta VAAI-NAS Beta Released, NFS Hardware Acceleration


Along with the release of NexentaStor 3.1.4, Nexenta Systems today officially released the (very) beta VAAI-NAS plugin for VMware vSphere 5.x via the community forums. VAAI-NAS is still not widely supported in the NAS world, and of the vendors that do support it, not all support all the primitives. You can search the VMware Compatibility Guide for vendors that are VAAI-NAS certified.

VAAI, to catch you up, is the suite of primitives (instructions) that allow vSphere to offload certain VM operations to the array. For NAS hardware acceleration, these are:

  • Full File Clone – Enables virtual disks to be cloned by the NAS device (but not 'hot'; the VM must be powered off).
  • Native Snapshot Support – Allows creation of virtual machine snapshots to be offloaded to the array.
  • Extended Statistics – Shows actual space usage on NAS datastores (great for thin provisioning).
  • Reserve Space – Enables creation of thick virtual disk files on NAS.

Everything you wanted to know about VAAI (but were afraid to ask)

At this point, all primitives are working (or are supposed to; it's beta, right?) save for Native Snapshot Support.
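
If you want to confirm what a host actually sees once the plugin is installed, ESXi 5.x has a couple of useful esxcli commands. A sketch (plugin and datastore names will differ per environment):

```shell
# List the VAAI plugins loaded on the host; the vendor's NAS plugin should appear
esxcli storage core plugin list --plugin-class=VAAI

# Show mounted NFS datastores, including their Hardware Acceleration status
esxcli storage nfs list
```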

Here's a quick tutorial on installing the agent in NexentaStor and the plugin in VMware vSphere.


PowerCLI Mass Add Hard Disks to Virtual Machine

While doing some iSCSI LUN testing for a certain storage vendor, I was looking for a way to add multiple hard disks to a single VM, one per iSCSI LUN whose name matched a certain pattern. In my case, all of the LUNs I was testing against had the full LUN path in their names, so they were similar to lun1.naa.600144f0dcb8480000005142553e0001 (thanks to Alan Renouf's post "PowerCLI: Mass provision datastore's" for guidance on scripting datastore creation).

However, I do not have all LUNs mapped to every vSphere host. This is easy enough to get around in PowerCLI. The following script prompts for the virtual machine name, disk size, and hard disk format, then filters the datastores by that VM's vSphere host and our common string in the datastore name.

$vmname = Read-Host "VM Name to add disks to"
$vm = Get-VM $vmname
$size = Read-Host "Disk Size (GB)"
$format = Read-Host "Disk Format (thin, thick, EagerZeroedThick)"

# Only datastores visible to this VM's host whose names match our LUN pattern
$datastores = $vm | Get-VMHost | Get-Datastore | Where-Object { $_.Name -like "lun*naa*" }

foreach ($item in $datastores) {
    $datastore = $item.Name
    Write-Host "Adding new $size GB VMDK to $vm on datastore $datastore"
    New-HardDisk -VM $vm -CapacityGB $size -Datastore $datastore -StorageFormat $format
}

There are a lot of parameters for the New-HardDisk cmdlet that I don't specify, because the defaults were what I wanted (e.g. Persistence, Controller, DiskType). Others, like StorageFormat (which defaults to thick lazy-zeroed), I did want to control.

In another case, I wanted to add multiple disks from one datastore to a vm.

### Get VM/Disk Count/Datastore information ###
$vmname = Read-Host "VM Name to add disks to"
$num_disks = Read-Host "Number of disks to add"
$ds = Read-Host "Datastore to place the VMDKs"
$format = Read-Host "Disk Format (thin, thick, EagerZeroedThick)"
$size = Read-Host "Disk Size (GB)"

$vm = Get-VM $vmname
$datastore = Get-Datastore -Name $ds

### Add $num_disks disks to the VM ###
$x = 0
while ($x -lt $num_disks) {
    Write-Host "Adding $size GB VMDK to $vm on datastore $datastore"
    New-HardDisk -VM $vm -CapacityGB $size -Datastore $datastore -StorageFormat $format
    $x++
}

You can read more about the New-HardDisk cmdlet in the PowerCLI documentation.

VMs Grayed Out (Inaccessible) After NFS Datastore Restored

[Added new workaround]

While working with a customer last week with Mike Letschin, we discovered an issue during one of their storage tests. It wasn’t a test that I’d normally seen done, but what the heck, let’s roll.

"What happens to all the VMs hosted on an NFS datastore when all NFS connectivity is lost for a certain period of time?"

Well, it turns out it depends on a couple of things. Was the VM powered on? How long was the NFS datastore unavailable?

Dell Management Plug-In for VMware vCenter Review

OK, I've had the plug-in running for a few weeks and have gone through some of its primary functions (firmware updates, inventory, monitoring, warranty retrieval, creating a hardware profile for deployment).

I'm not going to go through the initial setup; that's been covered pretty well elsewhere.

Here are the claimed major functionalities, with my notes on day-to-day usage, as well as some miscellaneous thoughts at the end.