vCloud Director SDRS
Another feature introduced with vSphere 5.1 & vCloud Director 5.1 was the interoperability between vCloud Director & Storage DRS. vCloud Director can now use datastore clusters for the placement of vCloud vApps, letting Storage DRS do what it does best: choose the best datastore in the datastore cluster for the initial placement of the vApp, and then load balance the capacity and performance of the datastores through the use of Storage vMotion.
However, what about Fast Provisioned vCloud vApps, which are based on linked clones? Yes, this is also supported. Storage DRS now understands how to handle linked clone objects, which it didn't do previously.
Shadow VMs
To begin with, let's talk a little about Fast Provisioned vCloud vApps and shadow VMs. If a fast provisioned vCloud vApp is deployed from the catalog to a different datastore, a shadow VM of the vApp is instantiated from the catalog on that datastore. A shadow VM is an exact copy of the base disk. The fast provisioned vCloud vApp (which is effectively a linked clone) then references the local shadow VM on the same datastore; it does not reference the version of the vApp in the catalog. There is a great KB article here which discusses Fast Provisioned vCloud vApps in more detail.
However, I haven't seen much documentation on Storage DRS & vCloud Director 5.1 interoperability, so I decided to investigate some of the behaviour in my own environment. My setup had the Fast Provisioned option enabled for my ORG-VDC:
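To make the mechanism concrete, here is a minimal Python sketch of the decision vCloud Director effectively makes when fast provisioning a vApp. The class, function, datastore and disk names are all hypothetical, purely for illustration; this is not the vCloud Director API.

```python
# Illustrative model only: is a shadow VM needed on the target datastore?
class Datastore:
    def __init__(self, name):
        self.name = name
        self.local_base_disks = set()  # base disks / shadow VMs already on this datastore

def deploy_fast_provisioned_vapp(vapp_name, base_disk, target):
    """Return a description of the linked clone created on `target`."""
    if base_disk not in target.local_base_disks:
        # No local copy of the base disk: instantiate a shadow VM,
        # i.e. an exact copy of the base disk, on the target datastore.
        target.local_base_disks.add(base_disk)
        parent = f"new shadow VM for {base_disk} on {target.name}"
    else:
        # A base disk or shadow VM already exists locally; simply reference it.
        parent = f"existing copy of {base_disk} on {target.name}"
    # The vApp itself is just a delta (linked clone) pointing at the local parent.
    return {"vapp": vapp_name, "delta_disk_parent": parent}

# First deployment to a new datastore triggers a shadow VM;
# subsequent deployments to the same datastore reuse it.
ds = Datastore("vCloud-DS1")
print(deploy_fast_provisioned_vapp("vApp-01", "win7x64-base.vmdk", ds))
print(deploy_fast_provisioned_vapp("vApp-02", "win7x64-base.vmdk", ds))
```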
In this example, I imported a VM from vSphere into my catalog on a Storage DRS datastore cluster. I am using this catalog entry to fast provision (fp) vCloud vApps. I next deployed a vApp from my catalog, but changed the Storage profile so that a new, fully automated Storage DRS datastore cluster becomes the destination. I observed the clone virtual machine task running, and the shadow VM folder being created on one of the datastores in the datastore cluster. When the operation completed, I saw the shadow VM and the fp vCloud vApp deployed to that datastore in the datastore cluster:
I made some subsequent deployments of fp vCloud vApps to this datastore cluster to see if other shadow VMs would be instantiated automatically on other datastores. Note that fp vCloud vApps deployed to the same datastore in the datastore cluster do not need a new shadow VM; they simply reference the existing shadow VM as their base disk. My second fp vCloud vApp went to the same datastore. This is expected, as there would be some overhead and space consumption in instantiating another shadow VM. The next 4 fp vCloud vApps also went to the same datastore. At that point I decided to go to the Storage DRS Management view and Run Storage DRS Now. A number of Storage vMotion operations then executed and my fp vCloud vApps were balanced across all the datastores in the datastore cluster. One assumes that this would eventually have happened automatically, since Storage DRS needs a little time to gather enough information to determine the correct balancing of the datastore cluster.
Storage DRS Migration Decisions
In the case of fast provisioned vCloud vApps, Storage DRS will not recommend initial placement to a datastore which does not contain the base disk or a shadow VM, nor will it recommend migrating fast provisioned vCloud vApps to a datastore which does not contain the base disk or a shadow VM copy of the base disk. Preference is always given to datastores in the datastore cluster which already contain either the base disk or shadow VMs, as observed in my testing.
However, if Storage DRS capacity or latency thresholds are close to being exceeded on some datastores in the datastore cluster, Storage DRS can instantiate new shadow VMs on other datastores in the datastore cluster. This allows additional fp vCloud vApps to be initially placed on, or migrated to, those datastores. This is also what I observed during testing. When I clicked Run Storage DRS Now, I noticed new shadow VMs being instantiated on other datastores in the datastore cluster. Now fp vCloud vApps (based on linked clones) can be placed on any datastore in the datastore cluster.
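As a rough mental model (an assumed heuristic, not the actual Storage DRS algorithm), initial placement for a linked clone might look something like the following Python sketch; the threshold values and datastore attributes are made up for illustration:

```python
# Assumed heuristic only, not the real Storage DRS implementation:
# prefer datastores that already hold the base disk or a shadow VM,
# and only spill onto another datastore (triggering a new shadow VM)
# when space or latency thresholds are close to being breached.

def pick_datastore(datastores, space_threshold=0.80, latency_threshold_ms=15.0):
    # datastores: list of dicts with 'name', 'has_base_disk',
    # 'space_used' (fraction) and 'latency_ms' keys (values illustrative).
    preferred = [d for d in datastores
                 if d["has_base_disk"]
                 and d["space_used"] < space_threshold
                 and d["latency_ms"] < latency_threshold_ms]
    if preferred:
        # Cheapest option: reuse an existing base disk / shadow VM.
        return min(preferred, key=lambda d: d["space_used"]), False
    # Otherwise pick the least loaded datastore and flag that a new
    # shadow VM must be instantiated there first.
    target = min(datastores, key=lambda d: (d["space_used"], d["latency_ms"]))
    return target, True

datastores = [
    {"name": "ds1", "has_base_disk": True,  "space_used": 0.85, "latency_ms": 18.0},
    {"name": "ds2", "has_base_disk": False, "space_used": 0.40, "latency_ms": 5.0},
]
target, needs_new_shadow = pick_datastore(datastores)
print(target["name"], "needs new shadow VM:", needs_new_shadow)  # ds2 True
```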
If there is a cross-datastore linked clone configuration (created via APIs, for example) and the linked clone vCloud vApp references a base disk on a different datastore, you may find that Storage DRS will not surface recommendations for this vApp. In fact, such configurations should be avoided if you want to use Storage DRS with vCloud Director.
So what about migration decisions? The decision to migrate a VM depends on several factors, such as:
- The amount of data being moved
- The amount of space reduction in the source datastore
- The amount of additional space on the destination datastore.
For linked clones, these factors depend on whether or not the destination datastore already has a copy of the base disk, or whether a shadow VM must be instantiated. The new model in Storage DRS takes linked clone sharing into account when calculating the effects of potential moves.
On initial placement, putting the linked clone on a datastore without the base disk or a shadow VM is more costly (uses more space) than placing the clone on a datastore where the base disk or a shadow VM resides.
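To give a sense of why, here is a back-of-the-envelope sketch (in Python, with made-up numbers) of the additional space a placement or migration consumes depending on whether the destination already holds the base disk or a shadow VM:

```python
# Illustrative space-cost comparison; not the actual Storage DRS cost model.

def placement_space_cost_gb(delta_disk_gb, base_disk_gb, dest_has_base_or_shadow):
    """Extra space consumed on the destination datastore."""
    if dest_has_base_or_shadow:
        # Only the clone's delta disk moves; the shared parent is reused.
        return delta_disk_gb
    # A shadow VM (an exact copy of the base disk) must be instantiated too.
    return delta_disk_gb + base_disk_gb

# A 2 GB delta disk on top of a 40 GB base disk:
print(placement_space_cost_gb(2, 40, dest_has_base_or_shadow=True))   # 2
print(placement_space_cost_gb(2, 40, dest_has_base_or_shadow=False))  # 42
```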
Let's take it from the top. In my vSphere environment, I built out two datastore clusters, each with 3 datastores. Each datastore in its respective datastore cluster has the same storage capability associated with it. This is the only way storage profiles will work with datastore clusters: all datastores in the datastore cluster must have the same capability. The CloudDatastoreCluster datastores have the User-defined Storage Capability called Cloud-Store; the DatastoreClusterT2 datastores have the User-defined Storage Capability called Cloud-Store-T2.
My next step in the vSphere client is to create two separate VM Storage Profiles, each profile containing one of the capabilities.
That completes the setup from the vSphere side of things. Let's now see how this integrates with vCloud Director. The datastore clusters and storage profiles now show up as vSphere resources in the vCloud Director System > Home view:
When I create my ProviderVDC, I can now include the datastore clusters (we'll look at Storage DRS integration with vCloud Director at a later date) and the Storage Profiles.
The same is true now for your ORG-VDC; datastores and profiles can now be included at the ORG level. Be sure you assign a reasonable quota to your ORG-VDC or you might bump into this issue.

This vApp, win7x64-vApp, was deployed with the Cloud-Store-T2 profile selected, which means that the VM was deployed on a datastore with a matching capability, vCloud-DS6. That in itself is a great feature to have: you no longer need to worry about the specific underlying capabilities of the datastore; you simply select the correct profile and it determines that for you. Some simplified profile naming will make the provisioning of vApps from vCloud Director error free each and every time.
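Conceptually, the matching is straightforward: a VM Storage Profile names a capability, and only datastores tagged with that capability are compatible. The Python sketch below illustrates the idea; the datastore names other than vCloud-DS6 are hypothetical, and the logic is a simplification, not the actual profile-driven storage implementation:

```python
# Toy model of profile-to-datastore matching (illustrative only).

# Which user-defined storage capability each datastore is tagged with
# (vCloud-DS6 is from the example above; the others are hypothetical).
datastore_capabilities = {
    "vCloud-DS1": "Cloud-Store",
    "vCloud-DS2": "Cloud-Store",
    "vCloud-DS6": "Cloud-Store-T2",
}

# Each VM Storage Profile simply references one capability.
storage_profiles = {
    "Cloud-Store-Profile": "Cloud-Store",
    "Cloud-Store-T2": "Cloud-Store-T2",
}

def compatible_datastores(profile_name):
    capability = storage_profiles[profile_name]
    return [ds for ds, cap in datastore_capabilities.items() if cap == capability]

print(compatible_datastores("Cloud-Store-T2"))       # ['vCloud-DS6']
print(compatible_datastores("Cloud-Store-Profile"))  # ['vCloud-DS1', 'vCloud-DS2']
```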
What is even better, however, is the ability to change the VM's profile in its properties view. In this example, I change the storage profile from Cloud-Store-T2 to Cloud-Store-Profile (the other profile we defined earlier), & now the VM is automatically migrated to a new datastore with the matching capability:
In the Virtual Machine's status window in vCloud Director, we see a status change to Updating. In the vSphere Task Console, we can see a Relocate virtual machine task underway. So all we did was change the profile associated with the VM, and a migration operation was automatically initiated to move the VM to a compatible datastore. This is ideal for situations where an organization may start on a lower tier of storage, but after a while realise that they need a higher tier (or indeed vice-versa). The administrator simply changes the profile, and the VMs are seamlessly migrated to the new storage tier.
This is not really a storage feature per se, but I am including it in this series of vSphere 5.1 storage enhancements simply because most of the work to support this 5-node Microsoft cluster framework was done in the storage layer.
Although most of the framework has been in place since vSphere 4.0 (support for SCSI-3, the LSI SAS controller in the virtual hardware, support for PGRs), a number of additional improvements were required before we could scale out to supporting 5 nodes instead of the 2 nodes we supported in the past.
I also need to call out that this is for failover clusters only. In a failover cluster, if one of the cluster nodes fails, another node begins to provide service (a process known as failover). It should be noted that users will experience a temporary disruption in service when this occurs.
4-Node cluster testing was done in addition to 5-Node testing, because the quorum models used are different for these configurations:
- 4-Node clusters use a Node and Disk Majority Model
- 5-Node clusters use a Node Majority Model
In a 4-node cluster, we could end up in a situation where there are 2 nodes/votes on either side of the cluster. In this case, we use a majority node & disk set quorum, where the quorum data is stored locally on the system disk of each cluster node but is also stored on a shared disk accessible by all hosts. This shared disk (also known as the witness disk) has the deciding vote when there is a split-brain scenario.
The enhancements made in vSphere 5.1 allow up to 5 participating nodes. We tested both 4-node and 5-node configurations, as the quorum models are different depending on whether the number of nodes in the cluster is odd or even. A majority node set cluster can handle up to 2 node failures out of 5.
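The vote arithmetic behind those two quorum models can be sketched in a few lines of Python (a simplification that assumes the witness disk itself stays online; the authoritative rules are in the Microsoft failover clustering documentation):

```python
# Simplified majority-vote arithmetic for the two quorum models.

def node_failures_tolerated(nodes, witness_disk=False):
    """How many node failures a majority-based cluster survives,
    assuming the witness disk (if any) remains online."""
    votes = nodes + (1 if witness_disk else 0)
    votes_needed = votes // 2 + 1       # a strict majority must remain
    return votes - votes_needed

# 4-node cluster: Node and Disk Majority (the witness disk adds a vote).
print(node_failures_tolerated(4, witness_disk=True))   # 2
# 5-node cluster: Node Majority, no witness disk required.
print(node_failures_tolerated(5, witness_disk=False))  # 2
```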
Storage I/O Control (SIOC) was initially introduced in vSphere 4.1 to provide I/O prioritization of virtual machines running on a cluster of ESXi hosts that had access to shared storage. It extended the familiar constructs of shares and limits, which existed for CPU and memory, to address storage utilization through a dynamic allocation of I/O queue slots across a cluster of ESXi servers. The purpose of SIOC is to address the "noisy neighbour" problem, i.e. a low priority virtual machine impacting other higher priority virtual machines due to the nature of the application and its I/O running in that low priority VM.
vSphere 5.0 extended Storage I/O Control (SIOC) to provide cluster-wide I/O shares and limits for NFS datastores. This means that no single virtual machine should be able to create a bottleneck in any environment, regardless of the type of shared storage used. SIOC automatically throttles a virtual machine which is consuming a disparate amount of I/O bandwidth when the configured latency threshold has been exceeded. In the above example, the data mining virtual machine (which happens to reside on a different host) is the "noisy neighbour". To allow other virtual machines on the same datastore to receive their fair share of I/O bandwidth, a share based fairness mechanism has been created, which is now supported on both NFS and VMFS.
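The following Python sketch illustrates the general idea of share-based fairness: once the latency threshold is breached, device queue depth is divided across hosts in proportion to the I/O shares of the VMs they run. The numbers and host names are invented, and this is a conceptual simplification rather than the actual SIOC algorithm:

```python
# Conceptual sketch of proportional-share queue depth allocation
# (not the real SIOC implementation).

def allocate_queue_depth(total_queue_slots, shares_per_host):
    """shares_per_host: {host_name: sum of the I/O shares of its VMs}."""
    total_shares = sum(shares_per_host.values())
    return {host: round(total_queue_slots * shares / total_shares)
            for host, shares in shares_per_host.items()}

# Host A runs the low-priority 'data mining' VM (500 shares);
# Host B runs two higher-priority VMs (2000 shares each).
print(allocate_queue_depth(64, {"host-A": 500, "host-B": 4000}))
# {'host-A': 7, 'host-B': 57}
```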
The following are the new enhancements to Storage I/O Control in vSphere 5.1.
1. Stats Only Mode
SIOC is now turned on in stats only mode automatically. It doesn't enforce throttling, but it gathers statistics to assist Storage DRS. Storage DRS now has statistics in advance for new datastores being added to the datastore cluster & can get up to speed on those datastores' profiles/capabilities much quicker than before.
2. Automatic Threshold Computation
The default latency threshold for SIOC is 30 msec. Not all storage devices are created equal, so this default is set to a middle-of-the-road value. Certain devices will hit their natural contention point earlier than others, e.g. SSDs, in which case the threshold should be lowered by the user. However, manually determining the correct latency threshold can be difficult, which motivates the need for the threshold to be determined automatically, at the correct level for each device. Another enhancement is that SIOC is now turned on in stats only mode, which means that interesting statistics that are only presented when SIOC is enabled will now be available immediately.
When peak throughput is measured, latency is also measured. The latency threshold at which Storage I/O Control will kick in is then set to 90% of this peak value by default. vSphere administrators can change this 90% to another percentage value, or they can still input a millisecond value if they wish.
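Following the description above, the idea can be sketched as follows in Python; the sample measurements are invented, and the real SIOC injector workload and modelling are considerably more sophisticated:

```python
# Illustrative only: derive a per-device latency threshold from observed
# throughput/latency samples instead of using the 30 msec default.

def auto_latency_threshold(samples, percentage=0.90):
    """samples: list of (throughput_iops, latency_ms) measurements.
    Returns `percentage` of the latency observed at peak throughput."""
    peak_throughput, latency_at_peak = max(samples, key=lambda s: s[0])
    return percentage * latency_at_peak

# Hypothetical measurements for an SSD-backed datastore:
samples = [(5000, 2.0), (12000, 4.0), (20000, 6.5), (21000, 10.0)]
print(auto_latency_threshold(samples))  # 9.0 ms, well below the 30 msec default
```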
3. VMobservedLatency
I am a big fan of Storage I/O Control. I wrote a myth-busting article about it on the vSphere Storage blog some time back. I'd urge you all to try it out if you are in a position to do so.