Deploying Citrix XenDesktop on a Multi-Node HPE SimpliVity Cluster

We’ve all seen the “SimpliVity VM Data Access Not Optimized” warning on our VMs when the running instance is not aligned with a local storage copy. This article explains how this alignment works and which SimpliVity tweaks are required to make it work with Citrix XenDesktop.


Running VDI desktops on a Hyper-Converged Infrastructure solution such as SimpliVity (now owned by HPE) is very popular nowadays.

The size of these environments varies, but as an environment grows larger, management and maintenance can become very complex. One improvement you can make is to work with stateless desktops instead of stateful ones. This decreases complexity, as changes to desktops can be pushed with a single click and users receive them the next time they log in to their desktop.

Stateless Desktops on SimpliVity

Using stateless desktops on SimpliVity is supported, but in this particular case it required some extra thinking.

For those not familiar with the “SimpliVity VM Data Access Not Optimized” warning I mentioned earlier: it basically means that your VM is running on an ESXi host that holds no local copy of its storage.

SimpliVity also calls this “Proxy Mode”, meaning the VM talks to its storage over the network. This introduces some latency, but it shouldn’t be noticeable unless the number of VMs running in this state exceeds the capacity of your network link.
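The optimized-versus-proxy distinction boils down to a simple check: does the host running the VM also hold one of its storage copies? A minimal sketch (illustrative Python, not SimpliVity code; the host names and data structure are my assumptions):

```python
def data_access_state(running_host: str, copy_hosts: set[str]) -> str:
    """Classify a VM's storage access the way SimpliVity reports it:
    'optimized' if the host running the VM also holds a storage copy,
    'proxy' if all copies live on other hosts (I/O crosses the network)."""
    return "optimized" if running_host in copy_hosts else "proxy"

# A VM whose copies sit on node01/node02 but which runs on node03 is proxied.
print(data_access_state("node03", {"node01", "node02"}))  # proxy
print(data_access_state("node01", {"node01", "node02"}))  # optimized
```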

In this case, almost every deployed VM was showing this warning as displayed in the following screenshot.


As this environment is going to host 600-700 VMs, this would cause a very yellow-ish screen because of all the warning signs. Not something you want, as all that noise makes it easy to miss when something really is wrong.

Solutions and Mechanisms

Below are several ways of solving this issue, along with the mechanisms that are in place to keep things balanced and healthy.

Single or Dual Node

It’s possible to deploy VDI desktops to a single-node or dual-node cluster. This way there will always be a storage copy (primary and secondary) on each node. When a VM is running on the host that holds the secondary copy, that copy will automatically be marked as primary.

vSphere DRS

One way of solving this is enabling vSphere DRS (Distributed Resource Scheduler). Together with SimpliVity’s IWO (Intelligent Workload Optimizer), this automatically moves VMs to the host holding one of their storage copies (primary or secondary).

However, in this case the customer did not have a vSphere Enterprise Plus license, and purchasing one for 12 sockets is rather expensive.

Manual Rebalance

A manual rebalance is something SimpliVity Support can help you with remotely if you have a valid support contract. Using the CLI, they can basically specify which storage copy needs to live where. Especially when the available free space across the nodes of your cluster is distributed unevenly, this might be a good approach.

Balancing does happen automatically, though: SimpliVity always balances based on the available free space on each node. Because of this, deploying a lot of VDI desktops might cause them to end up on the same node, due to the small size of each stateless VDI desktop and the speed at which they are deployed.
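To see why rapid deployments can pile onto one node, here is a minimal sketch of free-space-driven placement (my own illustration of the idea, not the actual SimpliVity algorithm): each small clone barely dents the free space of its target node, so the node that started out with the most free space keeps winning for a long run of placements.

```python
def place_clones(free_gb: dict[str, float], clone_gb: float, count: int) -> list[str]:
    """Greedily place each clone's primary copy on the node with the most
    free space, updating the remaining free space as we go."""
    placements = []
    free = dict(free_gb)
    for _ in range(count):
        target = max(free, key=free.get)  # node with the most free space wins
        placements.append(target)
        free[target] -= clone_gb
    return placements

# node01 starts with only 100 GB more free space, yet tiny 2 GB stateless
# clones mean it absorbs the vast majority of 60 placements in a row.
result = place_clones({"node01": 1000.0, "node02": 900.0}, clone_gb=2.0, count=60)
print(result.count("node01"), result.count("node02"))  # 55 5
```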

SimpliVity suggests deploying batches of 50 VDI desktops, so that the balancing mechanism has time to cope with these changes and spread the desktops out accordingly.
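A deployment loop honoring that suggestion could look like this sketch (the batch size of 50 comes from SimpliVity’s guidance; everything else, including the pause length and the print placeholder for your actual Citrix provisioning call, is an assumption to adapt to your environment):

```python
import time  # used for the optional pause between batches

def batch_ranges(total: int, batch_size: int = 50):
    """Yield (first, last) desktop numbers for each deployment batch."""
    for start in range(1, total + 1, batch_size):
        yield start, min(start + batch_size - 1, total)

# For a 650-desktop rollout: deploy a batch, then pause so the
# free-space balancer can spread the copies before the next batch.
for first, last in batch_ranges(650):
    print(f"deploy desktops {first}-{last}")
    # time.sleep(900)  # give the balancer time between batches
```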


A VM that is in proxy mode will trigger an event in the VM’s log inside vSphere. It should mention that the VM is running in a non-optimized mode and which two hosts currently hold the storage copies. Simply move your VM to either host to start running in optimized mode.


This piece of information is basically why I started writing this article.

SimpliVity Support is able to set two parameters on each OVC (OmniStack Virtual Controller) which ensure that each deployed VM gets its primary storage copy on the same host where the VM instance is running, and keeps it there.

These are the following two commands:


Stopping the VMs from immediately showing as “Data Access not Optimized”

svtcfgcli $SVTCONF -c /Balancing/LinkedCloneBalancer/Enabled false && restart svt-resourcebalancer

Stopping the VMs from only placing the primary and secondary copies on 2 nodes

These commands cannot be run by anyone other than SimpliVity Support. If you need these parameters set, make sure you request them through your SimpliVity Support channel.

OVC Tuning

Since SimpliVity started shipping exclusively on HPE hardware, new sizing guides have been made available. They are as follows:

  • X-Small Node: 4 vCPUs and 60 GB of RAM
  • Small Node: 4 vCPUs and 70 GB of RAM
  • Midsize Node: 6 vCPUs and 108 GB of RAM
  • Large Node: 7 vCPUs and 114 GB of RAM

Final Remarks

The final remark I want to make is that a stateless desktop configuration with Citrix (and probably with any vendor, as long as it works with a master image or copy) will cause a high read-IOPS load where the master image is stored.

As there is only one master image, all deployed VDI desktops point their base disk to this very same image, which lives on a single node and on a very specific set of blocks on your storage devices.

The storage device, or devices, holding this data will serve most of the read IOPS. This can impact performance, and wear on these pieces of hardware will also increase faster than on others.

Ways to prevent this are working with multiple master images (for example, one per node), using multiple clusters (which in essence also gives you multiple master images), or using full clones, so that each VM has its own dedicated set of disks.

All of these alternatives cause higher administrative overhead, so it really depends on your requirements.
