VMworld 2013: VMware Virtual SAN #STO5391

Before going into detail about this session, I must say that the speakers were hard to understand because of their English pronunciation. It would be good if they worked on this so that everybody can understand what they are saying.

This session, primarily presented by Christos Karamanolis, was all about VSAN, or Virtual SAN, the new feature introduced with vSphere 5.5.

In my opinion, VSAN is a replacement for the Virtual Storage Appliance (VSA) released with vSphere 5.0. VSAN could also compete with the solution offered by Nutanix, although the intelligence Nutanix has built into its software is more advanced than what VSAN offers.

For people not familiar with VSA: this solution turns the local storage in your ESXi hosts into a VSA datastore. With a minimum of two and a maximum of three ESXi hosts, you can create a VSA cluster. Hosts in this cluster replicate their VSA datastore to the adjacent ESXi hosts and provide high availability (and stable performance, since it’s local storage!). You can run the VSA datastore next to your other datastores and Storage vMotion VMs between datastore types.

VSAN is enabled at the ESXi cluster level and will by default claim all unused local storage on the hosts in that cluster. Advanced configuration lets an administrator claim only a specific amount or type of storage capacity. You don’t need to deploy extra VMs or vApps to use VSAN; it’s all built into vSphere 5.5.
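
To make the difference concrete, here is a minimal Python sketch of the two claiming modes as I understood them from the session; the disk inventory and helper names are made up for illustration.

```python
# Rough sketch of the default "claim everything unused" behaviour versus a
# manual selection. The disk inventory below is invented for illustration.

disks = [
    {"host": "esx01", "name": "naa.01", "type": "hdd", "size_gb": 900, "in_use": False},
    {"host": "esx01", "name": "naa.02", "type": "ssd", "size_gb": 200, "in_use": False},
    {"host": "esx02", "name": "naa.03", "type": "hdd", "size_gb": 900, "in_use": True},  # boot / local VMFS
    {"host": "esx02", "name": "naa.04", "type": "ssd", "size_gb": 200, "in_use": False},
]

def auto_claim(disks):
    """Default mode: take every disk that is not used for anything else."""
    return [d for d in disks if not d["in_use"]]

def manual_claim(disks, wanted_type=None, max_gb=None):
    """Advanced mode: restrict claiming to a drive type and/or a capacity budget."""
    claimed, used_gb = [], 0
    for d in auto_claim(disks):
        if wanted_type and d["type"] != wanted_type:
            continue
        if max_gb and used_gb + d["size_gb"] > max_gb:
            continue
        claimed.append(d)
        used_gb += d["size_gb"]
    return claimed

print(len(auto_claim(disks)), "disks claimed automatically")
print([d["name"] for d in manual_claim(disks, wanted_type="hdd")])
```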

Hardware (or software) RAID is not used; replication is used instead. For each VM you can define how many replicas should be kept, using storage profiles in vSphere. This way you can protect important VMs with more replicas and give less important VMs simpler protection.
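
As a thought experiment, here is a small Python sketch of what per-VM replica placement could look like; the host names, profile values and placement logic are made up and are not VSAN’s actual algorithm.

```python
# Toy placement: give every VM as many replicas as its storage profile asks
# for, each on a distinct host. Everything below is invented for illustration.

hosts = ["esx01", "esx02", "esx03"]

# Storage profile per VM: how many replicas of its data to keep.
profiles = {"db-vm": 3, "web-vm": 2, "test-vm": 1}

def place_replicas(vm, replica_count, hosts):
    """Spread a VM's replicas over distinct hosts; fail if the cluster is too small."""
    if replica_count > len(hosts):
        raise ValueError(f"{vm}: needs {replica_count} hosts, cluster has {len(hosts)}")
    # Offset the starting host per VM so replicas do not all pile up on esx01.
    start = abs(hash(vm)) % len(hosts)
    rotated = hosts[start:] + hosts[:start]
    return rotated[:replica_count]

for vm, replicas in profiles.items():
    print(vm, "->", place_replicas(vm, replicas, hosts))
```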

Replication keeps copies of the VMDK blocks spread over the ESXi hosts in the VSAN-enabled cluster and removes the need for a RAID configuration. The claimed local storage (SATA, SAS or SSD drives) is added to one big datastore presented by VSAN. This datastore grows in size as you add extra drives. You don’t need to keep all your hosts identical; you can add extra SSD or HDD drives to specific hosts to add extra capacity of a certain drive type. The same goes for compute, but you need at least one HDD and one SSD drive in a host to enable VSAN. A minimum of three vSphere ESXi 5.5 hosts is required to enable VSAN at the cluster level.
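
To put the capacity and minimum-requirement rules from the session into something tangible, here is a rough Python sketch; the hosts and drive sizes are invented and this is of course not the real VSAN logic.

```python
# Mirrors the rules described above: at least three hosts, each with at least
# one HDD and one SSD, and every claimed drive grows one big VSAN datastore.

cluster = {
    "esx01": {"hdd_gb": [900, 900], "ssd_gb": [200]},
    "esx02": {"hdd_gb": [900],      "ssd_gb": [200, 200]},
    "esx03": {"hdd_gb": [900, 900], "ssd_gb": [400]},
}

def can_enable_vsan(cluster):
    """Minimum of three hosts, each with at least one HDD and one SSD."""
    if len(cluster) < 3:
        return False
    return all(host["hdd_gb"] and host["ssd_gb"] for host in cluster.values())

def datastore_capacity_gb(cluster):
    """All claimed local drives end up in one big VSAN datastore."""
    return sum(sum(h["hdd_gb"]) + sum(h["ssd_gb"]) for h in cluster.values())

print("VSAN can be enabled:", can_enable_vsan(cluster))
print("Raw datastore size:", datastore_capacity_gb(cluster), "GB")

# Adding an extra drive to a single host simply grows the datastore:
cluster["esx02"]["hdd_gb"].append(900)
print("After adding a disk:", datastore_capacity_gb(cluster), "GB")
```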

The ESXi hosts need a 1 Gbit or 10 Gbit NIC and need a VMkernel network configured for VSAN traffic in order for VSAN to be enabled and operational. When you have a RAID controller in your host, configure it for JBOD (pass-through). As stated earlier, you should not combine RAID protection with VSAN.
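
A quick Python sketch of these per-host prerequisites, using made-up host data:

```python
# Checks the prerequisites mentioned above: at least a 1 Gbit NIC, a VMkernel
# port configured for VSAN traffic, and a controller presenting disks as JBOD.

hosts = {
    "esx01": {"nic_gbit": 10, "vsan_vmkernel": True,  "controller_mode": "JBOD"},
    "esx02": {"nic_gbit": 1,  "vsan_vmkernel": True,  "controller_mode": "JBOD"},
    "esx03": {"nic_gbit": 10, "vsan_vmkernel": False, "controller_mode": "RAID5"},
}

for name, h in hosts.items():
    ok = h["nic_gbit"] >= 1 and h["vsan_vmkernel"] and h["controller_mode"] == "JBOD"
    print(f"{name}: {'ready for VSAN' if ok else 'not ready'}")
```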

The vSphere hosts need to be on the VMware HCL to be supported for VSAN use. Of course, all of your hardware is listed on the HCL… right? 🙂

When replicas are used, write I/O goes to all replicas for that specific VM, while read I/O can be served by any replica. This I/O path provides both high performance and high availability.
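
Here is a toy Python model of that I/O path, with writes fanning out to every replica and reads served by any one of them; the VM and host names are made up.

```python
import random

# Which hosts hold a replica of each VM's data (invented for illustration).
replicas = {"db-vm": ["esx01", "esx02", "esx03"]}

def write(vm, block):
    # A write is only complete once every replica has received it.
    return {host: f"wrote block {block} of {vm}" for host in replicas[vm]}

def read(vm, block):
    # Any replica can answer, which spreads read load over the cluster.
    host = random.choice(replicas[vm])
    return f"read block {block} of {vm} from {host}"

print(write("db-vm", 42))
print(read("db-vm", 42))
```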

One last thing I would like to mention is that VSAN provides a detailed performance view which can be used for capacity management and troubleshooting.

When vSphere 5.5 becomes available for download, I will most certainly test out VSAN and see if it can indeed replace VSA.

Nutanix Virtual Computing Platform – Live demo

I would like to share some interesting information I gathered while attending a live demo of the Nutanix Virtual Computing Platform, hosted by Terach in Amersfoort.

Nutanix offers a combination of compute and storage in one piece of bare metal, delivering the following advantages:

  • Save space in your rack (four hypervisor nodes, including storage, consume only 2U of rack space)
  • Achieve high performance using local hard disks, SSDs and storage accelerators
  • Maintain high availability using the Nutanix Distributed Filesystem (NDFS)
  • Pay as you grow (start small and expand your environment as it grows, adding CPU, memory and storage with each node you add)

As said, you start off with one so-called ‘block’, a 2U server chassis providing consolidated power and cooling for a maximum of four ‘nodes’. Each node runs a hypervisor and comes with an amount of storage, memory and processor power that depends on the model you pick.
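
To show the pay-as-you-grow idea in numbers, here is a back-of-the-envelope Python sketch; the per-node specs are invented and are not actual Nutanix model figures.

```python
# Every node you add brings its own CPU, memory and storage to the cluster.
# The per-node numbers are made up; real figures depend on the model you buy.

node = {"cores": 16, "ram_gb": 128, "storage_tb": 5.0}

def cluster_totals(node, node_count):
    return {resource: amount * node_count for resource, amount in node.items()}

for nodes in (4, 8, 12):  # one, two and three fully populated 2U blocks
    print(nodes, "nodes:", cluster_totals(node, nodes))
```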

Every node has SATA disks, SSDs and a PCIe SSD. All I/O flows through the PCIe SSD first, providing low latency and fast access, and ends up on the SATA disks. When specific blocks are accessed regularly, they are marked as ‘hot’ and automatically placed on the SSDs. When those blocks become ‘cold’ again, they are moved back to the slower disks.
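
Here is a toy Python model of that tiering behaviour as I understood it from the demo; the thresholds and block names are made up and this is obviously not NDFS code.

```python
from collections import defaultdict

HOT_THRESHOLD = 3                     # reads within a window that make a block "hot"
access_count = defaultdict(int)       # reads per block in the current window
tier = defaultdict(lambda: "sata")    # where each block currently lives

def read_block(block):
    access_count[block] += 1
    if access_count[block] >= HOT_THRESHOLD and tier[block] == "sata":
        tier[block] = "ssd"           # promote a hot block to flash
    return f"block {block} served from {tier[block]}"

def end_of_window():
    """Periodically demote blocks that went cold and reset the counters."""
    for block in list(tier):
        if access_count[block] < HOT_THRESHOLD:
            tier[block] = "sata"
    access_count.clear()

for _ in range(3):
    print(read_block("A"))            # third read promotes block A to SSD
print(read_block("B"))                # block B stays on SATA
end_of_window()
print(read_block("A"))                # A is still on SSD after the reset
```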

Running VMs and storage in the same box, high availability, rack-space savings and great performance: this solution is something to consider when designing a new environment or when your existing environment needs an upgrade.

One thing I still don’t get is how Nutanix makes vSphere interact with the two 10 Gbit NICs. VMware best practices state that you should separate your traffic flows by using multiple (d)vSwitches and connecting two NICs to each for high availability. In the environments I know, using HP BladeSystem with Virtual Connect, you can configure up to eight so-called FlexNICs in vSphere, which we separate as follows: Management, vMotion, iSCSI and VM traffic (two NICs for each (d)vSwitch).

In the Nutanix setup, I wonder what would happen to the other traffic, such as management and more importantly VM traffic, if a vMotion were started, since vMotion always tries to utilize the full capacity of the NIC. There must be some sort of QoS or traffic shaping applied, because otherwise I am certain this would cause issues in production.
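
Purely as an assumption on my part, here is a small Python sketch of how a share-based scheme (in the spirit of vSphere Network I/O Control) would divide a saturated 10 Gbit link; the share values are made up.

```python
# Splits a congested uplink between traffic types according to their shares.
# This is only an illustration of the idea, not what Nutanix actually does.

LINK_GBIT = 10
shares = {"management": 10, "vmotion": 25, "storage": 50, "vm_traffic": 50}

def bandwidth_under_contention(shares, link_gbit):
    total = sum(shares.values())
    return {name: round(link_gbit * s / total, 2) for name, s in shares.items()}

for traffic, gbit in bandwidth_under_contention(shares, LINK_GBIT).items():
    print(f"{traffic}: {gbit} Gbit/s when the link is saturated")
```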

I’m very impressed by the Nutanix solution and hope to hear more from them soon. If I ever get my hands on a Nutanix block, I will provide you with more practical information about their solution!

Thanks for reading!