Nutanix Virtual Computing Platform – Live demo

I would like to share some interesting information I gathered while attending a live demo about the Nutanix Virtual Computing Platform, hosted by Terach in Amersfoort.

Nutanix offers a combination of compute and storage in a single piece of bare metal, delivering the following advantages:

  • Save space in your rack (four hypervisors, including storage, consume only 2U of rack space)
  • Achieve high performance using local hard disks, SSDs and storage accelerators
  • Maintain high availability using the Nutanix Distributed Filesystem (NDFS)
  • Pay as you grow (start small and expand the environment as it grows, adding CPU, memory and storage with each node you add)

As said, you start off with a single so-called ‘block’: a 2U server chassis providing consolidated power and cooling for up to four ‘nodes’. Each node contains a hypervisor along with an amount of storage, memory and processing power that depends on the model you pick.

Every node has SATA disks, SSDs and a PCIe SSD. All I/O flows through the PCIe SSD first, providing low latency and fast access, before ultimately landing on the SATA disks. When specific blocks are accessed regularly, they are marked as ‘hot’ and automatically placed on the SSDs. When those blocks become ‘cold’ again, they are moved back to the slower disks.
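
To make the tiering idea concrete, here is a minimal sketch of how such hot/cold promotion could work in principle. The counters and thresholds are invented for illustration and say nothing about how NDFS actually decides this internally.

    from collections import defaultdict

    # Hypothetical thresholds, purely for illustration.
    PROMOTE_AFTER = 5    # accesses before a block counts as 'hot'
    DEMOTE_AFTER = 100   # ticks without access before a block goes 'cold'

    class TieringSketch:
        """Toy model: promote hot blocks to SSD, demote cold ones to SATA."""

        def __init__(self):
            self.tier = defaultdict(lambda: "sata")  # block id -> current tier
            self.hits = defaultdict(int)             # accesses per block
            self.last_access = {}                    # tick of last access
            self.clock = 0

        def access(self, block_id):
            self.clock += 1
            self.hits[block_id] += 1
            self.last_access[block_id] = self.clock
            if self.tier[block_id] == "sata" and self.hits[block_id] >= PROMOTE_AFTER:
                self.tier[block_id] = "ssd"          # block turned 'hot'

        def sweep(self):
            """Periodic pass that demotes blocks nobody touches anymore."""
            for block_id in [b for b, t in self.tier.items() if t == "ssd"]:
                if self.clock - self.last_access[block_id] > DEMOTE_AFTER:
                    self.tier[block_id] = "sata"     # block turned 'cold'
                    self.hits[block_id] = 0

    sim = TieringSketch()
    for _ in range(6):
        sim.access("block-42")
    print(sim.tier["block-42"])  # -> 'ssd' after repeated access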

Running VMs and storage in the same box, high availability, rack-space savings and great performance: this solution is something to consider when designing a new environment or when upgrading an existing one.

One thing I still don’t get is the way Nutanix makes vSphere interact with the two 10 Gbit NICs. VMware best practices state that you should separate your traffic flows across multiple (d)vSwitches, connecting two NICs to each for high-availability purposes. In the environments I know, using HP BladeSystem with VirtualConnect, you can configure up to eight so-called FlexNICs in vSphere, which we separate as follows: Management, vMotion, iSCSI and VM traffic (two NICs per (d)vSwitch). A sketch of that layout follows below.
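
For reference, the FlexNIC separation I described could be expressed as a simple mapping. The names below are my own labels, not an actual vSphere or VirtualConnect configuration format:

    # Hypothetical labels illustrating the FlexNIC layout described above;
    # not an actual vSphere or VirtualConnect configuration format.
    flexnic_layout = {
        "vSwitch-Management": ["flexnic0", "flexnic1"],  # host management
        "vSwitch-vMotion":    ["flexnic2", "flexnic3"],  # live migration
        "vSwitch-iSCSI":      ["flexnic4", "flexnic5"],  # storage traffic
        "vSwitch-VM":         ["flexnic6", "flexnic7"],  # virtual machine traffic
    }

    # Two uplinks per switch, so a single link failure never isolates a traffic type.
    for vswitch, nics in flexnic_layout.items():
        assert len(nics) == 2, "two uplinks per (d)vSwitch for high availability"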

In the Nutanix setup, I wonder what happens when a vMotion is started, which always tries to utilize the full capacity of the NIC: what becomes of the other traffic, like management and, more importantly, the VM traffic? Some sort of QoS or traffic shaping must be applied, because otherwise I am certain this would cause issues in production.
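
To illustrate the kind of mechanism I would expect, here is a minimal token-bucket shaper sketch. The rates and class names are invented and say nothing about how Nutanix actually solves this; on the vSphere side, Network I/O Control on distributed switches is the feature designed for exactly this scenario.

    import time

    class TokenBucket:
        """Simple token-bucket shaper: each traffic class gets a guaranteed rate."""

        def __init__(self, rate_mbit, burst_mbit):
            self.rate = rate_mbit          # refill rate in Mbit per second
            self.capacity = burst_mbit     # maximum burst size in Mbit
            self.tokens = burst_mbit
            self.last = time.monotonic()

        def allow(self, size_mbit):
            """Return True if a frame of this size may be sent now."""
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_mbit:
                self.tokens -= size_mbit
                return True
            return False

    # Invented split of a 10 Gbit link; a real QoS policy would differ.
    shapers = {
        "vmotion":    TokenBucket(rate_mbit=4000, burst_mbit=400),
        "vm_traffic": TokenBucket(rate_mbit=4000, burst_mbit=400),
        "management": TokenBucket(rate_mbit=1000, burst_mbit=100),
        "iscsi":      TokenBucket(rate_mbit=1000, burst_mbit=100),
    }

    # A vMotion burst can only spend its own budget; VM traffic keeps its share.
    print(shapers["vmotion"].allow(300))     # True: within the vMotion burst budget
    print(shapers["vm_traffic"].allow(300))  # True: unaffected by the vMotion burst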

I’m very impressed by the Nutanix solution and hope to hear more from them soon. If I ever get my hands on a Nutanix block, I will share more practical, hands-on information about it!

Thanks for reading!
