The reason for this post is that I had some questions about the situation where your USB stick or SD memory card dies while running the ESXi hypervisor. What happens next? How can you monitor for this, and how do you recover from this situation?
During my career I have used LeftHand/StoreVirtual hardware very often and have really liked the stability and flexibility of the product line. I would like to share my experiences with the virtual version of this iSCSI SAN solution, as it can prove to be a valuable tool in your IT operations.
One of my readers asked me if I could write down the steps required to upgrade an HP StoreVirtual (P4500) SAN environment (Thanks Hal!). Instead of updating the original post, I am publishing this post to make it easier to find for other people. These are general instructions for version 9.5 and higher.
Today VMware announced that the VSAN Beta is available for download.
Get your copy at the following URL:
Check out my other posts to see what VSAN does. Basically, it transforms your local storage into a replicated form of shared storage, delivering low latency and high performance as well as high availability.
For more information check out the VSAN Beta Program Community:
- VMworld 2013: VMware Virtual SAN #STO5391 (snowvm.com)
I would like to share some interesting information I gathered while attending a live demo about the Nutanix Virtual Computing Platform, hosted by Terach in Amersfoort.
Nutanix is offering a combination of compute and storage in one piece of bare metal to deliver the following advantages:
- Save space in your rack (four hypervisor nodes, including storage, consume only 2U of rack space)
- Achieve high performance using local hard disks, SSDs and storage accelerators
- Maintain high availability using the Nutanix Distributed Filesystem (NDFS)
- Pay as you grow (start small and expand the environment over time; each node you add brings extra CPU, memory and storage)
As said, you start off with one so-called ‘block’, a 2U server chassis providing consolidated power and cooling for a maximum of four ‘nodes’. Each node runs a hypervisor and contains an amount of storage, memory and processing power that depends on the model you pick.
Every node has SATA disks, SSDs and a PCIe SSD. All I/O flows through the PCIe SSD first, providing low latency and fast access, before ending up on the SATA disks. When specific blocks are accessed regularly, these blocks are marked as ‘hot’ and are automatically placed on the SSDs. When those blocks become ‘cold’ again, they are moved back to the slower disks.
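To make the idea a bit more concrete, here is a minimal sketch of how such hot/cold tiering logic could work in general. The thresholds, class and function names are purely my own assumptions for illustration; this is not Nutanix’ actual NDFS implementation.

```python
# Simplified sketch of hot/cold block tiering (illustration only, not NDFS code).
# Blocks that are accessed often get promoted to the SSD tier; blocks that cool
# down are demoted back to the SATA tier. Thresholds are made-up examples.
from collections import defaultdict

HOT_THRESHOLD = 10     # promote after this many accesses per window (assumption)
COLD_THRESHOLD = 2     # demote when accesses drop to this or below (assumption)

class TieringEngine:
    def __init__(self):
        self.access_counts = defaultdict(int)    # block_id -> accesses this window
        self.tier = defaultdict(lambda: "sata")  # block_id -> "sata" or "ssd"

    def record_access(self, block_id):
        """Called on every read/write of a block."""
        self.access_counts[block_id] += 1

    def rebalance(self):
        """Periodically promote hot blocks and demote cold ones."""
        for block_id, count in self.access_counts.items():
            if count >= HOT_THRESHOLD and self.tier[block_id] == "sata":
                self.tier[block_id] = "ssd"      # copy block to the flash tier
            elif count <= COLD_THRESHOLD and self.tier[block_id] == "ssd":
                self.tier[block_id] = "sata"     # move block back to spinning disk
        self.access_counts.clear()               # start a new measurement window

# Example: block 42 is read frequently and ends up on SSD.
engine = TieringEngine()
for _ in range(15):
    engine.record_access(42)
engine.rebalance()
print(engine.tier[42])  # -> "ssd"
```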
Running your VMs and storage in the same box, with high availability, rack space savings and great performance, makes this solution something to consider when designing a new environment or when your existing environment needs an upgrade.
One thing I still don’t get is the way Nutanix makes vSphere interact with the two 10 Gbit NICs. VMware best practices state that you should separate your traffic flows across multiple (d)vSwitches and connect two NICs to each of them for high availability. In the environments I know, using HP BladeSystem with Virtual Connect, you can present up to eight so-called FlexNICs to vSphere, which we separate as follows: Management, vMotion, iSCSI and VM traffic (two NICs per (d)vSwitch), as sketched below.
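For reference, that separation could be scripted roughly like this with pyVmomi. This is a minimal sketch that assumes a standalone ESXi host with standard vSwitches; the hostname, credentials and vmnic names are made-up examples, and this is not how Nutanix actually configures its nodes.

```python
# Minimal pyVmomi sketch: separate traffic types over dedicated vSwitches,
# each backed by two uplinks for redundancy. Host name, credentials and
# vmnic names below are assumptions for the example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
net = host.configManager.networkSystem

# Traffic type -> pair of physical NICs (example layout, two NICs per vSwitch)
layout = {
    "vSwitch-Mgmt":    ["vmnic0", "vmnic1"],
    "vSwitch-vMotion": ["vmnic2", "vmnic3"],
    "vSwitch-iSCSI":   ["vmnic4", "vmnic5"],
    "vSwitch-VM":      ["vmnic6", "vmnic7"],
}

for vswitch_name, nics in layout.items():
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=nics))
    net.AddVirtualSwitch(vswitchName=vswitch_name, spec=spec)

# One port group per vSwitch for the corresponding traffic type
for pg_name, vswitch_name in [("Management", "vSwitch-Mgmt"),
                              ("vMotion", "vSwitch-vMotion"),
                              ("iSCSI", "vSwitch-iSCSI"),
                              ("VM Network", "vSwitch-VM")]:
    pg_spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=0, vswitchName=vswitch_name,
        policy=vim.host.NetworkPolicy())
    net.AddPortGroup(portgrp=pg_spec)

Disconnect(si)
```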
In the Nutanix setup, I wonder what happens to the other traffic, such as management and, more importantly, the VM traffic, when a vMotion is started, since vMotion always tries to utilize the full capacity of the NIC. There must be some sort of QoS or traffic shaping applied, because otherwise I am certain this would cause issues in production.
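To illustrate what such a share-based mechanism (comparable to VMware’s Network I/O Control) could look like, here is a small sketch with made-up share values. It only illustrates the concept; it is not the actual implementation of either VMware or Nutanix.

```python
# Simplified sketch of share-based bandwidth allocation (illustration only).
# When the 10 Gbit link is saturated, each active traffic type gets a slice
# proportional to its shares, so vMotion cannot starve management or VM traffic.
LINK_CAPACITY_GBIT = 10.0

# Share values are made-up examples
shares = {"management": 20, "vmotion": 50, "iscsi": 50, "vm": 100}

def allocate(active_traffic):
    """Return the bandwidth (Gbit/s) each active traffic type is entitled to."""
    total = sum(shares[t] for t in active_traffic)
    return {t: LINK_CAPACITY_GBIT * shares[t] / total for t in active_traffic}

# During a vMotion, VM traffic still keeps the largest guaranteed slice:
print(allocate(["management", "vmotion", "vm"]))
# -> {'management': ~1.2, 'vmotion': ~2.9, 'vm': ~5.9}
```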
I’m very impressed by the Nutanix solution and hope to hear more from them soon. If I ever get my hands on a Nutanix block, I will provide you with more practical information about their solution!
Thanks for reading!