PernixData FVP homelab experience

This post is dedicated to FVP, a storage acceleration solution by PernixData. I will briefly introduce their company and solution to give you an idea of what it does. Next I will show the installation steps and benchmarking data, and share my opinion with you.

Post updated on October 3, 2014. See bottom of article.

Introduction
PernixData is a company co-founded in 2012 by Poojan Kumar and Satyam Vaghani. With FVP they provide a scale-out performance solution that enables you to separate storage capacity from storage performance.

The way they achieve this is explained by the image below.

As you can see, the performance lies within your hypervisor, and the capacity is delivered by your traditional storage arrays. This means that every hypervisor you add will increase the total performance. No need to buy extra arrays and end up with extra capacity you don’t need. This keeps you very flexible and enables you to react to business demand.

How does this work? Well, first of all you need at least one flash device in your hypervisor that will be used for caching purposes. Next, you need the FVP software on your hypervisor (in the form of a VIB file, which can be deployed using the shell or with vSphere Update Manager) and a piece of management software to configure flash clusters. Together this provides you with extra functionality inside your vSphere (Web) Client, where you can start configuring your flash clusters. The instructions for this installation and configuration can be found in the next chapter.

When everything is in place, your VMs will start talking to the flash device, writing and reading IOPs from the local flash device (faster than traversing the wires to your iSCSI or FC fabric). As the VM receives acknowledgements much faster this way, the performance of your VM increases dramatically. Depending on the policy you define inside FVP, you can choose to cache IOPs on the flash device and acknowledge them from there for the best performance (write back), or, if you want a safer method, to cache on the flash device while no outstanding IOPs stay only on the flash device (write through). The latter is still faster than sending all IOPs directly to your array, but without the risk of losing IOPs that are cached on the flash device and still have to be submitted to the array.

Losing IOPs is not necessarily a risk though; it’s possible to cache IOPs on multiple flash devices, spread over different hypervisors. Your VM IOPs will have multiple copies, providing you with extra redundancy.

As you can imagine, using the local flash device for read and write IOPs instead of talking directly to your storage array de-stresses your array, making it perform better for all volumes it serves. You are increasing performance on both sides!

For more information about PernixData FVP, see their website or this FVP datasheet.


Installation
The installation of FVP 1.5 is rather simple. As mentioned, you need to install the management software on a Windows server (for example your vCenter Server) and deploy a VIB package on your hypervisor (ESXi is the only supported hypervisor at this moment). This can all be done without impact on or reboots of your machines.

I am installing FVP in my homelab (vSphere 5.5, virtual vCenter Server, one physical machine with a flash device). Be sure to follow best practices if you are implementing FVP in your production environment!

See the instructions below for the necessary installation and configuration steps.
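If you prefer the shell route over vSphere Update Manager, the host-side part boils down to installing the VIB from an offline bundle. A minimal sketch, assuming you have copied the FVP offline bundle to a datastore the host can reach; the path and file name below are placeholders, so use the ones from your own PernixData download.

    # Check whether the FVP host extension is already present
    esxcli software vib list | grep -i pernix

    # Install the FVP host extension from the offline bundle
    # (path and file name are placeholders for your own download)
    esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension.zip

    # Verify the VIB is now listed; no reboot should be required
    esxcli software vib list | grep -i pernix

The management software on the Windows server is a normal setup wizard, so there is nothing to script on that side.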


Configuration
The next images will show you the steps for creating a flash cluster. Basically this means creating a new cluster object from the PernixData FVP section (I’m using the vSphere Web Client but you can also use the vSphere Client) and assigning flash devices and datastores or VMs to the cluster.

The configuration is rather simple, but I guess you need to find out for yourself to experience the simplicity =)


Performance
Finally, I want to share some basic performance benchmarking I performed, to give you an idea of the possible gain with FVP.

I’m using a Synology DS214play for testing purposes, and a 100 GB Intel DC S3700 SSD.
In my physical host I also have some local RAID1 logical disks consisting of 7200 RPM SATA drives, connected through an HP Smart Array P400 RAID controller. When testing a particular setup, no VMs other than the benchmarking VM will be running on that storage device. There are, however, other VMs running on the same host.
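Before building the flash cluster, it doesn’t hurt to verify which local devices ESXi actually sees. A quick sketch from the ESXi shell (the grep pattern is just an example that matches my Intel SSD):

    # List all storage devices known to the host (local SSD, RAID1 logical disks, iSCSI LUNs)
    esxcli storage core device list

    # Narrow the output down to the Intel SSD, for example
    esxcli storage core device list | grep -i intel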

The tests I perform will be on:

  • Local SATA RAID1 set
  • Synology RAID1 iSCSI volume
  • Local SSD (Full-SSD)
  • FVP with SSD + Synology RAID1 iSCSI volume

I will be using I/O Analyzer, a fling by VMware which can be downloaded from their VMware Labs page. It’s an appliance-based storage performance tool and it looks very user friendly. This is the first time I’m using it, so I will provide you with some feedback about that as well. Instructions on how to use this appliance yourself can be found here. I will be using the Iometer tests with a 20 GB data disk.

The appliance contains several workload tests, of which I will use the ones mentioned below on each storage setup. Each test will run twice for 2 minutes and will use 128K blocks. The reason for running the tests twice is to make sure possible caching is visible. (If you would rather reproduce comparable workloads without the appliance, see the fio sketch after the list.)

  • 128k_0read_0rand (100% Sequential Writes)
  • 128k_0read_100rand (100% Random Writes)
  • 128k_100read_0rand (100% Reads)
  • 128k_100read_100rand (100% Reads and 100% Random)
  • 128k_50read_0rand (50% Reads and 50% Sequential Writes)
  • 128k_50read_100rand (50% Reads and 100% Random)
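I/O Analyzer drives these profiles through Iometer, but if you want to reproduce roughly comparable workloads yourself inside an ordinary Linux VM, fio will do the job as well. This is a hedged sketch rather than the exact Iometer profiles: the job names and the target file path are made up, and only the block size, runtime and read/write mix mirror the settings above.

    # 100% sequential writes, 128K blocks, 120 seconds against a 20 GB test file
    fio --name=seqwrite --filename=/mnt/test/fio.dat --size=20g \
        --rw=write --bs=128k --direct=1 --ioengine=libaio \
        --runtime=120 --time_based

    # 100% random writes
    fio --name=randwrite --filename=/mnt/test/fio.dat --size=20g \
        --rw=randwrite --bs=128k --direct=1 --ioengine=libaio \
        --runtime=120 --time_based

    # 50% reads / 50% writes, fully random
    fio --name=randmix --filename=/mnt/test/fio.dat --size=20g \
        --rw=randrw --rwmixread=50 --bs=128k --direct=1 --ioengine=libaio \
        --runtime=120 --time_based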

Below you can find the (large amount of!) test results =)

Test Setup 1: Local SATA RAID1 set

100% Sequential Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 783
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 98

100% Random Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 228
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 28

100% Reads

  • Maximum Read IOPs: 905
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 113
  • Maximum Write MBPS: N/A

100% Reads and 100% Random

  • Maximum Read IOPs: 262
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 33
  • Maximum Write MBPS: N/A

50% Reads and 50% Sequential Writes

  • Maximum Read IOPs: 244
  • Maximum Write IOPs: 244
  • Maximum Read MBPS: 30
  • Maximum Write MBPS: 30

50% Reads and 100% Random

  • Maximum Read IOPs: 118
  • Maximum Write IOPs: 117
  • Maximum Read MBPS: 15
  • Maximum Write MBPS: 14

Test Setup Results
I am actually amazed by the performance I’m getting with this setup! Heavy workloads will not really perform here, but simple workloads (like a homelab or small office setup) would perform fine! After running the tests on this first setup, I decided to change the block size I was testing with from 4K to 128K, as I was getting unreliable (sky-high) results. Increasing the data disk from 1 GB to 4 GB and afterwards to 20 GB didn’t seem to give more reliable results either. Of course I am performing the same tests on the other setups.


Test Setup 2: Synology RAID1 iSCSI volume

100% Sequential Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 310
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 38

100% Random Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 21
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 2

100% Reads

  • Maximum Read IOPs: 317
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 39
  • Maximum Write MBPS: N/A

100% Reads and 100% Random

  • Maximum Read IOPs: 39
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 5
  • Maximum Write MBPS: N/A

50% Reads and 50% Sequential Writes

  • Maximum Read IOPs: 124
  • Maximum Write IOPs: 125
  • Maximum Read MBPS: 15
  • Maximum Write MBPS: 15

50% Reads and 100% Random

  • Maximum Read IOPs: 15
  • Maximum Write IOPs: 15
  • Maximum Read MBPS: 2
  • Maximum Write MBPS: 2

Test Setup Results
As I already expected, running any VM data on a simple home-use Synology would not deliver any amazing numbers. Especially the random IOPs are horrific. On the other hand, I am happy with this result: you will now be able to see the great improvement in performance after applying FVP.


Test Setup 3: Local SSD (Full-SSD)

100% Sequential Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 1552
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 194

100% Random Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 1492
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 186

100% Reads

  • Maximum Read IOPs: 3566
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 445
  • Maximum Write MBPS: N/A

100% Reads and 100% Random

  • Maximum Read IOPs: 3500
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 437
  • Maximum Write MBPS: N/A

50% Reads and 50% Sequential Writes

  • Maximum Read IOPs: 941
  • Maximum Write IOPs: 941
  • Maximum Read MBPS: 117
  • Maximum Write MBPS: 117

50% Reads and 100% Random

  • Maximum Read IOPs: 909
  • Maximum Write IOPs: 913
  • Maximum Read MBPS: 113
  • Maximum Write MBPS: 114

Test Setup Results
Full-SSD power! Of course this setup is the “crème de la crème”, but it will also be the most expensive one. Even in my homelab 100 GB is not enough to run all VMs, so imagine a business that would like to go full SSD. The good thing about FVP is that you can leverage a flash device for performance without having to deliver the capacity from it as well.

As mentioned earlier, the capacity is delivered by your array (in my case the Synology) and the performance is delivered by a flash device. I guess my next tests will result in IOPs, latency and throughput numbers that sit somewhere between those of the Synology and those of the SSD.


Test Setup 4: FVP with SSD + Synology RAID1 iSCSI volume

100% Sequential Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 1400
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 170

100% Random Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 1000
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 120

100% Reads

  • Maximum Read IOPs: 2400
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 290
  • Maximum Write MBPS: N/A

100% Reads and 100% Random

  • Maximum Read IOPs: 154
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 19
  • Maximum Write MBPS: N/A

50% Reads and 50% Sequential Writes

  • Maximum Read IOPs: 650
  • Maximum Write IOPs: 650
  • Maximum Read MBPS: 85
  • Maximum Write MBPS: 85

50% Reads and 100% Random

  • Maximum Read IOPs: 20
  • Maximum Write IOPs: 55
  • Maximum Read MBPS: 3
  • Maximum Write MBPS: 3

Test Setup Results
Adding a flash device to your ESXi host and using it for caching with FVP really works miracles! The numbers you are seeing in my test are of course not comparable to enterprise environments, but the ratios are rather interesting. What I find really strange is that most of these tests were faster in the first run than in the second run. I would expect it the other way around, as the data needs to pass through the cache first. Does anyone know why?

Setting this strange behavior aside, the ratios are pretty impressive:

100% Sequential Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 310 > 736
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 38 > 92

100% Random Writes

  • Maximum Read IOPs: N/A
  • Maximum Write IOPs: 21 > 415
  • Maximum Read MBPS: N/A
  • Maximum Write MBPS: 2 > 52

100% Reads

  • Maximum Read IOPs: 317 > 747
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 39 > 93
  • Maximum Write MBPS: N/A

100% Reads and 100% Random

  • Maximum Read IOPs: 39 > 154
  • Maximum Write IOPs: N/A
  • Maximum Read MBPS: 5 > 19
  • Maximum Write MBPS: N/A

50% Reads and 50% Sequential Writes

  • Maximum Read IOPs: 124 > 132
  • Maximum Write IOPs: 125 > 133
  • Maximum Read MBPS: 15 > 16
  • Maximum Write MBPS: 15 > 16

50% Reads and 100% Random

  • Maximum Read IOPs: 15 > 17
  • Maximum Write IOPs: 15 > 17
  • Maximum Read MBPS: 2 > 2
  • Maximum Write MBPS: 2 > 2

In some tests, only 10% or less of the SSD’s capabilities are being utilized. Of course this is because of my homelab setup and single worker thread. The 50% Reads / 50% Sequential Writes and 50% Reads / 100% Random tests were hardly accelerated at all in my case.


Summary
PernixData FVP is certainly a product that can mean the difference in your environment between bad performance and outstanding performance. If you are running into performance issues and assume the cause is your storage layer, be sure to check out FVP. You can also run the product in trial mode and use the FVP monitoring capabilities to see what performance gain you could achieve.

Frank Denneman wrote a great post about using FVP to monitor your applications which can be found here.

If you are a blogger like me and would like to use FVP in your homelab as well, be sure to contact PernixData on how to become a PernixPro and get access to the latest news and of course the FVP binaries =)


Update
Only some minutes after publishing this article, I received a message from Frank Denneman on Twitter about some feedback he had. Within 2.5 hours I got an e-mail from him with detailed feedback about the tests I did and how to improve them. Amazing! =)

The tests I performed in addition to the ones above were done on the Synology iSCSI volume, first without flash acceleration and then with acceleration enabled. The difference is that I’m now testing with a 1 GB data disk, 4K blocks, 95% reads and 75% random I/O. This test should reflect the general workload of a web file server. The test again runs for 120 seconds. Before the test starts, it fills the 1 GB data disk with random data. Test results are shown below.
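Before getting to those results: in the same fio terms as the earlier sketch, that web file server profile would look roughly like the job below. Again an approximation with made-up names and paths; the percentage_random option is used to approximate the 75% random mix.

    # ~95% reads / 5% writes, 75% random, 4K blocks, 120 seconds on a 1 GB test file
    fio --name=webfileserver --filename=/mnt/test/fio-web.dat --size=1g \
        --rw=randrw --rwmixread=95 --percentage_random=75 \
        --bs=4k --direct=1 --ioengine=libaio --runtime=120 --time_based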

The flash device was enabled around 10:12 PM. As you can see, the flash hit rate increases from that moment on (as data passes the flash device and is cached, ready to be re-read). The next observation is the maximum IOPs, increasing from around 180 to over 240, which is an increase of more than 30% in maximum IOPs.

Latency decreases from around 100 ms to near 60 ms. Throughput increases from 800 KBps to 1000 KBps. Again, imagine this with a business/enterprise-grade array and professional server hardware: your performance would be boosted for sure!

Thanks for reading!
