Bandwidth allocation using HP FlexFabric and VMware vSphere

Who is surprised to see server hardware with multiple 1 or 10Gbit NICs nowadays? Hardware keeps evolving, offering ever higher compute, network and storage capacity. How do you cope with that, and what possibilities do you have?

My post is based on the HP BladeSystem solution, using VirtualConnect or FlexFabric as the uplink to the physical network and storage fabric. The hypervisor I’m using is VMware. I can’t guarantee the proposed solutions in this post will work with other hypervisors or hardware, but please let me know your experiences if you try them out. The screenshots in this post were taken on a temporary ESXi installation, not on the actual physical blades with VirtualConnect/FlexFabric.

Remember the days of ESX(i) when you had specific requirements for the partitions? You had to increase the swap partition to make sure your service console didn’t run out of memory, and you had to create separate partitions for /var, /tmp and /home. These settings no longer apply to ESXi 4.1 and higher, but separating traffic over physical NICs is still a commonly followed best practice.

I would like to question this best practice. Not that I think it is bad to separate traffic using physical NICs, but in some cases it might be more efficient to do it differently. I’m talking about the following traffic flows:

  • vSphere Management
  • VM traffic (which can of course be multiple flows/VLANs)
  • vMotion

In my case, I am using FlexFabric interconnect modules to provide storage and network connectivity to the HP ProLiant BladeServers inside the blade enclosure. Each interconnect delivers 10Gbit of Ethernet to each blade, giving a total of 20Gbit of (redundant) capacity. If one of the interconnects fails, each blade is left with only 10Gbit.

When using VirtualConnect/FlexFabric, you can create multiple “FlexNICs”, which are presented as physical NICs to your blades (ESXi hosts). Each FlexNIC can be configured with a maximum bandwidth, bounded by the total bandwidth available on the interconnect uplinks. A maximum of 8 FlexNICs can be configured for each blade.

Now, as I’m running both storage and network traffic through these modules and I’ve allocated 4Gbit on each interconnect for storage, that leaves me 6Gbit of free network capacity. How can I allocate this bandwidth in the most efficient way? One approach would be to separate traffic using multiple FlexNICs:


The link speeds could be anything, but the point is that traffic is separated statically by sending specific traffic over dedicated physical NICs. This way you are sure that, for example, your vMotion traffic does not get in the way of your VM or management traffic. The downside of this configuration is that you could be throwing away valuable network capacity, as each physical NIC can never use more capacity than what is configured as the limit for that specific NIC.
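To put some numbers on that downside, here is a minimal Python sketch of a static split. Only the 6Gbit total per interconnect comes from my setup; the per-FlexNIC limits are hypothetical placeholders.

```python
# Hypothetical static split of the 6Gbit of ethernet left per interconnect
# after the 4Gbit storage allocation (all values in Mbit).
flexnic_limits = {
    "management": 500,
    "vmotion": 2000,
    "vm_traffic": 3500,
}

total_ethernet = 6000  # Mbit per interconnect after the storage carve-out
assert sum(flexnic_limits.values()) <= total_ethernet

# Even with management and vMotion completely idle, VM traffic is still
# capped at the limit of its own FlexNIC; the idle capacity is stranded.
idle_flows = {"management", "vmotion"}
vm_cap = flexnic_limits["vm_traffic"]
stranded = sum(v for k, v in flexnic_limits.items() if k in idle_flows)

print(f"VM traffic capped at {vm_cap} Mbit")
print(f"{stranded} Mbit of idle capacity cannot be reclaimed")
```

With a share-based approach instead of hard per-NIC limits, that idle capacity would simply be available to whichever traffic flow needs it at that moment.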

The alternative I was thinking of is providing two full uplinks of 6Gbit each and separating traffic in a different way. The setup will look like the diagram below (never mind the link speeds):


Now I am able to use the full physical capacity of my two physical NICs, i.e. my FlexNICs. Still, I need a way to make sure the traffic flows don’t interfere with each other. This is where Network I/O Control (NIOC) comes in handy. With NIOC, which requires a vSphere Distributed Switch (dvSwitch), you can prioritize traffic by assigning shares (Low, Normal, High or Custom) in a range from 1 to 100. By default, the following network resource pools are available with their default share values:

  • FT (Fault Tolerance) – 50
  • iSCSI – 100
  • vMotion – 50
  • Management – 50
  • NFS – 50
  • Virtual Machine – 25

You can actually calculate how much bandwidth will be available for each traffic type when a physical NIC is under contention. Just add up all the shares (with the values above, that’s 325), divide the shares of the specific resource pool by that total, and multiply the result by the bandwidth of your physical NIC in Mbit. For example, on a 6Gbit NIC, Management traffic will get 50/325 × 6000 ≈ 923 Mbit.
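As a quick sanity check, the calculation is easy to script. The minimal sketch below uses only the default share values and the 6Gbit NIC from the example above:

```python
# Worst-case bandwidth per NIOC resource pool on a fully contended uplink:
# pool shares / total shares * NIC bandwidth.
default_shares = {
    "FT": 50,
    "iSCSI": 100,
    "vMotion": 50,
    "Management": 50,
    "NFS": 50,
    "Virtual Machine": 25,
}

def nioc_bandwidth(shares, nic_mbit):
    """Return the guaranteed Mbit per pool when the NIC is saturated."""
    total = sum(shares.values())  # 325 with the defaults above
    return {pool: value / total * nic_mbit for pool, value in shares.items()}

for pool, mbit in nioc_bandwidth(default_shares, 6000).items():
    print(f"{pool:<16} {mbit:7.0f} Mbit")
# Management comes out at roughly 923 Mbit, matching the example above.
```

Keep in mind that these numbers only matter under contention; as long as the uplink is not saturated, each pool can simply use whatever bandwidth is available.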

It’s also possible to create custom network resource pools for specific use cases. You can link these pools to specific port groups on your dvSwitches.
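The share values can also be set from the API instead of the vSphere Client. Below is a rough pyVmomi sketch that gives the default Virtual Machine pool a custom share value; it assumes the vSphere 5.x NIOC API, and the vCenter address, credentials, dvSwitch name and share value are placeholders, so treat it as a starting point rather than a finished script.

```python
# Rough sketch, assuming the vSphere 5.x NIOC API via pyVmomi. The vCenter
# address, credentials, dvSwitch name and share value are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the distributed switch by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch01")
view.Destroy()

# Make sure Network I/O Control is enabled on the switch.
dvs.EnableNetworkResourceManagement(enable=True)

# Find the built-in Virtual Machine pool and give it custom shares.
pool = next(p for p in dvs.networkResourcePool if p.key == "virtualMachine")
spec = vim.DVSNetworkResourcePoolConfigSpec(
    key=pool.key,
    configVersion=pool.configVersion,
    allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(
        shares=vim.SharesInfo(level="custom", shares=75)))
dvs.UpdateNetworkResourcePool([spec])

# User-defined pools for specific port groups can be added in a similar
# way with AddNetworkResourcePool on vSphere 5.x.
Disconnect(si)
```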

To summarize, I have created the following configuration to use my available network capacity as efficiently as possible:


I have not yet tested this setup in a production environment, and there are of course more ways to solve this case. What are your experiences with this setup, or how would you approach it?

Looking forward to your feedback, thanks for reading!
