Upgrading an HP StoreVirtual SAN environment

One of my readers asked me if I could write down the steps required to upgrade an HP StoreVirtual (P4500) SAN environment (Thanks Hal!). Instead of updating the original post, I am publishing this as a separate post to make it easier for other people to find. These are general instructions for version 9.5 and higher.


Upgrading to LeftHand OS 10.5

Last night I performed an upgrade from SAN/iQ 9.5 to SAN/iQ, *ahem*, LeftHand OS 10.5 on 16 HP LeftHand P4500 G2 storage nodes, and I want to share a couple of things I learned from this process.

Before actually upgrading I spent some time analysing the possible risks and impact.

HP states that when using the CMC (Centralized Management Console) no downtime whatsoever should occur. This is because the CMC never simultaneously reboots storage nodes that are responsible for the same LUN (which is, of course, protected by Network RAID-10).

The chance that we would suffer data loss was practically nil, and other people's experiences with upgrading the storage nodes in combination with VMware were nothing but positive.

Still, we did not want to take any risks and scheduled an extra backup right before upgrading the nodes. The backup ran after regular office hours (6 PM), so if disaster struck, the least amount of user data would be lost. Running all seven Veeam backup jobs at the same time took a while to complete (approximately 5 hours), and after that I was good to go.

I started the upgrade process around 11 PM and actively monitored all of our systems. Not a single error or warning appeared and no downtime was experienced (except on the storage nodes themselves, of course, while they were rebooting).

The HP FOM (Failover Manager) was upgraded first, followed by the storage nodes. They all power cycled, and some had to restripe before the process could continue. After all nodes were rebooted and upgraded, the CMC installed another patch on all systems, which required yet another power cycle. The whole process took about 5 hours to complete.


I performed a check after the upgrade completed and concluded that only minor issues had occurred:

  • The SQL Server service on two VMs was stopped; I am not sure whether this was a coincidence or caused by the upgrade. Starting the services manually fixed it.
  • Some ESXi hosts briefly lost disk access, but access was restored automatically shortly afterwards.
  • One VM was marked as ‘inaccessible’. Removing it from the inventory and re-adding it solved this (a command-line alternative is sketched right after this list).
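As a side note, the remove/re-add of an inaccessible VM can also be done straight from the ESXi shell with vim-cmd. This is just a rough sketch of that approach; the Vmid, datastore and VM names below are made-up placeholders:

# List all registered VMs and note the Vmid of the inaccessible one
vim-cmd vmsvc/getallvms

# Unregister the inaccessible VM (42 is an example Vmid)
vim-cmd vmsvc/unregister 42

# Register it again from its .vmx file (example path)
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx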

So, no major issues, but it did take quite some time to complete.

Oh, and you should increase the Bandwidth Priority of your Management Group inside the CMC to speed up the restriping of your nodes. I changed this from the default of 16 MB/sec to 40 MB/sec to reduce the total time needed to restripe.

My conclusion is that the CMC is a great tool for performing an unattended upgrade of storage clusters. I would trust the tool even without running a backup prior to the process. Still, I would recommend running the upgrade during off-hours because of the path failovers, restriping and possible latency spikes.

ESXi 5.1 U1 Whitebox Adventures

So, here I am again with something to share =)

First, an introduction:
Some months ago I built a new machine for my test lab, based on vSphere 5.1.
Because it is a whitebox (not a branded machine like HP or Dell) I spent a lot of time researching the compatibility of my hardware with VMware vSphere.

My setup is based on the following hardware:

Fractal Design Define R4 case
Intel DX79SI motherboard
Intel i7-3820 processor
8x 8GB Kingston DDR3 1600MHz RAM
Intel 82574L server NIC
HP SmartArray P400 RAID controller
4x WD Caviar Black 1TB SATA600 harddisks
Kingston 4GB DataTraveler G3 (for ze ESXizor!)
Be-Quiet / Straight Power E9 500W PSU
CLUB3D / HD5450 videocard (passive, low voltage)
Cooler Master / Hyper 412S CPU cooler

Everything seemed to work without any issue, with the exception of my HP SmartArray P400 and onboard Intel 82579LM NIC. The PCIe NIC performed fine.
The problem with the P400 was that my logical volume was not recognized in ESXi. This was because the supplied driver (hpsa) did not support logical volumes bigger than 2TB.
I ‘solved’ this by creating a RAID 10 set of just below 2TB instead of a RAID 5 set, which would have given me more capacity.
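For illustration only: if you have the HP ACU CLI available somewhere that can see the controller (for example in the ESXi shell once the hpacucli VIB from later in this article is installed), creating such a RAID 1+0 logical drive from unassigned disks could look roughly like this. The slot number is an assumption, so check yours first:

# Show the controllers and their slot numbers
hpacucli ctrl all show

# Create a RAID 1+0 logical drive from all unassigned disks (slot 0 is an example)
hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1+0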

Another thing I couldn’t get to work was monitoring of the RAID controller (logical and physical drive health), which meant I had to check manually every once in a while with hpacucli through the ESXi shell.
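To give an idea of what that manual checking looks like: with the hpacucli VIB described later in this article installed, something along these lines works from the ESXi shell. The binary path and slot number are assumptions based on my own box:

# Overall controller status
/opt/hp/hpacucli/bin/hpacucli ctrl all show status

# Logical and physical drive health (slot 0 is an example)
/opt/hp/hpacucli/bin/hpacucli ctrl slot=0 ld all show status
/opt/hp/hpacucli/bin/hpacucli ctrl slot=0 pd all show status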

As of now I do not know whether this issue still exists (drivers could have been updated in the meantime), but it is good to know about if you are using the same controller and driver =)

The Intel 82579LM NIC was not recognized on boot and is because it apparently wasn’t on the support NIC list by VMware =(
This can be solved by installing a custom E1000 driver which is the default Intel driver used in ESXi. How you can do this, can be found later in this article =)

So, after some troubleshooting I managed to get a working whitebox with enough capacity and sufficient performance!

Now back to the reason I am posting this article:

vSphere 5.1 U1 was released a while ago and I wanted to install it right away, but because of all the tweaks I had done before (undocumented, hehe) I was holding back.
This evening I resumed my search for a way to upgrade my whitebox to 5.1 U1 and keep everything I had, plus more.

*spoiler* I found everything I needed and got ESXi 5.1 U1 running, including the monitoring of my HP SmartArray P400. Happy!

I couldn’t have done this without the indirect help of my virtualization colleagues on the following pages:
http://blog.campodoro.org/?p=325
http://communities.vmware.com/thread/340524?start=0&tstart=0
http://www.yellow-bricks.com/2011/11/29/how-to-create-your-own-vib-files/
http://virtual-drive.in/2012/11/16/enabling-intel-nic-82579lm-on-intel-s1200bt-under-esxi-5-1/
http://www.v-front.de

To summarize:

-Download the offline bundle for upgrading to vSphere 5.1 U1 (update-from-esxi5.1-5.1_update01.zip)
-Upload the file to a datastore accessible from the ESXi host you’re about to upgrade
-Enable SSH access and execute the command below. Be sure to use your own path. The 'install' option will remove any installed VIBs that are not part of the offline bundle you are installing.
This means any custom VIBs will be removed! Because I wanted to take a different approach to injecting my VIBs (installing them after a clean ESXi upgrade), I used install together with the --ok-to-remove parameter.

esxcli software profile install -d /vmfs/volumes/vh02-repo/update/update-from-esxi5.1-5.1_update01.zip -p ESXi-5.1.0-20130402001-standard -f --ok-to-remove
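As a side note: if you would rather keep your custom VIBs during the upgrade, the 'update' option only updates the VIBs that are part of the image profile and leaves the rest alone. I did not use it here because I wanted a clean slate, but with the same example paths it would look roughly like this:

esxcli software profile update -d /vmfs/volumes/vh02-repo/update/update-from-esxi5.1-5.1_update01.zip -p ESXi-5.1.0-20130402001-standard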

After executing the command, it will run for a while (about 2 minutes on my machine) and then report whether the upgrade succeeded and whether a reboot is needed to complete it =)
My old build number was 799733 (5.1) and the new one is 1065491 (5.1 U1).
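If you want to check the build number yourself before and after the upgrade, both of these commands work from the ESXi shell:

# Short version/build string
vmware -v

# More detailed version information
esxcli system version get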

-Customize the HP VIBs containing the files required for proper support of your P400 controller. Customizing is necessary because HP checks whether your system (BIOS) reports Hewlett-Packard as the vendor. Since I have a whitebox system (and you probably do too, because you're reading this article) we need to remove this check.
-Install the customized VIBs and custom Intel NIC VIB

Customizing the HP VIBs requires that you first download the appropriate VIB files from the HP website:
http://vibsdepot.hp.com/hpq/apr2013/

The esxi-5x-vibs directory contains the VIBs for the HP ACU CLI and the SMX provider (which is needed for monitoring).

Download the VIB files you need and get yourself a Red Hat-based OS to run a few commands.
I used CentOS 6.3 and accessed it using PuTTY. You also need some way to transfer the files to the Linux machine; I used WinSCP for this.

When the files are on your Linux machine, extract the VIB files using 'ar'. I will document how I customized the hpacucli VIB file:

Extract the VIB into a working directory:
ar vx hpacucli-9.40-12.0.vib

Edit the descriptor.xml file using nano or vi and remove all hwplatform entries. In my case there was a single <hwplatform> line to remove.

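If you prefer a one-liner over manual editing, a sed command like the one below would also strip those entries. The entry shown in the comment is only a hypothetical example of what a hwplatform line can look like; the exact attributes in your descriptor.xml may differ:

# Remove every line mentioning hwplatform, e.g. a hypothetical
# <hwplatform vendor="Hewlett-Packard"/> entry
sed -i '/hwplatform/d' descriptor.xml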
Save the descriptor.xml file and repack the files using the following command (with the files in exactly this order!):
ar -r hpacucli-9.40-12.0.vib descriptor.xml sig.pkcs7 hpacucli

It should report that it created a VIB file, which will appear in the working directory.
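You can double-check the repacked archive by listing its contents; the three members should show up in the same order you passed them to ar:

ar -t hpacucli-9.40-12.0.vib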

Upload the VIB file to a datastore that is accessible from your ESXi host and install it over SSH with the following command:
esxcli software vib install -f -v /vmfs/volumes/vh02-repo/update/customvib/hpacucli-9.40-12.0.vib

Well, that's it! The installer should no longer complain about HP vendor requirements and will tell you whether the installation succeeded. A reboot might be required.
The same goes for the other VIB packages!
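After the reboot you can verify that the packages actually ended up on the host by listing the installed VIBs, for example:

esxcli software vib list | grep -i hp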

The Intel 82579LM NIC can be used by installing the following VIB, which does not need customization or anything special:
http://snowvm.com/2013/05/14/esxi-51-u1-whitebox-adventures/net-e1000e-2-1-4-x86_64-vib/

The customized VIB packages I used can be downloaded from my Dropbox account as well:
http://snowvm.com/2013/05/14/esxi-51-u1-whitebox-adventures/hpacucli-9-40-12-0-vib/
http://snowvm.com/2013/05/14/esxi-51-u1-whitebox-adventures/hp-smx-provider-500-03-02-10-4-434156-vib/

I guess this article is finished for now; it’s nearly 2 AM here so time for bed =)

Thanks for reading!