So, here I am again with something to share =)
First an introduction:
Some months ago, I built a new machine for my testlab, based on vSphere 5.1.
Because it is a whitebox (not branded like HP or Dell), I spent a lot of time researching the compatibility of my hardware with VMware vSphere.
My setup is based on the following hardware:
Fractal Design Define R4 case
Intel DX79SI motherboard
Intel i7-3820 processor
8x 8GB Kingston DDR3 1600MHz RAM
Intel 82574L server NIC
HP SmartArray P400 RAID controller
4x WD Caviar Black 1TB SATA600 harddisks
Kingston 4GB DataTraveler G3 (for ze ESXizor!)
Be-Quiet / Straight Power E9 500W PSU
CLUB3D / HD5450 videocard (passive, low voltage)
Cooler Master / Hyper 412S CPU cooler
Everything seemed to work without any issue, with the exception of my HP SmartArray P400 and the onboard Intel 82579LM NIC. The PCIe NIC performed fine.
The problem with the P400 was that my logical volume was not recognized in ESXi. This was because the supplied driver (hpsa) did not support logical volumes bigger than 2TB.
I ‘solved’ this by creating a RAID10 set of just below 2TB instead of a RAID5 set, which would have given me more capacity.
One thing I couldn’t get to work was monitoring of the RAID controller (logical and physical volume health), which meant I had to check manually every once in a while with hpacucli through the ESXi shell.
As of now, I do not know if this issue still exists (drivers could have been updated in the meantime), but it’s good to know if you are using the same controller and driver =)
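In the meantime, the manual check can at least be scripted. This is just a sketch under my own assumptions: the hpacucli path below is where the HP VIB placed the tool on my host, and the "Status: OK" text is what it printed for me; both may differ per VIB version.

```shell
# Path where the HP VIB installed the tool on my host (may differ per version)
HPACUCLI=/opt/hp/hpacucli/bin/hpacucli

check_raid() {
  # Healthy output contains lines like "Controller Status: OK"
  "$HPACUCLI" ctrl all show status | grep -q "Status: OK" \
    && echo "RAID healthy" || echo "RAID needs attention"
}

# Only meaningful on the ESXi host itself, where the tool actually exists
[ -x "$HPACUCLI" ] && check_raid || true
```

Dropping something like this in a cron job beats remembering to log in every few weeks.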
The Intel 82579LM NIC was not recognized on boot, apparently because it isn’t on VMware’s supported NIC list =(
This can be solved by installing a custom E1000 driver, which is the default Intel driver used in ESXi. How to do this is described later in this article =)
So, after some troubleshooting I managed to get a working whitebox with enough capacity and sufficient performance!
Now back to the reason I am posting this article:
vSphere 5.1 U1 was released a while ago and I wanted to install it right away, but because of all the tweaks I had done before (undocumented, hehe), I was holding back.
This evening I resumed my search for a way to upgrade my whitebox to 5.1 U1 while keeping everything I had, plus more.
*spoiler* I found everything I needed and got ESXi 5.1 U1 running, including the monitoring of my HP SmartArray P400. Happy!
I couldn’t have done this without my indirect virtualization technology colleagues on the following pages:
-Download the offline bundle for upgrading to vSphere 5.1 U1 (update-from-esxi5.1-5.1_update01.zip)
-Upload the file to a datastore accessible from the ESXi host you’re about to upgrade
-Enable SSH access and execute the command below. Be sure to use your own path. The ‘install’ option will remove any installed VIBs that are not part of the offline bundle being installed.
This means any custom VIBs will be removed! Because I wanted to take a different approach to injecting my VIBs (installing them after a clean ESXi upgrade), I used install together with the --ok-to-remove parameter:
esxcli software profile install -d /vmfs/volumes/vh02-repo/update/update-from-esxi5.1-5.1_update01.zip -p ESXi-5.1.0-20130402001-standard -f --ok-to-remove
After you execute the command, it will run for a while (about two minutes on my machine) and then report whether the upgrade succeeded and that a reboot is needed to complete it =)
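If you want a rehearsal first, esxcli can (to my knowledge on 5.x) do a dry run of a profile install: it reports which VIBs would be installed or removed without touching the host. The path and profile name below are the ones from my command above; use your own.

```shell
# No-op on any machine that isn't the ESXi host
if command -v esxcli >/dev/null 2>&1; then
  # --dry-run previews the install/remove list without changing anything
  esxcli software profile install \
    -d /vmfs/volumes/vh02-repo/update/update-from-esxi5.1-5.1_update01.zip \
    -p ESXi-5.1.0-20130402001-standard \
    --ok-to-remove --dry-run
fi
```

Handy for spotting in advance exactly which custom VIBs the real install is going to throw out.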
My old build number was 799733 (5.1) and the new one is 1065491 (5.1 U1)
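You can verify those numbers yourself from the ESXi shell: `vmware -vl` prints the version plus build string, and a little sed pulls out just the build number. The `build_from` helper is my own, not an ESXi tool.

```shell
# Turn e.g. "VMware ESXi 5.1.0 build-1065491" into "1065491"
build_from() { sed 's/.*build-\([0-9][0-9]*\).*/\1/'; }

# On the ESXi host: full version string, then the bare build number
if command -v vmware >/dev/null 2>&1; then
  vmware -vl
  vmware -v | build_from
fi
```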
-Customize the HP VIBs containing the files required for proper support of your P400 controller. Customization is necessary because HP checks whether your system’s BIOS vendor is Hewlett-Packard. Since I have a whitebox system (and you probably do too, because you’re reading this article), we need to remove this check.
-Install the customized VIBs and custom Intel NIC VIB
Customizing the HP VIBs requires that you first download the appropriate VIB files from the HP website:
The esxi-5x-vibs directory has VIBs for the HP ACU CLI and SMX provider (which is needed for monitoring)
Download the VIB files you need and get yourself a Red Hat based OS to run a few commands on.
I used CentOS 6.3 and accessed it using PuTTY. You also need some way to transfer the files to the Linux machine; I used WinSCP for this.
Once the files are on your Linux machine, extract the VIB files using ‘ar’. Below I document how I customized the hpacucli VIB file:
Extract VIB to working directory
ar vx hpacucli-9.40-12.0.vib
Edit the descriptor.xml file using nano or vi and remove all hwplatform entries. In my case I had to remove the following line:
Save the descriptor.xml file and repack the files using the following command (the argument order matters!):
ar -r hpacucli-9.40-12.0.vib descriptor.xml sig.pkcs7 hpacucli
It should report that a VIB file was created in the working directory.
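The whole edit-and-repack dance can be wrapped in a small shell function. A sketch under two assumptions: each hwplatform entry sits on its own line in descriptor.xml (if it spans several lines, edit by hand as described above instead), and `devib` is just my name for the helper.

```shell
# devib: strip the HP hwplatform (vendor) check from a VIB and repack it.
# A VIB is a plain ar archive: descriptor.xml, sig.pkcs7 and the payload.
devib() {
  vib=$1 payload=$2                        # e.g. devib hpacucli-9.40-12.0.vib hpacucli
  ar x "$vib"                              # unpack all three members
  sed -i '/hwplatform/d' descriptor.xml    # drop the vendor-check line(s)
  ar -r "$vib" descriptor.xml sig.pkcs7 "$payload"   # repack in this exact order
}

# devib hpacucli-9.40-12.0.vib hpacucli
```

Note that the repack order matches the article’s `ar -r` command: descriptor first, signature second, payload last.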
Upload the VIB file to an accessible datastore and install it on your ESXi host over SSH using the following command:
esxcli software vib install -f -v /vmfs/volumes/vh02-repo/update/customvib/hpacucli-9.40-12.0.vib
Well, that’s it! The installer should no longer complain about HP vendor requirements and will tell you whether it succeeded. A reboot might be required.
The same goes for the other VIB packages!
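To confirm your packages actually landed after the reboot, list the installed VIBs and filter for the HP bits. The grep pattern is just my filter; adjust it to the VIB names you installed.

```shell
# On the ESXi host: show installed VIBs matching the HP tools we added
if command -v esxcli >/dev/null 2>&1; then
  esxcli software vib list | grep -iE 'hpacucli|smx' || echo "no matching VIBs found"
fi
```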
The Intel 82579LM NIC can be used by installing the following VIB, which does not need customization or anything special:
The customized VIB packages I used can be downloaded from my Dropbox account as well:
I guess this article is finished for now; it’s nearly 2 AM here so time for bed =)
Thanks for reading!