Winter’s Coming: Time for a HOT lab!

As it’s getting cold and chilly outside, I thought it was a good idea to get my hands on something hot. Something hot, you say? Yeah, I know, it’s not a term I usually mix up with IT-related stuff, but hey! I promise you it’s good =) And no, I’m not talking about a hot tub.

Some months ago, I tried getting a newer version of my lab up and running. That kind of failed due to some hardware incompatibilities, defective parts and being tired of troubleshooting unsupported hardware.

This time, I’ve spent many hours putting together a setup that provides enough capacity and performance to run a full VMware management stack, plus several nested labs for specific setups. At the moment of writing, the management stack is running at blazing speed and I’m almost ready to start building some lab building blocks.

Back to basics: Why homelab?

Before we continue with the good stuff, I’d like to point out that having a homelab to run the products you use professionally really enhances your skills and confidence. Experience is one of the things you need most in the IT business, and gaining it is fairly easy if you challenge yourself with setups that could actually be implemented at customers.

Besides experience, it’s an awesome tool to have in situations where you have to demo something quickly, or compare settings/features at a customer site. Writing documentation? Just run through the actual steps at your own pace in your own environment.

If you don’t have the possibility to purchase your own homelab, there are other alternatives that provide a similar experience, for example:

VMware HOL – Hands-on Labs. Free to use, tons of labs to roll out. Does come with a timer, but you can extend the lab if you need more time. Comes with a manual and spins up in minutes. The only downsides, to me personally, are the performance and that the actual installation and initial configuration are skipped. The lab is basically delivered in a ready state.

Ravello – Spin up your own VMs, using custom networks and self-uploaded ISO files. I tried it for a few days and was pretty happy with it, but performance was, again, the reason I didn’t enjoy using it.

Locally on your laptop/desktop – Using tools like VMware Workstation or Microsoft Hyper-V, you can easily run multiple machines on your laptop or desktop. Sizing this properly isn’t a bad idea, as your CPU, memory and storage IOPS will be gone like giveaway flatscreens at Best Buy.

As I’m personally running a MacBook Pro with 16GB of RAM, resources are limited. Especially if you want to run multiple products besides ESXi and vCenter Server. Running a desktop with a bunch of memory could do the trick, but in that case I wouldn’t be able to run Virtual SAN on bare metal. See? I really need to get my hands on that lab. Let’s go!

Hardware Build

Alright, let’s talk about specs here!

I’ve got three hosts, each consisting of the following parts:

  • Case: Cooler Master HAF XB EVO
  • Case fans: Be quiet! Pure Wings 2 – 2x 140mm and 2x 80mm
  • Motherboard: SuperMicro X10DRL-C
  • CPU: Intel Xeon E5-2609 v4, 1.7 GHz with 8 cores, 20 MB cache
  • CPU fan: Be quiet! Pure Rock
  • Memory: 4x Kingston ValueRAM – DDR4 – 16 GB 2400 MHz ECC
  • PSU: Cooler Master V550 – fully modular
  • Storage: 1x Samsung 840 Pro 128GB and 1x Samsung 850 Evo 500GB. Oh, and a 16GB Kingston DataTraveler to boot ESXi from
  • Network: 1x HP NC380T Dual-Port Gigabit NIC

This picture was actually taken during the build:


Back in the days eh?

The next paragraphs will dive deeper into the selected parts and build.


Case: Cooler Master HAF XB EVO

I really, really love cases from Fractal Design. I must’ve bought 6 cases from them because they’re practical, silent and beautiful. For this setup, though, I was looking for something that takes up less space; stacking tower models was something I wanted to avoid. Fractal Design does offer smaller, mini-ITX based cases, but the motherboard I wanted to use (more on this later) was only available in ATX form. So I needed to look a bit further.

This case by Cooler Master is actually meant for test benching and easy access (parts can be moved in and out very easily), but it is also stackable! That convinced me to buy three of these:


The case fits both ATX and Micro ATX boards. It has 2 hot-swap bays in the front and even more space for additional disks internally.

There’s just one thing you have to keep in mind. I’m not sure if it’s a mistake in the case, or something special about SuperMicro boards, but see for yourself:

As you might see, the mounting holes indicated by the case (on the left) don’t match the ones that line up with the board. Instead of 9 screws, I could only tighten 6. Still enough to seat the board, but not the way it should be, of course.

Oh, before I forget to mention: see how the motherboard is installed on a separate metal plate? It can be fully configured (with CPU, cooler and memory) before you slide it into the case. I even managed to move a PCIe card between slots by just opening the side panel of the case. Pretty cool!

Case fans: Be quiet! Pure Wings 2

The case comes with some pre-installed fans on both the front and back. Being a Cooler Master newbie, I thought it’d be wise to get myself some quiet but effective fans.

Removal of the old ones and installation of the new ones was a piece of cake:

The 140mm models go into the front and the 80mm models go into the back (next to the PSU).

Motherboard: SuperMicro X10DRL-C

First of all, why SuperMicro? Well, mostly because it offers an out-of-the-box remote console (IPMI) over a dedicated NIC. Being able to power the environment off and on remotely allows me to keep power costs down and reduce part wear.

The board fits two physical processors and up to 1TB of RAM. The onboard storage controller is an LSI 3108, which is actually supported for Virtual SAN.

There’s also a model that offers 2x 10 Gbit Ethernet. Looking at the workloads I’m about to run, 4 Gbit should be enough. Once 10 Gbit switches get somewhat cheaper, I might consider moving over.

CPU: Intel Xeon E5-2609V4

I did a calculation on the number of vCPUs I was going to assign to my virtual workloads. Starting with a single CPU in each system, this CPU offered enough cores (8 each) for a reasonable price. It’s from the latest Intel family (v4), so it performs well even though it only offers 1.7 GHz per core.
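As a back-of-the-envelope sketch of that kind of calculation (the workload list and the 4:1 overcommit ratio below are hypothetical illustrations, not my actual inventory):

```python
# Rough vCPU sizing sketch for a 3-host lab (hypothetical workloads).
HOSTS = 3
CORES_PER_HOST = 8   # one E5-2609 v4 per host
OVERCOMMIT = 4       # an assumed vCPU:pCore ratio for light lab workloads

physical_cores = HOSTS * CORES_PER_HOST
vcpu_budget = physical_cores * OVERCOMMIT

# Hypothetical management-stack VMs and their vCPU counts
workloads = {"vCenter Server": 4, "nested ESXi x4": 8, "AD/DNS": 2, "jump host": 2}
assigned = sum(workloads.values())

print(f"pCores: {physical_cores}, vCPU budget at {OVERCOMMIT}:1 = {vcpu_budget}")
print(f"Assigned: {assigned} vCPUs ({assigned / physical_cores:.1f}:1 ratio)")
```

With numbers like these, even a single 8-core CPU per host leaves plenty of room to grow before the overcommit ratio gets uncomfortable.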


CPU fan: Be quiet! Pure Rock

CPUs get hot, so a good cooler is important. I picked the Be quiet! Pure Rock for my CPU cooling. Installation was very simple, unlike some coolers I’ve installed in the past.

Memory: Kingston ValueRAM DDR4

Ever since I’ve worked with computers, I’ve been using Kingston memory modules. Call it personal preference as I don’t think there are actual “bad” brands out there.

After doing my sizing, I decided to go with 64GB of RAM in each of my hosts. With four 16GB modules, I filled half the memory slots on my motherboard (the CPU1 bank). So the next upgrade will be an extra CPU and memory; this, however, is not necessary yet.
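A quick sketch of the kind of headroom math behind that sizing (the total VM memory demand below is a hypothetical placeholder, not my real inventory):

```python
# Cluster memory headroom sketch for 3 hosts at 64GB each.
HOSTS = 3
GB_PER_HOST = 64

cluster_gb = HOSTS * GB_PER_HOST          # raw cluster capacity
failover_gb = (HOSTS - 1) * GB_PER_HOST   # what's left if one host fails

vm_demand_gb = 110                         # hypothetical total VM memory demand
fits_normally = vm_demand_gb <= cluster_gb
survives_one_host = vm_demand_gb <= failover_gb

print(f"Cluster: {cluster_gb}GB, after one host failure: {failover_gb}GB")
print(f"Demand {vm_demand_gb}GB fits: {fits_normally}, survives host loss: {survives_one_host}")
```

Checking demand against the N-1 figure rather than the raw total is what tells you whether the "extra CPU and memory" upgrade is actually needed yet.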


PSU: Cooler Master V550

To deliver power to these howling beasts, I chose a 550W power supply by Cooler Master. It should be more efficient than cheaper PSUs, as it’s certified 80 PLUS Gold. I never really paid attention to those certifications; let’s see if it’s actually worth it in the long run.

What I’m more interested in is that it’s fully modular: no cables attached unless you want them to be. That saves a lot of space in your case and allows for better cooling airflow.

Storage: Flash!

I think it’s only a matter of time until all spinning disks are replaced with flash-based storage. Flash is so much faster, provides lower latency, produces less heat and consumes less power. Prices are still dropping quickly, so a comparison between spinning disks and flash is certainly advisable if you’re in the position of renewing or building from scratch.

I’ve installed one Samsung 840 Pro 128GB and one Samsung 850 Evo 500GB. Together, they provide flash cache and capacity to a Virtual SAN cluster. Enabling compression and deduplication (Virtual SAN 6.2 all-flash features) really gives you great bang for the buck: I’m currently saving almost 50% of storage capacity (300GB used versus 600GB).
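The savings math behind that figure is straightforward; here’s a tiny sketch using the rounded numbers from above:

```python
# Space savings from vSAN deduplication + compression, rounded figures.
logical_gb = 600   # what the VMs think they've written
physical_gb = 300  # what actually lands on disk after dedup + compression

savings_pct = (1 - physical_gb / logical_gb) * 100
ratio = logical_gb / physical_gb

print(f"Savings: {savings_pct:.0f}% (reduction ratio {ratio:.1f}x)")
```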

It’s important to note that to use the built-in LSI 3108 controller, you need special SAS breakout cables to connect the SuperMicro board to the SATA interfaces of your disks. Search for SFF-8643 (mini-SAS HD to 4x SATA) in case you want to use it as well.

I haven’t run extensive benchmarks yet, but I will soon and will publish the results in an upcoming blog article.


Network: HP NC380T

To provide sufficient network bandwidth, I’ve combined the internal 2x 1 Gbit NICs with this add-on PCIe card, giving me 4 Gbit of bandwidth. All four uplinks are configured on a single vSphere Distributed Switch, and I’m using Network I/O Control to make sure important traffic goes first.
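Roughly speaking, Network I/O Control shares divide available bandwidth proportionally when the uplinks are saturated. A small sketch of that proportional split (the share values below are hypothetical, not my actual NIOC configuration):

```python
# How NIOC shares split bandwidth under contention (hypothetical shares).
UPLINK_GBIT = 4.0  # 4x 1 Gbit uplinks on one distributed switch

shares = {"vSAN": 100, "vMotion": 50, "VM traffic": 100, "Management": 25}
total = sum(shares.values())

# Under full contention, each traffic type is guaranteed bandwidth in
# proportion to its shares; unused bandwidth flows back to busy types.
for kind, s in shares.items():
    print(f"{kind:>11}: {UPLINK_GBIT * s / total:.2f} Gbit minimum")
```

The point of shares over hard limits is that any traffic type can still burst to the full 4 Gbit when the links are otherwise idle.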

A 24-port Cisco C3750G switch is backing my lab.




I love green!

All boxes have been running for a few days now, performance is awesome and capacity still seems sufficient.

To wrap up this post, I’d like to mention that I will be doing all sorts of labbing with various combinations of VMware and third-party software. Expect fresh articles soon!
