After several years of growth and change in my personal infrastructure, I've finally hit enough bottlenecks on my N36L MicroServer to justify an upgrade. As usual, this turned into a full overhaul, enough fun to make it a proper project. However, with the complexity I've built up over the years of projects and different services I've introduced (along with the unsupported hack of an SSD in my existing server), this was always going to lead to a few oddities along the way.
Having built one before, and with prices still being pretty reasonable on the second-hand market, I settled on the HP ProLiant Gen8 MicroServer. I picked one up for £140 on eBay and needed an additional disk caddy for another £15, as the ones from the N36L aren't compatible.
The built-in Celeron G1610T is pretty puny (and power hungry), so I also opted to purchase a replacement CPU: a Xeon E3-1220L v2, which can be had for about $23 US on AliExpress and is the highest-spec CPU the board can take.
Finally, the memory is very specific: the system supports a maximum of 16GB of PC3-12800E DDR3 ECC unbuffered RAM. Thanks to the Xeon E3's memory controller, the speed of this RAM can be pushed up to 1600 MHz from the 1333 MHz allowed by the Celeron. Getting just the right memory set me back another £45. If you're going to do this, don't be suckered in by what looks like a deal and buy buffered RAM: the board will fail to POST and you'll need to buy more.
The process is pretty simple and luckily it worked much like I had hoped on paper, but as usual the devil is in the detail. My rough plan was to:
- Move the P410i dedicated RAID controller into the new server, placing the RAID battery in the void meant for the optical disk drive (ODD)
- Move each of the four 2TB internal HDDs, in order, into the new server and connect them to the RAID controller
- Move the SSD, which currently houses both the ESXi installation and the VM datastore, into the ODD void alongside the RAID battery
- Install an SD card in the internal SD card slot and install a fresh ESXi 6.5 on it
- Boot into the new ESXi, reconfigure the network, mount the datastores on both the SSD and the HDD RAID, re-register the existing VMs and finally erase the old ESXi installation
So this is all pretty straightforward, right?
Installing the Hardware
First of all, this solution has some built-in problems. Much like the N36L, the Gen8 doesn't actually support the use of an internal SSD, so we're going to be making some small hardware hacks to make that possible, specifically re-purposing the power rail and data bus intended for the internal Optical Disk Drive (ODD) to serve our SSD instead.
Almost instantly, I find a problem: getting into the void meant for the ODD, I find that the only power connector in there is actually a 4-pin Berg connector (better recognised by people my age as a Floppy Disk Drive, or FDD, connector). We can get a converter for about £3 on eBay and it doesn't need to be anything flash. We'll also need a standard SATA data cable; nothing flash again, the bus only runs at a maximum of 3 Gb/s, so don't go and buy anything special, any spare cable should do the job:
Once we're wired in, we can get the SSD and the RAID battery installed. These were obviously never meant to live anywhere in this server, so we'll need to secure them in the void in the top of the chassis, and there's no better way to do that when you're hacking things together than with some electrical tape. We'll run the blue SATA data cable and the black RAID battery cable down through the gaps in the case towards the backplane:
The CPU installation is nothing to shout about: a standard Intel LGA installation. Drop the CPU in, apply some thermal paste and replace the fairly generous heat sink. The RAID card takes up the single 16-lane PCIe socket, but works without any additional driver installation (because this isn't 2003):
Getting the Boot Working
The Gen8 comes with a built-in SD card slot which can be used as a boot device. In the BIOS this is part of the USB bus, so when setting the boot priority it's somewhat unusually set as USB Key (C:\):
There are also some very unusual oddities regarding the built-in Smart Array B120i embedded RAID controller (which, as I've discussed in the past, is a fake RAID controller). Since I want to use the P410i dedicated RAID controller instead, it would seem safe to disable the B120i embedded controller, however doing so also silently disables the SATA AHCI bus, meaning that our SSD data port will stop working (don't you love a good mystery setup? I lost a while working that out). TL;DR: don't disable the internal RAID controller just because you aren't using it; it'll break something else.
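If you want to sanity-check that the AHCI bus survived your BIOS changes, you can list the storage adapters and devices from the ESXi host shell once it's up. A quick sketch rather than a formal diagnostic; adapter and device names will vary by system:

```shell
# List all storage adapters ESXi can see; with the B120i left enabled,
# the ODD SATA port should show up under the AHCI driver
esxcli storage core adapter list

# List attached devices -- the SSD should appear here if the port is live
esxcli storage core device list | grep -E 'Display Name|Devfs Path'
```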
In the BIOS System Settings we can configure the priority of the USB Key boot (to look for an external USB device, internal USB device, SD card, or all of the above in a specific order). Since we need to get ESXi onto the SD card, I booted the installer from an external USB stick first; after some more trial and error, this only works from the front, top USB port, which I didn't find documented anywhere.
Now knowing all this, we can finally get to the ESXi installer:
Boot and Configuration
ESXi installed without any additional tweaking, and the datastores were presented to the host. The management network was configured to the same settings as the old host, and disabling IPv6 prompted a reboot, which is the ideal time to boot into the iLO (this requires a pretty fast strike of F8 during the server's boot process). Without a reset and reconfiguration of the iLO, the module wasn't communicating with the rest of the system.
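For reference, the management-network changes can also be made from the ESXi host shell rather than the DCUI. A rough sketch, assuming vmk0 is the management interface and the addresses are placeholders for your own:

```shell
# Set a static IPv4 address on the management vmkernel interface
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.10 -N 255.255.255.0

# Set the default gateway
esxcli network ip route ipv4 add --gateway 192.168.1.1 --network default

# Disable IPv6 host-wide (this is the change that prompts the reboot)
esxcli network ip set --ipv6-enabled=false
```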
After a reboot, vSphere loaded with a known bug in 6.5 where the error "An error occurred, please try again" appears every time you log in or refresh, with both new and existing VMs, using either Chrome or Firefox (Chromium and IE still work). This bug can be resolved with a patch located here.
To install, first enable SSH on the ESXi Host and follow the below commands:
#--Copy patch to the /tmp dir on the ESXi Host (from a Linux Shell), on Windows use WinSCP
scp esxui-signed-12086396.vib root@<esxi_host_ip_address>:/tmp

#--Connect to ESXi Host
ssh <esxi_host_ip_address>

#--Install patch, BE SURE TO USE FULL PATH E.G. /tmp/esxui-signed-12086396.vib
esxcli software vib install -v /tmp/esxui-signed-12086396.vib
The vNetwork had to be manually rebuilt to meet the same specification as the old host, then the VMs were all re-registered and booted without issue (with the correct Port Group added back to each VM). It was also important to inform ESXi that the VMs had been copied, not moved, when prompted on first boot.
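Re-registering the VMs can also be done from the host shell, which is quicker than clicking through the datastore browser for each one. A sketch, where the datastore and VM names are placeholders:

```shell
# Register an existing VM from its .vmx file; prints the new VM ID
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx

# List the registered VMs to confirm they all made it
vim-cmd vmsvc/getallvms

# On first power-on, answering "I Copied It" to the moved/copied question
# makes ESXi generate a fresh UUID and MAC rather than reusing the old ones
```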
As for storage, the RAID array was detected without issue by the RAID controller during boot, and it was recognised, along with the SSD, by VMware. Finally, the old ESXi partitions were deleted without issue, freeing up additional space for the VM datastore:
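Clearing out the old ESXi partitions can be done from the host shell with partedUtil. A sketch only; the device path here is a placeholder, and it's worth double-checking the partition table before deleting anything:

```shell
# Identify the disk, then inspect its partition table first
ls /vmfs/devices/disks/
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____MySSD

# Delete a partition by number (repeat for each old ESXi partition)
partedUtil delete /vmfs/devices/disks/t10.ATA_____MySSD 5
```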
A successful migration after a few (okay, a lot of) oddities and hiccups. Having got there, it's running at well over four times the speed of the old server: