Choosing a Hypervisor

So it came to the point where I needed to choose a hypervisor.

The free choices appeared to be VMware’s offerings (vSphere 4.1 or vSphere 5.0) or a free hypervisor such as Xen or VirtualBox.

VMware’s offerings are touted as Type-1 hypervisors. That is, the hypervisor itself runs on the bare metal. KVM and VirtualBox, on the other hand, are Type-2 hypervisors, relying on an underlying OS kernel.

This is where I began to hit VMware’s limits on the ‘free’ aspects of ESXi.

My server has 8 cores, 48GB of memory and a 2.7TB disk array.

ESXi v5.0 (or whatever they mean to call it now, deliberately blurring the feature set of their free product with their paid-for one) has a 32GB RAM limitation; anything above that is unusable to the OS.

Well, OK, I’m not going to waste 16GB of memory for the sake of being on a Type-1 hypervisor.

ESXi v4.1 doesn’t have the memory limitation, although it appears to have fewer features (I was really interested in PVLANs, for instance, but it appears those are paid-for-only features as well!). Nevertheless, I tried it out, and what do you know: it has a limitation of 750GB for local storage!

A further irksome issue I had with VMware in testing was setting up its networking. I essentially wanted to implement multiple subnets, with a single interface bridged to one of the physical interfaces on the host server. Should be simple enough, no? Well, think again: when I tried to do this, VMware insisted that I assign an IP address to the interface. Why? It’s a bridged interface; it doesn’t need an IP address, and it doesn’t necessarily even need to run IPv4. I might want to run IPX/SPX or IPv6 instead. In any case, a bridge does not need to be assigned any Layer-3 address.

I googled this networking faux pas and found other people asking the same question (why does it need an IP address?), with ill-informed answers along the lines of “because it’s the bridge interface”. Nonsense; these people don’t know their networking.
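For comparison, a plain Linux host will happily run a bridge with no Layer-3 address at all. A minimal sketch (the iproute2 tools and a physical interface named eth0 are my assumptions, not something from VMware’s setup):

```shell
# Create a bridge and attach a physical NIC to it.
# Note: no IP address is ever assigned to the bridge itself.
# It forwards Ethernet frames at Layer 2 regardless of which
# Layer-3 protocol (IPv4, IPv6, IPX/SPX, ...) rides inside them.
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth0 up
ip link set dev br0 up
```

The VMs’ virtual NICs get attached to br0 the same way; the bridge only needs an address if the host itself wants to talk on that segment.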

So, now I was left with either going with something like XenServer, or perhaps going with the hypervisor I actually use on my laptop (VirtualBox). I thought there would be an obvious downside to VirtualBox – “it’s clearly designed for desktop virtualisation”, I thought. I had a brief look at Xen, however, and started to get confused. So XenServer is Citrix, right? Citrix sells XenServer, but then there is XenSource; how does that fit in? Wikipedia says that Red Hat / CentOS 6 don’t support dom0, and what is dom0 anyway…

Well, too many questions, not enough answers, no good documentation.

So I opted for VirtualBox. It turns out it has a pretty good VBoxHeadless mode, which allows me to reach all my VMs’ consoles through a VRDP session (essentially Microsoft’s Remote Desktop Protocol, RDP). I intended nearly all my VMs to be Linux-based and primarily controlled via SSH, so this is fine for me.
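As a sketch of what that looks like in practice (the VM name “web1” is a placeholder of mine):

```shell
# Start a VM with no local display; its console is exposed
# over VRDP instead, so any RDP client can connect to it.
VBoxHeadless --startvm "web1" --vrde on

# The same can also be done through VBoxManage:
VBoxManage startvm "web1" --type headless
```

You then point an RDP client at the host’s VRDE port to get that VM’s console, which is only ever needed for installs or rescue work if the VMs are otherwise managed over SSH.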

There is also a companion project called phpVirtualBox, which provides a near-identical GUI to the VirtualBox interface via a web browser.

It should also be noted that, surprisingly, VirtualBox is probably more dynamically controllable via the command line than it is via either of the GUI interfaces, and it is very well suited to a roll-your-own VM hypervisor set-up.
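A rough sketch of provisioning a VM entirely from the shell with VBoxManage (the name, sizes and the eth0 bridge interface are all placeholders of mine):

```shell
# Register a new VM and give it memory plus a bridged NIC.
VBoxManage createvm --name "web1" --ostype Linux_64 --register
VBoxManage modifyvm "web1" --memory 2048 --nic1 bridged --bridgeadapter1 eth0

# Create a 20GB disk and attach it via a SATA controller.
VBoxManage createhd --filename web1.vdi --size 20480
VBoxManage storagectl "web1" --name "SATA" --add sata
VBoxManage storageattach "web1" --storagectl "SATA" \
  --port 0 --device 0 --type hdd --medium web1.vdi
```

Because every one of these steps is scriptable, cloning out a fleet of near-identical VMs becomes a short shell loop rather than a lot of GUI clicking.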
