From the title, you might think I would be griping about Hyper-V, Microsoft's server virtualization technology. The truth is, I am only frustrated by the prerequisites of Hyper-V, one of which is particularly troublesome: you need a Virtualization Technology (VT) enabled processor from either Intel or AMD to run Hyper-V.
This effectively makes it impossible to have a virtualized Hyper-V test/development environment, ensuring that all of your Hyper-V hosts are indeed physical. With vSphere, it is entirely possible to run a whole cluster on a single machine with enough memory, using a product like VMware Workstation. In that scenario, you are limited to 32-bit guest operating systems on that cluster, because vSphere requires the VT bit for 64-bit virtual machines. In the end, that is A-OK, because what you are really learning is how to set up and configure the hosts along with vCenter. Hyper-V is a fairly complicated beast, so it would be nice to be able to walk through its entire configuration (not to mention test configuration changes) in a virtual environment.
It’s been a frustrating day at work because of this requirement. Supposedly, Sun’s VirtualBox could pass the VT bit capability along to the VMs you were running, theoretically enabling you to run a Hyper-V host inside a VM. I am here to tell you that it doesn’t work, at least in the latest 3.1 release of VirtualBox. Other than that, I have no beef with VirtualBox; it’s actually pretty speedy and simple.
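For what it’s worth, the way I sanity-check whether a machine (or a guest) is actually seeing the VT extensions is just to look at the CPU flags. Here’s a minimal sketch of that check, assuming a Linux system with the usual /proc/cpuinfo layout; a Windows guest would need something like Sysinternals’ Coreinfo instead.

```python
#!/usr/bin/env python3
"""Rough check for hardware virtualization flags (Linux only)."""

def has_vt_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return ("vmx" in flags) or ("svm" in flags)
    return False

if __name__ == "__main__":
    if has_vt_flags():
        print("VT extensions are visible to this OS")
    else:
        print("No VT extensions exposed here")
```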
I was listening to my team discuss the configuration of a new virtual appliance, based on SUSE Enterprise, that would be created and delivered to customers in .ovf format. The software engineers had requested that the size be as small as possible for delivery reasons. One of the heated discussion items became how much swap space should be created. There are a lot of opinions on this, and much of the “knowledge” the team had pointed to roughly 1.5x the amount of RAM allocated to the VM. In this case, the appliance is preconfigured to use 4GB of RAM, so 6GB to 8GB seemed to be the answer, a disappointment because the engineers had hoped the entire appliance would fit in 10GB.
This interested me, because I really didn’t know the answer. Obviously, it is very important in a consolidated server environment to size these things right, because your swap space is actually very expensive SAN space. This article and its comments were very interesting on the topic. It boiled down to this formula as the easy standard:
- Swap space = RAM size (if RAM < 2GB)
- Swap space = 2GB (if RAM > 2GB)
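For the 4GB appliance in question, the two rules of thumb land pretty far apart, so here is a throwaway sketch of the arithmetic (the function names are just mine):

```python
def article_swap_gb(ram_gb):
    """The article's easy standard: swap equals RAM below 2GB, capped at 2GB above."""
    return ram_gb if ram_gb < 2 else 2

def old_school_swap_gb(ram_gb, factor=1.5):
    """The traditional 1.5x-RAM sizing the team had in mind."""
    return ram_gb * factor

if __name__ == "__main__":
    ram = 4  # the appliance's RAM allocation in GB
    print(f"article rule: {article_swap_gb(ram)}GB swap")       # 2GB
    print(f"1.5x rule:    {old_school_swap_gb(ram)}GB swap")     # 6GB
```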
The later comments were pretty good, though: they pointed out that Linux can be a bit dangerous in that when you run out of swap, processes start getting killed off (by the kernel’s OOM killer) to free up memory. There are also use cases where dipping into swap is OK, or at the very least preferable to random processes being killed. Another thing to worry about is whether you need to collect a kernel dump and where that might go. It gets interesting when you treat disk as an expensive resource. At home or in a dedicated server environment, disk space is pretty cheap, but in enterprise virtualization, where you might spin up tens or hundreds of the same image for testing, disk space is really expensive!
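If you do go lean on swap, it is worth keeping an eye on how much of it the appliance actually touches before processes start getting killed. A minimal sketch of that check, assuming a Linux guest where /proc/meminfo is available:

```python
def swap_usage_kb(meminfo_path="/proc/meminfo"):
    """Read SwapTotal and SwapFree (in kB) from /proc/meminfo and return (used, total)."""
    values = {}
    with open(meminfo_path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            if key in ("SwapTotal", "SwapFree"):
                values[key] = int(rest.split()[0])  # value is reported in kB
    return values["SwapTotal"] - values["SwapFree"], values["SwapTotal"]

if __name__ == "__main__":
    used, total = swap_usage_kb()
    print(f"swap in use: {used / 1024:.0f} MiB of {total / 1024:.0f} MiB")
```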
Windows appears to be down to about 1x memory size for its page file now, which is good. I still go with 1.5x there, myself.
If you happen to be curious about what they settled on for swap space with 4GB of RAM, the answer is 5GB.