Monthly Archives: December 2009

Interesting Insights in Linux Swap Configuration

I was listening to my team discuss the configuration of a new virtual appliance, based on SUSE Enterprise, that would be created and delivered to customers in .ovf format.  The software engineers had requested that the image be as small as possible for delivery reasons.  One of the heated discussion items became how much swap space should be created.  There are a lot of opinions on this, and much of the “knowledge” the team had pointed to about 1.5x the amount of RAM allocated to the VM.  In this case, the appliance is preconfigured to use 4GB of RAM, so 6GB to 8GB seemed to be the answer, a disappointment because the engineers had hoped the entire appliance would fit in 10GB.

This interested me, because I really didn’t know the answer.  Obviously, it is very important in a consolidated server environment to size these things right, because your swap space is actually very expensive SAN space.  This article and its comments were very interesting on the topic.  It boiled down to this formula as the easy standard:

  1. Swap space = RAM size (if RAM < 2GB)
  2. Swap space = 2GB (if RAM > 2GB)
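
Put as a quick helper, it comes out to the snippet below.  This is just my own restatement of the rule above, not anything from the article:

    # My own restatement of the sizing rule above, nothing official.
    def suggested_swap_gb(ram_gb):
        """Match RAM below 2GB of memory; cap swap at 2GB above that."""
        return ram_gb if ram_gb < 2 else 2

    # The 4GB appliance under discussion would get 2GB of swap by this rule.
    print(suggested_swap_gb(4))  # -> 2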

The later comments were pretty good though, and they pointed out that Linux can be a bit dangerous here: when you run out of swap, the kernel starts killing off processes to free up memory.  They also pointed out that there are scenarios where using swap space is OK, or at the very least preferable to random processes being killed.  Another thing to worry about is whether you need to collect a kernel dump and where that would go.  It gets interesting when you treat disk as an expensive resource.  At home or in a dedicated server environment, disk space is pretty cheap, but in enterprise virtualization, where you might spin up tens or hundreds of the same image for testing, disk space is really expensive!
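
If you want to keep an eye on how close a box is to that cliff, the numbers are sitting right in /proc/meminfo; a throwaway snippet like this one (mine, not from the article) will read them out:

    # Quick-and-dirty swap usage check, parsed from /proc/meminfo (values in kB).
    def swap_usage():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.strip().split()[0])  # numeric part, in kB
        total, free = info["SwapTotal"], info["SwapFree"]
        used_pct = 100.0 * (total - free) / total if total else 0.0
        return total, free, used_pct

    total_kb, free_kb, used_pct = swap_usage()
    print("SwapTotal %d kB, SwapFree %d kB, %.1f%% used" % (total_kb, free_kb, used_pct))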

Windows appears to be down to about 1x memory size for swap now, which is good.  I still go for 1.5x there, myself.

If you happen to be curious about what they settled on for swap space with 4GB of RAM, the answer is 5GB.

–Nat

Converting a 2008 R2 Server to Virtual

Get this – that doesn’t work yet!  I hit the same error that popped up in this thread, and evidently the workarounds are not too pretty.  Word to the wise: create your R2 VM fresh on the virtualization platform you want to be using.

Reading through the release notes for vSphere 4U1, it appears that Windows 7 and Server 2008 R2 are at least officially supported guest operating systems now.  So get right on that vCenter update, then your vSphere server updates.  You know the drill!  At least these should be the last significant OS releases from Redmond for a couple of years 🙂

–Nat

Frustration with Lab Manager

A large part of my new job has been helping to architect a large VMware Lab Manager 4 implementation.  Deploying Lab Manager in a big way has proven to be fairly annoying.  It is important to keep in mind that the main design challenge and constraint of nearly all virtualization solutions is the back-end disk, configured in VMware as “Datastores.”  Some of the main frustrations we are currently facing:

  • Lab Manager blatantly disregards VMware’s own best practices when it comes to disk allocation – we are talking about 2TB LUNs as a minimum and facing the issue of using VMFS extents.  Horrible!  You are almost forced to use NFS, which brings more support complications to the table because you can no longer rely on calling VMware as the primary support vendor.  How can this be when VMware sells and supports it?  You would think VMFS would be the recommended file system for any VMware solution, at least until they provide the ability to create and maintain NFS Datastores from within vCenter.
  • You can’t use thin provisioning in Lab Manager.  Arguably, this would be more useful than Linked Clones, which just create more management headaches than they are worth in a bigger deployment.  I am not alone here in thinking this.  We are deploying unique VMs with ~200-500GB of auxiliary disk.  Having all of that thick provisioned up front is wasteful, especially as users have the ability to make clones of these – or even worse, check them into the library, where they would take up that space again and be *required* to be on the same Datastore.
  • Even though the Lab Manager devs are well aware of how they are bound by datastore limitations, and know full well how vSphere overcomes many of those challenges, they don’t provide a way to seamlessly use Storage vMotion either within Lab Manager or external to it.
  • Lab Manager could provide automatic load balancing across Datastores and Networks, but it doesn’t.  Instead we have to trust users to do this for themselves.  That’s just silly; the users don’t care about these things, and no amount of training will get them to do it on a consistent basis.  I’ve already mentioned that we can’t fix overloaded Datastores without user impact, and Lab Manager doesn’t even help us preempt the problem.
  • It would be great if we could take action on flags – for example, once a Datastore reaches 70% full, disable the ability to create VMs on it (see the sketch after this list).  That would help keep us away from the situation where a LUN drops offline because it is packed to the gills.
  • Disable Linked Clones altogether.  They make things more complicated than they’re worth with hundreds of self-provisioning users and tens of Datastores.  They also severely inhibit VM mobility.
  • A way to have a centralized template store that admins can put VMs on but no one else can.
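
To make the Datastore-fullness idea concrete, here is the kind of check I mean, sketched in plain Python with made-up numbers.  Lab Manager doesn’t expose anything like this today, and I’m not pretending to know its internals – in practice you would have to pull capacity and free space out of vCenter yourself.

    # The guard rail I wish Lab Manager had, as a plain Python sketch.
    # The capacity/free numbers are hypothetical stand-ins, not a real API.
    FULL_THRESHOLD = 0.70  # stop allowing new VMs once a Datastore is 70% full

    def provisioning_allowed(capacity_gb, free_gb, threshold=FULL_THRESHOLD):
        """Return True while the Datastore is still below the fullness threshold."""
        used_fraction = (capacity_gb - free_gb) / float(capacity_gb)
        return used_fraction < threshold

    # Hypothetical example: a 2TB LUN with 500GB free is about 76% full, so block it.
    print(provisioning_allowed(2048, 500))  # -> False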

The items above are really inhibiting our ability to make good use of Lab Manager.  It is clear that this piece of software was not built with large-scale deployment in mind.  It also features too many design compromises that hamper the overall value of running vSphere as a whole.  This is epitomized in a conversation I just had with my boss: when talking about Lab Manager, we are constantly talking about the problems it is causing us; with vSphere, we are talking about how the technology allows us to overcome challenges.

We need a solution, not a constraint, dammit.

–Nat

Not your typical Texas Roadhouse

Kristin and I were looking for a place to eat and this place caught our eye.  I mean, it’s awesome in the U.S. right?  Maybe a Manila franchise?

Not quite authentic.

So, not really.  At least they had $1 beer!

There we are, eating outside in November!

I was hoping for real deal Chicken Strips...

Ultimately it was tasty, but we should have known it wasn’t going to be as good as the real thing.  There was no line and no tasty buns.  They did have some pretty tiny peanuts that were free, though…

–Nat

Huge malls in Manila

It is interesting that here in Manila, one of the recreational activities is “malling.”  The malls are epic in size (think Mall of America, but almost twice the size) and connected to other malls and shopping areas.  I was looking for a movie theater in one of these labyrinthine structures and ended up in the wrong mall by going up the wrong escalator.  Like the MOA, many of the stores are in the mall multiple times.  They don’t seem to have the multiplexes like the U.S. has, but if you are willing to walk to a different mall, the big movies are still playing every twenty minutes or so.

These malls are seething with people, and every mall boundary is marked by a security checkpoint.  It’s a little unsettling, but being a big white guy, they don’t even check me.

–Nat