Tag Archive for 'Lab Manager 4'

Linked Clones: Lab Manager vs vCloud Director v1.5

One of the big “features” added to vCloud Director to give it “parity” with the outgoing Lab Manager is the re-introduction of Linked Clones.  These Copy-On-Write (CoW) disks produce VMs that are actually little more than differencing disks layered on a base disk.  In Virtual Desktop Infrastructure (VDI) solutions this is a common way to conserve disk space across all the XP/W7 desktops you are spawning, and it lets you better utilize small, expensive SSD drives.

Well, in LM and vCD, it is supposed to save space too.  One beef I have with the current implementation in vCD is that it is actually worse than LM’s.  The root of the issue is that in Lab Manager you could cleanly create a VM from a template: it would stay thin provisioned and act just like a classic VM, with no linked clones and no CoW.  In vCD, if your Org has fast provisioning enabled, you always get a linked clone no matter how you provision the VM.  The same is true for consolidations: in LM the result was a clean VM, while in vCD you still get a linked clone with a chain length of one.

In the long run, this is going to negatively impact disk space utilization.  With Linked Clones you are forced to always write to the differencing disk, whereas LM offered a nifty hybrid approach that let a freshly provisioned or freshly consolidated VM overwrite its base disk directly.  This is a step backwards that I hope VMware will address.
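
To see why always-on linked clones hurt over time, here is a minimal back-of-the-envelope model (my own illustration, not anything from the vCD API; the sizes are made up).  With CoW, every guest rewrite lands in the delta disk while the frozen base stays allocated, so a chain-of-one clone eventually pays for both:

```python
def clone_footprint_gb(base_gb, rewritten_fraction):
    """Space consumed by one linked clone's delta disk after the guest
    has rewritten the given fraction of the base (CoW: rewrites land
    in the delta, the shared base is never touched)."""
    return base_gb * rewritten_fraction

# A "clean" VM that can overwrite its own base disk (the LM hybrid
# behavior) never exceeds base_gb.  A vCD chain-of-one linked clone
# pays for the frozen base PLUS its growing delta:
base = 40
delta = clone_footprint_gb(base, 0.75)  # guest has rewritten 75% of the disk
print(base + delta)  # 70.0 GB for what a clean VM stores in 40 GB
```

The longer the VM lives, the closer `rewritten_fraction` creeps toward 1.0, and the worse the chain-of-one clone compares to a plain flat disk.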

–Nat

Lab Manager Blog

I’ve had a love/hate relationship with Lab Manager so far. It has been a great tool for our developers, but it is not a complete solution. It wantonly wastes storage (its primary constraining resource), and it makes server maintenance much more of a chore than necessary. I stumbled across this good blog on it now, of course, just as it enters its twilight. Hopefully good information on vCloud Director will be forthcoming.

http://bsmith9999.blogspot.com/2011/03/lab-manager-4.html

–Nat

VMware Lab Manager 4.01 Review

This isn’t going to interest most folks who are reading my blog.  I need to get this written out though, because some guy was looking for Lab Manager feedback and couldn’t find constructive criticism.  Here is mine, and I am sure Google will index it.

As of 4.01, VMware vCenter Lab Manager has its uses, but it has huge gaps:

1) A total lack of useful storage resource monitoring tools and information. You can’t export storage usage, linked clone tree structures, etc. If you aren’t familiar with CoW disks and linked clone chains, you soon will be, and you’ll be wondering about this in a big way when you have to keep buying huge chunks of SAN disk with little hard data.

2) No existing backup solutions. Want to back up your library entries? Enjoy exporting them manually and hitting them one by one. SAN replication IS NOT a backup mechanism, folks. Backup is to tape or similar.

3) Very little in the way of customization. We have users who constantly fill up LUNs and IP pools while there is open space in other LUNs and pools, because they just use the defaults. We’d like to set the default to blank in many cases, but that isn’t available.

4) Redeploying VMs nets them a new IP. This is a huge issue at times if you have IP-sensitive configurations, especially when dealing with fencing.

5) Active Directory is a mess with fenced VMs, etc. Not really Lab Manager’s fault, but that’s the state of things.

6) Scalability. With host spanning networks you are limited to 512 distributed switch port groups, which the fenced configurations consume. In large deployments you are likely to collide with this limit, necessitating another vCenter/Lab Manager instance and fragmentation of resources.

7) Maintenance issues. Maintenance Mode, even with host transport networks enabled, is borked because of the little VM that Lab Manager locks to each host. This is fairly ridiculous and convolutes what should be a very straightforward process.

8) Get ready to work with some enormous LUN sizes compared to what you are likely used to. We have 2TB FC LUNs, and the only one we extended to 4TB is having locking issues, etc. NFS is the way you need to go.

9) Enjoy adding another Server 2003 instance to your infrastructure, because 2008 isn’t supported as a host OS for the Lab Manager services.
  Oh yeah, all your important data lives in a little SQL Server Express database on that server too. This is enterprise software, right?
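
Point 1 deserves a concrete picture.  The report Lab Manager won’t give you is simple: given each disk’s parent (the linked clone tree), total the space hanging off each base.  A sketch of that roll-up, with invented disk IDs and sizes since the product exposes none of this:

```python
# Hypothetical linked-clone tree: disk_id -> (parent_id or None, size_gb).
# In real life you would have to scrape this out of the datastores
# yourself; Lab Manager offers no export of it.
from collections import defaultdict

disks = {
    "base":  (None,   40),
    "cfg1":  ("base",  6),
    "cfg2":  ("base", 11),
    "cfg2a": ("cfg2",  3),   # clone of a clone: chain length 2
}

def root_of(disk_id):
    """Walk parent pointers up to the base disk of the chain."""
    while disks[disk_id][0] is not None:
        disk_id = disks[disk_id][0]
    return disk_id

usage_per_tree = defaultdict(int)
for disk_id, (_parent, size_gb) in disks.items():
    usage_per_tree[root_of(disk_id)] += size_gb

print(dict(usage_per_tree))  # {'base': 60} -- 60 GB tied to one base disk
```

Twenty lines of reporting like this would turn the SAN purchasing conversation from guesswork into hard data.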

THE biggest issue I have with Lab Manager is that it accesses the ESX servers directly. Do us all a favor and use vCenter as an abstraction layer so we can actually see what the crap is going on and rely on a proven set of administration tools. Ideally, Lab Manager would be a vCenter plugin and wouldn’t be harboring its own database, etc.

Bottom line: make sure your needs actually line up with what Lab Manager does before committing to it.

Original Thread:

http://communities.vmware.com/

–Nat

Frustration with Lab Manager

A large part of my new job has been helping architect a large VMware Lab Manager 4 implementation.  This has proven to be fairly annoying when deploying Lab Manager in a big way.  It is important to keep in mind that the main design challenge and constraint of nearly all virtualization solutions is the back-end disk, configured in VMware as “Datastores.” Some of the main frustrations we are currently facing:

  • Lab Manager blatantly disregards VMware’s own best practices when it comes to disk allocation – we are talking about 2TB LUNs as a minimum, and facing the prospect of VMFS extents.  Horrible!  You are almost forced to use NFS, which brings more support complications to the table because you can no longer rely on calling VMware as the primary support vendor.   How can this be, when VMware sells and supports it?  You would think VMFS would be the recommended file system for any VMware solution, at least until they provide the ability to create and maintain NFS Datastores from within vCenter.
  • You can’t use thin provisioning in Lab Manager. Arguably, this would be more useful than Linked Clones, which just create more management headaches than they are worth in a bigger deployment.  I am not alone in thinking this.  We are deploying unique VMs with ~200-500GB of auxiliary disk.  Having all of this thick provisioned up front is harmful, especially as users can make clones of these – or even worse, check them into the library, where they would consume that space again and be *required* to live on the same Datastore.
  • Even though the Lab Manager devs are well aware of how they are bound by datastore limitations, and know full well how vSphere overcomes many of those challenges, they don’t provide a way to seamlessly use Storage vMotion either within Lab Manager or external to it.
  • Lab Manager could provide automatic load balancing across Datastores and Networks, but it doesn’t.  Instead we have to trust users to do this for themselves.  That’s just silly; the users don’t care about these things, and no amount of training will get them to do it on a consistent basis.  I’ve already mentioned that we can’t fix overloaded Datastores without user impact, and Lab Manager doesn’t even help us preempt the problem.
  • It would be great if we could take action on flags – for example, once a datastore reaches 70% full, disable the ability to create VMs on it.  That would help keep us away from the situation where a LUN drops offline because it is packed to the gills.
  • Disable Linked Clones altogether.  They make things more complicated than they are worth with 100s of self-provisioning users and tens of Datastores.  They also severely inhibit VM mobility.
  • A way to have a centralized template store that admins can put VMs on but no one else can.
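
The load-balancing and 70%-flag items above aren’t asking for much.  Here is roughly the policy we want, sketched in a few lines of Python (the datastore names, capacities, and the 70% figure are all hypothetical; this is the logic we wish Lab Manager ran, not anything it actually exposes):

```python
# Refuse datastores past a fill threshold and steer new VMs to the
# least-loaded eligible one -- the automation Lab Manager leaves to users.
FULL_THRESHOLD = 0.70  # stop provisioning once a datastore is 70% full

# datastore name -> (capacity_gb, used_gb); made-up numbers for illustration
datastores = {
    "lun01": (2048, 1900),   # 93% full: should be closed to new VMs
    "lun02": (2048, 1100),   # 54% full
    "nfs01": (4096, 2500),   # 61% full, most free space
}

def eligible(stores):
    """Datastores still below the fill threshold."""
    return {name: (cap, used) for name, (cap, used) in stores.items()
            if used / cap < FULL_THRESHOLD}

def place_vm(stores):
    """Pick the eligible datastore with the most free space."""
    ok = eligible(stores)
    if not ok:
        raise RuntimeError("all datastores past threshold; block provisioning")
    return max(ok, key=lambda name: ok[name][0] - ok[name][1])

print(place_vm(datastores))  # nfs01
```

Users just take the default; a placement rule like this would do the right thing without asking them to care.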

The items above are really inhibiting our ability to make good use of Lab Manager.  It is clear that this piece of software was not built with large-scale deployment in mind.  It also features too many design compromises that hamper the overall value of running vSphere as a whole.  This is epitomized in a conversation I just had with my boss: when talking about Lab Manager, we are constantly talking about the problems it causes us; with vSphere, we talk about how the technology lets us overcome challenges.

We need a solution not a constraint, dammit.

–Nat