Category Archives: VMware

Anything and everything about VMware products.

VMware Technology Network Subscription – Bring it Back!

One of the biggest beefs I’ve had with VMware over the last few years – and I apologize to everyone I’ve already ranted to about this – is that they don’t have a program like Microsoft’s TechNet.

What’s so great about TechNet, you might ask?

With the TechNet subscription you get access to everything that Microsoft offers – with full retail keys.  This isn’t some time-bombed trial; this is the real deal.  You get access to all of their software, from the distant past right up through early release betas – like the upcoming Windows 8 and Server 2012.  This is essential for long term test VMs and for testing software with complicated, involved installs like Active Directory and Exchange, for example.  You also get access to the cream of their productivity crop, like Project and Visio.  Best of all, it’s “only” $200 to start and $150 to renew.  If that sounds expensive, remember that a single Server 2008 R2 license alone can run you $700, and the productivity software can also run hundreds of dollars.

Why would Microsoft sell this subscription if they could get so much more money by forcing you to buy real licenses for each project?  It’s pretty simple, really.  As individuals, we are not going to buy this software at those prices; we would turn to free or cheaper alternatives instead.  Microsoft knows that a TechNet subscription is something only a very technical person is going to buy – IT professionals, in other words.  What IT professionals use at home directly influences what they use at work, and business purchases are Microsoft’s bread and butter; they’ll tell you this to your face no matter how much it seems like they are all about conquering the home PC.  Having your home PCs run Windows and Office is just another way to keep business running what their employees already know and can be efficient with.

Back to the VMware Technology Network Subscription (VMTN).  VMware used to have a similar program that let you use full versions of their software in your home lab, and many credit that program with the rapid adoption of VMware in the Enterprise space – since you could play with it on the cheap and gain confidence in it, it made sense to champion it within your organization.  VMware discontinued it about five years ago (or so…) when they pointed out that GSX Server (later VMware Server) and VMware Player were free products that could be used instead.  GSX has totally gone the way of the dinosaur now, and while VMware Player is immensely useful for some tasks, it doesn’t let you play with the Enterprise features that you might actually want from VMware.

VMware does offer a free version of their bare metal hypervisor, ESXi.  The problem?  This hypervisor doesn’t let you experiment with or implement any of the Enterprise features that differentiate VMware from the rest – and it doesn’t even allow for scripting automation, another of VMware’s strengths.  That very much limits the usefulness of the platform.  It should be noted that you can easily get sixty-day trials of just about everything VMware offers online, but the issue there is that the “big” offerings like SRM and VDI are so intricate to set up that it can easily take longer than sixty days to get them fully off the ground if you are only working on them in your free time.  You also have to completely scrap the entire setup, from ESXi to vCenter to these add-ons, as they are all tied to that same sixty-day time frame.  Want to do it again?  You need a new email address to sign up for the trial again!

The elephant in the room is this – Microsoft is very serious about taking VMware’s ball and going home with it.  Virtualization was the #1 focus of Server 2008 R2 SP1, and it appears that Server 2012 will continue the trend.  IT professionals using TechNet already have easy access to Hyper-V in all of its glory (and the 2012 version is looking much easier and sweeter than 2008 R2 SP1).

Even with my VCP and years of VMware experience, along with a fairly sizable investment in specialized RAID hardware for native disk redundancy in my home lab, Server 2012 looks mighty attractive for my home platform.  It doesn’t need to be this way, VMware.

I am not the only one who thinks so.

Agree?  Raise your voice.  VMware is missing a big opportunity here, and anyone invested in VMware at the technical-expertise level or at the shareholder level knows the dangers of competing with Microsoft (just ask Novell or the other companies Microsoft has left bloodied in their wake).  Do the right thing, VMware.  Let me pay you a little money so I can recommend your products to those with the big checkbooks.

–Nat

Linked Clones: Lab Manager vs vCloud Director v1.5

One of the big “features” added to vCloud Director to give it “parity” with the outgoing Lab Manager is the re-introduction of Linked Clones.  These Copy-On-Write (CoW) disks produce VMs that are actually little more than differencing disks layered on a base disk.  In Virtual Desktop Infrastructure (VDI) solutions, this is a common way to preserve disk space across all of the XP/Windows 7 desktops you are spawning, and it lets you better utilize small, expensive SSD drives.

Well, in LM and vCD it is supposed to save space too.  One beef I have with the current implementation in vCD is that it is actually worse than LM’s.  The root of the issue is that in Lab Manager you could cleanly create a VM from a template; it would stay thin provisioned and act just like a classic VM – no linked clones and no CoW.  In vCD, you always get a linked clone no matter how you provision the VM if your Org has fast provisioning enabled.  The same is true for consolidations: in LM you get a clean VM as the result, while in vCD you still get a linked clone with a chain length of one.
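
If you want to check whether a given VM’s disk is really standalone or still part of a chain, one quick-and-dirty test (a sketch only – it assumes you have shell access to the host, and the placeholder paths below need to be swapped for your own) is to look at the disk descriptor on the datastore:

# From an ESX/ESXi shell, in the VM's folder on the datastore:
cd /vmfs/volumes/<datastorename>/<vmname>
grep -i parent <vmname>.vmdk
# A standalone disk shows parentCID=ffffffff and nothing else.
# A linked clone descriptor also carries a parentFileNameHint line
# pointing at the base disk it chains from.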

In the long run, this is going to hurt disk space utilization, because with linked clones you are forced to always write to the differencing disk.  LM actually offered a nifty hybrid approach that let writes go to the base disk when the VM was freshly provisioned or freshly consolidated.  This is a step backwards that I hope VMware will address.

–Nat

ESXi Whitebox Hosting

For quite some time, the blog has been running as an Ubuntu VM inside VMware Server on Windows Server 2008.  If you said “yuck!” – you’re right.  It is a decidedly 2009 setup that I wanted to update before a rapidly developing new Juchems makes working on projects like this a luxury.

Moving to VMware ESXi means the VMs will run much closer to the hardware for better performance, and the most recent release has good support for many newer operating systems as guests.  It will also be quite “headless” – no monitor, keyboard or mouse needed for 99.99% of the life of the server.

Perhaps the nicest part of using VMware Server versus ESXi was that it was an “all in one” solution where I could work on VMs without installing anything on the rest of my computers.  The downside was that it wasn’t particularly speedy, wasn’t updated for newer operating systems, and had one too many layers of “stuff”: VM / VMware Server / Windows OS, versus the new VM / ESXi setup.

The “Old” hardware:

AMD Athlon 64 X2 5400+ (2.8GHz dual core)

8GB DDR2 800MHz (4x2GB)

Gigabyte nVidia 430 Chipset ATX motherboard

160GB Seagate 7200.9 (the 160GB Western Digital Raptor died a year or so ago…)

Onboard video & LAN


The “New” hardware:

Core i3 2100 (3.1GHz dual core + Hyper-Threading)

8GB DDR3 1333MHz (2x4GB)

Gigabyte H61 mATX motherboard

250GB Samsung Spinpoint (will be joined by a 750GB Western Digital Green shortly)

On-CPU video / Intel PRO/1000 NIC


The old case, Seasonic 330W power supply, and fan setup were kept as-is.


Having ESXi 4.1 U1 install without much issue was quite a relief.  The onboard NIC was not detected with the default ESXi driver set, but the Intel NIC was picked up without any hassle.
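
If you hit the same thing, it is worth checking exactly which NICs ESXi actually detected before digging further.  From the console (or SSH, if you have enabled it) the standard listing command is:

esxcfg-nics -l    # lists each detected vmnic with its driver, link state, speed and MAC

Anything the driver set doesn’t know about simply won’t show up in that list – which is your cue to drop in a supported card like the Intel PRO/1000.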

I think 8GB is a sweet spot with a dual core processor.  With more RAM I would have felt an urge to go with a quad core – and spend more money.  The motherboard only has two RAM slots, so I am safe from that temptation.  I think I’ll be able to run about ten VMs on this guy, though what they would all be I can’t imagine right now.  Five with good performance will meet my needs for the foreseeable future.

The Migration

I first copied the VMs locally to my main workstation and then tried to simply upload them to the ESXi server.  This resulted in a SCSI error when I tried to power them on – failure.  The next step was using VMware Converter Standalone to change the VMs from “workstation” to “server” VMs, and I have to say that this tool from VMware works great in that regard.

The Ubuntu VM was stubborn: eth0 had the static IP configured, but the new virtual network card came up as eth1, so the blog was down for an additional ~20 minutes while I sorted that out.  The Server 2008 R2 VM moved over easy peasy but needed a VMware Tools update.
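
For anyone hitting the same eth0/eth1 shuffle, the usual culprit on Ubuntu releases of that era is the udev persistent net rules file, which pins the old NIC’s MAC address to eth0.  A rough sketch of the fix, assuming the stock file locations and a static address configured in /etc/network/interfaces:

# Remove (or edit) the stale rule so the new NIC can claim eth0 on reboot:
sudo nano /etc/udev/rules.d/70-persistent-net.rules

# Or simply repoint the static config at the name the new NIC actually received:
sudo nano /etc/network/interfaces    # change "iface eth0 inet static" to eth1

sudo reboot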

All in all I was pleasantly surprised at how well it went.  TeamJuchems is now hosted on a completely modern hosting platform that should offer plenty of performance for the foreseeable future.

–Nat

Lab Manager Blog

I’ve had a love/hate relationship with Lab Manager so far. It has been a great tool for our developers, but it is not a complete solution. It wantonly wastes storage (its primary constraining resource), and it makes server maintenance much more of a chore than necessary. I stumbled across this good blog on it now, of course, as it enters its twilight. Hopefully good information on vCloud Director will be forthcoming.

http://bsmith9999.blogspot.com/2011/03/lab-manager-4.html

–Nat

Killing Wayward VMs in vSphere ESX

From time to time as an ESX admin, you’ve likely come across a VM that doesn’t want to die.  The infamous “Another task is already in progress.” error message likely means you have a VM locked into la-la land, unable to be powered down, reset, restarted, shut down or otherwise manipulated.  If you have ESX, this is about the time you find yourself firing up PuTTY and heading in to do some low level surgery.

First steps:

  • Stop the virtual machine by issuing the command vmware-cmd /vmfs/volumes/<datastorename>/<vmname>/<vmname>.vmx stop.  This is equivalent to sending it a shutdown request, and it will probably fail.
  • If that does not work, issue vmware-cmd /vmfs/volumes/<datastorename>/<vmname>/<vmname>.vmx stop hard. This tries to kill the virtual machine instantly – equivalent to a power off – and will likely fail as well.
  • If that still does not work, issue vm-support -x to list the running VMs and their World IDs, then vm-support -X worldid (note that the x is case sensitive in both commands). This prompts you with a couple of questions, runs a debug stop of the VM, and creates a set of log files you can forward to VMware tech support.  It does some fancy background things and is your last stop before calling VMware support.  Not to mention, it’s a great way to get the PID of all your running VMs.  You can try kill -9 PIDOFYOURVM, but that probably won’t work if the previous commands failed.  (The whole escalation is collected in the sketch after this list.)
  • I’ve had to do this about four times in four years – just often enough to have always forgotten how to do it…
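
For my own future reference, here is the whole escalation in one place – a rough sketch for the ESX service console, with the placeholders swapped for your own datastore, VM name, World ID and PID:

VMX=/vmfs/volumes/<datastorename>/<vmname>/<vmname>.vmx

vmware-cmd "$VMX" stop          # polite shutdown request; usually fails on a stuck VM
vmware-cmd "$VMX" stop hard     # immediate power off; often fails too
vm-support -x                   # list running VMs and their World IDs
vm-support -X <worldid>         # debug-stop the stuck VM and gather logs for support
kill -9 <pid>                   # absolute last resort, using the PID from the step above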

–Nat

VMware Lab Manager 4.01 Review

This isn’t going to interest most folks who are reading my blog.  I need to get it written out, though, because some guy was looking for Lab Manager feedback and couldn’t find constructive criticism.  Here is mine, and I am sure Google will index it.

As of 4.01, VMware vCenter Lab Manager has its uses, but it has huge gaps:

1) Total lack of useful storage resource monitoring tools and information. You can’t export storage usage, linked clone tree structures, etc. If you aren’t familiar with CoW disks, linked clone chains, and the like, you soon will be – and you’ll be wondering about all of this in a big way when you need to constantly buy huge chunks of SAN disk with little hard data to back it up.

2) No existing backup solutions. Want to back up your library entries? Enjoy manually exporting them and hitting them one by one. SAN replication IS NOT a backup mechanism, folks. Backup is to tape or similar.

3) Very little in the way of customization. We have users that constantly fill up LUNs and IP pools when they have open space in other LUNs and pools, because they just use the defaults. We’d like to set the default to blank in many cases, but that isn’t available.

4) Redeploying VMs nets them a new IP. This is a huge issue at times if you have IP-sensitive configurations, especially when dealing with fencing.

5) Active Directory is a mess with fenced VMs, etc. Not really Lab Manager’s fault, but that’s the state of things.

6) Scalability. Using host spanning networks, you are limited to 512 distributed switch port groups, which the fenced configurations consume. In large deployments you are likely to collide with this limit, necessitating another vCenter/Lab Manager instance and fragmentation of resources.

7) Maintenance issues. Maintenance Mode, even with host spanning transport networks enabled, is borked because of the little VM that Lab Manager locks to each host. This is fairly ridiculous and convolutes what should be a very straightforward process.

8) Get ready to work with some enormous LUN sizes versus what you are likely used to. We have 2TB FC LUNs, and the only one we extended to 4TB is having locking issues, etc. NFS is the way you need to go.

9) Enjoy adding another Server 2003 instance to your infrastructure, because 2008 isn’t supported as a host OS for the Lab Manager services. Oh yeah, all your important data is located in a little SQL Express database on that server too. This is Enterprise software, right?

THE biggest issue I have with Lab Manager is the fact that it accesses the ESX servers directly. Do us all a favor and use vCenter as an abstraction layer so we can actually see what the crap is going on and rely on a proven set of administration tools. Ideally, Lab Manager would be a plugin and wouldn’t be harboring its own database, etc.

The bottom line is that you need to be sure your requirements are a good fit before Lab Manager will be useful to you.

Original Thread:

http://communities.vmware.com/

–Nat

OVF Exports from VMware Products

An OVF is a portable container for VMs that allows for easy import into a virtualization platform, like the VMware suite of products.

It seems like a no-brainer task to make one: you just highlight the VM in vCenter, click on File –> Export, easy peasy.

That process is slow, prone to error, and isn’t very flexible.  There is a very elegant, if non-GUI-centric, way of accomplishing this seemingly easy task:

OVF Tool

It is a great little command line utility that takes a couple of arguments (source, destination) and out comes a VM or an OVF/OVA file.  It is supported under Windows and Linux and provides reliable functionality for your OVF import/export needs.

    Tool Syntax:

    c:\Program Files (x86)\VMware\VMware OVF Tool>ovftool --help examples
    Source Locator Examples:

    c:\ovfs\my_vapp.ovf

    c:\vms\my_vm.vmx

    vi://username:pass@localhost/my_datacenter/vm/    \
    my_vms_folder/my_vm_name

    Destination Locator Examples:

    c:\ovfs\my_vapp.ovf

    c:\vms\my_vm.vmx

    vi://username:pass@localhost/my_datacenter/host/    \
    esx01.example.com
    vi://username:pass@localhost/my_datacenter/host/    \
    esx01.example.com/Resources/my_resourcepool

    Note: the /host/ and /Resources/ part of the above inventory path are fixed and must be specified when using a vi destination locator.  The /Resources/ part is only used when specifying a resource pool.

    Examples:

    ovftool -tt=vmx c:\ovfs\my_vapp.ovf c:\vms\
    (.ovf file to .vmx file. Result files will
    be: c:\vms\my_vapp\my_vapp.[vmx|vmdk])

    ovftool c:\vms\my_vm.vmx c:\ovfs\my_vapp.ovf
    (.vmx file to .ovf file. Result files will be c:\ovfs\my_vapp.[ovf|vmdk])

    ovftool http://my_ovf_server/ovfs/my_vapp.ova c:\vms\my_vm.vmx
    (.ova file to .vmx file)

    ovftool c:\ovfs\my_vapp.ovf vi://username:pass@my_esx_host
    (.ovf file to ESX host using default mappings)

    ovftool c:\ovfs\my_vm.vmx vi://username:pass@my_esx_host
    (.vmx file to ESX host using default mappings)

    ovftool https://my_ovf_server/ovfs/my_vapp.ovf \
    vi://username:pass@my_esx_host
    (.ovf file from a web server to ESX host using defaults)

    ovftool c:\ovfs\my_vapp.ovf \
    vi://username:pass@my_vc_server/?ip=10.20.30.40
    (.ovf file to vCenter server using managed ESX host ip address)

    ovftool "vi://username:pass@my_vc_server/my_datacenter?ds=\
    [Storage1] foo/foo.vmx" c:\ovfs\
    (VM on ESX/vCenter server to OVF using datastore location query)

    ovftool c:\ovfs\my_vapp.ovf \
    vi://username:pass@my_vc_server/my_datacenter/host/my_host
    (.ovf file to vCenter server using vCenter inventory path)

    ovftool vi://username:pass@my_host/my_datacenter/vm/my_vm_folder/my_vm_name \
    c:\ovfs\my_vapp.ovf
    (VC/ESX vm to .ovf file)

    ovftool https://my_ovflib/vm/my_vapp.ovf
    (shows summary information about the OVF package [probe mode])

    \End Tool Syntax

The syntax for this tool is grueling.  I started out trying to do a datastore query export and gave up; I ended up using the folder method.  It took me *36* attempts to get it to work.  You can view the folder structure by looking at the VMs and Templates view in vCenter.  The syntax is case sensitive, and you need the “quotes” whenever you have an open space in the command, much like other command line entries.  Also, the “\” in the examples is just VMware’s way of wrapping a long command onto the next line – putting the backslash into the actual command throws an error.
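
For reference, the folder-method export that finally worked for me looked roughly like this – a sketch only, since the vCenter host, datacenter, folder and VM names below are made-up placeholders, and the whole thing goes on one line:

ovftool "vi://administrator:password@vcenter.example.com/MyDatacenter/vm/MyFolder/MyVM" c:\ovfs\MyVM.ovf

Everything after the datacenter name has to match the VMs and Templates folder structure exactly, including case.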

Good luck!

–Nat

Fun with Update Manager

After a morning’s worth of issues troubleshooting an error in vCenter Update Manager, I stumbled across a great blog entry that had the answer.  After we had a server (the server I am trying to patch, ironically) act up and give some VMs fits, our first reaction was to bring it up to date with patches.  Update Manager wouldn’t do anything but scan hosts for baseline compliance, which makes patching pretty difficult.

First, I restarted the Update Manager service, with no change.  Next, I followed the above blog’s guidance and unregistered some orphaned VMs.  Even though the server I was having issues with didn’t have any VMs on it and was in maintenance mode, this seems to have done the trick.

Now if I could only find the Update Manager logs in Server 2008…
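
(For what it’s worth, the usual answer – hedged, since the exact path can vary by version and install drive – is that on Server 2008 the Update Manager logs end up under the hidden ProgramData folder rather than the old Documents and Settings path:)

REM Typical Update Manager log location on Server 2008 (ProgramData is a hidden folder):
dir "C:\ProgramData\VMware\VMware Update Manager\Logs"
REM The main service log is vmware-vum-server-log4cpp.log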

–Nat

Frustrations with Hyper-V

From the title, you might think I would be griping about Hyper-V, Microsoft’s server virtualization technology.  The truth is, I am only frustrated by the prerequisites of Hyper-V, one of which is particularly troublesome: you need a Virtualization Technology (VT) enabled processor from either Intel or AMD to run Hyper-V.
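
(If you are ever unsure whether a box even advertises VT, one quick check – assuming you can boot it into Linux or a live CD; Intel and AMD also ship small detection utilities for Windows – is:)

# A non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm);
# it still has to be enabled in the BIOS before Hyper-V will use it.
egrep -c '(vmx|svm)' /proc/cpuinfo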

This effectively makes it impossible to have a virtualized Hyper-V test/development environment, by ensuring that all of your Hyper-V hosts are physical.  With vSphere, it is very possible to run an entire cluster on a single machine with enough memory by using a product like VMware Workstation.  In that scenario, you are limited to 32-bit guest operating systems on that cluster, because vSphere requires the VT bit for 64-bit virtual machines.  In the end, that is A-OK, because what you are truly learning is how to set up and configure the hosts along with vCenter.  Hyper-V is a fairly complicated beast, so it would be nice to be able to go completely through its configuration (not to mention testing configuration changes) in a virtual environment.

It’s been a frustrating day at work because of this requirement.  Supposedly, Sun’s VirtualBox could pass the VT capability along to the VMs it runs, theoretically enabling the running of a Hyper-V host inside a VM.  I am here to tell you that doesn’t work, at least in the latest 3.1 release of VirtualBox.  Other than that, I have no beef with VirtualBox; it’s actually pretty speedy and simple.

–Nat

Interesting Insights in Linux Swap Configuration

I was listening to my team discuss the configuration of a new virtual appliance, based on SUSE Enterprise, that would be created and delivered to customers in .ovf format.  The software engineers had requested that it be as small as possible for delivery reasons.  One of the heated discussion items became how much swap space should be created.  There are a lot of opinions on this, and much of the “knowledge” the team had pointed to about 1.5x the amount of RAM allocated to the VM.  In this case, the appliance is preconfigured to use 4GB of RAM, so 6GB to 8GB seemed to be the answer – a disappointment, because the engineers had hoped the entire appliance would be 10GB.

This interested me, because I really didn’t know the answer.  Obviously, it is very important in a consolidated server environment to size these things right, because your swap space is actually very expensive SAN space.  This article and its comments were very interesting on the topic.  It boiled down to this formula as the easy standard (with a quick worked example after the list):

1. Swap space == RAM size (if RAM < 2GB)
2. Swap space == 2GB (if RAM > 2GB)
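
By that rule our 4GB appliance only needs 2GB of swap, not 6GB to 8GB.  And if more swap is ever needed after the fact, it does not require resizing the virtual disk – a swap file works too.  A rough sketch (the size and path are just examples):

# Create and enable a 2GB swap file (add it to /etc/fstab to make it permanent):
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
free -m    # confirm the new swap shows up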

The later comments were pretty good, though, and they pointed out that Linux may be a bit dangerous in that when you run out of swap, processes start to be killed off to free up memory.  There are also use cases where hitting swap is OK, or at the very least preferable to random processes being killed.  Another thing to worry about is whether you need to collect a kernel dump and where that would go.  It gets interesting when you treat disk as an expensive resource.  At home or in a dedicated server environment, disk space is pretty cheap, but in enterprise virtualization, where you might spin up tens or hundreds of the same image for testing, disk space is really expensive!

Windows appears to be down to about 1x memory size for its page file now, which is good.  I still go for 1.5x there, myself.

If you happened to be curious about what they settled on for swap space with 4GB of RAM, the answer is 5GB.

–Nat