

Can't format a VMFS-5 volume on an existing disk?

If you are cobbling together a home lab for ESXi, you'll often be reusing disks that have seen service in other computers with other operating systems. This can cause a problem when you want to format one of them as a VMFS-5 volume: VMFS-5 uses a GPT partition table, and the format will fail if the disk carries an existing MBR partition table.

When you try to format the disk you’ll get an error like:

Error:A specified parameter was not correct.
Error Stack
Call "HostStorageSystem.ComputeDiskPartitionInfo" for object "storageSystem" on ESXi "x.x.x.x" failed.

In my particular situation I am using a hosted Mac Mini, so I can't just grab the disk and wipe the partition table, but it's still possible from the command line. You'll need to enable SSH and the ESXi Shell and use partedUtil.

On my Mini I have a hard disk and an SSD which can be found under the /vmfs/devices/disks directory.

ls -lh /vmfs/devices/disks
total 1985717835
-rw-------    1 root     root        7.5G Jan 23 07:49 mpx.vmhba32:C0:T0:L0
-rw-------    1 root     root        4.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:1
-rw-------    1 root     root      250.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:5
-rw-------    1 root     root      250.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:6
-rw-------    1 root     root      110.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:7
-rw-------    1 root     root      286.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:8
-rw-------    1 root     root      465.8G Jan 23 07:49 t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
-rw-------    1 root     root      465.8G Jan 23 07:49 t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H:1
-rw-------    1 root     root      476.9G Jan 23 07:49 t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____
-rw-------    1 root     root      476.9G Jan 23 07:49 t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____:1
lrwxrwxrwx    1 root     root          20 Jan 23 07:49 vml.0000000000766d68626133323a303a30 -> mpx.vmhba32:C0:T0:L0
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:1 -> mpx.vmhba32:C0:T0:L0:1
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:5 -> mpx.vmhba32:C0:T0:L0:5
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:6 -> mpx.vmhba32:C0:T0:L0:6
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:7 -> mpx.vmhba32:C0:T0:L0:7
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:8 -> mpx.vmhba32:C0:T0:L0:8
lrwxrwxrwx    1 root     root          72 Jan 23 07:49 vml.010000000020202020202054454c35313933394a42385531484150504c4520 -> t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
lrwxrwxrwx    1 root     root          74 Jan 23 07:49 vml.010000000020202020202054454c35313933394a42385531484150504c4520:1 -> t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H:1
lrwxrwxrwx    1 root     root          72 Jan 23 07:49 vml.0100000000533141584e53414442313238313258202020202053616d73756e -> t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____
lrwxrwxrwx    1 root     root          74 Jan 23 07:49 vml.0100000000533141584e53414442313238313258202020202053616d73756e:1 -> t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____:1

So now I need to delete the existing partitions on the two internal drives with the following command:

/sbin/partedUtil delete /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H 1

Note the 1 at the end, which identifies the partition to delete. If the disk has multiple partitions, you'll need to delete each of them.
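If you're not sure which partitions exist, partedUtil can list them first. Here's a minimal sketch (the disk path is the one from my Mini; substitute your own device) that reads the partition table and deletes every partition it finds. It assumes the usual getptbl output layout: a label line, a geometry line, then one line per partition starting with its number.

```shell
#!/bin/sh
# Sketch: wipe all partitions on a disk with partedUtil (ESXi shell).
# The disk path is an example; substitute your own device.
DISK=/vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H

# partedUtil getptbl prints a label line, a geometry line, then one
# line per partition whose first field is the partition number.
list_partitions() {
    awk 'NR > 2 { print $1 }'
}

# Guarded so the script is a harmless no-op off an ESXi host.
if [ -x /sbin/partedUtil ]; then
    for part in $(/sbin/partedUtil getptbl "$DISK" | list_partitions); do
        echo "deleting partition $part on $DISK"
        /sbin/partedUtil delete "$DISK" "$part"
    done
fi
```

Double-check the getptbl output by hand before looping over it; once the partitions are gone, there's no undo.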

Once you’ve done that, you can go back into the VI-Client or the web interface and format the disk as a VMFS-5 volume.


Dump to tape from VMFS

A recurring issue I see is sites that still have a requirement to externalize backups to tape for long-term storage (please don't use the word archive). On the other hand, it's clear that disk-to-disk backup solutions that leverage VADP (the vStorage APIs for Data Protection) are considerably more efficient tools for VMware environments.

Now assuming you have a decent budget, I highly recommend Veeam Backup & Replication as a complete solution that now integrates tape externalization. But if your environment is smaller or can't justify the investment when there are excellent "free" tools like VMware Data Protection available, here's a potential solution for long-term dump to tape.

Assuming that you have some kind of existing backup solution that writes files to tape, the problem is that VMFS is pretty much an unreadable black-box file system. This has been exacerbated by the fact that with ESXi, the old-fashioned approach of putting a Linux backup client in the Service Console is no longer really a viable option.

So we need a few little things in place here.

  • A server with your backup software connected to the SAN (iSCSI or FC)
  • A storage bay that can create and present snapshots (optional, but more efficient)
  • The open source VMFS driver

Some assumptions:

  • Your backup appliance is stored on VMFS 3.x block storage, with no RDMs

The basic process involves the following steps:

  1. Stop the backup appliance
  2. Snapshot the LUN(s) for the backup appliance
  3. Start the backup appliance
  4. Present a cloned volume based on the snapshot to the backup server
  5. Connect to the LUNs using the fvmfs Java application and publish them over WebDAV
  6. Mount the WebDAV share as a disk
  7. Backup the contents using your backup software

Stop the backup appliance

In order to ensure a consistent state of the data on disk, you'll want to stop the backup appliance. VDP can be stopped with a Shut Down Guest OS from the VI-Client or shut down at the command line.

Snapshot the LUN(s) for the backup appliance

Snapshotting the LUN is an efficient method to have a copy of the appliance in the off state to ensure data consistency. Most systems will allow you to script this kind of activity.

Example using Nexenta recordings:

create snapshot backuppool/vdp-01@totape

Start the backup appliance

Since we have a snapshot, we can now restart the backup appliance using the VI-Client or whatever is easiest for you.

Present a cloned volume based on the snapshot to the backup server

Now that the appliance is running normally and we have a snapshot of it in a stopped state, we can continue with the extraction-to-tape process without any time pressure that would impact new backups.

So we need to create a cloned volume from the snapshot and present it to the backup server:

Nexenta example:

setup snapshot backuppool/vdp-01@totape clone backuppool/totape
setup zvol backuppool/totape share -i backupserver -t iSCSI

Where -i points to the name of the initiator group and -t points to the registered target group (generally a set of interfaces).

Now, to verify that the presentation worked, go to the backup server and (assuming a Windows server) open Computer Management > Disk Management. You should see a new disk with an Unknown partition type. Don't try to format this or mount it as a Windows disk. From a practical standpoint you wouldn't be doing any harm to your source data, since it's a volume based on a snapshot rather than the original, but it's the data on that clone you want access to. What you want to note is the name on the left side of the window: "Disk 3".

NB: If you are using VMFS extents based on multiple source LUNs, you'll need to present all of them, so take note of each of the new disks presented here.

Connect to the LUNs using the fvmfs Java application and publish them over WebDAV

Still on the Windows server, you're going to launch the fvmfs Java application, so you'll need a recent Java runtime. At the command line, navigate to the folder containing the fvmfs.jar file and launch it using the following syntax:

“c:\Program Files\Java\jre7\bin\java.exe” -jar fvmfs.jar \\.\PhysicalDrive2 webdav 80

Where the Physical Drive number maps to the Disk number noted in the Disk Management console.

If you are using extents, list all of the disks with the same syntax, separated by commas.
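To sanity-check the comma-separated list, here's a tiny POSIX shell sketch (purely illustrative; on the Windows server you would just type the list by hand) that builds the argument from a set of disk numbers:

```shell
#!/bin/sh
# Illustrative helper: join \\.\PhysicalDriveN names with commas
# to form the disk-list argument fvmfs expects for extents.
fvmfs_disk_args() {
    args=""
    for n in "$@"; do
        args="${args:+$args,}\\\\.\\PhysicalDrive$n"
    done
    printf '%s\n' "$args"
}

# A datastore spread over disks 2 and 3 would be passed to fvmfs as:
fvmfs_disk_args 2 3
# prints \\.\PhysicalDrive2,\\.\PhysicalDrive3
```

That output then slots into the same command line shown above, in place of the single PhysicalDrive name.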

The WebDAV share is mountable on modern Windows systems with the classic “net use Z: http://localhost/vmfs”.

If you have the misfortune to still be using Windows 2003, you’ll also need to install the WebDAV client which may or may not work for you. If it still doesn’t work, then I recommend trying out WebDrive for mounting WebDAV shares to a letter.

Once the drive is mounted to a drive letter, you’ll have near native access speed to copy the data to tape.

Tear down

Cleaning up is mostly just walking through the steps in reverse. On the backup server, unmount the drive (net use Z: /delete) and stop the fvmfs process by closing its command prompt or hitting Ctrl-C.

Then we need to delete the cloned volume, followed by the snapshot. Another Nexenta example:

destroy zvol backuppool/totape -y
destroy snapshot backuppool/vdp-01@totape -y

And we’re done.

Restoring data

To keep things simple, restoring this kind of data is best done to an NFS share that is visible to your ESXi hosts. This way you can restore directly from tape to the destination. The fvmfs tool presents a read-only copy of the VMFS datastore so you can’t use it for restores.

Under normal conditions this would be a very exceptional event, and the restore should go to some kind of temporary storage.

Other options

A simple approach for extracting a VDP appliance is to export the system as an OVF, but there are a number of shortcomings to this approach:

  • it's slow to extract
  • it can only extract via the network
  • you need a big chunk of temporary space

NB: This is a specific approach to the specific problem of long term externalization. In most operational day to day use cases, you’re better off using some kind of replication to ensure availability of your backups.

NB: Nexenta recordings are a method of building scripts based on the NMS command line syntax. They are stored in the .recordings hidden directory in /volumes and are simple text files that you can launch with "run recording" from the NMS command line without diving down to the expert mode raw bash shell.


ESXi, OS X and a Mac Mini

I’ve been lusting after my very own ESX install at home for a while now, especially after following the news that you can get ESXi 5 running on a Mac Mini. A key factor for me is that I want to run multiple OS X instances, so the Mini is a requirement unless I go down the hackintosh route. The other factor is purely practical: I don't really have any available space, so whatever I build has to fit on my desk with the rest of the stuff already there. Noise and power consumption are key factors.

I started with a Mac Mini Server (5,3) as the base component. I debated going with one of the regular models, but the fact that it has two 500 GB internal drives plus a quad-core i7 gave it the edge. RAM is currently pretty cheap, so I pushed it to 16 GB with aftermarket memory.

I intend to run a mix of production machines and pure test-bed stuff. The idea is to have a Solaris VM with a mirrored zpool created from two VMDK files, presented locally via NFS for any production machines, and replicated to the main NAS (an HP N40L running Solaris) via my auto-replicate scripts.

The leftover space on the two drives will be used for all of the miscellaneous test machines that I don't care about, so losing a drive is not going to bother me. I debated hosting the production machines directly on the NAS, but it's getting a little full, and I'd prefer to do that with a proper isolated storage network. That would mean buying an additional Thunderbolt GbE adaptor for the Mini and a network card for the HP, so it can wait.

Step by step

I started with the ready to go ESXi 5.0 image available from Angry Cisco Guy. Simply burn it to a CD and boot from it using the external Apple DVD player (hold down C on startup). Then it's just a standard ESXi install process.

Note: there are some excellent instructions for building your own custom installer over at Paraguin and Virtually Ghetto that you can use to go straight to 5.1 with the latest driver, but I ran across the Angry Cisco Guy link first and had already downloaded the image so I went the lazy route.

Now that I have the baseline running, I opened up the SSH shell and copied over the 5.1 offline update bundle and installed it using:

esxcli software vib install -d ....

Now this setup ran OK for a couple of days until I had a network freeze. ESXi was still running, but it was completely off the grid. Toggling SSH shell access from the console brought the network back, but it signaled that this was not acceptable for production machines.

After a little touring around, I found the latest tg3 Broadcom driver (which would be required for the Thunderbolt adaptor) and installed it, again using esxcli.

Note that the bundle you download from VMware is not directly installable. You need to unzip it and use the offline-bundle file inside.
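As a sketch of that unpack step (the file and directory names below are hypothetical; the actual bundle name depends on the driver version), something like this locates the installable bundle inside the download:

```shell
#!/bin/sh
# Sketch: the VMware driver download is a wrapper zip; the installable
# offline bundle is a zip nested inside it. Names here are hypothetical.
DOWNLOAD=tg3-driver-download.zip
WORKDIR=/tmp/tg3-extract

# Locate the offline bundle inside an extracted tree.
find_bundle() {
    find "$1" -name '*offline_bundle*.zip' -o -name '*offline-bundle*.zip'
}

if [ -f "$DOWNLOAD" ]; then
    mkdir -p "$WORKDIR"
    unzip -o "$DOWNLOAD" -d "$WORKDIR"
    find_bundle "$WORKDIR"
    # then copy the bundle to the host and, on the ESXi shell, run:
    #   esxcli software vib install -d <path to the offline bundle zip>
fi
```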


Now that the machine is running nicely, I installed the Solaris VM, assigned it two VMDK files, one from each internal drive.

This allows me to create a mirrored zpool for the additional read performance and reliability.

On the ESX side, I mount the new NFS share and I’m ready to deploy my production machines.
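As a rough sketch of that plumbing (all of the pool, device and host names below are hypothetical; adjust to your environment), the Solaris side is two commands, and the ESXi side is a single esxcli call, built here as a string so you can review it before running it on the host:

```shell
#!/bin/sh
# Sketch of the storage plumbing; every name below is hypothetical.
#
# Inside the Solaris VM: mirror the two VMDK-backed disks and share them.
#   zpool create prodpool mirror c1t1d0 c2t1d0
#   zfs set sharenfs=on prodpool

# On the ESXi host: mount the share as a datastore. Printed as a string
# here so you can eyeball it before running it.
nfs_mount_cmd() {
    # $1 = NFS server, $2 = exported path, $3 = datastore name
    printf 'esxcli storage nfs add -H %s -s %s -v %s\n' "$1" "$2" "$3"
}

nfs_mount_cmd solaris-vm /prodpool prodpool-nfs
```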

The first project is a replacement for my aging second-generation Mini, an OS X Server instance that has been running since 10.1 and has been upgraded through every version up to 10.7, so it's just full of cruft.

I had started building the replacement server in Fusion on my MacBook, so I figured that I should just convert it over. This however is not something sufficiently mainstream to be supported by the regular P2V or V2V tools out there.

I tried a number of different approaches including VMware Converter Standalone, copying the Fusion VMDK file to the NFS share and mapping it to the VM without any luck.

So the best approach to virtualizing OS X machines seems to be to create a new VM on ESXi and then use the Apple Migration Assistant to bring over the old machine's files and settings.

At the moment, ESXi has support for OS X machines up to 10.7 so I started there, mapping the installer dmg file to the new VM and installing from there. VMware does have VMware Tools for OS X so these got installed too. Then I copied the 10.8 installer image into the VM and ran it with no problems.

A quick tour with Software Update and I now have a fully up to date VM.

But then I ran into a limitation: the Migration Assistant will not do a machine-to-machine migration for a machine running OS X Server. I tried restoring from a Time Machine backup, but for some reason the Migration Assistant didn't like the backup in question (running on Netatalk on the Solaris servers). So I had to fall back to the tried and true SuperDuper clone of the disk to a network share, then play musical disks: map the cloned disk to a temporary VM, do the restore, then disconnect it and boot the destination machine.

OS X is sufficiently hardware agnostic that cloning the drive from a physical Mac should work equally well.


A few things that may come back to bite you with the default settings applied by OS X:

  • The screen saver is on by default, which chews up CPU and can make it almost impossible to VNC in.
  • The power saving features are on by default and the machine will go to sleep, so you need to disable these options.
  • Be very careful with international keyboard mappings when assigning the initial password. I was going from a Mac via RDP to a Windows machine, to the VI-Client console of the new OS X VM, and my first attempt was impossible to recreate. Keep it simple until you get to a native Screen Sharing connection.
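The first two items can be scripted inside the guest. A sketch using standard macOS commands (double-check them against your OS version; the DRY_RUN switch is just for previewing what would run):

```shell
#!/bin/sh
# Sketch: tune an OS X guest for server duty (run inside the VM).
# Standard macOS commands, but verify against your OS version.
tune_osx_guest() {
    # with DRY_RUN set, print the commands instead of running them
    run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }
    # disable the screen saver for the current host/user
    run defaults -currentHost write com.apple.screensaver idleTime -int 0
    # disable system, display and disk sleep entirely
    run sudo pmset -a sleep 0 displaysleep 0 disksleep 0
}

# Preview what would be run:
DRY_RUN=1 tune_osx_guest
```

Drop the DRY_RUN prefix to actually apply the settings in the guest.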

Side notes

The Mac Mini’s only real fault keeping it from being a supported ESXi platform is that it doesn’t support ECC RAM. So if you have a Radar account, filing a request might help Apple decide to add that in future versions.

Something else to keep in mind about the current generation of Minis is that Thunderbolt is nothing more than a transparent extension of the PCI bus. So if you have the budget, you can imagine some silly/amazing/freaky configurations using the Sonnet External PCIe boxes with a 10GbE or FC card.

They also have a really interesting rack mount kit that includes PCIe slots. But it’s horribly expensive compared to a standard off the shelf 1U Intel based Server.


Development tools

Today's topic is programming tools and environments. I've moved back into enterprise-land over the last year and I'm getting back up to speed on all of the complexities of working with the Microsoft web development stack, combined with a bunch of Tomcat Java systems, mixed in with PowerShell scripts for managing package deployments and so on.

Watching the stunning complexity of working with and maintaining code in these environments, I thought I should get back to the learning-Ruby-on-Rails personal project that had been back-burnered for a while. This was prompted by the nascent discussions about moving stuff into the "cloud". Of course, no discussion can happen in the enterprise today without the cloud intruding on it somewhere.

In order to help orient the discussion and ensure that people are using the same language, I went out to do a quick tour of the current state of the "cloud" as it could be useful to me. I've been doing VMware deployments for a number of years now, and the whole Infrastructure as a Service (IaaS) stuff is old hat. Unfortunately, most enterprises' view of the cloud is exactly that, except that they want it offered by a service provider with a simple monthly bill so they can ask for capacity on the fly without actually having to plan for anything. While that approach certainly has some merits for people higher up the IT food chain, it doesn't really change anything fundamental about how applications get developed, deployed and maintained. The bottom line is the same ugly application stack managed in virtual machines that are no longer stored locally.

The cool stuff I saw was the new Platform as a Service (PaaS) offerings that are finally coming into their own, with viable private and hybrid platform solutions. I'm not interested in using anything that locks me into a given provider, and I want to be able to run stuff internally as well as externally. This led me to VMware's Cloud Foundry and the Stackato and Iron Foundry offshoots. Stackato is from the folks over at ActiveState, who have been around for quite a while and are firmly living in the open source space, while Iron Foundry is the .NET variation on the Cloud Foundry stack. The latest news is that Stackato now offers .NET integration as well, which is a very good thing for enterprise clients who often require a support contract and are leery of community-supported tools.

Looking over the basic toolkits and architectures, this is just so much cleaner than the current mess of web.config, properties.xml, controllers.xml, managers.xml, ... ad_nauseum.xml, which prompted me to get my ass in gear and start learning Ruby on Rails (again).


The next frontier...PaaS

Some new virtualization technology that’s starting to mature nicely that enterprises and developers should start looking at:

Cloud Foundry is an open source initiative for building platform-neutral Platform as a Service (PaaS) environments. Sponsored by VMware (or purchased, or something like that), the concept is similar to Azure: providing the underlying platform that can be deployed to create custom elastic clouds, eliminating much of the make-work involved in managing the OS and middleware stacks.

The initial offering is based on a Linux VM and supports a number of popular development and delivery environments such as Ruby, PHP, Python and Node.js. The two other players, Iron Foundry and Stackato, are extensions of the open sourced code to cover some additional use cases.

Iron Foundry brings the architecture to the Windows world with full integration with .NET development and deployment. Since you will be deploying Windows VMs, you'll need to take the licence costs into account, but for Windows shops that are all-in on .NET, this offers an interesting option.

Stackato comes from the folks over at ActiveState, who have been the stewards of ActivePerl for years, and brings a number of extensions to the base Cloud Foundry environment, including Perl, Clojure, Scala and Erlang. Additional benefits for the enterprise are a fully supported toolkit, a more advanced security model, more granular user management, and system and application monitoring with New Relic integrated directly into the package.

It’s well worth spending a few moments reading over the FAQs and taking them for a test drive to understand just how this can simplify application management and deployment without tying your applications to a particular hosting service or cloud application stack. The VMs composing these clouds can be deployed locally for development work and testing, on private virtualization platforms like VMware vSphere for internal clouds, on Amazon EC2 for public clouds, or with just about any hosting provider out there.

This is looking like the logical future for developing and deploying web apps while maintaining a maximum level of ownership and flexibility.