Thursday
Jan 23, 2014

Can't format a VMFS-5 volume on an existing disk?

If you are cobbling together a home lab for ESXi, you’ll sometimes be reusing disks that have lived in other computers with other operating systems. This can cause a problem when you want to format a disk as a VMFS-5 volume: VMFS-5 wants a GPT partition table, and the format will fail if the disk still carries an existing MBR partition table.

When you try to format the disk you’ll get an error like:

Error:A specified parameter was not correct.
Vim.Host.DiskPartitionInfo.spec
Error Stack
Call "HostStorageSystem.ComputeDiskPartitionInfo" for object "storageSystem" on ESXi "x.x.x.x" failed.

In my particular situation I am using a hosted Mac Mini, so I can’t just grab the disk and wipe the partition table, but it’s still possible from the command line. You’ll need to enable SSH and the ESXi Shell and use partedUtil.

On my Mini I have a hard disk and an SSD, both of which can be found under the /vmfs/devices/disks directory.

ls -lh /vmfs/devices/disks
total 1985717835
-rw-------    1 root     root        7.5G Jan 23 07:49 mpx.vmhba32:C0:T0:L0
-rw-------    1 root     root        4.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:1
-rw-------    1 root     root      250.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:5
-rw-------    1 root     root      250.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:6
-rw-------    1 root     root      110.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:7
-rw-------    1 root     root      286.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:8
-rw-------    1 root     root      465.8G Jan 23 07:49 t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
-rw-------    1 root     root      465.8G Jan 23 07:49 t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H:1
-rw-------    1 root     root      476.9G Jan 23 07:49 t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____
-rw-------    1 root     root      476.9G Jan 23 07:49 t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____:1
lrwxrwxrwx    1 root     root          20 Jan 23 07:49 vml.0000000000766d68626133323a303a30 -> mpx.vmhba32:C0:T0:L0
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:1 -> mpx.vmhba32:C0:T0:L0:1
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:5 -> mpx.vmhba32:C0:T0:L0:5
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:6 -> mpx.vmhba32:C0:T0:L0:6
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:7 -> mpx.vmhba32:C0:T0:L0:7
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:8 -> mpx.vmhba32:C0:T0:L0:8
lrwxrwxrwx    1 root     root          72 Jan 23 07:49 vml.010000000020202020202054454c35313933394a42385531484150504c4520 -> t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
lrwxrwxrwx    1 root     root          74 Jan 23 07:49 vml.010000000020202020202054454c35313933394a42385531484150504c4520:1 -> t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H:1
lrwxrwxrwx    1 root     root          72 Jan 23 07:49 vml.0100000000533141584e53414442313238313258202020202053616d73756e -> t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____
lrwxrwxrwx    1 root     root          74 Jan 23 07:49 vml.0100000000533141584e53414442313238313258202020202053616d73756e:1 -> t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____:1

So now I need to delete the existing partition on each of the two internal drives with the following command (shown here for the HDD):

/sbin/partedUtil delete /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H 1

Note the 1 at the end, which identifies the partition to delete. If you have a disk with multiple partitions, you’ll need to delete each of them.
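If you want to see exactly what is on a disk before deleting anything, partedUtil can also print the partition table. A minimal sketch using the same HDD device as above (the trailing numbers in the delete commands are the partition numbers reported by getptbl; the second partition here is purely illustrative):

# print the partition table: label, disk geometry, then one line per partition
/sbin/partedUtil getptbl /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H

# delete each reported partition by its number
/sbin/partedUtil delete /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H 1
/sbin/partedUtil delete /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H 2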

Once you’ve done that, you can go back into the VI-Client or the web interface and format the disk as a VMFS-5 volume.

Wednesday
Oct 23, 2013

Mavericks NFS alert

Just a quick note to help those that are upgrading to Mavericks and use NFS automounts.

By default, Mavericks will use NFS4, which uses a different security model, so you may end up with what appears to be a regular mount that is pretty much empty. Don’t worry, the data hasn’t gone away or been deleted. It’s just that NFS4 doesn’t map users by simply cross-checking the local and remote UIDs the way NFS3 does, especially if you’re using PAM to grab UIDs from an LDAP directory.

I haven’t yet pieced together the best method for moving to NFS4 natively with my Solaris and OmniOS boxes, but in the meantime you just need to force the connection back to NFS3. If you’re using automounts for user home directories, add ver=3 to the automount entry and all will go back to normal.
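As an illustration, a hypothetical indirect map for home directories might look like this (the map name, server and export path are placeholders, and I’ve spelled the option vers=3 as listed in the mount_nfs man page):

# /etc/auto_master entry pointing /home at an indirect map
/home   auto_home

# /etc/auto_home entry mounting each user's home over NFSv3 from a placeholder server
*   -fstype=nfs,vers=3   nfsserver.example.com:/export/home/&

A sudo automount -vc afterwards flushes the cache and reloads the maps.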

Connecting via Command-K in the Finder with an NFS URL doesn’t seem to be affected at the moment.

Monday
Oct 21, 2013

Windows battery life & secondary impacts

Yet another article on the ongoing issue of lackluster Windows battery performance on portable computers, this time from Jeff Atwood of Codinghorror, with Anand Lal Shimpi of Anandtech weighing in on the situation.

What I find most interesting here is not so much the problem posed to those using Windows-based laptops (I feel for you), but rather the comparison with OS X on identical hardware. When you isolate the issue to identical use cases on identical hardware, you eliminate the screen, the networking, the keyboard backlighting, and so on. Finally you are left with the software and how it uses the hardware.

From a practical standpoint, we can discount the screen, the network, and even the SSD, since it has no “spindown” state and its power consumption is a flat line. Which leaves us with how the software uses the CPU and GPU. It has been clearly demonstrated that software optimizations can have a huge impact on efficiency, and we’ve only seen the beginning from Apple: 10.9 Mavericks brings even more finely grained CPU and network scheduling.

Now my interest is in a different use case: Virtual Desktop Infrastructure. The existing and upcoming CPU optimizations implemented in OS X are potentially as useful in a shared CPU configuration like VDI as they are on a notebook computer.

This makes me wonder about the power efficiency of using Windows in a VDI installation. From the power consumption tests, it’s eating up more CPU to do the same job than OS X does, on the order of 50% more. Granted, in some tests this results in slightly faster processing times, but only on the order of 15% in the best case scenarios.

On the one hand, that’s awful.

On the other hand, this type of system efficiency will become a much more pressing issue for Microsoft now that they are making their own hardware and need to be competitive with iPads and MacBooks on a work-per-watt basis. Which means that if Microsoft gets its act in gear and can improve the overall efficiency of CPU usage, we should be able to see even higher density workloads on the same server hardware. But it’s unlikely that this kind of thing will be retrofitted onto existing Windows 7 systems; it will probably only show up in future Windows 8 releases.

In the meantime, just use OS X on your portable computers if you want to get the most out of them, and hope that maybe some day we’ll see OS X as a VDI hosted option.

Although racking the new Mac Pros is going to require some innovative thinking…

Friday
Oct 18, 2013

It’s all a matter of perspective

I’m just catching up on my development RSS feeds and ran across yet another insightful technical article by Mike Ash. I’m finding this quite funny as I just gave a presentation at the Infralys (soon to be integrated in Ackacia!) hosted Rendezvous de la Virtualisation 2013 discussing the impact of SSD and flash storage arriving in the storage stack. Here are the slides for those interested.

In my presentation the coolest, most way out there SSD storage technology is the Diablo Memory Channel storage, where they put NAND chips onto cards that get attached to the RDIMM slots in your server. This is to ensure consistent (and very very very small) latency between the CPU and the storage. No jumping across the PCI bus and traversing various other components and protocols to get to storage, it’s right there accessible via the memory bus.

And here I have Mike explaining from the developer perspective “Why Registers Are Fast and RAM Is Slow”.

Always good to remind us that every part of the stack can be optimized and it’s a matter of perspective. Multi-millisecond latency fetching data from a physical object traversing multiple networks is forever for a modern CPU.

Thought experiment of the day: What if we configured our servers to behave like resource constrained devices, disabled swap and killed processes that stepped out of bounds? We’ve been taking the easy route throwing memory and hardware at problems that might have software optimization answers…

Wednesday
Oct 2, 2013

Dump to tape from VMFS

A recurring issue that I still see in a few places is the requirement to externalize backups to tape for long-term storage (please don’t use the archive word). On the other hand, it’s clear that disk-to-disk backup solutions that leverage VADP are considerably more efficient tools for VMware environments.

Now, assuming you have a decent budget, I highly recommend Veeam Backup & Replication as a complete solution that now integrates tape externalization. But if your environment is smaller, or you can’t justify the investment when there are excellent “free” tools like VMware Data Protection available, here’s a potential solution for long-term dump to tape.

Assuming that you have some kind of existing backup solution that writes files to tape, the problem is that VMFS is pretty much an unreadable black-box file system. This has been exacerbated by the fact that with ESXi the old-fashioned approach of putting a Linux backup client in the Service Console is no longer really a viable option.

So we need a few little things in place here.

  • A server with your backup software connected to the SAN (iSCSI or FC)
  • A storage array that can create and present snapshots (optional, but more efficient)
  • The open source VMFS driver

Some assumptions:

  • Your backup appliance is stored on VMFS 3.x block storage, with no RDMs

The basic process involves the following steps:

  1. Stop the backup appliance
  2. Snapshot the LUN(s) for the backup appliance
  3. Start the backup appliance
  4. Present a cloned volume based on the snapshot to the backup server
  5. Connect to the LUNs using the fvmfs Java utility and publish them over WebDAV
  6. Mount the WebDAV share as a disk
  7. Backup the contents using your backup software

Stop the backup appliance

In order to ensure a coherent state of the data on disk, you’ll want to stop the backup appliance. VDP can be stopped with a Shutdown Guest OS from the VI-Client or shut down at the command line.
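If you’d rather script the shutdown from the ESXi shell than click through the VI-Client, something along these lines should do it (a sketch: the id 42 is whatever getallvms reports for your appliance, and a clean guest shutdown requires VMware Tools to be running in it):

# find the numeric id of the VDP appliance
vim-cmd vmsvc/getallvms | grep -i vdp

# ask the guest OS to shut down cleanly (42 is illustrative)
vim-cmd vmsvc/power.shutdown 42

# poll until the appliance reports "Powered off"
vim-cmd vmsvc/power.getstate 42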

Snapshot the LUN(s) for the backup appliance

Snapshotting the LUN is an efficient method to have a copy of the appliance in the off state to ensure data consistency. Most systems will allow you to script this kind of activity.

Example using Nexenta recordings[^fn-1]:

create snapshot backuppool/vdp-01@totape

Start the backup appliance

Since we have a snapshot, we can now restart the backup appliance using the VI-Client or whatever is easiest for you.

Present a cloned volume based on the snapshot to the backup server

Now that the appliance is running normally and we have a snapshot taken while it was stopped, we can continue with the extraction-to-tape process without any time pressure that would impact new backups.

So we need to create a cloned volume from the snapshot and present it to the backup server:

Nexenta example:

setup snapshot backuppool/vdp-01@totape clone backuppool/totape
setup zvol backuppool/totape share -i backupserver -t iSCSI

Where -i points to the name of the initiator group and -t points to the registered target group (generally a set of interfaces).

Now, to verify that the presentation worked, we go to the backup server and (assuming a Windows server) open Computer Management > Disk Management. We should now see a new disk with an Unknown partition type. Don’t try to format this or mount it as a Windows disk. From a practical standpoint you wouldn’t be doing any harm to your source data, since it’s a volume based on a snapshot rather than the original, but you do want to read the data it contains. What you want to note is the name on the left side of the window, e.g. “Disk 3”.

NB: If you are using VMFS extents based on multiple source LUNs, you’ll need to present all of them, so take note of all the new disks that appear here.
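If you prefer the command line to the Disk Management GUI, the same mapping is available from wmic (the Index column corresponds to the Disk number shown in Disk Management and to the \\.\PhysicalDriveN name used in the next step):

rem list physical disks with their index, device path, model and size
wmic diskdrive get DeviceID,Index,Model,Size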

Connect to the LUNs using the fvmfs Java utility and publish them over WebDAV

Still on the Windows server, you’re going to launch the fvmfs Java utility, so you’ll need a recent java.exe. At the command line, navigate to the folder containing the fvmfs.jar file and launch it using the following syntax:

"c:\Program Files\Java\jre7\bin\java.exe" -jar fvmfs.jar \\.\PhysicalDrive2 webdav 127.0.0.1 80

Where the Physical Drive number maps to the Disk number noted in the Disk Management console.

If you are using extents, list all of the disks using the same syntax, separated by commas.
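For example, a datastore spanning two extents might be published like this (the drive numbers are purely illustrative):

rem publish a two-extent VMFS volume over WebDAV
"c:\Program Files\Java\jre7\bin\java.exe" -jar fvmfs.jar \\.\PhysicalDrive2,\\.\PhysicalDrive3 webdav 127.0.0.1 80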

The WebDAV share is mountable on modern Windows systems with the classic “net use Z: http://localhost/vmfs”.

If you have the misfortune to still be using Windows 2003, you’ll also need to install the WebDAV client which may or may not work for you. If it still doesn’t work, then I recommend trying out WebDrive for mounting WebDAV shares to a letter.

Once the drive is mounted to a drive letter, you’ll have near native access speed to copy the data to tape.

Tear down

Cleaning up is mostly just walking through the steps in reverse. On the server doing the backups, unmount the drive, then close the command prompt running the fvmfs utility or press Ctrl-C to kill the process.

Then we need to delete the cloned volume, followed by the snapshot. Another Nexenta example:

destroy zvol backuppool/totape -y
destroy snapshot backuppool/vdp-01@totape -y

And we’re done.


Restoring data

To keep things simple, restoring this kind of data is best done to an NFS share that is visible to your ESXi hosts. This way you can restore directly from tape to the destination. The fvmfs tool presents a read-only copy of the VMFS datastore, so you can’t use it for restores.
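As a sketch, presenting a temporary restore share to a host from the ESXi shell could look like this (server, export path and datastore name are placeholders):

# mount a scratch NFS export as a temporary restore datastore
esxcli storage nfs add --host=nfsserver.example.com --share=/export/restore --volume-name=restore-scratch

# and remove it once the restore is finished
esxcli storage nfs remove --volume-name=restore-scratch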

Under normal conditions this would be a very exceptional event, and the restore should go to some kind of temporary storage.

Other options

A simple approach for extracting a VDP appliance is to export the system as an OVF, but there are a number of shortcomings to this approach:

  • it’s slow to extract
  • it can only extract via the network
  • you need a big chunk of temporary space
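For reference, that export can be scripted with VMware’s ovftool; a sketch, where the host name, appliance name and output path are placeholders and the appliance should be powered off first:

rem pull the powered-off VDP appliance over the network to a local OVF
ovftool vi://root@esxi-host.example.com/vdp-01 C:\exports\vdp-01.ovf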


NB: This is a specific approach to the specific problem of long term externalization. In most operational day to day use cases, you’re better off using some kind of replication to ensure availability of your backups.

[^fn-1]: Nexenta recordings are a method of building scripts based on the NMS command-line syntax. They are stored in the .recordings hidden directory in /volumes and are simple text files that you can launch with “run recording” from the NMS command line without diving down to the expert-mode raw bash shell.
