Mac Mini Hosting


For a number of years now I’ve been handling my own hosting over my home DSL service, with a few VPS instances and Squarespace for some other web sites. One key component is my mail server, currently hosted on OS X Server in a VM running on ESXi at home. Then I had an unfortunate incident where the local telco cut off my DSL line by accident, and as a result I was without internet connectivity for almost three weeks starting around Christmas.

This meant that all mail destined for my infrageeks.com domain ended up bouncing after a certain amount of time, which was the impetus to finally go out and get a Mac Mini hosted in a datacenter. I’ve been thinking about doing this for a long time, but was put off by the cost. Now, however, some of the Mac Mini colocation services will install ESXi as a standard option, which makes the idea considerably more attractive: I’m no longer limited to one OS instance and can run multiple OS environments. It also means I’ll be able to consolidate some of my other VPS and web hosting services onto this machine.

Yes, I could have done the same thing with a base OS X install and VMware Fusion, Parallels or VirtualBox, but the result is neither as efficient nor as flexible as what I have in mind.

For the notes and instructions, I’m assuming you have a basic understanding of ESXi, so I won’t be filling in a ton of screenshots, although if I get some feedback, I’ll go back and try to add some more. Or (shameless plug) ping me for consulting assistance.


I went with an upgraded Mac Mini with a 500 GB SSD plus a 500 GB internal hard drive, running ESXi from an SD card, which leaves the full disk space available for virtual machines.

The basics


One nice feature of running ESXi is that you can easily create a set of virtual networks with a router instead of just putting your virtual machines directly on the internet. This also allows you to do some interesting stuff like create a private network on your ESXi instance where your machines live, but are connected to your own network via VPN so it becomes just another subnet that behaves like an extension of your network.

This is important since my mail server is a member of an Open Directory domain that is hosted on another machine. By setting things up as an extended private network, the servers can talk to each other through normal channels without opening up everything to the internet or doing firewall rules on a machine by machine basis.

I’m using pfSense in a virtual machine linked with a VPN to a Cisco RV180.


You can use the internal disks formatted as VMFS-5 volumes natively to store your virtual machines, but I wanted to ensure that I had an effective backup plan that fit into my current system. I could have used VMware’s built-in replication feature or a third-party tool like Veeam or Zerto, which are designed for this kind of replication, but they all require vCenter and I wanted to see what could be done without additional investment.

So in this case I’m using OmniOS to create a virtual NFS server backed by VMDK files. It adds some extra work on the server, since every IO has to go through NFS to the OmniOS virtual machine and then down to the disks, but in exchange I gain snapshots and free replication back to my ZFS servers at home.
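The plumbing for this is straightforward; here’s a minimal sketch, where the dataset name, network range and addresses are placeholders rather than my actual configuration:

```shell
# On the OmniOS VM: create a dataset and export it over NFS,
# restricted to the storage network (placeholder addresses)
zfs create prod-a/vmstore
zfs set sharenfs=rw=@10.0.0.0/24,root=@10.0.0.0/24 prod-a/vmstore

# On the ESXi host: mount the export as an NFS datastore
esxcli storage nfs add -H 10.0.0.10 -s /prod-a/vmstore -v prod-a
```

Because the datastore is a ZFS dataset, snapshots and zfs send/receive replication come along for free.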

I’m in the middle of an existential debate concerning the most efficient configuration for the zpool in this situation.

One option is to have two zpools, one for the SSD and one for the HD, and to replicate from the SSD to the HD on a regular basis, and then off to the storage server at home. If the SSD fails, I can easily switch over to the HD. But in this case I don’t have any real automated protection against bit-rot, since each block is stored on non-redundant (from the ZFS point of view) storage.

The other obvious option is to create a single zpool with a mirror of the SSD and the HD, which would ensure that there are two copies of each block; if there is a problem with the data, ZFS will read both copies and use the one that matches the checksum. The flip side is that performance becomes less predictable, since some reads will come from a slow 5400 RPM disk and others from the SSD, while all writes will be constrained by the speed of the spinning disk.

I could also just set the SSD as a very large cache for a zpool backed by the hard drive, but this seems a little silly with no additional data protection.

The other side effect of slow disk IO is that I would end up in many more situations where the CPU is waiting on disk, which slows down the whole system, so I’m going with the two-pool setup for now. Budget permitting, a mirrored zpool with two SSDs is the ideal solution.
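The two layouts debated above can be sketched as follows (pool and device names are illustrative; check your actual device names with format on OmniOS):

```shell
# Option 1: two independent pools, SSD replicated to HD via snapshots
zpool create fast c2t0d0     # SSD
zpool create slow c2t1d0     # HD
zfs snapshot -r fast@hourly
zfs send -R fast@hourly | zfs receive -F slow/backup

# Option 2: a single mirrored pool; two copies of every block,
# but all writes throttled by the 5400 RPM disk
zpool create tank mirror c2t0d0 c2t1d0
```

The same send/receive pattern in option 1 also handles the off-site replication back to the ZFS servers at home.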

Next up: Step by step


Upgrading a standalone ESXi server

If you have an existing standalone ESXi 5.0 or 5.1 server that is not managed by vCenter, and you don’t have Update Manager and the other cool tools for handling upgrades, here’s a quick method that will do the trick (hat tip to Virtually Ghetto). I just used this to bring my home Mini up to 5.5 to fix a couple of issues with the caching server on an OS X Server VM.

Assuming that your machine has internet connectivity you should be able to upgrade directly from VMware’s online depot.

You’ll need to enable SSH or do this from the console. In either case there’s practically no feedback, so be prepared to be flying a little blind during the actual upgrade process.

You’ll also need to open the httpClient firewall rule, either from the VI Client or the command line:

esxcli network firewall ruleset set -e true -r httpClient

To check for available 5.5 images, you can query the depot with:

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

I added a “| grep 5.5” at the end to filter down to just the current release, noting that the list comes back unordered so filtering is useful. This gave me back the following list:

ESXi-5.5.0-1331820-no-tools       VMware, Inc.  PartnerSupported
ESXi-5.5.0-20131204001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20131204001-standard   VMware, Inc.  PartnerSupported
ESXi-5.5.0-20131201001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.5.0-1331820-standard       VMware, Inc.  PartnerSupported
ESXi-5.5.0-20131201001s-no-tools  VMware, Inc.  PartnerSupported

Then I manually shut down all of my running VMs (doing the vCenter Appliance last, of course). I chose the latest version to get things as up to date as possible, so the upgrade command is:

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20131201001s-standard

And then you wait a while for it to download and apply the upgrade, followed by a reboot. Once the server came back online, I found I was cut off from my NFS datastores, which were connected via a Thunderbolt/Ethernet adaptor.

esxcli storage nfs list
Volume Name  Host  Share            Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  ----  ---------------  ----------  -------  ---------  ---------------------
prod-a             /azzalle/prod-a  false       false    false      Unknown
prod-b             /azzalle/prod-b  false       false    false      Unknown
test               /azzalle/test    false       false    false      Unknown

So I applied the VIB kindly provided by Virtually Ghetto.
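Installing a VIB from the shell looks something like this; the file name below is a placeholder, so copy the actual VIB to a datastore first and adjust the path to match:

```shell
# Community VIBs need a relaxed acceptance level before they will install
esxcli software acceptance set --level=CommunitySupported

# Install the driver fix (placeholder path and file name)
esxcli software vib install -v /vmfs/volumes/datastore1/driver-fix.vib
```

A reboot afterwards is usually required for a driver VIB to take effect.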

For some reason that didn’t fix the NFS mounts right away, but rebooting once more brought everything back online, including the autobooted VMs.

A painless method for little labs and standalone machines that can be done entirely over ssh.

Now I’m off to upgrade all of my VMware Tools installs.


Awesome birthday present

And now my computer bag is no longer a spaghetti collection of miscellaneous cables and stuff. This is the BookBook Travel Journal from Twelve South.

So now I have the near-complete mobile kit with:

  • USB to 30-pin
  • USB to Lightning
  • USB to micro-USB
  • USB to Ethernet
  • Several SD cards
  • USB SD card reader
  • DisplayPort to VGA
  • iPad 30-pin to VGA
  • Power adaptor for the MacBook
  • Presenter remote with laser pointer

The dangerous thing is that there’s a perfect slot on the side just made for the iPad Mini Retina that I am resisting buying.


Can't format a VMFS-5 volume on an existing disk?

If you are cobbling together a home lab for ESXi, sometimes you’ll be reusing disks that have lived in other computers with other operating systems. This can cause a problem when you want to format one of those disks as a VMFS-5 volume, which requires a GPT partition table, and the disk still carries an MBR partition table from its previous life.

When you try to format the disk you’ll get an error like:

Error:A specified parameter was not correct.
Error Stack
Call "HostStorageSystem.ComputeDiskPartitionInfo" for object "storageSystem" on ESXi "x.x.x.x" failed.

In my particular situation I am using a hosted Mac Mini, so I can’t just pull the disk and wipe the partition table locally, but it’s still possible from the command line. You’ll need to enable SSH and the ESXi Shell and use the partedUtil command.
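Both services can be switched on from the vSphere client’s Security Profile section or the DCUI; if you already have console access, vim-cmd does the same thing (a sketch for ESXi 5.x):

```shell
# Enable and start the ESXi Shell and SSH services
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
```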

On my Mini I have a hard disk and an SSD which can be found under the /vmfs/devices/disks directory.

ls -lh /vmfs/devices/disks
total 1985717835
-rw-------    1 root     root        7.5G Jan 23 07:49 mpx.vmhba32:C0:T0:L0
-rw-------    1 root     root        4.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:1
-rw-------    1 root     root      250.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:5
-rw-------    1 root     root      250.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:6
-rw-------    1 root     root      110.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:7
-rw-------    1 root     root      286.0M Jan 23 07:49 mpx.vmhba32:C0:T0:L0:8
-rw-------    1 root     root      465.8G Jan 23 07:49 t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
-rw-------    1 root     root      465.8G Jan 23 07:49 t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H:1
-rw-------    1 root     root      476.9G Jan 23 07:49 t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____
-rw-------    1 root     root      476.9G Jan 23 07:49 t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____:1
lrwxrwxrwx    1 root     root          20 Jan 23 07:49 vml.0000000000766d68626133323a303a30 -> mpx.vmhba32:C0:T0:L0
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:1 -> mpx.vmhba32:C0:T0:L0:1
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:5 -> mpx.vmhba32:C0:T0:L0:5
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:6 -> mpx.vmhba32:C0:T0:L0:6
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:7 -> mpx.vmhba32:C0:T0:L0:7
lrwxrwxrwx    1 root     root          22 Jan 23 07:49 vml.0000000000766d68626133323a303a30:8 -> mpx.vmhba32:C0:T0:L0:8
lrwxrwxrwx    1 root     root          72 Jan 23 07:49 vml.010000000020202020202054454c35313933394a42385531484150504c4520 -> t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
lrwxrwxrwx    1 root     root          74 Jan 23 07:49 vml.010000000020202020202054454c35313933394a42385531484150504c4520:1 -> t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H:1
lrwxrwxrwx    1 root     root          72 Jan 23 07:49 vml.0100000000533141584e53414442313238313258202020202053616d73756e -> t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____
lrwxrwxrwx    1 root     root          74 Jan 23 07:49 vml.0100000000533141584e53414442313238313258202020202053616d73756e:1 -> t10.ATA_____Samsung_SSD_840_PRO_Series______________S1AXNSADB12812X_____:1

So now I need to delete the existing partitions on the two internal drives with the following command:

/sbin/partedUtil delete /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H 1

Note the 1 at the end, which identifies the partition to delete. If you have a disk with multiple partitions, you’ll need to delete each of them.
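If you’re not sure what partitions a disk holds, partedUtil can list the partition table first (using the same device name as in the listing above):

```shell
# Show the partition table type and entries before deleting anything
/sbin/partedUtil getptbl /vmfs/devices/disks/t10.ATA_____APPLE_HDD_HTS545050A7E362_____________________TEL51939JB8U1H
```

The first line of the output tells you whether the table is msdos (MBR) or gpt, and each following line starts with the partition number to feed to the delete command.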

Once you’ve done that, you can go back into the VI-Client or the web interface and format the disk as a VMFS-5 volume.


Mavericks NFS alert

Just a quick note to help those that are upgrading to Mavericks and use NFS automounts.

By default, Mavericks will use NFSv4, which uses a different security model, so you may end up with what appears to be a regular mount that is pretty much empty. Don’t worry, the data hasn’t gone away or been deleted. It’s just that NFSv4 doesn’t map users by simply matching local and remote UIDs the way NFSv3 does, especially if you’re using PAM to grab UIDs from an LDAP directory.

I haven’t yet pieced together the best method for moving to NFSv4 natively with my Solaris and OmniOS boxes, but in the meantime you just need to force the connection to NFSv3. If you’re using automounts for user home directories, add ver=3 to the automount entry and all will go back to normal.
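For example, a hypothetical auto_home entry forcing NFSv3 might look like this (the server name and export path are placeholders):

```shell
# /etc/auto_home — map user home directories over NFSv3
# "ver=3" forces the v3 protocol; nfsserver:/export/home is a placeholder
*  -fstype=nfs,ver=3  nfsserver:/export/home/&
```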

Connecting via Command-K in the Finder with an NFS URL doesn’t seem to be affected at the moment.
