Understanding the impact of scale-out storage

Scale-out has the ability to change everything

In the software-only space, solutions like Datacore and Nexenta are really quite good (I have used and deployed both) and I still recommend them for customers that need some of their unique features, but they share a fundamental limitation: they are based on a traditional scale-up architecture model. The result is that there is still a fair bit of manual housekeeping involved in maintaining, migrating and growing the overall environment. Adding and removing underlying storage remains a relatively manual task, and the front-end head units remain potential choke points. This is becoming more and more of an issue with the arrival of high performance flash, especially when installed directly on the PCIe bus. The hiccup is that a single PCIe flash card can generate enough IO to saturate a 10GbE uplink and a physical processor, which means you need bigger and bigger head units with more and more processing power.

So the ideal solution is to match the network, processor and storage requirements in individual units that spread the load around instead of all transiting through central potential choke points. We’re seeing a number of true scale-out solutions hitting the market right now that have eliminated many of the technical issues that plagued earlier attempts at scale-out storage.

The second issue is that scale-out changes the way you purchase storage over time. The "over time" part is a key factor that keeps getting missed in most analyses of ROI and TCO, since most enterprises evaluating new storage systems are doing so in the context of their current purchasing and implementation methodology: they have an aging system that needs replacing, so they evaluate the solution as a full-on replacement without truly understanding the long term implications of a modern scale-out system.

So why is this approach different? There are two key factors that come into play:

  • You buy incremental bricks of capacity and performance as you need them
  • Failure and retirement of bricks are perceived identically by the software

To the first point, technological progress makes it clear that if you can put off a purchase, you will get a better price/capacity and price/performance ratio than you have today. Traditionally, many storage systems are purchased with enough head room for the next 3 years, which means you're buying tomorrow's storage at today's prices.

So this gives us the following purchase model:

This is a simplified model based on the cost/Gb of storage, but it applies to all axes involved in storage purchase decisions such as IOPS, rack density, power consumption, storage network connections and so on. Remember also that you might end up with bricks that still cost $x but have 50% more capacity in the same space. A key feature of properly done scale-out storage is the possibility of heterogeneous bricks, where the software handles optimal placement and distribution for you automatically. For "cold" storage, we're seeing 3Tb drives down under the $100 mark, while 6Tb drives are now available to the general public. If you filled up your rack with 3Tb drives today, you'd need twice the space and consume twice the power compared to putting off the purchase until the 6Tb drives come down in price. For SSDs, Moore's Law is working just fine as we see die-shrinks increase storage density and performance on a regular cycle.
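As a back-of-the-envelope illustration of the deferral argument (the 102 Tb target and the drive counts are invented round numbers, not quotes):

```shell
# Hypothetical: filling ~102 Tb of cold storage with 3 Tb drives today
# versus waiting for 6 Tb drives to come down in price.
target_tb=102
drives_3tb=$(( target_tb / 3 ))   # drives needed today
drives_6tb=$(( target_tb / 6 ))   # drives needed after the density bump
echo "3 Tb drives needed: $drives_3tb"
echo "6 Tb drives needed: $drives_6tb"
# Half the spindles means half the slots, roughly half the power draw and
# fewer SAS lanes -- on top of the lower cost per drive later on.
```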

In some organisations this can be a problem since they have optimized their IT purchasing processes around big monolithic capital investments (like going to RFP for every capital investment), which means the internal overhead incurred can be counterproductive. But these are often the same organisations that are pushing to outsource everything to cloud services so that storage becomes OpEx; this type of infrastructure investment lives somewhere between the two and needs to be treated as such. Moving straight to the cloud can be a lot more expensive, even when internal soft costs are factored in. Don't forget that your cloud provider is using the exact same disks and SSDs as you are and needs to charge for their internal management plus a margin.

And on to the upgrade cycle…

The other critical component of scale-out shared-nothing storage is that failure and retirement are perceived as identical situations from a data availability perspective (although they are different from a management perspective). Properly designed scale-out systems like Coho Data, ScaleIO, VSAN, Nutanix and others guarantee availability of data by balancing and distributing copies of blocks across failure domains. At the simplest level a policy is applied that each block or object must have at least two copies in two separate failure domains, which for general purposes means a brick or a node. You can also be paranoid with some solutions and specify more than two copies.
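A toy sketch of what such a placement policy looks like; the brick names and the modulo placement rule are invented for illustration, and real systems use far smarter placement logic:

```shell
# Two-copy policy: each block gets a primary and a replica on two
# *different* bricks (failure domains).
bricks=(brick1 brick2 brick3 brick4)
n=${#bricks[@]}
block_id=7
primary=${bricks[$(( block_id % n ))]}
replica=${bricks[$(( (block_id + 1) % n ))]}   # guaranteed to differ from the primary
echo "block $block_id -> primary=$primary replica=$replica"
```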

But back to the retirement issue. Monolithic storage systems basically have to be replaced at least every 5 years, since otherwise your support costs will skyrocket. Understandably so, since the vendor has to keep warehouses full of obsolete equipment to replace your aging components. And you'll be faced with all the work of migrating your data onto a new storage system. Granted, things like Storage vMotion make this considerably less painful than it used to be, but it's still a big task and other issues tend to crop up: do you have space in your datacenter for two huge storage systems during the migration? Enough power? Are the floors built to take the weight? Enough ports on the storage network?

The key here is that a brick failure in a scale-out system is detected and treated as a violation of the redundancy policy, so the remaining bricks redistribute/rebalance copies of the data to ensure that the 2- or 3-copy policy is respected without any administrative intervention. When a brick hits the end of its maintainable life, it just gets flagged for retirement, unplugged, unracked and recycled, and the overall storage service just keeps running. This is a nice two-for-one benefit that comes natively as a function of the architecture.
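Continuing the toy model from a hypothetical four-brick pool, the rebalancing reaction to a failed brick can be sketched like this (again, the names and the placement rule are invented):

```shell
# brick2 has failed; the replicas it held must be re-created on the
# survivors to restore the two-copy policy.
surviving=(brick1 brick3 brick4)
n=${#surviving[@]}
# block ids whose second copy lived on brick2 (invented data)
for b in 11 12 13; do
  echo "block $b: new replica on ${surviving[$(( b % n ))]}"
done
# A real system would also skip the brick holding the remaining copy and
# weight the choice by free capacity.
```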

To further simplify things you are dealing with reasonably-sized server shaped bricks that fit into standard server racks, not monolithic full-rack assemblies.

Illustrated, this gives us this:

Again, this is a rather simplistic model, but with constantly growing storage density and performance, you are enabling the storage to scale with the business requirements. If there's an unexpected new demand, a couple more bricks can be injected into the process. If demand is static, then you're only worried about the bricks coming out of maintenance. It starts looking a lot more like OpEx than CapEx.

This approach also ensures that the bricks you are buying use components that are sized together correctly. If you are buying faster and bigger high performance PCIe SSDs, you want to ensure that you are buying them with current processors capable of handling the load, and that you can handle the transition from GbE to 10GbE to 40GbE, …

So back to the software question again. Right now, I think that Coho Data and ScaleIO are two of the best standalone scale-out storage products out there (more on hyperconvergence later), but they are coming at this from different business models. ScaleIO is, strangely, the software-only solution from the hardware giant, while Coho Data is the software-bundled-with-hardware solution from part of the team that built the Xen hypervisor. Andy Warfield, Coho Data's CTO, has stated in many interviews that the original plan was to sell the software, but that they had a really hard time selling this to enterprise storage teams that want a packaged solution.

I love the elegance of the zero-configuration Coho Data approach, but wish that I wasn't buying the software all over again when I replace a unit that hits EOL. This could be mitigated with some kind of trade-in program.

On the other hand, I also love the tunability and BYOHW aspects of ScaleIO, but find it missing the plug and play simplicity and the efficient auto-tiering of Coho Data. But that will come with product maturity.

It's time to start thinking differently about storage and to reexamine the fundamental questions of how we buy and manage it.


Understanding the value of software in storage

It’s all about the software

In today’s storage world, the reality is that the actual storage component and the surrounding hardware is all commodity based (with a few exceptions). A storage system is composed of disks, disk cases, communications links, processors, memory and networking.

Fundamentally, the disks are the same ones you can buy from Amazon, NewEgg et al. The only major observable difference is that enterprise storage drives tend to be equipped with SAS or NL-SAS interfaces, which offer a more advanced command set and a more robust architecture permitting dual-path connections compared to SATA. NL-SAS drives are SATA drives with a smarter controller interface, but the mechanics are identical.

The disk cases | drawers | enclosures (pick your name) are all based on a standard structure with a SAS backplane that drives slot into and most of them are OEM’d from a very short list of vendors. Historically, these were often connected using Fibre Channel but pretty much everyone has come to terms with the fact that FC is unsustainably expensive for this and even the latest top of the line VMAX has gone over to SAS as the connection to the disk enclosures.

Internally, most proprietary interconnects (think RapidIO) have been standardized on 40GbE and Infiniband which, while expensive, are commercially available standard components.

On the processing front, with the exception of HP 3PAR's custom ASICs, nearly everything else on the market is using standard Intel motherboards with standard Intel processors.

So why are storage systems so expensive? It’s all about the software that adds value to this collection of off the shelf parts in order to make them all work together in a coherent fashion and give you the features over and above just putting bits to disk and maintaining a certain amount of local redundancy.

How much am I paying for this software?

At the simplest level, go over to DELL or Supermicro and spec out a barebones DAS storage system per your requirements, then add in a couple of servers with the number of 10GbE, FC & SAS ports you need. That's your storage hardware cost. Then get a quote from your storage provider. Ignore the costs assigned by part or by disk; at the end of the day it's the negotiated package price that matters. The publicly-quoted prices are fantasies designed to impress the purchasing department with huge rebates. I've even seen cases where the exact same part number has different list prices depending on which model of storage controller you're buying. So the only price that matters is the whole package with rebates.
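The arithmetic is trivial but worth writing down; all figures here are invented placeholders to show the shape of the calculation:

```shell
# Software + integration premium = negotiated package quote minus DIY hardware.
diy_hardware=60000       # barebones DAS + disks + two servers, DELL/Supermicro quote
vendor_package=180000    # negotiated all-in price, rebates included
software_premium=$(( vendor_package - diy_hardware ))
echo "Software and integration premium: \$$software_premium"
```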

The difference between the two is the software cost that you can now compare to a software-only solution like Nexenta or Datacore.

Now imagine that you are putting that money in the trash after the planned life-cycle of the storage investment, generally 3-5 years. You'll be buying that software all over again with your next storage acquisition.

The key takeaway here is that the value in storage systems has moved from the storage hardware itself to the software. All of the storage components are commodity. IBM, EMC, NetApp et al. do not actually make any of the storage components. The disks are bought from Seagate, Western Digital and Toshiba, SSDs from SanDisk, Intel and Samsung, RAID controllers from LSI, Ethernet from Broadcom & Intel, FC from QLogic and Brocade, motherboards from Intel.

You get integration and the software.

Is there a better way ?

The optimal approach would be to buy the commodity hardware and run your own software on it. This is the standard approach for companies like Nexenta and Datacore which bring all of the value add features one expects from enterprise storage like replication, snapshots, and so on, although granted through very different internal mechanisms.

Your software is a one-time cost with maintenance over time, but since it’s just software, the maintenance cost doesn’t skyrocket after 5 years. You replace the hardware as it becomes obsolete or your needs change inside the cost effective 5 year maintenance window, leveraging the software’s tools to make the migrations invisible to the servers consuming the storage. Your storage costs are reasonable since you’re only paying for the most basic of components without the markup that accompanies the software integrated into the system.
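A quick 10-year sketch of the difference, with invented numbers and maintenance left out on both sides for simplicity:

```shell
# Integrated array repurchased at year 5 vs. software bought once with
# commodity hardware refreshed at years 0 and 5.
array=180000             # integrated system, bought twice over 10 years
integrated_10yr=$(( array * 2 ))
software=120000          # one-time licence
hardware=60000           # commodity refresh, each cycle
software_route_10yr=$(( software + hardware * 2 ))
echo "integrated: \$$integrated_10yr vs software+commodity: \$$software_route_10yr"
```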

DELL Compellent has started thinking this way with their new licensing model: once you reach a certain size, you can replace the controllers for the cost of the hardware and migrate your existing software licences over, which puts it closer to Nexenta and Datacore from a business model standpoint.

But for some reason, a lot of IT shops are leery of buying software to take this approach, for a variety of reasons ranging from sales pressure from incumbent vendors (you should see the discounts when they feel threatened) to IT management's desire to have "one throat to choke" in case anything ever goes wrong.

The other aspect is that while going the software route gives you the ability to choose exactly what you want, it can also be a burden for IT shops that no longer have the in-house expertise to do basic server and storage design. The freedom of choice also brings the responsibility of making the right choices.

So when evaluating storage solutions, try and figure out exactly what you are paying for and understand how much of your investment is tied to the way that you buy it.


Telus rebuttal

An interesting article from Telus, partially debunking an article by Michael Geist concerning an OECD report on cellular service in Canada. It's a good mix of information, some of which is entirely pertinent, other bits less so, to which I'm going to try to add a little more context from the point of view of a Canadian who has lived in the US and France and travelled for work in England and western Europe.

We’re not about to argue that Canada has the cheapest wireless rates in the world – that would be no more factual than the oft-repeated mythology that our prices are the highest. The facts, however, prove Canadians pay very competitive rates for some of the best wireless technology in the world – backed by very high investment by TELUS and our competitors.

A nice mix of truth and semi-truthfulness here. Canada clearly does not have the cheapest wireless rates, nor the highest. Competitiveness is in the eye of the analyst, and it's true that Telus and the other Canadian telcos do invest a lot. It's also worth noting that current investments are higher than normal due to the ongoing transition to next-generation LTE networks, so the investment figures need to be taken with a grain of salt.

One key fact – Canadian pricing is better than the U.S. in 12 of the 15 wireless pricing categories the report looks at. The U.S. has similar geography and economic conditions (including incomes) as Canada, though 10 times the population, and therefore makes for a better comparison than countries with far denser – and therefore less expensive to serve – populations and far lower average incomes.

Overall, I have to agree here that the US is the closest natural comparison economically. However, the vast bulk of western Europe is comparable in terms of economic activity and average income, albeit with generally higher average population densities.

A second fact – when you look at Mr. Geist’s chart, note we are being compared to countries like Slovenia, Slovakia, and Turkey – hardly a legitimate comparison when you consider their markets and economies are vastly different from ours. Despite that, if you focus on the high-usage tiers for smartphone service, which are the most relevant for comparison because that’s where the average Canadian sits, we finish 21st and 22nd out of 34 countries. That’s about average. If you compare us to just the G7 countries, which have more comparable economies, we finish third and fourth out of seven – right in the middle. As mentioned above, we do even better when compared to the U.S. one-to-one.

In general I have to agree with these statements. Slovenia, Slovakia and Turkey are emerging economies working within vastly different economic conditions and regulatory frameworks.

A third key fact – the OECD report’s methodology is limited. It is not a comprehensive report, but rather it takes a random sampling of one or two plans for each country in each category and does not take into account the different flavour of services in countries. Their reports often end up comparing apples to oranges as a result.

Here I have to agree as the report methodology is non-optimal for this kind of comparison.

The wireless data-only plans are a good example of this. At first glance it looks like Canadian prices are high for these plans – which are typically used on a laptop with a thumb-sized wireless modem. However, scratch the surface and you find that Canadian data-only plans offer customers far better speeds than plans in other OECD countries do. The OECD itself notes that in the 10GB basket, the plan representing Finland only delivers up to 400Kbps, Estonia 1 Mbps, and Canada 100 Mbps! They’re more expensive because they’re better plans, on better networks, and deliver vastly superior customer experience.

It's true that data-only plans are badly grouped in the report. I would note that this is worth revisiting in a couple of years once LTE is the predominant standard. For example, France currently has 40% of its network running on LTE and there is no price differential between 3G and 4G access.

The OECD report also does not factor in the fact that most Canadian providers subsidize handsets for customers whereas that is uncommon or unknown in many OECD countries. Ignoring an upfront cost of $500 or more is not comparing apples to apples. It also does not factor in that typically Europeans pay for two or more wireless accounts because they frequently travel between countries or cannot get the all-in pricing North Americans take for granted – for example, one account for daytime calls, a second one for evening and weekend calls. That’s why you get penetration rates of 160 per cent in Norway – 1.6 wireless devices for every adult and child in the country.

This is actually untrue in western Europe. While there are many operators that offer unsubsidized plans, there is also a high mix of subsidized plans as well. However, I have yet to see significantly discounted plans in Canada for those people willing to pay up front for their terminal.

There is also the question of why many people have multiple devices in Europe. In many cases, employment legislation makes the use of “Bring Your Own Device” to the office non viable so employers furnish a telephone for work and people have their own personal devices. At the same time, à la carte offerings and no contract options make it possible for a higher percentage of people to have cellular plans for their iPads and Android tablets which bumps up the penetration rates.

The argument regarding all-in pricing is frankly bullshit. It's worth noting that in most European countries there is no notion of long distance calling; price separation is based on calls to land lines vs. other cellular phones. As an example, here is a résumé of the no-contract offering:

  • unlimited calls to cell phones in continental France, French islands and territories, USA, Canada, Hawaii
  • unlimited calls to land lines in France and 41 countries (USA, Canada, Belgium, England, …)
  • unlimited SMS to continental France, French islands and territories
  • unlimited MMS to continental France
  • unlimited use of Wifi hotspots (automatic connection via EAP-SIM)
  • 3 Gb of data in continental France (bandwidth reduction over this limit)

All of this for 20€/month. Note that the European standard is moving toward reduced bandwidth rather than surcharges for overage situations.

Which raises a fourth fact. The report highlights, and Mr. Geist ignores, that Canadian carriers invest almost twice as much in technology and infrastructure per customer than the OECD average. The only country that invests more is Australia, and that’s because they are undertaking a massive tax-payer funded infrastructure project. Our investment is all private, at no cost to taxpayers.

This is undeniably true. Part of this addresses the geographic constraints imposed by Canada’s huge size, but also must be considered in the light of the current infrastructure upgrades to LTE which are not being pursued with equal vigor in all countries, notably due to the economic issues affecting countries like Italy and Spain.

Consider that world-leading level of private investment in the context of population density, and the challenges that come with serving a vast, sparsely populated landscape. Canada has only 12 subscribers per square km, compared to an average of 37 in the U.S. and 312 in the UK. If you factor out the unpopulated areas of Canada that don’t have wireless networks and only include the geography that does have wireless service, we are still serving the 200th least densely-populated landmass in the world. Despite this, 99 per cent of Canadians have access to cutting-edge wireless technology.

While this is certainly true, when you map actual coverage to population density, Canada is a lot more like Europe than the telcos would have you believe. Cellular coverage is deployed primarily in urban environments where there is sufficient population to justify the investments. Excluding the unpopulated areas gets us closer to a reasonable comparison, but note that the resulting density figures are not quoted in this comparison.

That investment back into service for our customers comes out of the profits we earn, and is directly responsible for the quality of the networks we offer. TELUS alone has invested $100 billion in Canada since 2000. On the back of that investment Nokia Siemens, an international telecom technology firm, found TELUS has the best 4G LTE network in the world – best quality, least dropped calls. I’m very proud of that, and the work that made that remarkable achievement possible.

So that's a little less than $10B per year, which is a perfectly reasonable outlay for covering a country like Canada. For reference, investments by operators in France came to 10B€ in 2012 and around 8B€ in 2011, so these numbers look pretty much par for the course.

When you consider our enormous investment, challenging geography, sparse population and outstanding networks Canada really SHOULD be the most expensive country for wireless service in the OECD, but we’re not. That’s a great success story we should be celebrating.

Agreed that the investment is proportional to the challenge, but the pricing is still significantly higher for equivalent service. The closest equivalent I've found from Telus for a smartphone with 3Gb of data comes to $95 vs 20€, not including free international calling. If I factor in that I'm using a cellular-equipped iPad plus an iPhone, I get to $165 vs 40€ for 6Gb total.

Don’t let the critics with a vested interest in a well-established, but ill-informed, position spin you on this one. Scratch the surface of their arguments and get to the facts.

Scratch further and you get more contextualized facts.


Whither the Mac Mini?


I've been a fan of the Mac Mini for ages now, and its ability to run VMware ESXi (albeit unsupported) makes it a wonderfully useful platform for hosting multiple virtual machines, including OS X VMs. It is my favorite home lab machine: it consumes hardly any power (11W-85W), is tiny, requires no big external power brick, and has only a discreet white LED.

The current issue is that it hasn’t evolved in ages with the last update dating back to October 2012 which has been keeping me from buying a new one for upgrading my home lab.

My wish list for the next generation Mac Mini is to see them split it into two streams for desktop and server use. The Mini is already sold in this manner with a “server” model that comes with the OS X Server licence. But since OS X Server has become a $20 app purchase, there’s little practical difference between the two types.

The desktop model can continue with the current lineage with a Haswell processor bump and perhaps a better graphics card for those people that want to use it as a desktop or media station.

On the server front, the following changes would be relatively easy to accomplish and could even retain the same form factor:

ECC memory

This is a requirement for VMware certification and support. This would enable sales into companies that want a more robust method for hosting their OS X Server machines in VMs that can then be moved around dynamically and more easily integrated into disaster recovery and business continuity plans. OS X Server is a great small business platform, but lacks in areas of clustering and so on, which is where VMware shines.

Bump the max memory to at least 32 Gb

This is the first core limitation for running a lot of virtual machines. I’ve tested with both the Core i5 and the Core i7 models and at steady state, unless you have some truly processor intensive applications, you’ll run out of memory long before you saturate the CPU. Currently there’s no point in buying the i7 model for most standard server workloads (mail, Open Directory, DNS, file services, caching, messages, etc.). With 32 Gb, you can push the consolidation rate up enough to justify the Core i7.
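A rough consolidation estimate shows why memory is the ceiling; the per-VM figure and hypervisor overhead below are illustrative assumptions, not measurements:

```shell
# VMs per host at a nominal 2Gb per small server VM, after hypervisor overhead.
vm_ram_gb=2
overhead_gb=2
vms_16=$(( (16 - overhead_gb) / vm_ram_gb ))
vms_32=$(( (32 - overhead_gb) / vm_ram_gb ))
echo "16Gb host: ~$vms_16 VMs, 32Gb host: ~$vms_32 VMs"
```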

Dual ethernet interfaces

Currently I end-run this problem by using a Thunderbolt adaptor to get a second gigabit ethernet interface so that I can separate storage traffic from regular network traffic, but dual integrated ethernet would simplify things immensely: no extra adapter to buy and clutter things up, and no driver issues. I'd really love to see 10GBase-T, but since the Mac Pro arrived without 10GbE, I don't think that's a likely scenario.


These are terribly unlikely options, but ones that would be nice to see:

Expansion options

Thunderbolt is an extension of the PCIe bus, and there are solutions out there from Sonnet, OWC and others that let you install regular PCIe cards in a Thunderbolt-connected box. But they are all relatively big, expensive and clunky. Currently Apple is the only vendor producing Thunderbolt ports at sufficiently large scale to benefit from economies of scale, so it would be nice if they proposed a stackable Mini-formatted box with a PCIe slot, even if it were limited to half-length cards.

Management interface

Another one that is exceedingly unlikely, but having an iLO/DRAC-style remote management interface would go a long way to making it a truly serious server that lives in a server room and can be managed remotely. But after adding the dual ethernet connections, there's not a lot of room left on the back if we keep the current form factor.

Here’s hoping there will be some news at WWDC…


Mac Mini Hosting (ZFS loopback)


I’ve been a fan of ZFS storage for a while now and I find it particularly useful in this type of standalone environment. One of my constant preoccupations (personally and professionally) is data protection from both a backup and disaster recovery perspective. So in this case, I’m going to create a VM locally that is going to play the role of a storage server that will publish NFS shares to the ESXi server internally.

From here, I gain all of the advantages of the ZFS built-in features like snapshots, compression, and the ability to efficiently replicate data back to the ZFS servers at home.
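The replication itself is just ZFS snapshots plus incremental send/receive over ssh; a sketch, where the snapshot names, the `home` host and the `homepool` pool are placeholders for your own environment:

```shell
# Take a new snapshot, then ship only the delta since the previous one
# back to a pool at home.
zfs snapshot ssd/mail@2014-05-01
zfs send -i ssd/mail@2014-04-30 ssd/mail@2014-05-01 | \
  ssh home "zfs receive homepool/mail"
```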

There are basically two main streams of ZFS based systems: Solaris kernel based solutions and BSD ones. ZFS on Linux is making some headway, but I haven’t yet had the time to poke at the current release to check out its stability.

If you want a simple setup with a really nice web interface for management, FreeNAS is probably the best bet. I personally prefer OmniOS for this kind of thing since I drive mostly from the command line and it's a very lean distribution with almost no extra overhead. Generally speaking, the big advantage of FreeNAS is its extensive hardware support because of its BSD roots, but in my case the VM "hardware" is supported either way so this doesn't matter as much.

I’ll give a walkthrough on the process for using OmniOS here.


You could simply deploy the ZFS VM directly on the virtual LAN network you’ve created, but I like to keep storage traffic segmented from the regular network traffic. So I create a new virtual switch with a port for connecting VMs called Storage (or NFS or whatever). I also create a new VMkernel interface on this vSwitch using a completely different subnet that will be used to connect the NFS server to the ESXi server.
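For reference, the same vSwitch and VMkernel setup can be scripted from the ESXi shell instead of the client; a sketch using ESXi 5.x-era esxcli, where the switch and portgroup names and the 10.10.10.0/24 subnet are examples only:

```shell
# Storage vSwitch with a VM portgroup and a VMkernel port for NFS traffic.
esxcli network vswitch standard add --vswitch-name=vSwitchStorage
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchStorage --portgroup-name=Storage
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchStorage --portgroup-name=StorageKernel
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=StorageKernel
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.1 --netmask=255.255.255.0 --type=static
```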

If this server were completely isolated and there was no need for routing traffic, I would just leave it this way, but there are a couple of things to keep in mind: how do you manage the server, and how does it replicate data? You could manage the server using the Remote Console in the VMware client, but ssh is more convenient. For replication, the server will also need to be able to connect to machines on the remote network, so it will need to be attached to the routed LAN subnet.

So the preferred configuration for the VM is with two network cards, one for the LAN for management and replication, and one for NFS traffic published to the ESXi server.

Installation & Configuration

The install process is just a matter of downloading the ISO, attaching it to a new VM and following the instructions. I configured mine with 3Gb of RAM (ZFS likes memory), an 8Gb virtual hard disk from the SSD for the OS, and two 400Gb virtual disks, one from the SSD and one from the HD. When I build custom appliances like this, I usually use the vmDirectPath feature to pass a disk controller directly to the VM, but that's a little hard to bootstrap remotely.

Then from the console, there’s some configuration to do. The first stages are best done from the console since you’ll be resetting the networking from DHCP to a fixed address.

Network configuration

Here’s a set of commands that will configure the networking to use a fixed IP address, setup DNS and so on. You’ll need to replace the values with ones appropriate to your environment. Most of this requires that you run this as root, either via sudo or by activating the root account by setting a password.

# example addresses throughout -- substitute values for your environment
echo "192.168.1.1" > /etc/defaultrouter              # sets the default router to the LAN router
echo "nameserver 192.168.1.1" > /etc/resolv.conf     # create resolv.conf and add a name server
echo "nameserver 8.8.8.8" >> /etc/resolv.conf        # append a second DNS server
echo "domain example.lan" >> /etc/resolv.conf        # and your search domain
echo "example.lan" > /etc/defaultdomain
cp /etc/nsswitch.dns /etc/nsswitch.conf              # a configuration file for name services that uses DNS
svcadm enable svc:/network/dns/client:default        # enable the DNS client service
echo "192.168.1.20 nfsserver" >> /etc/hosts          # the LAN IP and the name of your server
echo "192.168.1.20" > /etc/hostname.e1000g0          # the IP for the LAN card - a virtual Intel E1000 card
echo "10.10.10.2" > /etc/hostname.e1000g1            # the IP for the NFS card
svcadm disable svc:/network/physical:nwam            # disable the DHCP service
svcadm enable svc:/network/physical:default          # enable fixed IP configuration
svcadm enable svc:/network/dns/multicast:default     # enable mDNS

Reboot and verify that all of the network settings work properly.

Setting up zpools

The first thing you’ll need to do is find the disks and their current identifiers. You can do this with the format command, which lists the disks; press Ctrl-C to exit without formatting anything.

# format
Searching for disks...done
       0. c2t0d0 <VMware-Virtual disk-1.0 cyl 4093 alt 2 hd 128 sec 32>
       1. c2t1d0 <VMware-Virtual disk-1.0-400.00GB>
       2. c2t2d0 <VMware-Virtual disk-1.0-400.00GB>
Specify disk (enter its number): ^C

In my case, my two 400 GB disks are available at the identifiers c2t1d0 and c2t2d0, which correspond to Controller 2, Targets 1 & 2, Disk 0. The first entry is the OS disk.

I’m going to create my first non-redundant zpool on the SSD disk, which is the first of the two data disks:

zpool create ssd c2t1d0

Note that these names are case sensitive. The result is a new zpool called ssd which is automatically mounted on the root file system, so its contents are available at /ssd.
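To confirm the pool came up as expected, the standard ZFS status commands are handy (these need a live ZFS system, so they're shown here for reference):

```shell
zpool status ssd   # pool health and member disks
zfs list ssd       # capacity, usage, and the /ssd mount point
```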

Before continuing, there are a few settings I like to configure. Any filesystems I create will inherit them, so I only need to do this once.

zfs set atime=off ssd
zfs set compression=on ssd

Disabling atime means that ZFS won’t update the last-accessed time metadata on files, which incurs unnecessary write overhead for our use case. On a regular file server this metadata may be more useful. Enabling compression is a practically free way to get more usable space, and the overhead is negligible.

Now that we have a pool, we can create filesystems on it. I’m going to keep it simple with top level filesystems only. So for my setup, I’m creating one filesystem for my mail server all by itself and a second one for general purpose VMs on the LAN.

zfs create ssd/mail
zfs create ssd/lan

To make these accessible to ESXi over NFS, we need to share the filesystems:

zfs set sharenfs=rw=@<storage-subnet>,root=@<storage-subnet> ssd/mail

Here I’m publishing the volume over NFS to all IP addresses in the storage subnet. Currently the only other IP in this zone is the VMkernel interface on the storage vSwitch, but you could imagine having others here. This means that the NFS share is not available to the LAN, which is the desired configuration since the only thing on this volume should be the mail server VM. You can also grant access to specific IP addresses by omitting the @ prefix, which designates a subnet.
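For example, to grant two specific hosts access instead of a whole subnet, the access list takes colon-separated entries without the @ prefix (the addresses here are hypothetical):

```shell
# share ssd/lan read/write to two specific clients, root access for one
zfs set sharenfs=rw=192.168.2.10:192.168.2.11,root=192.168.2.10 ssd/lan
```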

Now, I’ve had some permissions issues from time to time with NFS and VMware, so to keep things simple, I open up all rights on this filesystem. It’s not really a security issue since the only client that can see it is the ESXi server.

chmod -R a+rwx /ssd/mail


Now, in the ESXi configuration, we need to mount the NFS share so that we can start using it to store VMs. This is done in Configuration > Storage > Add Storage…, selecting NFS.

The server address is its IP address on the storage subnet, and the path to the share is the complete path: /ssd/mail in this case.
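If you prefer the command line, the same mount can be done from the ESXi shell; the server IP and datastore name below are examples for illustration:

```shell
# mount the NFS share as a datastore named "mail" (example values)
esxcli storage nfs add --host=192.168.2.1 --share=/ssd/mail --volume-name=mail
esxcli storage nfs list   # confirm the datastore is mounted
```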

Snapshots and backups

Now that we’ve got a VMware datastore where we can put VMs, we can take advantage of the advanced features of the ZFS filesystem underlying the NFS share.

The first cool feature is snapshots, which record the state of the filesystem so that you can roll back to previous points in time, or mount a snapshot as a new filesystem and pick out files you want to recover. It’s worth remembering that snapshots are not backups: if you lose the underlying storage device, you lose the data. But they are a reasonable tool for data recovery.

Creating snapshots is as simple as:

zfs snapshot ssd/mail@snapshot_name
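Once a snapshot exists, you can browse it read-only through the hidden .zfs directory, which is handy for pulling out individual files (the snapshot and file names below are examples):

```shell
# browse the snapshot's contents read-only
ls /ssd/mail/.zfs/snapshot/snapshot_name/
# copy a single file back into the live filesystem (hypothetical file name)
cp /ssd/mail/.zfs/snapshot/snapshot_name/somefile.vmdk /ssd/mail/
# or roll the entire filesystem back, discarding everything written since
zfs rollback ssd/mail@snapshot_name
```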

Any new writes are now written to storage in such a way as to not touch the blocks referenced by the filesystem at the time the snapshot was taken. The side effect is that you can consume more disk space, since all writes have to go to fresh storage, so you also need to delete old snapshots so that they don’t hang around forever.

zfs destroy ssd/mail@snapshot_name

This will delete the snapshot and free up any blocks that were uniquely referenced by it. Obviously, this is the sort of thing that you want to automate, and there are a number of tools out there. I’ve created a couple of small scripts that you can pop into cron jobs to help manage these tasks. simple-snap just creates a snapshot with a date/time stamp as the name. The following line in crontab will create a new snapshot every hour.

0 * * * * /root/scripts/simple-snap.ksh ssd/mail
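As a rough sketch of what such a script might do (this is a hypothetical reconstruction, not the actual simple-snap.ksh from the projects page):

```shell
#!/bin/sh
# Hypothetical sketch of a simple-snap style script: snapshot the given
# filesystem with a date/time stamp as the snapshot name.

# snap_name: build "filesystem@YYYY-MM-DD_HH.MM" from the first argument
snap_name() {
    printf '%s@%s' "$1" "$(date +%Y-%m-%d_%H.%M)"
}

# simple_snap: create the timestamped snapshot
simple_snap() {
    zfs snapshot "$(snap_name "$1")"
}
```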

For cleaning them up, I have auto-snap-cleanup, which takes a filesystem and the number of snapshots to keep as arguments, so the following command will delete all but the last 24 snapshots:

1 * * * * /root/scripts/auto-snap-cleanup.ksh ssd/mail 24

So you’ll have a day’s worth of “backups” to go back to.
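The cleanup logic can be sketched along these lines (again, a hypothetical reconstruction rather than the actual auto-snap-cleanup.ksh):

```shell
#!/bin/sh
# Hypothetical sketch of an auto-snap-cleanup style script: keep only the
# newest $2 snapshots of filesystem $1, destroying the rest.

auto_snap_cleanup() {
    fs="$1"
    keep="$2"
    # list this filesystem's snapshots oldest-first, print all but the last $keep
    zfs list -H -t snapshot -o name -s creation -d 1 "$fs" |
        awk -v keep="$keep" '{ l[NR] = $0 }
            END { for (i = 1; i <= NR - keep; i++) print l[i] }' |
        while IFS= read -r snap; do
            zfs destroy "$snap"
        done
}
```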

For the remote replication, go check out the scripts under the projects tab. If you have two physical machines, each with an SSD and an HD, a reasonable approach is to replicate the SSD filesystems to the HD of the other machine (assuming you have an IPsec tunnel between the two). Then, if one machine goes offline, you simply set the replicated filesystems on the HD to read/write, mount them on the local ESXi host, register the VMs, and start the machines. Obviously there will be a performance reduction since you’ll be running from the hard drive, but the machines can be made available practically immediately, with at most an hour’s worth of lost data.
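On the surviving machine, the failover steps might look something like this; the pool, filesystem, and VM paths are all examples, not the actual names from my setup:

```shell
# replicated filesystems typically arrive read-only; make the copy writable
zfs set readonly=off hd/mail
# publish it over NFS to the local storage subnet, as before
zfs set sharenfs=rw=@<storage-subnet>,root=@<storage-subnet> hd/mail
# then, from the ESXi shell, register the VM from the mounted datastore
vim-cmd solo/registervm /vmfs/volumes/mail/mailserver/mailserver.vmx
```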

Time Machine

You can also use a ZFS server as a Time Machine destination with Netatalk, which adds the AFP protocol to the server. Compiling Netatalk can be a picky process, so on OmniOS I tend to use the napp-it package, which automates all of this and provides a simple web interface for configuration, making it a similar system to FreeNAS (but not as pretty).