Tuesday
Sep 30, 2014

The Cellular Hub

There has been an upsurge in articles and discussions around the wearable market in recent weeks after the Apple Watch announcement.

Some of the best thinking has come from Ben Thompson over at Stratechery and John Gruber at Daring Fireball, but I wonder if we are all parsing this through the restrictive lens of what we already know and are familiar with. One recurring thought is that the Apple Watch, a device that must currently be tethered to an iPhone, will perhaps become a fully autonomous device with native 3G, GPS, etc. in the next few years.

I think that unlikely for a number of reasons. While Apple has accomplished miracles of miniaturization, the fundamental issue remains power autonomy. I don’t see battery technology making the kind of 10x leap in density that would allow a watch-sized device to run all of those radios for a full day and still leave space for some kind of SIM card.

They could save space by going the CDMA route and making the SIM an intrinsic part of the device, but that requires large changes in the way the bulk of the world’s GSM/LTE technology is sold and deployed. It also doesn’t address the power consumption issue, other than freeing up space inside the device.

I think that the watch is the logical first step in the socialization of wearable technology because we already have context for the device. We’ve been wearing jewelry for thousands of years and timepieces for hundreds; it is a mature concept, bifurcated into a utility market and a luxury market, both of which exceed the needs of most people who just want to know the time.

The reason I wonder about other possibilities is the set of advances brought by iOS 8 that link all of your Apple devices into a small Bluetooth and WiFi mesh network.

Example: I have an iPad with Retina display at my desk attached to the (excellent) Twelve South HoverBar arm, serving as an ancillary display for FaceTime, WebEx, Tweetbot, OmniFocus and so on. Yesterday, the iPad lit up with an incoming phone call while my iPhone was sitting in a pocket, and the iPad became a fully featured speakerphone. This required basically zero configuration on my part other than signing into my iCloud account on both devices.

This got me thinking about the utility of the phone as the cellular conduit. We are used to the concept of “the phone”, along with its seemingly necessary baggage: a size mostly dictated by the screen, and a form dictated by the use cases of alternately looking at it and holding it up to your ear.

If we remove the screen and leave only the battery, the radios and the crudest of UIs (on/off, for example), a myriad of possible forms emerges. Imagine an integrated MiFi-style device that provides connectivity to a variety of devices around you – something that you could wear. It could be designed as a belt buckle, for example, or a necklace, bringing an additional set of options to the surrounding screens. I would no longer need an iPhone: an iPod Touch as the small-screen device where I currently use the iPhone, an iPad for the jobs requiring more screen real estate, devices and screens enabling HomeKit, all of them data and voice enabled by the presence of the cellular hub.

There is a competing thin-client concept that has been around for a while, but it has been oriented towards enterprise use, reducing computers to dumb screens with content projected from a server. Think Citrix, Microsoft RDP, VMware Horizon View. I don’t think this is viable in this space since the latency imposed by pushing Retina-quality display data over a wireless network is huge: fine for a mediated UI with mouse and keyboard, but not for a touch-enabled system that requires immediacy of reaction.

Current cellular devices command a price premium over similar non-cellular devices; witness the iPhone vs the iPod Touch. You could get an iPhone 6 Plus for the extra battery performance, but retain all of the advantages of one-handed use by linking it to an iPod Touch. But why should I pay the premium for an iPhone with the big screen? If it’s going to live in a bag, why not something without a screen? And if I need it all the time, why can’t/shouldn’t I wear it?

By consolidating the responsibility for cellular communications into a single device, the satellite devices become individually cheaper to acquire, and I would likely buy multiples for the various jobs to be done. As a quick example, the current 64 GB iPhone 6 sells for 819 € unlocked in France, while a 64 GB iPod Touch is only 319 €. At this kind of cost disparity, I can imagine buying multiple ancillary screens for various contexts. Apple would take a bath on the margins, but if you’ll pardon the phrase, they could make it up in volume…
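
To make the arithmetic concrete, here is a minimal Python sketch of that comparison. The two device prices are the figures quoted above; the price of the screenless cellular hub is purely my own assumption, since no such product exists.

    # Rough cost comparison: one do-everything iPhone vs a hypothetical
    # cellular hub plus cheaper satellite screens. Prices in euros.
    IPHONE_6_64GB = 819       # unlocked price quoted above
    IPOD_TOUCH_64GB = 319     # price quoted above
    HYPOTHETICAL_HUB = 300    # assumed price for a screenless cellular hub

    def hub_scenario(num_screens: int) -> int:
        """Total cost of a hub plus N iPod-Touch-class satellite screens."""
        return HYPOTHETICAL_HUB + num_screens * IPOD_TOUCH_64GB

    if __name__ == "__main__":
        print(f"iPhone only:        {IPHONE_6_64GB} EUR")
        for n in (1, 2, 3):
            print(f"hub + {n} screen(s): {hub_scenario(n)} EUR")

Even with two or three ancillary screens, the hub scenario stays in the same price range as a single top-end phone, which is the point of the exercise.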

This approach fits nicely with the idea of the Apple Watch as just another one of the screens that I have available to me, enabled by a cellular connected wearable that is always with me as well.

Thursday
Aug 14, 2014

Understanding the impact of scale-out storage

Scale-out has the ability to change everything

In the software-only space, solutions like Datacore and Nexenta are really quite good (I have used and deployed both) and I still recommend them for customers that need some of their unique features, but they share a fundamental limitation: they are based on a traditional scale-up architecture model. The result is that there is still a fair bit of manual housekeeping involved in maintaining, migrating and growing the overall environment. Adding and removing underlying storage remains a relatively manual task, and the front-end head units remain potential choke points. This is becoming more and more of an issue with the arrival of high performance flash, especially when installed directly on the PCIe bus. The hiccup is that a single PCIe flash card can generate enough IO to saturate a 10GbE uplink and a physical processor, which means you need bigger and bigger head units with more and more processing power.
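
As a back-of-the-envelope illustration of that choke point, here is a small Python sketch; the card throughput and usable link bandwidth are assumed ballpark figures, not measurements.

    # Rough saturation check: can one PCIe flash card out-run a 10GbE uplink?
    # Both figures below are assumptions for a typical card and link.
    PCIE_FLASH_GBPS = 2.5 * 8        # ~2.5 GB/s card, expressed in gigabits/s
    TEN_GBE_USABLE_GBPS = 10 * 0.95  # 10GbE minus a little protocol overhead

    uplinks_needed = PCIE_FLASH_GBPS / TEN_GBE_USABLE_GBPS
    print(f"One card can fill roughly {uplinks_needed:.1f} x 10GbE uplinks")

With those assumed numbers, a single card is already worth about two full uplinks, which is why the head unit keeps having to grow.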

So the ideal solution is to match network, processor and storage capacity in individual units that spread the load around instead of funnelling everything through a few central choke points. We’re seeing a number of true scale-out solutions hitting the market right now that have eliminated many of the technical issues that plagued earlier attempts at scale-out storage.

The second issue is that scale-out changes the way you purchase storage over time. The “over time” part is a key factor that keeps getting missed in most analyses of ROI and TCO, since most enterprises evaluating new storage systems are doing so in the context of their current purchasing and implementation methodology: they have an aging system that needs replacing, so they evaluate the solution as a full-on replacement without truly understanding the long-term implications of a modern scale-out system.

So why is this approach different? There are two key factors that come into play:

  • You buy incremental bricks of capacity and performance as you need them
  • Failure and retirement of bricks are perceived identically by the software

To the first point, technological progress makes it clear that if you can put off a purchase, you will get a better price/capacity and price/performance ratio than you have today. Traditionally, many storage systems are purchased with enough headroom for the next 3 years, which means you’re buying tomorrow’s storage at today’s prices.

So this gives us the following purchase model:

This is a simplified model based on the cost/GB of storage, but it applies to all axes involved in storage purchase decisions such as IOPS, rack density, power consumption, storage network connections and so on. Remember also that you might end up with bricks that still cost $x but have 50% more capacity in the same space. A key feature of properly done scale-out storage is the possibility of heterogeneous bricks, where the software handles optimal placement and distribution for you automatically. For “cold” storage, we’re seeing 3 TB drives down under the $100 mark, while 6 TB drives are now available to the general public. If you filled up your rack with 3 TB drives today, you’d need twice the space and consume twice the power compared to putting off the purchase until the 6 TB drives come down in price. For SSDs, Moore’s Law is working just fine as die shrinks increase storage density and performance on a regular cycle.
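
A minimal Python sketch of that purchase model follows. The ~$33/TB starting point comes from the 3 TB-for-$100 figure above; the annual price decline and the capacity growth rate are my own assumptions for illustration.

    # Toy model: buy 3 years of capacity up front vs buy a brick per year
    # while cost/TB declines. Decline rate and yearly growth are assumed.
    START_PRICE_PER_TB = 33.0   # ~$100 for a 3 TB drive, as noted above
    ANNUAL_DECLINE = 0.25       # assumed 25% yearly drop in cost per TB
    TB_NEEDED_PER_YEAR = 100    # assumed growth in required capacity

    def upfront_cost(years: int = 3) -> float:
        """Buy all the capacity for `years` at today's price."""
        return years * TB_NEEDED_PER_YEAR * START_PRICE_PER_TB

    def incremental_cost(years: int = 3) -> float:
        """Buy one year's worth of bricks at a time, at that year's price."""
        return sum(
            TB_NEEDED_PER_YEAR * START_PRICE_PER_TB * (1 - ANNUAL_DECLINE) ** year
            for year in range(years)
        )

    print(f"Up-front purchase:  ${upfront_cost():,.0f}")
    print(f"Incremental bricks: ${incremental_cost():,.0f}")

The exact numbers don’t matter; what matters is that every deferred brick is bought further down the price/capacity curve.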

In some organisations this can be a problem, since they have optimized their IT purchasing processes around big monolithic capital investments, for example by going to RFP for every capital purchase, which means the internal overhead incurred can be counterproductive. These are often the same organisations that are pushing to outsource everything to cloud services so that storage becomes OpEx, yet this type of infrastructure investment lives somewhere between the two and needs to be treated as such. Moving straight to the cloud can be a lot more expensive, even when internal soft costs are factored in. Don’t forget that your cloud provider is using the exact same disks and SSDs as you are and needs to charge for their internal management plus a margin.

And on to the upgrade cycle…

The other critical component of scale-out shared-nothing storage is that failure and retirement are perceived as identical situations from a data availability perspective (although they are different from a management perspective). Properly designed scale-out systems like Coho Data, ScaleIO, VSAN, Nutanix and others guarantee availability of data by balancing and distributing copies of blocks across failure domains. At the simplest level, a policy is applied that each block or object must have at least two copies in two separate failure domains, which for general purposes means a brick or a node. With some solutions you can also be paranoid and specify more than two copies.

But back to the retirement issue. Monolithic storage systems basically have to be replaced at least every 5 years, since otherwise your support costs will skyrocket. Understandably so, since the vendor has to keep warehouses full of obsolete equipment to replace your aging components. And you’ll be faced with all the work of migrating your data onto a new storage system. Granted, things like Storage vMotion make this considerably less painful than it used to be, but it’s still a big task and other issues tend to crop up: do you have space in your datacenter for two huge storage systems during the migration? Enough power? Are the floors built to take the weight? Enough ports on the storage network?

The key here is that a brick failure in a scale-out system is detected and treated as a violation of the redundancy policy, so the remaining bricks redistribute and rebalance copies of the data to ensure that the 2 or 3 copy policy is respected, without any administrative intervention. When a brick hits the end of its maintainable life, it just gets flagged for retirement, unplugged, unracked and recycled, and the overall storage service keeps running. This is a nice two-for-one benefit that comes natively as a function of the architecture.
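
To make the mechanism concrete, here is a toy Python sketch of a two-copy placement policy and what happens when a brick disappears, whether through failure or retirement. It illustrates the general idea only; it is not the actual placement algorithm of any of the products named above.

    import random
    from collections import defaultdict

    REPLICAS = 2  # the "at least two copies in two failure domains" policy

    class Cluster:
        """Toy shared-nothing cluster: each brick is its own failure domain."""
        def __init__(self, bricks):
            self.placement = defaultdict(set)   # block id -> set of brick names
            self.bricks = set(bricks)

        def write(self, block):
            # Place copies on distinct bricks (distinct failure domains).
            for brick in random.sample(sorted(self.bricks), REPLICAS):
                self.placement[block].add(brick)

        def remove_brick(self, brick):
            """Failure and retirement look the same: re-protect affected blocks."""
            self.bricks.discard(brick)
            for block, homes in self.placement.items():
                if brick in homes:
                    homes.discard(brick)
                    candidates = self.bricks - homes
                    homes.add(random.choice(sorted(candidates)))

    cluster = Cluster(["brick-a", "brick-b", "brick-c", "brick-d"])
    for i in range(1000):
        cluster.write(f"block-{i}")
    cluster.remove_brick("brick-b")   # unplug a brick: the policy is restored
    assert all(len(homes) == REPLICAS for homes in cluster.placement.values())

The same re-protection path handles a dead brick and a retired one, which is the two-for-one benefit described above.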

To further simplify things, you are dealing with reasonably sized, server-shaped bricks that fit into standard server racks, not monolithic full-rack assemblies.

Illustrated, this gives us this:

Again, this is a rather simplistic model, but with constantly growing storage density and performance, you are enabling the storage to scale with the business requirements. If there’s an unexpected new demand, a couple more bricks can be injected into the process. If demand is static, then you’re only worried about the bricks coming out of maintenance. It starts looking a lot more like OpEx than CapEx.

This approach also ensures that the bricks you are buying use components that are sized correctly relative to each other. If you are buying faster and larger high performance PCIe SSDs, you want to be sure you are buying them with current processors capable of handling the load, and that you can handle the transition from GbE to 10GbE to 40GbE and beyond.

So back to the software question again. Right now, I think that Coho Data and ScaleIO are two of the best standalone scale-out storage products out there (more on hyperconvergence later), but they are coming at this from different business models. ScaleIO is, strangely, the software-only solution from the hardware giant, while Coho Data is the software-bundled-with-hardware solution from part of the team that built the Xen hypervisor. Andy Warfield, Coho Data’s CTO, has stated in many interviews that the original plan was to sell the software, but that they had a really hard time selling it to enterprise storage teams that want a packaged solution.

I love the elegance of the zero-configuration Coho Data approach, but wish that I wasn’t buying the software all over again when I replace a unit that hits EOL. This could be addressed with some kind of trade-in program.

On the other hand, I also love the tunability and BYOHW aspects of ScaleIO, but find it missing the plug-and-play simplicity and the efficient auto-tiering of Coho Data. But that will come with product maturity.

It’s time to start thinking differently about storage and to reexamine the fundamental questions of how we buy and manage storage.

Thursday
Aug 07, 2014

Understanding the value of software in storage

It’s all about the software

In today’s storage world, the reality is that the actual storage components and the surrounding hardware are all commodity based (with a few exceptions). A storage system is composed of disks, disk cases, communications links, processors, memory and networking.

Fundamentally, the disks are the same ones you can buy from Amazon, NewEgg et al. The only major observable difference is that enterprise storage drives tend to be equipped with SAS or NL-SAS interfaces, which offer a more advanced command set and a more robust architecture permitting dual-path connections as compared to SATA. NL-SAS drives are SATA drives with a smarter controller interface, but the mechanics are identical.

The disk cases | drawers | enclosures (pick your name) are all based on a standard structure with a SAS backplane that drives slot into, and most of them are OEM’d from a very short list of vendors. Historically, these were often connected using Fibre Channel, but pretty much everyone has come to terms with the fact that FC is unsustainably expensive for this, and even the latest top-of-the-line VMAX has gone over to SAS as the connection to the disk enclosures.

Internally, most proprietary interconnects (think RapidIO) have given way to 40GbE and InfiniBand which, while expensive, are commercially available standard components.

On the processing front, with the exception of HP 3PAR’s custom ASICs, nearly everything else on the market uses standard Intel motherboards with standard Intel processors.

So why are storage systems so expensive? It’s all about the software that adds value to this collection of off-the-shelf parts, making them work together in a coherent fashion and giving you features over and above just putting bits on disk and maintaining a certain amount of local redundancy.

How much am I paying for this software?

At the simplest level, go over to DELL or Supermicro and spec out a barebones DAS storage system per your requirements, then add in a couple of servers with the number of 10GbE, FC and SAS ports you need. That’s your raw hardware cost. Then get a quote from your storage provider. Ignore the costs assigned per part or per disk; at the end of the day it’s the negotiated package price that matters. The publicly-quoted prices are fantasies designed to impress the purchasing department with huge rebates. I’ve even seen cases where the exact same part number has different list prices depending on which model of storage controller you’re buying. So the only price that matters is the whole package with rebates.

The difference between the two is the software cost, which you can now compare to a software-only solution like Nexenta or Datacore.
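
As a minimal Python sketch of that exercise; every dollar figure here is a placeholder I made up for illustration, not a quote from any vendor.

    # Back-of-the-envelope: what am I actually paying for the software?
    # All figures are hypothetical placeholders.
    diy_hardware_quote = 85_000      # barebones DAS + servers + HBAs/NICs
    vendor_package_quote = 250_000   # negotiated array price, rebates included
    usable_tb = 100

    implied_software_cost = vendor_package_quote - diy_hardware_quote
    print(f"Implied software premium: ${implied_software_cost:,}")
    print(f"  ...or ${implied_software_cost / usable_tb:,.0f} per usable TB")

Whatever the real numbers turn out to be in your negotiation, the difference is the price you are putting on the software and integration.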

Now imagine that you are putting that money in the trash at the end of the planned life-cycle of the storage investment, generally 3-5 years. You’ll be buying that software all over again with your next storage acquisition.

The key takeaway here is that the value in storage systems has moved from the actual storage hardware to the software. All of the storage components are commodity. IBM, EMC, NetApp et al. do not actually make any of the underlying storage components. The disks are bought from Seagate, Western Digital and Toshiba, SSDs from SanDisk, Intel and Samsung, RAID controllers from LSI, Ethernet from Broadcom and Intel, FC from QLogic and Brocade, motherboards from Intel.

You get integration and the software.

Is there a better way?

The optimal approach would be to buy the commodity hardware and run your own software on it. This is the standard approach for companies like Nexenta and Datacore, which bring all of the value-add features one expects from enterprise storage, like replication, snapshots and so on, albeit through very different internal mechanisms.

Your software is a one-time cost with maintenance over time, but since it’s just software, the maintenance cost doesn’t skyrocket after 5 years. You replace the hardware as it becomes obsolete or as your needs change, inside the cost-effective 5-year maintenance window, leveraging the software’s tools to make the migrations invisible to the servers consuming the storage. Your storage costs stay reasonable since you’re only paying for the most basic of components, without the markup that accompanies the software integrated into the system.
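
Sketching that out over a ten-year horizon in Python; once again every number below is an assumption purely for illustration, and the integrated array’s own maintenance fees are ignored to keep the model simple.

    # Toy 10-year comparison: integrated array repurchased every 5 years
    # vs a one-time software licence on refreshed commodity hardware.
    YEARS = 10

    # Integrated array: hardware + embedded software, bought twice in 10 years.
    array_price = 250_000
    array_total = array_price * (YEARS // 5)

    # Software-defined: licence bought once, commodity hardware refreshed
    # every 5 years, plus annual software maintenance.
    software_licence = 100_000
    software_maintenance = 0.18 * software_licence * YEARS
    commodity_hardware = 60_000 * (YEARS // 5)
    sds_total = software_licence + software_maintenance + commodity_hardware

    print(f"Integrated arrays over {YEARS} years:    ${array_total:,}")
    print(f"Software + commodity over {YEARS} years: ${sds_total:,.0f}")

The point is not the specific totals but the shape of the spending: the software is bought once, while the integrated array makes you buy it again with every refresh.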

DELL Compellent has started thinking this way with their new licensing model: once you reach a certain size, you can replace the controllers for the cost of the hardware and migrate your existing software licences over, which puts it closer to Nexenta and Datacore from a business model standpoint.

But for some reason, a lot of IT shops are leery of buying software to take this approach, for a variety of reasons ranging from sales pressure from incumbent vendors (you should see the discounts when they feel threatened) to IT management’s desire to have “one throat to choke” in case anything ever goes wrong.

The other aspect is that while going the software route gives you the ability to choose exactly what you want, this can also be a burden for IT shops that no longer have the in-house expertise to do basic server and storage design. Freedom of choice also brings the responsibility of making the right choices.

So when evaluating storage solutions, try and figure out exactly what you are paying for and understand how much of your investment is tied to the way that you buy it.

Thursday
Jul 10, 2014

Telus rebuttal

An interesting article from Telus, partially debunking an article by Michael Geist concerning an OECD report on cellular service in Canada. It’s a good mix of information, some of which is entirely pertinent, other bits less so, to which I’m going to try and add a little more context from the point of view of a Canadian who has lived in the US and France and travelled for work in England and western Europe.

We’re not about to argue that Canada has the cheapest wireless rates in the world – that would be no more factual than the oft-repeated mythology that our prices are the highest. The facts, however, prove Canadians pay very competitive rates for some of the best wireless technology in the world – backed by very high investment by TELUS and our competitors.

A nice mix of truth and semi-truth here. Canada clearly does not have the cheapest wireless rates, nor the highest. Competitiveness is in the eye of the analyst, and it’s true that Telus and the other Canadian telcos do invest a lot. It’s also worth noting that current investments are higher than normal due to the ongoing transition from existing networks to next-generation LTE networks, so the investment figures need to be taken with a grain of salt.

One key fact – Canadian pricing is better than the U.S. in 12 of the 15 wireless pricing categories the report looks at. The U.S. has similar geography and economic conditions (including incomes) as Canada, though 10 times the population, and therefore makes for a better comparison than countries with far denser – and therefore less expensive to serve – populations and far lower average incomes.

Overall, I have to agree here that the US is the closest natural comparison economically. However, the vast bulk of western Europe is comparable in terms of economic activity and average income, albeit with generally higher average population densities.

A second fact – when you look at Mr. Geist’s chart, note we are being compared to countries like Slovenia, Slovakia, and Turkey – hardly a legitimate comparison when you consider their markets and economies are vastly different from ours. Despite that, if you focus on the high-usage tiers for smartphone service, which are the most relevant for comparison because that’s where the average Canadian sits, we finish 21st and 22nd out of 34 countries. That’s about average. If you compare us to just the G7 countries, which have more comparable economies, we finish third and fourth out of seven – right in the middle. As mentioned above, we do even better when compared to the U.S. one-to-one.

In general I have to agree with these statements. Slovenia, Slovakia and Turkey are emerging economies working within vastly different economic conditions and regulatory frameworks.

A third key fact – the OECD report’s methodology is limited. It is not a comprehensive report, but rather it takes a random sampling of one or two plans for each country in each category and does not take into account the different flavour of services in countries. Their reports often end up comparing apples to oranges as a result.

Here I have to agree, as the report’s methodology is non-optimal for this kind of comparison.

The wireless data-only plans are a good example of this. At first glance it looks like Canadian prices are high for these plans – which are typically used on a laptop with a thumb-sized wireless modem. However, scratch the surface and you find that Canadian data-only plans offer customers far better speeds than plans in other OECD countries do. The OECD itself notes that in the 10GB basket, the plan representing Finland only delivers up to 400Kbps, Estonia 1 Mbps, and Canada 100 Mbps! They’re more expensive because they’re better plans, on better networks, and deliver vastly superior customer experience.

It’s true that data-only plans are badly grouped in the report. I would note that this is worth revisiting in a couple of years once LTE is the predominant standard. For example, Free.fr in France currently has 40% of their network running on LTE and there is no price differential between 3G and 4G access.

The OECD report also does not factor in the fact that most Canadian providers subsidize handsets for customers whereas that is uncommon or unknown in many OECD countries. Ignoring an upfront cost of $500 or more is not comparing apples to apples. It also does not factor in that typically Europeans pay for two or more wireless accounts because they frequently travel between countries or cannot get the all-in pricing North Americans take for granted – for example, one account for daytime calls, a second one for evening and weekend calls. That’s why you get penetration rates of 160 per cent in Norway – 1.6 wireless devices for every adult and child in the country.

This is actually untrue in western Europe. While there are many operators that offer unsubsidized plans, there is also a healthy mix of subsidized plans. Meanwhile, I have yet to see significantly discounted plans in Canada for people willing to pay up front for their handset.

There is also the question of why many people have multiple devices in Europe. In many cases, employment legislation makes “Bring Your Own Device” at the office non-viable, so employers furnish a telephone for work and people have their own personal devices. At the same time, à la carte offerings and no-contract options make it possible for a higher percentage of people to have cellular plans for their iPads and Android tablets, which bumps up the penetration rates.

The argument regarding all-in pricing is frankly bullshit. It’s worth noting that in most European countries there is no notion of long distance calling; price separation is based on whether calls are destined to land lines or to other cellular phones. As an example, here is a summary of the no-contract Free.fr offering:

  • unlimited calls to cell phones in continental France, the French islands and territories, the USA, Canada, Hawaii
  • unlimited calls to land lines in France and 41 countries (USA, Canada, Belgium, England, …)
  • unlimited SMS to continental France, the French islands and territories
  • unlimited MMS to continental France
  • unlimited use of WiFi hotspots (automatic connection via EAP-SIM)
  • 3 GB of data in continental France (bandwidth reduction over this limit)

All of this for 20€/month. Note that the European standard is moving towards reducing bandwidth rather than surcharging for overage situations.

Which raises a fourth fact. The report highlights, and Mr. Geist ignores, that Canadian carriers invest almost twice as much in technology and infrastructure per customer than the OECD average. The only country that invests more is Australia, and that’s because they are undertaking a massive tax-payer funded infrastructure project. Our investment is all private, at no cost to taxpayers.

This is undeniably true. Part of this addresses the geographic constraints imposed by Canada’s huge size, but it must also be considered in the light of the current infrastructure upgrades to LTE, which are not being pursued with equal vigor in all countries, notably due to the economic issues affecting countries like Italy and Spain.

Consider that world-leading level of private investment in the context of population density, and the challenges that come with serving a vast, sparsely populated landscape. Canada has only 12 subscribers per square km, compared to an average of 37 in the U.S. and 312 in the UK. If you factor out the unpopulated areas of Canada that don’t have wireless networks and only include the geography that does have wireless service, we are still serving the 200th least densely-populated landmass in the world. Despite this, 99 per cent of Canadians have access to cutting-edge wireless technology.

While this is certainly true, when you map actual coverage to population density, Canada is a lot more like Europe than the telcos would have you believe. Cellular coverage is aimed primarily at urban environments where there is sufficient population to justify the investments. Excluding the unpopulated areas gets us closer to a reasonable comparison, but note that the resulting density figures are not actually quoted in this comparison.

That investment back into service for our customers comes out of the profits we earn, and is directly responsible for the quality of the networks we offer. TELUS alone has invested $100 billion in Canada since 2000. On the back of that investment Nokia Siemens, an international telecom technology firm, found TELUS has the best 4G LTE network in the world – best quality, least dropped calls. I’m very proud of that, and the work that made that remarkable achievement possible.

So that’s roughly $7B per year, which is a perfectly reasonable outlay for covering a country like Canada. For reference, the 2012 investments by operators in France came to 10B€, and around 8B€ in 2011, so these numbers look pretty much par for the course.

When you consider our enormous investment, challenging geography, sparse population and outstanding networks Canada really SHOULD be the most expensive country for wireless service in the OECD, but we’re not. That’s a great success story we should be celebrating.

Agreed that the investment is proportional to the challenge, but the pricing is still significantly higher for equivalent service. The closest equivalent I’ve found from Telus for a smartphone with 3 GB of data comes to $95 vs 20€, not including the free international calling. If I factor in the fact that I’m using a cellular-equipped iPad plus an iPhone, I get to $165 vs 40€ for 6 GB in total.
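
For the record, here is the comparison above in one small Python sketch; the plan prices are the ones just quoted, while the CAD/EUR exchange rate is my own rough assumption for the time of writing.

    # Side-by-side of the plan pricing quoted above.
    EUR_TO_CAD = 1.45   # assumed mid-2014 exchange rate

    plans = {
        "smartphone, 3 GB":        {"telus_cad": 95,  "free_eur": 20},
        "smartphone + iPad, 6 GB": {"telus_cad": 165, "free_eur": 40},
    }

    for name, p in plans.items():
        free_in_cad = p["free_eur"] * EUR_TO_CAD
        ratio = p["telus_cad"] / free_in_cad
        print(f"{name}: ${p['telus_cad']} vs ~${free_in_cad:.0f} "
              f"({ratio:.1f}x more expensive)")

Even after currency conversion, the Canadian pricing comes out at roughly three times the French equivalent for the same usage.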

Don’t let the critics with a vested interest in a well-established, but ill-informed, position spin you on this one. Scratch the surface of their arguments and get to the facts.

Scratch further and you get more contextualized facts.

Wednesday
May 07, 2014

Whither the Mac Mini?

I’ve been a fan of the Mac Mini for ages now, and the ability to run VMware ESXi (albeit unsupported) makes it a wonderfully useful machine for hosting multiple virtual machines, including OS X VMs. It is my favorite home lab machine. It consumes hardly any power (11W-85W), is tiny, requires no big external power brick, and has only a discreet white LED.

The current issue is that it hasn’t evolved in ages, with the last update dating back to October 2012, which has been keeping me from buying a new one to upgrade my home lab.

My wish for the next generation Mac Mini is to see it split into two streams for desktop and server use. The Mini is already sold in this manner, with a “server” model that comes with the OS X Server licence. But since OS X Server has become a $20 app purchase, there’s little practical difference between the two types.

The desktop model can continue the current lineage with a Haswell processor bump and perhaps a better graphics card for those who want to use it as a desktop or media station.

On the server front, the following changes would be relatively easy to accomplish while retaining the same form factor:

ECC memory

This is a requirement for VMware certification and support. It would enable sales to companies that want a more robust way to host their OS X Server machines in VMs, which can then be moved around dynamically and more easily integrated into disaster recovery and business continuity plans. OS X Server is a great small business platform, but it lacks in areas like clustering, which is where VMware shines.

Bump the max memory to at least 32 GB

This is the first core limitation for running a lot of virtual machines. I’ve tested with both the Core i5 and the Core i7 models and, at steady state, unless you have some truly processor-intensive applications, you’ll run out of memory long before you saturate the CPU. Currently there’s no point in buying the i7 model for most standard server workloads (mail, Open Directory, DNS, file services, caching, Messages, etc.). With 32 GB, you could push the consolidation ratio up enough to justify the Core i7.
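
A quick back-of-the-envelope in Python of why RAM is the ceiling; the per-VM sizing and hypervisor overhead are my own assumptions, not benchmarks.

    # Rough consolidation estimate for a Mac Mini used as an ESXi host.
    # Per-VM memory and hypervisor overhead are assumed ballpark figures.
    HYPERVISOR_OVERHEAD_GB = 2
    AVG_VM_RAM_GB = 3          # assumed for light OS X / utility server VMs

    for host_ram_gb in (16, 32):
        usable = host_ram_gb - HYPERVISOR_OVERHEAD_GB
        vm_count = usable // AVG_VM_RAM_GB
        print(f"{host_ram_gb} GB host: roughly {vm_count} VMs before swapping")

Going from 16 GB to 32 GB more than doubles the comfortable VM count, which is the point where the extra i7 cores start to earn their keep.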

Dual ethernet interfaces

Currently I work around this by using a Thunderbolt adapter to get a second gigabit ethernet interface so that I can separate storage traffic from regular network traffic, but dual integrated ethernet would simplify things immensely: no extra adapter to buy and clutter things up, and no driver issues. I’d really, really love to see 10GBase-T, but since the Mac Pro arrived without 10GbE, I don’t think that’s a likely scenario.

Dreaming

These are terribly unlikely options, but ones that would be nice to see:

Expansion options

Thunderbolt is an extension of the PCIe bus, and there are solutions out there from Sonnet, OWC and others that let you install regular PCIe cards in a Thunderbolt-connected box. But they are all relatively big, expensive and clunky. Currently Apple is the only vendor shipping Thunderbolt ports at a scale large enough to benefit from economies of scale, so it would be nice if they took advantage of that and proposed a stackable Mini-formatted box with a PCIe slot, even if it were limited to half-length cards.

Management interface

Another one that is exceedingly unlikely, but having an iLO/DRAC-style remote management interface would go a long way towards making this a truly serious server that lives in a server room and can be managed remotely. But after adding the dual ethernet connections, there’s not a lot of room left on the back if we keep the current form factor.

Here’s hoping there will be some news at WWDC…