Entries in apple (21)


Edition issues

I’ve been standing on the sidelines of the discussions surrounding the Apple Watch Edition and its pricing, and learning a lot. The points of view vary widely: from the tech world, where value is derived purely from functionality and which shows total incomprehension of markets that work differently; to the luxury watch world, where value is derived from craftsmanship and the cost in person-hours; to the fashion world and its concepts like Veblen goods.

I would just like to bring up a thought that occurred to me this morning: how much of the discussion centers on the cost of goods, and what an incredibly limited method of analysis that is for predicting the eventual sale price of an object. This was underlined for me while reading a wonderful analysis of the videos Apple is showing about the manufacturing techniques used to produce the watches.

What this signaled to me is that there are complexities, costs and investments in the manufacturing process that go far beyond the raw materials costs and that need to be accounted for. Granted, for some of these systems Apple works at such an incredible scale that these inputs can become marginal on a per-unit basis, but the Edition presents a special case that will clearly never be produced at the scale of any other Apple product.

And there’s one more thing…

The elephant in the room is simply this: it’s made of gold. Gold is incredibly valuable on a price-per-weight basis and is also universally exchangeable. Many of the components in a modern smartphone, like individual chips, are probably worth more on a cost-per-weight basis, but they are only valuable to the greater market once assembled into a final product.

Which means that it’s very likely that there is an entirely separate production chain and set of facilities set up specifically for managing these new risks, which drives the per-unit investment even higher. It also means more in-depth security and background checks on the personnel working in these facilities.

Gold as a major component of a product represents a hugely complicated security risk at every point in the production chain. This is an incremental cost that starts at the source, where gold is purchased and transported to the factory to be melted, converted into flattened ingots and then into blanks. From there the blanks are taken to the facility where the machining is done (I find it doubtful that these processes happen on the same site). Note that any machining process produces swarf, only in this case the swarf is valued at $800–$1,000/ounce rather than at the commodity price of aluminum. And even if I recovered a couple of pounds of aluminum by putting sweepings in my pocket, the available marketplace for reselling it would remain limited.
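
To put rough numbers on that comparison, using the swarf value above and assuming roughly 14.6 troy ounces to the avoirdupois pound and a scrap aluminum price of around $0.80/lb (the conversion and the aluminum price are my assumptions, for illustration only):

```shell
# Rough value of one pound of machining swarf, gold vs. aluminum.
awk 'BEGIN {
  gold_oz   = 900           # mid-range of the $800-$1000/oz swarf value above
  oz_per_lb = 14.6          # approx. troy ounces per avoirdupois pound (assumed)
  alu_lb    = 0.80          # assumed scrap aluminum price, $/lb
  printf "1 lb gold swarf ~ $%.0f; 1 lb aluminum scrap ~ $%.2f\n",
         gold_oz * oz_per_lb, alu_lb
}'
```

Roughly four orders of magnitude of difference per pound of sweepings is what turns an ordinary machine shop into a security problem.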

Then we have additional security through all of the subsequent stages of stocking and transport, and more again at the store level. This is a non-negligible cost factor that pretty much all of the discussions are ignoring. The watch people ignore it because it’s second nature to them and therefore obvious; the tech press ignores it because it’s so far outside the way the world works for them.

Remember, like all products, the cost is greater than the simple sum of the parts.


The Cellular Hub

There has been an upsurge in articles and discussions around the wearable market in recent weeks after the Apple Watch announcement.

Some of the best thinking has come from Ben Thompson over at Stratechery and John Gruber at Daring Fireball, but I wonder if we are all parsing this through the restrictive lens of what we know and are familiar with. One recurring thought is that the Apple Watch, a device that must currently be tethered to an iPhone, will perhaps become a fully autonomous device with native 3G, GPS etc. in the next few years.

I think that unlikely for a number of reasons. While Apple has accomplished miracles of miniaturization, the fundamental issue remains power autonomy. I don’t see battery technology making any leaps to 10x the current density that would allow a watch-sized device to run all of the radios for a full day and have space for some kind of SIM card.

They could save space by going the CDMA route and making the SIM an intrinsic part of the device, but this would require large changes in the way the bulk of the world’s GSM/LTE technology is sold and deployed. And it still doesn’t address the power consumption issue, other than freeing up space inside the device.

I think that the watch is the logical first step in the socialization of wearable technology because we already have context for the device. We’ve been wearing jewelry for thousands of years and timepieces for hundreds; it is a mature concept bifurcated into a utility market and a luxury market, where both exceed the utility needs of most people that just want to display the time.

The reason I wonder about other possibilities is the set of advances brought by iOS 8 that link all of your Apple devices into a small Bluetooth and WiFi mesh network.

Example: I have an iPad Retina at my desk attached to the (excellent) Twelve South HoverBar arm, serving as an ancillary display for Facetime, WebEx, Tweetbot, OmniFocus and so on. Yesterday, the iPad lit up with an incoming phone call while my iPhone was sitting in a pocket, thus the iPad became a fully featured speakerphone. This was done with basically zero configuration on my part other than signing into my iCloud account on both devices.

This got me thinking about the utility of the phone as the cellular conduit. We are used to the concept of “the phone”, including its historically necessary baggage: a size mostly dictated by the screen, and a form dictated by the alternating use cases of looking at it and holding it up to your ear.

If we remove the screen and leave only the battery, the radios and the crudest of UIs (on/off, for example), a myriad of possible forms emerge. Imagine an integrated MiFi-style device that provides connectivity to a variety of devices around you – something that you could wear. This kind of device could be designed as a belt buckle, for example, or a necklace, bringing an additional set of options to the surrounding screens. I would no longer need an iPhone: an iPod Touch would cover the small-screen jobs where I currently use the iPhone, an iPad would serve the jobs requiring more screen real estate, and devices and screens enabling HomeKit would all become data- and voice-enabled by the presence of the cellular hub.

There is a competing thin-client concept that has been around for a while, but has been oriented towards enterprise devices, reducing computers to screens with no intelligence with content projected from a server. Think Citrix, Microsoft RDP, VMware Horizon View. I don’t think this is viable in this space since the latency imposed by passing Retina-quality display data over a wireless network is huge - fine for a mediated UI with mouse and keyboard, but not for a touch-enabled system that requires immediacy of reaction.

Current cellular devices claim a price premium over similar non-cellular devices; witness the iPhone vs the iPod Touch. You could get an iPhone 6+ for the extra battery performance, yet retain all of the advantages of one-handed manipulation by linking it to an iPod Touch. But why should I pay the premium for an iPhone with the big screen? If it’s going to live in a bag, why not something without a screen? And if I need it all the time, why can’t/shouldn’t I wear it?

By consolidating the responsibility for cellular communications into a single device, the satellite devices become individually cheaper to acquire, and I would likely buy multiples for the various jobs to be done. As a quick example, the current 64 GB iPhone 6 sells for 819 € unlocked in France. A 64 GB iPod Touch is only 319 €. At this kind of cost disparity, I can imagine buying multiple ancillary screens for various contexts. Apple would take a bath on the margins, but if you’ll pardon the phrase, they could make it up in volume…

This approach fits nicely with the idea of the Apple Watch as just another one of the screens that I have available to me, enabled by a cellular connected wearable that is always with me as well.


Restoring Open Directory from Time Machine on Mountain Lion

I just ran across an ugly situation where my Open Directory account went bad and refused to authenticate to any services.

I was seeing these repeated errors in the System log :

Jun 20 18:40:51 www.infrageeks.com PasswordService[168]: -[AuthDBFile getPasswordRec:putItHere:unObfuscate:]: no entries found for d24bd7b0-d8a7-11e1-ad93-000c29b10837
Jun 20 18:40:51 www.infrageeks.com log[3195]: auth: Error: od(erik, Credential operation failed because an invalid parameter was provided.
Jun 20 18:40:51 www.infrageeks.com log[3195]: auth: Error: od(erik, authentication failed for user=erik, method=CRAM-MD5

And the Password Service log was full of:

Jun 20 2013 16:25:24 74348us USER: {0xd24bd7b0d8a711e1ad93000c29b10837} bad ID

Which were all of my various devices trying to catch up on mail.

So the obvious thing to do was restore Open Directory. But I knew that I had made a number of changes since the last archive operation (yes, bad me), so I needed another way to get things back up and running quickly.

I do back up the server using Time Machine, SuperDuper and ZFS snapshots, so I could easily do a full rollback to a previous point in time, but I would also lose whatever mail had arrived in the meantime. And the problem is so specific that I should be able to fix it by restoring just the Open Directory data.

So here’s how to restore your Open Directory from a Time Machine backup. Some steps can be accomplished in different ways, but this is probably the easiest overall.

  • On the server, go to the Time Machine menu item and select Enter Time Machine. This will mount your Time Machine disk image automatically.
  • On another machine open up an ssh session as an administrator (or you can mount the Time Machine backup image manually and do this locally)
  • sudo bash to get a root shell (the Open Directory files are not accessible to a regular admin account)
  • Stop the Open Directory Service with “serveradmin stop dirserv”
  • cd to /Volumes/Time Machine Backups/Backups.backupdb/servername
  • Here you will find a list of directories, one per Time Machine backup session. Find one dated just before OD started going south, cd into it, and descend to:
  • /Volumes/Time Machine Backups/Backups.backupdb/servername/date/servername/private/var/db
  • Then sync the data from the backup onto the source disk with :
  • rsync -av openldap/ /private/var/db/openldap/
  • Start the Open Directory Service with “serveradmin start dirserv”

You should be back in business.


Mac Pro 2013 Storage

There’s been a lot of talk about the new Mac Pro just announced at WWDC 2013 and I’m really liking what I see even if I have no real use for anything with that kind of horsepower.

But as usual, when Apple giveth, Apple taketh away. One big thing that’s currently missing from the newest iteration of the Mac Pro is internal storage expansion. Much noise has been made about the simplest types of solutions involving direct Thunderbolt connections to external drives (individually or multiple drive cases) and the resulting problems concerning cable mess, noise issues and the like.

I’m curious to see whether Apple will launch the Mac Pro with a suite of associated Thunderbolt peripherals, since there is currently a dearth of products in this space and a super-quiet complementary multi-disk storage system seems like an obvious product. In the meantime, we still have to cope with the fact that Thunderbolt over copper is limited to about 3 m, which can be problematic if you want (potentially noisy) expandable storage that isn’t right beside you.

But even before the machine is released, we can imagine some useful and powerful solutions to these issues. In the enterprise world we do an awful lot with high-end NAS and SAN boxes, and there are ways to profit from these technologies on a reasonable budget. Well, reasonable to someone ready to drop a few grand on a Mac Pro or two…

In any case, those of us with Mac Minis have already gone through this process of outgrowing storage that is handled by individual drives, and are often in spots that are inconvenient for hooking up external storage like home media servers.

So how to get there from here? The idea is to build an external storage box, using connectivity options that permit you to place it away from the office space where spinning disks and fans disturb the ambiance, without penalizing performance. I’ve been using this approach for quite some time now, but contenting myself with standard Gigabit Ethernet since my needs are limited to video and music streaming, plus some basic virtual machines for testing.

The big news that has surprised me is the appearance (finally) of 10GBase-T copper cards and switches. Yes, that’s 10GbE: more than fast enough to handle most anything that a set of SATA drives can spit out, like what we see in the last generation of Mac Pros.
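
As a quick sanity check on that claim (the per-drive figure below is my assumption; real drives vary):

```shell
# Back-of-the-envelope: 10 Gb/s expressed in MB/s, versus an assumed
# ~150 MB/s sequential rate per spinning SATA drive. This ignores
# protocol overhead, so the real ceiling is somewhat lower.
awk 'BEGIN {
  link = 10 * 1000 / 8      # 10 Gb/s -> MB/s
  disk = 150                # assumed sequential MB/s per SATA drive
  printf "%.0f MB/s link, %.1f drives to saturate\n", link, link / disk
}'
```

So it would take roughly eight spinning drives running flat out in parallel to fill the pipe, which is why 10GbE comfortably covers a small SATA array.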

Sticking with standard Ethernet CAT6 we can maintain 10GBase-T over a 55 meter cable. So we can easily put our storage a fair distance away from the office.

In order to use a standard PCIe expansion card we need a means of plugging it in. For this we have options like the Sonnet Echo Express SE which is a box with an 8x PCIe slot that you connect via Thunderbolt to the Mac Pro (or any Thunderbolt equipped machine for that matter).

There are a number of different 10GBase-T cards out there and one thing that remains to be determined is the driver availability for OS X. Sonnet proposes the Myricom Myri 10-G, but they don’t currently offer a 10GBase-T version. They are available with SFP+ Fiber (expensive) and CX4 (short cables). I did find a few cards available on Amazon like the Intel X540T1 at $353 or the HP G2 Dual Port card at $300 so there are options out there and I’m hoping that someone with deeper pockets than me will test the waters here.

I have a soft spot for using ZFS as my preferred storage technology for a number of reasons, including reliability and flexibility, but any number of server solutions are possible as long as they can publish a protocol OS X can talk to. With a build your own approach, you can find all sorts of boxes optimized for small, medium and massive storage options.

If your needs are relatively small, you could go the route of something like the HP N40L Microserver. My current setup with these machines is capable of saturating a standard GbE link (sequential IO) with four low-RPM SATA drives, so there’s some headroom left when going to 10GbE with speedier disks or even SSDs.

Accessing the storage server

I prefer using NFS for a NAS-protocol approach (though I’m waiting with great interest to see what 10.9’s SMB2 implementation will bring), but if you prefer working with storage that the OS sees as disks, you can use iSCSI and stick with Ethernet and TCP/IP as the transport. Fortunately, the GlobalSAN iSCSI Initiator will do the trick.

If you are dedicating the storage to a single machine, you can simply connect everything directly; this kind of setup can also be shared, but then you’ll need a switch. Currently the best deal that I’ve seen out there for a reasonably priced 10GbE switch is the Netgear 8-port 10GBase-T switch (~$900).

It’s true that all of this is considerably more complicated than simply popping off the side of the machine and connecting a new disk, or buying a little Thunderbolt external array, but if you need serious performance and serious capacity, even the old Mac Pro would reach its limits pretty quickly. Moving to a dedicated storage system permits better performance, sharing across multiple machines and many more options for growing the system over time.

I suspect that the majority of folks doing massive video work are already using some kind of SAN, whether Fibre Channel or iSCSI, so the impact on them is mostly buying the Sonnet expansion box. It’s all the people in the middle, currently making do with 3–4 disks, who have to start asking a lot of questions about how to plan for storage management.


BB10 Licensing ?

A few thoughts on Thorsten Heins’ recent comments about licensing BB10.

After reading a few of the articles about the subject I think that we can draw a number of parallels between RIM’s position and Apple’s position pre-iPod. Even the number of their next-generation OS is the same.

Apple was starting to see its market share decline with the arrival of Windows machines en masse. Very similar to RIM’s situation with the arrival of iOS and Android. At the moment, I think that we can discount the effect of Windows Phone on RIM’s situation.

In order to stave off the marketshare bleeding, Apple decided to license their operating system. This ended up being an absolutely horrible idea for them. The basic problem was that the clone makers were making cheap machines. Most importantly, cheaper than Apple. This is exactly what RIM seems to want other hardware manufacturers to do.

The problem is that this has the unintended side effect of killing off your own hardware business, since you can no longer compete on price. At the same time, the revenue from licensing is usually considerably less than what you earn from hardware bundled with your operating system.

Unless RIM is willing to pivot and become purely a software and services vendor, this can only end badly.

To further compound the problem, RIM is entering a software market with two entrenched players: Android and Microsoft. Android is ostensibly free, although many hardware manufacturers pay Microsoft patent licensing fees to stay out of court. Next to these two, we have Apple competing in a manner similar to RIM’s traditional handset business: their own hardware bundled with their own OS.

Quite frankly, I don’t know who would be interested in licensing the Blackberry OS. Perhaps in some of the developing world this would be a viable option, but that’s currently the last place where RIM is making money, so they would be exchanging their handset revenue for licensing revenue, probably at a significant overall loss.

Two logical approaches that don’t seem to be on the table are:

  • licensing out the communications stack for integration with other operating systems, similar to the way Microsoft licensed ActiveSync for Exchange integration.
  • buying Good Technology and integrating the Blackberry services with their offering.

Licensing the useful, generic part of the stack would permit RIM to continue selling BlackBerry Enterprise Server (BES) licenses, and operator hosted BB services without the overhead of the full blown OS or the headaches of managing hardware reference designs for licensees.

Buying Good is a logical step, as they are currently offering what the enterprise market perceives as the valuable part of the BB service to all comers, and are completely OS agnostic. This would permit RIM to expand the Good service to the existing BB install base, and culturally it would appear to be a good fit since the technical architectures of Good and RIM are practically identical.

Both of these options permit RIM to profit from the growth of their hardware competitors, but will require them to wrap their heads around the fact that they have a feature (secure messaging), not a platform.