Entries in iOS (5)

Tuesday
Sep 30, 2014

The Cellular Hub

There has been an upsurge in articles and discussions around the wearable market in recent weeks after the Apple Watch announcement.

Some of the best thinking has come from Ben Thompson over at Stratechery and John Gruber at Daring Fireball, but I wonder if we are all parsing this through the restrictive lens of what we already know and are familiar with. One recurring thought is that the Apple Watch, a device that must currently be tethered to an iPhone, will perhaps become a fully autonomous device, with native 3G, GPS and so on, in the next few years.

I think that unlikely for a number of reasons. While Apple has accomplished miracles of miniaturization, the fundamental issue remains power autonomy. I don’t see battery technology making the kind of leap to 10x current energy density that would allow a watch-sized device to run all of those radios for a full day and still have space for some kind of SIM card.

They could save space by going the CDMA route and making the SIM an intrinsic part of the device, but this requires large changes in the way the bulk of the world’s GSM/LTE technology is sold and deployed. And it still doesn’t address power consumption, other than freeing up space inside the device.

I think that the watch is the logical first step in the socialization of wearable technology because we already have context for the device. We’ve been wearing jewelry for thousands of years and timepieces for hundreds; it is a mature concept, bifurcated into a utility market and a luxury market, where both exceed the utility needs of most people who just want to display the time.

The reason I wonder about other possibilities is the set of advances brought by iOS 8 that link all of your Apple devices into a small Bluetooth and WiFi mesh network.

Example: I have an iPad Retina at my desk attached to the (excellent) Twelve South HoverBar arm, serving as an ancillary display for FaceTime, WebEx, Tweetbot, OmniFocus and so on. Yesterday, the iPad lit up with an incoming phone call while my iPhone was sitting in a pocket, and the iPad became a fully featured speakerphone. This took basically zero configuration on my part beyond signing into my iCloud account on both devices.

This got me thinking about the utility of the phone as the cellular conduit. We are used to the concept of “the phone”, including its historically necessary baggage: a size mostly dictated by the screen, and a form dictated by the use cases of alternately looking at it and holding it up to your ear.

If we remove the screen and leave only the battery, the radios and the crudest UI (an on/off switch, say), a myriad of possible forms emerges. Imagine an integrated MiFi-style device that provides connectivity to a variety of devices around you – something that you could wear, designed as a belt buckle, for example, or a necklace – bringing an additional set of options to the surrounding screens. I would no longer need an iPhone: an iPod Touch would cover the small-screen jobs where I currently use the iPhone, an iPad would serve the jobs requiring more screen real estate, and HomeKit-enabled devices and screens would fill in around them, all of them data and voice enabled by the presence of the cellular hub.

There is a competing thin-client concept that has been around for a while, but it has been oriented towards enterprise devices, reducing computers to screens with no local intelligence and content projected from a server. Think Citrix, Microsoft RDP, VMware Horizon View. I don’t think this is viable in this space, since the latency imposed by passing Retina-quality display data over a wireless network is huge: fine for a mediated UI with mouse and keyboard, but not for a touch-enabled system that requires immediacy of reaction.

Current cellular devices claim a price premium over similar non-cellular devices; witness the iPhone vs the iPod Touch. You could get an iPhone 6 Plus for the extra battery performance, yet retain all the advantages of one-handed manipulation by linking it to an iPod Touch. But why should I pay the premium for an iPhone with the big screen? If it’s going to live in a bag, why not something without a screen? And if I need it all the time, why can’t/shouldn’t I wear it?

By consolidating the responsibility for cellular communications into a single device, the satellite devices become individually cheaper to acquire, and I would likely buy multiples for the various jobs to be done. As a quick example, the current 64 GB iPhone 6 sells for 819 € unlocked in France, while a 64 GB iPod Touch is only 319 € – a 500 € difference, or roughly two and a half Touches for the price of one iPhone. At this kind of cost disparity, I can imagine buying multiple ancillary screens for various contexts. Apple would take a bath on the margins but, if you’ll pardon the phrase, they could make it up in volume…

This approach fits nicely with the idea of the Apple Watch as just another one of the screens that I have available to me, enabled by a cellular connected wearable that is always with me as well.

Thursday
Aug 2, 2012

Siri tricks and iPad keyboards

I don't have much to add to the immense volume of articles concerning Siri, but I did discover one very useful feature that is not immediately obvious.

The dictation feature activated from the microphone key on the keyboard is dynamically linked to the type of keyboard selected. Just as autocorrect switches dictionaries between languages, Siri will listen in French if the current keyboard is AZERTY, and in English when using QWERTY.
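
For the developers in the audience, this keyboard-language linkage is visible through UIKit's public UITextInputMode API. Here's a quick, illustrative Swift sketch of how an app could list the languages of the keyboards a user has enabled; the language of the active one is what dictation listens in:

```swift
import UIKit

// List the user's enabled keyboards and their languages.
// Dictation transcribes in the language of whichever keyboard is active.
for mode in UITextInputMode.activeInputModes {
    print(mode.primaryLanguage ?? "unknown")  // e.g. "fr-FR", "en-US"
}
```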

Now, the quality of the interpretation is heavily dependent on your accent: the closer it approaches a native accent, the better it works. I haven't yet had a chance to test it with people who have highly regional French accents, and my imitations are pretty bad, so that wouldn't be a fair test. But it's quite funny to speak English to a French keyboard, as the results are complete and utter gibberish.

Currently, performance for interpreting French is highly variable: sometimes up to 20 seconds, sometimes under 2. English interpretation, on the other hand, has been frighteningly quick for me. I've hardly finished talking before the results appear, even when I've been speaking for 20-30 seconds.

I suspect that this first phase of deployment is being used for server and resource sizing on the back end, so that Apple can determine the average load generated per user, per language, and by time of day. Apple will then be able to ramp up the resources needed to meet demand if they open up Siri to the iPhone 4S or iPad one day.

iPad split keyboard

OK, is the split keyboard any easier to use? Personally, it's not a good fit, since I tend to use the iPad to take notes in meetings, where I've got it on the table in front of me or on my lap.

In these cases, the split keyboard means that I have to pick the iPad up with both hands in order to type with my thumbs, which is less comfortable for me. Plus, I find that my eyes are constantly travelling back and forth, tracking the letters on the inner borders of the two halves, whereas on the regular keyboard I'm in much more of a touch-typing mode: even if my hands block part of the keyboard, muscle memory combined with autocorrect is up to the task.

But like anything to do with data entry and keyboards, this is a very personal thing, and I know some BlackBerry experts who are thumb ninjas, so it may be easier for them than for me.

On the useful side, the split keyboard is slightly translucent and takes up significantly less screen real estate. Of course, less space means smaller touch targets, so this trade-off needs to be taken into account.

Wednesday
Jun 6, 2012

Text to speech

Apple has included text-to-speech in its OS for a long time now. I’ve played with it from time to time over the years, but never really found a use for it in any of my workflows.

But today I think I may have found a perfect use case: reading draft blog posts back to me. I use Byword for writing blog posts, with iCloud syncing to ensure I have everything everywhere. Now I jump into preview mode to hide all of the markup and have the Mac read the post back to me.
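
On the Mac this is scriptable too, via Cocoa’s public NSSpeechSynthesizer API. Here’s a minimal Swift sketch that reads a plain-text draft aloud – draft.txt is just a placeholder for wherever the draft lives:

```swift
import AppKit

// Read the draft from disk; "draft.txt" is a placeholder path.
guard let text = try? String(contentsOfFile: "draft.txt", encoding: .utf8) else {
    fatalError("could not read the draft")
}

// Speak it with the system default voice.
let synthesizer = NSSpeechSynthesizer()
if !synthesizer.startSpeaking(text) {
    fatalError("speech synthesis failed to start")
}

// startSpeaking(_:) returns immediately, so keep the process
// alive until the synthesizer has finished talking.
while synthesizer.isSpeaking {
    RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.1))
}
```

The same result is available from the Terminal with the built-in say command, e.g. say -f draft.txt.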

An equivalent API must exist on iOS as part of the accessibility support for the visually impaired, but I think that it’s private. I’m hoping that the next generation of Siri will open this up a bit.

Monday
Mar 19, 2012

iOS speech-to-text

Up until now, my experience with speech-to-text applications has been relatively limited and not terribly good. But with the latest generation included in iOS 5.1, I am finding that the iPhone and iPad are becoming truly effective tools for this kind of input.

Now I can take a coffee break and dictate a paragraph or two as ideas pop into my head, which is considerably faster than entering them via the keyboard on the iPhone. I find it necessary to do a fair bit of correction once the text has been entered, but overall the quality is pretty impressive.

I find that the biggest difficulty in using speech-to-text is formulating what I want to say in a structured manner in my head, something I don't have an issue with when using the keyboard. There's still something about the keyboard that allows me to order my thoughts differently and proactively edit what I want to say while I'm typing, whereas with speech input I find my thoughts jumping ahead to the next subject before I've completely finished a phrase. But like any change in user interface, there's a learning curve to take into account. This is also an area where the fact that I went with the 3G version of the iPad makes a huge difference in the functionality of the device, since dictation requires a round trip to Apple's servers and thus a live connection.

One amazing feature of the iOS speech-to-text functionality is the ability to switch keyboards and do multilingual voice input. Je suis étonné par la qualité de ce système. Même avec mon accent anglophone, la reconnaissance en français marche très très bien. (I am astonished by the quality of this system. Even with my anglophone accent, French recognition works very, very well.)

Now that the quality is pretty much there, the ability to start an article in Byword on the iPhone and then pick it up back at my desk, with the iPad connected to the Bluetooth keyboard, is very, very slick. Or, alternatively, to continue in Byword on the MacBook Air thanks to the iCloud integration.

The complement to much of this is the iCloud or Dropbox integration, which means I'm that much closer to the continuous client, where I don't need to think much about files and the like because the data is simply available everywhere I need it, when I need it. Currently, my only complaint with iCloud is that it only refreshes when you open the apps. That was a major blocking point on the old iPad: if I hadn't opened the app to force a synchronization before leaving the house in the morning, the data was unavailable until I found an internet connection. With the built-in 3G/4G service on the new one, the experience is so much better.

Wednesday
Aug 10, 2011

Keyboards and iOS

I’ve long been a fan of the software keyboard implementation in iOS, especially from the point of view of someone who frequently switches between languages. I do, however, sometimes prefer to work with a hardware keyboard since, despite the general accuracy and flexibility of the on-screen keyboard, I can go faster on hardware. Plus, I get to use the entire iPad screen for content.

However, this brings me back into the ugly world of hardware keyboards and their language-based dependencies. If I’m using my French (AZERTY) Bluetooth keyboard, I need to switch the software keyboard to French so that the layouts match. If I select the English (QWERTY) layout in the software keyboard, that’s also what I get on the physical one. Bleah.

The problem comes when I want to type in English on the Bluetooth keyboard, since the autocorrect dictionary is based on the selected language, and as a result it keeps trying to correct my English with French words. I can try to pay attention to the autocorrect pop-ups and cancel them as they appear, but that pollutes the French autocorrect dictionary with a pile of English words. Or I can disable autocorrect globally, but it really is nice to have most of the time. And since the option is buried a few layers deep in Settings, it’s not something I want to be toggling as often as I change language contexts.

I realize that I’m a bit of an edge case here, but it would be nice to be able to disassociate the keyboard layout from the autocorrect language.

Or I need to carry around two keyboards…