Wednesday
May 07 2008

Why Apple's iPhone is like a 1981 IBM PC - page 2 - at ZDNet.co.uk

Why Apple's iPhone is like a 1981 IBM PC - page 2 - at ZDNet.co.uk: "Perhaps the best arguments against Apple allowing background tasks are that they take up too much airtime, draining the battery, and that there's no way for them to communicate to the user when they need attention. If either of these two things were a given for background tasks, then Apple would have a point. But they're not, and it doesn't."

(Via ZDNet.)

Not a bad article on Apple's decision to limit iPhone developers to userland applications without background tasking. I agree that at the OS layer there's no terribly good reason not to allow it, as the kernel of OS X is more than up to the task.

However, I don't think he's invested much time looking at the impact that radio communications have on battery life in a pocket-sized device.

The article mentions the battery only once, in passing. First off, anything you're likely to do in the background will involve looking up information somewhere over the network. For the moment I can't think of any background tasks limited to local data that have much value; if there's a need to alert the user, you can always interact with the calendar and use its resources for meeting alerts and the like. So by default you're going to be using the radio (Bluetooth, Wi-Fi, EDGE or eventually 3G) a lot.

If you think that doesn't have an impact on battery life, take an iPhone and set the Mail application to auto-check every 15 minutes. For fun, turn off Wi-Fi and use the cell network exclusively. Then set it to manual and see how much time you get out of your battery. I notice that on days when I make very heavy use of EDGE, I can sometimes drain the battery before getting home at the end of the day. Constant network activity would practically guarantee that I wouldn't make it through the day.
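To put rough numbers on it (and these are strictly my own back-of-the-envelope assumptions, not Apple's specs), the arithmetic looks something like this:

```python
# Rough battery arithmetic with illustrative numbers only - these are
# my assumptions, not official Apple specifications.
BATTERY_WH = 1.4 * 3.7      # assume a ~1400 mAh battery at 3.7 V (~5.2 Wh)
IDLE_WATTS = 0.15           # assumed draw with the radio mostly asleep
RADIO_WATTS = 1.0           # assumed average draw while the radio is active

# Estimated runtime as background tasks keep the radio busy for a
# growing fraction of the time:
for duty_cycle in (0.0, 0.1, 0.25, 1.0):
    avg_watts = IDLE_WATTS + duty_cycle * RADIO_WATTS
    hours = BATTERY_WH / avg_watts
    print(f"radio busy {duty_cycle:>4.0%}: ~{hours:4.1f} h of runtime")
```

Even with generous assumptions, keeping the radio busy a quarter of the time cuts the runtime by more than half, and a constant flow of network traffic drops it to a few hours.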

Apple's doing two things here - first off, it's minimizing the risk. Not to the platform as a technical issue, but to its reputation. If people start installing apps left and right and then discover that their iPhone can't get through the day, there will be a serious negative reaction.

Secondly, Apple's approach to new markets is to deliver what it promises while still leaving some room to expand in order to keep up the buzz. We complained for a year that there were no 3rd party apps, and it kept everyone talking about the iPhone. Now we'll get applications to play with and talk about, and we'll complain about the lack of background processing.

It's all about managing expectations, and we have very different expectations of a portable computer and of something that looks like a phone. Imagine that your phone only got 3 hours of use out of a charge. We accept this from a computer, but I expect that, at worst, I should be able to go the entire day with an iPhone without worrying about running out of battery. While the iPhone is technically a computer in terms of its OS architecture, it's a very special-purpose computer with a very specific usage pattern and form factor that has to be respected.

I fully expect that Apple will release a background task API specific to the iPhone sometime in the next 18 months, one that will abstract much of the management of these activities and will be able to coalesce queued demands from multiple applications in order to send them together rather than as a constant flow. Until battery technology gets a lot better, this is always going to be an issue.
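To illustrate the kind of coalescing I have in mind - and to be clear, this is a hypothetical sketch with names entirely of my own invention, not any real or rumoured Apple API:

```python
import time
from collections import defaultdict

class BackgroundFetchScheduler:
    """Hypothetical sketch of a coalescing scheduler: applications
    register their network demands, and the scheduler wakes the radio
    once per window to service all of them together, instead of one
    radio wake-up per application."""

    def __init__(self, window_seconds=900):   # e.g. one batch every 15 min
        self.window = window_seconds
        self.queue = defaultdict(list)        # app_id -> pending requests

    def enqueue(self, app_id, request):
        # Apps never touch the radio directly; they just queue demands.
        self.queue[app_id].append(request)

    def run_once(self):
        # One radio power-up services every queued demand, then the
        # radio can go back to sleep for the rest of the window.
        batch = dict(self.queue)
        self.queue.clear()
        for app_id, requests in batch.items():
            for req in requests:
                print(f"fetching for {app_id}: {req}")

    def run_forever(self):
        while True:
            time.sleep(self.window)
            self.run_once()
```

The point is that applications queue their demands and never touch the radio themselves; one wake-up per window services everybody, instead of every app spinning the radio up on its own schedule.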

Tuesday
Apr 29 2008

iPhone chat to use Jabber?

Found in multiple articles

Well duh.

If you look at the specs for OS X Server and your current iChat client, you'll discover that Jabber is built into both, and when you use OS X Server to manage your accounts you get a secure end-to-end enterprise chat solution. See iChat Server at Apple.

For more details on how to extend your iChat Server out to external services such as Google Talk, see the OS X Server Wiki.
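As a side note for anyone wiring up federation: Jabber server-to-server connections are found via DNS SRV records, so the first thing to check when a remote domain won't federate is whether those records resolve. A quick sketch using the third-party dnspython package (my choice of tool here, nothing to do with what Apple ships):

```python
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def xmpp_server_records(domain):
    """Look up the SRV records a remote Jabber server uses to reach
    `domain` for server-to-server (federation) traffic."""
    answers = dns.resolver.query(f"_xmpp-server._tcp.{domain}", "SRV")
    return sorted(
        (r.priority, r.weight, str(r.target), r.port) for r in answers
    )

# Example: Google Talk publishes these records for federation.
for rec in xmpp_server_records("gmail.com"):
    print(rec)
```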

Tuesday
Apr 29 2008

Canadian cell carrier Rogers announces iPhone deal

Macworld | Canadian cell carrier Rogers announces iPhone deal: "Canadian cell service provider Rogers Communications will be bringing its users the iPhone, according to a statement. ‘We’re thrilled to announce that we have a deal with Apple to bring the iPhone to Canada later this year,’ said Ted Rogers, President and CEO of Rogers Communications. ‘We can’t tell you any more about it right now, but stay tuned.’"

(Via Macworld.)

Now the only question is just how badly Rogers intends to screw their customers on the data plan. I can only hope that Apple has negotiated some kind of reasonably priced unlimited data plan instead of the usual pillaging that Rogers prefers.

Monday
Apr 28 2008

Fibre Channel to Software iSCSI Failover Failures

Fibre Channel to Software iSCSI Failover Failures: "Based on these results, I’m inclined to say that one of two things is true. Either: I did something very, very wrong; or ESX isn’t quite right to support automatic failover between FC and software iSCSI. Has anyone else tried this, or am I the only one? If you have tried it, did it work? If so, what steps did you have to take—if any—to make it work properly?"

(Via blog.scottlowe.org.)

Are you using a NetApp for the SAN or something else? I've been able to do this quite happily using Datacore's SANMelody storage virtualisation product. While I didn't test it under high stress loads, it worked just fine under regular production loads of 10-15 running VMs.

In my configuration I allocated the iSCSI initiators and the FC WWNs to the same ESX Server object in the interface, and it seemed to be fine with that.
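If you want to sanity-check a similar setup yourself, esxcfg-mpath -l on the ESX 3.x service console lists every path to every LUN, so something like the rough script below can flag which LUNs really are visible over both fabrics. The parsing is naive and based on my recollection of the 3.0.x output format, so treat it as a sketch rather than a tool:

```python
import subprocess

# Run from the ESX 3.x service console; esxcfg-mpath -l lists all
# paths to every LUN. The line-matching below is guesswork against
# my memory of the 3.0.x output format - adjust to taste.
output = subprocess.run(
    ["esxcfg-mpath", "-l"], capture_output=True, text=True, check=True
).stdout

current_lun, transports = None, {}
for line in output.splitlines():
    if line.startswith("Disk"):
        current_lun = line.split()[1]      # e.g. vmhba1:0:3
        transports.setdefault(current_lun, set())
    elif current_lun and line.lstrip().startswith("FC"):
        transports[current_lun].add("FC")
    elif current_lun and "iscsi" in line.lower():
        transports[current_lun].add("iSCSI")

for lun, seen in sorted(transports.items()):
    tag = "both fabrics" if {"FC", "iSCSI"} <= seen else "single fabric"
    print(f"{lun}: {sorted(seen)} ({tag})")
```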

Edited to add: Side note - I did these tests a while ago using ESX 3.0.1. I haven't tested this against ESX 3.5.

Edited to add: Boy, re-reading my comment about FC WWNs made me realize just how easy it is to slip into Good Morning Vietnam acronym-speak. That made sense to what, about 1% of the population?

Monday
Apr 28 2008

Re: Is Xen ready for the data center? Is that the right question?

Sitting in my drafts folder forever.

Is Xen ready for the data center? Is that the right question?:

Actually, I think the question is "Is Xen ready for your datacenter and your staff?" Or perhaps, "Are your staff ready for Xen?"

Overall, the article covers the bases nicely, and it's true that Xen is a solid virtualisation solution quite capable of producing excellent performance. However, the virtualization engine itself is only a small piece of the puzzle.

In my consulting work I deal with companies of various sizes and requirements, and I think that in a "real"* datacenter you should have little or no trouble finding the skills to deal with running a Xen-based solution, especially when coupled with some of the various administrative toolsets out there. However, the reality of many environments is that their Linux skills are sorely lacking and even the Service Console of ESX scares them. I wish this were not the case, but it's the reality in a lot of places.

"Sadly, most seem to think that IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn’t include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file - a common task for any system administrator."

Everything depends on your datacenter. Don't forget that you're also coupling the project with a whole slew of technologies that may be new to the environment and the staff - VLAN implementations, Fibre Channel and/or iSCSI SANs, synchronous and asynchronous replication, snapshots at various levels, and the like. If you're asking them to go from being Windows sysadmins who've always worked with local storage to a Linux environment with shared storage, that's a huge learning curve.

Or you have the environment where your team supports a variety of stuff, but since it's mostly Microsoft-based, you've got your token Linux guy who does everything and, with any luck, leaves enough decent documentation for the others to deal with anticipated problems. But on his days off, everyone on the team prays that nothing screws up.

"Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen, this is particularly prevalent, as the Xen engine and management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools available and/or in development for the highly capable Xen engine, at varying degrees of maturity."

Another approach might be to say that many of the IT teams and their management are immature if they're not ready to accept new technology.

"And yet, I can’t help but think that comparing features of management tools is completely missing the point. Why are we focusing on the tools, rather than the technology? Shouldn’t we be asking, ‘where is virtualization heading’ and ‘which of these technologies has the most long term viability?’"

Yes and no. There is certainly some value in asking these questions, but the stronger question pushing people is "what can I do today?" If I were to put too much weight on the long-term approach, I'd say that Microsoft's Viridian might** in fact end up being the perfect solution for a mostly-MS shop. However, it's still a year out (assuming all goes as planned) and I have projects that need to start now. As for long-term viability, this is becoming a non-issue with the maturity of the various P2V and V2V tools. Honestly, who cares what platform you're running as long as it meets your current and medium-term requirements (read: the duration of the maintenance contract)? Migrating to another platform is a pretty painless operation - after all, one of the big gains of virtualization is that the hard coupling with a platform is no longer an issue, and that extends out past the hardware barrier.

In the long term the core virtualisation technology will become irrelevant and I suspect strongly that ESX will be given away free the way that Virtual Server and VMWare Server are today. What's more important is the accessibility and depth of the management toolkit. If all I was after was virtualisation for the sake of encapsulation, I'd be running VMWare Server on a Linux host off of local storage (or Xen, granted).

"And which technology has everyone moved to? That’s simple - paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of their partnership with Novell, Microsoft will all either run Xen directly or will be Xen compatible in a very short time."

Oh, I strongly doubt that Viridian has any intention of supporting native Xen images. Remember: embrace and extend. They'll make it easy to import the images, but I don't expect much more than that.

"Of course, those with the most market share will continue to sell their solutions as ‘more mature’ and/or ‘enterprise ready’ while continuing to improve their tools. Unfortunately, they will continue to lean on an outdated, albeit refined technology core. The core may continue to evolve, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution."

Performance isn't everything (nor is logic). From what I've been able to gather from the various performance-profiling battles going on, the difference is usually 5-10% one way or the other, depending on who defines the methodology. And honestly, that kind of difference is negligible in all but the highest-volume environments. On top of that, the real bottleneck in large-scale consolidation is almost always disk I/O, which renders most of these discussions moot.

"It reminds me of the ice farmers’ response to the refrigerator - rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn’t as good."

Umm - I think a better comparison would be two refrigerator manufacturers arguing over using one Freon compound vs. another. The fridge still keeps things cold, and that's all the customer is interested in. Mmmm... cold beer.

"Of course, you could simply choose to wait for Veridian, but I would assert that there are several advantages to going with Xen now. First, you’ll already be running on Xen, so you’ll be comfortable with the tools and will likely incur little, if any conversion cost when Veridian goes golden. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64bit guests, and 32bit paravirtualized guests on 64bit hosts."

Umm - we can already do that today on ESX.

"So what’s the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving."

Bang. Nail, meet hammer. This is the show stopper. IT management makes the budget and technology decisions, and in almost all of the medium and small customer sites I've been to, the IT manager wants to be able to look at a console and have someone explain it to them in under 5 minutes. If it's more complicated than that, you've inspired fear. In larger, more open-minded shops this is less of a barrier, but it's a real barrier.

The other weak spot is the lack of depth in the surrounding ecosystem. In this case, Xen is the lucky beneficiary of the ISVs surrounding VMWare, who compete with one another by extending their products to support Xen, but these folks' priority is VMWare.

Ultimately the market will decide, but she's a fickle, schizoid, irrational mistress. If ergonomics and accessibility drove the market, we'd all be using Macs. If price/performance calculations were king, we'd all be using Linux. Most of the world uses Windows. Go figure. 


* By real, I mean a datacenter composed of multiple environments, a staff of multiple teams with defined and differentiated roles,

** I said "might" not "is" or "will"