Tuesday
Apr 29, 2008

Canadian cell carrier Rogers announces iPhone deal

Macworld | Canadian cell carrier Rogers announces iPhone deal: "Canadian cell service provider Rogers Communications will be bringing its users the iPhone, according to a statement. ‘We’re thrilled to announced that we have a deal with Apple to bring the iPhone to Canada later this year,’ said Ted Rogers, President and CEO of Rogers Communications. ‘We can’t tell you any more about it right now, but stay tuned.’"

(Via Macworld.)

Now the only question is just how badly Rogers intends to screw their customers on the data plan. I can only hope that Apple has negotiated some kind of reasonably priced unlimited data plan instead of the usual pillaging that Rogers prefers.
Monday
Apr 28, 2008

Fibre Channel to Software iSCSI Failover Failures

Fibre Channel to Software iSCSI Failover Failures: "Based on these results, I’m inclined to say that one of two things is true. Either: I did something very, very wrong; or ESX isn’t quite right to support automatic failover between FC and software iSCSI. Has anyone else tried this, or am I the only one? If you have tried it, did it work? If so, what steps did you have to take—if any—to make it work properly?"
(Via blog.scottlowe.org.)

Are you using a NetApp for the SAN or something else? I've been able to do this quite happily using Datacore's SANMelody storage virtualisation product. While I didn't test it under high-stress loads, it worked just fine under regular production loads with 10-15 VMs running.

In my configuration I allocated the iSCSI Initiators and the FC WWNs to the same ESX Server object in the interface and it seemed to be fine with that.

Edited to add: Side note - I did these tests a while ago using ESX 3.0.1.  I haven't tested this against ESX 3.5.

Edited to add: Boy, re-reading my comment about FC WWNs made me realize just how easy it is to slip into Good Morning Vietnam acronym speak. That made sense to what, about 1% of the population?

Monday
Apr 28, 2008

Re: Is Xen ready for the data center? Is that the right question?

This one has been sitting in my drafts folder forever.

Is Xen ready for the data center? Is that the right question?:

Actually I think that the question is "Is Xen ready for your datacenter and your staff? Or perhaps, are your staff ready for Xen?"

Broadly speaking, the article covers the bases nicely, and it's true that Xen is a solid virtualisation solution quite capable of producing excellent performance. However, the virtualisation engine itself is only a small piece of the puzzle.

I consult with companies of various sizes and requirements, and I think that in a "real"* datacenter you should have little or no trouble finding the skills to run a Xen-based solution, especially when coupled with some of the various administrative toolsets that are out there. However, the reality of many environments is that their Linux skills are sorely lacking and even the Service Console of ESX scares them. I wish this were not the case, but it's the reality in a lot of places.

"Sadly, most seem to think that IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn’t include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file - a common task for any system administrator."

Everything depends on your datacenter. Don't forget that the project also brings along a whole slew of technologies that can be new to the environment and the staff - VLAN implementations, Fibre Channel and/or iSCSI SANs, synchronous and asynchronous replication, snapshots at various levels, and the like. If you're asking people to go from being Windows sysadmins who've always worked with local storage to a Linux environment with shared storage, that's a huge learning curve.

Or you have the environment where your team is supporting a variety of stuff, but since it's mostly Microsoft-based, you've got your token Linux guy who does everything and, with any luck, leaves enough decent documentation for the others to deal with anticipated problems. But on his days off, everyone on the team prays that nothing screws up.

"Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen, this is particularly prevalent, as the Xen engine and management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools available and/or in-development for the highly capable Xen engine, at varying degrees of maturity."

Another approach might be to say that many of the IT teams and their management are immature if they're not ready to accept new technology.

"And yet, I can’t help but think that comparing features of management tools is completely missing the point. Why are we focusing on the tools, rather than the technology? Shouldn’t we be asking, ‘where is virtualization heading’ and ‘which of these technologies has the most long term viability?’"

Yes and no. There is certainly some value in asking these questions, but the other, stronger question pushing people is "what can I do today?" If I were to put too much weight on the long term approach, I'd say that Microsoft's Viridian might** in fact end up being the perfect solution for a mostly MS shop. However, it's still a year out (assuming all goes as planned) and I have projects that need to start now. As for long term viability, this is becoming a non-issue with the maturity of the various P2V and V2V tools. Honestly, who cares what platform you're running as long as it meets your current and medium term requirements (read: duration of maintenance contract)? Migrating to another platform is a pretty painless operation - after all, one of the big gains of virtualization is that the hard coupling with a platform is no longer an issue, and that extends out past the hardware barrier.

In the long term the core virtualisation technology will become irrelevant, and I suspect strongly that ESX will be given away free the way that Virtual Server and VMware Server are today. What's more important is the accessibility and depth of the management toolkit. If all I was after was virtualisation for the sake of encapsulation, I'd be running VMware Server on a Linux host off of local storage (or Xen, granted).

"And which technology has everyone moved to? That’s simple - paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of their partnership with Novell, Microsoft will all either run Xen directly or will be Xen compatible in a very short time."

Oh I doubt strongly that Viridian has any intention of supporting native Xen images. Remember, embrace and extend. They'll make it easy to import the images, but I don't expect much more than that.

"Of course, those with the most market share will continue to sell their solutions as ‘more mature’ and/or ‘enterprise ready’ while continuing to improve their tools. Unfortunately, they will continue to lean on an outdated, albeit refined technology core. The core may continue to evolve, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution."

Performance isn't everything (nor is logic). From what I've been able to gather from the various performance profiling battles going on, the difference is usually 5-10% one way or the other, depending on who defines the methodology. And honestly, that kind of difference is negligible in all but the highest volume environments. On top of that, the real bottleneck in large scale consolidation is almost always disk I/O, which renders most of these discussions moot.
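
To put some entirely hypothetical numbers behind that claim, here's a quick back-of-envelope sketch in Python of where a consolidated host runs out of headroom first. The core counts, per-VM demands and array capacity below are illustrative assumptions rather than measurements; the point is simply that aggregate disk I/O tends to hit its ceiling long before a 5-10% hypervisor overhead matters.

```python
# Back-of-envelope consolidation math; all figures are hypothetical examples.

HOST_CORES = 16              # e.g. a 4-socket, 4-core host
CORE_CAPACITY = 1.0          # normalised CPU units per core
HYPERVISOR_OVERHEAD = 0.10   # assume the worst case of the 5-10% range

VM_COUNT = 40                # VMs consolidated onto the host
VM_CPU_DEMAND = 0.25         # average normalised CPU units per VM
VM_IOPS_DEMAND = 120         # average disk IOPS per VM

ARRAY_IOPS = 3000            # what a modest shared array might sustain

cpu_available = HOST_CORES * CORE_CAPACITY * (1 - HYPERVISOR_OVERHEAD)
cpu_needed = VM_COUNT * VM_CPU_DEMAND
iops_needed = VM_COUNT * VM_IOPS_DEMAND

print(f"CPU:  need {cpu_needed:.1f}, have {cpu_available:.1f} "
      f"-> {'OK' if cpu_needed <= cpu_available else 'bottleneck'}")
print(f"Disk: need {iops_needed} IOPS, array delivers {ARRAY_IOPS} "
      f"-> {'OK' if iops_needed <= ARRAY_IOPS else 'bottleneck'}")
```

With these made-up numbers the CPU still has headroom even with the full 10% overhead, while the storage is already oversubscribed, which is exactly the pattern I keep running into.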

"It reminds me of the ice farmers’ response to the refrigerator - rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn’t as good."

Umm - I think a better comparison would be two refrigerator manufacturers arguing over using one freon compound vs another. The fridge still keeps things cold, and that's all the customer is interested in. Mmmm...cold beer.

"Of course, you could simply choose to wait for Veridian, but I would assert that there are several advantages to going with Xen now. First, you’ll already be running on Xen, so you’ll be comfortable with the tools and will likely incur little, if any conversion cost when Veridian goes golden. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64bit guests, and 32bit paravirtualized guests on 64bit hosts."

Umm - we can already do that today on ESX.

"So what’s the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving."

Bang. Nail, meet hammer. This is the showstopper. IT management makes the budget and technology decisions, and in almost all of the medium and small customer sites I've been to, the IT manager wants to be able to look at a console and have someone explain it to them in under 5 minutes. If it's more complicated than that, you've inspired fear. In larger, more open-minded shops this is less of a barrier, but it's still a real barrier.

The other weak spot is the lack of depth of the surrounding ecosystem. In this case, Xen is the lucky beneficiary of the ISVs surrounding VMware, who compete with one another by extending their products to cover Xen, but these folks' priority is VMware.

Ultimately the market will decide, but she's a fickle, schizoid, irrational mistress. If ergonomics and accessibility drove the market, we'd all be using Macs. If price/performance calculations were king, we'd all be using Linux. Most of the world uses Windows. Go figure. 


* By real, I mean a datacenter composed of multiple environments, a staff of multiple teams with defined and differentiated roles, and so on.

** I said "might", not "is" or "will".

Monday
Apr 28, 2008

Apple in the enterprise

Wow. Following a few links here and there, I ended up on this thread, which is pretty much representative of all of the Apple in the enterprise threads that I see on message boards: a whole lot of non sequiturs. It's not even a discussion; there are just people popping in with comments out of the blue that are, for the most part, wrong, anecdotal, and do nothing to move the discussion forward.

The link that led me to this thread appeared to have nothing to do with the flow of the conversation, but was regarding how Psystar's offering would impact the availability of virtualisation of OS X Server. I fail to see just how this will have any impact at all, but the subject of virtualising OS X Server is near and dear to my heart.

The current EULA for OS X Server permits the virtualisation of OS X Server as long as you have a license for each instance. Fairly normal, straightforward policy. The catch is that you must be running it on Apple hardware, which is basically an extension of Apple's OS X licensing policy and of the fact that they are a hardware vendor with a software development arm that goes beyond the basics of writing device drivers for their hardware.

Your current options for running a virtual machine on Apple hardware are Fusion, Parallels or Microsoft Virtual Server, all reasonable enough for small-scale use. These are all pretty simple products based on the "old" virtualisation model: an application running on top of a commercial operating system. The problem with this is that the host OS was not designed to run multiple virtual machines concurrently. It can do so, but it's not optimal.

If you're serious about virtualising your production environment, you're running VMware ESX (or possibly Xen if you're the adventurous type). Currently, ESX does not run on Xserves or any other Apple equipment, mostly due to the EFI vs BIOS setup. Releasing an EFI-enabled version should be relatively easy for VMware, but there needs to be a business case for it. That business case is waiting on Apple's go-ahead. VMware is certainly capable of building and running OS X compatible virtual machines on ESX, but it's the sort of thing that you want to do in partnership rather than as a shot across the bow (resulting in lawyers making money). You have to factor in that the only current benefit would be for existing Apple customers to consolidate both Windows and OS X Servers on one machine. I imagine that this market is probably pretty small, so there's little incentive for VMware to port ESX to the Xserve.

Something to remember about the impact of the kind of virtualisation offered by ESX is that server vendors are selling a lot fewer servers, and the projections over the next 10 years aren't pretty. I've done a number of ESX deployments where the server refresh cycle saw 100 servers hitting end of life consolidated onto 6-8 new machines instead of generating 100 replacement purchases. Granted, these were heavy-duty servers, but it wasn't a 10-1 price difference either. This is going to accelerate as virtualisation becomes more and more mainstream. I visit clients' server rooms and see a lot of empty racks these days.
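
As a rough illustration of that refresh-cycle arithmetic (the prices below are invented for the example; none come from an actual deployment), even when each consolidation host costs several times what a commodity 1U box does, the total spend still drops:

```python
# Hypothetical refresh-cycle arithmetic; all prices are illustrative assumptions.
standalone_servers = 100
standalone_unit_price = 4_000        # assumed price of a commodity 1U server

consolidated_hosts = 8               # upper end of the 6-8 host example
consolidated_unit_price = 25_000     # assumed price of a heavy-duty ESX host

print(f"Standalone refresh:   {standalone_servers * standalone_unit_price:>9,}")
print(f"Consolidated refresh: {consolidated_hosts * consolidated_unit_price:>9,}")
```

Even at roughly 6 times the per-unit price, the consolidated refresh comes in at half the total, which is why the server vendors are nervous.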

Apple is certainly evaluating the best way to approach the marketplace given the impact virtualisation is having. Since Apple is a server vendor, this is a direct threat to their current hardware business model. I see a number of different approaches they could be considering:

Try and follow the same path as the desktop. They have succeeded in selling the value of OS X to the consumer, with the bonus that, with Boot Camp or Fusion or Parallels, they can continue to use their existing operating systems and software. This is bearing fruit in the consumer space, but I don't think it's viable in the server space, since ESX-style consolidation is based on pure bang for the performance buck and requires lots of connectivity that you can't easily squeeze into a 1U server box. So even if there were an ESX version capable of running on the Xserve, and although it's a competent enough server, it's simply not designed for this kind of massive consolidation.

Go head to head with the current server offerings from Dell and HP. The primary machines I see being used for ESX are 2U dual-processor machines at the low end, in order to have enough slots for the connectivity and redundancy required. And I'm seeing fewer and fewer of these as larger clients consolidate on 4-processor, 4-core servers with 64-128GB of memory. Apple could take a completely different route and build a partnership with VMware to offer an ESX 3i version on the Xserve and add a 4-processor model. But this seems highly unlikely given that Apple doesn't have the service and support infrastructure in place to make this a credible offer to companies that already have long-standing maintenance agreements with the existing big players. Why would I make such a large investment in Apple-specific hardware in order to be able to run a few OS X Server instances? A very hard sell.

Sell OS X Server as a software-only product licensed for virtualisation deployments only. Treating the Server version as an entirely separate, software-only product makes sense. They've already made the first step by permitting virtualisation on Apple hardware, so all that remains is to offer a version tailored for specific virtual environments.

This would rarely be in competition with an Xserve sale. The companies that have deployed serious virtualisation environments won't be buying Xserves, no matter how attractive the software capabilities of OS X Server. I've already run into this situation, where the OS X Server wiki and collaboration tools were evaluated and found to be a good fit, but the current policy was that everything gets virtualised. So there's no lost hardware sale, but rather an easier entry point into a client that might otherwise be impenetrable.

Apple gets to keep some of their historic advantages by running in a virtual machine. Everything about the virtual hardware is tightly controlled and rarely subject to change. The traditional technical advantages of a stable hardware platform remain valid.

Selling OS X Server in a virtual only version strikes me as a no-brainer. I only hope that someone high up at Apple agrees.

Monday
Apr 28, 2008

More on the Mini

I've been following along with the fiasco that is Psystar and noticed that a recurring theme in many forums was the comparison of the machine proposed by Psystar to the Mac Mini.

At the risk of being horribly cliché, this is comparing apples to oranges.

The Mini is the machine that begs the comparison, since it's the closest thing that Apple sells that targets the entry-level market. But there are a number of very important reasons why this comparison is pointless, leaving aside the entire OS X compatibility question (more on that later).

You simply can't compare the price/performance profiles of these two machines without also factoring in the impact of the form factor. The Mini is very, very small, and the PC equivalent is based on a nano-ITX motherboard. In a tower case you have many more options for the motherboard and physically more space to put things, which means more connectivity options and better cooling, which in turn means more powerful components. With a tower you're also buying a motherboard that is very mainstream and produced in relatively high quantities for multiple PC builders.

Storage

Obviously when you make something that small, you are limited in your choice of components. The Mini uses a 5400RPM 2.5" laptop drive, whereas a tower design has the space to physically hold a larger 7200RPM 3.5" drive and the ability to cool it. Right away this gives the performance advantage to a larger machine, but it comes at the cost of size. Depending on your needs, this will be more or less important (witness how many people use a notebook computer as their primary machine). A laptop drive can produce perfectly acceptable performance, up to and including running virtual machines, Photoshop and even small-scale video work.

Would it be faster with a faster drive? Of course! But there's no room in a Mini for this kind of drive. You can gain the benefits of a faster drive either by adding an external drive connected by FireWire or, if you're feeling ambitious and really want the speed boost, by putting a cable on the internal SATA connection and passing it out to an external SATA drive. Obviously the SATA connection outperforms the FireWire connection, but it comes at the price of having to do some pretty serious modifications.

Networking

Not much to say here, as it's hard to buy a computer without an ethernet card (probably gigabit) and possibly an integrated Wi-Fi card of varying modernity. The only hiccup here is price: laptop components are going to be more expensive than desktop ones.

Video

We're again dealing with the issue of space limitations. If you have a tower with PCI slots, you can select the video card you want, but in a Mini the choice is pretty much limited to on-board graphics. That said, the Mini's video is pretty competent as long as you're not playing 3D games. I have a Mini connected to an LCD TV at 720p and it's quite capable of handling this style of HD content. It does strain when downscaling 1080p content to the 720p resolution. I don't have a 1080p screen to test against to see if it would be less stressed if the content matched the resolution.
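
A rough pixel-count comparison (illustrative arithmetic only, not a measurement) shows why the downscaling case is harder: the Mini has to decode frames carrying 2.25 times the pixels of native 720p material, and then spend extra cycles scaling every frame down for the display.

```python
# Why decoding 1080p for a 720p screen is harder than native 720p playback.
pixels_1080p = 1920 * 1080   # 2,073,600 pixels decoded per frame
pixels_720p = 1280 * 720     #   921,600 pixels decoded per frame

print(f"Decode workload ratio: {pixels_1080p / pixels_720p:.2f}x")  # ~2.25x
print("...plus a scaling pass on every frame before display.")
```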

Noise

This is a huge deal and something that is overlooked in the standard specifications list. I think that every computer should come with a dB rating at idle and under load. I had originally bought a Shuttle to use as a media station in the living room, as it was reasonably compact with decent hardware. That lasted all of a week before it got packed off to the office due to the noise. Office environments are more forgiving of background noise, but in a quiet environment like a living room (assuming no kids...) you can pick out each individual contributor to the background noise, and they're all noticeable.

On Hackintoshes

At the risk of grossly oversimplifying the situation, the claims made by Psystar could also be made by just about any PC manufacturer out there right now. If you have the time and the technical chops to do so, you can run OS X on most modern Intel-based hardware, as demonstrated by a number of recent articles in Macworld. You need to find a way to boot OS X off a BIOS-based Intel machine and have the appropriate drivers for the hardware. Nothing terribly complicated here, just a PITA since these pieces aren't built into the installer. Note: you can run the Darwin layer legally, although that's not the part of OS X that most people are interested in.

If you have the time to burn and like playing with this kind of thing, have fun! I know that when I was younger and had more spare time on my hands, this kind of playing around was immensely satisfying and taught me a lot about how computers work. But if you depend on your computer as a business tool, then I would say that this is not really a good approach to saving money, since you'll invest a fair bit of time getting things running (although the pre-installation service from Psystar will help a lot) and you can't simply run Software Update and be sure that you'll have a working machine afterwards unless you do your research on each and every update that comes down the pipe.

I'm hoping that all of the noise and interest will communicate to Apple the market demand for a modular machine between the Mini and the Mac Pro. It's clearly the gaping hole in Apple's current line-up as a computer vendor. But looking at it from Apple's perspective as an innovator, what can they do to stand out from the crowd in this space? The iMac's all-in-one design is now being copied, but they were the first to really get it right and make an impact with this style of machine. The Mini is still the best price/performance option in its market niche. The new MacBook Air is a whole new class of laptop.

Apple's strategy

Apple's not going to sell anything that they can't trumpet as being best of class in some way (at least at launch). The Mini was the smallest full-featured computer, the iMac the simplest, the Air the lightest, the Pro the fastest.

What superlative can you attach to a mid-range tower? Hint: Cheapest is not an option.