Monday
Apr 28, 2008

Apple in the enterprise

Wow. Following a few links here and there, I ended up on this thread, which is pretty much representative of all of the Apple-in-the-enterprise threads I see on message boards: a whole lot of non-sequiturs. It's not even a discussion; people just pop in with comments out of the blue that are, for the most part, wrong, anecdotal and do nothing to move the conversation forward.

The link that led me to this thread appeared to have nothing to do with the flow of the conversation; it was about how Psystar's offering would affect the availability of virtualised OS X Server. I fail to see how it will have any impact at all, but the subject of virtualising OS X Server is near and dear to my heart.

The current EULA for OS X Server permits the virtualisation of OS X Server as long as you have a licence for each instance. Fairly normal, straightforward policy. The catch is that you must be running it on Apple hardware, which is basically an extension of Apple's OS X licensing policy and of the fact that they are a hardware vendor with a software development arm that goes well beyond writing device drivers for their own hardware.

Your current options for running a virtual machine on Apple hardware, reasonable enough for small-scale use, are Fusion, Parallels or Microsoft Virtual Server. These are all fairly simple products based on the "old" virtualisation model: an application running on top of a commercial operating system. The problem with this is that the host OS was not designed to run multiple virtual machines concurrently. It can be done, but it's not optimal.

If you're serious about virtualising your production environment, you're running VMware ESX (or possibly Xen if you're the adventurous type). Currently, ESX does not run on Xserves or any other Apple equipment, mostly due to the EFI vs BIOS question. Releasing an EFI-enabled version should be relatively easy for VMware, but there needs to be a business case for it, and that business case is waiting on Apple's go-ahead. VMware is certainly capable of building and running OS X compatible virtual machines on ESX, but it's the sort of thing you want to do in partnership rather than as a shot across the bow (resulting in lawyers making money). You also have to factor in that the only current benefit would be letting existing Apple customers consolidate both Windows and OS X servers onto one machine. I imagine that this market is pretty small, so there's little incentive for VMware to port ESX to the Xserve.

Something to remember about the impact of the kind of virtualisation offered by ESX is that server vendors are selling a lot fewer servers, and the projections over the next 10 years aren't pretty. I've done a number of ESX deployments where a refresh cycle of 100 servers hitting end of life generated not 100 replacement purchases but a consolidation onto 6-8 servers. Granted, these were heavy-duty servers, but it wasn't a 10-to-1 price difference either. This is going to accelerate as virtualisation becomes more and more mainstream. I visit clients' server rooms and see a lot of empty racks these days.
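To put rough numbers on that kind of refresh (all prices here are purely illustrative assumptions, not figures from any real quote), the back-of-the-envelope arithmetic looks something like this:

```python
# Back-of-the-envelope consolidation math. Every price is a made-up,
# illustrative assumption; only the 100 -> 6-8 server ratio comes from above.
servers_retired = 100        # commodity 1U servers hitting end of life
commodity_price = 3_000      # hypothetical like-for-like replacement cost
esx_hosts = 8                # consolidated ESX hosts
esx_host_price = 20_000      # hypothetical heavy-duty virtualisation host

refresh_cost = servers_retired * commodity_price   # 300,000
consolidated_cost = esx_hosts * esx_host_price     # 160,000

print(f"Like-for-like refresh: {refresh_cost:,}")
print(f"Consolidated refresh:  {consolidated_cost:,}")
print(f"Servers the vendor doesn't sell this cycle: {servers_retired - esx_hosts}")
```

Even if the consolidated hosts cost several times more per box, the vendor still ships an order of magnitude fewer machines, and that's the part that hurts.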

Apple is certainly evaluating the best way to approach the marketplace given the impact virtualisation is having. For a server vendor, this is a direct threat to the current hardware business model. I see a number of different approaches they could be considering:

Try and follow the same path as the desktop. They have succeeded in selling the value of OS X to the consumer, with the bonus that Boot Camp, Fusion or Parallels lets people continue to use their existing operating systems and software. This is bearing fruit in the consumer space, but I don't think it's viable in the server space, since ESX-style consolidation is about pure bang for the performance buck and requires connectivity that you can't easily squeeze into a 1U box. Even if there were an ESX version capable of running on the Xserve, and although it's a competent enough server, it's simply not designed for this kind of massive consolidation.

Go head to head with the current server offerings from DELL and HP. The primary machines I see being used for ESX are 2U two-processor machines at the low end, chosen to have enough slots for the connectivity and redundancy required. And I'm seeing fewer and fewer of these as larger clients consolidate onto 4-processor, 4-core servers with 64-128GB of memory. Apple could take a completely different route, build a partnership with VMware to offer an ESX 3i version on the Xserve and add a 4-processor model. But this seems highly unlikely given that Apple doesn't have the service and support infrastructure in place to make a credible offer to companies that already have long-standing maintenance agreements with the existing big players. Why would I make such a large investment in Apple-specific hardware just to be able to run a few OS X Server instances? A very hard sell.

Sell OS X Server as a software-only product licensed for virtualisation deployments only. Treating the Server version as an entirely separate, software-only product makes sense. They've already taken the first step by permitting virtualisation on Apple hardware; all that remains is to offer a version licensed for specific virtual environments.

This would rarely be in competition with an Xserve sale. The companies that have deployed serious virtualisation environments won't be buying Xserves, no matter how attractive the software capabilities of OS X Server. I've already run into this situation: the OS X Server wiki and collaboration tools were evaluated and found to be a good fit, but the client's policy was that everything be virtualised. So there's no lost hardware sale, but rather an easier entry point into a client that might otherwise be impenetrable.

Apple also keeps some of its historic advantages when running in a virtual machine. Everything about the virtual hardware is tightly controlled and rarely subject to change, so the traditional technical benefits of a stable hardware platform remain valid.

Selling OS X Server in a virtual-only version strikes me as a no-brainer. I only hope that someone high up at Apple agrees.

Monday
Apr 28, 2008

More on the Mini

I've been following along with the fiasco that is Psystar and noticed a recurring theme in many forums: comparing the machine proposed by Psystar to the Mac Mini.

At the risk of being horribly clichéd, this is comparing apples to oranges.

The Mini is the machine that invites the comparison, since it's the closest thing Apple sells that targets the entry-level market. But there are a number of very important reasons why the comparison is pointless, leaving aside the entire OS X compatibility question (more on that later).

You simply can't compare the price/performance profiles of these two machines without also factoring in the impact of the form factor. The Mini is very, very small; its PC equivalent would be based on a nano-ITX motherboard. In a tower case you have many more motherboard options and physically more space to put things, which means more connectivity options and better cooling, which in turn means more powerful components. You're also buying a motherboard that is very mainstream and produced in relatively high volumes for multiple PC builders.

Storage

Obviously, when you make something that small, you are limited in your choice of components. The Mini uses a 5400RPM 2.5" laptop drive, whereas a tower design has the space to physically hold a larger 7200RPM 3.5" drive and the ability to cool it. Right away this gives the performance advantage to the larger machine, but it comes at the cost of size. Depending on your needs, this will be more or less important (witness how many people use a notebook computer as their primary machine). A laptop drive can produce perfectly acceptable performance, up to and including running virtual machines, Photoshop and even small-scale video work.

Would it be faster with a faster drive? Of course! But there's no room in a Mini for that kind of drive. You can gain the benefits of a faster drive either by adding an external drive connected over FireWire or, if you're feeling ambitious and really want the speed boost, by running a cable from the internal SATA connection out to an external SATA drive. The SATA connection obviously outperforms the FireWire connection, but it comes at the price of some pretty serious modifications.
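A quick back-of-the-envelope comparison shows why the modification is tempting. The figures below are rough, era-appropriate ballpark numbers (approximate interface ceilings and typical sustained drive rates), not benchmarks:

```python
# Approximate throughput ceilings (MB/s) per interface and typical sustained
# drive rates. All numbers are rough assumptions for illustration only.
interface_ceiling = {"FireWire 400": 50, "FireWire 800": 100, "eSATA (3 Gbit/s)": 300}
drive_sustained = {'5400RPM 2.5" laptop drive': 50, '7200RPM 3.5" desktop drive': 90}

for drive, rate in drive_sustained.items():
    for bus, ceiling in interface_ceiling.items():
        # Effective rate is capped by whichever is slower: the drive or the bus.
        print(f'{drive} over {bus}: ~{min(rate, ceiling)} MB/s')
```

In other words, FireWire 400 caps a fast 3.5" drive at roughly the speed of the internal laptop drive, while the eSATA route lets the drive run flat out.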

Networking

Not much to say here, as it's hard to buy a computer without an ethernet card (probably gigabit) and possibly an integrated Wifi card of varying modernity. The only hiccup here is price: laptop components are going to be more expensive than desktop ones.

Video

We're again dealing with the issue of space limitations. If you have a tower with PCI slots, you can select the video card you want, but in a Mini the choice is pretty much limited to on-board graphics. That said, the Mini's video is pretty competent as long as you're not playing 3D games. I have a Mini set up connected to an LCD TV at 720p and it's quite capable of handling this style of HD content. It does strain when downscaling 1080p content to the 720p resolution; I don't have a 1080p screen to test against to see whether it would be less stressed if the content matched the resolution.

Noise

This is a huge deal and something that is overlooked in the standard specifications list. I think that every computer should come with a dB rating at idle and under load. I had originally bought a Shuttle to use as a media station in the living room, as it was reasonably compact with decent hardware. It lasted all of a week before being packed off to the office due to the noise. Office environments are more forgiving of background noise, but in a quiet environment like a living room (assuming no kids...) you can pick out each individual contributor to the background noise, and they're all noticeable.

On Hackintoshes

At the risk of grossly oversimplifying the situation, the claims made by Psystar could also be made by just about any PC manufacturer out there right now. If you have the time and the technical chops, you can run OS X on most modern Intel-based hardware, as demonstrated by a number of recent articles in Macworld. You need to find a way to boot OS X on a BIOS-based Intel machine and have the appropriate drivers for the hardware. Nothing terribly complicated here, just a PITA, since these pieces aren't built into the installer. Note: you can run the Darwin layer legally, although that's not the part of OS X that most people are interested in.

If you have the time to burn and like playing with this kind of thing, have fun! I know that when I was younger and had more spare time on my hands, this kind of playing around was immensely satisfying and taught me a lot about how computers work. But if you depend on your computer as a business tool, this is not really a good approach to saving money. You'll invest a fair bit of time getting things running (although the pre-installation service from Psystar will help a lot), and you can't simply run Software Update and be sure that you'll have a working machine afterwards unless you do your research on each and every update that comes down the pipe.

I'm hoping that all of the noise and interest will communicate to Apple the market demand for a modular machine between the Mini and the Mac Pro. It's clearly the gaping hole in Apple's current line-up as a computer vendor. But looking at it from Apple's perspective as an innovator, what can they do to stand out from the crowd in this space? The iMac's all-in-one design is now being copied, but they were the first to really get it right and make an impact with this style of machine. The Mini is still the best price/performance option in its market niche. The new MacBook Air is a whole new class of laptop.

Apple's strategy

Apple's not going to sell anything that they can't trumpet as best in class in some way (at least at launch). The Mini was the smallest full-featured computer, the iMac the simplest, the Air the lightest, the Pro the fastest.

What superlative can you attach to a mid-range tower? Hint: Cheapest is not an option.

Thursday
Apr 24, 2008

DELL: It's not a bug, it's a feature...

Sigh. This kind of stuff is really annoying. I'm in the process of building up a storage system using some of the latest kit from DELL and just ran into some very interesting problems.

The setup is two DELL R900s coupled to MD1000s with the latest PERC 6/E SAS controllers. Our initial benchmarks on the system are really quite impressive. Now I've moved on to the acceptance tests to validate the way the system reacts to various types of failure and how it recovers.

I'm using SANMelody on the systems to form a high-availability SAN, and I have a set of standard failure tests that I've run on various similar setups using HP and IBM equipment, mostly MSA30 and MSA50 disk bays and various other JBODs. One test is a brutal crash of the bay that's acting as the primary storage. The reaction is just as expected: all of my servers fail over gracefully to the second server, even under extremely high IO load from multiple ESX servers with VMs running IOMeter. The machine comes back up, the two servers agree that they can't trust the data on the crashed system, and the mirrors are cleaned and automatically resynchronised. It takes a while, but even with the IOMeter load hammering the backup server it puts everything back and goes merrily along its way.

All of my other standard tests of gracefully stopping the SANMelody service, cutting the replication link, etc., all work as expected. Very smooth and everyone is happy.
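As an aside, the continuity check for these tests doesn't need to be anything sophisticated. A trivial probe along these lines, run from a guest against a file on a SAN-backed volume, is enough to measure the longest write stall seen during a failover (this is just an illustrative sketch with a hypothetical path, not a description of SANMelody or IOMeter internals):

```python
# Minimal IO-continuity probe: append to a file on the SAN-backed volume and
# record the longest gap between writes that actually reached the disk.
import os
import time

PATH = "/mnt/san_volume/probe.log"   # hypothetical mount point on the mirrored LUN
INTERVAL = 0.1                       # seconds between write attempts

worst_gap = 0.0
last_ok = time.time()

try:
    while True:
        try:
            with open(PATH, "a") as f:
                f.write(f"{time.time()}\n")
                f.flush()
                os.fsync(f.fileno())   # force the write through to storage
            now = time.time()
            worst_gap = max(worst_gap, now - last_ok)
            last_ok = now
        except OSError:
            pass                       # write failed while IO is being rerouted
        time.sleep(INTERVAL)
except KeyboardInterrupt:
    print(f"Longest observed write stall: {worst_gap:.1f}s")
```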

Then I get to the power failure of a disk bay and my day goes to hell. Cut the power to the bay, let it sit for a while and watch SANMelody reroute IO to the other server, still with no interruption of service. So far so good. I power the MD1000 back up and watch the SANMelody console, waiting for the volumes to come online. Wait. Wait. Wait. That's not good. Every other disk system I've tested this on brings the disks back online automatically.

Rescan the disks - still nothing from the disk bay. Open up the Dell OpenManage web console and see that it has identified the disks as a foreign configuration. That's bad. I reimport the configuration from the disks using the OpenManage console and the volumes start coming back online. A couple of things are really not good here. I shouldn't need a manual intervention to bring my disks back. On top of that, OpenManage has created a phantom hot spare in slot 22 of a 15-disk bay. I have no idea what happens to my bay if it tries to rebuild a RAID from this imaginary disk, and I don't think it will be pretty.

Going back to classic troubleshooting techniques, it's time to reboot and see if things get better. They don't. First off, the controller still isn't convinced that it can use the disk configuration since it hasn't rescanned all of the RAID volumes, so the boot sequence stops and waits for confirmation to import the foreign configuration again. Once that's done, the system reports that it's busy doing a background initialisation. Well, not really, but the UI badly needs to clarify the difference between a background initialisation and a background validation. And my phantom disk is still visible.

Hello? Dell support? (After a 25-minute wait to talk to someone.) Explain situation. Response: that's the way it's supposed to work. If the controller sees the disks go offline, it refuses to bring them back online without a manual intervention to import the "foreign" configuration. I still have the ticket open on the phantom disk.

Now perhaps I'm being stupid here, but if the controller thinks it's a foreign configuration, wouldn't that mean it doesn't match what's in the PERC? Since nothing has changed in the configuration, how could it be different? At the very least, it should be able to read the configuration from the disks, compare it to the controller's last-known configuration and decide that it can remount the volumes. Older Dell controllers used to let me set a switch telling the system how to react, so I could specify whether to always use the disk configuration, always use the card's configuration, or wait for user input. I'd really, really like that option back.
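The behaviour I'm asking for isn't complicated. Expressed as a sketch (this is obviously not PERC firmware, just the decision logic and the configurable policy switch the older controllers offered):

```python
# Hypothetical auto-import policy for a configuration flagged as "foreign"
# after a bay comes back from a power loss. Illustrative logic only.
POLICY = "prefer_disk"   # or "prefer_controller", "ask_user"

def reconcile(controller_config: dict, disk_config: dict) -> str:
    if disk_config == controller_config:
        # Nothing actually changed: the bay just lost power. Bring it back online.
        return "import_and_bring_online"
    if POLICY == "prefer_disk":
        return "import_disk_config"
    if POLICY == "prefer_controller":
        return "keep_controller_config"
    return "wait_for_user_input"
```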

Now I wouldn't be that upset overall, since I can manually import the foreign configuration without restarting the server (so even if it's 3AM, I can VPN in and access the console), but it then requires a background initialisation before it changes the state of the disks from foreign to online. And that hammers the disks and degrades my IO on the bay. On an MD1000 with 15 750GB SATA drives, I'm good for a few days of validation.
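The arithmetic behind "a few days" is simple enough. The scan rate below is an assumption, not a measured figure:

```python
# Rough estimate of a background validation pass across the bay.
capacity_gb = 750            # per drive
scan_mb_per_s = 40           # assumed per-drive rate with no competing load

hours = (capacity_gb * 1024) / scan_mb_per_s / 3600
print(f"~{hours:.0f} hours at {scan_mb_per_s} MB/s per drive")
# Knock that rate down to single digits under live production IO and the pass
# stretches out to multiple days.
```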

I'm beginning to think that there are some serious problems with the current generation of Dell SAS controllers, since I have another client getting grief from random loss of the RAID configuration on their ESX boot volumes. It's the standard internal RAID 1 SAS setup (PERC 6/i) that you see everywhere, and for no apparent reason some of the machines will lose the configuration and stop accessing the drives while the server is running. This plays royal hell with everything, since the malfunction does not trigger ESX HA: the OS is still alive (albeit on life support), but you can't ask the server to do anything.

Anyone else seeing odd behaviour from DELL SAS controllers?

Note: Yes, everything is using redundant power supplies connected to separate electrical feeds in a battery- and generator-backed data center, but sh*t happens, so you have to be prepared and know how things are going to react.

Thursday
Apr 24, 2008

iriver's W7 portable media player gets reviewed

Wow - doesn't that look like a Newton MessagePad 2000?

(Via Engadget.)

Tuesday
Apr 22, 2008

Groupware Bad

Groupware Bad: "If you want to do something that's going to change the world, build software that people want to use instead of software that managers want to buy."

Nicely written little article on the perils of developing solutions that nobody wants to use. I think this is exactly why Apple is seeing a resurgence these days: they're not targeting the buzzword-laden feature lists demanded by IT managers, but designing applications that appeal to real people.