March 15, 2011

Gio Coglitore (Facebook) Does Not Understand Virtualization

PC Mag today reported on an Intel event where Facebook Labs director Gio Coglitore described Facebook's server strategy using a bunch of non sequiturs that demonstrate he doesn't understand virtualization.

First of all, he admits that they don't follow IT 101 testing practices:

Coglitore also said that Facebook believes in "testing in production," adding test machines to a live network.
"If you ever experience a glitch [while using the site], it might be Gio testing something out," Coglitore said.
Hey, there's a good idea: let's throw half-baked systems out there and see if they blow up our business.  At least Microsoft gives you the courtesy of choosing whether or not to install a service pack/point a loaded gun at your head.  Perhaps this explains why Facebook has repeatedly introduced, and then had to back off of, new software versions, no doubt at great expense.

Coglitore goes on to unnecessarily coin a new phrase, "realized environments" as a counterpoint to virtual environments.  I never realized that running software natively required a new term.  This is like calling a conversation between two people in a room "local acoustic communications" to make it sound like a novel innovation over the telephone.  We already have terms like "native" and "bare metal" that make a term like "realized" unnecessarily abstract.

Within the front end, testing has proven that Facebook's front-end code is better realized, not virtualized, Coglitore said. "Software layers tend to be locking," he said. "One of the things we enjoy at Facebook is rapid iteration."
What does that have to do with virtualization?  Want rapid iteration?  Spin up a new virtual machine to run your new software.  Bam!  That's a lot faster than installing a new piece of hardware in your data center.  Want to experiment with different configurations of servers?  Spin up a bunch of virtual machines in your new configuration and performance-test it.  Reconfigure on the fly until you have the right balance.  Try doing that with physical servers.
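To show just how cheap iteration is, here's a rough sketch of what "spin up a new VM" looks like with the libvirt Python bindings.  The VM name, sizing, and disk image path are invented for illustration; this is a sketch, not a production recipe:

```python
# Sketch: boot a throwaway test VM from a golden image in seconds--no new
# hardware needed.  Assumes libvirt-python is installed and a prebuilt disk
# image exists at the (hypothetical) path below.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>test-webserver-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/golden-web.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.createXML(DOMAIN_XML, 0)    # define and boot a transient VM
print("Booted", dom.name(), "-- call dom.destroy() when the test is done")
conn.close()
```

Done testing?  Destroy it and spin up the next configuration.  No screwdriver, no rack space, no purchase order.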

But here's where we really go off the deep end:
But if a front-end server dies at Facebook... well, so what, Coglitore seemed to say. "The microserver model is extremely attractive," he said. "I've said this before: it's foot soldiers, the Chinese army model. When you go into these battles, you like to have cannon fodder to some degree, an overwhelming force and the ability to lose large numbers of them and not affect the end-user experience. When you have a realized environment, you can do that. It's hard to do that with a virtualized environment."
Nobody today deploys a SINGLE large server supporting all VMs--putting all of your eggs in one basket, which is the strawman Coglitore seems to be alluding to.  We use a cluster or cloud of identical servers, each of which can run VMs.  When you have an army of virtual servers managed by technologies like VMware's HA and vMotion, if one of them goes down, it is automatically restarted on another machine (and with VMware Fault Tolerance, a lockstep copy takes over within seconds, in the same execution state the failed VM was in).  If an entire physical server goes down, all of its VMs find new homes on other physical machines, and processing is hardly interrupted.  Virtualized environments allow you to dynamically reallocate processing resources to accommodate whatever the overall computing environment demands, across the whole array of virtual machines.  The Chinese army analogy supports virtualized cloud/cluster environments, not old-school bare-metal environments of the sort Coglitore is advocating!
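If the failover logic sounds abstract, here's a toy sketch of the concept--not VMware's actual algorithm, just invented numbers--showing VMs finding new homes when a host dies:

```python
# Toy sketch of cluster failover (illustrative only): when a host dies,
# its VMs are rescheduled onto surviving hosts that have free capacity.
hosts = {"host-a": 32, "host-b": 32, "host-c": 32}  # free RAM in GB
placement = {"vm1": "host-a", "vm2": "host-a", "vm3": "host-b", "vm4": "host-c"}
vm_ram = {"vm1": 8, "vm2": 8, "vm3": 16, "vm4": 4}

def fail_host(dead):
    del hosts[dead]
    for vm, host in list(placement.items()):
        if host == dead:
            # pick the surviving host with the most free RAM
            target = max(hosts, key=hosts.get)
            if hosts[target] < vm_ram[vm]:
                raise RuntimeError("no capacity left for " + vm)
            hosts[target] -= vm_ram[vm]
            placement[vm] = target
            print(vm, "restarted on", target)

fail_host("host-a")  # vm1 and vm2 find new homes; users barely notice
```

The army survives losing a soldier--which is exactly the property Coglitore claims is hard in a virtualized environment.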

To achieve the level of redundancy in a virtualized environment, Facebook would have to deploy a standby node, which takes away the cost advantage, Coglitore said. "I'd have to keep multiple large-pipe pieces of hardware in my environment, where I'd prefer to keep little segments," with a load balancer directing traffic to the smaller computers, he said.

Again, virtualized cloud/cluster environments do not have idle standby hardware, so this argument is bunk.  The only differences with the virtualized environment are that you can subdivide your 4-, 8-, or 16-node cluster into many more virtual machines; allocate resources asymmetrically between them (a web server VM may only need a small RAM allocation vs. a database that needs more); allocate those resources on demand (vs. having to guess right the first time with a physical hardware implementation); and end up with a more robust architecture overall, because technologies like vMotion and Fault Tolerance let you evacuate or fail over VMs without losing state.  The VMs themselves can be the "little segments".
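Here's what that asymmetric carving looks like in a quick sketch (all sizes invented for illustration):

```python
# Sketch: one physical host carved into asymmetric "little segments",
# resized on demand instead of being fixed at hardware-purchase time.
host_ram_gb = 64

vms = {
    "web-1": 4,   # web servers need little RAM...
    "web-2": 4,
    "web-3": 4,
    "db-1": 40,   # ...while the database gets the lion's share
}

def resize(vm, new_gb):
    """Grow or shrink a VM's share on demand--no screwdriver required."""
    used = sum(gb for name, gb in vms.items() if name != vm)
    if used + new_gb > host_ram_gb:
        raise ValueError("host is out of RAM; migrate a VM elsewhere")
    vms[vm] = new_gb

resize("web-1", 8)  # traffic spike? double that web server's allocation
print(vms, "free:", host_ram_gb - sum(vms.values()))
```

Guess wrong with physical microservers and you're buying new hardware; guess wrong here and you change one number.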

There is also another dark side to these microserver environments that was learned with the older blade systems: because they share common backplanes and power supplies, you can lose a whole bank of servers all at once--a near impossibility with traditional servers that are completely self-contained.  Blade systems were notorious for a bad firmware update wiping out a whole rack of blades, or a power supply failure taking down an entire enclosure.  I believe this is why Google does not use blades.  The denser you get, the greater this vulnerability becomes.  What's your high-availability plan when 16 physical servers become doorstops at once because a power supply fried them?  You don't have this problem with a cluster of better-made, higher-end servers running VMs.
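A little back-of-envelope arithmetic--with an invented failure rate--shows how a shared power supply multiplies the blast radius:

```python
# Back-of-envelope blast-radius comparison (illustrative numbers only):
# a shared power supply turns one component failure into N server failures.
servers_per_chassis = 16
psu_annual_failure_rate = 0.02  # hypothetical 2% per year

# Self-contained servers: one PSU failure takes out exactly one machine.
standalone_expected_loss = psu_annual_failure_rate * 1

# Shared-backplane microservers: the same failure takes out the whole bank.
chassis_expected_loss = psu_annual_failure_rate * servers_per_chassis

print("standalone:", standalone_expected_loss, "servers/yr per PSU")
print("chassis:   ", chassis_expected_loss, "servers/yr per PSU")
```

Same component, same failure rate, sixteen times the damage.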

I predict that within the next two years Facebook will suffer some major system downtime due to these test-in-production practices.  I didn't mean for this article to sound like a pitch for virtualization, but the arguments being made for buying a bunch of microservers instead of running virtual machines were ridiculous.  Maybe Intel compensated him for his appearance.
