Cloudy, with a chance of confusion

Saying I’m a Cloud engineer is like saying I work for Willy Wonka. I usually get the kind of look people make when stumbling over a pile of dog poo on the sidewalk, a blend of surprise and disgust. The land of technology is a dangerous place, littered with acronyms like unexploded ordnance and dual-meaning words like landmines in a demilitarized zone. Navigating this battleground, even for IT professionals, is hazardous and usually results in a casualty of pride when meaning is lost in a dense fog of change. The only constant with technology is confusion. Cloud has become one of the most abused and misunderstood buzzwords of the past few years, and I hope to emancipate it from the confines of Dilbert comic strips and re-establish it at the lofty height of functioning jargon.

Put simply, cloud computing is infrastructure as a service, a phrase which won’t impress at parties, so don’t try it. Think of it like this: renting a movie used to be structured around a product – first videocassettes, then DVDs. Mom-and-pop stores and Hollywood Video and Blockbuster made a killing renting a product. But technology changed. Instead of renting a product, Netflix viewed it as a service – the service of delivering videos to consumers. Mailing a DVD was far cheaper than running a brick-and-mortar store, saving a bundle on overhead. And because their business model was focused on the service of providing customers with things to watch, the business adapted as technology changed, and Netflix started streaming videos. Brick-and-mortar video rental stores have passed into the realm of history books, and we can tell future generations: “why, back in my day you had to go out to rent something, and streaming meant peeing in an alley.”

Cloud computing takes (relatively) inexpensive computers and makes them function as one large computer. All computers, even the largest and most complicated, are really just logic engines. A problem is defined in computational terms (how much money does Jack owe?), a program runs in the computer’s processor (add up all of Jack’s debt using these addition instructions), and an answer comes out (wow, Jack owes more than the GNP of Norway!). While this is an oversimplification, it shows enough of the basic process for us to work with. Let’s say that all of Jack’s debt is spread over thousands of separate accounts. Adding up all of those accounts one at a time will take a while, and let’s face it, that’s time I need to work so I can pay off my debt. If those addition instructions could be split up and given to multiple computers, each adding its own share at once and delivering its result to a master computer that adds all the results together, the problem can be worked much faster. This concept is what makes the Cloud work.
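Here’s that divide-and-conquer idea in miniature – a Python sketch, with made-up account balances, that plays both roles on a single machine: four worker processes stand in for the member computers and each sums its own slice of the accounts, then the master adds the partial results together.

```python
from multiprocessing import Pool

def add_up(chunk):
    """One worker computer sums its slice of Jack's accounts."""
    return sum(chunk)

if __name__ == "__main__":
    # Made-up balances standing in for thousands of real accounts.
    accounts = [37.50, 120.00, 9.99] * 10000

    # Split the job into four slices, one per worker "computer".
    chunks = [accounts[i::4] for i in range(4)]

    with Pool(processes=4) as pool:
        partial_sums = pool.map(add_up, chunks)  # workers add in parallel

    # The master computer adds the partial results together.
    print("Jack owes:", sum(partial_sums))
```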

In the grand old days, computers were astronomically expensive (I think it’s because brave explorers had to fight dinosaurs to find the precious metals used in making big computers). Most specific-purpose computers were custom-designed and ran software customized for their purpose. Where it might take a desktop computer all week to crunch payroll numbers, a big payroll computer could be designed to do it in a few hours. If there’s one thing computer geeks are good at, it’s finding better and cheaper ways to do things. Parallel computing was born from the need to do big jobs in small chunks without buying a stupidly expensive custom-designed machine to do it. The concept is still, at its heart, simple: one computer acts as the master (the Head Node) and divides up the job among its member nodes, which go out, perform their tasks, and report back to the head node. The more member nodes there are, the faster most tasks can be accomplished. Better yet, having all those distributed nodes means our program won’t crash if one or more nodes stop working; the rest of the nodes take over. It works great for storage too – instead of sticking my files on a single hard drive, I can spread them among many. The cloud takes that a step further – files are split into chunks and distributed to member nodes, and often those chunks are triplicated. This makes it faster to read big files (dozens of computers can read their pieces at the same time), and if computers crash or hard drives die, there will be enough copies left to rebuild the missing pieces.
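A toy illustration of that chunk-and-triplicate trick, with the file, chunk size, and node names all invented for the example (real systems like Hadoop’s HDFS use chunks of 64 MB or more, but the bookkeeping looks roughly like this):

```python
import itertools

CHUNK_SIZE = 4  # bytes per chunk; real clouds use 64 MB or more
REPLICAS = 3    # each chunk gets stored on three different nodes

def place_chunks(data, nodes):
    """Split data into chunks and assign each chunk to REPLICAS nodes."""
    placement = {}
    ring = itertools.cycle(range(len(nodes)))
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        start = next(ring)
        # Pick three consecutive nodes (wrapping around) to hold copies,
        # so losing any one node never loses a chunk.
        holders = [nodes[(start + r) % len(nodes)] for r in range(REPLICAS)]
        placement[offset // CHUNK_SIZE] = (chunk, holders)
    return placement

nodes = ["node-a", "node-b", "node-c", "node-d", "node-e"]
for idx, (chunk, holders) in place_chunks(b"my big important file", nodes).items():
    print(f"chunk {idx}: {chunk!r} -> {holders}")
```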

When we talk about cloud, we’re talking about big clusters of inexpensive computers linked together to act as one big, special-purpose computer. Apple’s iCloud and Amazon’s cloud store the software and music and books their members buy. Those clouds are themselves replicated to other data centers, allowing faster regional access. When I’m on the East Coast, I can get to my files from the Amazon data center in Ashburn, Virginia. And when some crazed militia storms the data center and cuts the communication lines, I’ll get my files from the data center in Palo Alto, California.

Aside from distributed storage, most Computing Clouds today are built for analytics or for infrastructure. Analytic clouds are designed to crunch numbers, usually modeling and simulating. The National Institutes of Health uses a cloud to predict the spread of disease and infection, or to simulate protein folding in the pursuit of new drugs. Infrastructure as a service is a tad more complicated. This site is run off a computing infrastructure cloud. Years ago, I’d “rent” a web server in a data center – expensive and not very efficient. Think of your desktop PC: it doesn’t really do much when you’re not using it, and even when you’re surfing the web, you’re only using a tiny fraction of its available resources. Virtualization changes all that by allowing multiple virtual computers to run on a single physical computer, soaking up those idle resources. Each of those virtual machines operates as if it were a real physical computer installed on its own dedicated hardware. That single web server I used to rent can now host multiple sites, each one thinking it has a dedicated web server of its very own. But if that single server crashes, it takes down all the virtual machines with it. Cloud changes that by distributing the load across multiple machines: the failure of one or more servers won’t affect the systems running in the cloud, because they’re distributed. Amazon has a cloud service it rents out to companies that need more computing power.
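To make the distribution idea concrete, here’s a little placement sketch – host and site names invented – showing the one rule that makes a cloud resilient: never put two of a site’s virtual machines on the same physical host, so losing any single server leaves every site with something still running.

```python
# Toy VM scheduler. Host and site names are made up for illustration.
hosts = ["rack1-srv1", "rack1-srv2", "rack2-srv1", "rack2-srv2"]
sites = {"blog": 2, "store": 3, "wiki": 2}  # site -> number of VMs wanted

placement = {host: [] for host in hosts}
for site, vm_count in sites.items():
    for i in range(vm_count):
        # Anti-affinity: skip hosts already running a VM for this site.
        candidates = [h for h in hosts
                      if site not in {vm.rsplit("-", 1)[0] for vm in placement[h]}]
        # Of those, pick the least-loaded host.
        target = min(candidates, key=lambda h: len(placement[h]))
        placement[target].append(f"{site}-vm{i}")

for host, vms in placement.items():
    print(f"{host}: {vms}")

# Simulate a crash: kill any one host and every site still has a VM alive.
dead = "rack1-srv1"
survivors = [vm for host, vms in placement.items() if host != dead for vm in vms]
print(f"still running after {dead} dies: {survivors}")
```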

Rendering CGI is expensive because of the computing power needed. Pixar had to invest in some beefy hardware to make movies, and special effects houses need to run their own data centers. Those machines are constantly crunching when a movie is being made, performing the calculations needed to render special effects or to draw computer-generated scenes. When the movie’s finished, those machines sit idle. And for small companies and most television stations, buying hardware to render special effects is far beyond their budget. Enter cloud computing: now any special effects house can rent computing space in a cloud and render their scenes. My movie about a bacon-monster can be made on a low budget. I can rent part of Amazon’s cloud to render the 80-foot bacon monster in stunning detail.
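From the scripting side, that rented render farm looks something like this sketch – the renderer is a pretend stand-in, but splitting a shot’s frames among worker processes mirrors how a cloud splits them among rented nodes:

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number):
    """Pretend renderer: a real job would invoke the render software here."""
    return f"frame_{frame_number:04d}.png"

if __name__ == "__main__":
    frames = range(1, 241)  # a ten-second shot at 24 frames per second

    # Each rented "node" (here, a local process) renders frames in parallel.
    with ProcessPoolExecutor(max_workers=8) as pool:
        finished = list(pool.map(render_frame, frames))

    print(f"rendered {len(finished)} frames, starting with {finished[0]}")
```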

But I’m pretty sure nobody would watch it.