The Internet was built to withstand nuclear attack. That is why it was created in the ’60s in the first place: as a communications system with enough redundancy that the military could keep communicating even if one of the nodes went down.

We saw some of that happen today, as news of Michael Jackson’s death spread like wildfire across the Internet. TMZ.com got the scoop that Jackson had been taken to the hospital, but the site went down under the surge of traffic. The LA Times reported he was in a coma, and then that site went down too. The LA Times managed to report that Jackson was dead, and everyone else started buzzing about it. Twitter went down. Keynote Systems, which measures web site performance, said the following sites all slowed significantly: ABC, AOL, LA Times, CNN Money and CBS. Starting at 2:30 pm PST, the average load time for a news site rose from 4 seconds to 9 seconds.

This is not supposed to happen. More than a decade ago, when I was writing about computer servers and Sun Microsystems was advertising itself as “We’re the dot in dotcom,” the hardware vendors were all talking about “utility computing.” Carly Fiorina, then the chief executive of HP, touted “adaptive computing,” in which software would automatically route traffic from an overloaded server to another. Sun called its version of utility computing “N1,” after the code name for a project aimed at rebalancing server loads on the fly. IBM, meanwhile, pitched a vision it called “on demand” computing.
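
Strip away the marketing and the core of all three visions was the same: put a traffic cop in front of a pool of servers and steer each request to the least-busy machine. Here is a rough sketch of that idea in Go; the backend addresses are made up, and the real vendor products layered far more on top (health checks, CPU and memory metrics, policy engines), but the routing decision at the center looks roughly like this.

```go
// A minimal sketch of "route around the overloaded box." The backend
// addresses are hypothetical; a real utility-computing product tracked far
// more than in-flight requests, but the core is least-loaded selection
// in front of a pool of servers.
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

type backend struct {
	inflight int64 // requests currently being served by this backend
	proxy    *httputil.ReverseProxy
}

func newBackend(raw string) *backend {
	u, _ := url.Parse(raw) // error handling omitted in this sketch
	return &backend{proxy: httputil.NewSingleHostReverseProxy(u)}
}

func main() {
	// Hypothetical application servers behind the front door.
	pool := []*backend{
		newBackend("http://10.0.0.1:8080"),
		newBackend("http://10.0.0.2:8080"),
		newBackend("http://10.0.0.3:8080"),
	}

	http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Pick the backend with the fewest requests in flight right now.
		best := pool[0]
		for _, b := range pool[1:] {
			if atomic.LoadInt64(&b.inflight) < atomic.LoadInt64(&best.inflight) {
				best = b
			}
		}
		atomic.AddInt64(&best.inflight, 1)
		defer atomic.AddInt64(&best.inflight, -1)
		best.proxy.ServeHTTP(w, r)
	}))
}
```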

These visions were great, and they all made sense based on an understanding of traffic as a flow of data. Companies such as Akamai set up networks to deliver video in real time for events such as Victoria’s Secret’s annual lingerie show on the web. In years past, Victoria’s Secret had lots of trouble keeping its site up. Now it’s not as hard: Akamai operates server centers around the country that feed video to users as needed. But today we’re talking about the need to update in microseconds.

Servers have gotten better at being multi-headed beasts, especially with the arrival of hardware innovations such as low-power processors and chips that put multiple cores, or processing engines, on a single piece of silicon. Virtualization software from VMware and others allows a single server to split itself into two, three or more virtual machines, much as the old mainframe computers were carved up to run separate workloads. Each instance of the server can handle a computing task, like fetching a web page from memory and sending it back to the user who requested it. Servers have become like hydras, doing all sorts of these trivial computing tasks at the same time.
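
To picture what one of those hydra heads is actually doing, here is a minimal sketch in Go of a server answering many trivial requests at once out of an in-memory page cache. The pages are invented for illustration; the point is only that each incoming request gets its own lightweight thread of execution, which is the many-small-tasks-at-once behavior described above.

```go
// A minimal sketch of the "hydra" behavior: one process serving many
// requests concurrently from pages held in memory. The page contents
// below are invented for illustration.
package main

import (
	"net/http"
	"sync"
)

var (
	mu    sync.RWMutex
	pages = map[string]string{
		"/":     "<html><body>Front page</body></html>",
		"/news": "<html><body>Breaking news</body></html>",
	}
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		mu.RLock()
		body, ok := pages[r.URL.Path] // fetch the page from memory
		mu.RUnlock()
		if !ok {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "text/html")
		w.Write([]byte(body)) // send it back to the user who requested it
	})
	// The standard library hands each request its own goroutine, so these
	// trivial tasks run side by side without any extra plumbing here.
	http.ListenAndServe(":8080", nil)
}
```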

And yet networks still buckle under the weight of traffic when something like today’s events shakes the whole world. Mobile networks are particularly weak, as AT&T’s activation problems related to the launch of the iPhone 3G S showed. In some ways, the servers worked today. As one site went down, another picked up the torch. But the transitions were rocky. The promise of utility computing is that you will be able to switch on and off server capacity as if you were switching on and off your lights.
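
If you wrote that light-switch promise down as code, it would look something like a control loop that watches load and powers servers up or down. The startInstance and stopInstance calls and the thresholds below are hypothetical stand-ins, since no common provisioning API existed across the vendors; this is only a sketch of the shape of the thing.

```go
// A rough sketch of elastic capacity as a control loop. startInstance,
// stopInstance and currentLoad are hypothetical placeholders for a real
// provisioning API and metrics feed; the thresholds are arbitrary.
package main

import (
	"fmt"
	"time"
)

// currentLoad would read a real metric such as requests per second per
// instance; here it is a stub.
func currentLoad() float64 { return 0.87 }

func startInstance() { fmt.Println("powering on another server") }
func stopInstance()  { fmt.Println("powering one down") }

func main() {
	instances := 2
	for range time.Tick(30 * time.Second) {
		switch load := currentLoad(); {
		case load > 0.8: // overloaded: switch more capacity on
			startInstance()
			instances++
		case load < 0.2 && instances > 1: // idle: switch capacity off
			stopInstance()
			instances--
		}
		fmt.Printf("load %.2f, %d instances running\n", currentLoad(), instances)
	}
}
```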

And that leads me to consider the future. As tragic as Michael Jackson’s death is, it’s only a small taste of what would happen in a true calamity. If the servers go down, how are we going to get our Gmail or Yahoo Mail? Who will be there to listen when we collectively Tweet for help? What will we do if the emergency plan is stored on the network?

It’s a wake-up call for the web, and for those who are building its infrastructure and plumbing.
