“Life is awful; it just sucks,” CloudFlare chief executive officer Matthew Prince jokes.
His company is now processing 150 billion page views every month. Every minute, it generates 30 gigabytes of server log files — records of the pages and images it has sent to web surfers around the globe. Last year, CloudFlare had more traffic than Amazon, Wikipedia, Twitter, Instagram, and Apple combined. Now it serves sites to more people worldwide than the king of online engagement itself, Facebook: 1.5 billion every month.
“We keep growing, despite our best efforts,” Prince told me last week.
CloudFlare is a content-delivery network, among many other things. Add your site to its cloud with a few lines of code, and it will be served by thousands of machines prepositioned all over the globe, faster and more reliably than you could serve it yourself and, for most sites, for free. Want better performance, more analytics, guarantees, and a “2,500 percent service level” agreement? That you’ll pay for.
What the company is focusing on now, Prince told me, is building its own equipment, sourced directly from Quanta. In other words, like Facebook and Google, it’s no longer buying from an original equipment manufacturer like Dell or HP but designing its own servers and having them built by the original assembler in Asia, cutting about a third of the cost, Prince says. Just as important, it’s partnering with up to 1,000 global Internet service providers to put those servers right in their data centers — as close as possible to the people consuming the data.
“It’s like a giant game of Risk,” Prince says as he talks about trying to put servers in Turkey, which is hard, and settling for Bulgaria, which serves as a network gateway to that country. “Increasingly, ISPs are inviting us to take our servers and install them in their data centers. There’s a Chilean ISP that we sent a gigabit of data to every second of every day that is just begging us for servers.”
The key when you’re a gigantic global mover of data is to preposition the data as close as possible to where it’s needed and then replicate it and cache it locally. That not only saves on bandwidth transfer costs but also speeds up access for users. Right now, CloudFlare is in only 23 ISPs. By the end of the year, Prince plans to ramp that to 50 of the world’s largest and then to 1,000 by the end of 2014.
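The preposition-and-cache pattern Prince describes can be sketched as a simple TTL cache sitting at an edge node. This is a hypothetical illustration, not CloudFlare’s actual implementation; the `origin_fetch` callable and the five-minute TTL are assumptions made for the example.

```python
import time

class EdgeCache:
    """Minimal sketch of an edge node's local cache: serve content
    from memory while it is fresh, fall back to the origin when stale."""

    def __init__(self, origin_fetch, ttl_seconds=300):
        self.origin_fetch = origin_fetch  # callable: url -> content
        self.ttl = ttl_seconds
        self.store = {}  # url -> (content, fetched_at)

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None:
            content, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return content  # cache hit: no round trip to the origin
        # Cache miss or stale entry: fetch once, then serve locally.
        content = self.origin_fetch(url)
        self.store[url] = (content, time.time())
        return content
```

After the first request for a URL, every subsequent request within the TTL is answered from the edge, which is exactly where the bandwidth savings and the latency win come from.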
If he accomplishes that goal, he’ll have joined a very, very exclusive group of companies.
“Only two companies in history have pulled that trick: Akamai and Google,” Prince said. “We’re joining that club.”
One country that’s particularly annoying is Australia. Asian Internet traffic is already expensive, Prince said — on the order of five times the cost of U.S. and Western European traffic — and Australian bits cost five times more again to move, making the Australian Internet 25 times more expensive than America’s. Prince blames Australia’s former national telephone company, Telstra, for being noncompetitive.
The other key when your bandwidth requirements are massive and doubling roughly every four months is peering.
“As you get larger, you can start peering traffic off your network,” Prince says. “ISPs are asking us to send traffic across private peering exchanges, and as we move closer to the ISPs, that’s easier.”
Peering lets two networks exchange traffic across each other’s infrastructure at no charge: each carries the other’s data for free in return for the same favor. Interestingly, when it comes to handing traffic off to other companies’ pipes, the U.S. is one of the most difficult countries in the world.
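The economics are easy to sketch as back-of-the-envelope arithmetic. The per-gigabyte transit price and traffic volume below are made-up illustrative numbers, not CloudFlare figures: bytes exchanged at a settlement-free peering point cost nothing, so only the non-peered remainder rides paid transit.

```python
# Hypothetical monthly egress split between settlement-free peering
# and paid transit. All figures are illustrative assumptions.
TRANSIT_PRICE_PER_GB = 0.05    # assumed paid-transit rate, $/GB
MONTHLY_EGRESS_GB = 1_000_000  # assumed total monthly traffic

def transit_cost(egress_gb, peered_fraction):
    """Traffic sent over peering links is free; only the remainder
    is billed at the transit rate."""
    return egress_gb * (1 - peered_fraction) * TRANSIT_PRICE_PER_GB

no_peering = transit_cost(MONTHLY_EGRESS_GB, 0.0)   # all traffic on transit
half_peered = transit_cost(MONTHLY_EGRESS_GB, 0.5)  # half moved to peering
```

Under these assumed numbers, moving half the traffic onto peering links halves the monthly transit bill, which is why growing networks chase peering as aggressively as Prince describes.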
I asked Prince when he sees the company’s massive growth leveling off.
“You’d think with the law of large numbers it would slow down,” he said. “But it doubled in the last two and a half months recently … which is way scary … and it’s only accelerating.”
Image credit: Viewminder/Flickr