After a period of reliability, Twitter has been struggling with uptime for the last couple of months, thanks in part to a surge of World Cup-related traffic. Now the company has announced that it will open a data center in the Salt Lake City area to provide a long-term fix for these issues.
Twitter has strained against the capacity of its hosting providers in the past. For example, it ditched Joyent for NTT at the beginning of 2008, presumably over uptime issues, and then its rapidly growing traffic spurred NTT to expand its own data centers last year.
The company says it will continue working with NTT while opening more data centers of its own over the next 24 months. In the announcement blog post, Twitter identifies four main benefits to the move:
- “First, Twitter’s user base has continued to grow steadily in 2010, with over 300,000 people a day signing up for new accounts on an average day. … Having dedicated data centers will give us more capacity to accommodate this growth in users and activity on Twitter.”
- “Second, Twitter will have full control over network and systems configuration, with a much larger footprint in a building designed specifically around our unique power and cooling needs. Twitter will be able to define and manage to a finer grained SLA on the service as we are managing and monitoring at all layers.”
- “Third, having our own data center will give us the flexibility to more quickly make adjustments as our infrastructure needs change.”
- “Finally, Twitter’s custom data center is built for high availability and redundancy in our network and systems infrastructure.”