
Google Cloud Platform today introduced a public beta for Cloud TPU v2 and Cloud TPU v3 pods. Connected by a toroidal mesh network across multiple racks, a TPU pod can contain more than 1,000 tensor processing units.

TPU pod slices are also available in configurations as small as 16 TPU chips.

Liquid-cooled Cloud TPU v3 pods can deliver more than 100 petaflops of computing power. By comparison, Cloud TPU v2 pods, released in alpha last year, can achieve 11.5 petaflops.
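From a developer's point of view, a pod slice is addressed much like any other Cloud TPU. The snippet below is a minimal, hypothetical sketch of pointing a TensorFlow 2.x training script at a pod slice with TPUStrategy; the TPU name "my-tpu-pod-slice" and the toy Keras model are illustrative assumptions, not part of Google's announcement.

# Hypothetical sketch: connect a TensorFlow 2.x program to a Cloud TPU pod slice.
# "my-tpu-pod-slice" is a placeholder for the name of a TPU you have provisioned.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu-pod-slice")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model built inside the strategy scope is replicated
    # across all TPU cores in the slice; this tiny model is only an example.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

Because the strategy handles replication across the chips in the slice, the same script can, in principle, move from a 16-chip slice to a larger slice by changing only the TPU it connects to.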

Google began developing its tensor processing units (TPUs) years ago to handle demanding machine learning tasks. The programmable custom chips have been used for AI workloads at Google since 2015, and in recent years they have increasingly been made available to researchers and developers who need high-speed training of AI models.

The news was announced today at Google’s annual I/O developer conference in Mountain View, California.

The second-generation TPU made its debut in 2017, and the liquid-cooled TPU v3 was teased at I/O last year.

Other news announced today includes the debut of the Nest Hub Max, Google Assistant upgrades, the Pixel 3a smartphone, and the launch of Android Q beta 3.

Also new today for developers: ML Kit, which brings AI to Android and iOS apps, gained object detection and language translation for 59 languages, using the same models that power the Google Translate app.

AutoML Video Intelligence and AutoML Vision Edge were also introduced today, following the launch of Anthos for hybrid computing last month and the releases of the TensorFlow 2.0 alpha and TensorFlow Lite 1.0 in March.
