The world’s most popular open source framework for machine learning is getting a major upgrade today with the alpha release of TensorFlow 2.0. Created by the Google Brain team, the framework is used by developers, researchers, and businesses to train and deploy machine learning models that make inferences about data.
A full release is scheduled to take place in Q2 2019.
The news was announced today at the TensorFlow Dev Summit being held at the Google Event Center in Sunnyvale, California. Since the launch of TensorFlow in November 2015, the framework has been downloaded over 41 million times and now has over 1,800 contributors from around the world, said TensorFlow engineering director Rajat Monga.
TensorFlow is among the open source projects with the largest number of contributors on GitHub, according to the 2018 Octoverse report.
TensorFlow 2.0 will rely on tf.keras as its central high-level API to simplify use of the framework. Integration with the Keras deep learning library began with the release of TensorFlow 1.0 in February 2017.
A number of APIs seen as redundant — such as the Slim and Layers APIs — will be eliminated.
“In 2.0, we just sort of decided OK, we’re just going to stick to Keras — not have two different APIs that you can do almost the same things [with]. And so Keras is front and center, and all the other APIs go away,” Monga said.
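With tf.keras as the single high-level API, model building, compilation, and training all go through the Keras interface. A minimal sketch of what that looks like — the layer sizes and hyperparameters here are illustrative, not from the announcement:

```python
import tensorflow as tf

# Define a small classifier entirely through tf.keras, the single
# high-level API in TensorFlow 2.0 (layer sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Compilation and training also go through the Keras interface,
# rather than a separate API such as Slim or Layers.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```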
TensorFlow 2.0 also brings runtime improvements for eager execution, an imperative interface for experimentation and research with machine learning that was first introduced last year. TensorFlow 2.0 is “eager-first,” meaning it uses eager execution by default, so ops run immediately when they’re called.
“We used to work with only graphs, and then about a year ago we launched Eager execution, in addition to graphs. So with 2.0, we’ve really put that front and center and said, OK, you can combine these two, which gives you the flexibility and ease of use of Python, along with really nice APIs,” Monga said.
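The combination Monga describes — eager execution by default, with graphs still available — can be sketched as follows. In 2.0, ops return concrete values immediately, and the same Python function can be compiled into a graph with the tf.function decorator when performance matters:

```python
import tensorflow as tf

# Eager execution is the default in 2.0: ops run immediately and
# return concrete values, like ordinary Python.
x = tf.constant([[1.0, 2.0]])
y = x * 3.0          # evaluated right away, no session needed
print(y.numpy())     # [[3. 6.]]

# The same Python function can be traced into a graph with
# tf.function, combining eager's ease of use with graph performance.
@tf.function
def scale(t):
    return t * 3.0

z = scale(x)         # compiled into a graph on first call
```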
To help developers and people interested in learning how to use TensorFlow 2.0, training courses from Sebastian Thrun’s Udacity and Andrew Ng’s deeplearning.ai are being launched today.
Thrun and Ng teach popular online learning courses for machine learning that have attracted hundreds of thousands of users.
A Fast.ai course was also introduced today for TensorFlow with Swift.
The evolution of TensorFlow
It’s been more than two years since Google first made TensorFlow 1.0 publicly available for use, and many changes have taken place to support the work of AI practitioners in that time.
The most recent major addition may be TensorFlow Datasets, a collection of ready-to-use public research datasets, which was released last week. Roughly 30 popular datasets are available at launch.
Happy 3rd birthday TensorFlow! We've come a long way since the first release in 2015 & TensorFlow wouldn't be the framework it is today without you. As we work on #TensorFlow20, look at all the features we've added over the years to make TensorFlow easier to use. #HappyBirthdayTF pic.twitter.com/hLoHQnQLkn
— TensorFlow (@TensorFlow) November 9, 2018
Monga said that the most significant changes made since the release of 1.0 include TensorFlow Lite; TensorFlow Hub, a central repository for reusable machine learning modules; and the Tensor2Tensor library of deep learning models for researchers. The TensorFlow Probability Python library for researchers using machine learning was also an important step forward, he said.
Google has also gradually opened up access to TensorFlow Extended, a tool used internally at Google that lets developers manage models, preprocess their data, and better understand what’s happening with their models during training.
“Over the last year, we’ve slowly been putting out pieces, and now we’re actually releasing that entire thing as a way to orchestrate that and [let you] really manage your entire ML pipeline together. It really shows the extension to the full platform in being able to do whatever you want with ML,” Monga said.
Introduced in September 2017, TensorBoard lets developers visualize their AI models while they’re training.
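One common way to hook TensorBoard into a training run is the Keras callback, which writes metrics to a log directory that the TensorBoard tool then reads. A minimal sketch — the log directory path here is arbitrary:

```python
import tensorflow as tf

# Write training metrics where TensorBoard can find them; the
# directory path is arbitrary.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")

# Passed to fit(), the callback records losses and metrics each epoch:
#   model.fit(x_train, y_train, epochs=5, callbacks=[tb])
# Then launch the dashboard with:
#   tensorboard --logdir logs/run1
```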