Facebook today introduced PyTorch 1.1 with TensorBoard support and an upgrade to its just-in-time (JIT) compiler. PyTorch creator Soumith Chintala called the JIT compiler change a milestone performance improvement for the deep learning framework.
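The TensorBoard integration ships in `torch.utils.tensorboard` as of 1.1. A minimal sketch of logging a scalar metric during training (the metric name and values below are illustrative, and the `tensorboard` package must be installed alongside PyTorch):

```python
import os
import tempfile

from torch.utils.tensorboard import SummaryWriter

# Write event files to a temporary directory; a real run would use
# something like "runs/experiment_1" and view it with `tensorboard --logdir runs`.
logdir = tempfile.mkdtemp()
writer = SummaryWriter(log_dir=logdir)

for step in range(5):
    # Log a fake, decreasing "loss" value at each step.
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)

writer.close()
files = os.listdir(logdir)  # event files TensorBoard can read
```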

PyTorch is used internally at Facebook to power AI services such as PyText for language understanding tasks. Since it was open-sourced in 2017, PyTorch has become one of the most popular deep learning frameworks in the world.

The 2018 GitHub Octoverse report last fall named PyTorch one of the most popular open source projects on GitHub, a platform used by 31 million developers worldwide.

PyTorch 1.1 comes with new APIs, support for Boolean tensors and custom recurrent neural networks, and an upgrade of the JIT compiler for optimizing computational graphs.
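A quick illustration of the Boolean tensor support: in current releases, comparison operators produce tensors with the `torch.bool` dtype, which can be used directly as masks (a minimal sketch):

```python
import torch

# Comparisons yield a true Boolean dtype (torch.bool) rather than uint8.
x = torch.tensor([1, 2, 3, 4])
mask = x > 2

print(mask.dtype)  # torch.bool
print(x[mask])     # tensor([3, 4])
```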

“We’ve been working closely with Nvidia to basically add all the optimization into our compiler itself. So if you actually write the recurrent neural network that deviates from the norm, that may be users expressing a new idea, and they want to see if they can fire up a better LSTM, or a better RNN of some sort, it will also actually go close to cuDNN speeds, and that means they just are more productive by a huge amount,” Chintala said.

The news was shared today in a keynote address at the second day of the F8 developer conference in San Jose, California.

Facebook’s PyTorch deep learning framework debuted in alpha at F8 last year, and version 1.0 made its public debut last fall.

The alpha version of the compiler shipped in version 1.0 last fall but was not much faster than PyTorch’s standard mode, he said. The new JIT compiler was a highly requested feature among researchers and makers of autonomous driving models, Chintala said. It also brings more Python programming language concepts to PyTorch.

“We actually start supporting more Python concepts like dicts and lists to be shipped to production, so in 1.1 we’re effectively starting to build a Python to production story, which is a bit larger than a PyTorch to production story,” he said.
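The dict and list support he describes can be sketched roughly as follows: a TorchScript function that builds and returns a typed Python dict. The function and names here are illustrative, and the sketch uses modern Python type annotations rather than the comment-based annotations of early TorchScript releases:

```python
from typing import Dict, List

import torch


@torch.jit.script
def count_tokens(tokens: List[str]) -> Dict[str, int]:
    # Ordinary Python dict/list operations, compiled to TorchScript
    # so the function can be serialized and shipped to production.
    counts: Dict[str, int] = {}
    for t in tokens:
        if t in counts:
            counts[t] = counts[t] + 1
        else:
            counts[t] = 1
    return counts


result = count_tokens(["a", "b", "a"])
```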

The JIT compiler is now able to determine at runtime how to generate the most efficient code. Chintala expects JIT compiler changes to deliver better performance for custom RNN models.

“The problem people had was whenever they deviated from the norm, their code would be 10 times slower, because the fast version would be massive and we didn’t have high performance cuDNN kernels,” he said.
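The kind of hand-written recurrent loop the JIT targets can be sketched as follows. The cell below is a deliberately simplified stand-in for a custom LSTM, with illustrative names and shapes; compiling it with `torch.jit.script` lets the compiler optimize the pointwise operations inside the loop, which is what narrows the gap with hand-tuned cuDNN kernels:

```python
import torch


@torch.jit.script
def simple_rnn(x: torch.Tensor, h: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # x: (seq_len, batch, hidden) inputs, h: (batch, hidden) initial state,
    # w: (hidden, hidden) recurrent weights. A bare-bones tanh RNN step loop.
    for t in range(x.size(0)):
        h = torch.tanh(x[t] + torch.matmul(h, w))
    return h


x = torch.randn(5, 2, 4)
h = torch.zeros(2, 4)
w = torch.randn(4, 4)
out = simple_rnn(x, h, w)
print(out.shape)  # torch.Size([2, 4])
```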

The release of PyTorch 1.1 coincides with new training courses with Udacity and Fast.ai. New software modules will be introduced as part of the Fast.ai course, including fastai.audio and fastai.vision. Training from Udacity and Fast.ai was also part of the release of 1.0 last fall.

Also new today from Facebook: machine learning experimentation platform Ax and Bayesian optimization package BoTorch, to power parameter tuning and optimization.

Next on the roadmap for PyTorch are quantization, which runs neural networks with fewer bits for faster CPU and GPU performance, and support for letting AI practitioners name the dimensions of their tensors.

“That might seem like a small feature,” Chintala said about dimension naming, “but it’s actually a fundamental feature and changes the way people write their code. It makes it easier for them to do bookkeeping and be more productive.”

PyTorch will also continue to work with projects like PySyft, an initiative to use federated learning to train machine learning systems with PyTorch.

“I think it’s not just Facebook. I think the field in general is looking at this direction pretty seriously, and yeah I think you will absolutely see more effort, more direction, more packages, both in terms of PyTorch and others coming in this direction for sure,” Chintala said.

At the TensorFlow Developer Conference in March, Google introduced the latest version of its popular TensorFlow machine learning framework, as well as TensorFlow Federated for decentralized computation and TensorFlow Privacy for training AI models with privacy guarantees.
