Facebook today announced that Microsoft has expanded its participation in PyTorch, the social network’s machine learning framework, to take ownership of the development and maintenance of the PyTorch build for Windows. The intent is to bring the experience on Windows in line with other platforms, like Linux; historically, PyTorch on Windows has lagged behind due to a lack of test coverage, a convoluted installation experience, and missing functionality.
PyTorch, which Facebook publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework built on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer community. It claimed one of the top spots for fastest-growing open source projects last year, according to GitHub's 2018 Octoverse report, and Facebook recently revealed that in 2019 the number of contributors to the platform grew more than 50% year-over-year to nearly 1,200.
“According to the latest Stack Overflow developer survey, Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self,” Facebook and Microsoft wrote in a jointly authored blog post.
In perhaps a sign of things to come, Microsoft earlier this year released a preview adding graphics card compute support to Windows Subsystem for Linux (WSL) 2, which over 3.5 million monthly active developers use to run Linux-based tools on Windows. The preview explicitly added support for AI and machine learning applications, enabling PyTorch training workloads across hardware in the Windows ecosystem, including Nvidia cards with CUDA cores.
Facebook says it will work with Microsoft to continue to improve the quality of the PyTorch build for Windows, chiefly by bringing test coverage up to par. Microsoft will also maintain relevant binaries and libraries (like TorchVision, TorchText, and TorchAudio) and support the PyTorch community on GitHub as well as the PyTorch Windows discussion forums.
“We will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement,” Facebook and Microsoft wrote.
In related news, Facebook also said that it has moved mixed-precision functionality into PyTorch core, which supports Windows. While PyTorch trains with 32-bit floating point (FP32) arithmetic by default, Facebook notes this isn’t essential to achieve full accuracy for many deep learning models. Mixed-precision training, a technique Nvidia researchers introduced in 2017, combines single-precision (FP32) and half-precision (e.g. FP16) formats; it achieves the same accuracy as FP32 training while delivering performance benefits on Nvidia graphics cards, such as shorter training times and lower memory requirements.
PyTorch 1.6 — the latest release — can automatically convert certain graphics card operations from FP32 precision to mixed precision. Facebook claims it delivers a 1.5 times to 5.5 times speedup over FP32 on an Nvidia V100 card while converging to the same final accuracy.
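In practice, PyTorch 1.6 exposes this through the `torch.cuda.amp` module: an `autocast` context that runs eligible operations in FP16, and a `GradScaler` that scales the loss to avoid FP16 gradient underflow. Below is a minimal sketch of a mixed-precision training step; the model, data, and hyperparameters are hypothetical placeholders, and the sketch falls back to plain FP32 when no CUDA device is available.

```python
import torch
from torch import nn

# Placeholder model and data for illustration only; the AMP API
# (torch.cuda.amp.autocast / GradScaler) is the real PyTorch 1.6+ interface.
device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # mixed precision targets Nvidia GPUs

model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Ops inside autocast run in FP16 where it is numerically safe,
    # and stay in FP32 elsewhere (e.g. reductions, losses).
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = nn.functional.cross_entropy(model(inputs), targets)
    # Scale the loss before backward so small FP16 gradients don't
    # underflow to zero; unscale automatically before the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

With `enabled=False`, both `autocast` and `GradScaler` become no-ops, so the same training loop runs unchanged in FP32 on machines without a compatible GPU.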