Members of the Google Brain team today launched the TensorBoard API to allow people training AI models to create visualizations of training as it happens. As part of the effort, the TensorBoard dashboard also got an upgrade. TensorBoard has been part of TensorFlow since it was open-sourced by Google in 2015.
“However, in the absence of reusable APIs, adding new visualizations to TensorBoard was prohibitively difficult for anyone outside of the TensorFlow team, leaving out a long tail of potentially creative, beautiful and useful visualizations that could be built by the research community,” software engineers Chi Zeng and Justine Tunney wrote in a Google Research blog post. “To allow the creation of new and useful visualizations, we announce the release of a consistent set of APIs that allows developers to add custom visualization plugins to TensorBoard.”
To demonstrate the plugin system, Google built the Greeter plugin, which collects and displays greetings during model runs. A list of available TensorBoard plugins for things like audio, images, and precision-recall curves can be found on GitHub.
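The Greeter plugin's basic data flow — accumulating tagged strings as a model runs, then reading them back for display in the dashboard — can be illustrated with a plain-Python sketch. The class and method names below are hypothetical stand-ins, not the actual TensorBoard plugin API:

```python
# Hypothetical sketch of the Greeter plugin's data flow. In real
# TensorBoard, a summary op would serialize each greeting to an event
# file on disk, and the plugin backend would read those events and
# serve them to the dashboard frontend.

class GreeterLog:
    """Accumulates greeting strings emitted during a model run."""

    def __init__(self):
        self._events = []  # list of (step, greeting) tuples

    def write_greeting(self, step, guest):
        # Stand-in for writing a summary at a given training step.
        self._events.append((step, f"Hello, {guest}!"))

    def read_all(self):
        # Stand-in for the plugin backend reading logged events
        # so the frontend can render them.
        return list(self._events)


# Simulated training loop emitting a greeting at each step.
log = GreeterLog()
for step, guest in enumerate(["Ada", "Grace", "Alan"]):
    log.write_greeting(step, guest)

print(log.read_all())
# → [(0, 'Hello, Ada!'), (1, 'Hello, Grace!'), (2, 'Hello, Alan!')]
```

The real plugin API separates these same concerns: a summary op that logs data during training, and a backend/frontend pair that reads and visualizes it.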
The TensorBoard API is the latest initiative from Google to open-source machine learning tools and encourage the adoption of AI.
Last month, the TensorFlow and AIY (AI+DIY) teams from Google open-sourced speech recognition datasets to allow people to create their own basic voice commands for a range of smart devices. In June, weeks after the launch of TensorFlow Lite for running AI models on mobile devices, Google open-sourced MobileNets, pre-trained computer vision models made especially for smartphones.