OpenAI today launched Microscope, a library of neuron visualizations starting with nine popular or heavily studied neural networks. In all, the collection encompasses millions of images. Much like a microscope in a laboratory, Microscope is made to help AI researchers better understand the architecture and behavior of neural networks with tens of thousands of neurons.
Initial models in Microscope include historically important and commonly studied computer vision models like AlexNet, the 2012 winner of the now retired ImageNet challenge. AlexNet has been cited over 50,000 times in research. There's also 2014 ImageNet winner GoogLeNet (aka Inception v1), as well as ResNet v2. Each model visualization comes with a handful of scenarios, and images are available in the OpenAI Lucid library for reuse under a Creative Commons license.
“While we’re making this available to anyone who’s interested in exploring how neural networks work, we think the primary value is in providing persistent, shared artifacts to facilitate long-term comparative study of these models. We also hope that researchers with adjacent expertise — neuroscience, for instance — will find value in being able to more easily approach the internal workings of these vision models,” OpenAI said on the Microscope website.
In a blog post introducing Microscope this morning, OpenAI said it hopes the tool will contribute to the Circuits collaboration, an effort to reverse-engineer neural networks by understanding the connections among their neurons.
Microscope is not the first effort of its kind: several other projects in recent years have attempted to visualize the activity of machine learning models.
Introduced last fall, Facebook’s Captum uses visualizations to explain decisions made by machine learning models, while in March 2019 OpenAI and Google released the activation atlases technique for visualizing decisions made by machine learning algorithms. There’s also TensorBoard, a popular tool for visualization during the training of machine learning models.