Researchers at the Massachusetts Institute of Technology (MIT) have developed an artificial intelligence (AI) system that can pick out faint, nearly transparent features in dimly lit images and use them to reconstruct the objects they belong to. A blog post published today by MIT News describes a deep neural network — layered mathematical functions loosely mimicking the behavior of neurons in the brain — that can tease those features out of grainy images.
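For readers unfamiliar with the term, a deep neural network is simply a stack of small, trainable functions applied one after another. The snippet below is a minimal, illustrative sketch in plain NumPy, not the researchers' model; the layer sizes, activation function, and random inputs are all arbitrary assumptions.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied between layers.
    return np.maximum(0.0, x)

# A "deep" network is a composition of layers: each layer multiplies its
# input by a weight matrix, adds a bias, and applies a nonlinearity.
rng = np.random.default_rng(0)
layer_sizes = [64, 32, 32, 64]  # arbitrary illustrative sizes
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Pass the input through every hidden layer in turn.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]  # linear output layer

noisy_patch = rng.standard_normal(64)   # stand-in for a grainy image patch
reconstruction = forward(noisy_patch)   # stand-in for the cleaned-up output
print(reconstruction.shape)             # (64,)
```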

George Barbastathis, professor of mechanical engineering at MIT, believes this might have applications in medicine.

“In the lab, if you blast biological cells with light you burn them, and there is nothing left to image,” he told MIT News. “When it comes to X-ray imaging, if you expose a patient to X-rays, you increase the danger they may get cancer. What we’re doing here is — you can get the same image quality but with a lower exposure to the patient. And in biology, you can reduce the damage to biological specimens when you want to sample them.”

To assemble a training corpus, the team sourced 10,000 integrated circuit (IC) designs, each bearing a unique, intricate pattern of horizontal and vertical bars, and used a phase spatial light modulator to reproduce those patterns as transparent, etch-like objects on a glass slide. Photos of all 10,000 patterns taken in the dark were used to “teach” the AI system to reconstruct transparent, obscured objects.
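To make the training setup concrete, here is a hedged PyTorch sketch of how such image pairs could be used: a small image-to-image network learns to map a grainy, low-light camera frame to the known pattern that was displayed. The architecture, layer widths, loss function, and stand-in data are assumptions for illustration, not the network reported in the paper.

```python
import torch
from torch import nn

# Illustrative image-to-image network: maps a noisy, low-light camera frame
# to an estimate of the transparent pattern. Sizes and filters are assumptions.
class TinyReconstructionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

model = TinyReconstructionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: dark_frames play the role of the grainy photos taken in the
# dark, true_patterns the known etch-like patterns shown on the modulator.
dark_frames = torch.rand(8, 1, 64, 64) * 0.05    # very low light levels
true_patterns = (torch.rand(8, 1, 64, 64) > 0.5).float()

for step in range(100):                           # tiny training loop
    optimizer.zero_grad()
    loss = loss_fn(model(dark_frames), true_patterns)
    loss.backward()
    optimizer.step()
```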

The pictures, interestingly, were captured out of focus: the defocus creates ripples in the detected light that signal a given object’s presence to the neural network. The researchers corrected for the resulting blur by incorporating a law of physics that describes how light behaves when a camera is defocused.
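The physical law in question describes how light diffracts as it travels between a transparent object and a defocused camera. One standard way to write that down is the angular-spectrum propagation model; the sketch below uses it to simulate what a defocused camera would record through a transparent phase pattern. The wavelength, pixel pitch, and defocus distance are made-up values, and this is only one plausible way such a model could be folded into the pipeline, not necessarily the paper's exact formulation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex optical field by `distance` in free space.

    Standard angular-spectrum model of diffraction, one common way to
    describe what defocus does to the detected light.
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pixel_pitch)
    fy = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function of free-space propagation (evanescent terms clipped).
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * distance / wavelength
               * np.sqrt(np.clip(arg, 0.0, None)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A transparent "phase object": it changes the light's phase, not its brightness.
phase_pattern = np.zeros((128, 128))
phase_pattern[32:96, 60:68] = 1.0          # a single vertical bar, for example
field_at_object = np.exp(1j * phase_pattern)

# Defocused intensity the camera would record (before any noise). The
# parameter values below are arbitrary assumptions, not the experiment's.
field_at_camera = angular_spectrum_propagate(
    field_at_object, wavelength=633e-9, pixel_pitch=5e-6, distance=0.02)
defocused_image = np.abs(field_at_camera) ** 2
```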

“Invisible objects can be revealed in different ways, but it usually requires you to use ample light,” Barbastathis said. “What we’re doing now is visualizing the invisible objects, in the dark. So it’s like two difficulties combined. And yet we can still do the same amount of revelation.”

After training the model sufficiently, the team validated their work by testing it on a pattern not present in the training set. In darkness, both with and without the physical law embedded, it managed to reconstruct the original transparent pattern accurately. Moreover, when trained on a new dataset of 10,000 images of people, animals, places, and other subjects and fed an image of a transparent etching of a scene, it produced a reconstruction more faithful to the etched pattern than the raw image it was given.
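To make the validation step concrete, here is a small, hypothetical scoring sketch: it compares a reconstruction against a held-out ground-truth pattern using a Pearson correlation, one common choice for this kind of image comparison (the paper's exact metric may differ, and the arrays below are stand-ins).

```python
import numpy as np

def pearson_correlation(reconstruction, ground_truth):
    # Score how closely the reconstructed pattern matches the held-out
    # ground truth; 1.0 means a perfect (linear) match.
    a = reconstruction.ravel() - reconstruction.mean()
    b = ground_truth.ravel() - ground_truth.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in arrays for a held-out test pattern and the network's output.
ground_truth = np.random.rand(64, 64)
reconstruction = ground_truth + 0.1 * np.random.randn(64, 64)
score = pearson_correlation(reconstruction, ground_truth)
print(f"correlation with held-out pattern: {score:.3f}")
```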

“We have shown that deep learning can reveal invisible objects in the dark,” Alexandre Goy, a lead author on the paper, told MIT News. “This result is of practical importance for medical imaging to lower the exposure of the patient to harmful radiation, and for astronomical imaging.”