
Researchers from Nvidia, MIT, and Aalto University are using artificial intelligence to reduce noise in photos. The team trained its photo-reconstruction system on 50,000 images from the ImageNet dataset, and the system can remove noise from an image even though it has never seen a noise-free version of that image.

Named Noise2Noise, the system was built with deep learning and trained on those 50,000 ImageNet images. Each started as a clean, high-quality image and was then corrupted with randomized noise. Computer-generated images and MRI scans were also used to train Noise2Noise.
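The training setup described above can be sketched in a few lines: start from a clean image and corrupt it twice with independent random noise, pairing one noisy copy as the input and the other as the target. This is a minimal illustration, not the paper's exact pipeline; the Gaussian noise model and the `sigma` value here are illustrative assumptions (the researchers also used other noise types).

```python
import numpy as np

def make_noisy_pair(clean, sigma=25.0, rng=None):
    """Corrupt a clean image twice with independent Gaussian noise.

    Noise2Noise-style training pairs a noisy input with a *noisy*
    target rather than with the clean original, so the network
    never needs to see a noise-free image during training.
    """
    rng = rng or np.random.default_rng()
    noisy_input = clean + rng.normal(0.0, sigma, clean.shape)
    noisy_target = clean + rng.normal(0.0, sigma, clean.shape)
    return noisy_input, noisy_target

# Stand-in for one ImageNet image (a flat gray 64x64 frame).
clean = np.full((64, 64), 128.0)
x, y = make_noisy_pair(clean, sigma=25.0)
```

A denoising network would then be fit to map `x` toward `y`, with a fresh noisy pair drawn on each pass over the data.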

Denoising or noise reduction methods have been around for a long time now, but methods that utilize deep learning are a more recent phenomenon.

Noise can appear as grainy snow in a photo taken in low light, and it also crops up in other kinds of imagery, such as medical scans, computer-generated images, and pictures of space. Digital camera images taken in low light or with digital zoom often contain noise.


By training Noise2Noise on noisy images alone, the researchers hope the method can be applied to images that inherently contain high amounts of noise, such as astrophotography, that picture you took at dusk that got robbed of its definition, and MRI brain scans.
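The statistical intuition behind learning from noisy images alone can be demonstrated directly: when the noise has zero mean, averaging many independently corrupted copies of a signal recovers the clean signal, which is why a network trained to match noisy targets under a squared-error loss converges toward the clean answer. The toy one-dimensional signal and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 100)   # a stand-in "clean signal"

# Corrupt the same signal many times with independent zero-mean noise,
# then average the noisy copies.
noisy_copies = clean + rng.normal(0.0, 0.5, size=(10_000, 100))
estimate = noisy_copies.mean(axis=0)

# The zero-mean noise averages out, so the estimate sits close
# to the clean signal even though no clean copy was used directly.
print(np.abs(estimate - clean).max())
```

The same cancellation happens implicitly during training: each noisy target pulls the network's output in a random direction, and over many examples those pulls average toward the clean image.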

Above: Left to right: noisy image, denoised image, and original image

Nearly 5,000 images from 50 human subjects in the IXI dataset were used to train Noise2Noise for MRI denoising. The results can appear slightly blurrier than the original image without artificial noise, but they still restore much of the lost definition.

“This is a proof of concept that we trained on a public MRI database, but it might show promise sometime in the future that this can be practically applied,” Nvidia researcher Jacob Munkberg told VentureBeat in a phone interview.

The work will be presented this week at the International Conference on Machine Learning in Stockholm, Sweden.
