Google today said that on some Android devices it has started showing high-quality images at lower file sizes on its Google+ social network. To achieve this, Google is using a technique called Rapid and Accurate Image Super Resolution, or RAISR (as in razor-sharp).
Google researchers laid out the RAISR method in a paper last year, noting that it can have the added benefits of sharpening images and increasing their contrast, while also avoiding compression artifacts. Recently Google started using RAISR in its Motion Stills iOS app to sharpen videos based on Live Photos when people are ready to export them.
Now the system is being incorporated into a more widely used Google service, and it can cut the bandwidth needed for a single image by up to 75 percent, which means images load faster and consume less data, Google+ product manager John Nack wrote in a blog post.
“While we’ve only begun to roll this out for high-resolution images when they appear in the streams of a subset of Android devices, we’re already applying RAISR to more than 1 billion images per week, reducing these users’ total bandwidth by about a third,” Nack wrote.
Google has been very active in improving its services with deep learning, a type of artificial intelligence that involves training artificial neural networks. But that’s not what’s going on here, even though other researchers have used convolutional neural networks for super-resolution. RAISR does draw on machine learning, however: as the researchers wrote in the original paper, it applies “a set of pre-learned filters on the image patches, chosen by an efficient hashing mechanism,” and then blends the initially upscaled image with the filtered version. Even without a deep network, the Google researchers found RAISR to be faster than other super-resolution algorithms.
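To make the approach concrete, here is a minimal sketch of the hashed-filter idea in Python with NumPy. It is not Google's implementation: the nearest-neighbor upscale, the gradient-angle hash, and the identity placeholder filters are all simplifying assumptions (real RAISR hashes on gradient angle, strength, and coherence, and its filters are learned from low/high-resolution training pairs).

```python
import numpy as np

def cheap_upscale(img, s=2):
    # Nearest-neighbor upscaling stands in for the cheap interpolation step.
    return np.kron(img, np.ones((s, s)))

def patch_hash(patch, n_buckets=8):
    # Hash a patch by its dominant gradient angle -- a simplified stand-in
    # for RAISR's angle/strength/coherence hashing.
    gy, gx = np.gradient(patch)
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / np.pi * n_buckets) % n_buckets

def raisr_like_filter(img, filters, patch=5, blend=0.5):
    """Upscale cheaply, apply a per-patch filter chosen by hashing,
    then blend the filtered result with the plain upscale."""
    up = cheap_upscale(img)
    out = up.copy()
    r = patch // 2
    for y in range(r, up.shape[0] - r):
        for x in range(r, up.shape[1] - r):
            p = up[y - r:y + r + 1, x - r:x + r + 1]
            f = filters[patch_hash(p)]          # pick filter by hash
            out[y, x] = float(np.sum(p * f))    # filtered pixel
    return blend * up + (1 - blend) * out       # blend step

# Placeholder "learned" filter bank: identity kernels, so the output here
# simply equals the cheap upscale. Trained filters would sharpen instead.
identity = np.zeros((5, 5))
identity[2, 2] = 1.0
filters = [identity.copy() for _ in range(8)]

low = np.arange(16, dtype=float).reshape(4, 4)
high = raisr_like_filter(low, filters)
```

Because the filters are pre-learned and chosen by a cheap hash rather than computed by a deep network at inference time, the per-pixel cost is just one small convolution plus a blend, which is what makes the method fast enough to run on-device.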
“In the coming weeks we plan to roll this technology out more broadly — and we’re excited to see what further time and data savings we can offer,” Nack wrote.