
In a preprint paper published on the web this week, researchers affiliated with Microsoft Research Asia and the University of Science and Technology of China propose what they claim is a state-of-the-art AI technique for video enhancement and upscaling (i.e., boosting the resolution of footage while retaining quality). Their model (and others like it) could be of use to virtually any person with low-quality archival footage, including corporate video teams looking to incorporate historical clips into new material.

It comes on the heels of other AI work that promises to improve video quality. In a paper, scientists at the University of Rochester, Northeastern University, and Purdue University proposed a framework that generates high-resolution, slow-motion video from low-frame-rate, low-resolution input. They claimed their approach was three times faster than previous leading models.
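To make the space-time super-resolution task concrete, here is a deliberately naive baseline, not the researchers' learned framework: it doubles the frame rate by blending adjacent frames and upscales each frame with nearest-neighbor interpolation. The function name and the clamped integer scale factor are illustrative assumptions.

```python
import numpy as np

def naive_space_time_upsample(frames, scale=2):
    """Naive space-time upsampling baseline (NOT a learned method):
    insert a blended frame between each pair of frames to double the
    frame rate, then nearest-neighbor upsample each frame spatially.
    frames: (T, H, W) float array of grayscale frames."""
    # Temporal: average adjacent frames as crude slow-motion in-betweens.
    mids = (frames[:-1] + frames[1:]) / 2.0
    slow = np.empty((2 * len(frames) - 1,) + frames.shape[1:], frames.dtype)
    slow[0::2] = frames   # keep the original frames at even indices
    slow[1::2] = mids     # interleave the blended frames at odd indices
    # Spatial: nearest-neighbor upscaling by an integer factor.
    return slow.repeat(scale, axis=1).repeat(scale, axis=2)
```

A learned model replaces both steps, synthesizing in-between frames and high-frequency spatial detail instead of merely averaging and repeating pixels.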

In the same vein, this latest technique aims to recover high-resolution details from noisy, low-resolution frames using two components. A module called Separate Non-Local models the relations among video frames and fuses them efficiently, while a channel attention residual block captures the relations among feature maps (the intermediate outputs of a convolutional network's layers) for video frame reconstruction. The model, dubbed VESR-Net (for "video enhancement and super resolution"), takes seven consecutive frames as input to reconstruct the middle frame.
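The two ideas above can be sketched in a few lines of NumPy. This is an illustrative approximation, not the authors' implementation: the clamped boundary handling in `sliding_windows` and the bottleneck shapes in `channel_attention` are assumptions, and a real channel attention block would sit inside a residual convolutional block.

```python
import numpy as np

def sliding_windows(num_frames, span=7):
    """Seven-frame input windows: each target frame is reconstructed
    from itself plus three neighbors on either side. Clamping at clip
    boundaries is an assumption; the paper may pad differently."""
    half = span // 2
    return [[min(max(i, 0), num_frames - 1)
             for i in range(t - half, t + half + 1)]
            for t in range(num_frames)]

def channel_attention(feature_maps, w1, w2):
    """Channel attention over a (C, H, W) feature tensor: pool each
    channel to a scalar, pass it through a small bottleneck
    (w1: (C//r, C), w2: (C, C//r)), and rescale every channel by the
    resulting sigmoid gate."""
    pooled = feature_maps.mean(axis=(1, 2))        # (C,) global average pool
    hidden = np.maximum(w1 @ pooled, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # one gate per channel
    return feature_maps * gate[:, None, None]      # reweight each channel
```

In a residual block, the attended features would be added back to the block's input, i.e. `output = x + channel_attention(conv(x), w1, w2)`, so the network only has to learn a correction to the identity mapping.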

In experiments, VESR-Net was fed 1,000 video clips from a data set (50 of which were used for evaluation; the rest were reserved for training) and trained on a machine with four Nvidia GTX 1080 Ti graphics cards. The researchers submitted it to the public Youku-VESR challenge, which saw 1,500 registered teams submit video super-resolution algorithms along with code, executables, and fact sheets. They say it ranked first in the competition, with a score 0.2 points higher than the second- and third-place teams.

Increasingly, researchers are using AI to transform historical footage, like the Apollo 16 moon landing and the Lumière brothers' 1896 film "Arrival of a Train at La Ciotat Station," into high-resolution, high-frame-rate video that looks as though it was shot with modern equipment. It's a boon for preservationists, and as a bonus, the same techniques can be applied to footage for security screening, television production, filmmaking, and other such scenarios.

Such upscaling approaches have been applied in the video game domain, for instance. Fans of Final Fantasy recently used a $100 piece of software called A.I. Gigapixel to improve the resolution of Final Fantasy VII's backdrops. And it was revealed this week that the EA team charged with remastering Command & Conquer employed AI to upscale the game's cinematics.
