
Whenever an Adobe event offers Sneaks — quick previews of what its researchers are working on — there’s a caveat that the “projects” might not wind up in shipping products. But they often do, and if that happens with the company’s just-revealed collection of intriguing features, Adobe users can look forward to considerable AI assistance in the near future, including tools for photography, animation, and audio editing.

On the photographic front, Project Light Right harnesses Adobe’s Sensei AI system to bring time- and date-appropriate lighting edits to images. Rather than applying a light source and shadows to an image based solely on a user-selected position on a 3D globe, Light Right uses AI and multiple images to deduce the sun’s position and add directionally appropriate light and shadows during edits. It can also use videos and Adobe Stock photos as inputs for its lighting calculations.

A more subtle application of AI is Project About Face (shown above), which detects edited images, generating an automated “Probability of Manipulation” score and a heatmap showing where edits have been made, including changes too subtle for the human eye. About Face will likely contribute to Adobe’s upcoming Content Authenticity program, which promises to tell photo and video viewers whether they’re seeing edited or unedited imagery, and it might even be used to reverse the edits, revealing the original image.


Last but not least, Project All In promises to solve a classic photographer’s problem: the person standing behind the lens can’t be part of the group photo. All In uses Sensei to automatically blend two photos, so two people can take turns shooting the same background while the other stands in frame, producing a composite in which both appear together in the same environment. Alternatively, All In can remove a second instance of a person who appears in different positions across two shots.

Adobe also showed off several AI-aided animation tools. Like Samsung’s recent 3D scanning and AR avatar demos, Adobe’s Project Go Figure turns video of a real person’s movements into skeletal frame animations that can be exported for use by a virtual character. Project Pronto can add 3D objects to smartphone videos, such that the objects naturally follow the motion path of the video with AR-style blended live and digital results. And Project Sweet Talk (shown above) promises to automate the animation of lip synchronization, converting recorded audio into a mesh that can be applied to flat images — even paintings — and animated characters.

The researchers are also using AI to speed up audio editing. Project Sound Seek automates the process of eliminating repeated sounds, such as “um” or “ah” tics, from recordings. Reducing noise is the focus of Project Awesome Audio, which claims to “awesomize” even a mediocre internal PC microphone recording with a single button click, adjusting levels and removing background interference.

Whether these features will wind up in Adobe apps in the near future remains to be seen, but the company has aggressively brought some of its Sneaks — including Project Aero — into actual shipping apps this year. The timeline from preview to release can be a year or more, but for smaller individual features it may be less.
