Folks snap self-portraits with their smartphones all the time, whether for the benefit of followers on Facebook, Instagram, or LinkedIn. But these “selfies” tend to look unnatural because they require that the subject stretch out an arm to capture the best angle. Fortunately, researchers at Adobe Research, the University of California, Berkeley, and KU Leuven in Flanders have developed an AI technique that automatically translates selfies into neutral-pose portraits. By identifying a target pose and generating a body texture, it refines the result and composites the person onto the original selfie’s background.

Beyond social media, the work has obvious applications in the enterprise. Such a system could form the basis of an employee badge-creating pipeline, where employees snap selfies that are then transformed into posed portraits. Or it could be used to capture professional-looking photos for the “about us” pages on company websites.

The researchers’ approach — “unselfie” — aims to make selfie photos look like “well-composed” portraits captured by photographers, with relaxed arms, shoulders, and torsos. It moves any raised arms downward and adjusts the position of the shoulders and torso, tweaking clothing details before filling in any exposed background regions.

To train the AI system underlying their technique, the researchers collected 23,169 photos of people in frontal, neutral poses and 4,614 selfie photos from internet searches and open source data sets. They applied a pose-estimation algorithm to extract pose information from the portrait photos, wrote the results to a database, and then segmented the foreground person from each photo before pasting them into random background images to increase diversity. From the collected neutral-pose portraits, the team then algorithmically generated corresponding selfie data, using the correspondences to map each portrait’s pixels to the nearest selfie pose.
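The nearest-pose mapping described above can be illustrated with a small sketch. The names and representation here are assumptions for illustration (the paper's actual pose descriptors and distance metric may differ): each pose is treated as a 2D keypoint array, normalized for position and scale, and matched by Euclidean distance.

```python
import numpy as np

# Hypothetical representation: each pose is a (K, 2) array of 2D body
# keypoints. Centering and scale-normalizing makes poses comparable
# across photos taken at different distances from the camera.

def normalize_pose(pose):
    """Center a pose on its mean keypoint and scale it to unit norm."""
    centered = pose - pose.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def nearest_pose(query, database):
    """Return the index of the database pose closest to the query."""
    q = normalize_pose(np.asarray(query, dtype=float))
    dists = [np.linalg.norm(q - normalize_pose(np.asarray(p, dtype=float)))
             for p in database]
    return int(np.argmin(dists))
```

Because normalization removes translation and scale, a pose photographed closer to the camera still matches the same database entry.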

The system leverages a retrieval-based approach to self-portrait transformation. During training, given a target neutral pose, it searches for a matching selfie in the aforementioned database. This allows it to generate synthetic data that’s used to self-supervise separate inpainting and composition models. (The inpainting model reuses visible body pixels to fill in any invisible body parts, while the composition model adds details and fixes artifacts in the body region while filling in gaps between the body and the background.) When fully trained, given a selfie, the system can automatically search for the nearest neutral pose.
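The division of labor between the two models can be sketched in miniature. The real inpainting and composition models are neural networks; this toy version, with hypothetical function names, stands in a median-color fill for inpainting and a mask-based paste for composition, purely to show how the two stages chain together.

```python
import numpy as np

def inpaint_body(body, visible_mask):
    """Fill pixels outside visible_mask with the median visible color.

    body: (H, W, 3) float image; visible_mask: (H, W) boolean array.
    A stand-in for the inpainting model, which reuses visible body
    pixels to fill in invisible body parts.
    """
    out = body.copy()
    fill = np.median(body[visible_mask], axis=0)  # per-channel median
    out[~visible_mask] = fill
    return out

def composite(body, body_mask, background):
    """Paste body pixels onto the background where body_mask is True.

    A stand-in for the composition model, which refines details and
    closes gaps between the body and the background.
    """
    out = background.copy()
    out[body_mask] = body[body_mask]
    return out
```

In the paper's pipeline both steps are learned and self-supervised from the synthetic selfie/portrait pairs, but the dataflow is the same: inpaint the retargeted body first, then blend it into the scene.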

Owing to its novelty, the researchers note that their “unselfie” method has several limitations. The nearest-pose search sometimes struggles to find compatible neutral poses for selfies shot from side viewpoints, yielding results in which the arms or shoulders appear too slim or too wide relative to the person’s head. The system also occasionally struggles with background generation and with detecting body parts like limbs.

Despite this, the researchers report that in a qualitative experiment, users rated “unselfies” from their system highly compared with baselines. “To the best of our knowledge, this work is the first to target the problem of selfie to neutral-pose portrait translation, which could be a useful and popular application among casual photographers,” they wrote in a paper describing their work.