Platforms that let shoppers virtually try on cosmetics, apparel, and accessories have exploded in popularity over the past decade, and it’s easy to see why. According to a survey conducted by banking company Klarna, 29% of shoppers prefer to browse for items online before actually buying them, while 49% are interested in solutions that take their measurements so they can be sure something will fit before buying.
With this top of mind, a team of researchers hailing from Adobe, the Indian Institute of Technology, and Stanford University explored what they describe as an “image-based virtual try-on” for fashion. Called SieveNet, it’s able to retain the characteristics of an article of clothing (including wrinkles and folds) as it maps the item to virtual bodies — without introducing blurry or bleeding textures.
SieveNet’s objective is to take an image of clothing and a body model image and generate a new image of the model wearing the clothing with the original body shape, pose, and other details preserved. To accomplish this, it incorporates a multi-stage technique that involves warping a garment to align with the body model’s pose and shape before transferring the warped texture onto the model.
The warping, the paper's authors note, must account for variations in shape and pose between the clothing image and the model image, as well as occlusions in the model image (for example, long hair or crossed arms). Specialized modules within SieveNet predict a coarse-level transformation and then fine-level corrections on top of it, while another module computes the rendered image and a mask atop the body model.
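The paper's pipeline — warp the garment coarsely, refine the warp, then composite it onto the body with a predicted mask — can be illustrated with a minimal numpy sketch. Everything here is a hypothetical stand-in: `coarse_warp` uses a fixed affine transform where the real system predicts thin-plate-spline parameters with a neural network, and `fine_correction` and `composite` take the residual and mask as given rather than predicting them.

```python
import numpy as np

def coarse_warp(garment, theta):
    # Coarse stage: apply a global affine transform (a stand-in for the
    # learned geometric warp a real implementation would predict).
    h, w = garment.shape[:2]
    out = np.zeros_like(garment)
    inv = np.linalg.inv(theta)
    for y in range(h):
        for x in range(w):
            sx, sy, _ = inv @ np.array([x, y, 1.0])
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = garment[sy, sx]
    return out

def fine_correction(coarse, residual):
    # Fine stage: add a per-pixel correction (predicted by a second
    # module in the real system) on top of the coarse warp.
    return np.clip(coarse + residual, 0.0, 1.0)

def composite(body, warped, mask):
    # Final stage: blend the warped garment onto the body model with a
    # soft mask (1 = garment pixel, 0 = keep the body pixel).
    return mask[..., None] * warped + (1.0 - mask[..., None]) * body
```

With an identity transform and an all-ones mask, the garment passes through unchanged and fully covers the body region — a quick sanity check on the compositing logic.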
In experiments using four Nvidia 1080 Ti graphics cards on a PC with 16GB of RAM, the researchers trained SieveNet on a data set of around 19,000 images of front-facing female models paired with upper-body clothing product images. They report that in qualitative tests the system handled occlusion, pose variation, bleeding, geometric warping, and overall quality preservation better than baselines. They also say it achieved state-of-the-art results on quantitative metrics, including Fréchet Inception Distance (FID), which takes photos from both the target distribution and the system being evaluated (in this case SieveNet) and uses an AI object recognition model to extract features and measure how similar the two sets of images are.
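FID itself is the Fréchet distance between two Gaussians fitted to feature embeddings of real and generated images: ||μ₁ − μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½). A minimal numpy sketch of that formula follows; it assumes you already have feature arrays (the standard metric extracts them with an Inception v3 network, which is omitted here), and it computes the matrix square root via the symmetric form (C₁^½ C₂ C₁^½)^½ so plain eigendecomposition suffices.

```python
import numpy as np

def _sqrtm_psd(a):
    # Square root of a symmetric positive semi-definite matrix via
    # eigendecomposition (eigenvalues clipped to guard against tiny
    # negative values from floating-point error).
    vals, vecs = np.linalg.eigh(a)
    vals = np.clip(vals, 0.0, None)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def frechet_distance(feat_real, feat_gen):
    # feat_real, feat_gen: (n_samples, n_dims) arrays of image embeddings.
    mu1, mu2 = feat_real.mean(0), feat_gen.mean(0)
    c1 = np.cov(feat_real, rowvar=False)
    c2 = np.cov(feat_gen, rowvar=False)
    # Tr((C1 C2)^1/2) computed via the symmetric (C1^1/2 C2 C1^1/2)^1/2,
    # which is PSD and safe to eigendecompose.
    s1 = _sqrtm_psd(c1)
    covmean = _sqrtm_psd(s1 @ c2 @ s1)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1) + np.trace(c2)
                 - 2.0 * np.trace(covmean))
```

Identical feature sets give a distance of zero, and shifting every feature by a constant shows up purely in the mean term — lower scores mean the generated images are statistically closer to the real ones.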
SieveNet isn’t the first of its kind, exactly. L’Oréal’s ModiFace, which recently came to Amazon’s mobile app, lets customers test different shades of lipstick on live pics and videos of themselves. Vue.ai’s AI system susses out clothing characteristics and learns to produce realistic poses, skin colors, and other features, generating model images in every size up to 5 times faster than a traditional photo shoot. And both Gucci and Nike offer apps that allow people to virtually try on shoes.
But the researchers assert that a system like SieveNet could be more easily incorporated into existing apps and websites. “Virtual try-on — the visualization of fashion products in a personalized setting — is especially important for online fashion commerce because it compensates for the lack of a direct physical experience of in-store shopping,” they wrote. “We show significant … improvement[s] over the current state-of-the-art methods for image-based virtual try-on.”