Can AI-driven applications, developed with synthetic data, analyze the details of how people move?
During the COVID-19 pandemic, home fitness apps were all the rage. From January through November 2020, approximately 2.5 billion health and fitness apps were downloaded worldwide. That trend has held and shows no signs of slowing, with new data predicting the market will grow from $10 billion in 2022 to $23 billion by 2026.
As more people use fitness apps to train and to track their development and performance, developers are increasingly powering these apps with AI-based workout analysis, drawing on technologies such as computer vision, human pose estimation and natural language processing.
Tel Aviv-based Datagen, which was founded in 2018, claims to provide “high-performance synthetic data, with a focus on data for human-centric computer vision applications.”
The company just announced a new domain, Smart Fitness, on its self-service, visual synthetic data platform that helps AI developers produce the data they need to analyze people exercising and train smart fitness equipment to “see.”
“At Datagen, our focus is to aid computer vision teams and accelerate their development of human-centric computer vision tasks,” Ofir Zuk, CEO of Datagen, told VentureBeat. “Almost every use case we see in the AI space is human-related. We are specifically trying to solve and help understand the interconnection between humans and their interaction with surrounding environments. We call it human in context.”
Synthetic visual data represents fitness environments
The Smart Fitness platform provides 3D-annotated synthetic visual data in the form of video and images. This visual data accurately represents fitness environments, advanced motion, and human-object interactions for tasks related to body key point estimation, pose analysis, posture analysis, repetition counting, object identification and more.
In addition, teams can use the solution to generate full-body in-motion data to iterate on their model and improve its performance quickly. For example, in cases of pose estimation analysis, an advantage the Smart Fitness platform provides is the capability to quickly simulate different camera types for capturing a variety of differentiated exercise synthetic data.
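One task from the list above, repetition counting, can be sketched in a few lines: given a per-frame joint-angle signal (which a pose estimation model would supply), reps are counted with a pair of hysteresis thresholds so that noise near a single threshold does not produce double counts. The function name, thresholds and sample trace below are illustrative assumptions, not Datagen's API:

```python
def count_reps(angles, low=90.0, high=150.0):
    """Count exercise repetitions from a per-frame joint-angle signal
    (degrees) using hysteresis: a rep completes on each low-to-high swing."""
    reps, in_bottom = 0, False
    for a in angles:
        if a < low:
            in_bottom = True       # reached the bottom of the movement
        elif a > high and in_bottom:
            reps += 1              # returned to the top: one full rep
            in_bottom = False
    return reps

# Synthetic elbow-angle trace covering three push-ups (degrees per frame)
trace = [160, 120, 80, 70, 110, 155, 158, 100, 60, 130, 165, 150, 85, 75, 140, 170]
print(count_reps(trace))  # 3
```

Hysteresis (two thresholds rather than one) is the standard trick here: a value hovering around a single cutoff would otherwise register spurious repetitions.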
Challenges to training AI for fitness
Pose estimation, a computer vision technique that determines the position and orientation of the human body from an image of a person, is one of the distinctive capabilities AI has to offer. It can be used for avatar animation in augmented and virtual reality, for example, as well as for markerless motion capture and worker pose analysis.
To analyze posture correctly, it is necessary to capture several images of the human actor interacting with its environment. A trained convolutional neural network then processes these images to predict where the actor’s joints are located in each frame. AI-based fitness apps generally use the device’s camera, recording video at up to 720p and 60fps to capture more frames during exercise performance.
The problem is that computer vision engineers need vast amounts of visual data to train AI for fitness analysis with a technique like pose estimation. Data involving humans performing exercises in various forms and interacting with multiple objects is highly complex. The data must also be high-variance and sufficiently diverse to avoid bias. Collecting accurate data that covers such variety is nearly impossible. On top of that, manual annotation is slow, prone to human error and expensive.
While 2D pose estimation has already reached an acceptable level of accuracy, 3D pose estimation still lags behind, especially when inferring from a single image with no depth information. Some methods point multiple cameras at the person, or capture information from depth sensors, to achieve better predictions.
However, part of the problem with 3D pose estimation is the lack of large annotated datasets of people in open environments. For example, large datasets for 3D pose estimation such as Human3.6M were captured entirely indoors to eliminate visual noise.
There is an ongoing effort to create new datasets with more diverse data regarding environmental conditions, clothing variety, strong articulations, and other influential factors.
The synthetic data solution
To overcome such problems, the tech industry now widely uses synthetic data, a type of data produced artificially that closely mimics operational or production data, for training and testing artificial intelligence systems. Synthetic data offers several significant benefits: it minimizes the constraints associated with regulated or sensitive data, it can be customized to match conditions that real data does not cover, and it enables large training datasets without manual labeling.
According to a report by Datagen, synthetic data reduces time-to-production, eliminates privacy concerns, cuts down on bias as well as annotation and labeling errors, and improves predictive modeling. Another advantage is the ability to easily simulate different camera types while generating data for use cases such as pose estimation.
Exercise demonstration made simple
With Datagen’s Smart Fitness platform, organizations can create tens of thousands of unique identities performing a variety of exercises in different environments and conditions, in a fraction of the time.
“With the prowess of synthetic data, teams can generate all the data they need with specific parameters in a matter of a few hours,” Zuk said. “This not only helps retrain the network and machine learning model, but also allows you to get it fine-tuned in no time.”
In addition, he explained, the Smart Fitness platform makes it possible to generate millions of visual exercise data points, eliminating the repetitive burden of capturing each element in person.
“Through our constantly updating library of virtual human identities and exercise types, we provide detailed pose information, such as locations of the joints and bones in the body, that can help analyze intricate details to enhance AI systems,” he said. “Adding such visual capabilities to fitness apps and devices can significantly improve the way we see fitness, enabling organizations to provide better services both in person and online.”
Fitness AI and synthetic data in the enterprise
According to Arun Chandrasekaran, distinguished VP Analyst at Gartner, synthetic data is, so far, an “emerging technology with a low degree of enterprise adoption.”
However, he says it will see growing adoption for use cases where data must be guaranteed anonymous or privacy must be preserved (such as medical data); for augmenting real data, especially where collection costs are high; for balancing class distribution within existing training data (such as population data); and for emerging AI use cases where limited real data is available.
Several of these use cases are key for Datagen’s value proposition. When it comes to enhancing the capabilities of smart fitness devices or apps, “of particular interest will be the ability to boost data quality, cover the wide gamut of scenarios and privacy preservation during the ML training phase,” he said.
Zuk admits that it is still early days for bringing AI and synthetic data, and even digital technologies overall, into the fitness space.
“They are very non-reactive, very lean in terms of their capabilities,” he said. “I would say that adding these visual capabilities to these fitness apps, especially as people exercise more in their own home, will definitely improve things significantly. We clearly see an increase in demand and we can just imagine what people can do with our data.”