Epic Games showed off some new technology alongside Ninja Theory, the Microsoft studio developing Senua’s Saga: Hellblade II.
MetaHuman Animator is a tool for creating extremely realistic facial animation: video captured from human actors is converted almost instantly into animation data that can drive 3D characters in games and films.
It was one of the cool demos that Epic Games showed at its State of the Unreal event at the Game Developers Conference in San Francisco today.
Epic demonstrated the tech through Ninja Theory’s game, a sequel to Hellblade: Senua’s Sacrifice, a 2017 title with outstanding human animation. Melina Juergens, the game’s lead actor and motion capture performer, made an appearance to show how the same tech can now animate a MetaHuman from an iPhone, working with the Live Link Face app on mobile devices.
The tech can generate a face model from a few captured pictures within a minute or so and convert it into something usable in computer-animated films or games. Ninja Theory also gave us a glimpse of what Senua will look like in the upcoming Hellblade II.
Epic Games also unveiled new tools that make it easier for creators to build impressive 3D animations.
Songyee Yoon, NCSoft’s president and chief strategy officer, took the stage to introduce the company’s latest project, showing imagery from Project M, an upcoming action-adventure game with extremely realistic graphics and stellar human animation.
In the trailer, a digital human version of NCSoft chief creative officer Taekjin Kim appeared on screen and guided viewers through Project M’s world and core gameplay.
The digital human was developed using NCSoft’s AI technology along with its advanced art and graphics capabilities. The voice in the trailer was generated by the company’s AI text-to-speech (TTS) synthesis technology, which converts text into natural human speech reflecting a specific person’s voice, accent, and emotions.
The digital human’s facial expressions and lip-sync were generated with the company’s voice-to-face technology, an AI-based system that automatically produces facial animation matched to given text or voice. Together, the AI technology and the company’s visual technologies created the digital human’s realistic facial appearance.