I put my head on The Mandalorian, and it wasn’t such a hard thing to do. While at the recent CES 2023 tech trade show in Las Vegas, I stopped by the booth of 3D printer company Formlabs.
David Lakatos of Formlabs and Patrick Marr of Hasbro walked me through a demo of the Hasbro Selfie Series, where you can put your head on a 3D-printed action figure of your choice. It's another example of a future that will be filled with user-generated content (UGC).
The common thread across the different forms of UGC, including creations for video games, is that they are getting easier and easier to make. And if things keep going down this path, I wonder if they will pave the way to the metaverse, the universe of interconnected virtual worlds depicted in novels such as Snow Crash and Ready Player One. James Gwertzman, who recently left venture firm Andreessen Horowitz (a16z) to return to startup life, penned an article about this trend of generative AI invading games.
All I had to do to place my head on the six-inch body of The Mandalorian was pose. With the smartphone app, they scanned my face and sent the capture off to a 3D printer. It cost $60, which is more than I would otherwise pay for an action figure, and it arrived in the mail a month later with my head in the right place. This is as easy as it gets, and Marr said it plays into the remix culture that is sweeping through so many industries now, like video games. I don’t know if you can call me a “maker” since this is so automated, but I did supply my head.
The great obstacle to UGC is that most of it is pretty crappy. I’m not a sculptor and I can’t do quality work when it comes to artistry. But when something comes along to automate the process of creation, then we’re getting somewhere.
Consider generative AI combined with UGC. AI supplies the talent, artistry, and expertise in a way that we didn’t think was possible just a year or so ago. With the recent explosion of generative AI, including the amazing launch of OpenAI’s conversational AI ChatGPT, UGC is poised to take big leaps.
This week, Paris-based Kinetix launched AI tools to enable people to create their own emotes, or unique expressions that can animate their avatars.
On February 2, Ready Player Me revealed it had started using DALL-E’s generative AI to help players customize the clothing for their custom avatars. You don’t have to be an artist or fashion designer to create that clothing; the AI creates it for you, and you decide which option looks best.
On January 31, Metaphysic teamed up with talent agency CAA to bring generative AI to Hollywood, using deepfake technology to create high-resolution, photorealistic face swaps and de-aging effects on top of actors’ performances, live and in real time, without the need for further compositing or VFX work. The primary people who will use this? Influencers and creators.
Also on January 31, Auxuman, an AI gaming startup, teamed up with Oorbit to bring generative AI multiplayer gaming to LG Electronics TVs. In Auxuman’s Auxworld app, players can use AI to generate their own multiplayer metaverse by typing text input, similar to how AI tools generate images. They don’t need to know how to make games to do this. They just need to know how to type.
It makes use of Auxuman’s custom AI network, and it’s all part of LG’s plan to make the “metaverse” more accessible to consumers by providing metaverse-like experiences on TVs.
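Auxuman hasn’t published how its custom AI network works, but the general idea behind text-driven world generation can be sketched in a few lines: a text prompt deterministically seeds procedural content, so the same words always rebuild the same world. The tile names and functions below are my own illustration, not Auxworld’s actual design; a real system would use a trained generative model rather than a simple seeded random generator.

```python
import hashlib
import random

# Hypothetical tile palette for illustration only.
TILES = ["water", "sand", "grass", "forest", "rock"]

def generate_world(prompt: str, width: int = 8, height: int = 8) -> list[list[str]]:
    """Turn a text prompt into a reproducible tile map."""
    # Hash the prompt so any string yields a stable integer seed.
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [[rng.choice(TILES) for _ in range(width)] for _ in range(height)]

world = generate_world("a misty island with ancient ruins")
for row in world:
    print(" ".join(f"{tile:6}" for tile in row))
```

The determinism is the point: typing the same sentence twice yields the same world, which is what lets players share a world just by sharing its prompt.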
On ElectricNoir’s Scriptic platform, human writers collaborated with OpenAI’s DALL-E 2 in a back-and-forth between writer and machine, working together to create an expansive dream world full of original horrors.
Nihal Tharoor, CEO of ElectricNoir, told me he believes generative AI makes content infinitely scalable for consumers who voraciously consume it.
And it’s not just the UGC amateurs who can use generative AI. The professionals are getting hip to it too. Nvidia announced on January 3 that its Omniverse tools would incorporate generative AI via Unity’s game engine to develop better custom experiences.
I put down the dates of these announcements to show you the pace of innovation. It’s moving so ridiculously fast.
Roblox, the reigning king of UGC, draws more than 50 million people a day to user-generated games on its platform. But to create those games, users have to learn Lua programming and other tools to produce professional-looking work. Now the pressure is on Roblox to make game creation much easier. I can’t say enough how amazed I am to see how fast this is moving.
What are the consequences going to be? I wonder if the companies that forbid their users from creating UGC will become dinosaurs. Will UGC creations become more popular than those created by professional developers? I don’t know about that, but it’s already happening in so many ways, and the creator community has a way of surfacing the best UGC work. So the day may not be far away when professional developers have to step aside and bow to the user demand for UGC. But that’s OK, as the pros can create their own UGC as well, and I suspect it will still be a lot better than mine.