Back in August I explored the profound role that avatars will play in shaping our experience of immersive worlds in VR. I discussed how virtual embodiment will stretch our sense of self on multiple levels, be it physical, physiological, or psychological. While it is not at all an exaggeration to suggest that avatars will shift our conception of identity and self-expression into uncharted territory that we cannot, as of yet, readily comprehend or fully appreciate, that is only half the story.
Avatars will not only be our vehicle for entering immersive worlds, but also the doorway through which a new breed of intelligent characters will step to greet us and guide our navigation through those worlds. Let me explain how.
The rise of virtual influencers
Over the past few years, we have witnessed the growing popularity of virtual celebrities like Brud’s Lil Miquela and Activ8’s Kizuna AI, who have spawned a whole industry in their wake. The appeal of these fictional characters goes hand in hand with their talent for believability, while their market potential hinges on a trade-off between their potential reach and the degree of interactivity made possible by any particular medium.
Take a virtual celebrity on Instagram, for instance. It has broad reach but a low level of interactivity, which makes for a pretty passive relationship between a virtual character and its audience. YouTube, on the other hand, offers a digital persona like Kizuna AI greater interactivity since it is human-powered, but it is still ultimately restricted by the limitations of video.
Immersive mediums like VR and AR, however, can offer these virtual characters the level of interactivity they need for their evolution to continue. That progression is a natural one, because spatial computing represents arguably the strongest convergence point for AI.
XR + AI brings 2D Pinocchio to life
AI is the tech that can infuse virtual characters with the intelligent anatomy required to engage with humans in a fashion that is becoming increasingly, for lack of a better word, empathic. Branches of AI like machine learning, natural language processing, computer vision, and sentiment analysis wrap together as a sort of spatial computing synergy that quickens the artificial souls of these characters to life, not just as the singular personal assistants we are already familiar with, but as a multitude of purposeful personas that wander the AR-enabled landscape in embodied form.
“We believe that everyone’s favorite characters from books, movies, video games, comics, and toys will come alive and begin interacting with their fans. These characters will be joined by the avatars of celebrities, musicians, and athletes,” said Armando Kirwin, cofounder at Artie, a company that enables brands to create and share AI avatars. “Not only that, but banks, grocery stores, retailers, and other businesses will each want to have their own avatar too. If you combine all of these use cases, you get what we call ‘the avatar layer,’ a layer of millions of avatars around the world that become increasingly useful, intelligent, and fun to interact with.”
Enter the “Avatar Layer”
The vision pairs inseparably with the emergence of 5G and its capacity to upend the mobile market as we know it, wiping out our conception not only of apps but of “downloading and installing” itself, terms that may very well sound archaic within 5 to 10 years. As 5G becomes commonplace, the avatar layer will offer instantaneous immersive experiences in the form of virtual characters that, I hope, will remind some readers of the magically interdimensional world depicted in the 1988 film Who Framed Roger Rabbit.
Last December I discussed how the AR Cloud will serve as the connective tissue that binds AR experiences to the physical world, allowing simultaneous, multiplayer access for everyone in real time. The AR Cloud is also referred to as the mirrorworld, which is an apt way to describe how this parallel digital reality will exist simultaneously and in sync with the real one.
Here is how Ryan Horrigan, cofounder and CEO at Artie, describes how it will play out in practical terms:
“In our view, you won’t need the Disney app loaded onto your phone before you arrive with your kids at Disneyland. A Mickey Mouse avatar will simply appear, if you give it permission to, as an instant app over 5G as it senses you are in the geo-fence of the park. Mickey’s avatar will then lead you through the park, booking you on rides and making you lunch reservations, all while entertaining you as your guide.”
“Similarly, for GEICO, when I’m in a car accident, I won’t need their app to already be on my phone. I’ll just ask to talk to the lizard and he will help me collect photos of the scene and file my report. We believe instant apps, enabled by 5G, will use avatars as their primary UI in the future, and we’re very excited about that.”
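The geo-fence trigger described in these quotes can be sketched in a few lines. This is a purely hypothetical illustration, assuming nothing about Artie's actual API: a device reports its coordinates, and if they fall inside a venue's radius, the venue's avatar is streamed as an instant experience instead of a pre-installed app.

```python
import math

# Hypothetical sketch; the venue list, avatar names, and function names
# are invented for illustration and do not reflect any real service.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GEOFENCES = [
    # (venue, avatar to stream, center lat, center lon, radius in km)
    ("Disneyland", "mickey-mouse", 33.8121, -117.9190, 1.5),
]

def avatar_for_location(lat, lon, consent_given):
    """Return the avatar that should appear at this position, if any."""
    if not consent_given:  # the avatar only appears with user permission
        return None
    for venue, avatar, clat, clon, radius in GEOFENCES:
        if haversine_km(lat, lon, clat, clon) <= radius:
            return avatar
    return None
```

A device just inside the park boundary would resolve to the Mickey Mouse avatar; anywhere else, or without the user's permission, nothing appears.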
The mirrorworld will be populated by its own parallel universe of objects, both inert and alive, manifesting passively and actively at its own scale and scope, as well as embedded as the digital doppelgangers of real-world objects. It is in this context that autonomous avatars will be to the mirrorworld what humans are to the real one, and it is Artie’s Wonderfriend Engine platform that will allow brands and companies to generate their first wave of characters, complete with the intelligence and narrative design to actively live within the avatar layer.
“Immersive characters will outnumber humans 100x in the near future. Your physical reality will be filled with AI holograms, living toys, virtual teachers, proxies, and familiars. Every fictional character you ever loved will be walking around IRL with you,” reads a message Kirwin has pinned to his Twitter profile.
Reactive to the human condition
Autonomous avatars personify the culmination of what it means to be human-centric. It is an appeal back to our evolutionary nature: the yearning for an interface that is human-like and interactive at an empathetic level. For example, Artie’s tech can detect dozens of objects and a host of facial expressions so that avatars can respond to the dynamic world around them and, more importantly, to the mood of a user in real time.
This is where the breakthrough invariably rests: in the AI’s capacity to enable avatars to invoke a genuine emotional connection with humans by responding to them as if intuitively.
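At its simplest, that kind of responsiveness is a loop from perception to reaction. The sketch below is purely illustrative, with expression labels and reactions invented here rather than taken from Artie's system: a perception model emits a detected facial expression plus a confidence score, and the avatar picks an empathetic behavior, falling back to neutral when the signal is weak.

```python
# Illustrative sketch only; labels and behaviors are invented, not drawn
# from any real product. It shows the perception-to-reaction loop the
# paragraph describes.

REACTIONS = {
    "smile": "smile back and lean in",
    "frown": "soften tone and ask what's wrong",
    "surprise": "mirror the surprise playfully",
    "neutral": "continue the current activity",
}

def choose_reaction(expression, confidence, threshold=0.6):
    """Pick an avatar behavior for a detected facial expression.

    Below the confidence threshold, fall back to neutral behavior
    rather than risk an emotionally mismatched response.
    """
    if confidence < threshold:
        return REACTIONS["neutral"]
    return REACTIONS.get(expression, REACTIONS["neutral"])
```

The confidence threshold matters: an avatar that misreads a frown and responds cheerfully would undercut exactly the intuitive rapport the technology is aiming for.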
The still very nascent space has surprisingly few active players. One of them is Fable, whose autonomous VR avatars, or “virtual beings” as the company calls them, establish a two-way dialogue between character and user. The approach is embodied in Fable’s own virtual star, an 8-year-old girl named Lucy, who can respond, react, and remember in an ongoing spoken dialogue that transcends the usual fabric of time and space for what would otherwise be a scripted chatbot.
“We are trying to create a natural, organic conversation,” Jessica Yaffa Shamash, Creative Director at Fable told VentureBeat in an interview in January. “It feels like you can’t see the technology behind the hood and it should feel like you are conversing with a real person.”
Lucy is designed to remember what a user says or does in one exchange and refer back to it later, a talent for memory that humans are generally considered unique in possessing. In fact, Fable plans to let Lucy’s memory go metaversal, holding her artificial psyche intact across platforms and immersive worlds, wherever you might encounter her and her “kind”.
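The mechanics of that kind of portable memory can be sketched minimally. Nothing below reflects Fable's real implementation; it is an assumption-laden illustration of the idea itself: remember facts from one exchange, recall them in a later one, and serialize the whole psyche so it can travel with the character across platforms.

```python
import json

# Minimal sketch of persistent, portable character memory. The class and
# method names are invented for illustration.

class CharacterMemory:
    def __init__(self, facts=None):
        self.facts = facts or {}  # user_id -> {topic: remembered detail}

    def remember(self, user_id, topic, detail):
        """Store something the user said or did during an exchange."""
        self.facts.setdefault(user_id, {})[topic] = detail

    def recall(self, user_id, topic):
        """Bring a past detail back into a later conversation."""
        return self.facts.get(user_id, {}).get(topic)

    def export(self):
        """Serialize the memory so another platform can load it."""
        return json.dumps(self.facts)

    @classmethod
    def load(cls, blob):
        """Reconstitute the character's psyche on a new platform."""
        return cls(json.loads(blob))
```

For example, a character who learns in VR that a user has a dog named Rex could be exported, loaded into an AR experience, and still recall Rex by name.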
What is crystal clear is that as device capabilities and new methodologies allow us to better capture, digest, and leverage the nonverbal parts of communication (eye tracking, EEG sensors, and my startup’s neurometrics, for example), the avatar layer’s bandwidth for embracing the human condition will widen to an unprecedented, if unsurprising, degree. Autonomous avatars, along with the work directed toward creating a bona fide brain-computer interface, will begin to pierce this next veil in frictionless design.
Amir Bozorgzadeh is CEO at Virtuleap, the startup that enables the body to speak in VR and AR using neuroscience research and machine learning, so that companies and brands can know if their users are excited, angry, or bored out of their minds.