Increasingly, game developers are asking whether a 17-button controller or a mouse/keyboard are the best possible interfaces for interacting with games — or if there is something more “naturalistic” that could improve the connection between what we want to do in a game and what actually happens.
It may be the stuff of dreams, but Ambinder said many researchers are working on solving the problem today, and it’s hard to predict how soon someone will make a breakthrough.
The whole point is to cut out the middleman, in this case the game controller, between the intention of the player and the game simulation.
“In the long run, this will give us the most bang for the buck,” Ambinder said of wiring directly into our brains.
For instance, we know there are both verbal and nonverbal parts of a conversation. The nonverbal includes the change in someone’s tone of voice, facial expressions, and where someone is facing.
“With games, we have traditional inputs, but we might be missing the nonverbal part of the conversation. There might be other data that can be provided to us as game designers that we’re not acquiring.”
A mouse and keyboard offer lots of different inputs that can be very precise, but it can be hard for humans to remember them all.
“Memory is actually a fundamental limitation,” Ambinder said. “How many possible combinations can you remember off the top of your head when you’re playing a game? What if you didn’t have to remember everything? What if you could just think about what you wanted to do and it happened? Wouldn’t that change how you play games?”
Gamepads can be simpler, but they still have all those buttons. There are also gesture controls — for things like swinging your arm in a boxing game. Those can be more intuitive, but they also make you tired. With both controllers and keyboards, you have to think about an action and translate it into a movement that triggers an interaction in a game.
A new kind of controller might help people play better, including those with disabilities. Microsoft showed that with its Xbox Adaptive Controller for people with limited mobility. New interfaces could perhaps even help people who can’t see regain sight, Ambinder said. Maybe we could send visual data that bypasses the eyes and goes straight to the brain.
An ideal interface?
“What happens if you didn’t have to use those things?” Ambinder asked. “What are better ways of interacting with games?”
Ambinder thinks we can come up with interfaces that let us respond more quickly, give us a broader set of input commands, achieve more complex patterns of input, like chaining together commands, and undo our actions faster.
With a brain-computer interface, Ambinder believes game designers can get more data from the player.
“What I’m really fascinated by is what happens [when] we get additional data from a player we are not getting with current-generation interfaces,” he said.
Developers know what players are trying to do. But they don’t know how players are experiencing the game.
“So are they happy? Are they sad? Are they engaged? Are they detached? Are they challenged? Or are they bored? Or frustrated? Or are they exploring and solving puzzles?” Ambinder said.
There are privacy reasons around this kind of access to a player’s thoughts, for sure. But Ambinder said that knowing the internal thinking and emotions of a gamer could help game developers respond and make a game adapt to the player’s state of mind.
Ambinder (and many other researchers) has used an open-source headset with electroencephalogram (EEG) capability — the ability to detect the electrical activity in your head using sensors attached to the scalp. That kind of contraption isn’t comfortable, but it could fit inside some kind of helmet — like a virtual reality headset.
Neurons are nerve cells, the atomic units of the brain. If you put them together and organize them in various functional and hierarchical ways, you get a brain. They communicate by firing, or sending electrical signals down various pathways. There are about 100 billion neurons in the brain, and roughly one quadrillion connections, or synapses, linking those neurons.
And when neurons fire, they produce every single aspect of conscious and unconscious experience — every thought you have, Ambinder said. Every feeling you have is a consequence of neurons or bundles of neurons firing together.
“So, in some respects, we’re actually already inside the Matrix. What we see and experience and feel right now is constructed by neurons firing up here,” he said, pointing to his head. “There is a neurological origin for every single thing that happens in our brain. And our reality is constructed by these firing patterns. So, if we can reliably measure them, then we can start doing useful things with them.”
The goal is to measure patterns of brain activity, whether they are temporal (happening over time) or spatial (happening in particular regions).
“We simply want to understand which locations have increased levels of activity in the brain and, hopefully, what they mean,” Ambinder said. “If we can take these patterns of activity, and describe them in terms of something a player is experiencing,” then we can understand whether a set of electrical impulses means the player is happy or sad or something else.
Since we can’t really drill into our heads, the tools for measurement are crude. There are EEG monitors, as well as near-infrared spectroscopy (which measures blood flow via light scattering), and expensive magnetic resonance imaging machines, which measure oxygenated blood flow.
There are also sensors for measuring your heart rate, galvanic skin response (sweat), muscle tension, and posture. These things might help decipher what someone is thinking. And there are our hands. One researcher estimated that it could take a signal 100 milliseconds to travel from your brain to your finger.
But what if you could shave off 10 to 30 milliseconds from your reaction time? That would matter a lot to competitive gamers. What if you could predict movement before it happens?
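To put those numbers in perspective, a small arithmetic sketch can convert a latency saving into frames at common display refresh rates. The refresh rates and the `frames_saved` helper below are illustrative, not from Ambinder’s talk:

```python
# Illustrative arithmetic: how much a 10-30 ms reaction-time saving
# is worth in rendered frames at common refresh rates.

def frames_saved(ms_saved: float, refresh_hz: float) -> float:
    """Convert a latency saving in milliseconds to frames at a given refresh rate."""
    frame_time_ms = 1000.0 / refresh_hz
    return ms_saved / frame_time_ms

for hz in (60, 144, 240):
    low = frames_saved(10, hz)
    high = frames_saved(30, hz)
    print(f"{hz} Hz: 10-30 ms is {low:.1f} to {high:.1f} frames")
```

At 60 Hz, a frame lasts about 16.7 milliseconds, so a 30 millisecond saving is nearly two full frames of head start — a meaningful edge in competitive play.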
“There are people investigating this right now,” Ambinder said.
You can do this noninvasively or invasively. The latter would mean we would have to wire something into the brain, which conjures up the image at the top of this story.
“How many of us are willing to have someone drill a hole in our head?” Ambinder asked.
Electroencephalograms can detect different kinds of brain waves — like delta waves or beta waves. If you could correlate those waves with what a player is feeling, then the measurements could tell game designers something useful.
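As a rough illustration of where such a pipeline might start, here is a minimal Python sketch that estimates power in the conventional EEG bands (delta 0.5-4 Hz, theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz) from a synthetic single-channel signal. The band boundaries and the beta / (alpha + theta) "engagement index" are common heuristics from the EEG literature, not anything Valve has described:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, low: float, high: float) -> float:
    """Average spectral power of `signal` in the band [low, high) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].mean())

fs = 256.0  # assumed sampling rate in Hz
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic one-channel "EEG": a strong 20 Hz (beta-band) tone plus noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(t.size)

delta = band_power(eeg, fs, 0.5, 4)   # delta band
theta = band_power(eeg, fs, 4, 8)     # theta band
alpha = band_power(eeg, fs, 8, 13)    # alpha band
beta = band_power(eeg, fs, 13, 30)    # beta band

# One heuristic "engagement index" from the EEG literature.
engagement = beta / (alpha + theta)
print(f"beta dominates: {beta > delta}, engagement index: {engagement:.1f}")
```

Real systems would also need artifact rejection (eye blinks, muscle noise) and per-user calibration before any of these numbers could be trusted.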
Right now, the sensors can measure whether someone is learning something, whether they’re surprised, excited, or relaxed, whether their emotional state is trending positive or negative, and whether they’re engaged or bored.
Ambinder noted that a lot of people are concerned about the capabilities of artificial intelligence, and whether humans can keep up. But if we create better brain-computer interfaces, then maybe we could keep up better. That means this research could have uses beyond games.
A VR headset could be outfitted with electroencephalogram (EEG) sensors, Ambinder said.
Valve’s research in playtesting games
Valve has been testing its theories in this space by watching the way people play games. It can do so by watching people play, running surveys, and doing other kinds of tests. But if you ask people questions about their gameplay, they may rationalize their answers or make them up.
That’s because we often don’t understand what we do or why we do it. Measuring helps, but it is pretty hard to measure a lot of people right now. If people wore BCI headsets, they could generate data that yields moment-to-moment insight. Maybe then, developers could measure what a player thinks when they choose a character, kill a character, get overwhelmed, or die in a game.
The chance to do this in real time has Ambinder really excited. If we can get this data, Ambinder said, “We’d be much better game designers.”
If a game designer could measure the feeling of a player when they get a kill in a game, then they could compare that feeling to other parts of the game where the player has a similar reaction. That might be a good measure of a highlight in a game.
For instance, if a part of a game is too easy, the developer could adapt that part on the fly, making it harder. If someone is about to quit, the developers could predict that and do something about it or adjust the game balance. If an enemy gets more challenging within a game, how would a player react?
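That kind of on-the-fly adaptation is usually called dynamic difficulty adjustment. Here is a minimal sketch, assuming the game had access to scalar `frustration` and `boredom` signals in [0, 1] inferred from a BCI; the signal names and thresholds are hypothetical, not from the talk:

```python
# Hypothetical dynamic difficulty adjustment driven by inferred player state.
# `frustration` and `boredom` stand in for signals a BCI might one day
# provide; the 0.7 thresholds and 0.1 step size are illustrative only.

def adjust_difficulty(difficulty: float, frustration: float, boredom: float) -> float:
    """Nudge difficulty down when the player is frustrated, up when bored."""
    if frustration > 0.7:      # player overwhelmed: ease off
        difficulty -= 0.1
    elif boredom > 0.7:        # player coasting: ramp up
        difficulty += 0.1
    return min(1.0, max(0.0, difficulty))  # clamp to [0, 1]

d = 0.5
d = adjust_difficulty(d, frustration=0.9, boredom=0.1)  # eases off
d = adjust_difficulty(d, frustration=0.2, boredom=0.8)  # ramps back up
print(f"difficulty: {d:.1f}")
```

In practice the adjustment would need smoothing over time, since a single noisy reading shouldn’t whipsaw the game’s balance.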
Right now, a vocal minority of players has undue influence on game designers.
Designers could learn about the emotional peaks and valleys in their games, and they could reproduce them if the game turned out to be a popular one.
“So think about adaptive enemies. What kinds of enemies do you like playing against in gaming?” Ambinder said. “If we knew the answers to these questions, you could have the game give you more of the challenging time and less of the boring time.”
Difficulty levels could become a thing of the past, as a game will adapt to your skill. Matching people with multiplayer teammates could be easier. And it might get easier to identify toxic players, who could be separated from the other players more easily.
Rewards could also be adjusted based on your reaction to the reward. Developers could figure out what kind of reward you like and give you more of those. (That can also have a dark side, making a game more addictive.) What keeps you intrinsically motivated?
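One standard way to "figure out what kind of reward you like" is to treat reward types as arms of a multi-armed bandit and use the player's measured reaction as the payoff. The sketch below is an assumption about how this could work, with the player's enjoyment simulated rather than measured:

```python
import random

# Hypothetical sketch: an epsilon-greedy bandit over reward types.
# The policy mostly serves the best-liked reward while still exploring.
random.seed(42)

rewards = ["loot", "cosmetic", "story"]
# Simulated "true" enjoyment per reward type; in reality this would come
# from a BCI or other telemetry, not be known in advance.
true_enjoyment = {"loot": 0.3, "cosmetic": 0.8, "story": 0.5}

counts = {r: 0 for r in rewards}
totals = {r: 0.0 for r in rewards}

def average(r: str) -> float:
    """Observed mean enjoyment for a reward type (0 if never tried)."""
    return totals[r] / counts[r] if counts[r] else 0.0

def choose(epsilon: float = 0.1) -> str:
    """Explore a random reward with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.choice(rewards)
    return max(rewards, key=average)

for _ in range(1000):
    r = choose()
    observed = true_enjoyment[r] + random.uniform(-0.1, 0.1)  # noisy reading
    counts[r] += 1
    totals[r] += observed

best = max(rewards, key=average)
print(f"most served reward type: {best}")
```

The dark side Ambinder alludes to is visible right in the loop: the same machinery that maximizes enjoyment can be pointed at maximizing engagement or spending instead.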
Ambinder also said that developers could create avatars that mimic your mood, so other players in a multiplayer game could read your expressions better. What if a player is in a state of flow? What if a game needs to be more accessible for someone? What if you were playing a spy game, and you had to actually act like a spy, deceiving other players who could read your emotions?
“All of a sudden, we start becoming able to assess how you’re responding to various elements in a game,” Ambinder said. “We can make small changes to make big changes.”
Gameplay becomes adaptive and personalized. And everyone gets a different experience as games respond to a player, not the other way around. Games could also augment your own memory, helping you to sense things you wouldn’t otherwise detect. Developers could stimulate your nerves and put you into the Matrix directly, Ambinder said.
But there are challenges. Ambinder knows we have to be realistic. So much of the sensory technology is in its infancy.
“Data is incredibly noisy, especially inside the brain,” he said. “There is so much we don’t understand.”
Analyzing it isn’t easy. And it’s pretty hard to get past someone’s skull and find out what is going on inside a person’s brain without being invasive. It’s like being outside of a football stadium and trying to understand what is happening inside the stadium, based on the cheering that you hear, he said.
I’m looking forward to the day when Ambinder can do everything he talked about in his speech. Maybe then, I’ll be able to beat Ninja in a game. Ambinder is optimistic that brain-computer interface technology will deliver better experiences and play, but we’ve got a long way to go.