I’ve spent a number of years studying artificial intelligence, and, until recently, I believed emotional intelligence would remain one of the core advantages left to humans after AI takes over most tasks requiring memorization and logic.
The more I have looked into this area, however, the more convinced I have become that people may no longer be ahead of AI in terms of emotional intelligence.
In his best-selling book, Homo Deus, Yuval Noah Harari writes that humans are essentially a collection of biological algorithms shaped by millions of years of evolution. He goes on to claim that there is no reason to think non-organic algorithms couldn’t replicate and even surpass most organic algorithms.
This sentiment is echoed by Max Tegmark in his book Life 3.0: Being Human in the Age of Artificial Intelligence.
The idea is that our emotions and feelings are the product of organic algorithms, which are shaped by our cultural history, upbringing, and life experiences, and that they can thus be reverse-engineered.
If we agree with Dr. Harari, a professor at the Hebrew University of Jerusalem, and Dr. Tegmark, a professor at MIT, computers will eventually become better at manipulating human emotions than humans themselves.
People are generally not emotionally intelligent
In real-life situations, we are actually pretty bad at emotional intelligence.
Most of us are ignorant about even the most basic emotional triggers we set off in others. We end up in pointless fights, dismiss good arguments because they go against our biases, and judge people based on stereotypes.
We tend to overlook the effects of people’s cultural conditioning, family upbringing, and current life situation, and rarely make the effort to put ourselves in another’s position.
Online, the situation is much worse. We draw hasty and often mistaken conclusions from comments made by people we don’t know at all and lash out at them if they challenge our biases.
This can be partially attributed to our evolutionary tendency to see life as “survival of the fittest,” which predisposes us to take advantage of others and protect our own ego and position.
AI is advancing rapidly at emotional intelligence
While humans are still struggling to understand each other, emotionally intelligent AI has advanced rapidly.
Cameras in phones are ubiquitous, and face-tracking software is already advanced enough to analyze the smallest details of our facial expressions. The most advanced cameras are able to tell fake emotions from the real thing in certain cases. Voice recognition and natural language processing algorithms are also getting better at figuring out our sentiment and emotional state from audio.
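To make the idea of extracting sentiment from text concrete, here is a deliberately minimal sketch of a lexicon-based sentiment scorer. The word lists are hypothetical examples I chose for illustration; production systems use large learned models rather than hand-written lexicons, but the underlying task (mapping language to an emotional signal) is the same.

```python
# Illustrative lexicon-based sentiment scoring.
# The POSITIVE/NEGATIVE word sets are made-up examples, not a real lexicon.

POSITIVE = {"love", "great", "happy", "wonderful", "excited"}
NEGATIVE = {"hate", "awful", "sad", "terrible", "angry"}

def sentiment_score(text: str) -> int:
    """Return a crude score: >0 leans positive, <0 leans negative, 0 neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this wonderful day"))    # positive
print(sentiment_score("I hate this terrible traffic")) # negative
```

Counting words this way is obviously crude, but it shows why scale matters: with billions of messages to learn from, statistical models can pick up far subtler emotional cues than any fixed word list.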
There are even artificially intelligent systems that researchers claim can infer such private attributes as sexual orientation, political leaning, or IQ from our faces.
The technologies used to analyze emotional responses have begun to advance beyond the skills of an average human, and in many areas they exceed the abilities of all but the most skilled humans.
And yet, while these emotionally intelligent systems continue to learn more about us, we have yet to apply ourselves to understanding the full scope of this technology.
Advances in this field are currently almost solely driven by commercial interests. Media giants like Facebook and YouTube, for example, have teams of engineers working to increase user engagement until it approaches the level of addiction, which I wrote about earlier in “The worrying growth of the business of addiction.”
But some of the core developers of these algorithms have become concerned about the power of this technology and its potential to influence or even hijack our minds.
Big data gives emotionally intelligent AIs an edge
In many cases, AI can already leverage our entire online history, and some of the most advanced machine learning algorithms developed at Facebook and Google have been applied to a treasure trove of data from billions of people.
By analyzing our communications, friends, and cultural context, these algorithms are already able to identify many of our desires and emotional triggers.
In fact, the algorithms are getting so complex that in some cases they are becoming impossible to fully control.
Facebook and Google have been accused of creating filter bubbles that affect public opinion and even sway elections. And yet Facebook’s chief security officer, Alex Stamos, recently tweeted that accusations of manipulation are unfair because there is no solution currently available to eliminate this kind of bias.
As artificial intelligence gets better at manipulating users, I see a scenario in which people happily turn even more of their lives over to algorithms. People already touch their phones an average of 2,617 times a day, a level of engagement that suggests a future controlled by technology is fast approaching.
Mikko Alasaarela is the founder of Inbot, a global community of people who help innovative B2B companies grow.
This story originally appeared on Medium. Copyright 2017.