If we need to learn one thing about the numerous AI applications around us today, it is that they are examples of “artificial specific intelligence.” In other words, they rely on algorithms that are great at very particular tasks, such as selecting a movie based on our watching history or keeping our car in the proper lane on the highway. Because it is so highly specialized, AI greatly outperforms human intelligence in those narrowly defined tasks. Take it from a person who recently spent 50 minutes picking a movie that itself lasted 77 minutes.
However, AI’s effectiveness at specialized jobs comes at the price of severe context blindness and a general inability to develop meaningful feedback loops: The typical algorithm does not and cannot consider the wider implications of the decisions it makes and hardly affords us users any control over its inner workings. But the convenience of these algorithms — that show us the best route to our destination or recommend an item for our grocery list — lulls us into a dangerous cult of AI that gradually pushes our human needs out of the picture. User experience (UX) quickly becomes secondary to the algorithm’s success at its narrow task.
This ultimately results in a number of aberrations that we as users should not accept quietly because they have an outsized effect on our daily lives and our cognition.
AIX — today’s trendy and misunderstood fusion of “AI” and “UX”
Today’s purveyors of ultra-specific AI applications make one fundamental error: They adjust user experience parameters to an algorithm’s functionality and not vice versa. In practical terms, this means that algorithms get the final say on what we watch, what we buy online, and in which lane our smart vehicle drives. Meanwhile, we get close to no say in these matters. This is one reason social media nowadays has turned into an endless AI-curated content stream. TikTok is a perfect example of this: Users end up scrolling through a continuous feed of short videos with repetitive content and nowhere else to go.
The tendency to shove content down our throats is evident on movie streaming services, too, where end credits have become a rare commodity. Five to ten seconds after an episode or movie ends, the autoplay feature — enabled by default, of course — takes you to the next movie you might like. There is no time to reflect on what you just saw, and the carefully selected end-credit music is cut short. This is the modus operandi of AIX: It serves the experience of the algorithm by pushing content your way and learning about what you like or dislike. Meanwhile, you have no time to reflect on the content you are consuming and do not learn anything about yourself. Worse yet, your cognitive processes gradually give out, and you become what AI curates for you.
A prime example of AIX-generated identity loss is the deluge of conspiracy theories online. Social media and online forums allow conspiracy theories to run amok because they are “sticky” and viral. Just like those infamous flat Earthers, former believers in the QAnon conspiracy report having been so brainwashed they could not talk about anything else. Millions of social media users were sucked into the AI-curated vortex of ludicrous yet engaging content and primed to seek out their next “fix,” all by well-trained algorithms that did not have user experience in mind.
There is no “U” in “AIX”
AIX intentionally circumvents UX; on a Venn diagram, the two would not share a single touching point, let alone overlap. The two concepts serve fundamentally different ends. AIX looks for the lowest common denominator, a homogenized formula that produces the greatest engagement, and then puts that formula on repeat. In the best case, it’s cat videos or witty memes, but it can get much worse, as incidents around the world show us time and time again.
Homogenization of AI-curated online experiences has another malicious byproduct: dumbing down. When your video streaming platform of choice continually feeds you cat videos, soon enough you will forget that dog videos also exist. If the platform is particularly restrictive and aggressively automated, it won’t allow you to look for dog videos in the first place. Even if you try to break out of the algorithmic mold, these freedoms and functionalities have been quietly excised with the latest round of “bug fixes and performance improvements.”
Technologists around the globe can do better than that. They owe it to their users to dethrone AI, strip it of its outsized power over online experiences, and put UX front and center. The same resources that fuel the homogenization of online offerings and continuous mass consumption can instead support better individualization, empowerment, and self-determination on the web, one application and one platform at a time.
Enter U-AI-X and algorithms for users’ sake
So how can AI-powered services adapt and put UX first? Frankly, it is not that difficult. First and foremost, let AI be AI and let humans be humans. Automate the tedious and boring parts of the experiences you offer, but always leave users in control. Seek out their feedback as to what they liked and — maybe even more importantly — what they disliked. Empower users further by setting clear expectations of what your algorithms can and cannot do for them. Knowing that AI has its limits reminds users to stay active and engaged instead of passively consuming what the machine feeds them.
Harness the power of AI to be your users’ faithful assistant, not their digital kidnapper. Let AI learn from them to help them, not hook them. Instead of stealthily collecting private information, focus on feedback that users actively contribute to make your applications better. Last but not least, strike the right balance with how much you automate. Convenience can quickly turn into cold comfort and boredom, which are the death knell for UX and user satisfaction. Let your users play, and let AI be their companion. Build interaction into your automated models, so that both the algorithm and the user grow and develop thanks to your product or service. It is good for AI, good for UX, and good for society at large.
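The principles above — explicit feedback, transparency, and user control — can be sketched in code. The following Python example is purely illustrative (the class name, methods, and scoring scheme are hypothetical, not any real product’s API): a toy recommender that ranks content only by feedback users actively give, can explain its ranking, and lets the user wipe its learned state on demand.

```python
from collections import defaultdict


class UserControlledRecommender:
    """Hypothetical sketch: ranks items by explicit user feedback only,
    explains its reasoning, and lets the user reset it at any time."""

    def __init__(self):
        # Only explicit signals are stored: +1 for "liked", -1 for "disliked".
        # No stealthy behavioral tracking.
        self.scores = defaultdict(int)

    def record_feedback(self, category, liked):
        # The user actively contributes feedback; nothing is inferred silently.
        self.scores[category] += 1 if liked else -1

    def recommend(self, categories):
        # Rank candidates by accumulated explicit feedback. Unseen categories
        # score 0, so new kinds of content still surface (no "cat videos only").
        return sorted(categories, key=lambda c: self.scores[c], reverse=True)

    def explain(self, category):
        # Transparency: show the user exactly why an item ranks where it does.
        return f"'{category}' score: {self.scores[category]} (explicit feedback only)"

    def reset(self):
        # User control: wipe all learned preferences on demand.
        self.scores.clear()


rec = UserControlledRecommender()
rec.record_feedback("dog videos", liked=True)
rec.record_feedback("conspiracy clips", liked=False)
print(rec.recommend(["conspiracy clips", "dog videos", "cat videos"]))
# Disliked content sinks; never-seen content still appears in the list.
```

The design choice worth noting is the `explain` and `reset` methods: they give the user a window into, and a lever over, the algorithm’s state — the opposite of an opaque engagement-maximizing feed.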
Leif-Nissen Lundbæk (Ph.D.) is Co-Founder and CEO of Xayn. His work focuses mainly on algorithms and applications for privacy-preserving artificial intelligence. He previously worked with Daimler AG and IBM.
Julia Hintz is Lead Designer at Xayn. She previously worked as a freelancer and art director at a Berlin-based advertising agency.