The rise of artificial intelligence (AI) continues to change how enterprises innovate and how they communicate about their businesses. But for the people interacting with products shaped by this technology, what will an AI-empowered future look like? What will it feel like? Empowered by rapidly advancing machine intelligence, product designers are rethinking the fundamental principles that guide the way they work.
Faced with an evolving and exciting set of problems, product designers are asking a new set of questions to reinvent the model of human-computer interaction. How do we empower people rather than overwhelm or terrify them? How do we help people grapple with intelligences that will inevitably exceed their own? How do we think about user experience (UX) design when it is no longer aimed at helping people understand machines, but at helping machines understand human beings and communicate accordingly?
Here are five considerations I’ve discovered thus far.
1. Design for two
We are no longer designing for the user alone. Traditionally, we’ve designed interfaces solely through the lens of the user’s needs — centered on understanding the user’s goals, journeys, and stories. While those needs remain core to the process, we now have to consider the AI as a second agent.
We’re in the nascent era of designing for interactive conversations between two intelligent agents. While some of these designs are voice-based, the shift is broader: we are creating windows into ongoing relationships rather than static screens. Successfully designing for these interactive relationships requires an additional consideration of the machine’s goals, the machine’s journey, and the machine’s needs in any given context.
2. Understand the stakes
Fruitful conversations rely on trust and respect. A human user’s comfort with relying on AI will be directly shaped by how well they understand what the machine is up to (complexity) and how much it will impact their lives (importance). To put it plainly, you might be less concerned about taking product recommendations directly from an AI system than you would be about taking medical advice from one. The more complex and important a set of interactions is, the more the user has to both respect the machine’s competence at coming to a solution and trust that it has the user’s best interests in its (digital) heart.
Designers can approach these problems in a host of ways. For example, a designer can focus on explanations in human terms, be consistently transparent about gaps in the machine’s information, or completely reimagine the approach to error messaging and guidance (my team has taken to calling this “error-driven design”). Most importantly, a designer should first assess an interaction by how well each agent understands the other and how much of an impact the interaction will have.
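To make that assessment concrete, here is a minimal, purely hypothetical sketch — the thresholds, labels, and recommended responses are illustrative, not a prescribed method — treating the stakes of an interaction as a function of complexity and importance:

```python
def stakes(complexity: float, importance: float) -> str:
    """Toy assessment of an AI-driven interaction. Both inputs on a 0-1 scale.

    complexity  - how hard it is for the user to follow what the machine did
    importance  - how much the outcome affects the user's life
    """
    score = complexity * importance
    if score > 0.6:
        return "explain reasoning, surface information gaps, design for errors"
    if score > 0.3:
        return "offer on-demand explanations and confidence cues"
    return "a simple recommendation is probably enough"

# A product recommendation sits at the low end; medical advice at the high end.
print(stakes(complexity=0.2, importance=0.3))
print(stakes(complexity=0.9, importance=0.9))
```

The point is not the numbers but the habit: decide how much explanation and error handling an interaction deserves before designing it.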
3. Design like you talk
A key part of winning trust and respect, and facilitating a productive conversation, is rooted in a slight riff on the old adage about good writing: “Write like you talk.” Since the advent of the PC, designers have been tasked with helping humans think like machines — first at the command line, and later through spreadsheets, database inputs, and form fields. AI-empowered interactions change our goal. We are now designing to help machines meet people where they are. In other words, we get to let people be people again.
Consider Facebook. Recall the site’s simplicity in 2004 versus its ecosystem in 2017. In the early years, the UX was entirely centered around influencing people to organize their lives in SQL-friendly chunks and graphable tags. Fast forward a decade and Facebook is hard at work on Messenger integrations that learn about you just by absorbing your chat threads. Facebook’s designers are saying, “You be you, we’ll figure it out.”
4. Leverage active “listening” (in moderation)
Facebook’s latest moves highlight another important technological advancement for designers: AI can listen. Computers used to wait for input, but today machines collect information through our online exchanges, interactions, and communications (and, thanks to the rise of mobile and IoT devices, the “online” component of our daily lives is nearing ubiquity). Through each new app, new device, and new interface, the collective machine is capable of learning more about us. As designers, we have both a huge opportunity and an immense responsibility.
On the opportunity side, AI can now be an active “listener” in any given interaction. For example, it can gather traffic data by tracking your motion as you use a map app, tune a coaching app by inferring your activity habits from your heart rate, or prioritize possible matches in a dating queue based not on what you say you like but on whose profiles you view and interact with most. That is helpful active listening. On the responsibility side, we have to find the line between helpful listening and nefarious eavesdropping that violates privacy rights (anyone remember the Samsung Smart TV episode?).
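As a hypothetical illustration of that dating-queue example — the signal names and weights below are invented for the sketch, not drawn from any real product — ranking on observed behavior rather than stated preferences can be as simple as scoring implicit signals:

```python
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    """Implicit signals the system has 'listened' to for one candidate profile."""
    views: int            # how often the user opened this profile
    dwell_seconds: float  # total time spent looking at it
    messages_sent: int    # conversations the user started

def implicit_score(s: ProfileSignals) -> float:
    # Illustrative hand-tuned weights; a real system would learn these
    # from behavior rather than hard-coding them.
    return 1.0 * s.views + 0.1 * s.dwell_seconds + 5.0 * s.messages_sent

def rank_matches(candidates: dict[str, ProfileSignals]) -> list[str]:
    """Order candidates by what the user does, not what they say they like."""
    return sorted(candidates, key=lambda name: implicit_score(candidates[name]), reverse=True)

# The user claims to prefer profiles like A, but keeps revisiting and messaging B.
queue = rank_matches({
    "A": ProfileSignals(views=2, dwell_seconds=15, messages_sent=0),
    "B": ProfileSignals(views=9, dwell_seconds=120, messages_sent=2),
})
print(queue)  # ['B', 'A']
```

The same pattern — observe, score, reorder — applies whether the signals are map movements, heart rates, or chat threads; the design question is which signals it is responsible to collect at all.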
5. Convince is the new convert
Despite the machine’s ability to listen and gather information passively in some contexts, the user is still called on to take action during many interactions. Depending on the stakes, that required action might be stressful or go against gut instinct (say, decisions about medical treatment or financial planning). As AI becomes part of high-stakes interactions, designers can look to a cousin of the age-old “conversion” flow for a framework; in other words, convince is the new convert. In a wide variety of realms ripe for AI involvement, machines (and their designers) will find themselves in the position of persuading the user to accept a logical conclusion. It’s a process that will require systems to be imbued with an “understanding” of human emotion, bias, and logical fallacies. After all, human conversation isn’t just about knowledge transfer. It’s about context, mutual understanding, nuance, and trust. Machines will need to understand us if we’re to understand and believe in them.
Designers have their work cut out for them at the edge of the Fourth Industrial Revolution, but the work has amazing and powerful implications. Take a moment and think about this: As humans, our senses define our understanding of the world — our reality is what we see, smell, taste, hear, and touch. If designed successfully, interactions with AI can augment human perception and knowledge, widening the window into a universe of information that humans have yet to fathom. But, of course, windows are two-way. As we find ourselves the creators of an increasingly rich digital primordial soup — through which Unix time counts back to a second big bang — we have a related consideration. We are beginning to create the senses by which intelligent machines will know us and are defining how a nascent intelligence will come to understand our universe in the future.
It’s a pretty amazing time to be a designer.
Andrew Paley is the Director of Product Design at Narrative Science, a company that makes advanced natural language generation (Advanced NLG) for the enterprise.