Treating AI like it’s a person has its benefits. When IBM named its AI “Watson,” the company hoped people would see it as warm and approachable, a “humble genius” even. According to Ann Rubin, vice president of branded content and global creative at IBM, in a conversation with Adweek, it worked.
It’s no surprise then that emerging AI platforms like Salesforce’s Einstein, Amazon’s Alexa, and my company’s Albert followed suit. Some companies are now taking the humanization process even further, positioning their technology as your newest colleague or employee.
This makes enough sense from a task perspective: AI will do work like a colleague. But from a psychological one, the moment AI makes the leap from tech platform to teammate — and takes on a more traditionally human role — humans begin expecting it to be a bit more like them. Or, at least, to exhibit more humanlike traits, like accountability, transparency, and communication skills.
AI’s not good at any of these things. It’s bad at sharing the details of its day, what it’s up to, or why it does what it does. And it’s an awful listener. Some might describe their human colleagues in similar terms, but unlike humans, AI’s inability to communicate is never out of rebellion, introversion, fear of sounding stupid, an inherently brooding demeanor, or any other distinctly human traits.
Today’s autonomous technologies simply aren’t programmed to explain themselves — they’re programmed to perform. Take a self-driving car that has veered into the right lane when its human passenger would have preferred it stay in the lane it was in. There’s currently no way for the vehicle to communicate why it made its decision. This might leave its human passenger frustrated, wondering “Why?”
Humans are programmed to want to know why. They want to understand the reasoning and logic that goes into a decision. And AI has this information in spades. But as AI developers can testify, an AI processes thousands to millions of variables in the same time that a human can process hundreds. Explaining the “why” would not only be difficult, it would be lost on the listener.
The human need to understand why perhaps reveals an underlying assumption: that technology has motives and operates according to its own free will. Neither is true, of course. AI doesn’t have an opinion or agenda; it will do whatever the user wants it to. It just needs guidance and boundaries that ensure it achieves its goals in a way that reflects the user’s own values and priorities.
It’s this precise insight that has inspired the rise of a new breed of AI operators — everyday AI experts who will act as conduits between AI and its human colleagues. One part AI whisperer, another part operations professional, the AI operator’s sole purpose is to strengthen AI in places where human intervention is required: communicating what it’s doing and learning, understanding business goals, and offering its thoughts in the form of insights (rather than raw variables considered in the decision-making process).
These AI experts will emerge from different walks of life. While data analysts who understand patterns and correlations might feel more immediately comfortable interacting with an AI system, those in more operational roles will be better equipped to act as an interface between teams, pushing projects forward and putting the AI’s insights to work to help people do their jobs.
No matter who fills this role, the goal will be the same: Set parameters and guidelines. Speak “robot.” And translate what it has to say back into “human.”
Robots are rough around the edges. Left to its own devices, an AI will get from point A to point B in the quickest way possible, but it won’t always look pretty. An AI operator’s job is to make sure that the risks it takes aren’t at odds with what a company is comfortable with — even if those risks help them meet their goals.
Take an AI platform that’s tasked with executing a retailer’s Black Friday promotion across social media and online channels, from start to finish. Prior to letting it loose on these channels, the AI operator would step in to guide the AI and, in a sense, share with it the strategy and objectives: “Here’s the promotion, here are the creative assets, here are the types of audiences we’re targeting and the channels we’re interested in using. Our goal is to generate 2.5 times more revenue than in other months, and we’re willing to invest more aggressively than usual to do it.”
The AI operator would also make upfront decisions about what “being more aggressive” means. Is it OK to spend 30 percent upfront on audience discovery? Can it spend 20 times more than usual on keywords? Should it focus primarily on new audiences or should it also focus on existing customers?
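To make this concrete, the operator’s upfront decisions could be captured as structured campaign parameters with guardrails. This is a minimal hypothetical sketch — every field name and function here is invented for illustration, not taken from any real platform:

```python
# Hypothetical campaign parameters an AI operator might set upfront.
# All names are invented for illustration; real platforms will differ.
black_friday_campaign = {
    "revenue_goal_multiplier": 2.5,            # 2.5x more revenue than other months
    "channels": ["social", "display", "search"],
    "audiences": {"new": True, "existing": True},
    "aggressiveness": {
        "audience_discovery_budget_pct": 30,   # OK to spend 30% upfront on discovery
        "keyword_bid_multiplier": 20,          # up to 20x the usual keyword spend
    },
}

def within_guardrails(spend_pct, bid_multiplier, params):
    """Check a proposed AI action against the operator-defined boundaries."""
    agg = params["aggressiveness"]
    return (spend_pct <= agg["audience_discovery_budget_pct"]
            and bid_multiplier <= agg["keyword_bid_multiplier"])

print(within_guardrails(25, 10, black_friday_campaign))  # True: inside the limits
print(within_guardrails(40, 10, black_friday_campaign))  # False: exceeds discovery budget
```

The design choice mirrors the article’s point: the operator encodes “what aggressive means” once, and the AI’s autonomous decisions are then checked against those boundaries rather than debated case by case.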
The AI can go either way on each of these decisions. The AI operator knows how to tell it to go in the direction the organization wants it to.
Unlike humans, who often collaborate with one another to figure out how to move projects forward, AI only wants to be told the what in very general terms. It requires less deliberation and more direct input. Take the Black Friday example, where the directives were: “2.5 times more revenue, increased investment, these audiences, those channels, go.” Once the AI operator tells it this, it will determine the how on its own.
AI’s disinterest in talking through its approach doesn’t mean collaboration is off the table. It just looks a little different. AI’s way of collaborating involves offering analysis and insights that the AI operator can then use to make decisions about strategy and feed an updated plan back to it.
This basic give-and-take relationship is one of the many ways AI is redefining what it means to be a colleague in the age of robots. While humans might have invented workplace dynamics, robots are stepping in to redefine them. Somewhere between them and us is the AI operator, whose ability to speak robot will help man and machine move beyond these initial growing pains.
Tomer Naveh is chief technology officer at Albert, an artificial intelligence marketing platform for the enterprise.