When lingerie brand Cosabella announced that it’d moved away from its digital marketing agencies in favor of artificial intelligence, companies across the board took note. The two aspects of its decision that got the most attention were revenue (how much money did AI produce?) and personnel (how many people did AI replace?).

These two questions go hand in hand because, perhaps contrary to popular belief, not all companies are jumping at the opportunity to replace staff with autonomous technology. They’re eager to hear about AI’s potential to scale their productivity and revenue and to work at a pace human teams alone can’t match. But they’re often cautious about how AI will ultimately transform the jobs we no longer need into new ones that we do.

In the case of Cosabella, technology created as many jobs as it replaced. It freed up the company’s internal and external marketing teams from gathering, processing, and responding to data, and shifted their focus to creative and strategy. It revealed the need for high-frequency delivery of new creative assets, in order to combat consumers’ creative fatigue, and thereby created more demand for creative professionals. And it gave rise to a new form of human-robot collaboration, where technology focused on the data aspects of marketing and humans focused on the emotional and subjective aspects.

This transformation is not unique to AI in marketing. Neither is the emergence of hybrid man-machine teams in the workplace and beyond. In a recent Wall Street Journal article, Ken Goldberg, professor of industrial engineering and operations research at U.C. Berkeley, noted a growing trend toward multiplicity, or diverse groups of people and machines working together to solve problems. Notably, these teams follow a somewhat gestaltist dynamic, in which man and machine produce better outcomes working together than either produces working alone.

Early AI adopters like Cosabella are putting this theory into action as man-machine teams emerge across industries. They’re not only giving us insight into how these teams work and what they look like; they’re also offering early feedback that is likely to shape the entire trajectory of the industry.

So, what have we learned so far?

AI is only as autonomous as humans let it be

What often gets lost in conversations about AI is the fact that artificial intelligence can’t do its job on its own. It can’t carry out an entire process without collaborating with humans. While it can process, analyze, and act on data without asking for permission, it needs upfront parameters that tell it what to learn, how to learn it, and the types of decisions it’s supposed to make. It also needs human input in many formats along the way.

This dynamic reveals that machines that work autonomously are — perhaps ironically — dependent on humans to tell them what to do.

Humans and robots complement each other

When a human tries to do a machine’s job, scale and pace are compromised. When a machine tries to do a human’s job, a certain intimacy goes missing. When they come together, the abstract tasks performed by humans are amplified by AI’s quantitative insights.

This is not surprising, considering the clear division of labor between humans and AI: Humans tackle all things creative, strategic, intuitive, and emotional, while data in the form of words and numbers is unquestionably the domain of AI.

This division becomes difficult to refute after comparing a computer’s capacity to process data to that of even the largest human teams. A human can juggle a hundred or so data points, whereas AI can process millions per minute. But it’s crucial that humans be the ones who take the resulting insights and translate them into a narrative that resonates with audiences on an emotional level.

AI is not good at communicating what it’s doing

Today’s AI solutions don’t have an easy way of communicating what they’re doing. Early adopters are left wondering why the machine did x instead of y, or which of the thousands of variables the AI considered during its rapid decision-making process.

A self-driving car, for instance, might veer into the right lane when you would have stayed in your lane had you been the one driving. But there’s no way for the vehicle to communicate why it made its decision. This leaves the passenger wondering why, in the face of AI’s only current explanation: “Because I said so.”

Humans are not good at not knowing

What we know from watching organizations adopt AI for the first time is that their instinct is to try to track every one of the machine’s decisions. But AI-driven processes can be 100 times more complicated than manual ones, or more, and the number of decisions the AI makes daily is overwhelming.

If nothing else, businesses crave transparency to ensure that the AI shares their — for lack of a better word — values. Without this transparency, humans are left to work on good faith that the technology has their best interest in mind. They’re also forced to exist in a perennial state of not knowing why.

This is especially frustrating since business users’ instinct is to treat the AI as a colleague, but they aren’t able to communicate with it or hold it accountable.

It’s hard to give AI feedback in layman’s terms

Just as AI isn’t great at telling its human colleagues what it’s doing, most AI isn’t equipped with an easy way to receive users’ non-technical feedback. In other words, telling an AI that you don’t like what it’s doing, or that you want it to do something else, is currently difficult.

For this reason, a new crop of AI operators — both technical and non-technical users trained as AI experts — is emerging to act as intermediaries between human and robot colleagues. As AI evolves to prioritize transparency and reporting to human colleagues, these operators will act as the conduit between organizations and machines, introducing new rules, business logic, and feedback that will make the AI more useful and communicative over time.

Business AI is still in early days, but if today’s early adopters’ feedback is taken seriously — and I’m sure it will be — the end result will be a shift away from humans doing tasks by themselves to working side-by-side with machines in a collaborative and communicative manner.

Or Shani is the chief executive officer of Albert, an AI marketing platform.