Humans and AI systems work better when they tackle problems together. That’s according to research from Microsoft chief scientist Eric Horvitz, Microsoft Research principal researcher Ece Kamar, and Harvard University student and Microsoft Research intern Bryan Wilder. The paper appears to be one of the first Horvitz has published since Microsoft named him chief scientific officer in March, the first such role in company history. Horvitz joined Microsoft as a principal researcher in 1993 and led Microsoft Research operations from 2017 to 2020.

The paper, released earlier this month, studies the performance of human-AI teams on two computer vision tasks: galaxy classification and breast cancer metastasis detection. With the proposed approach, the AI model determines which tasks are best performed by humans and which are better handled by AI.

The learning strategy is optimized to combine machine predictions and human contributions, with AI focusing on problems that are difficult for humans and humans tackling problems that can be tough for machines to figure out. In effect, machine predictions made with low confidence are routed to a human expert. The researchers report that joint training can improve performance on the Galaxy Zoo galaxy classification task, reducing loss by 21-73%, and deliver up to a 20% performance improvement on CAMELYON16.
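To make the routing idea concrete, here is a minimal sketch of confidence-based deferral, assuming a simple fixed threshold (the paper instead *learns* the model jointly with the deferral decision; the function name and threshold value here are illustrative, not from the paper):

```python
import numpy as np

def fuse_predictions(machine_probs, human_labels, threshold=0.9):
    """Route low-confidence machine predictions to a human expert.

    machine_probs: (n, k) array of class probabilities from the model.
    human_labels:  (n,) array of human judgments (in practice the human
                   would only be queried on the deferred instances).
    threshold:     confidence below which the machine defers to the human.
    Returns the fused labels and a mask of which instances were deferred.
    """
    confidence = machine_probs.max(axis=1)          # model's top-class probability
    machine_labels = machine_probs.argmax(axis=1)   # model's own prediction
    defer = confidence < threshold                  # instances sent to the human
    fused = np.where(defer, human_labels, machine_labels)
    return fused, defer
```

A confident prediction (e.g. 0.95) is kept, while an uncertain one (e.g. 0.6) is replaced by the human's judgment.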

“Optimizing machine learning performance in isolation overlooks the common situation where human expertise can contribute complementary perspectives, despite humans having their own limitations, including systematic biases,” the paper reads. “We develop methods aimed at training the machine learning model to complement the strengths of the human, accounting for the cost of querying an expert. While human-machine teamwork can take many forms, we focus here on settings where a machine takes on the tasks of deciding which instances require human input and then fusing machine and human judgments.”
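The quoted idea of "accounting for the cost of querying an expert" can be sketched as a per-instance team objective. This is a hedged illustration of the general learning-to-defer trade-off, not the paper's exact loss; the function name and the fixed `query_cost` parameter are assumptions:

```python
def team_objective(model_loss, human_loss, defer_prob, query_cost):
    """Hypothetical per-instance team loss: when the machine defers
    (with probability defer_prob), the team pays the human's error plus
    a fixed cost for consulting the expert; otherwise it pays the
    machine's own loss. Training would push defer_prob toward 1 only
    where the human's accuracy justifies the querying cost."""
    return (1.0 - defer_prob) * model_loss + defer_prob * (human_loss + query_cost)
```

When the human is much more accurate than the model on an instance, deferring lowers the objective despite the query cost; when the model is already reliable, deferring only adds cost.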

The paper, released May 1 on the preprint repository arXiv, is titled “Learning to Complement Humans” and continues years of work on human-machine interaction and cooperation. Kamar and Horvitz collaborated on a paper published in 2012 that demonstrated how AI can fuse human and machine labor, using Galaxy Zoo to compare combined performance against humans alone. In 2007, Horvitz worked on policies for determining when human receptionists should intervene in customer conversations with automated receptionist systems.

“We see opportunities for studying additional aspects of human-machine complementarity across different settings,” the paper reads. “Directions include optimization of team performance when interactions between humans and machines extend beyond querying people for answers, such as settings with more complex, interleaved interactions and with different levels of human initiative and machine autonomy.”

In research on a different sort of teamwork, DeepMind and OpenAI have studied machine agents working together in games such as Quake III and hide-and-seek, respectively.