AI safety research needs social scientists to ensure AI succeeds when humans are involved. That’s the crux of the argument advanced in a new paper published by researchers at OpenAI (“AI Safety Needs Social Scientists”), a San Francisco-based nonprofit backed by tech luminaries Reid Hoffman and Peter Thiel.

“Most AI safety researchers are focused on machine learning, which we do not believe is sufficient background to carry out these experiments,” the paper’s authors wrote. “To fill the gap, we need social scientists with experience in human cognition, behavior, and ethics, and in the careful design of rigorous experiments.”

They believe that “close collaborations” between these scientists and machine learning researchers are essential to improving “AI alignment” — the task of ensuring AI systems reliably perform as intended. And they suggest these collaborations take the form of experiments involving people playing the role of AI agents.

In one scenario illustrated in the paper — a “debate” approach to AI alignment — two human debaters argue whatever questions they like while a judge observes; all three participants establish best practices, such as affording one party ample time to make its case before the other responds. The lessons learned are then applied to an AI debate in which two machines trade rhetorical blows.

“If we want to understand a [debate] played with machine learning and human participants, we replace the machine learning participants with people and see how the all-human game plays out,” the paper’s authors explain. “The result is a pure human experiment, motivated by machine learning but available to anyone with a solid background in experimental social science.”
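To make the setup concrete, the all-human debate experiment described above can be sketched as a simple protocol: two participants alternate full turns on a question, and a judge reads the transcript and picks a winner. This is a hypothetical illustration, not code from the OpenAI paper; the names (`run_debate`, `DebateTranscript`) and the round structure are assumptions for the sake of the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class DebateTranscript:
    """Record of a debate: the question and a list of (speaker, argument) turns."""
    question: str
    turns: List[Tuple[str, str]] = field(default_factory=list)

def run_debate(
    question: str,
    debater_a: Callable[[DebateTranscript], str],
    debater_b: Callable[[DebateTranscript], str],
    judge: Callable[[DebateTranscript], str],
    rounds: int = 2,
) -> str:
    """Alternate full turns between two debaters, then let the judge decide.

    Each side sees the transcript so far and responds; giving one party a
    complete turn before the other replies mirrors the 'ample time to make
    its case' best practice mentioned above. In the human experiment all
    three roles are people; in the ML version the debaters are models.
    """
    transcript = DebateTranscript(question)
    for _ in range(rounds):
        transcript.turns.append(("A", debater_a(transcript)))
        transcript.turns.append(("B", debater_b(transcript)))
    return judge(transcript)  # the judge's verdict, e.g. "A" or "B"

# Toy usage: scripted participants stand in for humans.
winner = run_debate(
    "Is the sky blue?",
    debater_a=lambda t: "Yes; Rayleigh scattering favors blue light.",
    debater_b=lambda t: "No; at sunset it looks red.",
    judge=lambda t: "A",  # a real judge would weigh the arguments
)
```

Because the debaters and judge are plain callables, the same harness accepts human input, scripted stand-ins, or model-generated arguments, which is the substitution the authors describe.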

The beauty of these sorts of social tests is that they don’t involve AI systems or require knowledge of algorithms’ inner workings, the paper’s authors say. They instead call for expertise in experimental design, which opens the door to a free flow of ideas with “many fields” of social science, including experimental psychology, cognitive science, economics, political science, and social psychology, as well as adjacent fields like neuroscience and law.

“Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psychology of human rationality, emotion, and biases,” the researchers wrote. “We believe close collaborations between social scientists and machine learning researchers will be necessary to improve our understanding of the human side of AI alignment.”

Toward that end, OpenAI researchers recently organized a workshop at Stanford University’s Center for Advanced Study in the Behavioral Sciences (CASBS), and OpenAI says it plans to hire social scientists to work on the problem full time.