How do humans learn to cooperate? It’s an interesting question, one behavioral anthropologists have been studying for decades. Social norms — that is, common understandings or informal rules, like dining etiquette and fashion sense — are thought to play a part, but it’s tough to measure the extent to which they shape society and how they’re affected by other factors.
Fortunately, that’s where artificial intelligence (AI) comes in.
In a newly published paper on the preprint server arXiv.org (“Understanding The Impact of Partner Choice on Cooperation and Social Norms by means of Multi-agent Reinforcement Learning”), scientists describe an AI system trained using reinforcement learning — a technique that uses rewards to drive agents toward goals — to understand how interactions within a society affect the overall societal outcome.
“We first stud[ied] the emergence of norms and then the emergence of cooperation in presence of norms,” the paper’s authors explained. “[Norms] have been shown to have a great impact on the collective outcomes and progression of a society, [but] while it has been argued that normative behavior emerges from societal interactions, it is not clear as to what behavior is likely to emerge given some societal configuration.”
The researchers modeled two social dilemmas as games: a cooperation-based game that exposed tensions between individual goals and the group’s goal, and a coordination-based game that examined conformity, with each agent having only a partial observation of its environment. The agents — a group of 50 in total — were tasked with achieving the highest cumulative score while trying to maximize their individual scores. The emergence of norms was assessed by tracking the number of agents that converged to a particular convention.
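To make the coordination setup concrete, here is a minimal sketch (not the paper’s implementation) of how a norm can emerge in a population of simple learners: each agent keeps a value estimate per action, is rewarded only when it matches a randomly assigned partner, and convergence is measured by counting agents that settle on the same action. The population size (50) and episode count (10,000) come from the article; the learning and exploration rates are illustrative assumptions.

```python
import random

N_AGENTS = 50      # population size, as in the study
N_ACTIONS = 2      # two candidate conventions to coordinate on
EPISODES = 10_000  # episode count mentioned for the coordination game
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate (assumed)

random.seed(0)
# each agent's value estimate for each action
q = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]

def choose(agent):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[agent][a])

for _ in range(EPISODES):
    # random pairing; partner choice is a separate mechanism in the paper
    order = random.sample(range(N_AGENTS), N_AGENTS)
    for i, j in zip(order[::2], order[1::2]):
        ai, aj = choose(i), choose(j)
        reward = 1.0 if ai == aj else 0.0   # payoff only on coordination
        q[i][ai] += ALPHA * (reward - q[i][ai])
        q[j][aj] += ALPHA * (reward - q[j][aj])

# norm emergence: how many agents greedily prefer the modal action
greedy = [max(range(N_ACTIONS), key=lambda a: q[i][a]) for i in range(N_AGENTS)]
norm = max(set(greedy), key=greedy.count)
print(f"{greedy.count(norm)}/{N_AGENTS} agents converged to action {norm}")
```

Because matching is the only source of reward, small random asymmetries get amplified until most of the population settles on one action — a toy version of a norm stabilizing.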
In experiments, individual agents repeatedly interacted with others either by choice or at random and learned behavior from their experiences. After 10,000 episodes of the coordination game, those that had a choice of partner were able to sustain norms and resist change in the presence of a new agent type — “influencing” agents — that played a fixed strategy. Roughly 5,000 episodes of the cooperative game, meanwhile, suggested that partner choice promoted collaboration in the presence of norms; under a weak norm where agents had the freedom to choose their partners, agents paired themselves almost exclusively with other agents who’d been cooperative in the past.
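The partner-choice effect described above can be sketched as follows. This is a hypothetical simplification, not the paper’s setup: agents share a running estimate of each individual’s cooperativeness, and a chooser picks the most reputable of a few random candidates, so defectors are quickly avoided. The fixed 40/10 split of cooperators and defectors, the candidate-pool size, and the update rate are all illustrative assumptions.

```python
import random

N_AGENTS = 50
EPISODES = 5_000   # episode count mentioned for the cooperative game
random.seed(1)

# fixed strategies for simplicity: 40 cooperators, 10 defectors (assumed split)
cooperates = [True] * 40 + [False] * 10
reputation = [0.5] * N_AGENTS   # shared estimate of each agent's cooperativeness
pair_counts = [0] * N_AGENTS    # how often each agent is chosen as a partner

for _ in range(EPISODES):
    chooser = random.randrange(N_AGENTS)
    # partner choice: pick the most reputable of 5 random candidates
    candidates = random.sample([k for k in range(N_AGENTS) if k != chooser], 5)
    partner = max(candidates, key=lambda k: reputation[k])
    pair_counts[partner] += 1
    # update the partner's reputation toward its observed behavior
    observed = 1.0 if cooperates[partner] else 0.0
    reputation[partner] += 0.1 * (observed - reputation[partner])

coop_rate = sum(pair_counts[:40]) / sum(pair_counts)
print(f"share of pairings with cooperators: {coop_rate:.2f}")
```

Once a defector is chosen even once, its reputation drops below that of any cooperator, so it is almost never selected again — the “untrustworthy agents are avoided” dynamic the researchers describe, in miniature.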
“[I]t becomes harder to influence or regulate societal behavior through assimilation or supervision where agents are free to make a choice as to who they can interact within the society,” the researchers wrote. “This is the key factor that stabilizes cooperation as untrustworthy agents are avoided and cooperative behavior can be reinforced as the social norm is strengthened.”
They believe the findings might be used as a basis for the design of future autonomous systems, and perhaps provide insights into the emergence of cooperation in both human and animal societies.