Researchers from Google, Microsoft, and IBM have joined together to create Responsible AI Licenses (RAIL). With its first two offerings released in the past two weeks, RAIL now provides end user license agreements and source code license agreements that software providers, researchers, and developers can include with their software to prevent harmful use of their technology.

RAIL may create additional license agreements based on input from AI researchers and the broader AI community.

“We recognized the risks our work can sometimes bring to the world; that led us to think about potential ways of doing this,” cofounder Danish Contractor, who works at IBM Research in New Delhi, said in a phone interview with VentureBeat. “Our goal is to empower developers to be able to restrict the use of their AI technology and to prevent harmful and irresponsible use.”

“The licenses in some sense could be a more grounded way of enforcing responsible use than just simply ethical guidelines, which in the real legal world, for example, are just fruitless. They’re just a promise of something that we all aspire to be and do, but there’s no real way of enforcing that,” he said.

Potential violations of a RAIL agreement, according to its creators, include an insurance provider using a fitness tracking app to raise customer premiums, or a camera filter being repurposed to create deepfakes that deceive people and sow societal discord.

A potential complication is the fact that AI systems can often be used for dual purposes: The same facial recognition software used to identify missing kids can also be trained to track down political dissidents.

Over time, RAIL plans to become a “participatory agent of change” that engages with developers, technology providers, companies, researchers, and other members of the community, including those who violate clauses for responsible use, Contractor said.

The initial target audience for RAIL is AI researchers who release papers and source code at popular machine learning conferences like NeurIPS (formerly NIPS), the recently held Association for the Advancement of Artificial Intelligence (AAAI) conference, or the International Conference on Machine Learning (ICML). RAIL team members regularly attend such conferences as part of their work.

AI research paper publication is up nearly 13 percent in the past five years, according to business analytics company Elsevier.

Julia Haines, a senior user experience researcher at Google in San Francisco, said she sees RAIL as an “ever-evolving entity rooted in engagement with the broader community” both to develop licenses and to stay informed about emerging irresponsible use cases of AI.

“The notion is not just to engage the tech community, but to engage domain experts in the areas in which AI is increasingly being used to understand what their concerns about malicious or negligent misuse are and to just try to stay on the cusp of the curve there with the broader community,” she said.

RAIL’s cofounders met while participating in the Association for Computing Machinery (ACM) Future of Computing Academy, a group cochaired by Northwestern University assistant professor Brent Hecht.

Growing up, Hecht said, open source software was seen as an unquestionable good, but in recent years many have come to understand there are ways in which openness can cause harm.

“We all expect this to be controversial,” Hecht said, “and one of the big value adds here, in addition to licenses, is the discussion that we hope to start.”

RAIL’s initial licenses are the result of conversations between the group and patent lawyer Christopher Hines over the better part of the past year. Hines is the author of both license agreements.

“I think it’s fair to say that there will likely be several legal challenges. I mean, at the end of the day, what we’re trying to do is control behavior using the legal mechanism of intellectual property rights and contract law,” Hines said.