The AI industry seems to follow a version of Newton’s third law: for every positive use case, there’s an equal and opposite negative one.

Last week, a developer published, and subsequently pulled, DeepNude, an app that used AI trained on thousands of pictures of nude bodies to replace women’s clothing in photos with synthesized nude body parts. Vice’s discovery of DeepNude came just days after the publication of a remarkable study by astrophysicists at the Flatiron Institute and Carnegie Mellon University detailing an algorithm, the Deep Density Displacement Model (D3M), capable of simulating in milliseconds how gravity shapes the large-scale structure of the universe over billions of years.

In the midst of all this, an Amazon product manager described at Ignite Seattle how he used AI to discourage his pet cat from bringing dead prey into the house, while researchers at India-based ecommerce company Myntra Designs proposed a model, trained on a data set of shoppers’ preferences and body shapes, that can predict before purchase how likely an item is to be returned. And on Sunday, researchers at MIT and IBM launched a web tool, GAN Paint Studio, that lets people upload photographs and liberally edit the appearance of depicted buildings, flora, and fixtures.
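The Myntra work is mentioned here only at a high level, but the core idea, a classifier that maps shopper and item features to a probability of return before an order ships, can be sketched in a few lines. The features, data, and model below are invented for illustration and are not drawn from the researchers’ paper.

```python
# Hypothetical sketch of a purchase-return classifier; the feature names and
# data are invented for illustration and are not from Myntra's research.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy features: the shopper's historical return rate, how far the item's size
# deviates from the shopper's usual fit, and a normalized item price.
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),   # historical return rate
    rng.normal(0, 1, n),    # size deviation from preferred fit
    rng.uniform(0, 1, n),   # normalized price
])
# Synthetic label: 1 if the order was returned.
y = (X[:, 0] + 0.5 * np.abs(X[:, 1]) + rng.normal(0, 0.3, n) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Estimated probability that a prospective order will be returned,
# available before the purchase is completed.
print(model.predict_proba(X_test[:3])[:, 1])
```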

With machine learning approaches rapidly gaining sophistication and barriers to development crumbling, the need to engender a sense of responsibility in those unleashing novel AI on the world is growing more acute by the day. As recent events amply demonstrate, scientists and practitioners must carefully consider the societal impact of their creations, whether minor or potentially paradigm-shifting.

San Francisco AI research firm OpenAI engaged with critics directly after publishing GPT-2, a natural language model capable of generating convincingly human-like prose. The company chose not to release the data set used to train the model, three of the four trained language models, or the training code, in part out of concern that doing so might open the door to abuse by bad actors.

“We see some restraint on publication as a healthy characteristic of technological fields with transformative societal consequences,” OpenAI said. “In this case, we were guided initially by a rough consensus within the organization that these results were qualitatively different from prior ones … We eventually hope to create a global community of AI practitioners that think about the information hazards of particular types of releases.”
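For a sense of what “convincingly human-like prose” means in practice, here is a minimal sketch of sampling from the small GPT-2 model OpenAI did release, using the third-party Hugging Face transformers library rather than OpenAI’s own release code; the prompt is invented for illustration.

```python
# Minimal sketch: sampling text from the publicly released small GPT-2 model
# via the third-party Hugging Face "transformers" library (not OpenAI's code).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Researchers warn that synthetic text"  # illustrative prompt
samples = generator(prompt, max_length=60, do_sample=True,
                    num_return_sequences=2)

for sample in samples:
    print(sample["generated_text"])
    print("---")
```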

Encouragingly, OpenAI isn’t standing alone in this. Researchers from Google, Microsoft, and IBM joined forces in February to launch Responsible AI Licenses (RAIL), a set of end-user and source code license agreements with clauses restricting the use, reproduction, and distribution of potentially harmful AI technology. Julia Haines, a senior user experience researcher at Google in San Francisco, described RAIL as an “ever-evolving entity rooted in [conversation] with the broader community” — both to develop licenses and to stay abreast of emerging AI use cases.

“The notion is not just to engage the tech community, but to engage domain experts in the areas in which AI is increasingly being used,” she told VentureBeat in an earlier interview, “to understand what their concerns about malicious or negligent misuse are and to just try to stay on the cusp of the curve there with the broader community.”

IBM has separately proposed voluntary factsheets that would be completed and published by companies that develop and provide AI, with the goal of increasing the transparency of their services.
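IBM’s proposal is a documentation practice rather than software, but the kind of information such a factsheet might capture can be sketched as a simple structured record; the field names below are illustrative assumptions, not IBM’s template.

```python
# Illustrative sketch of an AI service "factsheet" as a structured record.
# Field names are assumptions for illustration, not IBM's official template.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelFactsheet:
    service_name: str
    intended_use: str
    training_data: str                 # provenance and licensing of the data
    evaluation_metrics: dict           # e.g. accuracy, error rates by group
    known_limitations: list = field(default_factory=list)
    safety_and_bias_checks: list = field(default_factory=list)

sheet = ModelFactsheet(
    service_name="example-vision-api",
    intended_use="Tagging retail product photos; not for identifying people",
    training_data="Licensed catalog images collected 2018-2019",
    evaluation_metrics={"top1_accuracy": 0.91},
    known_limitations=["Accuracy degrades on low-light images"],
    safety_and_bias_checks=["Per-category error audit"],
)

# Publishing the record alongside the service is the transparency step.
print(json.dumps(asdict(sheet), indent=2))
```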

Industry self-policing may prove insufficient in the face of present and future challenges, however, and the public is skeptical of AI stakeholders’ professed neutrality. In a recent Edelman survey, close to 60% of the general public and 54% of tech executives said policies to guide AI’s development should be imposed by a “public body,” with less than 20% (15% and 17%, respectively) arguing that the industry should regulate itself. (To this end, the European Commission will soon pilot company and public agency guidelines it developed for the “responsible” and “ethical” use of AI.)

That’s not to suggest organizations should look to governmental guidance in lieu of crafting their own policies — on the contrary. Now more than ever, they need to actively work toward responsible AI design principles that limit the spread of deepfakes, autonomous weapons, obtrusive facial recognition, and other objectively harmful applications without inhibiting the open exchange of techniques and technologies. It won’t be easy, but it’s necessary.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

P.S. Please enjoy this video of GAN Paint Studio, an AI-powered image editor.