The U.S. Department of Defense (DoD) visited Silicon Valley Thursday to ask for ethical guidance on how the military should develop or acquire autonomous systems. The public comment meeting was held as part of a Defense Innovation Board effort to create AI ethics guidelines and recommendations for the DoD. A draft copy of the report is due out this summer.

Microsoft director of ethics and society Mira Lane posed a series of questions at the event, which was held at Stanford University. She argued that AI doesn’t need to be implemented the way Hollywood has envisioned it and said it is imperative to consider the impact of AI on soldiers’ lives, responsible use of the technology, and the consequences of an international AI arms race.

“My second point is that the threat gets a vote, and so while in the U.S. we debate the moral, political, and ethical issues surrounding the use of autonomous weapons, our potential enemies might not. The reality of military competition will drive us to use technology in ways that we did not intend. If our adversaries build autonomous weapons, then we’ll have to react with suitable technology to defend against the threat,” Lane said.

“So the question I have is: ‘What is the worldwide role of the DoD in igniting the responsible development and application of such technology?'”

Lane also urged the board to keep in mind that the technology can extend beyond military applications to adoption by law enforcement.

Microsoft was recently criticized by Senator Marco Rubio, who called the company complicit in human rights abuses because Microsoft Research Asia worked with AI researchers affiliated with the Chinese military. Microsoft also reportedly declined to sell facial recognition software to law enforcement in California.

Concerns aired at the meeting included unintentional war, unintended identification of civilians as targets, and the acceleration of an AI arms race with countries like China.

Multiple speakers expressed concern about the use of autonomous systems for weapon targeting and spoke about the United States' role as a leader in the production of ethical AI. Some called for participation in multinational AI policy and governance initiatives, such as efforts currently underway at the World Economic Forum, the OECD, and the United Nations.

Retired Army colonel Glenn Kesselman called for a more unified national AI strategy.

In February, President Trump issued the American AI Initiative executive order, which directs the National Institute of Standards and Technology to establish federal AI guidelines. The U.S. Senate is currently considering legislation like the Algorithmic Accountability Act and the Commercial Facial Recognition Privacy Act.

“It’s my understanding that we have a fragmented policy in the U.S., and I think this puts us at a very serious not only competitive disadvantage, but a strategic disadvantage, especially for the military,” he said. “So I just wanted to express my concern that senior leadership at the DoD and on the civilian side of the government really focus in on how we can match this very strong initiative the Chinese government seems to have so we can maintain our leadership worldwide ethically but also in our capability to produce AI systems.”

About two dozen public comments were heard from people representing organizations like the Campaign to Stop Killer Robots, as well as university professors, contractors developing tech used by the military, and military veterans.

Each person in attendance was given up to five minutes to speak.

The public comment session held Thursday is the third and final such session, following gatherings held earlier this year at Harvard University and Carnegie Mellon University, but the board will continue to accept public comments until September 30, 2019. Written comments can be shared on the Defense Innovation Board website.

AI initiatives are on the rise in Congress and at the Pentagon.

The DoD launched the Joint AI Center last summer, and in February the Pentagon released its first declassified AI strategy, which says the center will play a central role in future plans.

The Defense Innovation Board announced the center's official opening and launched its AI ethics initiative at the same time.

Other members of the board include former Google CEO Eric Schmidt, astrophysicist Neil deGrasse Tyson, Aspen Institute CEO Walter Isaacson, and executives from Facebook, Google, and Microsoft.

The process could end up being influential, not just in AI arms race scenarios, but in how the federal government acquires and uses systems made by defense contractors.

Stanford University professor Herb Lin said he worries about people's tendency to trust computers too much and suggested that AI systems used by the military be required to report how confident they are in the accuracy of their conclusions.

“AI systems should not only be the best possible. Sometimes they should say ‘I have no idea what I’m doing here, don’t trust me.’ That’s going to be really important,” he said.

Toby Walsh, an AI researcher and professor at the University of New South Wales in Australia, joined others in signing an open letter calling for an international ban on autonomous weapons to prevent an AI arms race.

The open letter first began to circulate in 2015 and has since been signed by more than 4,000 AI researchers and more than 26,000 other people.

Unlike nuclear proliferation, which requires rare materials, Walsh said, AI is easy to replicate.

“We’re not going to keep a technical lead on anyone,” he said. “We have to expect that we can be on the receiving end, and that could be rather destabilizing and more and more create a destabilized world.”

Future of Life Institute cofounder Anthony Aguirre also spoke.

The nonprofit shared 11 written recommendations with the board, including the idea that human judgment and control should always be preserved and a call for a central repository of autonomous systems used by the military, overseen by the Inspector General and congressional committees.

The group also urged the military to adopt a rigorous testing regimen intentionally designed to provoke civilian casualties in simulated scenarios.

“This testing should have the explicit goal of manipulating AI systems to make unethical decisions through adversarial examples, to avoid hacking,” he said. “For example, foreign combatants have long been known to use civilian facilities such as schools to shield themselves from attack when firing rockets.”

OpenAI research scientist Dr. Amanda Askell said some challenges may only be foreseeable for people who work with the systems, which means industry and academia experts may need to work full-time to guard against the misuse of these systems, potential accidents, or unintentional societal impact.

If closer cooperation between AI researchers and the defense community is necessary, steps need to be taken to improve that relationship.

“It seems at the moment that there is a fairly large intellectual divide between the two groups,” Askell said.

“I think a lot of AI researchers don’t fully understand the concerns and motivations of the DoD and are uncomfortable with the idea of their work being used in a way that they would consider harmful, whether unintentionally or just through lack of safeguards. I think a lot of defense experts possibly don’t understand the concerns and motivations of AI researchers.”

Peter Dixon, a former U.S. Marine who served tours of duty in Iraq in 2008 and Afghanistan in 2010, said the makers of AI should consider that systems used to identify people in drone footage could save lives today.

His company, Second Front Systems, currently receives DoD funding for the recruitment of technical talent.

“If we have an ethical military, which we do, are there more civilian casualties that are going to result from a lack of information or from information?” he asked.

After public comments concluded, Dixon told VentureBeat that he understands why some AI researchers view AI as an existential threat, but he reiterated that the technology can be used to save lives and said critics shouldn’t discount this reality because of some “Skynet boogeyman.”

Before the start of public comments, DoD deputy general counsel Charles Allen said the military will create AI policy in adherence to international humanitarian law, a 2012 DoD directive (3000.09) that limits the use of autonomy in weaponry, and the military’s 1,200-page law of war manual.

Allen also defended Project Maven, an initiative to improve drone video object identification with AI, something he said the military believes could help “cut through the fog of war.”

“This could mean better identification of civilians and objects on the battlefield, which allows our commanders to take steps to reduce harm to them,” he said.

Following employee backlash last year, Google pledged to end its agreement to work with the military on Maven, and CEO Sundar Pichai laid out the company’s AI principles, which include a ban on the creation of autonomous weaponry.

Defense Digital Service director Chris Lynch told VentureBeat in an interview last month that tech workers who refuse to help the U.S. military may inadvertently be helping adversaries like China and Russia in the AI arms race.

The report will include recommendations on AI related not only to autonomous weaponry but also to more mundane applications, like AI that augments or automates administrative tasks, said Defense Innovation Board member and Google VP Milo Medin.

Defense Innovation Board member and California Institute of Technology professor Richard Murray stressed the importance of ethical leadership in conversations with the press after the meeting.

“As we’ve said multiple times, we think it’s important for us to take a leadership role in the responsible and ethical use of AI for military systems, and I think the way you take a leadership role is that you talk to the people who are hoping to help give you some direction,” he said.

A draft of the report will be released in July, with a final report due out in October, at which time the board may vote to approve or reject the recommendations.

The board acts only in an advisory role and cannot require the Defense Department to adopt its recommendations. After the board makes its recommendations, the DoD will begin an internal process to establish policy, which could include adopting some of them.