The White House released a fun report on the future of A.I. today that was mostly upbeat, with some important caveats about its potential regulatory and economic impacts, and blablabla.

Fine and dandy, until you get to the section where it talks about the implications of LETHAL AUTONOMOUS WEAPONS SYSTEMS. (Sorry, but if ever anything deserves to be in all caps, it’s that.)

The A.I.-driven human extinction machines are first mentioned under a section called “Global Considerations and Security.” It starts off talking about the use of A.I. in cybersecurity defense systems, which makes sense and feels moderately reassuring. And then the authors say:

“Challenging issues are raised by the potential use of AI in weapon systems.” Challenging issues? This seems like, I don’t know, just a slight understatement. But let’s continue:

“The United States has incorporated autonomy in certain weapon systems for decades, allowing for greater precision in the use of weapons and safer, more humane military operations.”

Hokay. So we’ve been building in autonomy. OK. Greater precision. OK, I’m with you. Safer. Hmmm. More humane military operations. Wut? We’re killing people in kinder and gentler ways?

“Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions,” the authors add.

Ya think? Have you people not actually watched any of the Terminator movies? I mean…This is how it all starts! This is literally everything that terrifies every human on the planet about A.I. We are actually developing the worst sci-fi cliché of our nightmares and talking about it in the kind of calm bureaucratic language we’d use to discuss potential inefficiencies at the DMV.

“The key to incorporating autonomous and semi-autonomous weapon systems into American defense planning is to ensure that U.S. Government entities are always acting in accordance with international humanitarian law, taking appropriate steps to control proliferation, and working with partners and Allies to develop standards related to the development and use of such weapon systems,” the report says.

Maybe it’s just me, but I would think the best way to “control proliferation” would be to NOT BUILD THEM AT ALL!

“The United States has actively participated in ongoing international discussion on Lethal Autonomous Weapon Systems, and anticipates continued robust international discussion of these potential weapon systems. Agencies across the U.S. Government are working to develop a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons,” the report says.

Good. Good. So, you’ve got committees and reports going, and there will hopefully be a policy to figure out how we can build HUMANE autonomous weapons systems that don’t break any international law. Phew. BTW, is wiping out the human race specifically against international law? You know, people are always looking for loopholes.

The document comes back later with a more detailed explanation, noting again that weapons have been using autonomous technologies for decades because they allow for “greater precision.” And more precise weapons mean using fewer weapons, which means, YEAH! Cost savings! Of course, this also means not putting military personnel in harm’s way.

“Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions,” the report says. “Over the past several years, in particular, issues concerning the development of so-called ‘Lethal Autonomous Weapon Systems’ (LAWS) have been raised by technical experts, ethicists, and others in the international community.”

Thank goodness!

However, while the U.S. is engaging in talks around LAWS, it doesn’t want drones to be part of those conversations, since humans are still involved. (See: Loopholes)

“Other States have focused on artificial intelligence, robot armies, or whether ‘meaningful human control’ — an undefined term — is exercised over life-and-death decisions,” the report says.

So, cool. Cool. Robot armies. Cool. Some countries are cooking up robot armies. OK. Is anyone else hyperventilating at this point besides me?

Bottom line: The U.S. government is currently committed to having meetings to discuss how it should structure meetings to make decisions about this subject. (Bureaucracy FTW!) And it wants you to know that some very important people in the U.S. Department of Defense must sign off on the development of these systems. Which, come to think of it, is what happened in the Terminator universe also, no?

At least we won’t have to worry about this for a long time because fully autonomous A.I. in weapons systems is a long way off.

Unless it isn’t.

“Given advances in military technology and artificial intelligence more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid,” the report says. “Many new capabilities may soon be possible, and quickly able to be developed and operationalized.”