U.S. defense spending on AI shows no signs of slowing — if anything, it’s accelerating. The Defense Advanced Research Projects Agency (DARPA) expects to spend $2 billion over the next five years on military AI projects. The Pentagon’s controversial Project Maven, which taps machine learning to detect and classify objects of interest in drone footage, recently received a 580 percent funding increase in this year’s $717 billion National Defense Authorization Act. And this week, the U.S. Army announced it would invest $72 million in AI research to “increase [the] readiness” of soldiers on and off the battlefield.

“Tackling difficult science and technology challenges is rarely done alone and there is no greater challenge or opportunity facing the Army than Artificial Intelligence,” said Dr. Philip Perconti, director of the Army’s corporate laboratory, in a statement today. “The Army is looking forward to making great advances in AI research to ensure readiness today and to enhance the Army’s modernization priorities for the future.”

The Army’s new five-year program will see its Combat Capabilities Development Command Army Research Laboratory division team up with Carnegie Mellon University and a “consortium” of other academic institutions to develop AI that “enhance[s] national security and defense.” Specifically, they’ll embark on research involving adversarial algorithms that can respond to enemy AI, autonomous networking that adapts to electromagnetic and cyber events, and systems that “increase survivability” in contested environments. More generally, the Army says, the group will pursue “automated sense-making” technologies — that is, systems that generate real-time insights.

“For almost 30 years, the Army Research Laboratory has been at the forefront of bold initiatives [involving] U.S. universities,” said CMU President Farnam Jahanian. “At this time of accelerating innovation, Carnegie Mellon is eager to partner with ARL and with universities across the nation to leverage the power of artificial intelligence and better serve the Army mission in the 21st century.”

The partnership follows on the heels of a collaboration between the Army Research Laboratory and Carnegie Mellon under the former’s Open Campus initiative, which Carnegie Mellon joined in 2018, and builds on the university’s 70-year relationship with the Defense Department. But it’s likely to be met with criticism.

Last year at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden, prominent thought leaders including SpaceX and Tesla founder Elon Musk and three cofounders of Google’s DeepMind subsidiary protested autonomous defense technologies. They, along with 2,400 other executives, researchers, and academics from 160 companies in 90 countries, signed an open letter pledging not to “participate in nor support the development, manufacture, trade, or use” of autonomous weaponry, which they warned could be “dangerously destabilizing.”

In April 2018, a group of researchers and engineers from the Centre on Impact of AI and Robotics published a letter calling for a boycott of the Korea Advanced Institute of Science and Technology (KAIST), which they accused of working with defense contractor Hanwha Systems on AI for military systems. And in November 2017, over 300 Canadian and Australian scientists penned letters to Prime Ministers Justin Trudeau and Malcolm Turnbull, respectively, urging bans on autonomous weaponry.

So far, their pleas have fallen on deaf ears. In the past year, countries such as India, Chile, Israel, China, and Russia have pursued autonomous tanks, aircraft, reconnaissance robots, ship-based missile systems, and weaponized drones. And a recent report on AI and war commissioned by the Office of the Director of National Intelligence concluded that because of AI’s potential to “massively magnify” military power, countries will almost inevitably build autonomous weapons systems.

The private sector has taken matters into its own hands, to a degree. Google, under pressure from employees and the general public, released a set of guiding AI ethics principles in June 2018 and said it would not renew its Project Maven contract with the Pentagon, while Microsoft, facing employee protests over its work with Immigration and Customs Enforcement, created an internal advisory panel — the Aether Committee — to look critically at its use of AI. But there’s work to be done, advocates say.

“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale that is greater than ever, and at timescales faster than humans can comprehend,” Musk, DeepMind cofounder Mustafa Suleyman, and 114 other founders of robotics and AI companies from 26 countries wrote in a 2017 open letter. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”