China, not to be outdone by the U.S., India, Russia, and the dozens of other countries around the world developing autonomous weapons systems, is planning to deploy unmanned submarines that can perform reconnaissance, place mines, and ram “high-value” enemy targets, according to the South China Morning Post.

The artificially intelligent submersibles are designed to complete basic tasks without the need for human intervention, Lin Yang, a marine technology equipment director at the Shenyang Institute of Automation, told the South China Morning Post. The goal isn’t to replace human crews entirely — commanders on the ground will monitor the subs’ progress — but they’re said to be able to autonomously change course and depth to avoid detection, distinguish civilian from military vessels, navigate to locations on a map, and scout out hostile territory.

The subs’ chief advantage is cost. They’re cheaper to produce and operate at scale than manned submarines, a researcher told the South China Morning Post, a claim that jibes with a recent report from Lockheed Martin. The defense contractor’s Orca program, which aims to build an autonomous underwater vehicle in the next few years, is expected to cost $30 million. That’s compared to the $2 billion price tag of one of the U.S. Navy’s Ohio-class submarines, and the $120 billion its dozen upcoming Columbia-class submarines will cost to research, develop, and purchase.

China hopes to deploy the unmanned submarines in strategic waters like the South China Sea and the western Pacific Ocean by 2021, the 100th anniversary of the founding of the Chinese Communist Party.

Lin characterized them as a “countermeasure” against defense systems other countries are actively pursuing. The U.S. military intends to pilot an extra-large unmanned underwater vehicle (XLUUV) by 2020, and has recruited Lockheed Martin and Boeing to develop prototypes. Russia, for its part, has reportedly built a high-speed underwater drone — the Status-6 — that’s capable of carrying a nuclear weapon.

AI-enabled weapons made the news last week when Tesla and SpaceX CEO Elon Musk, Future of Life Institute president Max Tegmark, three cofounders of Google’s DeepMind, and more than 2,400 other executives, researchers, and academics from dozens of countries signed an open letter protesting the use of autonomous weapons.

It wasn’t the first such protest. In April, a group of researchers and engineers from the Centre on Impact of AI and Robotics called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST), which they accused of working with a defense contractor on AI for military systems. And in November 2017, over 300 Canadian and Australian scientists penned letters to Prime Ministers Justin Trudeau and Malcolm Turnbull, respectively, urging bans on autonomous weaponry.

The private sector has taken matters into its own hands, to a degree. Google, under pressure from employees and the general public, released a set of guiding AI ethics principles in June and canceled its controversial Project Maven drone contract with the Pentagon. Microsoft, meanwhile, discontinued its work with U.S. Immigration and Customs Enforcement and created an internal advisory panel — the Aether Committee — to look critically at its use of artificial intelligence.