India will enlist the help of artificial intelligence to develop weapons, defense, and surveillance systems, government officials announced today.
“The world is moving towards an artificial intelligence-driven ecosystem,” Dr. Ajay Kumar, secretary at the defense ministry, said in a statement. “India is also taking necessary steps to prepare our defense forces for the war of the future.”
A 17-person task force is working on an AI roadmap for India’s armed forces, the Times of India reports. Within the next two years, the task force will recommend ways machine learning can be incorporated into the country’s aviation, naval, land, cybersecurity, nuclear, and biological resources, specifically as it relates to the areas of autonomous weapons systems and unmanned surveillance.
The elite group of stakeholders, which is headed by Tata Sons chairman Natarajan Chandrasekaran and includes members of the Army, Navy, Air Force, Atomic Energy Commission, and Finance Ministry, was established in February and is expected to submit its first report in the next three months.
“The task force will make recommendations on […] establishing tactical deterrence in the region and visualizing potential transformative weaponry, [and] developing intelligent, autonomous robotic systems, and bolstering cyber defence,” an official told The Times of India.
The push for AI-enhanced defense platforms is a top priority for Indian Prime Minister Narendra Modi, who said at the Defence Expo 2018 in Chennai in April that AI and robots would be “the most important determinants” of the readiness of future militaries. “India, with its leadership in [the] information technology domain, [will] strive to use this technology to its advantage,” he said.
The development follows hard on the heels of news that China is testing autonomous tanks, aircraft, reconnaissance robots, and supply convoys as part of a 1.11 trillion yuan ($173.5 billion) plan to modernize its armed forces.
Russia is also believed to be investing in AI-enabled defense. Its new T-14 Armata battle tank, part of its Universal Combat Platform, is said to have autonomous capabilities.
Amid the global AI arms race, prominent researchers are protesting the use of AI in the development of weapons.
In April, 50 top AI researchers announced a boycott of KAIST, South Korea’s top university, after it opened what they characterized as an “AI weapons lab.” And in August 2017, prominent AI researchers penned an open letter to the United Nations urging it to ban the use of autonomous weapons.
“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” Elon Musk, Mustafa Suleyman, and 116 machine learning experts from 26 countries wrote. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
“I think there are a number of interesting ethical questions about machine learning and AI as we as a society start to develop more powerful techniques,” Jeff Dean, leader of the Google Brain research division, said at Google’s I/O developer conference in early May. “I think most people have qualms about using autonomous weapons systems.”