We’re roughly halfway through 2018, and one of the most important AI stories to emerge so far is Project Maven and its fallout at Google. The program to use AI to analyze drone video footage began last year, and this week we learned of the Pentagon’s plans to expand Maven and establish a Joint Artificial Intelligence Center.
We also learned that Google believed it would make hundreds of millions of dollars from participating in the Maven project and that Maven was reportedly tied directly to a cloud computing contract worth billions of dollars. Today, news broke that Google will discontinue its Maven contract when it expires next year. The company is reportedly drafting a military projects policy that is due out in the coming weeks. According to the New York Times, the policy will include a ban on projects related to autonomous weaponry.
Most revealing in all of this are the words of leaders like Google Cloud chief scientist Dr. Fei-Fei Li. In emails obtained by the New York Times, written last fall while Google was considering how to announce its participation in Maven, executives expressed awareness of just how divisive an issue autonomous weaponry can be.
“Avoid at ALL COSTS any mention or implication of AI,” Li wrote. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google … I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”
Since Google’s involvement with Maven became public in March, the project has certainly attracted attention from some members of the press. I’ve written that Google should listen to its employees and stay out of the business of war and that Maven reflects the need for a Hippocratic oath for AI practitioners, but the backlash isn’t just coming from journalists.
Inside Google, about a dozen employees resigned in protest, and more than 3,000 employees — including AI chief Jeff Dean — have signed letters stating that Google shouldn’t participate in the creation of autonomous weaponry. Outside Google, petitions from organizations like the Tech Workers Coalition and International Committee for Robot Arms Control have also attracted signatures from the broader tech and AI community.
As the debate rages on, one overlooked or little-known fact about Google’s participation in Maven has emerged: Google wasn’t the only tech company invited to participate. IBM and smaller firms like Colorado-based DigitalGlobe were also invited to join the program, according to Gizmodo.
AI isn’t new, and neither is its military use. But as AI moves beyond personalizing the apps we open, the ethical stances AI practitioners choose to take will help define how the discipline is applied across virtually every sector of business, government, and society.
Thanks for reading,
AI Staff Writer
P.S. Please enjoy the Microsoft AI platform State of the Union delivered at the Build conference:
Researchers at the University of Toronto have developed an adversarial neural network that can defeat facial recognition systems.
AI startup Pymetrics today announced it has open-sourced its tool for detecting bias in algorithms. Available for download on GitHub, Audit AI is designed to determine whether a specific statistic or trait fed into an algorithm is being favored or disadvantaged at a statistically significant, systematic rate, leading to adverse impact on people underrepresented in the data […]
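The kind of check Audit AI describes — testing whether outcomes differ across groups at a statistically significant rate — can be illustrated with a minimal sketch. This is not the library’s actual API; the function names and the choice of a two-proportion z-test are assumptions for illustration:

```python
import math

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 pass decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two pass rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se else 0.0

def audit(outcomes, z_threshold=1.96):
    """Flag group pairs whose pass rates differ at roughly 95% significance."""
    rates = selection_rates(outcomes)
    groups = list(outcomes)
    flags = []
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            z = two_proportion_z(rates[a], len(outcomes[a]),
                                 rates[b], len(outcomes[b]))
            if abs(z) > z_threshold:
                flags.append((a, b, round(z, 2)))
    return flags
```

Run against a set of hiring decisions grouped by a protected trait, this would surface pairs of groups whose pass rates diverge more than chance alone explains — the sort of “statistically significant, systematic” disparity the tool is meant to catch.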
Mary Meeker released her annual Internet Trends Report today, highlighting a broad range of subjects shaping the future of technology, from the proliferation of artificial intelligence to smartphone sales going flat in 2017 for the first time since the device’s debut. Like the 2018 Tech Trends Report from futurist Amy Webb […]
Google says it will formulate a policy around defense and military contracts, following the fallout from its involvement in the Pentagon’s controversial Project Maven program.
Nvidia today unveiled HGX-2, a cloud server platform equipped with 16 Tesla V100 graphics processing unit (GPU) chips that collectively provide half a terabyte of GPU memory and two petaflops of compute power. The GPUs work together through the use of NVSwitch interconnects. The HGX-2 motherboard is made to handle both training AI models and high performance […]
Amazon users can unlock their doors by saying “Alexa, unlock door” and verbally entering a PIN code. Unfortunately, the system can be easily circumvented.
Academics share machine-learning research freely. Taxpayers should not have to pay twice to read our findings (via The Guardian)
When former Google CEO Eric Schmidt was asked about Elon Musk’s warnings about AI, he had a succinct answer: “I think Elon is exactly wrong.” (via TechCrunch)
In an ongoing effort to get more AI into healthcare, the FDA just approved the marketing of an algorithm that detects wrist fractures. (via MIT Tech Review)
The people building new AI audio technologies share how they feel about their families interacting with robots, being recorded, and disclosing personal information. (via Inc)