This week, we learned that thousands of Google employees are upset about the company’s involvement in Project Maven, a U.S. Department of Defense initiative that has tapped Google to help with drone footage analysis.

News of Google’s participation in the program, and of the concern it has stirred among the company’s ranks, was first reported by Gizmodo, which cited anonymous sources last month. A letter obtained by the New York Times and published earlier this week makes clear just how seriously the issue is being taken.

Written and signed by 3,100 Google employees, the letter addressed to CEO Sundar Pichai urges the company to pull out of Project Maven and enact a policy stating that Google and its contractors will not build “warfare technology.” The letter states that failure to do so could “irreparably damage Google’s brand and its ability to compete for talent” at a time when Google is “already struggling to keep the public’s trust.”

“We cannot outsource the moral responsibility of our technologies to third parties,” the letter reads. “Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values.”

News of Google’s internal spat comes the same week that 50 AI researchers refused to support a reported autonomous weapons and “killer robot” initiative at South Korea’s top university.

Google apparently characterizes its work with the Pentagon as “non-offensive,” but consider the bomb-defusing robot that was used in 2016 to kill a mass shooter in Dallas. As this and many other examples make clear, a tool made for one purpose can always be used for other ends. This problem is the subject of increased scrutiny in AI communities, most recently in a report from EFF, OpenAI, and other reputable organizations that implores engineers to remember the dual-use nature of AI.

The New York Times article refers to the letter from Google employees as “idealistic,” an assertion I find very odd. There’s nothing “idealistic” about employees of a company that makes the majority of its money on advertising articulating an aversion to killing people.

When it comes to AI use cases, keep in mind that Google already has monopolies in fields like internet search and is developing businesses in many more sectors, not to mention new avenues for AI that will open up down the road.

The technology giant has its hand in an astonishing number of other pies.

It owns both Chrome, the most popular web browser, and Android, the world’s most popular mobile operating system. It’s squarely second in the U.S. smart speaker market, with expansions set for India and other countries around the world. It’s in the workplace with millions of G Suite users. Google’s education tech is used in more than half of U.S. primary and secondary schools. Google is even helping governments with specially made apps and cloud services and initiatives like Project Loon to spread internet access around the world.

This is all to say nothing of Google Cloud, YouTube, GV, Waymo’s ambitions for autonomous vehicles, and many other industries where Google is a dominant force.

Sure, Google’s involvement with Project Maven could be motivated by some form of patriotism, or justified more pragmatically by the knowledge that if Google refuses to help, another company will happily step in. Like Google’s funding of both liberal and conservative political campaigns in recent years, the company’s role in Maven could also be aimed at bolstering ties with the federal government at a time when calls for increased antitrust regulation of tech giants are growing.

Google could also be motivated in part by competition with players like Amazon. But this whole situation reminds me of the end of the movie Bad Santa, when Billy Bob Thornton’s character is betrayed by one of his elves. In the moment when he is about to be killed for his share of a mall robbery, he doesn’t plead for his life; he’s simply shocked by the elf’s greed.

“Do you really need all that shit?” he asks about the money and a pile of stolen merchandise.

Google has a bit — or a lot — of everything. Economically speaking, the company doesn’t need to make tools of war.

We’re in the midst of what VentureBeat correspondent Chris O’Brien calls “the rise of tech nationalism,” in places like France as well as in authoritarian nations with large standing armies and an appreciation of AI’s strategic importance, like Russia and China.

This is also a moment when major tech giants like Google are declaring themselves reborn as AI companies, and the areas they choose to devote resources to will shape not just their revenue but public perception of AI and what this powerful technology is capable of.

Google Cloud chief scientist Fei-Fei Li recently expressed her view that Google should explore ways to work more closely with social scientists, humanists, lawyers, artists, and policy makers — collaborative prospects that are a long way from making tools of war.

Exactly what’s at stake for Google with Project Maven is tough to gauge, but the potential riches that come from working with the Department of Defense on more accurate drones may not justify the risk of alienating consumers or governments around the world.

Like Facebook, where internal strife has also fueled recent controversy both inside and outside the company, Google encourages spirited debate among its employees. The biggest loss for Google, as for Facebook, may be an erosion of trust in a company that is ever-present in our lives.

A public backlash against two companies that have acquired much of the world’s top AI talent could also impact a vibrant but still growing AI ecosystem.

As the recent controversies have driven home, there is far more to consider in AI than finding the right model or dataset to train neural nets, particularly for businesses that can at times appear more powerful than many nation-states.

For AI coverage, send news tips to Khari Johnson and Blair Hanley Frank, and guest post submissions to Cosette Jarrett.

Thanks for reading,

P.S. Please enjoy this video of Will Smith on a date with Sophia the robot. Facebook’s Yann LeCun earlier this year referred to Sophia as a complete scam, but it is a funny video :)

 

From VB

Apple hires former Google AI chief John Giannandrea

Apple today hired John Giannandrea, who was until recently in charge of Google’s search and AI departments. Giannandrea had been at Google since 2010, according to his LinkedIn profile, and in 2016 he began to head up Google’s search team. Giannandrea’s departure from his role as Google’s AI chief […]

Read the full story

Amazon expands services for AI training, translation, and transcription

Amazon Web Services announced today a new way for machine learning developers to build and deploy models through its cloud. The company’s SageMaker AI service gained support for a local mode that lets developers start testing intelligent systems on their personal computers before moving to the cloud. Using local mode, a developer can first test […]

Read the full story

Microsoft’s AI lets bots predict pauses and interrupt conversations

Microsoft today said it has developed a new way for its most popular AI-powered bots to speak and analyze human voices at the same time, a skill engineers believe leads to more naturalistic conversations. The bots are empowered to predict what a person will say next, when to pause, and when it’s appropriate to interrupt someone. Major virtual assistants have gained more expressive, human-like voices and are being trained to […]

Read the full story

France, China, and Silicon Valley: The fight to dominate AI and the rise of tech nationalism

As new products and services have emerged throughout the history of capitalism, it hasn’t been unusual to see geographic clusters emerge that become an industry’s center of gravity. But the era of artificial intelligence has triggered an unusually direct response from countries that want to be at the center of a technology they see as both an opportunity to wield influence and a threat to their political independence. This has created surprising nationalistic fervor […]

Read the full story

Emotion AI: Why your refrigerator could soon understand your moods

Artificial intelligence is already making our devices more personal — from simplifying daily tasks to increasing productivity. Emotion AI (also called affective computing) will take this to new heights by helping our devices understand our moods. That means we can expect smart refrigerators that interpret how we feel (based on what we say, how we slam the door) and then suggest foods to match those feelings. Our cars could even know when we’re angry, based on our driving habits. Humans use non-verbal cues, such as facial expressions, gestures and tone […]

Read the full story

Textio expands its AI to help humans craft better recruiting messages

Textio, maker of AI-powered tools to augment business writing, today announced a new product to help recruiters reach out to job candidates. Like the company’s first service, which uses AI to help customers write better job descriptions, Textio’s second offering helps companies write recruiting messages by scoring them on […]

Read the full story

Beyond VB

How babies learn – and why robots can’t compete

Deb Roy and Rupal Patel pulled into their driveway on a fine July day in 2005 with the beaming smiles and sleep-deprived glow common to all first-time parents. Pausing in the hallway of their Boston home for Grandpa to snap a photo, they chattered happily over the precious newborn son swaddled between them. (via The Guardian)

Read the full story

Retailers race against Amazon to automate stores

To see what it’s like inside stores where sensors and artificial intelligence have replaced cashiers, shoppers have to trek to Amazon Go, the internet retailer’s experimental convenience shop in downtown Seattle. Soon, though, more technology-driven businesses like Amazon Go may be coming to them. (via New York Times)

Read the full story

AI ‘poses less risk to jobs than feared’ says OECD

Fewer people’s jobs are likely to be destroyed by artificial intelligence and robots than has been suggested by a much-cited study, an OECD report says. An influential 2013 forecast by Oxford University said that about 47 percent of jobs in the US in 2010 and 35 percent in the UK were at “high risk” of being automated over the following 20 years. (via BBC)

Read the full story

Artificial intelligence helps to predict likelihood of life on other worlds

Developments in artificial intelligence may help us to predict the probability of life on other planets, according to new work by a team based at Plymouth University. The study uses artificial neural networks (ANNs) to classify planets into five types, estimating a probability of life in each case, which could be used in future interstellar exploration missions. The work is presented at the European Week of Astronomy and Space Science (EWASS) in Liverpool on 4 April by Mr Christopher Bishop. (via phys.org)

Read the full story