Last week at SXSW, renowned futurist Amy Webb rolled out her 2018 Tech Trends Report, which focused heavily on artificial intelligence.

China officially plans to become the world leader in AI by 2030, but Webb's presentation warned that the nation, spurred on by heavy investment and the world's biggest data-generating population, is poised to become the "unchallenged AI hegemon" this year.

She also admonished the Trump administration for restricting foreign AI talent from entering the U.S. and doing too little to keep the U.S. competitive in AI.

Tariffs to limit Chinese involvement in industries like robotics and AI are part of the plan, the Trump administration said Thursday. But while there may be questions about whether the White House is doing enough to keep the U.S. competitive in AI, the same criticism can't be lobbed at the entire federal government.

The Department of Defense’s DARPA has for decades helped advance many forward-looking technologies, including AI. Now it seems the DOD is working more closely with tech giants with AI prowess.

Earlier this month, we learned about Google's involvement in Project Maven, a DOD initiative that began last year to bring more AI into warfare. News of Google assisting the DOD with identification of objects in drone footage reportedly shocked and disturbed some employees.

In addition to Maven, Project Maven creator and former deputy secretary of defense Robert Work is assembling a task force to explore how the federal government may use AI. The task force will include Terah Lyons, executive director of the Partnership on AI, an organization formed by companies like Facebook, Apple, Microsoft, and Google to establish best practices and explore how AI can be used to benefit society.

Fueled by Russian president Vladimir Putin's declaration that the nation that leads in AI will rule the world, this nationally aligned competition is being likened to a modern space race among China, the United States, and Russia.

“This is a Sputnik moment,” Work told the New York Times.

Google working with the U.S. government might be in line with Baidu and Tencent being so closely aligned with Beijing, but it does bring to mind some important questions: Will Google also work with defense ministries in other countries to make AI? Will drones that President Donald Trump plans to sell to allied foreign nations benefit from Google’s involvement? And most importantly: Will the AI Google is helping the DOD with be used to kill or injure people?

Google's longtime "Don't be evil" motto gave way to Alphabet's "Do the right thing" some time ago, but in working with the U.S. military, does Google violate both of these mantras?

Initiatives like Project Maven aren’t alone in raising some downright frightening questions. In just the past week, we’ve seen an autonomous vehicle kill a woman, the reveal of Palantir’s predictive policing experiment in New Orleans, and Google’s François Chollet urging AI practitioners not to work for Facebook in the wake of the Cambridge Analytica scandal.

The ethical quagmires at play in each of these instances stretch across very different sectors of society, but there is one low-tech potential starting point for a solution: a Hippocratic oath for AI practitioners.

Last week, Allen Institute for AI CEO Oren Etzioni took a swing at creating an oath for AI practitioners after he spotted the idea in a small book about AI from Microsoft's Brad Smith and Harry Shum. Etzioni suggests university students be required to recite the oath before graduation.

In his version, Etzioni says AI practitioners should remember their own humanity, avoid playing God, and “prevent harm whenever it can.”

Etzioni’s oath ends with: “I will remember that I am not encountering dry data, mere zeros and ones, but human beings, whose interactions with my AI software may affect the person’s freedom, family, or economic stability. My responsibility includes these related problems.”

Perhaps not everyone will agree with the main tenets as set out by Etzioni, but groups like the Allen Institute, Partnership on AI, and other members of this community should team up to hammer out the basics. AI needs something in line with Asimov’s Three Laws of Robotics or the Hippocratic oath, something that says, for example, “don’t make AI that kills people” or “engage with the community impacted by your work.”

If doctors entrusted with human lives must take an ancient oath to do no harm, maybe the same should be asked of people whose work is often called the third age of computing and the fourth industrial revolution, practitioners with the power to create autonomous weapons systems, artificial general intelligence, or predictive policing.

Even if some malicious general AI like Skynet never emerges, we may not survive in a world of disruption for disruption's sake. It's no longer enough to say "I just write the code" when 40 percent of the world's jobs are expected to disappear and global instability is rising due to factors like climate change.

There may be no way to stop AI practices outlawed in the U.S. from being adopted elsewhere, but as some of the most transformative technologies to emerge in decades spread throughout tech, business, and society, perhaps something as old as an oath can help set the tone for acceptable behavior in the global but still-small AI community.

For AI coverage, send news tips to Blair Hanley Frank and Khari Johnson, and guest post submissions to Cosette Jarrett — and be sure to bookmark our AI Channel.

Thanks for reading,

Khari Johnson

AI Staff Writer

P.S. Please enjoy this video from DARPA about the future of AI:

From VB

Google’s François Chollet: AI researchers with a conscience shouldn’t work at Facebook

Facebook is under fire from a lot of critics this week as fallout continues from the Cambridge Analytica scandal, including from an unexpected source: Google. In a series of tweets published Thursday, Google researcher François Chollet warned that the problem with Facebook isn't just the privacy breach or broken trust; it's the fact that Facebook, powered by AI, could soon become a "totalitarian panopticon."

Read the full story

How Microsoft and Databricks crafted a unique partnership for AI data processing

Microsoft is bringing its Azure Databricks cloud service out of beta today to help its customers better process massive amounts of data, powered by a partnership unlike anything the tech titan has done before. The company worked with Databricks to produce the service, which is an analytics system based on the popular Apache Spark open source project. Customers use it to ingest and process a large amount of data using machine learning […]

Read the full story

Google Assistant can now send money to friends and family with Google Pay

Google today announced that it’s giving users the power to send or request money with their voice by bringing Google Pay to Google Assistant on Android and iOS smartphones in the U.S. Google’s Home smart speakers will be able to do the same in the coming months, the company said […]

Read the full story

Affectiva launches emotion tracking AI for drivers in autonomous vehicles

Affectiva today announced the launch of its Automotive AI service that lets creators of autonomous vehicles and other transportation systems track users’ emotional response. The Automotive offering is Affectiva’s third service for tracking the emotional response of product users and is part of a long-term strategy to build emotional profiles drawn from smart speakers, autonomous vehicles, and other platforms with a video camera […]

Read the full story

Siri’s poor performance has opened iOS to every AI assistant from Alexa to Watson

Siri was already coming off a bad year. Then HomePod’s messy release cast a spotlight on its many failings as a digital assistant. Now the floodgates are open: Former Siri executives and engineers are openly discussing its troubled inner workings, while competitors have rushed out improved AI apps for Apple devices. So much has changed recently that it’s worth looking at the current state of Siri’s rivals. From Amazon to Google, IBM, and Microsoft, here’s what’s happening […]

Read the full story

Uber self-driving car crash video shows driver looking down before fatal impact

Just-released video of an autonomous Uber crashing into a pedestrian shows the car’s safety driver looking down seconds before the fatal accident. The footage, provided by the Tempe, Arizona police department, shows the moments leading up to the self-driving car’s collision with 49-year-old Elaine Herzberg on Sunday night. Viewer discretion is advised. While the footage doesn’t show […]

Read the full story

Jabra Elite 65t review: Alexa in earbuds is missing key features

A few days before the Consumer Electronics Show in Las Vegas earlier this year, Amazon rolled out its mobile accessory kit to enable creators of wireless earbuds and wearables like smartwatches or fitness trackers to place Alexa inside their devices. Jabra Elite 65t, which are available now for $170, are among the first earbuds to use the kit for Alexa integration, so we got a pair and put them through their paces. Unlike Google Assistant and Siri, Alexa isn’t native to a mobile […]

Read the full story

Beyond VB

Europe’s AI delusion

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications. If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war. (via Politico EU)

Read the full story

OpenAI wants to make safe AI, but that may be an impossible task

True artificial intelligence is on its way, and we aren’t ready for it. Just as our forefathers had trouble visualizing everything from the modern car to the birth of the computer, it’s difficult for most people to imagine how much truly intelligent technology could change our lives as soon as the next decade — and how much we stand to lose if AI goes out of our control. (via Futurism)

Read the full story

Top schools for AI: New study ranks the leading U.S. artificial intelligence grad programs

Artificial intelligence is poised to become one of the most disruptive technologies of the century. A new study of the top graduate programs for artificial intelligence, part of the closely watched U.S. News & World Report rankings, reveals which universities are poised to lead this revolution. (via GeekWire)

Read the full story

AI translates news just as well as a human would

Translation was traditionally considered a job in which the magic human touch would always ultimately trump a machine. That may no longer be the case, as a Microsoft AI translator just nailed one of the hardest challenges: translating Chinese into English with accuracy comparable to that of a bilingual person. (via Futurism)

Read the full story
