On Tuesday this week, the U.S. House of Representatives Subcommittees on Research and Technology and Energy invited prominent academics, tech executives, and scientists to talk about the “game-changing” potential and implications of AI, as the hearing charter put it.
The hearing touched on a wide range of topics.
Rep. Barbara Comstock (R-VA) sought suggestions from the panel on ways institutions and government might collaborate on AI systems development. Rep. Suzanne Bonamici (D-OR) opined on the ethical dilemmas facing AI. And Rep. Marc Veasey (D-TX) asked earnestly about the potential for “doomsday” scenarios. “To what extent do you think [is it] something we should be concerned about?” he said.
The answers to some of those questions lie in transcripts from past hearings.
During a House Subcommittee on Information Technology hearing in April, Jack Clark, OpenAI’s strategy and communications director, suggested AI competitions as a way to bring government agencies and AI researchers closer together.
“Every single agency has … problems it’s going to encounter, and it has competitions that it can create to spur innovation, so it’s not one single moonshot, it’s a whole bunch of them,” he told lawmakers in attendance. “I think every part of government can contribute here.”
And in 2016, during one of the very first congressional hearings on AI, OpenAI cofounder Greg Brockman said that cracking the AI ethics code — tackling, among other challenges, the doctrine of war and black box algorithms — would take a collective effort.
“[S]afety, security, [and] ethics. I think that’s going to take everyone,” he said. “And I think that we all need to work together.”
Cyclical discussions about AI are a product, it would seem, of today’s bombastic, no-holds-barred news cycle. It’s not easy to step back and think critically when protesters are denouncing the sale of Amazon’s Rekognition system to local law enforcement and employees are resigning over Google’s drone contract with the Pentagon, much less when accidents involving self-driving cars hit the newswire.
Events like these prompt gut reactions. We can’t help ourselves.
There’s another reason, though, that topics like AI in government, AI ethics, and “doomsday scenarios” come up time and time again, not just among politicians, but within the machine learning community itself: The questions don’t yet have satisfactory answers.
Certainly, governments — most recently the European Union, France, and Canada — have made great strides in establishing helpful legal and social frameworks. And companies like Microsoft and Facebook continue to work toward minimally biased datasets and algorithms.
But others have yet to make inroads on either front.
Before we can move past the questions that have been asked about AI for decades, we need to develop the right vocabulary and understanding that’ll advance the discourse.
As Jean-François Gagné of Element AI said during a panel discussion at the C2 summit in Montréal this year: “Seek to understand. Dig. Don’t stay at the surface level. It’s the only way we’re going to be able to engage in a productive conversation and use AI for the best.”
Thanks for reading,
P.S. Enjoy this video of an AI algorithm defeating teams of amateur Dota 2 players:
Researchers at Google have created a machine learning system that not only adds color to black and white videos, but that can constrain colors to particular objects, people, and pets in any given frame.
Google today shared a new demo of Duplex, its conversational AI that makes phone calls on behalf of Google Assistant users, and revealed more details about how the AI will work when speaking to businesses and customers. Initial use cases will involve making hair salon appointments and restaurant reservations. Tests of the experimental Duplex service […]
Amazon today announced that Alexa is coming to the Alexa iOS app for iPhone and iPad users. You’d think an app made to help users control the AI assistant and interact with information it serves up would come with Alexa inside, but that wasn’t the case until this year. Though Echo speakers have been available since […]
Japanese telecom company NTT East teamed up with Earth Eyes, a Japan-based tech startup, to create AI Guardsman, a machine learning system that attempts to catch shoplifters in the act.
This fall, Techstars will open its first AI startup accelerator in Montreal, a city known for its AI sector. The first cohort of 10 startups will be selected next month to begin in September. VentureBeat spoke with managing director Bruno Morency to explore what it means to be an AI startup.
Greg Brockman, cofounder of OpenAI, spoke to VentureBeat about recent advances in deep learning, the need for discussion and debate about AI, and ways researchers and policymakers might solve the “AI bias problem.”
So began a sequence of events that saw Ibrahim Diallo fired from his job, not by his manager but by a machine. (via BBC)
The Terminator was written to frighten us; WALL-E was written to make us cry. Robots can’t do the terrifying or heartbreaking things we see in movies, but still the question lingers: What if they could? (via Futurism)
If you drive a car, you’ve probably found yourself waiting at a red light while the intersection sits empty. Artificial intelligence could make that — and other frustrating inefficiencies of city traffic — a thing of the past. (via The Wall Street Journal)
When parents tell kids to respect AI assistants, what kind of future are we preparing them for? (via Fast Company)