Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
Writing about AI for an appreciable amount of time is, in my experience, enough to make any reasonable person concerned about the future of humanity. But I worry the focus of that concern is too often directed at the relatively distant future, which could lead to unforeseen consequences in the present.
Headlines from the past few months illuminate how bad things can get. Consider the case of the self-driving Uber that killed Elaine Herzberg in Tempe, Arizona, and that of the Apple engineer killed when his Tesla, driving on Autopilot, plowed into a highway traffic barrier. You’re probably also aware of the content suggestion algorithms from Facebook and YouTube, which have been implicated in the spread of fake news and extremist views.
Then there was a story last week about how companies and cities use Palantir’s analytics for corporate security and predictive policing, with potentially disastrous results. One man interviewed for the story said he isn’t involved with the Eastside 18 gang in Los Angeles, but the LAPD lists him in its database as an associate, and officers have been stopping him as a result.
None of these cases involved AI that’s advanced enough for Elon Musk to call it an existential threat to humanity. But that didn’t stop people from getting hurt. If we’re not careful, this will happen more frequently in the coming years.
It’s easy to worry about a catastrophic future for bank tellers, truck drivers, and workers in other professions who are being told today that their jobs will disappear. Thanks to Westworld, Battlestar Galactica, Iain M. Banks’ Culture novels, and other media, we can confidently picture how artificial superintelligence could upend our lives.
Don’t get me wrong — that’s all very concerning. But we can’t let our anxiety about the distant future blind us to what’s going on right in front of us. When AI systems fail, they can do so in ways we don’t expect. Research and anecdotal evidence have shown us that those systems are often biased against minorities. That bad news turns dire when those systems become critical components of our infrastructure.
Consider the story of the Southern State Parkway in New York that Robert Caro laid out in The Power Broker. According to Caro’s interviews with Sidney M. Shapiro, urban developer Robert Moses decided to build the parkway’s overpasses too low for buses to pass under, limiting access for the people of color who depended on public transit to reach it. Those overpasses still stand.
It’s a story that worries me about the future, considering that we’re using AI to build far more powerful systems that can make predictive decisions about all manner of things. This isn’t an idle concern: Law enforcement agencies are using AWS’ Rekognition service to do image recognition today as part of their work. A recent study of competing facial recognition APIs showed they were less accurate at identifying the gender of people with darker skin, particularly women.
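The disparity that study measured is easy to make concrete: compute accuracy separately for each demographic subgroup and compare, rather than reporting a single overall number that can hide the gap. Here’s a minimal sketch with invented numbers (these are not figures from the study):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each subgroup.

    `records` is a list of (group, predicted_label, true_label) tuples.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Invented illustration: a classifier that is right 9 times out of 10
# for one group but only 6 times out of 10 for another.
records = (
    [("lighter-skinned", "F", "F")] * 9 + [("lighter-skinned", "M", "F")] * 1 +
    [("darker-skinned", "F", "F")] * 6 + [("darker-skinned", "M", "F")] * 4
)
print(accuracy_by_group(records))
# {'lighter-skinned': 0.9, 'darker-skinned': 0.6}
```

An aggregate accuracy of 75 percent would look respectable on a benchmark while masking exactly the kind of subgroup failure the study documented.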
There’s good news: Changing an algorithm is far easier than building a new train line or adjusting the height of a bridge. Our codification of bias need not be set in stone. But we’re already deploying algorithms that purport to predict where crime will occur. Companies are already testing the use of AI for investing, money lending, and other tasks. We need to be working on this now.
In my view, there are a few things we can do. First, I’d like to see a non-industry body develop a system of auditing algorithmic bias, which would allow governments and businesses to restrict the use of algorithms that don’t meet a particular bias metric. With that in place, I’d like to see governments identify situations where the biases of those algorithms must be made available to the public. Law enforcement, consumer finance, and health care seem like good candidates from the get-go.
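To give a sense of what such an audit might check, here’s one plausible shape for a bias metric. The threshold is borrowed, purely for illustration, from the “four-fifths rule” used in US employment law; an actual auditing body could well choose different metrics and thresholds:

```python
def disparate_impact_ratio(selection_rates):
    """Ratio of the lowest group selection rate to the highest.

    `selection_rates` maps each group to the fraction of its members
    who received the favorable outcome (a loan, a job interview, etc.).
    """
    rates = selection_rates.values()
    return min(rates) / max(rates)

def passes_audit(selection_rates, threshold=0.8):
    """Four-fifths-rule style check: the worst-off group's selection
    rate must be at least `threshold` of the best-off group's."""
    return disparate_impact_ratio(selection_rates) >= threshold

# Hypothetical lending model: approves 50% of applicants in one group
# but only 30% in another.
rates = {"group_a": 0.50, "group_b": 0.30}
print(disparate_impact_ratio(rates))  # 0.6
print(passes_audit(rates))            # False
```

A regulator could require that models in sensitive domains publish these selection rates, making a check like this one externally verifiable rather than a matter of corporate self-reporting.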
Second, I believe governments and corporations should be required to disclose when an algorithm influenced the outcome of a decision. For example, if police arrest someone based in part on the evaluation of a predictive policing system, the suspect should have the right to know that.
I don’t have high hopes any of that will happen, given the current political climate. But a boy can dream. It’s the only way I’ll be able to sleep at night, anyhow.
P.S. This is my last AI Weekly column for VentureBeat. Thanks so much for inviting me into your inbox over the past several months. Your choice to take time out of your day to read this newsletter means the world to me. If you want to stay in touch, find me on Twitter: @belril.
P.P.S. Please enjoy this video from Nvidia of an AI system filling in the contents of photos:
FROM THE AI CHANNEL
Microsoft’s Javier Soltero on Alexa, Cortana, and building ‘the real assistive experience’
This year has brought a fair number of shakeups at some of the world’s biggest tech companies. Google’s AI chief left for Apple, Amazon’s AI research chief left for Google, and Windows chief Terry Myerson left Microsoft as the company shifts its focus towards AI and the cloud. Around that time, Javier Soltero moved from […]
Waze and Waycare strike data-sharing pact to improve city traffic
Waze is partnering with traffic-management platform Waycare to share data and improve congested roads. Founded in Palo Alto, California, in 2016, Waycare taps multiple historical and real-time data sources, such as connected car platforms, telematics services, road cameras, construction projects, fleet management platforms, weather services, and public transit, to build a more complete picture […]
Zuckerberg: It’s easier to detect a nipple than hate speech with AI
It’s easier to spot a nipple using artificial intelligence than it is to tackle hate speech, Facebook CEO Mark Zuckerberg said today in a call with analysts. Zuckerberg spoke in response to a question about the efficiency of machines trained by Facebook to moderate content today compared to six months or a year ago. “One […]
It’s time to address the reproducibility crisis in AI
GUEST: Recently I interviewed Clare Gollnick, CTO of Terbium Labs, on the reproducibility crisis in science and its implications for data scientists. The podcast seemed to really resonate with listeners (judging by the number of comments we’ve received via the show notes page and Twitter), for several reasons. To sum up the issue: Many researchers in […]
5 big features Amazon’s home robot will likely have
Under codename Project Vesta, Amazon’s Lab126 is reportedly making a home robot. Tests in employees’ homes could begin in the coming months, and sales of an Amazon robot may begin as early as 2019. Anonymous sources speaking with Bloomberg today provided no details about what the domestic robot will look like or how it will function, […]
Amazon is reportedly planning ‘Vesta’ home robot for 2019
Amazon’s next major consumer product could be far more ambitious than Echo speakers and Fire tablets, according to a new Bloomberg report: The company is said to be working on a domestic robot codenamed “Vesta” after the Roman goddess of family, hearth, and home. Vesta is said to be several years into development, with a release targeted […]
Scientists plan huge European AI hub to compete with US
Leading scientists have drawn up plans for a vast multinational European institute devoted to world-class artificial intelligence (AI) research in a desperate bid to nurture and retain top talent in Europe. (via The Guardian)
Amazon Alexa to reward kids who say ‘please’
Amazon’s smart assistant Alexa can now be made to encourage children to say “please” and “thank you” when issuing it voice commands. (via BBC)
Lessons from my first two years of AI research
A friend of mine who is about to start a career in artificial intelligence research recently asked what I wish I had known when I started two years ago. Below are some lessons I have learned so far. They range from general life lessons to relatively specific tricks of the AI trade. I hope others find them useful. (via MIT)
Why tech companies are racing each other to make their own custom AI chips
Chinese retailer and cloud infrastructure provider Alibaba is the latest company to think up its own design for processors that can run artificial intelligence software. It joins a crowded roster of companies already working on similar custom designs, including Alphabet, Facebook and Apple. (via CNBC)