Facebook’s flawed approach to censorship reveals a challenge faced by AI. According to leaked documents, human censors at the social network have devised well-intentioned guidelines for removing posts containing hate speech or other offensive content. In practice, however, those guidelines produce a confusing, often contradictory set of practices.
As ProPublica reported, a post from U.S. Representative Clay Higgins in which he called for the slaughter of radicalized Muslims was permissible. “Hunt them, identify them, and kill them,” he wrote. “Kill them all. For the sake of all that is good and righteous. Kill them all.” Meanwhile, a post by Didi Delgado, a Black Lives Matter activist, was removed: “All white people are racist. Start from this reference point, or you’ve already failed.”
As these two examples reveal, striking a balance between free speech and hate speech with a universal set of guidelines is fraught with peril. Applying such a framework to one country’s audience is difficult; attempting to do so for a global audience (Facebook has two billion users) is even more difficult. What is acceptable to one group will offend another.
It doesn’t matter whether humans or an AI perform the censorship.
Irwin Gotlieb, chairman of GroupM, raised this topic in a conversation with Stephen Wolfram, founder of Wolfram Research, and me this spring. Gotlieb described a scenario in which one self-driving car carries a single passenger and another carries several; if only one vehicle could be saved, how would an AI system decide?
“At the moment there isn’t one solution for the world, and different parties will put different rule sets against it, with different objectives,” Gotlieb said.
“This question of ‘Can we invent one perfect set of mathematical principles that will determine the AIs for all eternity?’ — the answer, I think, is no,” Wolfram replied.
While our conversation was about car safety, the same challenges can be found in Facebook’s approach to censorship. As the saying often attributed to Abraham Lincoln goes, “You can never please all of the people all of the time.”
Facebook may be doing a lousy job, but it’s a nearly impossible task.
Thanks for reading,
Editor in Chief
P.S. Please enjoy this video, “AI and Machine Learning – Technology Frontiers,” from MIT’s Initiative on the Digital Economy.
From the AI Channel
Adobe Analytics Cloud can now track the performance of voice-enabled intelligent assistants like Alexa, Siri, Google Assistant, Cortana, and Samsung’s Bixby, marking the company’s first foray into voice analytics and conversational computing. Adobe chose to enter the voice space now, Adobe Analytics Cloud director of product management Colin Morris told VentureBeat, because improvements in speech recognition […]
Facebook’s methods for detecting hate speech are being criticized today for the way in which they protect some groups and ignore others. In a ProPublica report published today, leaked internal documents show that teams reviewing questionable content, working with help from an AI-trained algorithm, are told to protect white men over black children or women […]
Developers building on top of Salesforce’s platform have new AI services at their disposal. The company announced today that it is launching a trio of services focused on natural language understanding and object recognition as part of its Einstein portfolio. Apps will be able to integrate an Einstein Sentiment service that will take in a […]
Bonsai now lets customers bring their own AI models
EXCLUSIVE: Bonsai unveiled a new feature today that’s aimed at helping data scientists run machine learning models created outside its platform. Called Gears, the system is supposed to take independently developed models and bring them onto Bonsai’s platform for easier execution and monitoring. When users upload a model that includes a Gear to Bonsai’s platform, the […]
JASK, which provides companies with software to monitor cyber threats, came out of stealth today and announced funding of $12 million in a round co-led by Dell Technologies Capital and TenEleven Ventures. Existing investors Battery Ventures and Vertical Venture Partners also joined. The San Francisco-based startup deploys software sensors on customer networks to monitor threats […]
I first used a computer to do real work in 1985. I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgrading to the first version of Windows. People used to scoff at the massive gray machines in the computer lab, but secretly they suspected something […]
In the world of marketing, brand anthropomorphism can be a powerful mechanism for connecting with consumers. It’s the tactic of giving brand symbols people-like characteristics: Think of Tony the Tiger and the Michelin Man. Today some companies are taking brand anthropomorphism to a whole new level with sophisticated AI technologies. (via Harvard Business Review)
If we ever want future robots to do our bidding, they’ll have to understand the world around them in a complete way—if a robot hears a barking noise, what’s making it? What does a dog look like, and what do dogs need? (via Quartz)
The HBO show Silicon Valley released a real AI app that identifies hotdogs — and not hotdogs — like the one shown in season 4, episode 4. (The app is now available on Android as well as iOS!) (via Medium)
What worries you about the coming world of artificial intelligence? Too often the answer to this question resembles the plot of a sci-fi thriller. People worry that developments in A.I. will bring about the “singularity” — that point in history when A.I. surpasses human intelligence, leading to an unimaginable revolution in human affairs. Or they wonder whether instead of our controlling artificial intelligence, it will control us, turning us, in effect, into cyborgs. (via The New York Times)