Facebook launched an independent oversight board and recommitted to privacy reforms this week, but after years of promises made and broken, nobody seems convinced that real change is afoot. The Federal Trade Commission (FTC) is expected to decide shortly whether to sue Facebook, sources told the New York Times; the agency levied a $5 billion fine against the company last year.

In other investigations, the Department of Justice filed suit against Google this week, accusing the Alphabet subsidiary of maintaining multiple monopolies through exclusive agreements, personal data collection, and artificial intelligence. News also broke this week that Google’s AI will play a role in creating a virtual border wall.

What you see in each instance is a powerful company insisting it can regulate itself while government regulators appear to reach the opposite conclusion.

If Big Tech’s machinations weren’t enough, this week also brought news of a Telegram bot that undresses women and girls; AI used to alter the emotions on people’s faces in photos; and Clearview AI, which is being investigated in multiple countries, allegedly planning to introduce features to help police use its facial recognition services more responsibly. Oh, right, and there’s a presidential election campaign happening.


It’s enough to make people conclude that they’re helpless. But that’s an illusion, one that Prince Harry, Meghan, Duchess of Sussex, Algorithms of Oppression author Dr. Safiya Noble, and Center for Humane Technology director Tristan Harris attempted to dissect earlier this week in a talk hosted by Time. Noble began by acknowledging that AI systems in social media can pick up, amplify, and deepen existing systems of inequality, like racism and sexism.

“Those things don’t necessarily start in Silicon Valley. But I think there’s really little regard for that when companies are looking at maximizing the bottom line through engagement at all costs — it actually has a disproportionate harm and cost to vulnerable people. These are things we’ve been studying for more than 20 years, and I think they’re really important to bring out this kind of profit imperative that really thrives off of harm,” Noble said.

As Meghan pointed out during the conversation, the majority of people who joined extremist groups on Facebook did so because the company’s own recommendation algorithm suggested those groups to them.

To turn the tide, Noble said it’s important to pay attention to public policy and regulation, as both are crucial to conversations about how businesses operate.

“I think one of the most important things people can do is to vote for policies and people that are aware of what’s happening and who are able to truly intervene. Because we’re born into the systems that we’re born into,” she said. “If you ask my parents what it was like being born before the Civil Rights Act was passed, they had a qualitatively different life experience than I have. So I think part of what we have to do is understand the way that policy truly shapes the environment.”

When it comes to misinformation, Noble said people would be wise to advocate for sufficient funding for “counterweights” like schools, libraries, universities, and public media, which she said have been negatively impacted by Big Tech companies.

“When you have a sector like the tech sector that is so extractive — it doesn’t pay taxes, it offshores its profits, it defunds the democratic educational counterweights — those are the places where we really need to intervene. That’s where we make systemic long-term change, to reintroduce funding and resources back into those spaces,” she said.

Accountability is one of five values embedded in many AI ethics principles. During the talk, Harris emphasized the need for systemic accountability and transparency so the public can better understand the scope of problems created by Big Tech and seek redress. For example, Facebook could form a board to which people can report harms and then produce quarterly reports on progress toward removing those harms.

For Google, one way to increase transparency would be to release more information about employees’ requests for review under its AI ethics principles. A Google spokesperson told VentureBeat that Google does not currently share this information publicly, beyond a few isolated examples. Getting that data on a quarterly basis might reveal more about the politics of Googlers than anything else, but I’d sure like to know whether Google employees have reservations about the company increasing surveillance along the U.S.-Mexico border, or which controversial projects attract the most objections at one of the most powerful AI companies on Earth.

Since Harris and others released The Social Dilemma on Netflix, a number of people have criticized the documentary for failing to include enough voices of women, particularly Black women like Noble who have spent years assessing the issues it examines, such as how algorithms can automate harm. That said, it was a pleasure to see Harris and Noble speak together about how Big Tech can build more equitable algorithms and a more inclusive digital world.

For a breakdown of everything The Social Dilemma misses, you can read this recap of an interview with Meredith Whittaker that took place at a virtual conference this week. But Whittaker also contributes to the heartening conversation about solutions. One helpful piece of advice from her: Dismiss the idea that the algorithms are superhuman or superior technology. Technology isn’t infallible, and Big Tech isn’t magical. Rather, the grip large tech companies have on people’s lives is a reflection of the material power of large corporations.

“I think that ignores the fact that a lot of this isn’t actually the product of innovation. It’s the product of a significant concentration of power and resources. It’s not progress. It’s the fact that we all are now, more or less, conscripted to carry phones as part of interacting in our daily work lives, our social lives, and being part of the world around us,” Whittaker said. “I think this ultimately perpetuates a myth that these companies themselves tell, that this technology is superhuman, that it’s capable of things like hacking into our lizard brains and completely taking over our subjectivities. I think it also paints a picture that this technology is somehow impossible to resist, that we can’t push back against it, that we can’t organize against it.”

Whittaker, a former Google employee who helped organize a 2018 walkout at Google offices around the world, also considers workers organizing within companies an effective solution. She encouraged employees to recognize methods that have proven effective in recent years, like whistleblowing to inform the public and regulators of serious harms. Volunteerism and voting, she said, may not be enough.

“We now have tools in our toolbox across tech, like the walkout, a number of Facebook workers who have whistleblown and written their stories as they leave, that are becoming common sense,” she said.

In addition to recognizing how power shapes perceptions of AI, Whittaker encourages people to try to better understand how AI influences our lives today. It might have been easy to miss amid so many other things this week, but the group AIandYou.org, which wants to help people understand how AI impacts their daily lives, dropped its first introductory video with Spelman College computer science professor Dr. Brandeis Marshall and actress Eva Longoria.

The COVID-19 pandemic, a historic economic recession, calls for racial justice, and the consequences of climate change have made this year challenging, but one positive outcome is that these events have led a lot of people to question their priorities and consider how they can make a difference.

The idea that tech companies can regulate themselves appears to have largely dissolved. Institutions are now taking steps to reduce Big Tech’s power, but even with Congress, the FTC, and the Department of Justice — the three main levers of antitrust — trying to rein in the power of Big Tech, I don’t know a lot of people who are confident they will be able to do so. Tech policy advocates and experts, for example, openly question whether Congress can muster the political will to bring lasting, effective change.

But whatever happens in the November 3 election or with antitrust enforcement, there are steps we can take to wrest power away from Big Tech. People at the heart of the matter believe building a better world for ourselves and future generations will require, among other things, imagination, engagement with tech policy, and a better understanding of how algorithms impact our lives.

As Whittaker, Noble, and the leader of the antitrust investigation in Congress have said, the power Big Tech holds can seem insurmountable, but if people get engaged, there are real reasons to hope for change.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.