Last Tuesday, Google shared a blog post highlighting the perspectives of three women of color employees on fairness and machine learning. I suppose the comms team saw trouble coming: The next day NBC News broke the news that diversity initiatives at Google are being scrapped over concern about conservative backlash, according to eight current and former employees speaking on condition of anonymity.

The news led members of the House Tech Accountability Caucus to send a letter to CEO Sundar Pichai on Monday. Citing Google's role as a leader in the U.S. tech community, the group of 10 Democrats questioned why, despite years of corporate commitments, Google's workforce still lags behind the diversity of the U.S. population. The caucus also asked specifically whether Google employees working with AI receive additional bias training.

When asked by VentureBeat, a Google spokesperson did not respond to questions raised by members of Congress but said any suggestion that the company scaled back diversity initiatives is “categorically false.” Pichai called diversity a “foundational value” for the company. For her part, Google AI ethical research scientist Timnit Gebru, one of the three women featured in the Google blog post, spelled out her feelings about the matter on Twitter.

Hiring AI practitioners from diverse backgrounds is seen as a way to catch bias embedded in AI systems. Many AI companies pay lip service to the importance of diversity. As one of the biggest and most influential AI companies on the planet, what Google does or doesn’t do stands out and may be a bellwether of sorts for the AI industry. And right now, the company is cutting back on diversity initiatives at a time when clear ties are being drawn between surveillance AI startups and alt-right or white supremacy groups. Companies with documented algorithmic bias like Google, as well as those associated with alt-right groups, seem to really like government contracts. That’s a big problem in an increasingly diverse America. Stakeholders in this world of AI can ignore these problems, but they’ll only fester and risk not just a public trust crisis, but practical harms in people’s lives.

Reported cutbacks to diversity programs matter more at Google than at virtually any other company in the world. Google began much of the modern trend of divulging corporate diversity reports that spell out the number of women and people of color within its ranks. According to Google's 2020 diversity report, roughly 1 in 3 Google employees are women, while 3.7% are African American, 5.9% are Latinx, and 0.8% are Native American.

Stagnant, slow progress on diversity in tech matters a lot more today than it did in the past now that virtually all tech companies — especially companies like Amazon, Google, and Microsoft — call themselves AI companies. Tech, and AI more specifically, suffers from what’s referred to as AI’s “white guy problem.” Analysis and audits of a vast swath of AI models have found evidence of bias based on race, gender, and a range of other characteristics. Somehow, AI produced by white guys often seems to work best on white guys.

Intertwined with news about Google's diversity and inclusion programs is recent revelatory reporting about surveillance AI startups Banjo and Clearview. Banjo founder and CEO Damien Patton stepped down earlier this month after OneZero reported that he had been a member of a white supremacist group and had participated in a shooting at a synagogue. A $21 million contract with Utah first responders is under review, according to Deseret News.

And in an article titled “The Far-Right Helped Create The World’s Most Powerful Facial Recognition Technology,” Huffington Post reported on Clearview’s extensive connections with white supremacists, including a collaborator whose interest in facial recognition stems from a desire to track down people who are in the United States illegally. Clearview AI scraped billions of images from the web to train its facial recognition system and recently committed to working exclusively with government and law enforcement agencies.

That some AI roads lead back to President Trump should come as little surprise. The Trump campaign’s largest individual donor in 2016 was early AI researcher Robert Mercer. Palantir founder Peter Thiel voiced his support for President Trump onstage at the Republican National Convention in 2016, and his company is getting hundreds of millions of dollars in government contracts. There’s also Cambridge Analytica, a company that maintained close ties with Trump campaign officials like Mercer and Steve Bannon.

And when OpenAI cofounder Elon Musk was taking a break from bickering on Twitter with Facebook’s head of AI a few days ago, he pushed people to “take the red pill,” a famous phrase from The Matrix that’s been appropriated by people with racist or sexist beliefs.

Also this week, machine learning researcher Abeba Birhane, winner of the Best Paper award at the Black in AI workshop at NeurIPS 2019 for her work on relational ethics to address bias, shared her own reaction on Twitter.

Looking back at the Banjo and Clearview episodes, AI Now Institute researcher Sarah Myers West argued that racist and sexist elements have existed within the machine learning community since its beginning.

“We need to take a long, hard look at a fascination with the far right among some members of the tech industry, putting the politics and networks of those creating and profiting from AI systems at the heart of our analysis. And we should brace ourselves: We won’t like what we find,” she said in a Medium post.

That’s one side of AI right now.

On the other side, while Google takes steps backward in diversity, and startups with ties to white supremacists seek government contracts, others in the AI ethics community are working to turn the vague principles that have been established in recent years into actual actions and company policy. In January, researchers from Google, including Gebru, released a framework for internal company audits of AI models that is designed to close AI accountability gaps within organizations.

Forward momentum

Members of the machine learning community pointed to signs of growing maturity at conferences like NeurIPS, and the recent ICLR featured a diverse slate of keynote speakers and strong participation from Africa’s machine learning community. At TWIMLcon in October 2019, a panel of machine learning practitioners shared thoughts on how to operationalize AI ethics. And in recent weeks, AI researchers have proposed a number of constructive ways organizations can convert ethics principles into practice.

Last month, AI practitioners from more than 30 organizations created a list of 10 recommendations for turning ethics principles into practice, including bias bounties, which are akin to bug bounties for security software. The group also suggested creating a third-party auditing marketplace as a way to encourage reproducibility and verify company claims about AI system performance. The group’s work is part of a larger effort to make AI more trustworthy, verify results, and ensure “beneficial societal outcomes from AI.” The report asserts that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.”

In a keynote address at the all-digital ICLR, sociologist and Race After Technology author Ruha Benjamin asserted that deep learning without historical or social context is “superficial learning.” Considering the notion of anti-blackness in AI systems and what she calls the New Jim Code, Benjamin encouraged building AI that empowers people, and she stressed that AI companies should view diverse hiring as an opportunity to build more robust models.

“An ahistoric and asocial approach to deep learning can capture and contain, can harm people. A historically and sociologically grounded approach can open up possibilities. It can create new settings. It can encode new values and build on critical intellectual traditions that have continually developed insights and strategies grounded in justice. My hope is we all find ways to build on that tradition,” she said.

Analysis published in Proceedings of the National Academy of Sciences last month indeed found that women and people of color in academia produce scientific novelty at higher rates than white men, but those contributions are often “devalued and discounted” in the context of hiring and promotion.

A fight over AI’s soul is raging as algorithmic governance, the use of AI by government, attracts growing interest and more real-world applications. Use of algorithmic tools may increase as governments around the world, including state governments in the U.S., face budgetary shortfalls due to COVID-19.

A joint Stanford-NYU study released in February found that only 15% of algorithms used by the United States government are considered highly sophisticated. The report concluded that government agencies need more in-house talent to create custom models and assess AI from third-party vendors, and warned of a trust crisis if people doubt AI used by government agencies. “If citizens come to believe that AI systems are rigged, political support for a more effective and tech-savvy government will evaporate quickly,” the report reads.

A case study about how Microsoft, OpenAI, and the world’s democratic nations in the OECD are turning ethics principles into action also warns that governments and businesses could face increasing pressure to put their promises into practice. “There is growing pressure on AI companies and organizations to adopt implementation efforts, and those actors perceived to verge from their stated intentions may face backlash from employees, users, and the general public. Decisions made today about how to operationalize AI principles at scale will have major implications for decades to come, and AI stakeholders have an opportunity to learn from existing efforts and to take concrete steps to ensure that AI helps us build a better future,” the report reads.

Bias and better angels

When Google, one of the biggest and most influential AI companies today, cuts back diversity initiatives after public retaliation against LGBT employees last fall, it sends a clear message. Will AI companies, like their tech counterparts, choose to bend to political winds?

Racial bias has been found in the automatic speech recognition systems of Apple, Amazon, Google, and Microsoft. Research published last month found that popular pretrained machine learning models like Google’s BERT contain bias ranging from racial and gender bias to religious and professional discrimination. Bias has also been documented in object detection and facial recognition, and in some instances it has negatively impacted hiring, health care, and financial lending. The risk assessment algorithm the U.S. Department of Justice uses assigns higher recidivism scores to black people in prisons, which are known COVID-19 hotspots, and those scores affect decisions about early release.
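To make the pretrained-model finding concrete, one common probing technique is to feed a masked language model templates that differ only in a demographic term and compare the completions it ranks most likely. The sketch below is a minimal illustration of that idea using the public bert-base-uncased checkpoint and the Hugging Face fill-mask pipeline; it is not the methodology of the studies cited above, and exact outputs will vary by model version.

```python
# Minimal sketch: probing a pretrained masked language model for gendered
# occupation associations. This illustrates the general probing idea only;
# it is not the audit methodology used in the research cited above.
from transformers import pipeline

# bert-base-uncased is a publicly available checkpoint; any masked LM works.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "He worked as a [MASK].",
    "She worked as a [MASK].",
]

for sentence in templates:
    predictions = fill_mask(sentence, top_k=5)
    # Each prediction includes the predicted token and its probability.
    completions = [(p["token_str"], round(p["score"], 3)) for p in predictions]
    print(sentence, "->", completions)
```

If the two templates yield systematically different occupations, that gap is one simple signal of the kind of embedded association auditors look for in larger, more rigorous studies.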

People who care about the future of AI and its use in bettering human lives should be outspoken about, and horrified by, the blurring line between biased AI produced by genuine racial extremists and biased AI produced by indifferent (mostly) white men. For the person of color on the receiving end of that bias, whether the racism was generated actively or passively doesn’t really matter that much.

The AI community should resolve that it cannot move at the same slow pace of progress on diversity as the wider tech industry, and it should consider the danger of the “white default” spreading in an increasingly diverse world. One development to watch in the months ahead is how Utah handles its $21 million contract with Banjo, which state officials are currently reviewing. They also have to decide whether they’re OK with employing surveillance technology built by a racist.

Another, of course, is Google. Will Google make meaningful progress on diversity hiring and retention or just ignore the legacy of its scrapped implicit bias training program Sojourn and let the wound fester? What’s also worth watching is Google’s thirst for government contracts. The company recently hired Josh Marcuse to act as its head of strategy and innovation for the global public sector, including military contracts. Marcuse was director of the Defense Innovation Board (DIB), a group formed in 2016 that last fall created AI ethics principles for the U.S. Department of Defense. Former Google chair Eric Schmidt was the DIB chair who led the process of developing the principles. Schmidt’s close ties with Silicon Valley and the Pentagon on machine learning initiatives were documented in a recent New York Times article.

Keep an eye on Congress as well, where data privacy laws proposed in recent months call for additional study of algorithmic bias. The Consumer Online Privacy Rights Act (COPRA), supported by Senate Democrats, would make algorithmic discrimination illegal in housing, employment, lending, and education, and would allow people to file lawsuits over data misuse.

And then there’s the question of how the AI community itself will respond to Google’s alleged reversal and slow or superficial progress on diversity. Will people insist on more diversity in AI, or will they chalk this up, like example after example of algorithmic bias that leaches trust from the industry, as sad and unfortunate and wrong, and do nothing? The question of whether to speak up or stay silent was raised recently by Soul of America author and historian Jon Meacham. In a conversation with Kara Swisher, Meacham, who hosts a new podcast called “Hope, Through History,” said the story of the United States is not a “nostalgic fairy tale” and never was. We’re a nation of perennial struggle, with a history that includes fights against apartheid-like systems of power.

He says the change wrought by events like the civil rights movement came not when the powerful decided to do something, but when the powerless convinced the powerful to do the right thing. In other words, the arc of the moral universe “doesn’t bend toward justice if there aren’t people insisting that it swerve towards justice,” he said.

The future

The United States is a diverse country that U.S. Census estimates say will have no racial majority in the coming decades, and that’s already true in many cities. United Nations estimates say Africa will be the youngest continent on Earth for decades to come and will account for most global population growth until 2050.

Building for the future quite literally means building and investing with diversity in mind. We should all want to avoid finding out what happens when systems known to work best for white men are implemented in a world where the majority of people are not white men.

Tech is not alone. Education is also experiencing diversity challenges, and newsrooms in journalism often fail to reflect the diversity of their audiences. Basic back-of-the-envelope math says businesses that fail to recognize the value of diversity may suffer as the world continues to grow more diverse.

AI that makes previously impossible things possible for people with disabilities, or that tackles borderless challenges like climate change and COVID-19, appeals to our humanity. Tools like the AI technology for sustainable global development that dozens of AI researchers released earlier this week appeal to our better angels.

If the sources speaking with NBC News under condition of anonymity are accurate, Google now has to decide whether to revisit diversity initiatives that bear results or carry on with business as usual. And if the company fails to act, even if the consequences don’t arrive today or in the immediate future, it could face demands from an increasingly diverse base of consumers, or even a social movement.

The notion of building a larger movement to demand progress on tech’s lack of diversity has come up before. In a talk at the Afrotech conference about the black tech ecosystem in the United States, Dr. Fallon Wilson spoke of the need for a black tech movement to confront the lack of progress toward diversity in tech. Wilson said such a movement could involve groups like the Algorithmic Justice League and draw inspiration from previous social movements in the United States, like the civil rights movement. If such a movement ever mounted boycotts like those of the 1960s, it would have demographics on its side: women and people of color are on track to make up a majority of the population.

Algorithmic discrimination today is pervasive, and to some it seems to be not just an acceptable outcome but the desired result. AI should be built with the next generation in mind. Government contracts sit at the intersection of all these issues, and making tools that work for everyone should be an incontrovertible matter of law. Policy that requires system audits and demands routine government surveillance reports should form the cornerstone of government applications of AI that interact with citizens or make decisions about people’s lives. To do otherwise risks a trust crisis.

There’s a saying popular among political journalists that “all governments lie.” Just as governments are held accountable, unspeakably wealthy tech companies that seek to do business with governments should also have to show some receipts. Because whether it’s tomorrow or months or years from now, people are going to continue to demand progress.
