Mozilla today released the 2019 Internet Health Report, an analysis that brings together insights from 200 experts to examine issues central to the future of the internet. This year’s report focuses primarily on injustice perpetuated by artificial intelligence; on what NYU’s Natasha Dow Schüll calls “addiction by design” tech, like social media apps and smartphones; and on the power of city governments and civil society “to make the internet healthier worldwide.”
The Internet Health Report is not designed to issue the web a bill of health; rather, it is intended as a call to action that urges people to “embrace the notion that we as humans can change how we make money, govern societies, and interact with one another online.”
“Our societies and economies will soon undergo incredible transformations because of the expanding capabilities of machines to ‘learn’ and ‘make decisions’. How do we begin to make tougher demands of artificial intelligence to meet our human needs above all others?” the report reads. “There are basically two distinct challenges for the world right now. We need to fix what we know we are doing wrong. And we need to decide what it even means for AI to be good.”
The modern AI agenda, the report’s authors assert, is shaped in part by large tech companies and by China and the United States. The report calls particular attention to Microsoft’s and Amazon’s sales of facial recognition software to immigration and law enforcement agencies.
The authors point to the work of Joy Buolamwini, whom Fortune recently named “the conscience of the AI revolution.” Audits published by Buolamwini and others in the past year found that facial recognition technology from Microsoft, Amazon’s AWS, and other tech companies was less accurate at recognizing people with dark skin, particularly women of color.
Also highlighted is the work of AI Now Institute cofounder Meredith Whittaker. A number of co-organizers of Google employees’ ethically motivated worldwide walkouts last fall say they have been demoted since the protest. Wired reported Monday that, after Google disbanded its AI ethics board, Whittaker said she was told to stop her work at the AI Now Institute if she wanted to keep her job. A Google spokesperson denied that any retaliatory changes were made.
“Are you going to harm humanity and, specifically, historically marginalized populations, or are you going to sort of get your act together and make some significant structural changes to ensure that what you create is safe and not harmful?” Whittaker asked, in a quote included in the report and drawn from Kara Swisher’s Recode podcast last month.
Referencing Finland’s initiative to train 1% of its population in the essentials of artificial intelligence, the report calls AI literacy critical not only for business and government leaders but for average citizens as well.
“Each and every one of us who cares about the health of the internet — we need to scale up our understanding of AI. It is being woven into nearly every kind of digital product and is being applied to more and more decisions that affect people around the world,” the report reads. “It’s not just technology companies that need to be interrogating the ethics of how they use AI. It’s everyone, from city and government agencies to banks and insurers.”
The report also explores solutions to the threat of deepfakes. Some scholars advise against attempts to regulate deepfakes, since regulation could allow governments to act as arbiters of fact and fiction and to label views they disagree with as “fake news.”
The report also focuses on what it calls “breaking free of the addiction machine” that includes smartphone apps, social media platforms, and recommendation engines. In the 2018 Internet Health Report, Mozilla argued that tech giants like Facebook and Amazon should be regulated, disrupted, or broken up.
Business models that reward engagement above all else still reign, and the report calls for new incentives and alternative business models.
“There is an opportunity for people within the tech sector — developers, designers, content creators, marketers, and others — to be leaders in creating apps and services that do not encourage addictive behaviors and instead incentivize positive, healthy online experiences,” the report reads.
It also takes a closer look at Germany’s hate speech law, which went into effect about a year ago and requires companies to remove hate speech from their social media platforms within 24 hours or face fines of up to €50 million ($56 million).
Twitter, Facebook, Google, and YouTube have published reports covering their first year of operations under the new legislation, but experts say they want more information and transparency to accurately gauge the impact of Germany’s hate speech law.
The report also looks at a number of other issues, such as worldwide access to the internet. Earlier this year, internet users came to represent a majority of the people on Earth. At the same time, however, internet censorship appears to be on the rise, including through intentional network slowdowns.