Alphabet CEO Sundar Pichai said artificial intelligence is “more profound than fire or electricity.” Author and historian Yuval Noah Harari said, “If you have enough data about me, enough computing power and biological knowledge, you can hack my body, my brain, my life, and you can understand me better than I understand myself.” And a recent Brookings Institution report prophesied that the country or region leading in AI in 2030 will rule the planet until at least 2100.
It’s clear the AI revolution will impact society drastically, both for good and bad. Recently, using natural-language processing and machine-learning techniques, Canadian company BlueDot sifted through global news reports, airline data, and reports of animal disease outbreaks to issue an alert about the current coronavirus outbreak days ahead of official organizations such as the World Health Organization. But an AI survey last year found about 70% of the public and technology leaders believe AI will lead to greater social isolation and a loss of human intellect and creativity. Already, data collected about us without our knowledge and processed by AI applications is used to manipulate our wants and beliefs, effectively controlling people for commercial or political purposes. Deepfake videos created with readily available AI tools pose an additional challenge, further undermining our ability to know what is real and what is fake. And, of course, AI is expected to take many of our jobs.
Governments are at cross purposes
Given the force of this technology, shouldn’t governments be bracing for its effect with robust regulations? The U.S. government so far is taking a mostly hands-off approach. U.S. Chief Technology Officer Michael Kratsios warned federal agencies against over-regulating companies developing artificial intelligence. Some also believe the U.S. government has no desire to issue meaningful regulation at all, finding it antithetical to the administration’s core beliefs.
There is greater movement underway by the European Union (EU), which will issue a paper in February proposing new AI regulations for “high-risk sectors,” such as healthcare and transport. These rules could inhibit AI innovation in the EU, but officials say they want to harmonize and streamline rules in the region. China is pursuing a different strategy designed to tilt the playing field to its advantage as exemplified by its standards efforts for facial recognition. Ultimately, it is in the worldwide public interest for the AI superpowers, the U.S. and China, to collaborate on common AI principles. But at present it appears there is no ongoing dialog, effectively creating a regulatory standoff.
Perhaps that is okay: the just-released 2020 Edelman Trust Barometer highlights the public’s lack of faith in government’s ability to understand how to regulate AI and other emerging technologies.
This governance lag is due both to conflicting national and regional interests and to the high velocity with which AI technologies have progressed. These twin drivers are at the crux of the challenge for how to regulate this revolution. Government may try but is largely unequipped to create and enforce meaningful regulation for public benefit. That leaves private industry to foster regulation. But will they do it?
Big Tech wants a light touch
Those leading the AI revolution are the giant technology companies, what Amy Webb, founder of the Future Today Institute, refers to as “The Big Nine.” These include U.S. firms Google, Microsoft, Amazon, Facebook, IBM, and Apple plus China’s Baidu, Alibaba, and Tencent. Arguably, it’s not only the future of AI that they control but possibly the future of humanity. Taken together, they form a global oligopoly with near total control of the technology and its underlying data. While there is growing consensus among them on the need for some form of regulation, they’ve voiced different views on how to go about it.
At Davos this year, Ginni Rometty, now the former IBM CEO, called for “precision regulation” to allow for AI technical advancement and the ability to compete in a global marketplace. This view is similar to one espoused by Tom Wheeler of the Brookings Institution’s Center for Technology Innovation. He says regulation should focus on the technology’s effects rather than chase broad-based fears about the technology itself. They are both basically saying that companies should have the freedom to create leading technologies without regulations slowing them down.
Alphabet’s Pichai recently advised the European Commission to take a “proportionate approach” when drafting AI rules and followed this with an op-ed in which he argued that AI technology needs to be harnessed for good and available to everyone. Microsoft president Brad Smith said the world “should not wait for the technology to mature” before regulating AI.
While it’s possible the Big Nine companies have the public’s interest at heart when they propose regulation strategies, it’s equally plausible that they see regulations as a way to raise barriers to entry into the AI space, further cementing their leadership positions while fending off potential antitrust lawsuits.
Speeches at Davos and op-eds are all well and good, and perhaps heartfelt, but real actions point to the conflicting interests of these companies. Mostly they support regulation that is light on specifics, and they often oppose more encompassing (or what they see as restrictive) efforts. The regulations that emerge over the next several years are likely to be a patchwork, varying by geography and degree of specificity, open to interpretation, and with enough loopholes to be largely ineffective.
The public is just trying to adapt
This gives rise to a larger question about whether the AI revolution can be effectively regulated at all. In the meantime, Harvard professor Shoshana Zuboff said in a recent op-ed: “Without law, we scramble to hide in our own lives, while our children debate encryption strategies around the dinner table and students wear masks to public protests as protection from facial recognition systems built with family photos.”
The AI revolution is underway. Conventional wisdom holds that people are ultimately in control. There may still be time and opportunity for meaningful worldwide regulation, but conflicting views among governments make this unlikely, and vast industry lobbying will strive to keep rules effectively weak. In a fast-moving revolution, the notion of control may be only an illusion.