
Prescribing rules for robots used to be a game for sci-fi fans. Now it is an urgent task for serious people. There is broad agreement among governments, policymakers, companies, and the general public that AI must be regulated. Why?

First, AI affects human well-being. It impacts both petty and profound aspects of people’s lives — from the ads they see and the music they hear to how much they are paid and whether they can vote. If social rules exist to ensure that questions like these are settled fairly, then AI should be required to meet the standards of fairness expressed in such rules.

Second, although the human mind is an astonishingly complex machine, AI can now replicate some of its functions. In fact, AI can sometimes process so much data so rapidly that it holds a cognitive advantage over humans. If we harness AI properly, it can extend our capabilities. This presents heady opportunities – but also dangers. We must apply rules to AI to maximize its power to do good while limiting the risk of misuse.

Who makes the rules?

A consensus is emerging about what the rules for AI should be, at least in rough outline. The EU recently published its AI Ethics Guidelines, the OECD has specified principles for the "democratization of AI," the World Economic Forum has established a Global AI Council to help close "governance gaps" between states, and Dubai has published an Ethical AI Toolkit – among other examples. Each says its own thing, but all share a common spirit.



However, the fact that rules and guidelines are being proposed by regulators at different levels – city, national, and supranational – suggests uncertainty about how best to make AI regulation a reality. In particular: who should set and oversee the rules?

In my experience, cities are the best candidates. Attempts to regulate AI at the corporate or transnational level often fail to yield suitable principles. For example, the EU's AI Ethics Guidelines answers the philosophical question of defining "trustworthy" AI very clearly, but observers have noted that it lacks a decisive call to action. And for various reasons, Google's AI ethics board disintegrated only a week after it was formed. These failures stem not from a lack of vision, but from the fact that companies and countries aren't always the best-structured environments for the task.

Cities, by contrast, have a few advantages.

One is an immediacy of purpose. Cities are often the site both of technology development and of its initial implementation. This gives cities an incentive to set practical ground rules early on. In Dubai, this resulted in our AI ethics toolkit being presented as a user-friendly checklist rather than as a set of abstract precepts. Our goal was to encourage organizations and individuals to start using the guidelines quickly, and this influenced our approach.

Cities also have the sharpest tools for stimulating the innovation culture that underpins AI development and adoption in the first place. Dubai, for example, has been doing several things (including launching some highly successful accelerator programs) to attract bright tech talent from around the world. Any regulatory scheme for AI requires engagement from as many stakeholders as possible: while cities can capitalize on their limited scale to interest and involve as many people as possible, states (which are usually bigger and have a wider spectrum of priorities) cannot always do the same. Cities also benefit from the commonality of vision – and, in most cases, the sense of identity – that regulatory schemes must reflect in order to maximize support.

A third advantage is that the legislative capabilities of city governments usually sit in the sweet spot between companies (which don’t have legislative competency) and states or supranational institutions (whose regulatory priorities, which often overlook AI, may be too hard-set).

Of course, the power of cities extends only as far as their municipal boundaries, and if AI regulation is to be seamlessly effective, it will be necessary for cities to coordinate – and ideally to settle on at least some common rules. Higher-level governments can play an important role here. Getting cities to synchronize approaches to AI won’t always be easy, but it’s a challenge that both countries and companies will have to take on.

What will the rules be?

From the first sets of recommendations we’ve seen, the rules proposed for governing AI seem to reflect a common set of values:

Transparency. AI can seem opaque. It is clever software that converts data into actions but usually hides the in-built "decision tree," and the justification for each branch, from view. Often, this isn't a problem. Most people don't care why a smart signal turns green five rather than seven seconds after they arrive at a crossing. But in more consequential cases – such as when AI assesses job applications or determines creditworthiness – people do care. Good AI regulations should permit inspection of the inner workings of AI so that any arbitrary or unfair decisions can be amended or discarded.
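The kind of inspectability this principle demands can be sketched in a few lines of code: decision logic that returns not just a verdict but an audit trail a reviewer can examine. The thresholds and rules below are invented purely for illustration and do not come from any real credit-scoring system:

```python
# A minimal sketch of "inspectable" decision logic: a hypothetical
# credit-screening rule set that records the reason for every branch
# it takes, so a reviewer can audit why an applicant was rejected.
# All thresholds here are made up for illustration.

def assess_application(income: float, debt_ratio: float, history_years: int):
    """Return (decision, audit_trail) rather than a bare verdict."""
    trail = []  # human-readable record of each branch taken
    if income < 20_000:
        trail.append(f"income {income} below 20000 threshold -> reject")
        return "reject", trail
    trail.append(f"income {income} meets 20000 threshold")
    if debt_ratio > 0.5:
        trail.append(f"debt ratio {debt_ratio} above 0.5 limit -> reject")
        return "reject", trail
    trail.append(f"debt ratio {debt_ratio} within 0.5 limit")
    if history_years < 2:
        trail.append(f"only {history_years} years of history -> manual review")
        return "review", trail
    trail.append(f"{history_years} years of history -> approve")
    return "approve", trail

decision, trail = assess_application(45_000, 0.3, 5)
print(decision)  # approve
for step in trail:
    print(" -", step)
```

A regulator auditing such a system could read the trail for any individual decision and check whether the branch that fired was fair; opaque statistical models require extra tooling to produce an equivalent record, which is precisely what transparency rules would mandate.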

No monopoly. Computers cannot feel as humans do, but they can process a lot more data. While a person might spend a day contemplating a single choice, a smart computer can make trillions. And the potency of any AI program is usually greater when it has access to more data. This creates a tension between humans and technology. On one hand, AI does best when it can monopolize data and decision-making power. But communities hoping to benefit from AI will want to restrict the power of any single program. After all, the opportunity for abuse is more acute when a single program dominates, and the wide-ranging gifts of AI cannot be unlocked if it is limited to a single program with a narrow purpose. Ensuring no single company or program has a monopoly on any type of citizen data is therefore critical.

Human well-being as priority. Finally, most people agree that the public's well-being should serve as a veto test for AI use. If a proposed application enhances well-being, it is acceptable; if it harms well-being, it should be redesigned or rejected. This moral principle is familiar. And even if people sometimes disagree about what exactly constitutes happiness, whose happiness matters most, or whether happiness is boosted in a particular instance, there is an understanding that AI ought ultimately to be an instrument of happiness and well-being.

Cities to take the lead

For those of us who work with AI, the strengthening consensus that it ought to be regulated is highly positive. It shows that governments, companies and individuals are taking AI seriously and that they see it as a long-term ingredient in economic and social development. However, it is also important that we don’t become complacent and assume that AI will somehow start to oversee itself. We must agree on practical rules and start following them as quickly as possible. And cities should look to take the lead.

Aisha Bint Butti Bin Bishr is Director General of Smart Dubai.
