Artificial intelligence is quickly becoming a part of daily life. Enterprise implementations of AI-based technologies tripled in 2018, according to Gartner. At the same time, it’s reaching ubiquity in consumer-facing applications, helping us write our emails, discover new music, and get on-demand customer support. At every touchpoint, our data is being collected and used to make machines faster and smarter, and that’s driving calls for regulation from global citizens, governments, and companies who want to ensure deployments of machine and deep learning algorithms are safe and ethical.

While implementing laws to protect consumers from “AI-gone-wild” may seem like a reasonable proposition, it’s one that’s doomed to fail. We need only look at the case of GDPR to see how well-intentioned initiatives can end up becoming virtually unenforceable laws. Even the Principles on AI from the Organization for Economic Co-operation and Development (OECD) are unlikely to make a real impact, since they are non-binding recommendations that carry no ramifications for non-compliance.

The case against government intervention

Governments simply do not have the bandwidth or budget to tackle the complexities of AI. They’re also notoriously slow-moving: artificial intelligence is evolving too fast for them to establish effective legislation. Even the creation of “frameworks” that have no provision for enforcement is an exercise in futility. It’s the very real threat of penalty that will curb bad actors. For example, most people avoid insider trading because they know there’s a high risk that the SEC will discover the transgression and fine or imprison them for the offense. If insider trading were only part of a guideline, or were otherwise something that could be done without punishment, the practice would be much more rampant.

Legislative bodies also lack the domain knowledge that regulating the space would require. We allowed the proliferation of the internet, and then of social media, to go completely unchecked simply because governments (with the exception of China) didn’t see the ramifications until it was too late. Members of the U.S. Congress recently proved that they can’t even understand how Facebook generates revenue. Can we really expect them to understand the nuances of complex neural networks?

Inter-governmental agencies aren’t the solution either. Case in point: the OECD and its Principles on AI, which are more a suggested moral compass than anything else. The document contains zero technical detail despite the involvement of highly astute contributors from academic and scientific backgrounds, and it will do little to change the way organizations develop and implement AI.

A precedent for the solution

Neither legal regulation nor ethical guidelines will keep AI development from running amok. That doesn’t mean there isn’t a solution, though. In fact, the solution is a lot simpler than you might think: Establish an independent body that can create standards and a program for certification.

There’s ample precedent. Consider ISO 27001 and SOC 2, widely adopted standards for information security management; SOC reports are attested under formal auditing standards such as SSAE 16 (now SSAE 18) and ISAE 3402. These frameworks impose highly technical requirements covering password protection, mobile device security, data segregation, firewall configuration, and many other nuanced controls. While there’s no legal penalty for non-certification, certification is often a necessity for businesses wanting to engage with one another.

In AI, I propose that technical experts, investors, and policymakers come together to create a global, independent governing body responsible for establishing and enforcing AI standards. The standards, which should be reviewed regularly and tied to annual certification requirements, should spell out specific obligations: controls for avoiding bias in training data sets, checks to ensure AI is being used ethically and in a way that isn’t discriminatory, controls around automated decision making, and emergency measures for shutting down an AI system. Even if a separate body isn’t created, an existing standards organization such as FASB, ISO, the National Institute of Standards and Technology (NIST), or the IASB (with input from the AI Ethics Lab and a rotation of key players in the space) should step up before the same mistakes made with data privacy and social media are repeated.
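To make the first of those requirements concrete, here is a minimal illustrative sketch, not drawn from any existing standard, of what a machine-checkable data-set bias audit might look like. The column names, the demographic parity metric, and the 0.1 threshold are all assumptions chosen purely for illustration:

```python
# Illustrative sketch only: a hypothetical certification check that flags a
# training data set whose positive-label rate differs too much across groups
# (a simple demographic parity test). Column names and the 0.1 threshold are
# assumptions, not taken from any published standard.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Return the largest absolute difference in positive-label rates between groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())

def passes_bias_check(df: pd.DataFrame, group_col: str, label_col: str,
                      max_gap: float = 0.1) -> bool:
    """True if the data set's positive-label rates stay within the allowed gap."""
    return demographic_parity_gap(df, group_col, label_col) <= max_gap

if __name__ == "__main__":
    # Toy data: group "a" receives positive labels far more often than group "b".
    data = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "b"],
        "label": [1, 1, 1, 0, 0, 1],
    })
    print("gap:", demographic_parity_gap(data, "group", "label"))
    print("passes:", passes_bias_check(data, "group", "label"))
```

A real certification regime would demand far more than this (multiple fairness metrics, documentation, independent audit), but even a simple automated test of this kind is more enforceable than a non-binding principle.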

Organizations that choose to deploy AI will have a strong incentive to earn this certification: it serves as a stamp of approval, and partner businesses and consumers will require it as a condition of doing business. Enforcement will be a product of the capital markets, because non-compliant organizations will find they have fewer markets to operate in, which reduces the amount of business they can do.

This approach has many benefits. It reduces the need for government involvement and for regulations that would be ineffective anyway, and it raises the business community’s awareness of the specific technical standards that must be met. The model also has a proven track record: the compliance precedents already exist.

While governments can provide input and support for the standards, they don’t need to waste time and resources drafting rules they lack the expertise to write or the ability to enforce.

If you want change and you want to be a part of developing something new and impactful, contact me so that we — as tech leaders, investors, and consumers — can work on creating AI standards to present to the Senate AI Caucus. You can message me on LinkedIn or Twitter.

Abhinav Somani is CEO of Leverton.