Artificial intelligence (AI) is an ever-growing technology. More than nine out of 10 of the nation’s leading companies have ongoing investments in AI-enabled products and services. As the popularity of this advanced technology grows and more businesses adopt it, the responsible use of AI — often referred to as "ethical AI" — is becoming an important factor for businesses and their customers.

What is ethical AI?

AI poses a number of risks to individuals and businesses. At an individual level, this advanced technology can endanger an individual’s safety, security, reputation, liberty and equality; it can also discriminate against specific groups of individuals. At a higher level, it can pose national security threats, such as political instability, economic disparity and military conflict. At the corporate level, it can pose financial, operational, reputational and compliance risks.

Ethical AI can protect individuals and organizations from threats like these and many others that may result from misuse. As an example, TSA scanners at airports were designed to provide safer air travel and can recognize objects that normal metal detectors would miss. Then we learned that a few “bad actors” were using this technology to share silhouetted nude images of passengers. The flaw has since been fixed, but nonetheless, it’s a good example of how misuse can break people’s trust.

When such misuse of AI-enabled technology occurs, companies with a responsible AI policy and/or team will be better equipped to mitigate the problem. 

Implementing an ethical AI policy

A responsible AI policy can be a great first step to ensure your business is protected in case of misuse. Before implementing a policy of this kind, employers should conduct an AI risk assessment to determine the following: Where is AI being used throughout the company? Who is using the technology? What types of risks may result from this AI use? When might risks arise?

For example, does your business use AI in a warehouse that third-party partners have access to during the holiday season? How can your business prevent and/or respond to misuse?
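The four assessment questions above (where, who, what and when) amount to building an inventory of AI use across the company. As a purely illustrative sketch, not a prescribed framework, here is how such an inventory might be modeled in Python; the field names and the example entry are assumptions drawn from the warehouse scenario:

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row in an AI risk inventory: where, who, what risks, and when."""
    system: str             # where AI is used (e.g., warehouse analytics)
    users: list             # who uses it (employees, third-party partners)
    risks: list             # what types of risks may result
    exposure_window: str    # when risks are heightened

# Hypothetical entry based on the warehouse example above
inventory = [
    AIRiskEntry(
        system="warehouse camera analytics",
        users=["employees", "third-party logistics partners"],
        risks=["unauthorized footage sharing", "surveillance overreach"],
        exposure_window="holiday season (expanded partner access)",
    ),
]

# A policy review might flag every entry that involves third-party access
flagged = [e for e in inventory if any("partner" in u for u in e.users)]
```

An inventory like this gives the responsible AI team a concrete list of systems to cover when drafting the policy and assigning accountability.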

Once employers have taken a comprehensive look at AI use throughout their company, they can start to develop a policy that will protect their company as a whole, including employees, customers and partners. To reduce associated risks, companies should factor in certain key considerations. They should ensure that AI systems are designed to enhance cognitive, social and cultural skills; verify that the systems are equitable; incorporate transparency throughout all parts of development; and hold any partners accountable.

In addition, companies should consider the following three key components of an effective responsible AI policy: 

It is important to note that different businesses may require different policies based on the AI-enabled technologies they use. However, these guidelines can help from a broader point of view.

Build a responsible AI team

Once a policy is in place and employees, partners and stakeholders have been notified, it is vital to ensure a business has a team in place to enforce the policy and hold misusers accountable.

The team can be customized depending on the business’s needs, but here is a general example of a robust team for companies that use AI-enabled technology:

Ultimately, an effective responsible AI team can help ensure your business holds accountable anyone who misuses AI throughout the organization. Disciplinary actions can range from HR intervention to suspension. For partners, it may be necessary to cease using their products immediately upon discovering any misuse.

As employers continue to adopt new AI-enabled technologies, they should strongly consider implementing a responsible AI policy and team to efficiently mitigate misuse. By using the framework above, you can protect your employees, partners and stakeholders.

Mike Dunn is CTO at Prosegur Security.


