As internally developed artificial intelligence systems move from lab to deployment, the importance of creating unbiased, ethical systems is greater than ever. The challenge is that there is no simple way for companies to build ethical considerations into AI algorithms. But there are key steps you can take early on that help.
A machine learning algorithm can’t tell you whether a decision is ethical or whether it will irreparably damage morale within your organization. It hasn’t spent years honing its business intuition, the intuition that tells you that even though a recommendation looks right on paper, it will be poorly received by your client base.
That’s where human judgment enters the picture. There are a number of approaches you can take to integrating AI into your decision-making strategies. Depending on how high the stakes are and the problem you’re trying to solve, you might outsource the job to AI but insist that a person review its findings before action is taken. Or you might identify key areas that will largely be the domain of AI, which will relieve you of the need to be involved in every decision related to that particular process.
Human judgment will remain central to business decisions for some time to come. As AI systems become increasingly powerful tools in corporate arsenals, we must ensure that those tools support our ethics. Here are some ways to do that before you even write the first line of code.
Identify your company’s core values
Systematizing your company’s core values begins with identifying and documenting those values. Start a process that captures the values that have become central to your company culture. One researcher made a useful distinction between “values” as marketing and “values” as deeply held beliefs: “If you’re not willing to accept the pain real values incur, don’t bother going to the trouble of formulating a values statement.”
If an AI system suggests a course of action that makes sense on paper but not in the broader context of your organization’s long-term goals, you’ll need a strong internal compass to make the right call. Data is important, but you’re ultimately responsible for your decisions. When called upon to explain your actions, you can’t just say, “The AI made me do it.” Use such tools to gather information and add context to your decision-making process. But when you make a decision for your company, you should include humanity in the process.
Establish an AI oversight group
Machine learning systems are only as good as the data we feed them. This immediately creates a challenge for AI system developers: Humans are biased. Even the most fair-minded person carries unconscious biases. So without meaning to, developers can end up corrupting the very systems they design to help us make more objective decisions.
To get around this problem, create internal AI watchdog groups that periodically review your algorithms’ outputs and can address complaints about discrimination and bias. Then use the group’s findings to refine how individuals use the system throughout the organization.
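As a concrete illustration, part of an oversight group's periodic review could be automated. The sketch below is hypothetical, not a prescribed implementation: it assumes the group can export a model's decisions alongside a demographic attribute, and it applies the widely used "four-fifths rule" heuristic (a selection-rate ratio below 0.8 between groups is commonly treated as a flag for potential disparate impact).

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate.

    Values below ~0.8 (the "four-fifths rule") are a common heuristic
    threshold for flagging potential bias for human review.
    """
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Example review: model approvals broken down by a protected attribute.
preds  = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(preds, groups)
if ratio < 0.8:
    print(f"Flag for human review: disparate impact ratio {ratio:.2f}")
```

A check like this does not replace the watchdog group's judgment; it simply gives the group a recurring, quantitative trigger for the deeper case-by-case review of complaints described above.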
Anticipate AI’s impact on employees
Machine learning systems can generate powerfully personalized experiences — for both customers and employees. Using AI ethically includes shifting employee performance metrics from output-based measurements to evaluating the creative value they bring to the company.
In an article published by the World Economic Forum, Maria Grazia Pecorari says, “Although there are roles under threat, there are also roles that will become needed more than ever. It’s more cost efficient to retrain current employees to fill the roles you need in the future than it is to hire new ones, and they are also more likely to be loyal to your organisation.”
One benefit of deploying AI tools in the workplace is that they free employees from much of the drudge work. As their roles become more dynamic, so too should your evaluation standards.
By investigating these three areas early in the development process, your company is better positioned to build new AI systems that reflect — and protect — your company’s values, even as they improve the experiences of your customers and employees alike.
Additional article contributors: Mehdi Ghafourifar and Brian Walker.
Alston Ghafourifar is the CEO and cofounder of Entefy, an AI communication technology company building the first universal communicator.
A version of this article originally appeared at Entefy. Copyright 2018.