Technologists, social scientists, and others are rightly concerned over bias in artificial intelligence. As the technology continues to infiltrate the digital systems that impact our lives, ensuring that AI does not discriminate on the basis of race, gender, or other factors is emerging as a top priority.
Enhancing social justice is important to the enterprise as well, but equally important is the ability to succeed in a competitive marketplace. And the fact remains that bias in AI is not only detrimental to society; it can also lead to poor decision-making that causes real harm to business processes and profitability.
A bad reputation hurts
USC assistant professor Kalinda Ukanwa recently highlighted the myriad ways in which poorly trained algorithms that produce biased results can lead organizations down the wrong path. For one, word of mouth can quickly spread tales of unfair treatment throughout a given community, resulting in lost opportunities and diminished sales. Her research has also shown that “group-aware” algorithms, which attempt to predict an individual’s behavior from that person’s assignment to a particular group, may deliver results in the short term but ultimately fall behind AI operating on a “group-blind” basis.
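The distinction can be sketched in a few lines of code. This is a hypothetical illustration, not Ukanwa’s actual model: the function names, the blending weight, and the numbers are all made up for the example.

```python
# Hypothetical sketch: a "group-aware" scorer blends in a group-level
# average, while a "group-blind" scorer uses only the individual's own history.

def group_aware_score(individual_history, group_average, weight=0.5):
    """Blend the individual's observed behavior with their group's average."""
    personal = sum(individual_history) / len(individual_history)
    return weight * personal + (1 - weight) * group_average

def group_blind_score(individual_history):
    """Score using only the individual's own observed behavior."""
    return sum(individual_history) / len(individual_history)

# An individual whose behavior diverges from their assigned group:
history = [0.9, 0.8, 0.95]   # strong individual signal
group_avg = 0.3              # weak group-level prior

print(group_blind_score(history))             # ~0.88, tracks the person
print(group_aware_score(history, group_avg))  # ~0.59, pulled toward the group
```

For individuals who match their group, the two scorers agree; for those who diverge, the group-aware version systematically mispredicts them, which is the long-term disadvantage Ukanwa’s research points to.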
Another key source of bias-induced friction between an organization and its customers and employees arises when direct interaction becomes necessary, such as in a call center. Nice, a developer of robotic process automation (RPA) for call centers, recently developed a framework to help ensure that AI remains helpful and friendly to users and employees, which in turn builds strong brand loyalty and positive social media buzz. Among its key points are the need to focus on delivering positive outcomes in every interaction and to train bots to be free of racial, gender, age, and other biases, producing a thoroughly agnostic view of humanity.
Data scientists categorize AI bias under several domains, such as sample bias and selection bias, but one of the most detrimental to the enterprise is predetermination bias, according to author and entrepreneur Jedidiah Yueh. This is where AIs (and humans as well) try to prepare for the future they expect, not necessarily the one they’ll get. This is understandable but, in an age where AI itself is producing a radically unpredictable future, it is fraught with danger because it inhibits innovation and the ability to remain flexible in a changing environment. Unfortunately, predetermination is often hard-wired into the ETL (extract, transform, load) process itself, so undoing it requires more than changes to AI training.
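A minimal sketch of how predetermination can be baked into a pipeline rather than a model: here a hypothetical transform step silently drops records that fall outside a fixed, historically derived range. The constant, function names, and data are invented for illustration.

```python
# Hypothetical ETL transform that hard-wires an expectation about the future:
# records outside last year's observed price range are silently dropped,
# so downstream models never see evidence of a market shift.

EXPECTED_PRICE_RANGE = (10.0, 100.0)  # assumption fixed at pipeline design time

def transform_biased(records):
    """Drop 'outliers' relative to a fixed historical expectation."""
    lo, hi = EXPECTED_PRICE_RANGE
    return [r for r in records if lo <= r["price"] <= hi]

def transform_neutral(records):
    """Keep all records; let downstream analysis decide what is anomalous."""
    return list(records)

records = [{"price": 50.0}, {"price": 95.0}, {"price": 140.0}]  # market shifted
print(len(transform_biased(records)))   # 2, the shift is filtered out
print(len(transform_neutral(records)))  # 3
```

This is why, as Yueh notes, retraining the model is not enough: the biased filter sits upstream of training and would have to be changed in the pipeline itself.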
Harnessing bias for good
Enterprise leaders should also avoid the trap that comes from thinking that all bias is bad, says Dr. Min Sun, chief AI scientist at Appier. In many marketing scenarios, it can be helpful to build bias into AI algorithms if you’re trying to figure out buying trends for, say, single women of a certain age. The trick is to ensure that decision-makers are aware that these biases are present and can view the resulting data in the appropriate manner. To do this successfully, it’s important not to introduce bias into the learning model itself but rather into the data that the model is trained on.
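The data-level approach described above can be sketched as follows. This is a hypothetical example, not Appier’s method: the segment filter, field names, and records are made up, and the “training” step is a stand-in tally.

```python
# Hypothetical sketch: the training logic stays generic, while the bias is
# introduced deliberately and visibly in the selection of training data
# (here, a made-up "single women, age 25-34" marketing segment).

def segment_filter(row):
    """Keep only the marketing segment we deliberately want to model."""
    return (row["marital_status"] == "single"
            and row["gender"] == "F"
            and 25 <= row["age"] <= 34)

def train_counts(rows):
    """Stand-in for model training: tally purchases per product."""
    counts = {}
    for row in rows:
        counts[row["product"]] = counts.get(row["product"], 0) + 1
    return counts

data = [
    {"marital_status": "single", "gender": "F", "age": 29, "product": "A"},
    {"marital_status": "married", "gender": "F", "age": 41, "product": "B"},
    {"marital_status": "single", "gender": "F", "age": 33, "product": "A"},
]

# The bias lives in the data selection, not in the training logic:
segment_data = [r for r in data if segment_filter(r)]
print(train_counts(segment_data))   # {'A': 2}
```

Because the filter is an explicit, named step, decision-makers can see exactly which bias was introduced and interpret the model’s output accordingly.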
The key problem enterprises face when trying to eliminate bias from AI is that today’s data governance policies are not suited for this new mode of operation, says tech author Tom Taulli. All too often, AI projects lack the coordination needed to stamp out bias and produce an effective ROI, and this usually stems from the distance that exists between data science and application development teams. While there is always a temptation to automate all functions in a given data process, governance should be an exception because only a hands-on, intuitive approach can ensure that goals are being met in a rapidly changing environment.
With bias so prevalent in the AI projects already deployed, enterprise leaders would be wise to take a hard look at where and how AI is being employed — not just in the interests of the greater social good, but for their own economic reasons as well. In this day and age, trust is a rare and valuable commodity, and once it is lost it is not easily regained. The last thing any organization should want is to be tarnished with a bias label caused by a poorly trained AI process.