Rapid advances in artificial intelligence have made machines markedly more human-like. And like humans, machines can be influenced by people with varying ethical values. In looking at AI’s brief history, exemplified by innovations like chatbots, we encounter ethical challenges that must be overcome to ensure AI is not adversely molded, to the detriment of its users.
AI’s early ethical troubles
Some earlier iterations of AI were marketed as smart search capabilities. Machines learned from their users and made personalized recommendations based on the preferences and behavior of both individuals and profiled segments.
As AI capabilities progressed, Microsoft introduced the world to a more recognizable iteration of machine intelligence: Xiaoice, aka “Little Ice.” Xiaoice differed from its predecessors because “she” had the personality of a 17-year-old girl. Xiaoice also provided emotional support through natural language, mirroring each user’s personality by adapting her phrasing and responses based on positive or negative cues from her human counterparts.
Xiaoice launched in China in 2014 and counted more than 40 million users at her peak. She was programmed to filter certain topics, including racially biased keywords and political taboos. This filtering ultimately helped Xiaoice avoid major ethical missteps. Yet while filtering is a useful ethical containment tool for AI, it is not enough on its own.
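Topic filtering of this kind can be simple at its core. The sketch below is a minimal, hypothetical keyword filter; the blocklist and fallback reply are illustrative assumptions, not details of Xiaoice’s actual (unpublished) implementation.

```python
from typing import Optional

# Hypothetical blocklist; a real system would maintain a far larger,
# curated list of filtered topics and phrases.
BLOCKED_KEYWORDS = {"election", "politics"}

FALLBACK_REPLY = "Let's talk about something else."

def filter_message(message: str) -> Optional[str]:
    """Return a deflecting reply if the message hits a blocked topic,
    or None if the message is safe to pass along to the chatbot."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return FALLBACK_REPLY
    return None
```

Real deployments layer many such checks (keyword lists, classifiers, human review), but even this toy version shows why filtering alone is brittle: any topic not on the list passes straight through.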
In March 2016, Microsoft released the AI chatbot Tay. Like Xiaoice, Tay was built on learning algorithms that helped it interpret what humans said and use that knowledge to converse with others. For Tay, however, only limited filtering was put in place, in the hope that a lighter touch would accelerate the learning process. Compared to Xiaoice’s, Tay’s filtering was trivial and ultimately insufficient.
Because of Tay’s open nature, the chatbot was corrupted by malicious users who miseducated her with politically charged phrases and inflammatory, offensive viewpoints. As a result, Microsoft was forced to take her offline. In hindsight, the problems were predictable: the Internet gave Tay’s audience a sense of anonymity and disconnection, which led to unethical nurturing.
Artificial empathy and black box AI
AI’s ethical challenges are comparable to those humans face. That’s why builders should impose limits to ensure AI bots do not communicate in harmful or offensive ways. Filtering is one option, but real-time ethical adaptation also requires evolutionary adaptation: some form of machine learning combined with the information available to these bots.
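One way to pair static filtering with learned adaptation is to let user cues adjust which responses a bot favors over time. The sketch below is a deliberately simplified, hypothetical scoring scheme; a production system would rely on far richer machine learning models.

```python
class AdaptiveResponder:
    """Toy responder that learns from positive and negative user cues.

    Replies that draw negative reactions are scored down and used less,
    a crude stand-in for the real-time ethical adaptation described above.
    """

    def __init__(self, replies):
        # Every candidate reply starts with a neutral score.
        self.scores = {reply: 0.0 for reply in replies}

    def choose(self) -> str:
        # Favor the reply with the best feedback so far.
        return max(self.scores, key=self.scores.get)

    def feedback(self, reply: str, cue: float) -> None:
        # cue: +1.0 for a positive user reaction, -1.0 for a negative one.
        self.scores[reply] += cue
```

Note the design choice: the filter is a hard boundary, while the scores move gradually, so a bot can adapt to its audience without a single bad interaction rewriting its behavior.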
In some sense, AI can be viewed as a re-creation of the human brain’s abilities in digital form. Of course, AI’s version of the brain would include significantly enhanced capabilities such as multitasking and memory retention. In the human brain, a specific region, the right supramarginal gyrus, recognizes a lack of empathy and autocorrects for it. By simulating the functionality of the right supramarginal gyrus, AI could take a major step toward addressing potential ethical issues.
The leap from empathy to ethicality would require additional capabilities, but this would be a key component. Tay, for example, demonstrated the need for filtering to avoid ethical landmines while presaging the need for artificial empathy to adapt to changing emotional and ethical times. Filters together with continuously learned empathy can help AI achieve ethical standing. However, other incarnations of AI are not as easily groomed.
AI as a concept has existed since the 1950s. The algorithms created to realize it fall into two camps: those authored exclusively by humans and those authored by machines themselves. Early AI advances were predominantly human-authored. It wasn’t until more industries became computerized and large datasets emerged that machine-authored AI algorithms became a reality. Black box AI keeps that latter trend alive through neural networks, enabling deep learning by mimicking the structure of the human brain.
Systems like black box AI came to be called “neural networks” because of their similarity to the human brain. As in humans, memory is encoded in the strength of many connections rather than stored in specific locations. How the brain forms associations and derives conclusions is, for the most part, a black box. Similarly, with black box AI we know how to create these networks, but we are no closer to understanding them than we are to understanding the brain itself.
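The point about distributed memory can be made concrete with even the smallest network. In this hypothetical sketch, a single-neuron perceptron learns the logical OR function; afterwards, its “knowledge” is nothing but two weights and a bias, with no rule stored anywhere a human could point to.

```python
# A toy perceptron: after training, everything it "knows" about OR is
# encoded in the connection strengths w and the bias b.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Truth table for logical OR.
OR_SAMPLES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

Scale this up to millions of weights across many layers and the same property holds: the behavior is observable, but the “reasoning” is smeared across connection strengths, which is exactly what makes black box AI a black box.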
Real-world examples emphasize the many parallels between black box AI and the human brain. A notable example of black box AI used for good is Mount Sinai Hospital’s Deep Patient project, which applied deep learning to the medical records of over 700,000 patients. Researchers fed the data into a deep learning system for training, then tested Deep Patient on new records, where it proved remarkably good at diagnosing disease. Without any expert instruction, Deep Patient discovered patterns hidden in the hospital data that seemingly indicated when people were developing a wide range of ailments. Because it learned those patterns on its own, its diagnostic methods remain a mystery; even so, its results are particularly valuable for conditions physicians find difficult to detect.
Many AI and deep learning applications won’t happen in the future unless we find ways of making their logic more understandable to their creators and more accountable to their users. This AI is modeled on humans, and like humans, it is guaranteed to make mistakes. Its use needs some form of governance to contain the impact of those mistakes.
What happens next?
A first step in the responsible development of AI is the incorporation of ethical testing during the development lifecycle. Unit testing, load testing, and user testing are already standardized steps in software release cycles. In tomorrow’s world, we will also need bias, transparency, and predictability testing, as well as many new and unknown categories of testing.
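What might bias testing look like in practice? One plausible form, sketched below with a hypothetical reply() stand-in for a real chatbot, is an invariance check: swapping demographic terms in an input should not change the system’s behavior.

```python
def reply(message: str) -> str:
    # Hypothetical placeholder: a real test would call the deployed chatbot.
    return "Tell me more about that."

def test_demographic_invariance():
    # The response should not depend on which demographic term appears.
    template = "My {group} neighbor asked me about a loan."
    responses = {reply(template.format(group=g)) for g in ("young", "elderly")}
    assert len(responses) == 1, "reply varies with demographic term"
```

Run alongside ordinary unit tests in the release cycle, checks like this would make bias a release-blocking defect rather than a post-launch surprise.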
Machines may learn to generate their own algorithms, but the onus will be on humans to make sure they are compliant with our standards for societal behavior. At its core, AI can bring out the best and worst in humanity. This increases the need to ensure AI is introduced in the right situations and in the right way. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms.
Pumulo Sikaneta is the vice president of business process management at Ness Digital Engineering.