Artificial intelligence (A.I.) and machine learning are hot topics in the tech community, and while we are likely decades away from artificial general intelligence, deep learning technologies are charting new territory in medicine, education, and the ways consumers interact with brands.

Within the past quarter alone, tech giants like Alphabet, Salesforce, Nvidia, Amazon, and IBM have made major investments in A.I. technology by acquiring smaller machine learning companies, hiring dedicated in-house A.I. research teams, or partnering with startups tackling the space. Their investments signal that deep learning is no longer a fad but a technology that businesses must include in their products and services or risk being outpaced by competitors that do.

While advancements in machine learning and A.I. are exciting and promise many benefits to people and businesses, their growing role in our daily lives raises regulatory and socioeconomic concerns.

A.I. has had its mishaps

Facebook was excoriated recently after turning its Trending section over to algorithms, which proceeded to promote false and inappropriate news stories once the human curators were gone. Similarly, Microsoft’s Tay bot was criticized as racist after learning from posts on Twitter. There was even a bot “arrest”: the Random Darknet Shopper, which purchased illegal goods on the dark web.

The incidents raise ethical questions about the use of A.I. and whether tasks like news editing, shopping, social interactions, and customer service should be completely turned over to machines.

The reality is that very few chatbots today are powered by deep neural networks and state-of-the-art machine learning. In fact, most of them are still using scripted rule-based frameworks to navigate users through predefined journeys.
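A scripted, rule-based framework of the kind described above can be sketched in a few lines. This is a minimal illustration with invented keywords and canned answers, not any particular vendor's product:

```python
# Hypothetical keyword rules mapping a trigger word to a canned reply.
RULES = {
    "refund": "To request a refund, please share your order number.",
    "hours": "Our support team is available 9am-5pm, Monday to Friday.",
    "shipping": "Standard shipping takes 3-5 business days.",
}
FALLBACK = "Sorry, I didn't understand that. Let me connect you to an agent."

def reply(message: str) -> str:
    """Return the first rule whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("When do refunds arrive?"))   # hits the "refund" rule
print(reply("Can you write me a poem?"))  # falls through to a human agent
```

The fragility is easy to see: any query phrased without one of the predefined keywords drops straight to the fallback, which is exactly the limitation deep learning approaches aim to remove.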

However, as artificial intelligence advances and becomes easily accessible, chatbot makers will likely embrace it to create ever-smarter and more advanced bots. In this case, the creators become responsible for ensuring that their A.I.-powered cyber-friends can act in good faith to help customers and businesses without crossing ethical boundaries.

And the big players know this too. Google, Nvidia, Amazon, IBM, and others recently pledged collaboration on creating industry standards for artificial intelligence. While some of the tech giants have not yet agreed to participate, the general concept of this alliance is a step in the right direction — as well as a chance to self-regulate ahead of possible government regulation.

Raising ethical machines: nature and nurture

Given that the space is still in its early days, most deep learning technologies are not yet trained to take on general tasks and need more time to “grow up.” In our case, we are committed to raising “good” neural networks by deploying them in environments like customer service, where they can learn quickly and deeply in order to perform specific tasks.

These environments are all about the data. In the world of customer service, a system should not be trained manually by linguists trying to predict every potential customer query; that was the traditional natural language processing chatbot approach. More recently, the world has entered the realm of accessible deep learning applications: models that rely on math rather than keyword and phrase recognition. We believe a truly strong and resilient A.I. model for customer service needs to be trained on a vast data set of historical customer service logs.

In a tool that’s powered by A.I. and reinforced with online learning or human supervision, each conversation continually trains the neural network, enabling it to become smarter over time. The more genuine users you have on your platform, the faster your bot will learn.
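The online-learning loop described here can be illustrated with a deliberately tiny model: each time a human agent confirms the intent behind a message, the model's counts are updated, so later predictions reflect everything seen so far. The intent labels and messages are invented for the example, and a real system would update neural network weights rather than word counts:

```python
from collections import Counter, defaultdict

class OnlineIntentModel:
    """Toy intent classifier that learns incrementally, one example at a time."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # intent -> word frequencies

    def learn(self, message: str, intent: str):
        # Called after each human-confirmed conversation: update in place.
        self.word_counts[intent].update(message.lower().split())

    def predict(self, message: str) -> str:
        words = message.lower().split()
        scores = {
            intent: sum(counts[w] for w in words)
            for intent, counts in self.word_counts.items()
        }
        return max(scores, key=scores.get) if scores else "unknown"

model = OnlineIntentModel()
model.learn("where is my package", "shipping")
model.learn("i want my money back", "refund")
print(model.predict("my package is late"))  # predicts "shipping"
```

The point of the sketch is the update pattern, not the model: every confirmed conversation feeds straight back into the classifier, so its quality tracks the volume of real user traffic.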

Companies building A.I.-enabled tools need to address issues like privacy, political biases, and consumer sensitivity while making machine learning technologies serve a purpose and placing relevant safeguards through rich training data sets and initial human oversight.

Human + A.I., not humans vs. A.I.

For companies seeking returns on their A.I. investments, it’s easy to hope that automating business processes can take over the jobs of the team already in place, but no company should completely entrust the process to artificial intelligence. Machine learning is first and foremost a way to empower, not replace, the humans doing intelligent work.

At this stage, A.I. works best alongside people to handle repetitive tasks. In customer service, for example, repetitive queries are best handled by a machine. This in turn unlocks valuable time for human agents to drive meaningful experiences for their customers and even create additional value where there was once a customer complaint.

By training the model on millions of historical data points, such as real customer service logs, the neural network becomes proficient at recognizing questions regardless of their phrasing and can predict the best ways to answer them. This helps agents significantly reduce handling time while increasing the quality and accuracy of their responses — and consequently raises both the Customer Satisfaction Score (CSAT) and Employee Satisfaction (ESAT) scores.
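Recognizing a question regardless of its phrasing boils down to matching it against historical questions by similarity rather than exact keywords. The sketch below uses cosine similarity over word counts as a stand-in; a production system would compare learned embeddings instead, and the log entries here are invented:

```python
import math
from collections import Counter

# Hypothetical (question, answer) pairs drawn from historical logs.
HISTORY = [
    ("how do i reset my password", "Use the 'Forgot password' link on the login page."),
    ("where is my order", "You can track your order from the Orders page."),
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_answer(question: str) -> str:
    """Suggest the answer attached to the most similar historical question."""
    q = vectorize(question)
    return max(HISTORY, key=lambda item: cosine(q, vectorize(item[0])))[1]

print(best_answer("i forgot how to reset the password"))
```

Even with different wording, the query lands on the password-reset answer, which is the behavior that lets an agent-assist tool propose responses faster than a keyword lookup could.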

In the coming phase, winning A.I. companies will be focused on building scalable products for focused use cases with communities of active users and mindful developers. Those developers and the companies they partner with have a responsibility to create solutions that balance ethical boundaries with the ambition to push the world forward through technology.

The machines will learn over time, but it’s up to us to make sure we use them in a way that complements human values.
