The biggest opportunities in machine learning (ML) today lie not in cracking the next big nut on the path to artificial general intelligence (AGI), but in opening up existing machine learning techniques to more businesses and making them more usable. The tech giants already know this and are investing in democratizing AI to make tools and services more widely available, but the user experience (UX) of machine learning is still overlooked. Companies can make massive improvements to machine learning-based applications even without access to the same levels of data or talent as the biggest players — compensating for a lack of data by building a great UI (more on this later). When we focus on AI as a tool and recognize how crucial usability is to widespread adoption, we can see that there are opportunities to enhance existing AI in ways that have nothing to do with progress toward human-level machine intelligence or artificial general intelligence.

AGI makes headlines, AI-as-a-tool makes money

While flashy projects like DeepMind and Google Brain are more likely to make headlines than Google’s more mundane implementations of AI, such as search, the latter is a vastly more profitable business. According to a recent MarketWatch article, Google has “made a massive multibillion-dollar bet on AI and machine learning,” a bet I believe is nicely hedged on the question of whether there’ll be another “AI winter,” a period of reduced interest in AI.

Gary Marcus of NYU recently wrote a critique of deep learning that has been covered not only in tech publications such as Wired and MIT Technology Review but also in the mainstream media. In the critique, Marcus warns of the dangers of overhyping AI. In February, the Financial Times published an opinion piece titled “Why we are in danger of overestimating AI” that points to examples of serious problems with current AI systems, such as how easily they can be fooled and their lack of common sense knowledge.

The hype around AI is about artificial general intelligence, not AI-as-a-tool. If the former should experience a lull, that’ll probably be a good thing. It won’t affect the numerous uses that we already make of machine learning techniques — search, translation, content recommendation, object classification, etc. — and we could add value by making these available to businesses that can’t employ the armies of PhDs companies like Google or Amazon have at their disposal. To that end, both of those companies now offer various platforms and services — even machine learning models-as-a-service (already trained with masses of data) — to companies that don’t have the expertise to build these themselves.

This isn’t about advancing toward human-level intelligence, it’s about making the existing tech more widely accessible. Microsoft and IBM are also investing heavily in this so-called democratization of AI. But in addition to making the existing tech available to more people, there are all kinds of ways in which we can make that tech more useful.

Uncertainty is a UX problem

A fundamental facet of machine learning, which involves learning from a set of “training data” in order to make predictions on new data, is that its predictions are uncertain. They are simply probabilities derived mathematically from the data the system has been fed.

The uncertainty inherent in the predictions of machine learning systems is not going to go away, so we must deal with it, at least in cases where the action we take based on a prediction is more serious than targeting web content or advertising. In some cases, actions informed by machine learning predictions can have very serious consequences. The challenge is to make that uncertainty more palatable to the user. To a certain extent, we can treat the explainability problem as a usability issue: after all, a prediction that comes with an explanation is easier to trust and act on than one without.
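One concrete way an interface can handle this is to show the model’s probability alongside its top answer, and to present alternatives rather than a single confident-sounding verdict when that probability is low. The sketch below, in plain Python, uses a hypothetical three-class document classifier; the labels, raw scores, and the 0.8 threshold are all illustrative assumptions, not part of any particular product:

```python
import math

def softmax(scores):
    """Convert a model's raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def present_prediction(scores, labels, threshold=0.8):
    """Return a user-facing message that surfaces, rather than hides, uncertainty."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] >= threshold:
        return f"Predicted: {labels[best]} ({probs[best]:.0%} confident)"
    # Below the threshold, show the top alternatives instead of a single answer.
    ranked = sorted(zip(labels, probs), key=lambda lp: -lp[1])
    options = ", ".join(f"{lbl} ({p:.0%})" for lbl, p in ranked[:2])
    return f"Not sure -- could be: {options}"

print(present_prediction([4.0, 0.5, 0.1], ["invoice", "receipt", "memo"]))
# Predicted: invoice (95% confident)
print(present_prediction([1.2, 1.0, 0.4], ["invoice", "receipt", "memo"]))
# Not sure -- could be: invoice (44%), receipt (36%)
```

The design choice here is the point: the same probabilistic output can be rendered as false certainty or as an honest, usable signal, and the difference is pure UX.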

Once we recognize something as a user experience problem, we can usually bring the standard tools and processes (usability research, etc.) to bear on finding a solution.

How a user interface can increase machine learning accuracy

I claimed earlier that you can make up for a lack of training data by building a great user interface. This is about something called “human-in-the-loop” (HitL) machine learning, which simply means any machine learning system that involves humans in the training process. Companies such as Figure Eight and Mighty AI are leading the charge on the crowdsourced approach to this problem. Mighty AI has an app that lets anybody with a smartphone earn a few cents by labeling lamp posts, pedestrians, parked cars, etc. in images that the company will later use to train autonomous vehicle systems.

But HitL is about more than just using crowdsourcing to label entire training sets of data. We can make creative use of techniques like few-shot learning, where a system learns to classify new examples from just a handful of labeled ones, and transfer learning, where we apply learning from one task to another, to solve problems where no labeled training data is available. Transfer learning is about learning rich representations of data. It often goes hand in hand with few-shot learning because it’s the very richness of the representations that makes it possible to learn from just a few examples. Now bring in a human to do those “few shots” and it becomes possible to go from having no labeled data at all to having a powerful classifier. There really is a lot to explore here, including how best to present the human with the most effective examples to label, and honing the user experience to get the most out of the human in the loop is critical to the accuracy of the classifier.
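As a rough illustration of this pattern (not any specific company’s system), a few-shot classifier can be sketched as a nearest-centroid classifier over embeddings: each class is represented by the average of the embeddings of the few examples a human has labeled. The `embed` function below is a toy bag-of-words stand-in for a real pretrained model, which is where transfer learning would come in; all names and example texts are hypothetical:

```python
from collections import Counter, defaultdict

def embed(text):
    """Stand-in for a pretrained embedding model (the transfer-learning step).
    A real system would use dense representations learned on a large corpus;
    a bag-of-words count vector keeps this sketch self-contained."""
    return Counter(text.lower().split())

def dot(a, b):
    """Similarity between two sparse word-count vectors."""
    return sum(count * b.get(word, 0) for word, count in a.items())

class FewShotClassifier:
    """Nearest-centroid classifier: each class is the mean of a few embeddings."""

    def __init__(self):
        self.examples = defaultdict(list)

    def label(self, text, label):
        # The human in the loop supplies these "few shots".
        self.examples[label].append(embed(text))

    def predict(self, text):
        query = embed(text)

        def centroid(vectors):
            total = Counter()
            for v in vectors:
                total.update(v)
            return {word: count / len(vectors) for word, count in total.items()}

        return max(self.examples,
                   key=lambda lbl: dot(query, centroid(self.examples[lbl])))

clf = FewShotClassifier()
clf.label("refund my order please", "support")
clf.label("my package never arrived", "support")
clf.label("quarterly revenue is up", "finance")
clf.label("budget forecast for next quarter", "finance")
print(clf.predict("where is my order"))  # support
```

Swapping the toy `embed` for a genuinely pretrained representation is what would make a handful of human-labeled examples go a long way; the UX question is how to choose and present the examples that are most worth the human’s few labels.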

Conversational AI: At the end of the day it’s just software

Perhaps where user experience is most critical is in the design of conversational interfaces. Here we’re not talking about humans in the loop, but rather humans as end users of an application. Conversational AI is an area where the distinction between artificial general intelligence and AI-as-tool is crucial. The original goal of voice-based/conversational AI may have been to produce something that could hold an intelligent, open-ended conversation. But that turned out to be really hard to do, as it became clear that real conversation is impossible without experience of the world or common sense knowledge. The OpenAI group is working on one particular approach to this problem that involves making machines use language to accomplish goals in their environment, but this is very much in its infancy. Another approach, the Cyc project, which aims to imbue machines with common sense knowledge by storing facts and inference rules in a massive database, was started back in the 1980s and decades later has not come to fruition.

“We are not building these systems in order to pass the Turing test,” Bill Mark of SRI International, the research company behind Apple’s Siri, said in a recent interview with Byron Reese. He was pointing to the need to be pragmatic when designing voice-based systems, and to acknowledge the lack of understanding in order to work around these limitations and design something that is actually useful. This is the new goal for voice-based AI systems, and it demands skills beyond those of an AI researcher. It requires software engineers and user experience designers working in collaboration with natural language processing experts to create something not so much intelligent as useful.

Katherine Bailey is a principal data scientist at Acquia where she leads a team of data scientists and engineers working on building machine learning-based applications.