Give any respectable machine learning algorithm a concrete scenario to optimize and it will blow human-based heuristics out of the water. But we as humans should continue to focus on what we do best — thinking creatively, building empathy for other humans — in order to guide machines in the right directions.
A friend asks for recommendations of restaurants that are good for a romantic night out. If you’re like most people, you probably jump to a few salient features — cozy atmosphere, fancy and non-messy food, and maybe bonus points for shareable desserts. Based on the perceived importance of each of these features, you then recall a few restaurants that do well in each of these areas and formulate a recommendation.
The same friend asks a black box machine learning algorithm for date night recommendations. The machine ingests as many business attributes as it has access to — distance from the user, ratings, and price, among numerous others — and trains a model with those features on all of the users who have ever searched for “date night.” It then spits out an ordered list of a couple hundred restaurants.
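To make the comparison concrete, here is a minimal sketch of what such a ranker might reduce to. Every restaurant, feature, and weight below is hypothetical; a real system would learn the weights from historical "date night" searches rather than hard-code them.

```python
# Toy sketch of a "black box" ranker. All names, features, and weights
# are made up for illustration; a real model learns the weights.

RESTAURANTS = [
    {"name": "Trattoria Luna", "rating": 4.6, "distance_km": 1.2, "price_tier": 3},
    {"name": "Burger Shack",   "rating": 4.1, "distance_km": 0.4, "price_tier": 1},
    {"name": "Le Petit Coin",  "rating": 4.8, "distance_km": 3.5, "price_tier": 4},
]

# Hypothetical learned weights: higher rating helps, distance hurts.
WEIGHTS = {"rating": 1.0, "distance_km": -0.3, "price_tier": 0.1}

def score(restaurant):
    """Linear score over whatever attributes the model has access to."""
    return sum(WEIGHTS[k] * restaurant[k] for k in WEIGHTS)

def rank(restaurants):
    """Return restaurants ordered best-first, like the model's output list."""
    return sorted(restaurants, key=score, reverse=True)

for r in rank(RESTAURANTS):
    print(r["name"], round(score(r), 2))
```

Note what is missing from the sketch: nothing in the feature set captures "crucial second date," wait times, or parking — exactly the context the human recommender carries implicitly.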
It’s clear from this simplistic comparison that human intuition and machine learning excel in different ways. Our strength lies in the fact that we as humans have spent a lot of time building up the implicit shared context around “date night.” We know our friend well, we know that this is a crucial second date, and we’ve thought through all of the details that could help make this a magical evening for her. At the end of the evening, we’ll also hear about it if our friend calls us to complain about the hour-long wait and the horrible parking situation, and the next time we’ll try to remember to factor that information into our recommendation.
There’s been talk of a resurgence of the 80:20 rule in machine learning. The notion is that machines alone can get us 80 percent of the way there, which may be “good enough” in many scenarios. But there remain a number of areas where we continue to need human involvement and judgment to cover that last 20 percent.
Understanding the problem
With the hype surrounding machine learning these days, it’s tempting to jump directly into ML-oriented solutions. There have been instances where we have stared longingly at the shiny new machine learning algorithms implemented by our neighbors, wondering what kinds of problems we could solve if we could just sprinkle in a few more exotic models. But this kind of thinking can easily lead a whole team down a rabbit hole, where you end up building an extremely robust infrastructure to solve an imaginary user problem.
Implicit in the original romantic restaurant recommendation scenario is the fact that we’ve done our user research (knowledge of our friend) and identified the exact user needs (crucial second date). Compared to machines, we humans are fantastic at all forms of generative user research — interviews, focus groups, observational studies — which all require significant empathy and often unstructured human interactions. Numerous studies (including a recent report by McKinsey) have repeatedly affirmed that humans will continue to excel over machines in areas that demonstrate these characteristics. At least in the foreseeable future, humans will still be the ones identifying major problem areas to tackle.
Another very common use of human involvement in machine learning is to leverage human intuition to identify features and label datasets. “Cozy and dim atmosphere,” for example, is a feature that humans can add to the restaurant’s dataset to make recommendations more subtle and nuanced.
At this step, generative research methods also come in handy — we ask users to rank a number of features in terms of relative importance to their use cases. Once feature areas are identified through user research, collecting or inferring the data and using them in training models becomes straightforward.
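One simple way to turn those user rankings into something a model can use is a Borda-style aggregation: each participant's ranking awards points by position, and the normalized point totals become relative feature weights. The feature names and the aggregation rule below are assumptions for illustration, not a description of any particular production pipeline.

```python
# Sketch: aggregating user feature rankings into normalized weights.
# Feature names and rankings are hypothetical illustration data.

from collections import defaultdict

# Each participant ranks features from most to least important.
user_rankings = [
    ["atmosphere", "food_quality", "price", "distance"],
    ["food_quality", "atmosphere", "distance", "price"],
    ["atmosphere", "price", "food_quality", "distance"],
]

def rank_weights(rankings):
    """Borda-style points per feature, normalized to sum to 1."""
    points = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, feature in enumerate(ranking):
            points[feature] += n - position  # top rank earns the most points
    total = sum(points.values())
    return {feature: p / total for feature, p in points.items()}

weights = rank_weights(user_rankings)
```

The resulting weights can seed a scoring model directly, or simply sanity-check weights learned from behavioral data against what users say matters to them.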
Evaluating outcomes and refining algorithms
Ultimately, whether we’ve provided a good recommendation or not depends on the actual experience, but how we evaluate the effectiveness of an algorithm is not always as straightforward as asking our friend how the date went.
Because many of these features are interactive by nature, it’s not always easy to separate out the influence that the product has on the user and vice versa. For example, we’ve learned that by simply showing users visible elements that may or may not be relevant to their original intent, we can influence what users perceive to be important (e.g. showing a map to the user when they’re looking for a homebound service), leading to an unhelpful feedback loop where misleading data gets fed back into the training models.
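That feedback loop can be illustrated with a deliberately tiny, deterministic simulation. Everything here (the click model, the update rule, the numbers) is made up; the point is only the mechanism: when UI-induced clicks are fed back as relevance labels, a naively retrained weight drifts away from users' true preference.

```python
# Toy illustration of a UI-driven feedback loop. All values are
# hypothetical; this is not a model of any real system.

true_preference = 0.2   # how much users actually care about distance
ui_bias = 0.5           # extra clicks on nearby places caused by showing a map

weight = true_preference  # the model starts out calibrated
for _ in range(5):
    # Observed click signal mixes real preference with UI-induced bias.
    observed = true_preference + ui_bias
    # Naive retraining nudges the weight toward the observed signal.
    weight += 0.5 * (observed - weight)

# After a few retraining rounds, the learned weight sits well above
# the true preference the users started with.
print(round(weight, 3))  # → 0.684
```

Breaking the loop requires knowing which part of the signal came from the interface rather than the user — which is where the qualitative findings described next come in.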
Fortunately, in this case, generative user research helped us to understand the relative weights users place on various features when they’re making decisions. These types of qualitative findings put a “why” to the “how” and “how much” and enable us to more reasonably interpret the data and refine the algorithm.
With all the doom and gloom surrounding the idea of humanity losing ground to the machines, we humans should continue to focus on what we do best — thinking creatively, building empathy for other humans, and so on — and the use cases are surprisingly broad. Just earlier this year, researchers at MIT found that even for an objective optimization use case, algorithms can still benefit from the addition of human intuition. For areas that are far more subjective, such as choosing a restaurant based on a user’s current needs, moods, and company, human intuition continues to play an important role in shaping and guiding the process.