Even as the masses begin to settle into the idea that a great many jobs can be automated by AI, they still cling to the notion that jobs involving “judgment” are safe.

One can appreciate why people keep human judgment on such an exalted pedestal; after all, every great advancement in science or technology was a direct result of human judgment. Conversely, every major setback and man-made catastrophe was also the result of human judgment. In fact, the bad calls vastly outweigh the good. There is no finer testament to human judgment than the fact that our opinion of it is sustained by a logical fallacy: confirmation bias.

The real question isn’t whether AI can replicate human judgment; it’s why we would want it to.

More often than not, human judgment is little more than qualified guessing, and often wildly inconsistent guessing at that. Our decisions are governed by dozens of extraneous factors. For example, parole boards tend to grant more paroles directly after lunch than directly before lunch. This is not because they order their day from worst to best prisoners, but solely because they are in a better mood after having recently eaten — meaning if you ever find yourself in the unfortunate position of coming before a parole board at 11 a.m., bring cookies.

We would like to think that we make decisions by accurately consuming all the pertinent information and coming to a rational conclusion. In reality, the number of data points needed to make an informed decision is simply beyond the capacity of the squishy organic calculator that rests behind your eyeballs. A sample large enough to support a statistically sound conclusion can run from the thousands into the hundreds of thousands; yet we routinely make decisions from a dozen or so hastily collected data points.
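A quick back-of-the-envelope calculation makes the gap concrete. The sketch below (my own illustration, not from the article) computes the 95% margin of error for an estimated proportion at different sample sizes, using the standard normal approximation:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (12, 100, 1_000, 100_000):
    print(f"n={n:>7}: ±{margin_of_error(n):.1%}")
```

With only a dozen observations, the estimate is uncertain by roughly ±28 percentage points; it takes thousands of samples to push that below a few points.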

Worse yet, we are naturally inclined to look for mental shortcuts, regardless of their accuracy. These are generally more akin to soft bigotry than rational consideration. A telling scene in the movie Moneyball captures the ridiculous tropes scouts used to evaluate talent:

“He has an ugly girlfriend, ugly girlfriend means no confidence.”

The movie is a good roadmap for how AI will affect the future of decision-making: when human judgment is benchmarked against data-driven decisions, the data-driven approach wins every time.

In fact, in a follow-up to Moneyball, author Michael Lewis explored the new world of data-driven sports drafts and found that even with all the available data, human judgment would still find a way to snatch defeat from the jaws of victory. Lewis discussed how so many teams overlooked Jeremy Lin: all of Lin’s stats indicated he would be an excellent player, yet every team passed on him in the draft. One general manager admitted that despite seeing the data saying Lin would be a star, he simply had trouble picturing an Asian player as that athletic, and so passed on the player who would go on to ignite one of the most electrifying runs the league has seen.

This is one of AI’s great advantages over human judgment: our rich tapestry of self-delusion, faulty memory, and bias means that even with all the data on hand, we are still inclined to make bad decisions for personal reasons. We are too quick to mistrust data and too quick to trust our “gut.” It is fitting that we credit our “gut” with these judgment calls, because physiologically our guts contain no decision-making ability and are quite literally just full of crap.

The problem of bad human judgment is regrettably cyclical: we use the same fuzzy logic to gauge other people’s judgment. We tend to focus too much on people’s greatest hits when appraising their expertise and not enough on the more telling statistic: their average.

As another case in point, millions of people entrust their life savings to hedge funds that make expert investment decisions for them, and pay huge fees for this expertise. There is just one problem: hedge funds almost never beat the average growth of the stock market. In fact, fewer than half of fund managers invest their personal money in their own fund, demonstrating that they do sometimes make good decisions.
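The fee drag alone is easy to underestimate. A minimal sketch (hypothetical numbers, not figures from the article) of how a flat 2% annual fee compounds against an index fund earning the same gross return:

```python
def grow(principal, annual_return, annual_fee=0.0, years=30):
    """Compound a principal at a net annual rate of (return - fee)."""
    value = principal
    for _ in range(years):
        value *= 1 + annual_return - annual_fee
    return value

# Hypothetical: $100k over 30 years at 7% gross annual growth.
index = grow(100_000, 0.07)                   # no fee
fund = grow(100_000, 0.07, annual_fee=0.02)   # 2% management fee
print(f"index: ${index:,.0f}, fund: ${fund:,.0f}")
```

Even before any underperformance, the fee alone consumes well over a third of the final balance.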

This, though, is the point. To design truly revolutionary AI systems, we need to end our unjustified adulation of human judgment and see it for what it really is: deeply flawed at best. The question should not be how we can possibly replicate such a perfect process but how we can avoid baking its implicit shortcomings into its replacement.

If we are to move toward better decisions, we need to identify the factors that really determine our desired outcomes and refine our methods for measuring the relevant data points. Because if we continue to treat human judgment as the gold standard, we are only going to automate bad decision-making into the future.

Aiden Livingston is the founder of Casting.AI, the first chatbot talent agent.