
In 1997, IBM’s Deep Blue beat world chess champion Garry Kasparov, the first time an AI technology was able to outperform a world expert in a highly complicated endeavor. It is even more impressive when you consider it was achieved with 1997 computational power. That year, my computer could barely connect to the internet; long waits of agonizing beeps and buzzes made it clear the machine was struggling under the weight of the task.

Even in the wake of Deep Blue’s literally game-changing victory, most experts remained unconvinced. Piet Hut, an astrophysicist at the Institute for Advanced Study in New Jersey, told the New York Times in 1997 that it would still be another hundred years before a computer beat a human at Go.

Admittedly, the ancient game of Go is infinitely more complicated than chess. Even in 2014, the common consensus was that an AI victory in Go was still decades away. The reigning world champion, Lee Sedol, gloated in an article for Wired, “There is chess in the Western world, but Go is incomparably more subtle and intellectual.”

Then AlphaGo, Google’s AI platform, defeated him a mere two years later. How’s that for subtlety?

In recent years, it has become increasingly clear that AI can outperform humans at much more than board games. This has fed a growing anxiety among the working public that their very livelihoods may soon be automated.

Countless publications have been quick to seize on this fear to drive pageviews. It seems like every day there is a new article claiming to know definitively which jobs will survive the AI revolution and which will not. Some even go so far as to express their percentage predictions down to the decimal point — giving the whole activity a sense of gravitas. However, if you compare their conclusions, the most striking aspect is how wildly inconsistent the results are.

One of the latest entries is a Facebook quiz aptly named “Will Robots Take My Job?” Naturally, I looked up “writers,” and it returned a 3.8 percent chance, which seemed comforting. After all, if a doctor told me I had a 3.8 percent chance of succumbing to a disease, I would hardly be in a hurry to get my affairs in order.

There is just one thing keeping me from patting myself on the back: AI writers already exist and are being widely used by major publications. In a way, the quiz’s prediction is like a doctor declaring there is only a 3.8 percent chance of my disease getting worse…at my funeral.

All this raises the question: Why are these predictions about AI so bad?

Digging into the sources behind “Will Robots Take My Job?” gives us our first clue: the predictions are based on an academic research paper. This is at the root of most bad AI predictions. Academics tend to view the world very differently from Silicon Valley entrepreneurs. In academia, just getting a project approved may take years; tech entrepreneurs operate on the question of what can be built and shipped by Friday. Asking academics for predictions about the proliferation of AI is therefore like asking your local DMV how quickly Uber might gain market share in China. They may be experts in the vertical, but they are still worlds away from the “move fast and break things” mentality that pervades the tech community.

As a result, their predictions are as good as random guesses, colored by their understanding of a world that moves at a glacial pace.

Another contributing factor to bad AI predictions is human bias. When the question is who will win, man or machine, we can’t help but root for the home team. As Upton Sinclair observed, it is difficult to get a man to understand something when his salary depends on his not understanding it; that is why the banter around the water cooler at oil companies rarely turns to climate change. AI poses a threat to the very notion of human-based jobs, so the stakes are even higher. When you ask people who work for a university how likely AI is to automate all jobs, it is all but impossible for them to be objective.

Hence the conservative estimations — admitting that any job that can be taught to a person can obviously also be taught to an AI would fill the researcher with existential dread. Better to sidestep the whole issue and say that it won’t happen for another 50 years, hoping they’ll be dead by then and it will be the next guy’s problem.

This brings us to our final contributing factor: humans are really bad at understanding exponential growth. The research paper behind “Will Robots Take My Job?” dates from 2013, and the last four years in AI might as well have been 40, given how much has changed. In fact, the researchers’ bad predictions make more sense through this lens. There is an obvious bias toward assuming jobs that require decision-making are “safer” than those that involve routine. However, the proliferation of neural net resources is showing that AI is actually very good at decision-making, provided the task is well-defined.

The problem is that our somewhat primitive reasoning tends to view the world in a linear fashion. Take this example often used on logic tests: If the number of lily pads on a lake doubles every day, and the lake will be full at 30 days, how many days will it take for the lake to be half full? A depressingly high number of people’s knee-jerk response is 15. The real answer is 29. In fact, if you were watching the lake, the lily pads wouldn’t appear to be growing at all until about the 26th day. If you asked the average person on day 25 how many days remained until the lake was full, they might reasonably conclude it would take decades.
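The arithmetic behind the puzzle is easy to check in a few lines. This is a minimal sketch; the `coverage` function and its form are my own illustration, not something from the quiz or the research paper:

```python
def coverage(day: int) -> float:
    """Fraction of the lake covered on a given day,
    assuming coverage doubles daily and the lake is full on day 30."""
    return 2 ** (day - 30)

# Half full only one day before the end:
print(coverage(29))  # 0.5

# Four days out, barely 6 percent of the lake is covered,
# which is why the growth looks deceptively flat:
print(coverage(26))  # 0.0625
```

Working backwards from the full lake makes the answer obvious: if coverage doubles every day, the day before full must be exactly half full, so the answer is 29, not 15.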

The reality is that AI tools are growing exponentially. Even in their current iteration, they have the power to automate at least part of every human job. The uncomfortable truth that all these AI predictions seek to distract us from is that no job is “safe” from automation. Collectively, we are like Lee Sedol in 2014, smug in our sense of superiority. The coming proliferation of AI is perhaps best summed up in a sentiment often attributed to Nelson Mandela: “It always seems impossible until it’s done.”

Aiden Livingston is the founder of Casting.AI, the first chatbot talent agent.
