The gold rush for artificial intelligence (AI) is officially in full swing. Big players like Google and Facebook, and small teams alike, are in an all-out sprint toward the goal of creating the next generation of AI assistants that will fundamentally change how we live and work.
I am in awe at the pace of progress, because every week it feels like a new barrier is breached, a tool grows more robust, or a new startup is launched with the ability to transform an industry.
However, the most surprising observation continues to be people’s underestimation of AI — specifically, how the general population seems so unable, or unwilling, to imagine that a machine could ever match a human’s ability in any job, particularly their own.
What is so striking about this conclusion is that it reveals a belief system that posits that people are actually quite good at their jobs, despite abundant day-to-day empirical evidence to the contrary. Honestly, who among us goes through life forever impressed by how efficient and effective others are at their job?
Recently, at a pharmacy in Bogotá, I attempted to order some loratadine for my allergies. I even doubled down, explaining that it was for allergies, partly to clarify but also because “allergies” is nearly the same word in Spanish, with only a softened “g,” and I live for cognates in other languages. The pharmacist then produced the pills, pesos were exchanged, and I was on my way.
Only later that evening, when I went to take a pill, did I realize the pharmacist had given me a statin instead, a medicine used to treat heart disease. After a lengthy debate with my girlfriend over whether this had been a life-threatening error, it was settled that, if nothing else, it was at least inconvenient, necessitating a trip back to the pharmacy to exchange it.
It is worth noting that as a young, active, and healthy man, I would fall short of fitting any stereotypical daily statin user type.
How then, with all of these clues, was such an error made? Perhaps the pharmacist didn’t get enough sleep the night before, maybe she hates her boss, but most likely it was just a simple mistake — human error. This is not to single out this poor pharmacist as somehow being terrible. Far from being unique in making a potentially life-threatening error in her job, I would argue that most humans routinely make these kinds of mistakes. I am just happy none of my jobs has ever required me to distribute medication, or I would surely have ended up like the pharmacist from It’s a Wonderful Life who didn’t have George Bailey there to save him from his own incompetence.
So why is it that, despite each of us having a wealth of experience with people being unapologetically bad at their jobs, we still feel that humans have set the bar so high that the same machines — the ones that can tell you the name of that obscure actor in that even more obscure film with 100 percent accuracy in 0.01 seconds — would somehow buckle under the challenge of distributing allergy pills? Perhaps the idea of artificially intelligent entities invokes our “us versus them” insecurities, where we feel compelled to ignore our common sense and rally behind the home team.
However, mounting evidence points to artificially intelligent assistants replacing a great many professions, from law to delivery systems. I recently read an article about how IBM’s new legal research platform, known as ROSS, was outperforming new lawyers in general research accuracy. But the writer stopped short of declaring an all-out AI victory, saying that one could envision a future where we only have lawyers who are specialists.
This, of course, ignores a basic rule of how AI performs best: AI systems actually work much better when they are more specialized. It is normally on general tasks that AI falls down. Put simply, AI can instantly identify all the troublesome gene sequences in a beagle’s genome to determine the likelihood of certain diseases, but it has struggled to identify a beagle in a picture.
What the writer had inadvertently identified wasn’t the limitations of AI, but rather the limitations of our imagination to grasp the sweeping ramifications of AI. It is not that AI couldn’t perform equally well on specialized legal issues; rather, it is that we are unable to admit that AI might surpass us in the most esteemed of human professions.
Ultimately, our belief systems are simply out of sync with the data. Self-driving cars, which have performed flawlessly during their millions of hours logged on real streets, still have a driver at the wheel as Uber launches its first fleet. Statistically speaking, passengers are in much more danger with a human operator involved in any capacity. After all, human errors literally kill over a million people a year, whereas the statistical likelihood of dying from a self-driving car is like falling off a building and being struck by lightning on the way down. Nonetheless, an operator sits in the driver’s seat of each of Uber’s self-driving cars, hovering their hands over the wheel.
This is ultimately a metaphor for the whole problem that is restricting our progress: We would rather let millions die in traditional ways than risk even one person dying in a way that is new and scary. So I propose that if you find yourself uncomfortable with the idea of any job soon being automated by AI, you should step back, try to remember an experience with a human underperforming in the same capacity, and ask yourself: “Could a robot really do much worse?”