There is a lot of smoke and mirrors when it comes to artificial intelligence. Don’t get me wrong: AI and machine learning are real, and we use them to solve mission-critical business problems at a scale unlike anything we’ve seen before. For instance, companies apply the technology to important matters of data ownership, regulatory compliance, and intellectual property. But too many people and companies these days peddle hype around what they describe as applications of ML or AI that are, in fact, not AI-driven at all.
One example of this “faux AI” I spotted recently was a company that described a mechanical-turk operation as AI, claiming its system auto-generated information through AI processes. Looking a little deeper, I realized the company did an initial pass with some extraction methods, but humans did the heavy lifting. While there is some truth to the company’s claims, the offering is more smoke than real product.
This kind of hype damages the industry, and when customers see past the illusion they are inevitably, and rightly, disappointed. It’s not unlike the moment someone learns how a magician performs a trick for the first time. Smoke and mirrors may be harmless as entertainment, but in business they have real consequences.
A recent scandal surrounding a business management company raised an important point when the industry found out the company had merely masqueraded its solution as exclusively AI-driven. In reality, it regularly used a crowdsourcing platform to hire people to review and transcribe documents. This leads naturally to the question: do we actually know who handles, sees, and uses our personal data?
A key consideration we overlook far too often is that most terms and conditions contractually allow vendors to use your data freely in any way they choose, especially when the data stays within the vendor’s organization. In this digital age, companies hold information about people’s lives as a set of linked elements within applications, from private applications, such as medical records, to public ones, such as Facebook.
We share information with these applications, in most cases without a thought for how companies will use, keep, or handle that data. You give away hours of work without knowing it. Just think for a moment about your own Facebook account. It automatically presents you with stories that are relevant to you and filters away items it “thinks” are less interesting. Over time, the platform presents users with a view of the world that conforms to their own thoughts and opinions.
It is not just social applications that amass information. Businesses and law firms also freely give away the intellectual property they develop, often without knowing it. The value of intellectual property is hard to quantify when you create it, but once you lose control of it, it becomes priceless, because it is nearly impossible to regain.
Many cloud vendors write terms and conditions that hide or obfuscate complex, interlinking clauses about who owns or handles the IP created on their platforms. Very often, the owner is not the customer or end user who created it, but the cloud and application providers themselves.
But the truth is, manual interaction with the data remains an important part of the AI equation. A great example of AI put to general use is Adobe’s assistant that helps improve pictures, based on a vast number of, yes, manually reviewed and corrected images.
Customers must look past the magic and see what a system really does. Companies must also understand that AI requires varying degrees of manual assistance to learn. To gain any real value from AI, companies must accept that human involvement is necessary, at least for now.
Kevin Gidney is cofounder of Seal Software, a company that provides contract discovery and analytics.