AI is constantly touted as the next big thing. But how many of us are aware that AI is already here? While the jury is still out on whether it will be a benevolent force or an existential threat to humankind, AI is active in many facets of our daily lives and will play a larger role in the years to come.
For example, the NSA runs what is essentially a real-life "Skynet" from the Terminator series to track suspected terrorists and predict terrorist attacks (Minority Report, anyone?). And the AI "hivemind" UNU correctly predicted the exact final score of this year's Super Bowl.
AI is here whether we like it or not. Should you care? Let’s take a look at how companies are using AI today, and I’ll let you decide.
Why do we need AI?
One of the biggest technology frontiers of the 21st century is the Internet of Things (IoT), which I’ve written about before. The IoT has all sorts of exciting implications, from automatically making coffee in the kitchen when your alarm goes off in your bedroom to being the foundation for connected smart cities. One thing is for certain: IoT will require a massive AI network.
AI is also extremely useful as a way to better analyze and parse big data. While data scientists can pool sources and decide what they want to look for, the actual grunt work is often done by a proprietary AI that sorts through exabytes of data. Credit card fraud detection, email spam filters, and mobile check deposits: All of these rely to some degree on artificial intelligence.
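To make the spam-filter example concrete, here is a minimal sketch of the classic naive Bayes approach that many early spam filters were built on. The training messages and word counts below are invented purely for illustration; real filters train on millions of labeled emails and use far richer features.

```python
import math
from collections import Counter

# Tiny invented training corpus, for illustration only.
spam = ["win cash now", "free prize claim now"]
ham = ["meeting moved to noon", "see you at lunch"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(message, counts, total):
    # Sum log-probabilities per word; Laplace (+1) smoothing
    # keeps unseen words from zeroing out the score.
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in message.split()
    )

def is_spam(message):
    s = log_prob(message, spam_counts, sum(spam_counts.values()))
    h = log_prob(message, ham_counts, sum(ham_counts.values()))
    return s > h

print(is_spam("claim your free cash"))      # → True
print(is_spam("see you at the meeting"))    # → False
```

The same idea, scaled up and combined with behavioral signals, underlies fraud detection: score each event against "normal" and "suspicious" models and flag the ones where suspicion wins.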
If the recent year-over-year growth of the still-small AI market is any indication, we’re approaching a boom. Industry experts predict that the demand for AI will increase substantially, from under $1 billion worldwide in 2017 to over $2 billion in North America alone by 2022.
How companies are using AI for good
The good news is that, right now, most companies with the capital, bandwidth, and know-how to create their own AI are using it for potential good. These moonshot passion projects, mostly experimental in nature, have had surprisingly promising results.
In 2016, British police created an AI monitoring system so comprehensive and accurate that it maps more than 1 million distinguishable features from a mugshot and can identify a criminal just from a brief glimpse of their ear.
In the highly sensitive field of mental health, the first AI psychologist, Ellie, is already making big strides. Developed by DARPA and USC, Ellie can read 60 nonverbal cues per second to better empathize with patients. Ellie recognizes the early warning signs of depression, reportedly better than a human psychologist can, helping to prevent soldier suicides. Perhaps most importantly, soldiers like talking to Ellie because AIs don't judge.
Even in the much-maligned world of finance, AI has been a force for positive change. Wall Street startup Digital Reasoning works with Goldman Sachs and billionaire investor Steve Cohen to recognize and track behavioral patterns and catch dirty traders red-handed.
Simply put, James Cameron’s Terminator situation hasn’t come to pass — yet. AI has generally been a boon to society. But some might argue that’s only because the AI we have today isn’t mature enough — that it isn’t yet strong AI.
When AI looks more like Skynet
Speaking of finance, in Matthew Miller’s recent technothriller Darknet, an AI trading algorithm goes rogue. Without spoiling the story, Miller paints a fairly realistic (and terrifying) picture of what could happen if an extensive, strong AI pulls capitalistic levers as efficiently as possible without regard for human life.
This is dystopian sci-fi, not something that’s going to happen anytime soon. But that raises the question: How can AI be bad for us, as a society, right now?
The first thing to keep in mind is that creating, maintaining, and “teaching” an AI isn’t cheap. Your average small business has a better chance of landing a billion-dollar client out of the blue than building a competent AI on its own. The companies that can afford AI dev teams are giant corporations with lots of cash. And they’re already hoarding all the AI talent.
Secondly, people in positions of power in the tech world are frequently afraid of AI. Do they know something we don’t? Elon Musk recently said that an unnamed corporation’s AI development really worries him. He even set up the OpenAI nonprofit to avoid a Skynet future. Even if you have the financial and human resources, can AI ultimately be controlled, or will it get beyond us?
Are his fears grounded in reality? Possibly. Microsoft, which has been called out in the past for crossing the line with respect to user data and privacy, recently patented a Big Brother-like AI that would essentially let it monitor everything you do in Windows to improve Bing’s search results. Some people might be totally fine with this and even find it convenient. Others may see it as an unethical way to compete with Google.
For now, AI wants to help
As AI continues to improve and more companies continue to invest in it, the way forward is clear.
Position AI as a force for good to connect people and devices and make sense of all the data that flows between them. Figure out how it can improve lives and meet needs faster and more efficiently than ever before. And use it to predict and prevent risk and criminal activity wherever possible. These are pretty clear mandates, and they’re ones that most companies seem to be following.
The path and implications of strong AI, or what we can think of as true AI, are less clear. We need to be vigilant and push academics, corporations, and governments to see AI as a way to improve lives and society rather than as a vehicle for surveillance and control.
We don’t need to be scared, but we do need to be informed and engaged as AI takes on an increasingly important role in society.
Ed Sappin is the CEO of Sappin Global Strategies (SGS), a strategy and investment firm dedicated to the innovation economy.