We’ve spent so long wringing our hands and worrying about artificial and virtual intelligence that we forgot to roll out the welcome mat when they finally arrived.
Now, when major tech companies give their annual keynotes, they can’t help but pepper the narrative with phrases like “machine learning.” What does it all mean, though? Should we crank up the worry now that it looks like every tent-pole feature of self-learning software could also be a critical flaw?
The future is here — and it’s equal parts exciting and terrifying. Now that our world is populated with computer programs that can teach themselves new tricks, how will things change? What’s still worth worrying about?
Self-learning software for business and personal use
With 2018 upon us, the worlds of both business and personal software are ramping up to make the next few years something of an artificial intelligence arms race. On the consumer side of things, machine learning and AI make our lives easier in small ways. Case in point: many of us now have a smart speaker like an Amazon Echo or Google Home sitting on our countertops.
While these kinds of AI applications are helpful and entertaining, their self-learning capabilities are limited, to say the least.
In the world of business, there’s more immediate potential for self-learning software.
“We are drowning in information,” says Vita Vasylyeva of Artsyl Technologies. “The biggest bottlenecks in any business process involve the handling of documents and manual input of data from those documents. At the heart of those bottlenecks is the transformation of unstructured content into structured data.”
Nevertheless, both the business and consumer worlds have distinct needs and roles to play, and I fully expect machine learning in both realms to grow more sophisticated and capable.
Briefly, here are three very different applications for self-learning software:
1. Smartphones: Machine learning is turning smartphones into veritable supercomputers. From learning what your face looks like by poring through your photos to delivering more timely and relevant app and location suggestions, our devices are learning who we are and what we want.
More critically, machine learning is also training modern smartphones to become better at identifying and quarantining known threat vectors such as malware and viruses. It’s not all about fun and games.
2. Medicine: Diagnostic medicine is a difficult branch of science. Some types of cancer scans currently require as many as four specialists to study and come to a consensus on treatment.
With machine learning, physicians can practice this type of diagnostic medicine much faster, more accurately, and with fewer person-hours required.
3. Marketing and business management: The marketing applications of self-learning software perfectly marry the promises and the privacy worries of machine learning.
Some industry experts predict that within 10 years, even the humblest small businesses will engage in machine learning to improve their reach.
Another critical application is the promise of easier bookkeeping and organization. Newer document- and data-capture software suites take cues from the user to automatically identify and categorize types of documents and transactions, and in the process, significantly cut down on the labor and expense of staying organized and profitable.
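The learn-from-the-user loop these suites follow can be illustrated with a toy example. The class below is a hypothetical sketch, not any vendor's actual product: it records word frequencies from documents the user labels, then assigns new documents to the closest-matching category. Real data-capture software would use far richer signals (page layout, OCR confidence, extracted entities) than bare word counts.

```python
from collections import Counter, defaultdict

class DocumentCategorizer:
    """Toy categorizer that takes cues from user-labeled examples.
    Illustrative only: a production suite would use layout and entity
    features, not just word frequencies."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.doc_counts = Counter()              # category -> documents labeled

    def learn(self, text, category):
        """Record the words of a user-labeled document under its category."""
        self.word_counts[category].update(text.lower().split())
        self.doc_counts[category] += 1

    def categorize(self, text):
        """Pick the category whose learned vocabulary best matches the text."""
        words = text.lower().split()
        def score(category):
            counts = self.word_counts[category]
            return sum(counts[w] for w in words) / self.doc_counts[category]
        return max(self.doc_counts, key=score)

clf = DocumentCategorizer()
clf.learn("invoice number 1234 total due 30 days", "invoice")
clf.learn("purchase order quantity unit price ship to", "purchase_order")
print(clf.categorize("invoice total due on receipt"))  # → invoice
```

Each correction the user makes feeds back into `learn`, which is the sense in which even this toy version "teaches itself" to file documents over time.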
Naturally, this is an abridged version of the emerging opportunities machine learning represents. Nearly every industry will likely come to rely on self-learning software in the future to make modern life more efficient.
So why the controversy? Why are folks like Elon Musk and Stephen Hawking doing their best Chicken Little impressions about AI and machine learning? Whether or not you subscribe to their possible doomsday scenarios, it’s fairly clear by now that the vast opportunity self-learning software offers is counterbalanced by some legitimate concerns.
For example, a major opportunity available now is the use of smarter machines to allocate resources more efficiently. For a smaller-scale look at what this means, consider the benefits of using self-learning software to make micro-variation adjustments to the way server farms consume electricity.
The result, according to researchers, is something almost eerily alive: a kind of silicon brain switching parts of itself on and off as needed to conserve basic resources. It’s the sort of thing that could help us address global warming and the sixth mass extinction already in progress.
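The on/off principle behind that "silicon brain" can be shown with a deliberately simplified sketch. Production systems learn from thousands of sensor readings; the hypothetical function below (its name, parameters, and numbers are all illustrative assumptions) captures only the core idea of powering just enough servers for the current load, plus a safety margin.

```python
import math

def servers_needed(load, capacity_per_server=25.0, headroom=0.2):
    """Power only as many servers as the current load requires.

    `load` and `capacity_per_server` are in requests/sec; `headroom`
    is a safety margin so a traffic spike doesn't outrun capacity.
    All names and values here are illustrative assumptions.
    """
    target = load * (1 + headroom)
    return max(1, math.ceil(target / capacity_per_server))

# A hypothetical day of hourly traffic: idle servers are suspended
# overnight and woken as demand climbs, conserving electricity.
hourly_load = [10, 8, 5, 40, 90, 120, 60, 15]
plan = [servers_needed(load) for load in hourly_load]
print(plan)  # → [1, 1, 1, 2, 5, 6, 3, 1]
```

The self-learning part, in a real deployment, is that the model predicts upcoming load and tunes the headroom itself instead of relying on a fixed rule like this one.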
Removing the error-prone human element from the operation of automobiles is another huge opportunity made possible by machine learning. According to firsthand reports, the uncanniness of flying down a highway at 65 mph while an algorithm does the piloting wears off after a short while. Self-driving cars, in other words, are the future.
Alongside improved battery technology, we stand to dramatically slash or even eliminate our use of fossil fuels by making commutes more efficient and traffic jams nonexistent. Cars of the future will be able to communicate with each other and pool data on things like road construction, obstructions, weather, and emerging incidents that could affect the drive.
Every one of the features above represents some type of privacy concern. Siri, Bixby, Cortana, and Google can’t perform their magic tricks without gathering data about their users.
Every tech giant that oversees these virtually intelligent personal assistants seems to take a different tack on user privacy. Your smartphone will send various types of personal data to distant server farms for processing each time you make an inquiry. What that company does with the information from there — and who they sell it to — is the stuff of terms of service fine print.
Beyond privacy, the other very real concerns about self-learning software are all about the consequences of removing human judgment — and in some cases emotion — from critically human experiences and interactions.
Wells Fargo and other major financial institutions wish to use artificial intelligence to dispassionately come to conclusions about their customers’ creditworthiness, for example — an idea that could either reduce preexisting cultural biases or entrench them, depending on the data the models learn from.
As far as self-driving cars go, a major hurdle is making ourselves comfortable with a world where our cars can solve the grisly “trolley problem” to our satisfaction. Are we comfortable writing software for a car that instructs the vehicle to end a human life to save five others?
Humans have historically had to bear the weight of that moral calculus — or didn’t have time to perform it at all in the vital split-seconds before a car crash. For better or worse, it seems machines can now do some of our moral reasoning for us.
As you can see, determining where AI innovation will take us is a complex issue — but one that’s chock-full of potential.
The trick is getting scientists, philosophers, business leaders, citizens, and politicians on the same page.
Kayla Matthews is Senior Writer for MakeUseOf. Her work has also appeared on VICE, The Next Web, The Week, and TechnoBuffalo.