Starting in the 1980s, technology companies like Apple, Microsoft, and many others presented computer users with the graphical user interface as a means to make technology more user-friendly.
The average consumer wasn’t going to learn binary code to use a computer, so the great minds at these leading technology companies slapped a screen on technology and offered an interface that provided icons, buttons, toolbars, and other graphical elements so that the computer could be easily consumed by a mass market.
Today it’s hard to imagine a technological device without a screen and a graphical presentation. Until now, that is.
Early in 2016, we saw the introduction of the first mainstream wave of chatbots. Social media platforms like Facebook allowed developers to create a chatbot for their brand or service so that consumers could carry out some of their daily actions from within their messaging platform. This development has generated enormous excitement, because it exponentially expands the ways we can communicate with brands.
The introduction of chatbots into society has brought us to the beginning of a new era in technology: the era of the conversational interface. It’s an interface that soon won’t require a screen or a mouse to use. There will be no need to click or swipe. This interface will be completely conversational, and those conversations will be indistinguishable from the conversations that we have with our friends and family.
To fully understand the magnitude of this soon-to-be reality, we have to go back to the first days of the computer, when the desire for artificial intelligence and a conversational interface first took hold.
Artificial intelligence, by definition, is intelligence exhibited by machines: a machine acts as a rational agent when it can perceive its surroundings and make decisions based on what it perceives. Measured against human standards, a rational agent would be a computer that can realistically simulate human communication.
In the 1950s and ’60s, computer scientists Alan Turing and Joseph Weizenbaum contemplated the concept of computers communicating like humans do with experiments like the Turing Test and the invention of the first chatterbot program, Eliza.
The Turing Test was developed by Alan Turing in 1950 to test a computer’s ability to display intelligent behavior equivalent to or indistinguishable from that of a human.
The test involved three players: two humans and a computer. Player C (a human) would type questions into a terminal and receive responses from either Player A or Player B. The challenge for Player C was to correctly identify which player was human and which was a computer. The computer would try to mask itself by offering responses in jargon and vocabulary similar to the way we humans communicate. Although the game proved enticing for players, the computer would always betray itself sooner or later because of its rudimentary coding and limited inventory of human language. The test was conceived long before the era of A.I., but it planted the desire for artificial intelligence in our minds as an aspirational goal we might one day reach once our technological knowledge had progressed far enough.
Eliza, the first chatterbot ever coded, was invented in 1966 by Joseph Weizenbaum. Using only about 200 lines of code, Eliza imitated the language of a therapist. Unlike the Turing Test, there was no guessing game with Eliza: humans knew they were interacting with a computer program, and yet, through the emotional responses Eliza offered, they still grew emotionally attached to the program during trials.
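Eliza’s trick was simpler than it felt to its users: match the input against a list of patterns, reflect the user’s pronouns back at them, and drop the reflected fragment into a canned therapist-style template. The sketch below illustrates that pattern-and-reflection technique in Python; the specific rules and responses are illustrative inventions, not Weizenbaum’s original script.

```python
import re

# Pronoun reflections let the bot mirror a statement back at the user,
# e.g. "my future" becomes "your future".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# Each rule pairs a regex with a response template; {0} is filled with
# the captured, reflected fragment of the user's input.
# (Illustrative rules, not Weizenbaum's originals.)
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."  # fallback when nothing matches

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching template, or the default prompt."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT
```

A handful of rules like these is enough to sustain a short exchange, which is exactly why the illusion collapsed in longer conversations: there is no memory and no understanding, only surface pattern matching.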
The program proved wildly popular in its time, but the same pitfalls that plagued the Turing Test plagued Eliza: the program’s coding was too basic to sustain more than a short conversation. What these early inventions made clear is that humans have a desire to communicate with technology in the same manner we communicate with each other; at the time, we simply lacked the technological knowledge to make it a reality. In the past decade, however, progress in computer science and engineering has compounded. We live in an era of tech mobility and functionality that was unfathomable even in the ’90s. So it’s no surprise that finally, in 2016, we are beginning to attain what we wanted from computers all along: we are beginning to converse with them.
Smartphones were the catalyst
The smartphone was the catalyst that pushed us toward the age of artificial intelligence. When the smartphone rose to popularity in the late 2000s, web designers were faced with the obstacle of adapting their websites to fit a much smaller screen. This proved a difficult task, and responsive design, in which a single website maintains its functionality across desktop, tablet, and smartphone, became a huge topic in the web design world. The obstacles these smaller devices created are what led to the popularity of the mobile app.
We’ve all heard the phrase “There’s an app for that,” which became culturally ubiquitous when developers started creating mobile apps for every possible service one might use throughout the day. They believed humans wanted an individual graphical home for everything. That assumption ultimately proved incorrect: users don’t actually like apps. They’d rather converse. A study from Comscore revealed that 78 percent of smartphone users use just three apps or fewer, and messaging apps are by far the most popular.
This discovery shouldn’t have been much of a surprise, as spoken language is our oldest and favorite interface. The graphical interface has its place. But as web designers have continued to struggle with fitting their graphical layouts onto smaller screens, spending enormous amounts of time and money to constantly revamp the overall user experience, we have begun to ask: Is the graphical user interface actually inefficient? Is all that time and money spent revamping and perfecting it proof that it’s actually crappy? Could we find a better interface?
Where the conversation is heading
While A.I. chatbot technology is still very much in its infancy, this breakthrough lets us speculate about how close we are to an era when we won’t just be conversing with brands, but with technology in general. An era when a screen on a device will be considered antiquated, and we won’t have to struggle with UX design. Companies like Amazon and Google are already exploring this with the Amazon Echo and Google Home, screenless devices that connect to Wi-Fi and carry out tasks by voice.
Thanks to the IoT (Internet of Things), the extension of internet connections beyond our phones and computers to devices such as cars, TVs, stereos, and even washing machines, connected devices are steadily entering our lives. Very soon we’ll buy a new TV that has the Google assistant built in. Since the hub will be connected to all your personal platforms, including your calendar, email, PayPal, Netflix, and so on, you will be able to set up your television just by saying, “Hey, Google Assistant, set up my new television with all my favorite content.” These screenless hubs will even make human-like suggestions such as “Hey, based on what you’ve been watching on Netflix, this new show seems like something you might like. Do you want me to play it for you?”
The era of a better interface is almost here
As you can see, the advent of these natural language processing chatbots is bringing us toward a very exciting time for technology. Thanks to chatbots, we are no longer sandboxed into one graphical area at a time to carry out our daily actions. Users no longer have to exit their messaging app to open a mobile browser and type in a URL to make a dinner reservation, clicking through a dozen or so graphical areas in the process. We can now chat with friends, then chat with the restaurant’s bot in the same digital space to reserve a table, uniting an entire evening’s services into one conversation.
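To make the reservation scenario concrete, here is a toy sketch in Python of what a restaurant bot might do with an incoming chat message: pull the party size and time out of free text and confirm the booking in one conversational turn, with no forms or clicks. The `BOOKING` pattern and message formats are hypothetical illustrations, not any real platform’s API.

```python
import re
from dataclasses import dataclass

@dataclass
class Reservation:
    party_size: int
    time: str

# A single regex stands in for the natural language understanding a real
# bot would use: it captures the party size and a simple am/pm time.
# (Hypothetical example; production bots use far richer NLU.)
BOOKING = re.compile(
    r"table for (\d+) at (\d{1,2}(?::\d{2})?\s*(?:am|pm))", re.I
)

def handle_message(text: str) -> str:
    """Turn one chat message into a confirmed (toy) reservation."""
    match = BOOKING.search(text)
    if not match:
        # Fall back to a reprompt when the intent isn't recognized.
        return "Sorry, I didn't catch that. Try: 'table for 2 at 7pm'."
    booking = Reservation(int(match.group(1)), match.group(2))
    return f"Booked a table for {booking.party_size} at {booking.time}."
```

The point of the sketch is the interaction model, not the parsing: the entire exchange happens inside the messaging thread, which is what lets a dinner plan live in the same conversation as the chat with friends.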
Looking toward the future, there will be less adjusting our ways of communication to fit technology and more of technology adapting to us — losing the graphical confines and learning our preferences, our cultural norms, and our slang, becoming more useful to us than we ever thought possible.