Those who work in artificial intelligence must laugh at the mistakes journalists (including yours truly) make when we write about their field. We assume the worst; we call machine-learning subroutines dangerous, even globally cataclysmic. Meanwhile, the actual programmers know how hard it is to write AI code that seems human and intelligent.
Recently, Facebook announced that it had built a chatbot system in which the bots appeared to have developed a truncated language of their own. The bots repeated words because it made more sense to them, and the company saw no value in that because humans couldn't understand the output. It was a fairly predictable programming problem: the code simply wasn't useful. For some reason, this became the most widely covered story for about a week, according to BuzzSumo, and it still ranks high on social media scorecards even today.
What’s really happening?
For starters, there is a fear of the unknown. At face value, it sounds suspicious to hear that bots did anything on their own. The truth is a little more mundane. Adaptive intelligence might seem to involve code that suddenly runs amok, but even when a bot appears to "adapt," it is still following subroutines that are highly choreographed.
Here's an example I like to use. In my house, there's a "bot" that can tell when I'm not home. It scans for motion, and when it sees none, it can turn off the thermostat. To guests who don't know the field of AI, it seems like magic. They might ask, "What happens if the bot decides to lock the doors?" Ironically, that is possible, but not by accident. The bot is just a motion sensor that can determine, after a very rigid period of time, that no one is home. It's likely a few hundred or a thousand lines of code that watch the cameras in my house, wait for inactivity, and then trigger a simple command to disable the thermostat or lock the doors. It is not rocket science.
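That "wait for inactivity, then trigger a command" logic can be sketched in a few lines. This is a minimal illustration, not any vendor's actual code: the names (`PresenceMonitor`, `set_mode`, `lock`) are hypothetical stand-ins for whatever device APIs a real product would call, but the control flow really is this simple.

```python
import time

# Rigid inactivity window before the bot decides no one is home
# (an assumed value for illustration; real products make this configurable).
AWAY_THRESHOLD_SECONDS = 30 * 60

class PresenceMonitor:
    """Hypothetical sketch of the 'away detection' bot described above."""

    def __init__(self, threshold=AWAY_THRESHOLD_SECONDS, now=time.time):
        self.threshold = threshold
        self.now = now                  # injectable clock, handy for testing
        self.last_motion = now()        # assume someone is home at startup
        self.away_actions_fired = False

    def on_motion(self):
        """Called by the motion sensor whenever movement is detected."""
        self.last_motion = self.now()
        self.away_actions_fired = False

    def tick(self, thermostat, door_lock):
        """Periodic check: after a full idle window, run the away actions once."""
        idle = self.now() - self.last_motion
        if idle >= self.threshold and not self.away_actions_fired:
            thermostat.set_mode("off")  # disable the thermostat
            door_lock.lock()            # the "lock the doors" trigger
            self.away_actions_fired = True
```

Note what `tick` can and cannot do: it only ever calls the two device objects it was handed. There is no code path by which it "decides" to reach a different camera or invent a new action, which is the whole point.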
And it can't jump the rails. If the programmer wrote the subroutine to watch for motion on the cameras the security company installed and linked to my Wi-Fi network, that same subroutine can't suddenly decide to link to the Nest camera in my office instead. It's not possible. The AI does nothing at all that it wasn't programmed to do. It can't decide to scare the cat in my living room with a loud horn. For starters, it doesn't know what a cat is. It also doesn't have a loud horn.
Most importantly, the concept of "scaring" isn't even remotely possible within its subroutines, or anywhere in the field of AI today. Something as simple as scaring a cat requires a deep understanding of what it means to scare something: whether the animal is paying attention, whether anyone else is in the room, how loud to sound an alarm. It's like saying your car can suddenly decide to fly. It just isn't programmed to do that yet.
The entire subject is a bit ridiculous, in fact. Why would a security company want to scare an animal? And even if that were something a programmer wanted to do, the point isn't the "why"; it's that the behavior would still have to be programmed. An AI doesn't do it on its own. My concern isn't AI running amok; it's humans thinking AI can run amok. That is the real setback here: the idea that programs can somehow evolve or perform functions that were never written into the software. All adaptive intelligence is programmed.
Don’t get me started on how that’s even true of humans. Not to end on such a cheeky note, but the fact that I head for the coffee machine every morning is also highly programmed.