
When Hollywood isn’t doing comic book franchises, it’s doing AI. Why? Because AI gives us a window into our own souls by challenging us to consider what it means to be human, what it means to think, and what our place in the world is. It’s a topic that’s ripe for philosophical discussion, and hard-hitting directors such as Ridley Scott, Steven Spielberg, Stanley Kubrick, and Spike Jonze have all used it as a platform to explore what a world of AI looks like — and what it might mean to live in it. There’s also a long history of villainous AI, perhaps because those “just a machine” antagonists make human leads seem all the more heroic, or perhaps because science fiction has become increasingly dystopian over time.

But while Hollywood gets some of it right, there’s plenty of artistic license at work. Let’s take a look at some of the things that Hollywood gets wrong about AI, and why.

1. Intelligence vs. sentience vs. sapience

Hollywood is keen on "humanlike" intelligence because it lets filmmakers skim over one of the deep philosophical roots of AI: defining intelligence and determining whether something exhibits intelligent behavior. These questions form an entire branch of philosophy, asking us to consider the nature of consciousness, intelligence, sentience, and sapience. These terms are all related but distinct, except where Hollywood is concerned. In Hollywood, sentience (the ability to experience subjectively) is typically used interchangeably with sapience, the ability to act based on past experience and understanding. "Intelligence" is even harder to define, particularly when expressed in an artificial environment like the game of Go. Films typically pick a term arbitrarily and run with it, reducing a challenging thought problem to little more than shorthand for the idea that a given machine is lesser, better, scarier, or smarter than a human.

2. Let’s all ignore our programming

In what is surely one of the cuter AI outings, the eponymous hero of Disney-Pixar's WALL-E takes service bots like the Roomba to a whole new level, working his way up from trash compactor to environmental activist. WALL-E's transformation begins with his sudden acquisition of sentience (and arguably sapience), but from where? Perhaps WALL-E was initially built as an AI so that he could learn to excel at collecting trash. That is a very narrow AI domain from which to develop feelings like love and nostalgia. The area around his docking station would certainly be clean, but it's unclear how or why he would learn to collect fuzzy objects as a hobby. Yes, this seems nit-picky, and we loved the movie. But there were thousands of these robots, if not millions. Is WALL-E the only one that bootstrapped himself into intelligence through some magical process?

3. Stepping into the Uncanny Valley

Spielberg's 2001 film A.I. gives us the story of a robotic child, David, who is programmed to be able to "love." While the film gets some things right, such as David's adherence to his programming, it misses a big one: the "uncanny valley." Coined in 1970 by Japanese roboticist Masahiro Mori, the term refers to the unease people feel toward robots that look too human to register as machines, yet not human enough to pass as people. Even if you grant that the film was closer to fantasy than to sci-fi (which is what I believe), or that it is set so far in the future that all this has been solved, you really can't make the same arguments for nearer-term films about humanoid robots.

Humans are astonishingly good at spotting things that aren’t quite right, particularly in relation to body language and facial expressions. The most successful humanoid robots will necessarily have exaggerated features like those seen in anime. This approach gets around the uncanny valley by taking a humanoid design and piling enough extra cute on top that it’s obvious that the intent isn’t to be truly human.

4. All you need is you

While the notion of the life-creating mad scientist has been with us since Frankenstein, stitching together a few limbs requires far less technical expertise than building AI. And yet Hollywood likes to labor under the impression that not only is the software side of AI readily achievable by a single individual, but that the hardware side is just a plug-and-play device away. While the 2015 film Ex Machina takes the time to explore the cautious, deeply iterative AI journey, it also gives us an inventor who single-handedly develops humanlike AI, plus an accompanying humanoid body to match, all from the safety of his own secluded lab.

While AI tools are rapidly moving toward ease of use, such AIs are designed to solve specific business problems; they're not humanoid AIs. Sure, chatbots would like you to think that you're talking to a person, but spend 20 minutes with one and you'll be disabused of that notion. One essential purpose of science fiction is to declare a world or situation and then, scientifically and consistently, explore the possibilities of that world. Ex Machina explores the least interesting part of its AI's story. How did she get booted up? Sure, she escaped, but what then? It was much more an escape caper with an AI shroud than truly exploratory sci-fi.
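That 20-minute collapse is easy to demonstrate. Here's a minimal sketch in the spirit of Joseph Weizenbaum's 1966 ELIZA program (the rules and replies below are invented for this example, not taken from any real product): a handful of pattern-matching rules produces a veneer of conversation that falls apart the moment input strays outside them.

```python
import re

# A few hard-coded patterns in the spirit of ELIZA. Anything outside
# these rules falls through to a canned deflection, which is exactly
# why the illusion of a conversational partner collapses so quickly.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Tell me more."

def reply(utterance: str) -> str:
    """Return the first matching rule's reply, or a generic deflection."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the user's own words back, stripped of end punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK
```

`reply("I feel lonely")` gives a convincing "Why do you feel lonely?", but `reply("What's the capital of France?")` can only shrug with "Tell me more." — there is no understanding behind the curtain, just string matching.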

5. High-speed development

Not so far removed from our point above, Hollywood is also rife with AI development that moves at montage speed. Perhaps it’s because Hollywood conceives of AI as the result of a creative act, not a scientific one — with AI simply emerging as the result of inspiration. In Spielberg’s A.I., we go from zero to 100 percent humanlike artificial intelligence in just a year and a half, while Ex Machina gives us astonishingly humanoid bots in just a couple of years.

There are those who argue for the "coming of the Singularity," where AIs build AIs that build AIs and, holy crap, everything just changed in two weeks. It's called the Singularity because it has an event horizon: in the world of black holes (which also contain singularities), the event horizon is the hard boundary past which nobody on the outside can see. If things go all self-replicating, with intelligences building greater intelligences in the space of hours or days, then that's a situation where we simply can't predict what's going to happen. Maybe you can boot up a truly human AI in a year once we've passed some boundary, but we have absolutely no idea where that boundary is.

6. AI does not mean Atrocity Inside

From the riot-inciting robot Maria in Fritz Lang's 1927 classic Metropolis to Arnold Schwarzenegger's unstoppable cyborg Terminator to HAL from 2001: A Space Odyssey, Hollywood is full of menacing AI-gone-rogue tales. But the fear of AI probably stems more from our fear of human obsolescence than from any actual, logical fear of AI itself. Unless we've specifically programmed an AI to harm people — and there are some fairly strict checks and balances in place for that sort of thing — the only real threat from AI is in the workforce, where humans may perform less consistently and effectively than AI on particular information-gathering or pattern-recognition tasks.

AIs will affect the job market; that's a truth you can bank on. Whether they'll be used to drive us toward a post-scarcity society or to create deeper and more entrenched income inequality remains to be seen. For some interesting, somewhat sobering reading, check out the Obama White House's report Preparing for the Future of Artificial Intelligence, an extremely balanced, very clearly written look at the potential future impact. We've met the enemy, and it is us, not the machines.

7. It’s more than the Turing test

Ridley Scott's 1982 Blade Runner, based on Philip K. Dick's classic novel Do Androids Dream of Electric Sheep?, brought the Turing test to the public consciousness with the Voight-Kampff machine. But there's more to AI than the Turing test, which was originally a test of whether a machine could exhibit behavior indistinguishable from a human's in natural language conversation. While passing the Turing test definitely poses an interesting challenge, it's not actually the goal of AI. AI research seeks to create programs that can perceive an environment and successfully achieve a particular goal, and there are plenty of situations where that goal is something other than passing for a human. It's much more likely that the goal is assisting humans rather than imitating them.

8. It’s a sneak attack

The thing about AI isn’t that it’s coming. It’s that it’s already here. Hollywood might paint AI as something we’re on the cusp of, but that’s because Hollywood has conditioned us to see only a very particular definition of AI. These are the humanoid bots of Bicentennial Man and the disembodied voices of spaceships such as Mother in Alien or Icarus II in Sunshine, which together represent a kind of general artificial intelligence. The AI we have today is specialized to perform certain tasks, such as recognizing images, playing chess, or evaluating insurance claims.
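That narrowness is worth making concrete. Here's a toy sketch (a from-scratch perceptron, invented for this example, not any production system) that learns exactly one task — the logical AND — and has no concept of anything beyond its four training examples:

```python
# A toy perceptron: today's specialized AI in miniature. It becomes
# competent at one narrow pattern and nothing else.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for a linearly separable binary task."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Threshold activation: fire if the weighted sum is positive.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge weights toward the correct answer on each mistake.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The model's entire "world": four input/output pairs for logical AND.
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other three inputs return 0 — a perfect specialist. Feed it anything outside its domain and it will still dutifully emit a 0 or 1, with no awareness that it's out of its depth. That's the AI we actually have, scaled up.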

9. AI (probably) ≠ human intelligence

Spike Jonze's 2013 Oscar-nominated film Her explores the deepening romantic relationship between a man and "Samantha," his computer's incredibly precocious OS. With her deductive, creative, and reasoning skills, Samantha functions as a highly intelligent and competent human. Add in her ability to feel emotions and develop relationships, and she's human in every sense but the corporeal one. Her works on the assumption that successful AI won't just be indistinguishable from human intelligence; it will actually be human intelligence (just faster and smarter). When we anthropomorphize intelligence, we give it attributes that are familiar to us: emotions, consciousness, ego, conscience, and even a self-preservation instinct. But human intelligence isn't the only type of intelligence, and there's nothing to say that the intelligent systems we develop will think, feel, or act in a human way.

Consider Google DeepMind's AlphaGo. Going back to our very first point, you can make a convincing argument that it has achieved some sort of sapience: the ability to act based on past experience and understanding. Now imagine that it has achieved some sort of sentience, meaning that it can feel, perceive, and experience subjectively. How would you communicate with such a mind? Its entire universe is the game of Go. Every game is a communication of a sort, but can other patterns be brought out? Say you place your stones on the board in a way that conveys a basic mathematical principle. Does it have enough information to infer that you are trying to get something across, or will it just take your strange moves and wipe you off the board? There's no reason why it couldn't achieve sentience, assuming you're on the "computers can do sentience" side of the fence. If so, what might its world look like?

In other words, maybe we need to think a little less about the “intelligence” part of AI and ponder a bit more on the “artificial” part.

Seth Redmore is the CMO at Lexalytics, a machine learning and text analytics company

Above: The Machine Intelligence Landscape, featuring 288 companies. This article is part of our Artificial Intelligence series.
