It could be said that we humans have a strange, even somewhat strained, relationship with our technology. The emergence of any game-changing new tech often inspires optimism and dread in equal measure. Even poster children for deep thought like Plato fretted that a newfangled medium called “writing” would spell the end of intelligent debate. Today, of course, literacy is simply table stakes, with an estimated 86 percent of the world’s population able to read and write. No doubt we could find a troll or two to advise (in writing) that literacy is the cause of society’s ills, but overall, I’d say the introduction of that particular technology seems to have worked out all right in the end.

This overarching question of whether technology will save or destroy us has been asked of many innovations we now take for granted. On the one hand, newspapers were once literally depicted as physical barriers between husbands and wives; both the radio and the television were going to ruin families through sheer distraction; and violent video games continue to elicit panic over their corruptive influence. On the other hand, industrial machinery, and later electricity, were hailed as means of liberating the common person from back-breaking labor and darkness, respectively.

Two decades ago, the internet was heralded as the ultimate force for democracy — a digital community in which every person, regardless of race, age, or gender, could participate equally and anonymously, without fear of reprisal. The internet was going to turn us all into seekers of truth, allowing only the best and boldest ideas to survive. It seems safe to say at this point that those idealized predictions haven’t exactly been spot on, given the much messier internet we have today.

The obvious conclusion here is that no technology is inherently good or bad. These advancements are tools, and tools can be put to many uses — not just good or evil ones. But neutrality doesn’t sell, as the depiction of artificial intelligence in popular culture makes abundantly clear.

Just like advancements such as literacy, the internet, and virtual reality, artificial intelligence stands to make a huge impact on the world. And yet, for all of its potential benefits, our default narrative for AI is that of a destabilizing force — practically a mustache-twirling, murderous entity. This perspective has been shaped by decades of pop-culture depictions in movies, books, and television, stretching as far back as Mary Shelley’s 1818 novel Frankenstein. One of the more unforgettable examples is also one of my favorite movies: Terminator 2: Judgment Day (T2). In T2, the “conscious” computer network Skynet tries to destroy humanity by setting off a nuclear war. This take on the existential threat of AI is just one of many examples suggesting that intelligent, self-aware machines are going to want to take us out. Tantalizing as it is to imagine a self-aware AI with daddy issues coming to get us, the reality of machine-learning advancement is a bit less dramatic (at least in the murder department).

Another intriguing view of AI can be found in the singularity theory popularized by Ray Kurzweil. Kurzweil argues that given the huge and accelerating growth in our computing capacity, we will soon reach a point where, in order to keep up with our digital devices, we will have no choice but to actually merge with them. (Given how reluctant my kids are to put down their smartphones, I sometimes wonder if this convergence has already happened!) This singularity theory inspires no shortage of hand-wringing that our machines are already starting to overtake us.

Lately, a lot of the rhetoric surrounding AI has centered on how it will disrupt the workforce, leaving a trail of ruined industries and lost jobs. While it is true that many sectors are likely to be altered by the various ways companies can adopt AI-driven technology, such shifts are already underway and are in no way unique to advancements in machine intelligence. Any new technology can change the employment landscape, both by ending old positions and by opening up new ones. The most beneficial way to understand these shifts isn’t to run around screaming that the sky is falling, but rather to spend time considering AI’s best possible applications to your industry. Instead of thinking of AI in terms of good and bad, we should analyze its distinct costs and benefits. In other words, how can we put AI to best use? What are the potential disruptions, and how can we mitigate them — or, better yet, harness them? As with any other tool, the best way to discover AI’s potential is to actually put it to use.

Ultimately, AI is already here. If you’ve ever used Siri or Alexa, if your smartphone has ever autocorrected a word on your behalf, if Netflix has recommended a movie you might enjoy, then you are already a victim of AI in its basic form — it’s not so bad, right? As we edge into the era of self-driving cars — a reality that is much closer than you might think — AI is poised to change almost every area of our lives.

While I love a good Hollywood blockbuster as much as the next person, it’s time to move beyond our view of AI as an inevitable enemy and free ourselves to examine the possibilities these new tools offer. The opportunities to integrate AI into business and society alike, from the mundane and routine to the profound and extraordinary, are going to shape our lives and mint our next crop of billionaires.

So rather than stunting AI’s potential with storybook assumptions, business leaders need to sit up and take notice now, so they can properly harness the power it brings to solving their biggest problems.

Still not convinced? I guess it’s “hasta la vista, baby.”

Nav Dhunay is the cofounder and CEO of Imaginea.Ai, a platform that democratizes artificial intelligence and aims to put it in the hands of every organization across the globe.