There is a disturbing movement among technology companies today. Many claim to be using artificial intelligence in one way or another, but more often than not, these claims are a massive exaggeration.

This may be hard to believe, especially in the age of Elon Musk’s warnings about a potential global apocalypse caused by AI. While Musk’s warnings may be justified, they’re hardly relevant — AI is still playing around in the wading pool of what is and isn’t possible.

There are a few companies that are actually working with real AI — but they’re the exception, not the rule. According to Gartner, many technology vendors are now “AI washing” by applying the AI label a little too liberally. Jim Hare, vice president of research at Gartner, said, “As AI accelerates up the hype cycle, many software providers are looking to stake their claim in the biggest gold rush in recent years.”

This means that if there’s even a remote opportunity to (ab)use the term “AI” to promote a product or service to a client, most companies would do so.

This could explain why it seemed like everyone started using AI in their offerings overnight. How can this be justified? In practice, it can’t. In theory, it might hold up.

Think about the companies using well-known statistical methods such as linear regression, clustering algorithms, or any other method derived from mathematical operations on a matrix. They have used these methods for a long time, and the techniques have almost nothing to do with AI — except that AI today includes machine learning as a special case, which by extension includes almost any statistical method you can imagine.
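To make the point concrete, here is a minimal sketch (with synthetic data invented purely for illustration) of exactly the kind of method described above: ordinary least-squares linear regression, solved with nothing more than matrix operations. Many products marketed as "AI" contain little beyond this.

```python
import numpy as np

# Synthetic data: 100 samples, 2 features, known true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([3.0, -1.5])
y = X @ true_w + 0.5 + rng.normal(scale=0.1, size=100)

# Append an intercept column and solve the normal equations:
# (X^T X) w = X^T y  -- plain linear algebra, decades old.
Xb = np.hstack([X, np.ones((100, 1))])
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

print(w)  # close to [3.0, -1.5, 0.5]
```

Whether you call the fitted line "a model" or "AI" changes nothing about the math; the entire procedure is one matrix solve.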

Thus, if a product or service includes any statistical operation more complex than a simple average, many companies would claim it’s AI by proxy and pursue advanced marketing efforts to be associated with it. This might be “good business,” but in the end, it confuses potential clients far more than the small additional buzz is worth.

Why do we consider statistics to be a part of AI? To answer that question, I’ll point to the logistic regression method developed in 1958 by David Cox. This was related to the work done by Frank Rosenblatt in 1957, which led to the Perceptron algorithm. The Perceptron is the forebear of deep neural networks, which today are the first thing a person might think of when they hear the term AI.
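The Perceptron's learning rule fits in a few lines, which underscores how close this "AI ancestor" sits to elementary statistics. Below is a minimal sketch of Rosenblatt-style Perceptron training on a tiny linearly separable toy problem (the AND function, with ±1 labels); the dataset and loop bounds are my own illustrative choices.

```python
import numpy as np

# Toy, linearly separable data: the AND function with labels in {-1, +1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

w = np.zeros(2)
b = 0.0
for _ in range(20):                    # a few passes over the data suffice here
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:     # misclassified: nudge the boundary
            w += yi * xi
            b += yi

print(np.sign(X @ w + b))  # matches y once the classes are separated
```

The update rule is just "add the misclassified example to the weights" — a step away from logistic regression's gradient update, and the same lineage that deep networks descend from.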

One of the issues with the term AI is that non-experts tend to think about it in terms of general AI, which is a fundamentally different beast from narrow AI. Every single application, service, method, and procedure you hear of today claiming to use AI is at best an attempt at achieving narrow AI. Basically, narrow AI solves a very specific problem with a specific set of requirements, but it does not generalize its problem-solving abilities to other domains. That’s why beating the human grandmaster Lee Sedol at Go is an impressive engineering achievement, but hardly a step forward in the pursuit of general AI.

If I can leave you with one piece of advice, it would be to consider this: the fact that a company does not have “real AI,” and instead uses the ordinary statistical methods we all know, does not invalidate its product or service offering.

Simultaneously, even if a company claims to be using real AI, that doesn’t necessarily mean that its product or service is useful. Thus, it always pays to ask a few extra questions in a business meeting where the term AI is being shot off like a fiery arrow across the table. However, the company that does eventually crack “general AI” will set the course for the rest of us operating in this space, and possibly for the future of mankind as a whole.

Dr. Michael Green is the chief analytics officer at Blackwood Seven, an AI-powered marketing agency.