“Artificial intelligence” (AI) is an opaque term with no commonly agreed definition and a disputed scope. We routinely use it to represent a range of diverse technologies that have the power to bring disruptive changes around the world. Whether businesses understand the complexities or not, many are scrambling to incorporate “AI” into their practices and their branding.

Given the current buzz around the term, surely it makes sense to capitalize on the interest. Why wouldn’t you want people looking at your company to immediately think “AI”?

Although advancements in AI continue unabated, some experts have been warning for decades about potential problems. Perhaps now is the time for those of us who work in the science and technology fields, or who have a deeper appreciation of the history of artificial intelligence, to rethink how we engage with our employers and stakeholders. We need to be able to communicate effectively about the risks and benefits of the various emerging technologies without aggregating them all under a single over-hyped brand.

Below, I present three reasons the blanket “AI” label, already rejected within certain scientific communities, now needs to be reconsidered more widely.


1. The historical argument

“Those who cannot remember the past are condemned to repeat it.”
— George Santayana

We have seen two AI winters now. When these occurred, research funding dried up, projects were canned, jobs were lost, and scientific progress was constrained. The next major glitch in the history of AI will likely not be a cooling but a backlash. The question is how big the backlash will be. If current predictions about the impact of automation on the employment market become a reality, it could be sizeable. If this is the case, expect to see increasing negative publicity associated with AI and, as with the previous AI winters, individuals and organizations scrambling to distance themselves from the term. If the fallout becomes politicized, it could deeply impact academic research activities, since political reaction to public pressure risks knee-jerk regulatory responses.

2. The education argument

“Any sufficiently advanced technology is indistinguishable from magic.”
— Arthur C. Clarke

For most people working in “AI” fields, precise, accurate, and meaningful labels do exist. Let’s use them. If we’re working in data science, let’s say so. If we’re developing augmented reality hardware, let’s say so. If we’re focused on natural language processing, let’s be explicit about that. If we’re combining machine learning approaches within robotics, then let’s describe it with that level of accuracy. Conflating these distinct technologies under a single abstract label wraps an unnecessary veil of mystique around what would otherwise be clear and explicable. If we want to inform the wider public about the developments that will impact them, let’s communicate using accurate and meaningful terminology wherever possible.

If AI is to become a paradigm-shifting, singularity-inducing catalyst, it will affect every field of science and the lives of everyone on the planet. The term will thus become even more of a vague umbrella label than it already is … capturing pretty much everything and meaning pretty much anything. If the general population comes to perceive AI as effectively “magic,” then the science and technology communities will have failed in one of the most important educational challenges they ever faced.

3. The semantic argument

“The question of whether machines can think … is about as relevant as the question of whether submarines can swim.”
— Edsger W. Dijkstra

We took the word “intelligence,” which people have struggled for centuries to define, and incorporated it into a term intended to represent a scientific discipline. The result is a label which is also indefinable. And the debate about what AI actually means routinely derails important discussions. Conversations about whether machine learning can solve a particular challenge are instantly marginalized by interjections along the lines of “Can machines truly become conscious?” or “Will the machines take over?”

It is worth highlighting that crucially important philosophical and ethical issues are emerging from the fields of data science (personal data, the right to privacy) and automation (self-driving cars, impact on employment). These need urgent consideration. The argument is not that discussions about the nature of artificial intelligence are irrelevant; it is that these discussions require some degree of focus in order to have any value. Since AI is not a specific field, conversations around the term are open to interjections from anyone. The term has thus become a hotbed for confusion and misunderstanding, and a flashpoint for disagreement between different disciplines and interests.

The counterargument

“Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.”
— Stewart Brand

Of course, it could be argued that AI is already an unstoppable force with huge economic impetus, one that will impact us regardless of our choice of terminology. That may be true, but forging ahead without acknowledging some of the concerns raised above may adversely impact further progress.

  • Academic research funding is often more vulnerable to political concerns than commercial funding is. A reduction in public research funding due to an AI backlash would mean that an even greater proportion of continuing scientific development becomes concentrated within a smaller number of powerful, profit-driven corporations.
  • While it might serve the interests of powerful corporations to brand themselves as “AI” companies, and they can likely weather any backlash, similar market positioning may adversely impact smaller companies if public sentiment sours.
  • The guaranteed survival of commercial AI, due to powerful economic forces, may further polarize the interests of industry versus the individual while increasing wealth disparity. The possibility of such a scenario only strengthens the case for improving public understanding of the various technologies that are currently grouped under the AI umbrella.

AI-related technologies are clearly here to stay. And the expression itself is not going anywhere. When it’s the right expression to use, let’s use it. When it’s not, and given that we have a wealth of well-defined, commonly accepted, accurate, and meaningful terminology at our fingertips, let’s communicate with our audiences as effectively as we possibly can.

This story originally appeared on Medium. Copyright 2018.

Steve Miller is a data scientist, an engineer, and a researcher with a PhD in Computer Science, a BSc in Biology, and a BEng in Computer & Electronic Engineering.
