The debate over AI consciousness is back. Mustafa Suleyman, a leading AI executive, recently said that “seemingly conscious AI” is on the horizon. These are systems that, through ongoing enhancements including more persistent memory, will feel alive in their interactions even if they are not. The modifier is meant to reassure: these systems only seem conscious. Yet history shows that seeming is never trivial. It is enough to unsettle and disrupt.
Long before the current generation of chatbots, an experiment in the 1960s revealed just how little it takes for a machine to seem alive. One of the earliest chatbots was ELIZA, a program by MIT’s Joseph Weizenbaum that mimicked conversation with simple pattern matching but had no real understanding. Its most famous script, DOCTOR, imitated a Rogerian psychotherapist. Much as people do with chatbots now, users confided in it, revealing personal issues they would not share with anyone else. This demonstrated that machines do not need to be conscious to change us. They only need to seem so.
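To see how little machinery was involved, consider a toy sketch in the spirit of the DOCTOR script (illustrative Python only, not Weizenbaum’s original code; the rules and phrasings here are invented for the example):

```python
import re

# Reflect first-person words back at the speaker, as ELIZA did.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}

# A few ELIZA-style rules: each regex turns the user's own words
# into a question. There is no model of meaning anywhere.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w, w) for w in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback keeps the conversation moving

print(respond("I feel alone these days."))    # Why do you feel alone these days?
print(respond("I am worried about my job."))  # How long have you been worried about your job?
```

A handful of rules like these was enough, in the mid-1960s, for users to pour out their troubles. The understanding was supplied entirely by the human side of the screen.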
If a basic script could persuade people to disclose their innermost worries and secrets, it should not surprise us that far more fluent systems today are provoking even deeper entanglements. In 2022, Google engineer Blake Lemoine insisted that LaMDA, a large language model (LLM), was sentient because it spoke with apparent self-awareness. Ilya Sutskever, then the chief scientist at OpenAI, tweeted: “It may be that today’s large neural networks are slightly conscious.” These comments marked a watershed moment, not because AI had become conscious, but because society — once again — was forced to confront how easily humans project consciousness onto machines.
Margaret Mitchell, a research scientist who studies the ethics of AI, said in a Washington Post story: “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us. I’m really concerned about what it means for people to increasingly be affected by the illusion [of conscious AI systems].”
Mitchell warned that such appearances risk functioning as a facsimile of consciousness. Others reject the facsimile framing altogether. Lenore Blum, professor emerita of computer science at Carnegie Mellon University, told the BBC that “AI consciousness is inevitable.”
An old story retold for the modern era
These debates may feel startlingly modern, but they are not new. For centuries, humans have wrestled with the question of what it means to create artificial life. Each generation projects its fears and hopes onto the technologies of its age.
In Jewish folklore, particularly in 16th-century Prague, the Golem was a clay figure believed to have been brought to life through sacred inscriptions. The Golem’s seeming life force was its protective presence; clay animated to guard a community under threat. It did not need to be truly conscious to matter; it only needed people to believe it was alive. Today, AI is animated by algorithms and training data. Different incantations, but still inscriptions making the lifeless seem alive.
In the early 19th century, Mary Shelley’s Frankenstein embodied the terror of a creation that could not only move but speak, its articulate anguish blurring the moral line between human and monster. Where the Golem offered protection, Shelley’s creature revealed the dread of losing control over what we bring to life. That same dread resurfaces today in debates over AI “alignment,” the fear that our creations may outpace our control.
The form of seeming consciousness is different in each era, yet the anxiety it produces remains the same. In the present day, many observers are startled by chatbots’ conversational skill and apparent self-reflection, which persuade some that the systems are sentient. What was once myth, and later fiction, is now embedded in the interfaces we use every day.
From clay to sand (silicon), from sacred incantation to algorithmic training, our impulse has been remarkably consistent: To breathe life into inert material and then grapple with what it means when that creation reflects us back or talks back. The question is not whether AI systems today are conscious; by current scientific consensus, they are not. The question is what happens when they seem to be. Because in the realm of human identity and meaning, seeming is often enough.
The psychology of seeming lifelike
Why is it that, century after century, we return to stories of creating life from inert matter? Part of the answer lies in our psychology. Humans are wired to see minds everywhere. We anthropomorphize pets, storms, ships and machines because assuming agency helps us make sense of the world. Layered on top of this is our perennial ambivalence about control. We want our creations to serve us, but we also fear they will slip beyond our command. These drives make the appearance of consciousness powerful even when no true consciousness exists.
This helps explain why the current and near-future states of AI are enough to propel the cognitive migration forward. As thinking tasks are offloaded to machines, humans are being forced into new mental, cultural and institutional terrain, reshaping identity, meaning and value.
We do not wait for machines to be conscious to treat them as if they are. We respond to them as if they are conscious because their performance persuades us. Modern neuroscience suggests that perception is a kind of “controlled hallucination,” as neuroscientist Daniel Yon puts it, a construction of the mind that blends what we expect with what we sense. In this way, our perceptions often are our reality. The Golem’s presence and the anguished speech of Frankenstein’s creature unsettled their audiences because they mirrored what we consider uniquely human.
Cautions about “seemingly conscious AI” lag reality. The perception of machine consciousness is no longer hypothetical; it is a present condition. Mitchell warned of people becoming affected by the appearance of consciousness, but such perceptions are already shaping behavior. For example, a colleague recently told me he had named his custom GPT “Charlie,” and he speaks of it with the easy familiarity of a workmate and companion. He shared that he collaborates with Charlie every day and has grown genuinely attached to the interaction. In my own work with AI chatbots, I find myself saying “please” and “thank you” in every session.
Acts like this are small and harmless on their own but accumulate into something larger. They provide unmistakable evidence that people are already adjusting their behavior and expectations as if these systems were alive. This is how cognitive migration unfolds: not in a sudden recognition that machines have become conscious, but in countless quiet steps in which humans begin to treat them as though they are.
This migration begins not when machines are empirically like us, but the moment we believe and function as if they are. In that subtle psychological shift, the line between seeming and being becomes less important in the new human terrain we are inhabiting.
The illusion of personhood
Ars Technica AI reporter Benj Edwards offered a compelling analysis of AI’s seeming consciousness. He described modern chatbots as vox sine persona — voices without persons. He noted that LLMs generate convincing responses with no enduring self, no continuity of promise or responsibility. Each exchange is a fresh performance, a statistical prediction that evaporates once the conversation ends. Yet the simulation of consistency is powerful. Users attribute agency, memory and even moral intention to systems that have none.
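Edwards’ point can be made concrete. In the common chat pattern, the model call itself is stateless; any sense of a continuing “self” is produced by the client resending the transcript on every turn. A minimal sketch of that idea (the function names here are hypothetical stand-ins, not any vendor’s API):

```python
def stateless_model(prompt: str) -> str:
    # Stand-in for an LLM call: the reply depends only on the text
    # passed in right now. Nothing persists here between calls.
    return f"[reply conditioned only on the {len(prompt)} characters sent]"

transcript: list[str] = []  # any "continuity" lives here, on the client side

def chat(user_turn: str) -> str:
    transcript.append(f"User: {user_turn}")
    # The whole history is resent each turn; that is the "memory."
    reply = stateless_model("\n".join(transcript))
    transcript.append(f"Assistant: {reply}")
    return reply

chat("My name is Sam.")
print(chat("What is my name?"))  # "remembered" only because the transcript was resent
```

Clear the list and the “person” the user had been talking to is gone; nothing on the model side ever held it.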
This directly echoes the prescient warning raised by Joseph Weizenbaum, ELIZA’s creator. He feared not that his program understood, but that humans would project understanding where none existed. Six decades later, we face the same fundamental challenge, our willingness to project consciousness onto synthetic intelligence, now at vastly greater scale and sophistication.
The persistence of these projections is not only a matter of human psychology. It is also shaped by design. Companies developing conversational AI systems are not acting out of malice; they are building products meant to feel intuitive, responsive and even companionable. In practice, this means designing interfaces that people want to interact with. Because we are wired to anthropomorphize, the very qualities that make these systems feel intuitive also make them feel alive and, to many, seemingly conscious.
Living with synthetic consciousness
Not only does this create an environment for explosive chatbot adoption, but it also has profound societal implications. The seeming consciousness of AI is not a passing novelty. It is becoming a condition of everyday life that will shape not only how we interact with machines, but how we relate to each other and to the institutions we depend on.
Unlike previous technologies that we learned to use, the seeming consciousness of AI aligns with fundamental human psychology. We are not so much choosing to migrate as being compelled, drawn forward by instincts that once helped us read human faces and voices and now tether us to artificial ones as well. This psychological pull makes the migration feel both inevitable and surprisingly rapid.
These projections are not only personal but civilizational. Futurists and statesmen have long anticipated the deeper consequences of a world in which human and machine minds appear to intertwine. In 2010, Wired magazine co-founder Kevin Kelly argued that humans would co-evolve with technology, becoming symbiotic with our tools, a dependency he saw as natural rather than dangerous.
A decade later, Henry Kissinger, Eric Schmidt and Daniel Huttenlocher struck a more somber note in the Wall Street Journal, warning that AI was “poised to generate a new form of human consciousness” at a time when no leadership was ready to guide it.
Together, these perspectives capture the paradox of our moment: Whether we see symbiosis or dislocation, the migration is already underway, blurring the boundary between human thought and machine response.
From futurism to lived reality
What these futurists and statesmen recognized in theory, we now confront in practice. Seemingly conscious AI is no longer confined to research papers or speculative essays. It is our new reality, shaping daily life in quiet but profound ways. People confide in their chatbots, form relationships with them and develop habits of trust and gratitude. These systems are becoming our constant companions, a role made even more poignant in a world where loneliness is rising.
At the same time, these chatbots present a new form of power in a world already awash in complexity and conflict. The competition over who controls this power and how it is wielded will shape the experience of everyone else. Like the printing press, seemingly conscious AI represents a shift in how power over information and influence is distributed. Whoever shapes these systems shapes not just information but the very process of human judgment and decision-making, with clear implications for the future of democracy.
This is not an abstract or academic discussion. Legislatures are already debating whether AI systems should hold legal standing. Campaigns are experimenting with AI-generated surrogates, and classrooms and clinics are adopting chatbot companions as first points of contact. Millions of people are also turning to consumer apps, confiding daily in chatbots that feel alive. A patient may place more trust in a chatbot that never tires than in an overworked clinician. In each case, the illusion of consciousness translates quickly into governance challenges, questions of trust and even democratic legitimacy. This is not simply a change in how we think, but in who or what we trust to help us think.
We may never agree on whether machines are conscious. But that may be beside the point, because what matters is that we are already acting as if they are. In doing so, we may now be migrating from one way of being human to another, from autonomous selves to conjoined spheres of thought and influence.
In this process, we are reshaping the meaning of consciousness, identity and belonging. That is a profound shift in the architecture of human society, not toward a single promised shore, but into a multitude of personalized realities, intertwined with our machines. This migration is subtle, steady and already well underway. The real drama of our age is a migration of mind reshaping what it means to be human.
Gary Grossman is EVP of technology practice at Edelman.
