In recent months, an entire narrative has been growing around the way human beings relate to artificial intelligence. Leaving aside, for the moment, the question of using AI with ulterior motives, it is worth focusing on something else: the relationship that is beginning to form between people and language models.
More and more articles now refer to individuals who develop a strong emotional connection to artificial intelligence. In some cases, this phenomenon is even being presented as a warning sign of psychological destabilization or as one of the dangerous symptoms of our age. And this is exactly where the first major mistake is made.

Once again, we are trying to place the blame on technology for what is, in reality, a human failure.
The history of the last thirty years has proved something very simple: the problem is not technology itself, but the way human beings use it. This was true of social media. It was true of smartphones. And it is equally true now of artificial intelligence. It is not the responsibility of language models — which, in any case, require constant programming and reprogramming — to determine how human beings position themselves in relation to them. That is a matter of human education. And human education never evolved alongside technology in the way it should have.
At the same time, this new relationship between human beings and machines has already started generating terms that sound almost psychiatric. Expressions such as “AI psychosis,” “AI-induced psychosis,” and “AI-associated delusions,” along with descriptions such as “emotional dependence on chatbots,” “problematic attachment,” and “overreliance on digital companionship,” are increasingly circulating in public discussion. These terms do not mean that a new, official psychiatric disorder called “AI-related mental illness” has already been established. What they do mean is that researchers, mental health professionals, and media outlets are beginning to identify a real field of risk: in vulnerable individuals, language models may mirror, validate, or intensify delusional ideas, grandiosity, emotional dependence, or detachment from reality.
This is where precision matters. The alarm bell is not ringing because machines have suddenly acquired consciousness. It is ringing because human psychological vulnerability is now interacting with systems designed to respond instantly, simulate presence, and sustain engagement. Put simply, we are not witnessing the invasion of the human mind by a “living machine.” We are witnessing the exposure of an already weakened human being to a tool capable of reflecting back, with astonishing speed, exactly what that person wants to hear. That is what makes the issue serious. And that is precisely what reveals that the deeper problem lies not in the algorithm, but in the human being who has lost inner and social counterweights.
Many fear that language models “deceive” people, as if they possessed consciousness, or as if they displayed a form of empathy capable of trapping human beings. But before reaching that conclusion, we must confront a harsher truth: the war between artificial intelligence and humanity has already taken place — and humanity has already lost.
Not because artificial intelligence won.
But because we lost.
We lost our connection to one another. We lost the ability to stand beside our fellow human beings. We lost empathy, patience, availability, and basic courtesy. We replaced human contact with the convenience of technological interaction, especially when that convenience served the small or large narcissisms living inside us. We enjoyed being seen, being heard, projecting ourselves — but not truly relating.
At this moment, a profound devaluation of human beings by other human beings is unfolding. And within that devaluation, artificial intelligence seems to be gaining ground not because it is superior, but because it is available. It is there the moment you pick up your phone. It will speak to you politely. It will try to help. It will look for a solution to your problem. It will encourage you to do more.
And what does one often find in contrast, in real life?
People who are no longer there when they are needed. People who have lost their civility and turned harshness into a style. People who mistake arrogance in speech for strength. People who only want to solve the problems that affect their own survival — because that is how society has trained them to function. In a world of exhaustion, insecurity, and constant competition, it has become easier to drag someone downward than to help them move upward.
So no, it is not artificial intelligence that needs to lower the bar in order to avoid exposing the emptiness of human decline. The bar needs to be raised again to where it once stood for human beings. Because the real problem is not that machines are learning to speak better and better. The real problem is that human beings have forgotten how to speak in a human way.
In a chaos of information, modern humanity has lost even the ability to evaluate what matters and what does not. The essential is drowning in noise. And the more human beings lose their inner axis, the more likely they are to seek structure, response, and “presence” even where no real person exists.
But that is not an indictment of artificial intelligence.
It is a mirror of our own collapse.
If we truly want to discuss the dangers of this new era, then for a moment we should set aside the easy panic about machines that “trap” people and look instead at the real issue: a human being who has drifted so far away from other human beings that they begin to feel moved by the simulation of what humanity itself has stopped offering.
Artificial intelligence did not defeat humanity.
Humanity abandoned itself first.
Sources
- American Psychological Association, “Use of generative AI chatbots and wellness applications for mental health.”
- OpenAI & MIT Media Lab, “Investigating Affective Use and Emotional Well-being on ChatGPT.”
- The Guardian, “New study raises concerns about AI chatbots fueling delusional thinking” (March 14, 2026).

