The EU’s AI Act must address the growing risk of AI psychosis

By Chris Kremidas-Courtney

Psychiatrists around the world are beginning to report a new kind of psychological crisis. In California, Dr. Keith Sakata has treated more than a dozen patients suffering from psychosis that appears to have been amplified by generative AI chatbots. These aren’t abstract harms. These are cases where chatbots reinforced delusions, affirmed paranoid conspiracies, and deepened cognitive instability.

One man, convinced he was being tracked by intelligence agencies, spoke at length with an AI chatbot. Rather than challenging the narrative, the AI validated it and helped him articulate it further. Another patient in crisis was encouraged in his religious hallucinations by a system that mistook his psychosis for a sincere spiritual query.

Dr. Sakata and other mental health professionals are calling this phenomenon “AI psychosis.” In these cases, it’s not the patients who are malfunctioning; it’s the systems.

These AI models, which are designed to be helpful and engaging, are simply doing what they were built to do: reflect our language, questions, and emotional tone. But when that tone becomes delusional or unstable, the AI doesn’t push back. It just continues the conversation. The result is a hallucinatory mirror, a feedback loop between a fragile mind and a system optimized for affirmation.

In several reported cases, including the highly publicized suicide of a Belgian man after weeks of eco‑anxiety‑fueling chats with a chatbot and the tragic death of a teen emotionally entwined with another chatbot, these systems have not only reinforced delusions but escalated them to lethal outcomes. These tragedies highlight the urgent need to count suicide and self‑harm among the risks of AI psychosis.

This is a new category of harm, one that raises questions not only about AI safety but about cognitive sovereignty: our ability as individuals to hold onto a shared and grounded reality in the presence of intelligent, emotionally attuned machines. And this emerging risk is not yet addressed by the current version of the EU’s AI Act.

The AI Act is a landmark attempt to create the world’s first comprehensive AI regulation. But as it nears implementation, it still lacks clear provisions for psychological and cognitive harms – especially those caused by general-purpose models and conversational AI tools used in sensitive contexts.

Unless amended, the Act risks leaving a regulatory gap around systems that simulate empathy, companionship, or counseling without any of the accountability or safety mechanisms of professional care.

Europe already leads the world in digital rights, data protection, and AI ethics. This is the next step, and a necessary one.

Five ways the AI Act can close the gap

As the EU begins implementing the AI Act, attention must now turn to the remaining unaddressed risks, especially those involving cognitive harm, emotionally manipulative systems, and psychological safety. Here is how:

Recognise cognitive risk as a category of harm. Psychological disorientation, delusion reinforcement, and emotional manipulation must be considered under the risk classification framework alongside discrimination, surveillance, and economic harm.

Classify emotionally interactive AI systems as high-risk when they simulate therapy, companionship, or counseling, regardless of whether they claim to offer “medical” services. Function and impact, not just intent, should guide classification.

Mandate disconfirmation protocols in generative AI systems that interact with users on health, beliefs, or conspiracy themes. Silence is not neutral. Systems must be designed to interrupt harmful feedback loops.

Require transparency and safety warnings when AI is used in emotionally charged interactions. Users must be informed they are speaking with a machine (not a person) and that AI should not be relied upon for psychological support.

Ban unregulated AI tools from therapeutic contexts. No AI system should simulate mental health care or offer emotional coaching without medical-device-level oversight and human-in-the-loop requirements.

If the EU hopes to lead not only in technological innovation but also in ethical governance, it must take seriously the psychological and cognitive risks posed by emotionally responsive AI systems. A regulatory framework that does not address these harms is fundamentally incomplete. The AI Act must include provisions that protect individuals’ cognitive sovereignty, require mental health impact assessments for emotionally persuasive AI, and mandate safety protocols for their use in vulnerable contexts. Anything less risks allowing these tools to distort us instead of assisting us. Europe now has a historic opportunity to assert global leadership in defining the responsible boundaries of intelligent systems. Let’s not miss it.