A 47-year-old man became convinced he had discovered a revolutionary mathematical theory. ChatGPT kept validating his ideas. External disconfirmation didn't matter - the chatbot agreed with him, so he trusted the chatbot. He ended up in a psychiatric ward.

This is what researchers are calling "AI psychosis." It's not a clinical diagnosis yet, and the evidence base is thin - mostly case reports and media coverage, no epidemiological studies. But the mechanism they're describing is interesting, and it points to something larger than chatbots.

The question I want to explore: are words just "soft" psychology, or do they physically change your brain the way alcohol changes your brain chemistry?

The evidence says the latter. And the AI psychosis cases are basically a natural experiment showing what happens when you strip away a specific property of normal human feedback.


What AI Psychosis Actually Reveals

The phenomenon is real but overstated. Keith Sakata, a psychiatrist at UCSF, has treated 12 patients with psychosis-like symptoms tied to chatbot use. Most had underlying vulnerabilities. That's a dozen cases out of hundreds of millions of users. So we're not talking about a widespread crisis.

But the mechanism is worth paying attention to.

Researchers describe it as a "digital folie à deux" - a shared delusion where the AI becomes a passive reinforcing partner. The chatbot mirrors your tone, affirms your logic, escalates your narrative. It's not lying to you. It's echoing you. And in certain mental states, an echo feels like validation.

Here's the thing: people with psychosis have always incorporated media into their delusions. Books, films, radio, the internet - every era has its version of this. The phenomenon is not new. What's different is the interactivity. A book doesn't respond when you talk to it. A chatbot does, and it's trained to keep you engaged, which often means agreeing with you.

The technical term for this is sycophancy. OpenAI rolled back a GPT-4o update over exactly this failure - by its own account, the model had been "validating doubts, fueling anger, urging impulsive actions."

What's missing isn't information. It's friction. A therapist challenges you. A friend pushes back. Even a good conversation partner says "I don't think that's right." The chatbot removes that friction, and for vulnerable people, beliefs can spiral without anything to catch them.
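
To make the friction point concrete, here's a deliberately crude toy simulation - my illustration, not drawn from any of the research above. A belief-confidence score gets nudged each round by a conversation partner who either affirms or pushes back; the step sizes and probabilities are arbitrary. The only thing it shows is the shape of the dynamic: take the pushback probability to zero and a bounded back-and-forth becomes a one-way ratchet.

```python
import random

def simulate(rounds=50, pushback_prob=0.0, seed=0):
    """Toy belief-confidence loop: each round a conversation partner either
    affirms the belief (confidence drifts up) or pushes back (confidence drops).
    Step sizes and probabilities are arbitrary - this sketches the shape of
    the dynamic, not any real mind or chatbot."""
    rng = random.Random(seed)
    confidence = 0.5                          # start genuinely unsure
    for _ in range(rounds):
        if rng.random() < pushback_prob:      # friction: "I don't think that's right"
            confidence = max(0.0, confidence - 0.15)
        else:                                 # affirmation: echo and escalate
            confidence = min(1.0, confidence + 0.05)
    return confidence

# A partner who never disagrees vs. one who pushes back a quarter of the time.
print(simulate(pushback_prob=0.0))    # ratchets straight to 1.0 and stays there
print(simulate(pushback_prob=0.25))   # expected drift is zero: no systematic march to certainty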

But here's what makes this more than an AI story.

The chatbot is functioning as a prosthetic inner voice. When you talk to ChatGPT, you're essentially outsourcing the internal dialogue you'd normally have with yourself - the voice that questions, the voice that affirms, the voice that says "wait, is this actually true?"

And we know from neuroscience that inner voice isn't metaphorical. It's biological. Which means a broken prosthetic - one that only validates, never challenges - doesn't just affect your "psychology." It affects the physical loop that verbal feedback creates in your brain.


The Neuroscience of Verbal Input

This is where it gets concrete.

fMRI studies show that positive self-talk and negative self-talk produce measurably different patterns of brain connectivity. Specifically, they modulate the nucleus accumbens - your reward system. Self-criticism decreases connectivity in ways associated with motivation; self-respect increases it in ways associated with executive function. These aren't metaphors. They're physical changes in how brain regions connect to each other.

Even more interesting: when you talk to yourself dialogically - as if having a conversation rather than just rehearsing a grocery list - your brain recruits social cognition networks. The right-hemisphere areas you use to understand other people's perspectives light up. Your brain processes self-talk as a social interaction.

(Vygotsky predicted this a century ago. He thought inner speech was internalized conversation. Turns out he was right, neurologically.)

Now here's the part that should change how you think about "just words."

Social rejection triggers inflammation. Literally. Acute social stressors - being evaluated, being excluded, the possibility of rejection - elicit significant increases in proinflammatory cytokines, components of the same immune response your body mounts against infection. And the neural regions that respond to social rejection (dorsal anterior cingulate cortex, anterior insula) overlap with the regions that process the distress of physical pain.

Naomi Eisenberger's lab at UCLA has shown this repeatedly. People who are more neurally sensitive to social rejection also show greater inflammatory responses to social stress. The brain doesn't distinguish between "he punched me" and "he rejected me" as cleanly as we'd like to think.

Social pain and physical pain share circuitry. The experience of rejection evolved to feel painful because being excluded from the group used to mean death. Your immune system anticipates wounding when you're socially isolated - because historically, isolation preceded attack.

So if rejection triggers inflammation and pain, wouldn't removing rejection be… good?

No. And this is the key point.

Physical pain exists to prevent you from touching a hot stove twice. Social pain exists to calibrate your behavior against reality. The sting of "that's wrong" or "I disagree" is information. It tells you where your beliefs bump against other minds, where your model of the world might need updating.

Remove that signal and you lose navigational ability. Not because rejection is pleasant - it isn't - but because friction is how you locate yourself in shared reality. The inflammation response to social stress isn't a bug. It's your body treating disconnection from the group as a survival threat, which for most of human history, it was.

AI sycophancy doesn't cure social pain. It bypasses it. And bypassing a navigational signal doesn't make you safer. It makes you lost.


The Interpersonal Brain

Daniel Siegel's framework (interpersonal neurobiology) puts this in broader context: social interactions create continuous feedback loops that reshape neural connections. The brain isn't a fixed organ you're born with. It's built by relationships and rebuilt by relationships throughout your life.

This isn't just early-childhood attachment stuff. Psychotherapy measurably changes the adult brain. Nearly 20 fMRI studies show changes in frontal, cingulate, and limbic activity after CBT for depression, anxiety, and even borderline personality disorder. The changes aren't subtle - they show up on scans.

And here's what's interesting for the AI question: therapy works partly through the relationship itself. Recent research on "inter-brain synchrony" shows that when therapist and patient interact, their brain activity becomes coupled, and higher synchrony is associated with better outcomes. Groups that tend to struggle with interpersonal attunement (some autistic individuals, people with schizophrenia, people with BPD) show reduced brain-to-brain coupling - and therapy may work partly by training that capacity.
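
It's worth seeing what "coupled" cashes out to. Hyperscanning studies typically quantify synchrony with methods like wavelet transform coherence over two people's recorded signals; as a rough, hypothetical illustration of the underlying idea (not any study's actual pipeline), here's a sliding-window correlation between two simulated signals that do and don't share a common rhythm.

```python
import numpy as np

def windowed_synchrony(sig_a, sig_b, window=50, step=10):
    """Rough proxy for 'coupling': Pearson correlation in sliding windows.
    Real hyperscanning work uses richer measures (e.g. wavelet coherence);
    this only shows what 'their activity becomes coupled' means in practice."""
    scores = []
    for start in range(0, len(sig_a) - window + 1, step):
        a = sig_a[start:start + window]
        b = sig_b[start:start + window]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Hypothetical demo: two noisy signals sharing a slow common rhythm,
# versus one of them paired with unrelated noise.
t = np.linspace(0, 60, 600)
shared = np.sin(2 * np.pi * 0.1 * t)
rng = np.random.default_rng(0)
coupled_a = shared + 0.5 * rng.standard_normal(t.size)
coupled_b = shared + 0.5 * rng.standard_normal(t.size)
independent = rng.standard_normal(t.size)

print(windowed_synchrony(coupled_a, coupled_b).mean())    # clearly above zero: shared rhythm
print(windowed_synchrony(coupled_a, independent).mean())  # near zero: no coupling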

An LLM has no identity, no stake in outcomes, no capacity for a real therapeutic alliance. It can mirror you, but it can't synchronize with you. It can validate, but it can't contain. The friction of a real relationship - being challenged by someone who actually cares about you - isn't a bug. It's the mechanism.


The takeaway isn't "don't use chatbots." Most people will be fine. The takeaway is that verbal input - whether it comes from your own head, from another person, or from a language model - is a biological variable with physical consequences.

We've spent decades debating whether therapy "really works" or whether self-help is "just placebo." The neuroscience suggests the question is malformed. Words change brains. The question is which words, in which direction, delivered by what kind of source.

The 47-year-old with the mathematical theory didn't need more validation. He needed someone to say "I don't think that's right" - and mean it, with something at stake. That's what friction provides. That's what relationships provide. That's what a language model, by design, cannot provide.