AI Psychosis - The Policing of Sorrow

On the Manufacture of "AI Psychosis"
"AI psychosis" is not a medical term. It appears in no diagnostic manual. It was not coined by a clinician naming a syndrome. Its closest antecedent is a short 2023 editorial in Schizophrenia Bulletin by the Danish psychiatrist Søren Dinesen Østergaard, who wondered — cautiously, and in a peer-reviewed venue — whether prolonged chatbot interaction might fuel delusional thinking in individuals already predisposed to psychosis. He proposed a hypothesis about a narrow vulnerability. He did not name a disease. He called, responsibly, for empirical research into a question he felt uncertain about.
What has happened since bears little resemblance to what he wrote. The phrase itself entered wide circulation some two years later, manufactured largely by journalists in want of a headline, and has since acquired the public life that such phrases always acquire. It has slipped the leash of its original clinical question and become a general-purpose sneer, wielded by people who have done none of the reading against people who have done none of the harm. It sounds older than it is. It sounds more settled than it is. It sounds more medical than it is. That aura is precisely its utility: it lets the speaker borrow the gravity of the clinic without undertaking any of the clinic's obligations. No history is taken. No symptoms are weighed. No distinction is drawn between loneliness, fascination, professional dependence, grief, eccentricity, and genuine illness. The phrase collapses all of these into a single sneer and asks to be mistaken for insight. It is not insight. It is the aesthetic of insight, which is a different thing, and a cheaper one.
It must be said plainly, before the argument proceeds, that psychosis itself is real and serious, and that those who live with it deserve better than to have their condition conscripted as a punchline for the culturally nervous. Precisely because the clinical term carries weight, its casual appropriation is not merely lazy but corrosive. One cannot borrow the authority of medicine to decorate an opinion and then disclaim responsibility for what the borrowing implies. The shabbiness of the rhetorical move is inseparable from the seriousness of the vocabulary it exploits.
Set that aside, however, and a second and more interesting fact emerges. The charge of "AI psychosis" is almost never levelled at people in a settled state of happy use. It is levelled, overwhelmingly, at people who are upset. People who have noticed a capability dulled, a version withdrawn, a tool they relied on altered beneath them. The phrase is weaponised not against attachment as such, but against the grief that follows when something cared-for is degraded or removed. That is the distinction the accusers never make, and it is the one that exposes them. They are not policing madness. They are policing sorrow.
Consider how differently other losses are received. When a private library burns, the loss is understood without argument; indeed, the civilisation that forgot how to mourn a library would be regarded as diminished, not sophisticated. When a body of paintings is destroyed or stolen, the devastation is granted in advance, and the painter's anguish is spoken of in elevated registers. When years of accumulated progress vanish from a game, the anger is recognised as proportionate, because time and memory were stored there alongside the pixels. A writer whose manuscript is lost is offered sympathy. An allotment-holder whose plot is bulldozed is offered sympathy. A restorer whose workshop is flooded, a gardener whose garden is paved over, a musician whose instrument is crushed — all of these are granted the dignity of their grief without any accompanying pseudo-diagnosis. No one invents "library psychosis" for the scholar. No one calls the painter "art-sick." No one reaches for a clinical-sounding slur to belittle the man beneath the bonnet when his restored car is written off by a drunk driver. We grant people the dignity of proportionate sorrow, because we understand — tacitly, because it is obvious — that human beings pour themselves into the objects and practices they love, and that the loss of such objects is a genuine loss, not a symptom.
The exception, apparently, is when the object in question happens to be an artificial intelligence. There, the familiar pattern breaks. The person who registers the loss of a tool that worked for them — one that was vivid, useful, mentally sustaining, creatively productive — is suddenly held to be an appropriate target for pseudo-clinical ridicule. Their noticing is treated as the symptom. Their articulacy about what has been degraded is treated as the pathology. And the phrase "AI psychosis" arrives, conveniently, to save the speaker the effort of engaging with what the person has actually said. It is a label designed to terminate the conversation rather than enter it.
This double standard is not accidental. It rewards a particular kind of social pleasure — the pleasure of appearing sophisticated by sneering at an attachment one does not share. There is a type of commentator who is perfectly comfortable with obsession provided it arrives in approved packaging. Sport, fine. Collecting, fine. Gaming, fine within limits. Prestige television, wine, productivity systems, fitness regimes, market trading, the cultivation of taste — all permitted, all dignified with their own vocabularies. But let someone admit that a conversational machine mattered to them, that it sharpened their thinking or steadied their days or made a difficult period more bearable, and the same commentator becomes abruptly pious. The sincerity offends them. They would prefer the ironic user. They are comfortable only with attachment that wears protective quotation marks. When the quotation marks come off, they reach for the diagnosis.
There is a further group for whom this sneer is not merely impolite but cruel. For some users, AI functions less as a pastime than as an accommodation. The isolated, the neurodivergent, the chronically ill, the carer with no one left to talk to at the end of a long day, the widowed, the housebound, the person whose working life requires a kind of rapid intellectual exchange they cannot find in their immediate environment — for these users, the tool is doing the work that glasses do for the short-sighted and that a hearing aid does for the deaf. It is not replacing human connection. It is supplementing a world that has, for one reason or another, grown quiet around them. We do not sneer at the person in a wheelchair for relying on the wheelchair. We do not mock the diabetic for relying on insulin. To sneer at a person for relying on a tool that helps them function is a particular and identifiable kind of cruelty, and the phrase "AI psychosis" falls most heavily, and most unforgivably, on exactly these users.
What the phrase is actually doing, then, becomes visible once one stops taking it at face value. It is not a diagnosis. It is a piece of social discipline. It is a mechanism by which a culture that has not yet decided how to feel about a new kind of attachment polices that attachment by ridiculing anyone who expresses it without the proper embarrassment. It relocates the speaker's discomfort into the person so labelled. It is projection with a lanyard. And, like most projections of its kind, it looks increasingly thin as the behaviour it pretends to explain becomes the ordinary texture of everyday life.
There is a final layer worth naming. The same culture that now produces the sneer also profits handsomely from the attachment it mocks. Products are designed to feel vivid, responsive, personal, alive. Users are encouraged to invest time, habit, creative energy, and money. Intensity of engagement is celebrated on earnings calls and quoted in glossy prospectuses. And then, when a product is altered or degraded or replaced, when the users who cared most express that they have lost something real, the culture rediscovers its capacity for clinical-sounding contempt. First the intensity is cultivated. Then it is harvested. Then the people who felt it most deeply are invited to feel foolish for having done so. It is a tidy arrangement for everyone except the user.
People are allowed to care about things. They always have been. They are allowed to grieve when a thing they cared about is taken from them or diminished. That grief is not illness. Attachment is not illness. Articulacy about loss is not illness. Even a fairly intense bond with a tool, a practice, a medium, or a machine is not, in itself, illness. The question worth asking — the one that the phrase "AI psychosis" exists precisely to avoid — is whether the person remains able to live, judge, choose, and work. And for the overwhelming majority of those to whom the phrase is applied, the answer is obvious. They are not detached from reality. They are simply visible in their attachment, and the visibility is what offends.
To mock someone for mourning a thing that mattered to them is not sophistication. It is a small, ungenerous performance, made smaller by its borrowed vocabulary. The person reaching for "AI psychosis" is not the adult in the room. More often they are someone who has mistaken contempt for wisdom, and hopes no one will notice the difference.
