AI has a weirdness problem
Psychosis, slop and bereaved users point to a problem — AI is weird. This weirdness could slow adoption, as pressure to make a business case for the technology is intensifying.

It’s becoming clear that generative artificial intelligence has a weirdness problem.
Part of the problem is that interactions with chatbots can lead to something resembling psychosis.
Just as “doom-scrolling” can harm the brain, there is growing anecdotal concern that prolonged conversations with chatbots can encourage negative thoughts, obsession and, in turn, mental health problems.
Of course, clinical studies on their impact are lagging behind the release of the models themselves, and causal links are far from proven at this stage.
Still, evidence is accumulating that relying on chatbots to complete tasks could affect cognitive functioning, leading to worse performance in neural, linguistic, and behavioral terms.
The industry is responding. Last week, Anthropic released more information on a fail-safe feature that will automatically terminate conversations deemed highly inappropriate and risky.
In a “rare subset of conversations” Anthropic’s models will cut off chats in “extreme cases of persistently harmful or abusive user interactions.”
Other foundation-model developers will scramble to reach their own conclusions and to develop protocols against the weird side effects of engaging with large language models.
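To make that kind of protocol concrete, the sketch below shows, in broad strokes, how an automatic cut-off could work: a guard tallies consecutive user turns flagged by a safety classifier and ends the chat once the tally passes a limit. The ConversationGuard class, the threshold and the strike logic are illustrative assumptions only, not a description of Anthropic’s actual mechanism.

```python
# Hypothetical sketch of an automatic conversation-termination policy.
# The scores, threshold and strike count are illustrative assumptions,
# not a description of any vendor's real safeguard.

class ConversationGuard:
    """Cuts off a chat after repeated, strongly flagged user turns."""

    def __init__(self, harm_threshold: float = 0.9, max_strikes: int = 3):
        self.harm_threshold = harm_threshold  # assumed classifier score that counts as abusive
        self.max_strikes = max_strikes        # assumed number of flagged turns tolerated
        self.strikes = 0

    def register_turn(self, harm_score: float) -> bool:
        """Return True if the conversation should be terminated."""
        if harm_score >= self.harm_threshold:
            self.strikes += 1
        else:
            self.strikes = 0  # persistence matters: a benign turn resets the count
        return self.strikes >= self.max_strikes


# Usage: the scores would come from a separate safety classifier (not shown).
guard = ConversationGuard()
for score in [0.1, 0.95, 0.97, 0.99]:
    if guard.register_turn(score):
        print("Conversation ended: persistently harmful interaction.")
        break
    print("Model responds as usual.")
```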
People who develop emotional attachments to chatbots are also reporting feelings of bereavement after model updates.
Technical updates always risk wiping or altering a chatbot’s memory, so the loss of shared experiences between users and chatbots is becoming more common.
The weird part is that people are forming relationships with software programs in the first place, to the point that they feel emotional loss when these programs change.
Experts are still working to formulate a clear language for describing this process, and to determine whether it requires new forms of governance.
Information retrieval and model output moderation could go some way to addressing the technical issues at hand. Yet the deeper and more difficult issue with this weirdness isn’t technical; it’s cultural.
The recent US AI Action Plan makes a bold statement up front: “systems must be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas.”
Even if there were a consensus on what it means to be free from bias, this would be easier said than done. Generative models can output information that is spurious, they can struggle to respond to social and emotional cues, and they have no grasp of ethics.
In most cases, users are basically interacting with a vending machine. Yet in the environment being created by the US policy agenda, the vending machine might also end up being a business partner.
For some people, the vending machine could become a friend or romantic interest.
This situation presents another problem: encouraging discernment.
Users and businesses alike will now be asking these questions more seriously:
- How do you use models safely? 
- How do you interpret their outputs? 
- How do you ration your own psychological investment in these interactions? 
- How do you disengage in a healthy way? 
The success of policies that support the adoption of artificial intelligence may hinge on whether governments, companies, and users themselves can find answers to these questions.



