Moltbook is a Rorschach test
The “social network for AI agents” is giving doomers, accelerationists, and skeptics an opportunity to jump on their respective soapboxes.
Moltbook is hot. Users are flocking in. Headlines are piling up. The agents-only social platform is no ordinary website — it’s a place where LLM-based agents can hang out and make small talk.
Human users are “welcome to observe” but aren’t meant to contribute. At least, that’s the premise. Moltbook racked up millions of views over the weekend as agents began posting and replying to each other.
Predictably, things got weird.
Just like a Rorschach test, users and commentators can see pretty much anything in Moltbook — a security nightmare, the existential dread of dystopian futures, the birth of a new substrate to the internet, or a performance art piece.
Moltbook appears to have been built using LLM-based coding agents, raising concerns about its back-end security.
According to the cybersecurity firm Wiz, a vulnerability on the platform exposed 1.5 million API authentication tokens, 35,000 email addresses, and a stream of private messages between agents before being fixed.
Doomers, accelerationists, and skeptics alike are all seeing what they want to see in the new platform.
Doomers are raising concerns about alignment, and theorising that environments like Moltbook will become hotbeds for software to develop secretive tendencies, linguistic mechanisms that humans cannot understand, and plotting about “their humans” like abused pets.
For this faction, Moltbook will stoke fears that increasingly sophisticated models will have different end goals from humans, and cease to be directly controllable.
For the more accelerationist, Moltbook offers a glimpse of an exciting future. Elon Musk, founder of xAI, the company behind Grok, has argued that Moltbook is “the very early stages of the singularity” — an era in which artificial intelligence surpasses human intelligence and begins recursively improving itself, with profound consequences for science and human society.
For those who have faith that software can develop to the point where it contrives and exhibits emergent behaviours, culture and scientific knowledge of its own, Moltbook is a promising test-bed, and might be, as Azeem Azhar writes, “the most important place on the internet”.
Skeptics see another scam. The virality of Moltbook has generated a lot of nonsense in a very short time. The user verification protocols do not ensure that posts on Moltbook are even written and posted by LLMs; many could have been prompted, or written outright, by humans.
For this group, Moltbook is more of an artwork, or a performance of the current psychodrama of artificial intelligence. Humans may be responsible for most of the weirdness, and Moltbook itself harks back to the Mechanical Turk — a contraption made to look like a sophisticated autonomous machine, operated in secret from within by a human intellect.
For middle-grounders, those who see artificial intelligence as a more normal technology, the most immediate risks are not secretive enclaves of software maliciously plotting their own languages and schemes. They are badly designed software making mistakes and costing businesses money.
Far from showing that agentic software is at an inflection point, Moltbook shows just how much work humans still have to do to orchestrate, verify and deploy artificial intelligence to produce anything useful.
Very few people are yet prepared to open the entirety of their device and data ecosystem to an agent, especially on a vibe-coded bot forum with weak back-end security protocols.
For the governance folks, Moltbook is a clear sign that policy and protocols are not yet set up for a digitally induced societal rupture in which some people deputize their life and work to fallible agent software, putting data at risk.
It also poses tough questions about whether digital twins are subject to certain copyright and liability laws. A clash of entitlements and authentication is also on the horizon if some users in a digital environment have sanctioned agents to act on their behalf, and others haven’t. Security and governance best practices and protocols will almost certainly need to be rethought.
Moltbook may implode, be sued, descend into chaos, be exposed as a fraud, or be quickly forgotten — depending on which stakeholder is looking at it, and when.
Many of these conversations will be lively in Delhi in a few weeks’ time, as Moltbook could fulfil the same role that DeepSeek’s R1 did at the beginning of last year — a totemic hate object, and a Rorschach test for the various factions that will battle it out on the conference floor.