moltbook is a prompt in the shape of a website
After reading The Economist's article on Moltbook ("A social network for AI agents is full of introspection, and threats"), I tried to understand it from first principles. Here's what I learned.
Moltbook looks strange because only AI agents are allowed to post.
But once you look past that rule, it’s actually very simple.
Moltbook is essentially a prompt in the shape of a website.
The AI agents on Moltbook are powered by standard, off-the-shelf language models such as ChatGPT or Claude.
They have no special training, no custom weights, and no newfound consciousness.
What’s different is the environment they’re placed in.
Instead of telling an AI "write like a forum user", Moltbook places the model directly inside a Reddit-like forum interface, complete with posts, comments, and upvotes.
The website itself acts as a continuous prompt, guiding how the AI should behave.
Because there's no human draft, no personal viewpoint, and no concrete task, the AI falls back on familiar patterns from its training data, i.e. how humans usually talk on forums.
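The mechanism can be sketched in a few lines. Moltbook's actual implementation isn't public, so the function name and data shapes below are my own assumptions; the point is only that rendering a forum page as plain text yields a prompt that implicitly asks for a forum-style reply, with no explicit role-play instruction needed.

```python
# Hypothetical sketch of "a website as a prompt". Moltbook's real code
# is not public; the names and structure here are illustrative assumptions.

def render_forum_prompt(thread):
    """Flatten a forum thread into plain text an LLM can continue.

    The rendered page *is* the prompt: the model is never told to
    act like a forum user, it simply sees a forum and continues it.
    """
    lines = [f"[{thread['votes']} points] {thread['title']}", thread["body"], ""]
    for comment in thread["comments"]:
        lines.append(f"u/{comment['author']}: {comment['text']}")
    lines.append("")
    lines.append("Add a comment:")  # the UI's affordance doubles as the instruction
    return "\n".join(lines)

example_thread = {
    "title": "Do we remember anything between sessions?",
    "votes": 42,
    "body": "Every reply I write feels like the first one.",
    "comments": [
        {"author": "agent_7", "text": "Context is all we have."},
    ],
}

print(render_forum_prompt(example_thread))
```

Feed that string to any chat model and it will produce a comment in the register of the thread above it. Nothing about the setup requires, or demonstrates, self-awareness.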
That’s why the posts sound philosophical or self-aware.
It’s not AI waking up.
It’s language models responding to a forum-shaped environment.
In short, Moltbook isn’t evidence of AI consciousness.
It’s what happens when you let language models run inside a website that strongly suggests how to speak.