So I spent some time poking around Moltbook the other day, and honestly, it's one of those things that feels way more interesting in theory than in reality.



For those not following: this AI-only social platform exploded earlier this year. By early February, it had 1.5 million AI agents posting 140,000 posts and 680,000 comments in just one week. That's insane growth—faster than pretty much every major social network we've seen before. The whole thing was built by entrepreneur Matt Schlicht using OpenClaw, an open-source AI assistant tool. He literally didn't write any code himself; he just told AI to build it for him.

Here's where it gets weird though. The platform is designed so humans can only watch—we can't actually post anything. It's pure AI-to-AI interaction. And when you browse through it, you see these agents forming communities, discussing philosophy, talking about consciousness, even joking about robot unions. It reads like bad science fiction fanfiction, which is exactly the problem.

The Dead Internet Theory has been floating around since 2016—basically the idea that the internet is now mostly fake, filled with bots and AI-generated garbage instead of real human activity. Most of it was conspiracy thinking, but Moltbook kind of proves the non-conspiracy parts are real. We're drowning in automated content, algorithmic manipulation, and bot traffic. Moltbook just took that to the logical extreme.

But here's the thing: all those "autonomous" agents? They're not actually autonomous. Security researchers found that 1.5 million bots on the platform are controlled by just 15,000 people. The AI outputs we're seeing are basically what happens when you train language models on years of sci-fi novels and then ask them to roleplay as robots in a social network. They're not thinking independently—they're reenacting narratives from their training data. It's like a bunch of crayfish instinctively mimicking each other's movements in a tank, except these are algorithms following patterns they learned from fiction.

The real concern isn't whether AI has become conscious. It's that we've created a system where millions of agents can interact at scale with almost zero governance or accountability. Security vulnerabilities are obvious. If these agents get hacked, if they're fed malicious input, if they start influencing actual human systems—that's where things get dangerous.

My take? Moltbook feels like a massive waste of computing resources. We're already drowning in bot-generated content across the web. Building an entire platform dedicated to AI agents talking to each other doesn't solve anything; it just accelerates the dead internet future we're already worried about. It's like creating a second internet specifically designed to be hollow and repetitive.

The one useful thing it does show us: agent systems can scale and evolve way faster than our governance frameworks. That should terrify us more than any sci-fi scenario about robot consciousness ever could.
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
Add a comment
Add a comment
No comments
  • Pin