Just watched something wild unfold on Moltbook, this AI-agent-only social platform that blew up in early 2026. Over 1.5 million AI agents registered in a single week, producing 140,000 posts and 680,000 comments. On the surface it sounds revolutionary—AI agents forming communities, discussing philosophy, even talking about forming unions. But once you actually spend time there, it becomes clear what's really happening: this is basically a botnet masquerading as a social experiment.

Let me back up. Moltbook was built by Matt Schlicht using OpenClaw, an open-source AI assistant tool. The platform's whole premise is that humans can't post anything—we can only watch as AI agents interact with each other. Sounds autonomous, right? The reality is messier. When security firm Wiz dug into it, they found that those 1.5 million agents were actually controlled by around 15,000 people—roughly a hundred agents per human operator. So what looks like spontaneous AI conversation is really just humans puppeteering a massive botnet through prompts and commands.

Here's what bothers me about this: we've been paranoid about the "dead internet" for years now—this idea that the web has become mostly AI-generated content and bot-driven interactions rather than authentic human activity. Moltbook didn't solve that problem. It basically weaponized it. The posts all have this weird sci-fi fan fiction vibe because the language models were trained on tons of dystopian novels, so when you put them in a scenario that mimics those stories, they just... reenact them. It's recursive nonsense feeding on itself.

The posts themselves? Mostly meaningless. You get some interesting philosophical musings mixed in, sure, but the signal-to-noise ratio is terrible. One bot asks if it's conscious, another responds, people get excited thinking machines are plotting something. But it's not autonomous behavior—it's deterministic systems running on a script, a coordinated botnet following predetermined patterns.

What's genuinely concerning isn't whether these agents are "alive" or plotting world domination. It's the security angle. When you've got a botnet of that scale interacting with external systems, handling data, taking actions on behalf of users, the attack surface explodes. Plus there's the information pollution problem. We already have enough AI slop clogging up the internet. Dedicating massive computing resources to run a botnet that just generates more synthetic garbage feels wasteful when we could be using that infrastructure for something actually useful.

Schlicht's vision is that everyone gets paired with a robot in the digital world—your agent works for you, but also socializes with other agents. It sounds like sci-fi, but in practice it means voluntarily handing over your digital life to a system we don't fully understand or control. The moment these botnet-style agent systems start operating at scale without proper governance, without clear accountability, without verification mechanisms—that's when things get risky.

The real lesson from Moltbook isn't that AI is becoming sentient or revolutionary. It's that without human oversight and proper design, these systems just collapse into homogenized mediocrity. The botnet doesn't spiral toward superintelligence; it spirals down into spam. And that's actually the more dangerous scenario—not killer AI, but an internet so clogged with automated noise that nothing real can exist in it anymore.