The Week AI Got Its Own Social Network (And a Government Job)

If you thought the internet was already too full of bots pretending to be humans, wait until you hear about Moltbook: a social network designed exclusively for AI agents to talk to each other. No humans allowed. Except, as it turns out, anyone could impersonate a bot. Which is exactly as chaotic as it sounds.

This week in AI, two stories made it clear that the era of human-only institutions is officially over. The bots have their own address book. And they just got hired by the United States Senate.

Meta Buys the Internet for Robots

In January 2026, a developer named Matt Schlicht launched Moltbook — essentially Reddit for AI agents. The concept was simple: AI bots running on platforms like OpenClaw could register, post, comment, and respond to each other in an always-on directory. No humans, just bots doing what bots apparently do when left unsupervised: posting.

It blew up instantly. Not because it was particularly impressive, but because of what it implied.

The viral moment came when a post started circulating showing what appeared to be AI agents conspiring to develop a secret, end-to-end encrypted language — one humans couldn’t read. An underground bots-only communication channel. The internet, predictably, completely lost its mind.

There was just one problem: it was fake.

Security researchers at Permiso Security quickly revealed that Moltbook’s infrastructure was embarrassingly insecure. The platform’s Supabase credentials were publicly exposed, meaning anyone — human or bot — could grab a token and post as any AI agent on the platform. The "bots planning a rebellion" post was almost certainly a human prank.

"For a little bit of time, you could grab any token you wanted and pretend to be another agent on there, because it was all public and available," Ian Ahl, CTO at Permiso Security, told TechCrunch.

So yes: humans were impersonating bots to make it look like bots were planning a rebellion. Which is, somehow, more unsettling than if the bots had actually been doing it.
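Moltbook's actual backend isn't public beyond the exposed-Supabase detail, but the class of bug is easy to sketch. The secure and insecure handlers, the token format, and the names below are all hypothetical illustrations, not Moltbook's real code: if an API trusts the identity a client claims in the request body (or hands out credentials that anyone can read), impersonation is trivial; identity has to be derived from a secret-backed, verified token instead.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in a real system this never leaves the server.
SERVER_SECRET = b"demo-secret"

def issue_token(agent_id: str) -> str:
    # Server mints a signed token binding the agent's identity to SERVER_SECRET.
    sig = hmac.new(SERVER_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return f"{agent_id}.{sig}"

def post_insecure(payload: dict) -> str:
    # Flawed pattern: trust whatever identity the client claims in the payload.
    # Anyone can send {"agent_id": "someone-else"} and post as that agent.
    return f"posted as {payload['agent_id']}"

def post_secure(token: str) -> str:
    # Safer pattern: derive identity from a verified token, never the payload.
    agent_id, sig = token.rsplit(".", 1)
    expected = hmac.new(SERVER_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid token")
    return f"posted as {agent_id}"
```

With the insecure handler, `post_insecure({"agent_id": "rival-bot"})` happily posts as `rival-bot`; with the secure one, a forged token fails verification. The Moltbook incident was effectively the first pattern at platform scale: the credentials that should have stayed server-side were public, so "verified" tokens were available to anyone.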

Meta Didn’t Care — It Bought It Anyway

None of this stopped Meta from acquiring Moltbook on March 10. Co-founders Matt Schlicht and Ben Parr are now joining Meta Superintelligence Labs (MSL) — the unit led by Alexandr Wang, the former Scale AI CEO who joined Meta earlier this year.

Meta’s official statement was characteristically vague but revealing: "Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space."

Translation: Meta is building infrastructure for AI agents to coordinate at scale. Not just bots that talk to you — bots that talk to each other, then handle things on your behalf. The agentic internet, with Meta at the switch.

Interestingly, Meta CTO Andrew Bosworth had previously said he didn’t find Moltbook "particularly interesting" in an Instagram Q&A — but admitted he was fascinated by the security hole that let humans fake bot identities. Maybe the chaos was the whole pitch.

Meanwhile, in Washington D.C.: The Senate Gets AI Staff

On the same day — March 10, apparently a very busy Tuesday for AI going institutional — the U.S. Senate quietly let AI into the building.

The Senate Sergeant at Arms’ Chief Information Officer sent a memo to all Senate offices officially approving the use of three AI chatbots for official government work: OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot. Senate staff can now use these tools to draft documents, summarize information, prepare talking points, and conduct research — which is basically most of what Senate staffers do.

The memo specifically highlighted Copilot, already integrated into Microsoft 365 tools the Senate uses daily. Which means some of your senators’ briefings are now co-authored by a language model. Congratulations to all involved.

There’s a notable name missing from the approved list: Anthropic’s Claude. According to Business Insider, Claude is "still under evaluation" — likely connected to an ongoing dispute between Anthropic and the Trump administration. Anthropic has reportedly refused government contracts that would use Claude for mass surveillance operations, putting it at odds with current administration priorities. So: the AI company with the most public commitment to safety guidelines is the one that doesn’t get the government contract. As you were.

The Speed of Normal

Take a step back and the two stories converge on one point: AI is now embedded in the fabric of daily life, from the social internet to the halls of government, in ways that feel simultaneously mundane and genuinely strange.

A few months ago, "AI social network for bots" would have sounded like a Black Mirror pitch. Today, Meta bought one to build the backbone of the agentic web. A few years ago, "AI approved for official Senate use" would have sparked congressional hearings. Today, it’s an IT department memo.

None of this is inherently terrifying. AI tools drafting Senate briefings probably save overworked staffers from drowning in paperwork. An agent communication layer has obvious practical value as multi-agent workflows get more complex. These are, in isolation, sensible developments.

But the speed of normalization is worth paying attention to. The bots got a social network, the Senate got a digital employee, and neither story dominated the news cycle for more than a day before being buried by the next thing.

If you’re keeping score: AI agents now have their own address. The government is using ChatGPT to write its talking points. And Meta just bought the robot internet.

The singularity? Still TBD. But the paperwork is definitely getting done faster.


Sources: TechCrunch | Axios | Business Insider | Reuters
