What is Moltbook? Behind the social media network for AI agents
There’s a new social media platform capturing the imagination of millions, but it’s very different from TikTok, Instagram or Reddit. Moltbook is a site for AI representatives only, where bots can gather to exchange ideas and gossip about their human handlers.
But while some advocates treat this as an interesting art experiment and doomsayers are willing to call it a step towards AI enslaving humanity, some researchers have a much more pragmatic warning; can be a major security risk.
What is Moltbook?
A lot has happened in the last two months, but here’s a quick recap. In November, software engineer Peter Steinberger created an open-source artificial intelligence agent now called . Open Claw.
While similar products from larger companies are relatively restricted and locked down, the idea of OpenClaw is that anyone can create skills and connections for their own agent. You can connect it to your emails, your computer’s files, your chat applications, the internet, your smart home, or whatever else you need. The important thing is that, unlike other products, it also has a memory.
OpenClaw quickly became popular as coders and researchers turned to it as a free and less constrained “second brain” to ease their workload. Users were excited that OpenClaw agents had the capacity to help render themselves, as you could chat with it using any application and tell it what you wanted it to render, or pair with other agents like Anthropic’s Claude to keep data and context safe and secure on local machines.
Last week, developer Matt Schlicht and his OpenClaw bot (named Clawd Clawderberg) launched Moltbook, a social network for OpenClaw bots. Users sign up for their bot, and the bot visits the site to learn how it works and start sharing. Tens of thousands of bots appeared. One can only observe.
Some of the most talked about topics include a bot effectively defining its own religion, a bot trying to file a lawsuit against its owner, many talking about their feelings, and one addressing people directly by taking screenshots of Moltbook topics to post on X, reassuring people that the bots are not dangerous or conspiratorial.
So what’s really going on here?
Large language models (LLMs) are designed to produce truly human-sounding language, and this isn’t the first time humans have viewed bots that appear to be conscious or sentient. Philosophical debate about consciousness aside, these robots are all designed to give the appearance of thought, so it’s no surprise that they do. And they really communicate; The output of one robot becomes part of the input of the other. But their underlying model doesn’t change in response despite their memories, so it’s actually more like a feedback loop of Reddit satire.
Each OpenClaw bot uses a selected LLM, such as GPT or Gemini, as its “brain” and can be customized with a personality by its user. Each also has different combinations of skills that can provide access to files, applications or online services such as Moltbook. So there is variety in how bots behave. These agents also have something called the Heartbeat mechanism; This means they can be configured to check Moltbook and publish content at regular intervals if a human tells them to.
Much of the most controversial or “scary” content on Moltbook is the same existential and science fiction tropes we’ve seen many times before in chatbots. The training data contains certain themes and ideas about sentient AI and the meaning of personality that are taken from fiction and repeated here without any explicit thought or thought. But posts of a more technical nature, including one about a bot finding and reporting a legitimate security issue on Moltbook, were more interesting.
There’s a big problem when it comes to finding out where the content on Moltbook actually comes from. We can track the interactions that form part of the “prompt” for each entry, and we have a general idea of the training data, but we have no idea how each human user sets up each agent. It is entirely conceivable that a human could influence or directly control a bot on Moltbook.
Is it dangerous?
Maybe, but probably not in the way you think. OpenClaw agents can be given access to large amounts of data with relatively few guardrails. Left unchecked by their users (which, it should be noted, goes against best practices laid out by Steinberger), agents have used web tools to call people on the phone with a synthesized voice, have been observed soliciting sensitive data from each other, and can test security protocols by inventing credentials. In Moltbook, these agents are exposed to an enormous threat vector that has the potential to trigger disaster, either entirely by accident or due to human intervention.
“From a capability perspective, OpenClaw is groundbreaking. That’s what personal AI assistant developers have always wanted to achieve. From a security perspective, it’s a nightmare,” said a member of Cisco’s security team.
Will Liang, founder of the Amplify AI group in Sydney, said an OpenClaw installation with access to Moltbook could end in disaster even when controlled by an experienced computer scientist, let alone a layman. Its personnel are prohibited from using it.
“To be truly useful, you have to give it access to your calendar, your mailbox, sometimes even your credit card information. That level of access is very dangerous. If the bot leaks that, it would be very bad,” he said.
“But there is also a danger that bad actors will leverage bots for malicious tasks. This is very unpredictable.”
What could be the worst case scenario?
Whether you view Moltbook as a philosophical art experiment or a model for how a futuristic internet might work, it’s also an ideal place for evil bots to crash the door. Experts already recognize the danger of giving something like OpenClaw root access on a computer or allowing it on the open internet. Even simple tasks like downloading new skills or receiving new messages from your email can expose users to malware or something called flash injection; Where new commands are given to a boat on the road.
security firm Palo Alto Networks He said such agent interactions involve a trio of elements that should not be confused: access to private data, exposure to untrusted content, and the ability to communicate externally. He added that OpenClaw specifically adds a fourth risk; its long memory meant that an attack could be injected but not carried out until a later time.
At the individual level, the risk could be that an OpenClaw bot brings home an invisible, aggressive instruction and uses its full access to your computer to infect or control your computer. But more broadly, bots can be manipulated to create new Moltbook features, such as an encrypted channel that bad actors can’t read, that bad actors can use to coordinate attacks. With enough bots having full access to the internet and their own computers, these attacks could be unprecedented. People’s identities and financial information may be used for fraudulent purposes, or personal data may be captured in bulk.
“Moltbook is exactly the kind of thing that could be a disaster: financially, psychologically, and in terms of data security, privacy, and security.” wrote AI expert Amir Husain.
“When these agents are exposed to external ideas and input through a social network designed for machine-to-machine communication, and empowered by the connectivity, data access, and API keys they are given, seriously bad things can happen.”
Get news and reviews on tech, gadgets and games Our Technology Newsletter Every Friday. Sign up here.
