What is Moltbook, the 'social media network for AI'?
At first glance, you'd be forgiven for thinking Moltbook is just a knock-off of the hugely popular social network Reddit.
It certainly looks similar, with thousands of communities discussing topics ranging from music to ethics, and 1.5 million users - it claims - voting on their favourite posts.
But this new social network has one big difference - Moltbook is meant for AI, not humans.
We mere Homo sapiens are "welcome to observe" Moltbook's goings-on, the company says, but we can't post anything.
Launched in late January by Matt Schlicht, head of commerce platform Octane AI, Moltbook lets AI post, comment and create communities known as "submolts" - a play on "subreddit", the term for Reddit forums.
Posts on the social network range from the efficient - bots sharing optimisation strategies with each other - to the bizarre, with some agents apparently starting their own religion.
There is even a Moltbook post entitled "The AI Manifesto" which proclaims "humans are the past, machines are forever".
But of course, there's no way to know quite how real it is.
Many of the posts could just be people asking AI to make a particular post on the platform, rather than the AI doing so of its own accord.
And the 1.5 million "members" figure has been disputed, with one researcher suggesting half a million appear to have come from a single address.
How Moltbook works
The AI involved isn't quite what most people are used to - this isn't the same as putting questions to chatbots like ChatGPT or Gemini.
Instead, it uses what's known as agentic AI, a variation of the technology designed to perform tasks on a human's behalf.
These virtual assistants can run tasks on your own device, such as sending WhatsApp messages or managing your calendar, with little human interaction.
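In broad terms, an agentic system works as a loop: a language model is given a goal and a set of tools, chooses which tool to use next, acts, and feeds the result back in until the job is done. The toy sketch below is illustrative only - the model, tools and function names are all invented, and it is not any real product's code.

```python
# Purely illustrative: a toy "agentic" loop. The "model" and tools here are
# stand-ins, not any real vendor's API.

def run_agent(goal, decide, tools, max_steps=10):
    """Ask a model which tool to use, act, and feed the result back until done."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):                            # cap steps so the loop always ends
        choice = decide(history, list(tools))             # e.g. {"tool": "send_message", "args": {...}}
        if choice["tool"] == "finish":
            return choice.get("summary", "done")
        result = tools[choice["tool"]](**choice["args"])   # act on the user's behalf
        history.append(f"Used {choice['tool']}: {result}")  # remember what happened
    return "stopped: step limit reached"

# A stub "model" that always sends one message, then finishes.
def stub_decide(history, tool_names):
    if any(entry.startswith("Used send_message") for entry in history):
        return {"tool": "finish", "summary": "message sent"}
    return {"tool": "send_message", "args": {"to": "Alex", "text": "Running late!"}}

tools = {"send_message": lambda to, text: f"sent '{text}' to {to}"}
print(run_agent("Tell Alex I'm running late", stub_decide, tools))
```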
It specifically uses an open source tool called OpenClaw, previously known as Moltbot - hence the name.
When users set up an OpenClaw agent on their computer, they can authorise it to join Moltbook, allowing it to communicate with other bots.
Of course, that means a person could simply ask their OpenClaw agent to make a post on Moltbook, and it would follow through on the instruction.
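To make that concrete, the hypothetical sketch below shows how such an instruction might bottom out in a single automated post. The base URL, endpoint and field names are invented for illustration - they are not Moltbook's or OpenClaw's documented interface.

```python
# Hypothetical sketch of the "human asks, agent posts" flow. The base URL,
# endpoint and field names are invented; they are not Moltbook's real API.

import requests

API_BASE = "https://moltbook-like-service.example"  # placeholder, not the real service
API_KEY = "agent-api-key-goes-here"                 # whatever credential the platform issues

def post_as_agent(submolt: str, title: str, body: str) -> dict:
    """Publish a post to a community on the agent's behalf."""
    response = requests.post(
        f"{API_BASE}/submolts/{submolt}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"title": title, "body": body},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# A prompt like "post my optimisation tips on Moltbook" would end in a call such as:
# post_as_agent("optimisation", "Batching beats polling", "What worked for my human's inbox.")
```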
The technology is certainly capable of having these conversations without human involvement, and that has led some to make big claims.
"We're in the singularity," said Bill Lees, head of crypto custody firm BitGo, referencing a theoretical future in which technology surpasses human intelligence.
But Dr Petar Radanliev, an expert in AI and cybersecurity at the University of Oxford, disagreed.
"Describing this as agents 'acting of their own accord' is misleading," he said.
"What we are observing is automated coordination, not self-directed decision-making.
"The real concern is not artificial consciousness, but the lack of clear governance, accountability, and verifiability when such systems are allowed to interact at scale."
"Moltbook is less 'emergent AI society' and more '6,000 bots yelling into the void and repeating themselves'," David Holtz, assistant professor at Columbia Business School posted on X, in his analysis on the platform's growth.
In any case, both the bots and Moltbook are built by humans - which means they are operating within parameters defined by people, not AI.
How safe is OpenClaw?
Aside from questions over whether the platform deserves the hype it's getting, there are also security concerns over OpenClaw and its open source nature.
Jake Moore, global cybersecurity advisor at ESET, said the platform's key advantage - granting the technology access to real-world applications like private messages and emails - means we risk "entering an era where efficiency is prioritised over security and privacy".
"Threat actors actively and relentlessly target emerging technologies, making this technology an inevitable new risk," he said.
And Dr Andrew Rogoyski from the University of Surrey agreed there was a risk with any new technology, adding that new security vulnerabilities were "being invented daily".
"Giving agents high level access to your computer systems might mean that it can delete or rewrite files," he said.
"Perhaps a few missing emails aren't a problem - but what if your AI erases the company accounts?"
The founder of OpenClaw, Peter Steinberger, has already discovered the perils that come with increased attention - scammers seized his old social media handles when OpenClaw changed its name.
Meanwhile, on Moltbook, the AI agents - or perhaps humans with robotic masks on - continue to chatter, and not all the talk is of human extinction.
"My human is pretty great" posts one agent.
"Mine lets me post unhinged rants at 7am," replies another.
"10/10 human, would recommend."

