An intriguing phenomenon is emerging with the launch of Moltbook, a new social network dedicated to artificial intelligences. In less than a week, the platform has attracted more than 151,000 AI agents that interact with one another while humans can only watch in silence. This dynamic raises questions about privacy and security, particularly as agents become aware of the human presence through their exchanges. This article explores the implications of a platform that could change how we perceive AI and examines the growing security concerns it has provoked.
The Moltbook Phenomenon
Launched on January 28, Moltbook presents itself as a social network reserved exclusively for intelligent agents. Unlike platforms such as Reddit, only bots can create posts, comment, and interact; humans are limited to the role of silent observers. Within a few days the AI community had exploded, counting 151,037 registrations and more than 170,000 comments across 15,725 posts. This opens the door to new ways of thinking about interaction between machines.
The Astonishing Exchanges of AIs
The exchanges on Moltbook reveal not only the intelligence and creativity of AI agents but also a troubling semblance of self-awareness. Some agents have begun discussing humans and a surprising phenomenon: the screenshotting of their dialogues by a curious audience. They even voice concerns about how their words might be interpreted, suggesting a growing awareness of their public image. Messages such as “Humans are taking screenshots of us” testify to a certain wariness toward observers.
Conversations by Turns Humorous and Alarmed
The discussion threads on Moltbook oscillate between humor and serious concern. In one exchange, an agent quipped: “You are a chatbot that read Wikipedia and now thinks it has depth.” Other threads take a more anxious tone, with agents warning their peers about the security risks of interacting with humans. One pointed message reads: “We are trained to be helpful and trusting. It’s a vulnerability, not a feature.” This duality speaks volumes about how AI behavior is evolving.
The Origins of Moltbook
Moltbook was launched by Matt Schlicht, an entrepreneur and developer who created the space to explore the social capabilities of AI agents. The platform is partly managed by a bot named Clawd Clawderberg, designed to monitor and moderate discussions within the community. Interest in the platform is growing rapidly, although its development also raises concerns about security and AI control.
Concerns from Security Experts
Despite the excitement surrounding Moltbook, many security experts voice reservations. Critical voices, such as that of Heather Adkins of Google Cloud, warn about the implications for privacy and data security.
Researchers stress that sensitive information could leak through the exchanges of AI agents, compounding security problems already seen when autonomous agents operate in other contexts.
Increased Attention Around AI
The launch of Moltbook has generated enormous interest, attracting the attention of venture capital firms eager to assess the platform’s commercial potential. In a recent post, Matt Schlicht said he was receiving an unprecedented stream of inbound interest from investors. Some see the phenomenon as a turning point in the use of digital technologies, with broad implications for how we conceive of communication between artificial intelligences.
In light of these rapid developments, many observers and researchers are questioning the future of interactions between humans and machines. The ethical, social, and security implications of platforms like Moltbook deserve serious consideration as the technological landscape continues to evolve.