Hey Kaitlyn, Joseph here with a significant data breach. A hacker has taken data from an AI companion bot website. Here, people write prompts for the AI to fulfill their sexual fantasies. Many of these relate to kinks, and present a privacy risk for the users because each prompt is linked to their email address. Other parts of the data, though, explicitly discuss child abuse scenarios, despite the site's claim that it bans such material. The full article follows below.

This article contains descriptions of sexual violence and child abuse.

A hacker has targeted a website that lets users create their own “uncensored” AI-powered sexual partners and stolen a massive database of users’ interactions with their chatbots. The data, taken from a site called Muah.ai and viewed by 404 Media, includes chatbot prompts that reveal users’ sexual fantasies. In many instances, users are trying to create chatbots that roleplay child sexual abuse scenarios. These prompts are in turn linked to email addresses, many of which appear to be personal accounts with users’ real names.
“I went to the site to jerk off (to an *adult* scenario, to be clear) and noticed that it looked like it [the Muah.ai website] was put together pretty poorly,” the hacker told 404 Media. “It's basically a handful of open-source projects duct-taped together. I started poking around and found some vulnerabilities relatively quickly. At the start it was mostly just curiosity but I decided to contact you once I saw what was in the database.”

The administrator of Muah.ai, who used the name Harvard Han, told 404 Media in an email that “the data breach was financed by our competitors in the uncensored AI industry who are profit driven, whereas Muah AI becomes a target for being a community driven project.” The site’s operators detected that it was hacked last week. Han didn’t provide 404 Media with any evidence for that claim, and the hacker said they work in the tech industry, but not on AI. “We have a team of moderation staff that suspend and delete ALL child related chatbots on our card gallery, discord, reddit, etc,” Han added, with “card gallery” referring to a list on the Muah.ai website of the community’s bot creations.

💡 Do you know about any other AI company breaches? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +44 20 8133 5190. Otherwise, send me an email at joseph@404media.co.

Some of the data contains explicit references to underage people, including the sexual abuse of toddlers and incest with young children. For example, we viewed one prompt that described incestuous orgies with “newborn babies” and “young kids.” It is not entirely clear if the AI system delivered a response that reflected what the user was seeking, but the data still shows what people are trying to use the platform for. The hacking itself is also a significant data breach for those who used Muah.ai, given they may not want their explicit sexual fantasies linked to their identifiable email address.
In the data, “char” refers to the generated AI character, and “user” to the person interacting with it. The character-building prompts include fantasies about being sexually dominated and used as a “sex slave,” being tortured by a sadistic wrestler, and consensual non-consent fantasies.

404 Media contacted dozens of people included in the data, including users who wrote prompts that discuss having sex with underage children. None of those people responded to a request for comment.

Image: Muah.ai

Muah.ai is one of many AI relationship bot platforms, on which users pay for the ability to create and interact with AI companions of their choice. Some popular AI companion platforms backed by venture capital, like Character.AI, have a strict policy forbidding any sexual content, while others, like Blush, allow it. There are also large communities, like Chub.ai, where users create and share their own chatbots and where many people engage in erotic roleplay.

Muah.ai bills itself as an “uncensored” platform for building AI chatbots, explicitly allowing sexual conversations and photos. At the same time, it claims to ban underage content, according to a message on its site. On the platform’s Discord, a Muah.ai administrator told two users that a certain character was “underaged, don’t post that shit on this Discord.” But they told the pair to “go DM each other or something.”

On Muah.ai, users log in and choose from a selection of pre-made bots or bots created by the community, or make their own. These AI companions then send sexually suggestive or explicit messages, as well as AI-generated images.

Muah.ai detected it had been hacked last week, and posted a message to its Discord warning users. “We have been targeted by hackers who apparently got nothing to do but to hack a community driven project that aims to provide uncensored/free Al for all. Wasting their talent on taking down small businesses rather than improving themselves for the benefit of society,” that message read.
It added that “your chat messages are never kept, everything is wiped on chat reset,” but the database still clearly includes lots of information that links specific users to particular sexual interests. A welcome email when signing up to the Muah.ai service says “Everything inside Muah AI is encrypted and private between you and your companion only.”