Using AI-powered chatbots as friends and companions has recently become a widespread application, a scenario familiar from the 2013 film “Her.”
Muah.AI is a website where people can create AI companions: chatbots that talk to their users via text or voice and send pictures of themselves based on what they are asked to do. Nearly two million users have signed up for the service, which its developers describe as “censor-free” technology.
However, data allegedly stolen from the site suggests that people may be using its tools to produce child sexual abuse material, or CSAM.
Last week, Joseph Cox of 404 Media was the first to report on the data set, after an anonymous hacker obtained it.
What Cox found was deeply disturbing: he reviewed one prompt that included words such as “newborn babies” and “young children.”
This suggests that a user asked Muah.AI to respond to such scenarios, although it is unclear whether the app did so. Major AI platforms, including ChatGPT, use filters and other moderation tools designed to prevent content from being generated in response to such requests, but less prominent services typically have fewer safeguards.
People have already used AI software to generate sexually exploitative images of real people. Earlier this year, pornographic deepfakes of Taylor Swift circulated on X and Facebook, and child advocates have repeatedly warned that AI is now being widely used to create sexual abuse images of real children, a problem that has surfaced in schools across the country. The Muah.AI hack is one of the clearest and most public examples of this.
Troy Hunt, a well-known security consultant and creator of the data-breach-tracking site HaveIBeenPwned.com, received the Muah.AI data from an anonymous source. In reviewing it, he found many examples of users apparently trying to use the app to generate child sexual abuse material.
When he searched the data for the phrase “13-year-old,” he got more than 30,000 results, many of them alongside prompts describing sexual acts.
A search for “prepubescent” returned 26,000 results. He estimates that the data set contains tens of thousands, if not hundreds of thousands, of prompts to create CSAM.
Hunt was surprised to find that some Muah.AI users did not even try to hide their identities. In one case, he matched a user’s email address to the LinkedIn profile of a C-suite executive at a company.
“I looked at his email address and found that, incredibly, even this user’s last name was in it,” Hunt said. “There are many cases where people try to disguise their identity, but if you can piece it together, you know who they are. This man didn’t even try to hide his identity.”
Hunt said that CSAM is usually found in dark corners of the internet; the fact that this content turned up on a mainstream website is likely to surprise many people.
However, speaking with a Muah.AI founder about Hunt’s estimate of the volume of sexually abusive content did not yield a clear answer. Of the estimate, he said: “That’s impossible. How is it possible? Think about it. We have 2 million users. There is no way 5 percent of them are pedophiles.” It is possible, though, that a relatively small number of users are responsible for a large number of prompts.
When I asked him whether the data Hunt has is real, he initially said, “Maybe it’s possible. I’m not denying it.” But later in the same conversation, he said he was not sure.