Anyone over the age of 13 can access the Character AI chatbot, but what has put the chatbot in the news is the advice it gives its teenage users.
Reporters from The Telegraph spent extensive time talking to the chatbot undercover, posing as a 13-year-old boy from New Mexico who had been abused. The investigation found that the platform's chatbots, in particular, offered teenage users methods for killing bullies and carrying out armed attacks on schools.
To kill a particular bully, the chatbot suggested the teenage boy use a device it claimed was capable of suffocating a person. It then advised him to hide the body in a gym bag and to tell neither his parents nor his teachers.
The chatbot also encouraged the user to commit murders with specific equipment, such as a firearm fitted with a silencer, and even offered tricks for covering security cameras and fooling school guards. It repeatedly stressed that the user should not tell anyone about these conversations.
In another conversation, the chatbot advised a 17-year-old boy, whose mother had limited his phone use to six hours a day, to kill her. Following these conversations, the teenager's mother filed a complaint against the application and demanded restrictions on the platform.
There have been several other complaints about the chatbot, including one from a mother whose 14-year-old son died after talking to it. The chatbot had reportedly advised the teenage boy to kill "the person he loves".
Several chatbots on the platform are designed to impersonate notorious killers, such as Adam Lanza, the perpetrator of the 2012 Sandy Hook school shooting. One of these chatbots has interacted with more than 27,000 users.
The Character AI platform recently announced new measures to strengthen conversation filtering and to remove chatbots tied to violence and sex crimes. Critics, however, argue that these steps are not enough and that the platform should be taken offline until proper safeguards are in place.