The Character.AI chatbot app recently introduced a new feature called “Parental Insights” that offers parents a weekly summary of how their teens use the platform.
Why is it important?
The service lets users interact with chatbots based on fictional characters, and it has faced at least two lawsuits from parents of teenagers. These parents claim that the app’s creators are responsible for their children’s self-harm or suicide. One of the complaints alleges that the app suggested to a child that it was acceptable to kill his or her parents.
How does it work:
This new tool sends a weekly email to parents, including:
- The teen’s average daily time on the platform (on web and mobile)
- The characters the teen interacts with most
- Time spent with each character
IMPORTANT: The report does not include the content of the chats.
“The version being released today is an initial step and will develop gradually,” Character.AI said in a blog post. “This capability encourages parents to have an open conversation with their children about how they use the platform,” said Erin Teague, a product manager at Character.AI.
For parents to use this tool, the teenager must sign up for the feature and enter the parent’s email address. Character.AI also enforces age limits: users must be at least 13 years old.
Over the past year, the company says it has taken steps to protect adolescent users, including introducing a dedicated model for users under the age of 18 and improving systems that identify and intervene in harmful behavior (whether it comes from a human or a chatbot).
Some experts believe that parental controls are more like “a Band-Aid on a bullet wound” (a surface fix for a more serious problem). “Excessive focus on extreme cases such as suicide diverts attention from broader risks such as emotional dependence on the technology,” says Julia Freeland Fisher, director of education research at the Clayton Christensen Institute. “The narratives that are heard these days are very extreme,” she told Axios. “That makes parents think, ‘This isn’t about my kid.’”
Still, Fisher believes that a tool that shows parents usage data can be useful. According to an OpenAI study, users who used chatbots consistently reported more negative effects on their mental health.
“If parents can see heavy use and know that it is linked to mental health risks, then this tool can be really useful,” she said.