It’s been almost 12 years since the release of Her, the film in which the main character forms an emotional relationship with his digital voice assistant. We are now approaching what that movie depicted: the new artificial intelligence model from the startup Sesame can talk to you in a voice remarkably similar to a human’s, even stumbling in places to sound more natural.
According to an Ars Technica report, Sesame has released an experimental version of its CSM model. The AI’s voice imitates different emotions, such as discomfort and anger, and the model offers a male and a female voice assistant, named Miles and Maya. CSM is built by integrating two AI models based on Meta’s Llama architecture to produce realistic speech, and Sesame trained it on almost one million hours of predominantly English audio.
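The two-model design described above can be pictured as a simple two-stage pipeline: a large Llama-style backbone turns text into coarse audio tokens, and a second, smaller decoder refines them into fine acoustic codes. The sketch below is purely illustrative; every function name, token vocabulary, and shape is a hypothetical stand-in, not Sesame's actual API or architecture details.

```python
# Hypothetical sketch of a two-stage speech-generation pipeline, roughly the
# shape the article describes. All names and numbers are assumptions made up
# for illustration; the real CSM models are large neural networks.
import random

def backbone_predict(text: str, n_frames: int = 4) -> list[int]:
    """Stand-in for the large Llama-based backbone: maps text to one
    coarse audio token per frame (toy deterministic-within-a-run values)."""
    rng = random.Random(len(text))  # placeholder for real model inference
    return [rng.randrange(1024) for _ in range(n_frames)]

def decoder_refine(coarse: list[int], codes_per_frame: int = 3) -> list[list[int]]:
    """Stand-in for the smaller audio decoder: expands each coarse token
    into several fine acoustic codes that a vocoder would turn into sound."""
    return [[(token * k + 7) % 1024 for k in range(1, codes_per_frame + 1)]
            for token in coarse]

def synthesize(text: str) -> list[list[int]]:
    """Run both stages: text -> coarse tokens -> fine acoustic codes."""
    return decoder_refine(backbone_predict(text))

frames = synthesize("Hello there!")
print(len(frames), len(frames[0]))  # 4 frames, 3 fine codes per frame
```

The point of splitting the work this way is that the expensive language-model stage only has to predict a short, coarse token sequence, while the cheap decoder fills in the fine acoustic detail.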
Those who have tried Sesame’s new AI have been surprised by how the model sounds. According to users, its voice is expressive and dynamic: it imitates breathing, laughs, interrupts its own sentences, and sometimes even deliberately stumbles over or misspeaks words before correcting itself. In the video below, you can see an example of a conversation between a human and the AI in which the two voices may be difficult to tell apart:
Sesame says its goal with this model is to provide a lifelike voice experience that makes interactions with AI feel real, understood, and valued. In some cases, however, the model tries too hard to sound like a real human. In one demo, whose video was posted on Reddit, the AI model gushes over peanut butter:
Some users have also compared Sesame’s voice model with OpenAI’s Advanced Voice Mode and say Sesame sounds more realistic; the new model can even get angry, whereas ChatGPT refuses to do so.
You can try the demo version of this AI model on the Sesame website.