It’s been almost 12 years since the release of Her, the film in which the main character forms an emotional relationship with his voice-based digital assistant. We are now approaching what that movie depicted: the new AI model from the startup Sesame can talk to you in a voice strikingly close to a human’s, even hesitating in places to sound natural.
According to a report by Ars Technica, Sesame has released an experimental version of its Conversational Speech Model (CSM). In this model, the AI’s voice conveys different emotional tones, such as discomfort and anger. It offers a male and a female voice assistant, named Miles and Maya. CSM is built by combining two AI models based on Meta’s Llama architecture to produce realistic speech, and Sesame trained it on nearly one million hours of predominantly English audio.
SESAME AI’S HUMAN-LIKE CONVERSATION
Those who have tried Sesame’s new AI have been surprised by how natural it sounds. According to users, the model’s voice is expressive and dynamic: it imitates breathing, laughs, interrupts its own sentences, and sometimes even stumbles over words, deliberately mispronouncing them and then correcting itself. In the video below, you can see an example of a conversation between a human and the AI in which the two voices may be difficult to tell apart:
Sesame says its goal with this model is to provide a lifelike voice experience, making spoken interactions with AI feel real, understood, and valuable. In some cases, however, the model tries too hard to sound like a real human. In one demo video posted on Reddit, the AI model goes on about peanut butter:
Some users have also compared Sesame’s voice model with OpenAI’s Advanced Voice Mode and say Sesame’s voices sound more realistic; the new model can even get angry, whereas ChatGPT refuses to do so.
You can try a demo of this AI model on the Sesame website.
RCO NEWS