Whisper is an artificial intelligence model for speech transcription that was unveiled by OpenAI in 2022. But reports now suggest the tool has serious problems following conversations and transcribing them accurately.
According to a report published by the Associated Press, software engineers, developers, and academic researchers have expressed serious concerns about the Whisper tool. Generative AI tools are generally prone to hallucination, but what makes Whisper's case unusual is that the problem shows up in transcription itself. Normally, when you use such a service, you expect the tool to transcribe exactly what is being said.
OpenAI’s Whisper tool cannot accurately transcribe speech
According to researchers who have used Whisper, the service inserts things like racial commentary and fictional medical treatments into its transcriptions, which can be dangerous. Many hospitals and medical centers now use the tool in clinical settings, where an inaccurate transcription could lead to disaster.
A University of Michigan researcher who studied transcripts of public hearings says Whisper hallucinated in 8 out of 10 of them. A machine learning engineer who examined more than 100 hours of Whisper transcriptions says more than half of them contained errors and hallucinations. Another developer claims to have found hallucinations in all 2,600 hours of transcriptions he generated with the tool.
In response to this news, an OpenAI spokesperson stated that the company is constantly working to improve the accuracy of its models and reduce hallucinations. The spokesperson added that OpenAI's policies prohibit the use of Whisper "in certain high-stakes decision-making contexts." Finally, OpenAI thanked the researchers for sharing their findings.
RCO NEWS