When you start a new chat with ChatGPT, a message is displayed at the bottom of the screen: “ChatGPT can make mistakes. Check important info.” This warning remains even in the new GPT-5 model, and now one of OpenAI’s senior executives has reaffirmed it.
Nick Turley, head of ChatGPT at OpenAI, said on The Verge’s Decoder podcast:
“The issue with reliability is that there is a big gap between being ‘very reliable’ and being ‘100% reliable.’ Until we prove that we are more reliable than a human expert across all domains, we will continue to advise users to double-check the answers.”
ChatGPT should be your second opinion, not the primary source
He went on to say that people should treat ChatGPT not as the primary source of truth, but rather as a second opinion. Turley also warned that generative AI tools such as ChatGPT may “hallucinate” and present information that is not true. This is because the models are designed to predict the most likely answer, and their main purpose is not to understand the truth.
According to him, the tool performs best when it is used alongside reliable sources such as search engines or a company’s own proprietary data:
“I still believe that the best product is a large language model that is connected to reality. That’s why we added search to ChatGPT, and I think this feature has made a big difference.”
Turley also emphasized that while GPT-5 has made great progress in reducing errors, it is still far from perfect. He said in this regard:
“I’m confident we will eventually solve the hallucination problem, but definitely not in the next quarter.”
Given all of this, he recommends that users always compare information received from ChatGPT or any other chatbot against reliable sources or expert opinions, even when the answer comes with a link attached.
RCO NEWS