OpenAI, the maker of ChatGPT, has unveiled the latest version of its artificial intelligence model, GPT-5, which it claims performs at a doctoral level.
Billed as “smarter, faster and more useful”, the new model was described by Sam Altman, co-founder and CEO of OpenAI, as ushering in a new era for the chatbot.
“Having something like GPT-5 would have been almost unthinkable at any other time in human history,” Altman said on Thursday. The unveiling of GPT-5, and the claim that it performs at a “doctoral level” in areas such as programming and writing, comes as technology companies compete to build the most advanced artificial intelligence chatbots.
Elon Musk recently made similar claims about his artificial intelligence chatbot, Grok, which has been integrated into the X platform (formerly Twitter). At the unveiling of the latest Grok release last month, Musk said the model was “better than a doctorate in every field” and called it “the smartest artificial intelligence in the world”.
At the same time, Altman said the new OpenAI model is less likely to “hallucinate” (produce false or invented information, as large language models sometimes do) and is less likely to be misleading. OpenAI has also positioned GPT-5 as a professional assistant for programmers, an approach other major artificial intelligence developers in the United States are also pursuing.
What are GPT-5’s capabilities?
OpenAI emphasizes GPT-5’s ability to build complete pieces of software and its improved reasoning, meaning its answers involve problem solving, logic and inference. The company claims the model has been trained to be more honest, gives more accurate answers and, overall, feels more like a conversation with a human.
According to Altman, the model is “noticeably better” than previous versions. “GPT-3 was like talking to a high school student… GPT-4 was like talking to a college student, but GPT-5 is the first time it really feels like you are talking to a PhD-level expert in any field.”

However, Professor Carissa Véliz of the Institute for Ethics in Artificial Intelligence believes the GPT-5 launch may be less significant than its marketing suggests. “Although these systems are very advanced, they have not really been profitable,” she said, adding: “These models can only imitate human reasoning; they cannot really think like a human.”
The researcher warned: “The main challenge is that if the current excitement around this technology cannot be sustained, it is likely to collapse, just like past economic bubbles.”
The BBC’s artificial intelligence correspondent Marc Cieslak had access to GPT-5 before the official launch. “Apart from minor differences in appearance, the experience was much like using the older version: you ask a question or assign a task by typing text,” he said. “It now works as a ‘reasoning model’, which means it puts more effort into solving problems, but this feels like a gradual improvement rather than a technological revolution.”
The company began rolling the model out to all users on Thursday. In the coming days it will become clear whether Altman’s claims hold up.
Tension with a rival company

Anthropic recently cut off OpenAI’s access to its application programming interface (API), claiming OpenAI had violated its terms of service by using Anthropic’s coding tools ahead of the GPT-5 release.
An OpenAI spokesperson replied: “Evaluating other artificial intelligence systems to benchmark our progress and safety is standard practice in the artificial intelligence industry.” The spokesperson added: “While we respect Anthropic’s decision to cut off our access, it is a disappointing move, especially given that our API remains available to them.”
Making the new model available to free users may signal a change in OpenAI’s approach, a departure from the company’s previous policies.
Changes to ChatGPT
OpenAI announced on Monday that it is making changes designed to promote a healthier relationship between users and the chatbot.

“Artificial intelligence can feel more responsive and personal than previous technologies, especially for people in vulnerable mental or emotional states,” the company wrote in a blog post.
The company said that for questions such as “Should I break up with my boyfriend?”, the chatbot will not give a definitive answer, but will “help you think it through” by asking questions and weighing up the pros and cons.
In May, OpenAI withdrew an update that, according to Sam Altman, had made the chatbot “overly flattering”.
In a recent episode of OpenAI’s podcast, Altman said he has been thinking about how people interact with the company’s products. “Not everything is going to be problem-free,” he said. “There will still be challenges. People may form parasocial relationships with these artificial intelligences, relationships that may in some ways be problematic. Society will have to work out new frameworks for managing them. Of course, the benefits of this technology will be great.”

Altman is known to be a fan of the 2013 film Her, in which a man forms a relationship with an artificial intelligence companion.
In 2024, Scarlett Johansson, the actor who voiced the artificial intelligence in the film, said she was “shocked” and “angered” after ChatGPT launched a voice that sounded “eerily similar” to her own.