Last night, OpenAI unveiled two new artificial intelligence models, o3 and o4-mini. Both models reason through a question from multiple angles before responding, in order to produce the best possible answer.
The o3 model is billed as OpenAI's most advanced achievement in reasoning, outperforming the company's previous models on evaluations of math, coding, science, reasoning, and visual understanding. The o4-mini model, by contrast, gives developers an attractive option when they need to balance speed, cost, and capability.
Features of the o3 and o4-mini models
Unlike previous versions, both models can use the tools built into ChatGPT, such as web search, running Python code, image analysis, and image generation. As of today, these models, along with a special version of o4-mini called o4-mini-high that spends more time producing more accurate answers, are available to subscribers of the Pro, Plus, and Team plans.
These models are part of OpenAI's push to stay ahead in the intense AI race against companies such as Google, Meta, xAI, Anthropic, and DeepSeek. Although OpenAI pioneered reasoning models, competitors quickly released models that match or even surpass its own. Reasoning models now lead the field as AI labs work to squeeze more performance out of their systems. Sam Altman, OpenAI's CEO, said in February that the company planned to devote more resources to the technology underlying o3 rather than ship it as a standalone model; competitive pressure, however, has apparently led the company to change its plans.
OpenAI says the o3 model scores 69.1% on SWE-bench, a test that evaluates coding ability without custom scaffolding. o4-mini comes in close behind at 68.1%. By comparison, the earlier o3-mini model scored 49.3% and Claude 3.7 Sonnet scored 62.3%.
Visual and processing capabilities
OpenAI claims that o3 and o4-mini are the first models that can "think with images." Users can upload images such as whiteboard sketches or diagrams from PDF files to ChatGPT, and the models analyze them as part of their chain-of-thought process before answering. They can even interpret blurry or low-quality images and perform operations such as zooming into or rotating them.
In addition, o3 and o4-mini can run Python code directly in the browser through ChatGPT's Canvas feature and search the web when needed.
Access for developers
All three models (o3, o4-mini, and o4-mini-high) are available to developers through the Chat Completions and Responses APIs, allowing engineers to build applications on top of them at usage-based rates.
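As a rough illustration of what developer access looks like, here is a minimal sketch of preparing a Chat Completions request for the o3 model. It assumes the official `openai` Python SDK and an `OPENAI_API_KEY` environment variable; the helper function and prompt are hypothetical, not part of OpenAI's API.

```python
# Hypothetical sketch: preparing a request to the o3 model via the
# Chat Completions API. Requires the official `openai` Python SDK
# and an OPENAI_API_KEY environment variable to actually send it.

def build_request(prompt: str, model: str = "o3") -> dict:
    """Assemble the keyword arguments for a Chat Completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# To actually send the request (needs network access and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_request("Summarize SWE-bench."))
# print(response.choices[0].message.content)
```

Separating request construction from the network call makes the payload easy to inspect or log before any tokens are billed.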

Using the o3 model costs $10 per million input tokens (roughly 750,000 words, longer than the entire "Lord of the Rings" series) and $40 per million output tokens. For o4-mini, pricing matches o3-mini: $1.10 per million input tokens and $4.40 per million output tokens.
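The per-token prices above translate into per-request costs with simple arithmetic. The following back-of-the-envelope calculator uses only the figures quoted in this article (not a live API lookup), and the token counts in the example are made up for illustration.

```python
# Back-of-the-envelope cost calculator for the prices quoted above.
# Values are USD per million tokens, taken from the article.
PRICING = {
    "o3":      {"input": 10.00, "output": 40.00},
    "o4-mini": {"input": 1.10,  "output": 4.40},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens on o3 works out to
# 10_000 * 10 / 1e6 + 2_000 * 40 / 1e6 = 0.10 + 0.08 = $0.18
```

The same request on o4-mini would cost about a ninth as much, which is the trade-off the article describes between capability and price.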
OpenAI has also announced that it will release a version called o3-pro in the coming weeks, which will use more computational resources to produce more accurate answers. That version will be available exclusively to ChatGPT Pro subscribers.
Sam Altman noted that o3 and o4-mini are likely to be OpenAI's last standalone reasoning models before the introduction of GPT-5, in which the company plans to merge traditional models like GPT-4.1 with its reasoning models.


Source: The Verge