OpenAI has threatened to block users who probe the reasoning capabilities of the company's Strawberry AI model, also known as o1. The company has not fully explained this unusual decision.
According to Wired, the o1 family of artificial intelligence models is one of OpenAI's newest products and is distinguished by its ability to reason. The model was made available to users last week. But it seems the company does not want users to know how the model's reasoning actually works: OpenAI has sent emails to users asking them to stop probing this feature and how it operates.
ChatGPT users should not be curious about the reasoning capabilities of the o1 AI model
Users who do not comply with this request risk being blocked from the service. The emails told recipients that their requests to ChatGPT had been flagged as attempts to bypass safeguards, and warned that repeated violations of this rule may result in the loss of access to GPT-4o.
Chain-of-thought reasoning is the most important difference between the Strawberry model and OpenAI's other models. The feature allows the AI model to explain to the user, step by step, how it arrived at an answer. Mira Murati, OpenAI's Chief Technology Officer, has described the o1 model as a new paradigm for the technology.
Users who received the warning emails say that using the term "reasoning trace" in their prompts appears to have triggered the company's flags. Other users report that even the word "reasoning" on its own earned them a warning.
In response, OpenAI explained in a blog post that it hides the o1 model's raw thinking so the chatbot can reason freely without having to filter its internal thought process for users, even if that process touches on something that would violate its safety policies.
RCO NEWS