New reports suggest that OpenAI is tightening its security measures to counter the theft of its models by competitors such as DeepSeek.
According to a new report from the Financial Times, OpenAI stepped up its security measures after the Chinese startup DeepSeek released its model in January. OpenAI claims that DeepSeek illegally copied its models using a method called “distillation”.
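For context, “distillation” is a standard machine-learning technique in which a smaller “student” model is trained to imitate the output distribution of a larger “teacher” model. Below is a minimal sketch of the core idea in plain Python; it illustrates the generic technique only, not OpenAI’s or DeepSeek’s actual systems, and all names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions.
    # In distillation, the student is trained to minimize this quantity,
    # thereby imitating the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))      # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # True
```

The allegation is that DeepSeek queried OpenAI’s models at scale and used the responses as teacher signals for its own model, which is why OpenAI is now restricting external access.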
OpenAI makes it harder for competitors to copy its models
Some of the security measures adopted by OpenAI include restricting employees’ access to sensitive algorithms and products. According to the Financial Times, during the development of the o1 model, only approved team members who had been briefed on the project details were allowed to discuss it in shared office spaces.
OpenAI now keeps its proprietary technology on offline computers. The company controls access to its offices with biometric authentication (fingerprint scanning of staff) and has implemented a deny-by-default internet policy, so that any form of external connection requires explicit approval.
The Financial Times notes that OpenAI has also increased physical security at its data centers and expanded its cybersecurity staff.
These changes are reportedly part of OpenAI’s effort to prevent competitors from stealing its information and models. But given the talent war between Meta and OpenAI, the company may also be trying to address internal security concerns.
OpenAI has repeatedly accused the Chinese company DeepSeek of stealing its models. Notably, one study found that the responses produced by DeepSeek’s chatbot are very similar to those of its Western competitors.
RCO NEWS
