New reports suggest that OpenAI is boosting its security measures to counter the theft of its models by competitors such as DeepSeek.
According to a new report from the Financial Times, OpenAI tightened its security measures after the Chinese startup DeepSeek released its model in January. OpenAI claims that DeepSeek illegally copied its models using a method called “distillation”.
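In distillation, a smaller “student” model is trained to imitate the outputs of a larger “teacher” model, rather than being trained from scratch on raw data. The sketch below illustrates the core idea in plain Python: the teacher’s output logits are softened with a temperature, and the student is penalized by the cross-entropy between the two distributions. The logit values are invented for illustration; real distillation would run this loss over millions of prompts inside a training loop.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the distribution: a higher temperature spreads probability
    # mass across classes, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: minimized when the student reproduces the teacher exactly.
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))

# Hypothetical logits for one input; the student is trained to drive this
# loss down across many teacher responses, absorbing the teacher's behavior
# without ever seeing its weights or training data.
teacher = [4.0, 1.0, 0.2]
student = [3.5, 1.2, 0.4]
print(round(distillation_loss(teacher, student), 4))
```

This is why distillation is attractive to a competitor: it only requires query access to the teacher’s outputs, not its internals, which is exactly the kind of copying OpenAI alleges.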
OpenAI makes it harder for competitors to copy its models
Some of the security measures carried out by OpenAI include restricting employees’ access to sensitive algorithms and products. According to the Financial Times, during development of the o1 model, only approved team members who knew the project details were allowed to discuss it in shared office spaces.

OpenAI now keeps some of its proprietary technology on offline computers. The company controls access to its offices using biometric authentication (fingerprint scanning of staff) and has implemented a deny-by-default internet policy, so that any form of external connection requires explicit approval.
The Financial Times points out that OpenAI has also increased physical security at its data centers and expanded its cybersecurity staff.
These changes are said to be part of OpenAI’s attempt to prevent competitors from stealing information and models. But given the talent war between Meta and OpenAI, the company may also be trying to address its internal security problems.
OpenAI has repeatedly accused the Chinese company DeepSeek of stealing its models. However, a study found that the answers produced by DeepSeek’s chatbot are very similar to those of its Western competitors.



