OpenAI has acknowledged that its new models carry a greater risk of being misused to create biological weapons. The warning comes as security researchers caution about the potential dangers of advanced AI technologies.
The company recently introduced its new model, called o1, which has enhanced capabilities in reasoning, solving complex mathematical problems, and answering scientific research questions. These abilities are considered an important step towards artificial general intelligence (machines with human-level cognitive abilities).
In OpenAI’s description of how the model works, the company states that the new models carry a medium risk with respect to chemical, biological, radiological and nuclear (CBRN) weapons. This is the highest rating OpenAI has ever given its models, and it means the technology significantly increases the ability of experts in CBRN-related fields to create known CBRN threats.
Experts say that AI software with more advanced capabilities, such as step-by-step reasoning, poses a greater risk of abuse in the hands of ill-intentioned individuals.
The warnings come as tech companies including Google, Meta and Anthropic race to build and improve sophisticated AI systems, seeking to create software that acts as agents helping humans complete tasks and manage their daily lives. These AI agents are also seen as a potential source of revenue for companies already struggling with the high costs of training and deploying new models.
This has led to efforts to better regulate AI companies. In California, a controversial bill called SB 1047 has been introduced that would require makers of the most costly models to take steps to reduce the risk of those models being used to develop biological weapons.
Some venture capitalists and technology groups, including OpenAI, have warned that the proposed legislation could harm the AI industry. California Governor Gavin Newsom must decide in the coming days whether to sign or veto it.
“If OpenAI has indeed crossed a medium level of threat for CBRN weapons, as they have reported, that would underscore the importance and urgency of passing legislation like SB 1047 to protect the public,” said Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world’s leading AI scientists.
“As advanced AI models progress towards general AI, the risks will continue to increase if the necessary safeguards are not in place,” he added. “Improving AI’s ability to reason, and the possibility of using this skill to deceive, is particularly dangerous.”
Mira Murati, OpenAI’s chief technology officer, told the Financial Times that the company is being especially cautious about offering the o1 models to the public because of their advanced capabilities. The product will be available through paid ChatGPT subscriptions and to developers via the API.
She added that the model has been tested by specialist teams of experts from different scientific fields who try to break its limitations. Murati said the current models performed better than their predecessors on overall safety measures.