OpenAI has warned that its new models carry a greater risk of being misused to help create biological weapons. The warning comes as security researchers caution about the potential dangers of increasingly capable AI systems.
The company recently unveiled its new model, called o1, which has enhanced capabilities in reasoning, solving complex mathematical problems, and answering scientific research questions. These abilities are seen as an important step toward artificial general intelligence (machines with human-level cognitive abilities).
In its description of how the model works, OpenAI states that the new models carry a "medium" risk for chemical, biological, radiological and nuclear (CBRN) weapons. This is the highest risk rating OpenAI has ever given its models, and it means the technology has meaningfully improved the ability of experts in CBRN-related fields to create known CBRN threats.
Experts say AI software with more advanced capabilities, such as step-by-step reasoning, poses a greater risk of abuse in the hands of malicious actors.
The warnings come as technology companies including Google, Meta and Anthropic race to build and refine sophisticated AI systems, seeking to create software that can act as an agent, helping humans complete tasks and manage their daily lives. These AI agents are also seen as a potential source of revenue for companies already struggling with the high costs of training and running new models.
This has spurred efforts to regulate AI companies more tightly. In California, a controversial bill called SB 1047 has been introduced that would require companies developing high-cost models to take steps to reduce the risk of their models being used to develop biological weapons.
Some venture capitalists and technology groups, including OpenAI, have warned that the proposed legislation could negatively impact the AI industry. California Governor Gavin Newsom must decide in the coming days whether to sign or veto the legislation.
"If OpenAI has indeed crossed a medium level of risk for CBRN weapons, as it reports, this only reinforces the importance and urgency of adopting legislation like SB 1047 in order to protect the public," said Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world's leading AI scientists.
“As advanced AI models progress towards general AI, the risks will continue to increase if the necessary safeguards are not in place,” he added. “Improving AI’s ability to reason and using this skill to deceive is particularly dangerous.”
Mira Murati, OpenAI's chief technology officer, told the Financial Times that the company is being particularly cautious about offering o1 to the public because of its advanced capabilities. The model will be available through paid ChatGPT subscriptions and to developers via the API.
She added that the model has been tested by specialist teams of experts from various scientific fields who try to break its limits. Murati said the current models performed far better than their predecessors on overall safety measures.
Source: RCO NEWS