Jason Kwon, chief strategy officer of OpenAI, said in a letter referring to an AI safety bill under consideration in California that the regulation of artificial intelligence should be left to the federal government. Kwon says the safety bill could slow the pace of AI development and cause companies to leave the state.
According to The Verge, the letter, addressed to California State Senator Scott Wiener, who proposed the Safe Innovation Act for Frontier Artificial Intelligence Models, known as “SB 1047,” states:
“A set of federally regulated AI policies, rather than a set of state laws, will foster innovation and put the United States on the path to leading the development of global standards. Consequently, we join other AI labs, developers, experts and members of the California Congressional delegation in respectfully opposing SB 1047 and welcome the opportunity to voice some of our core concerns.”
California’s proposed AI safety bill
Wiener and other proponents of the legislation say SB 1047 could establish standards before powerful artificial intelligence models are developed.
They also say the law would require companies to take preventive measures, such as testing the safety of models before deployment and protecting whistleblowers in AI labs; it would give the California attorney general the power to take legal action if AI models are found to be harmful; and it calls for the creation of a “public cloud computing cluster” called CalCompute.
In response to the letter from OpenAI’s chief strategy officer, Wiener points out that the proposed requirements apply to any company doing business in California, regardless of whether it is headquartered in the state, so the argument in the letter “makes no sense.” He says:
“SB 1047 is a very sensible bill that requires large AI labs to do what they have already committed to doing, which is to test their large models for catastrophic safety risks.”
SB 1047 is now awaiting a final vote to be sent to California Governor Gavin Newsom’s office.
RCO NEWS