Jason Kwon, chief strategy officer of OpenAI, said in a letter concerning an AI safety bill under consideration in California that the regulation of artificial intelligence should be left to the federal government. Kwon argues the safety bill could slow the pace of AI development and cause companies to leave the state.
According to a report from The Verge, the letter, addressed to California State Senator Scott Wiener, who proposed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (known as SB 1047), reads:
“A federally driven set of AI policies, rather than a patchwork of state laws, will foster innovation and put the United States on the path to leading the development of global standards. Consequently, we join other AI labs, developers, experts, and members of the California Congressional delegation in respectfully opposing SB 1047, and welcome the opportunity to voice some of our core concerns.”
California’s proposed AI safety bill

Wiener and other proponents of the legislation say SB 1047 would establish standards before powerful artificial intelligence models are developed.
They also say the law would require companies to take preventative measures, such as testing the security of models before deployment, protecting whistleblowers in AI labs, and giving the California attorney general the power to take legal action if AI models are found to be harmful. The bill also calls for the creation of a “public cloud computing cluster” called CalCompute.
In response to OpenAI’s letter, Wiener points out that the proposed requirements apply to any company doing business in California, regardless of whether it is headquartered in the state, so the argument in the letter “makes no sense.” He says:
“SB 1047 is a very sensible bill that requires large AI labs to do what they have already committed to doing, which is to test their large models for catastrophic security risks.”
SB 1047 is now awaiting a final vote before being sent to California Governor Gavin Newsom’s office.
