The California state legislature is considering a new bill that would create a framework for the state to “ensure the safe development of AI models” within the state, according to the bill’s sponsors.

Under the Safety in Artificial Intelligence Act, introduced by state Sen. Scott Wiener, AI labs would be required to practice responsible scaling by rigorously testing their most advanced models for safety risks and disclosing their planned responses to the state if safety risks are discovered.

Sen. Wiener said the legislation would also establish strong liability for damages caused by foreseeable safety risks. If passed, the bill would create CalCompute, a cloud-based compute cluster housed in California’s public university system and available for use by AI researchers and smaller developers.

“Large-scale AI presents a range of opportunities and challenges for California, and we need to get ahead of them and not play catch up when it may be too late,” said Sen. Wiener.

“As a society, we made a mistake by allowing social media to become widely adopted without first evaluating the risks and putting guardrails in place. Repeating the same mistake around AI would be far more costly,” the senator said. “At the same time, this technology shows incredible potential to improve people’s lives. We need to engage with this new technology, and direct the incredible innovation California is known for to chart a course for the rest of the world.”

The legislation is classified as an intent bill, a type of legislation that is ineligible to move through the typical legislative process at this stage of the year. Sen. Wiener’s office noted that lawmakers typically use this type of bill to generate discussion and feedback for a period of time before amending it with full legislative text and moving it through the formal legislative process.

Sen. Wiener’s office said the bill “establishes the intent of the legislature to enact sweeping safety rules governing AI development,” which is a “key first step before moving through the regular legislative process next year.” Before introducing full legislation, Sen. Wiener said he intends to seek feedback and engagement from a range of stakeholders inside and outside the industry.

“SB 294 is a framework we will fill in over the next several months by engaging closely with a range of researchers, industry leaders, security experts, and labor leaders,” Sen. Wiener explained. “The best way to get feedback on an idea is to put legislative text in print, so after consulting a broad array of experts in industry and academia, we’ve introduced a framework we think addresses the most critical risks of the new technology while preserving and nurturing its incredible benefits. Now that the framework is public, we will collect feedback and continue refining the proposal throughout the fall in preparation to move a fully developed policy through the legislative process in January.”

In a press release, Sen. Wiener noted that the state is home to 35 of the world’s top 50 AI companies and a quarter of all AI patents, conference papers, and companies globally.

“Regulation often responds to harms from new technologies at a lag after they happen,” said Katja Grace, lead researcher at AI Impacts. “With advanced artificial intelligence that method is unacceptably dangerous: the harms we foresee from uncontrolled AI development are many, potentially devastating, and will hit many aspects of life at once. And there will probably be more harms we don’t foresee. Yet development is fast and poised to fuel its own acceleration. I think this bill proposes many sensible and promising preparations for steering AI development toward its abundant promise, through the maze of potential affliction and catastrophe. I’m excited to see California taking a lead in beginning this incredibly important project.”

Kate Polit
Kate Polit is MeriTalk SLG’s Assistant Copy & Production Editor, covering Cybersecurity, Education, Homeland Security, and Veterans Affairs.