
As reported by The Verge, SB 53, California's landmark AI transparency law, has officially taken effect. Governor Gavin Newsom signed the Frontier Artificial Intelligence Transparency Act, authored by Senator Scott Wiener. It is the successor to SB 1047, which Newsom vetoed last year over concerns that its provisions could stifle innovation in the state.
The new law requires large AI developers to publish on their websites a framework describing how they incorporate national and international standards and broadly accepted industry practices into their frontier AI development, and to disclose updates to that framework, with explanations, within 30 days. The law also establishes a mechanism for publicly responding to inquiries from the public, to keep those processes transparent.
At the same time, the final text does not carry over every provision of the earlier draft; most notably, it drops the requirement for mandatory independent third-party assessments.
Key Provisions and Impact
The law introduces a new process for reporting potentially critical safety incidents to the California Office of Emergency Services, and it protects whistleblowers who disclose significant health and safety risks posed by frontier models. Violations carry civil penalties, enforceable by the state attorney general.
The California Department of Technology is also expected to recommend updates to the law annually, drawing on cross-industry dialogue, international standards, and technological developments.
The regulatory landscape has split over the law: some companies initially criticized the bill as overly restrictive, warning it could slow investment in California. But a state with nearly 40 million residents and a dense technology cluster wields significant influence over the global artificial intelligence industry and its regulatory environment.
SB 53 gained public support from Anthropic after lengthy negotiations over the bill's wording, while Meta launched a state-level campaign to influence AI regulation. OpenAI opposed state-by-state regulation; its chief global affairs officer, Chris Lehane, underscored that “California’s leadership in regulating technology is most effective when it complements global and federal safety systems.”
“To make California a leader in global, national, and regional AI policy, we encourage the state to consider frontier model developers compliant with its requirements when they join a parallel regulatory framework, such as the EU Code of Practice, or enter into an agreement with the appropriate U.S. federal safety agency.”
Questions also remain about how regulators and industry will interact going forward, and it is still open how independent assessments and responses to public inquiries will fit into the state’s broader regulatory culture.
Author: Hayden Field