SB 53

California has taken a significant step in AI governance with the passage of SB 53, a landmark AI safety and transparency law signed by Governor Gavin Newsom this week. The legislation shows that state regulation and technological innovation do not have to be at odds, according to Adam Billen, vice president of public policy at youth-led advocacy group Encode AI.

Speaking on TechCrunch’s Equity podcast, Billen explained that lawmakers understand the need to act without stifling progress. “They know from working on countless other issues that it is possible to pass legislation that genuinely protects innovation while ensuring these products are safe,” he said.

A First-of-Its-Kind AI Safety and Transparency Law

SB 53 is the first law in the United States that requires large AI labs to disclose their safety and security protocols, particularly regarding how they prevent their models from being used for harmful purposes such as cyberattacks on critical infrastructure or the creation of biological weapons. It also mandates that companies follow through on those safety commitments, which will be enforced by the state’s Office of Emergency Services.

“Companies are already doing the things we ask them to do in this bill,” Billen noted. “They conduct safety testing and publish model cards. Are some starting to cut corners? Yes, and that is why bills like this are essential.”

Some AI companies, including OpenAI, have previously stated they might relax safety standards under competitive pressure if rivals release high-risk systems without similar safeguards. Billen argues that laws like SB 53 help enforce existing promises and prevent companies from sacrificing safety for competition or profit.

Industry Pushback and Political Battles

Opposition to SB 53 was relatively muted compared with the backlash against its predecessor, SB 1047, which Newsom vetoed last year. Still, many in Silicon Valley continue to argue that any form of AI regulation could slow down innovation and hinder the United States in its race against China.

Major players like Meta, venture capital firms such as Andreessen Horowitz, and industry leaders including OpenAI president Greg Brockman have invested heavily in political action committees supporting pro-AI candidates. Earlier this year, these groups even supported a proposed AI moratorium that would have banned states from regulating AI for a decade.

Encode AI successfully helped defeat that proposal, but Billen warns that the fight is ongoing. Senator Ted Cruz has introduced the SANDBOX Act, which would allow AI companies to bypass certain federal regulations for up to 10 years. He is also expected to support a federal AI standard that could override state laws under the guise of compromise.

Billen cautions that overly broad federal laws could undermine state-level policymaking on crucial issues. “If you told me SB 53 would replace all state AI bills, I would say that is not a good idea. This bill is designed for a specific set of risks,” he said.

Why State-Level Regulation Still Matters

Billen argues that while competing with China in AI development is important, dismantling state-level initiatives is not the solution. Most state bills focus on issues like deepfakes, transparency, algorithmic bias, children’s safety, and government use of AI, none of which are barriers to global competitiveness.

“Are bills like SB 53 what will stop us from beating China? No,” Billen said. “It is intellectually dishonest to claim that.”

Instead, he believes the U.S. should focus on policies like export controls and ensuring domestic access to semiconductor chips, the real drivers of competitive advantage. Legislative efforts such as the Chip Security Act aim to limit the sale of advanced AI chips to China, while the CHIPS and Science Act seeks to strengthen domestic production. However, major companies like Nvidia and OpenAI have opposed parts of these policies due to revenue concerns and supply chain considerations.

The U.S. government has also sent mixed messages. In April 2025, the Trump administration expanded restrictions on AI chip exports to China, only to reverse course three months later, allowing Nvidia and AMD to sell certain chips in exchange for paying the U.S. government a 15% share of that revenue.

A Model for Democratic AI Governance

Despite these challenges, Billen sees SB 53 as a model for how regulation and innovation can coexist. “This is democracy in action,” he said. “It is messy, but the process of negotiation and compromise is the foundation of our country and economic system.”

He added, “SB 53 proves that collaborative policymaking still works. It shows that we can build a future where AI development and public safety move forward together.”
