By Lili Kazemi
General Counsel & AI Policy Leader
California has spent the past five years shaping the national conversation around privacy. But with the latest wave of regulatory activity—the CPPA’s finalized rules and the passage of SB-53—the state has moved beyond privacy and squarely into AI governance. And this time, the focus isn’t on “big tech” as a category. It’s on the systems themselves: how they make decisions, how they are trained, how they interact with sensitive data, and what happens when they fail.
For companies building or deploying AI systems—especially any system that could be considered automated decision-making, high-risk data processing, or “frontier-model adjacent”—2026 will not be a quiet year. These laws reshape what compliance means, shift the burden of proof onto businesses, and require a level of governance previously considered “nice to have.” Now it is mandatory.
This is the moment when compliance becomes architecture.
The CPPA’s Final Regulations: A New Operating System for Privacy and AI Governance
After nearly two years of drafts, public comments, rewrites, delays, and legal challenges, the CPPA Board voted on July 24, 2025, to adopt a sweeping regulatory package. The Office of Administrative Law approved the package on September 23, 2025. The rules take effect January 1, 2026.
These aren’t small updates. They transform the CCPA from a consumer-rights statute into a holistic governance regime. Three pillars define the new structure: automated decision-making technology (ADMT), risk assessments, and cybersecurity audits.
1. Automated Decision-Making Technology (ADMT)
The CPPA now treats ADMT not as a vague category but as a regulated activity. If your system uses personal information to make or materially assist significant decisions—employment, housing, healthcare, insurance, finance—you are now in a heightened compliance zone.
Key obligations include:
- Pre-use notice before deploying ADMT
- The right for consumers to access meaningful information about how the ADMT works
- Opt-out rights for certain ADMT uses
- A human review or appeal process for adverse decisions
What this does, in effect, is force companies to define their models, processes, inputs, and decision logic. “Black box” is no longer a defensible position. If a business cannot explain what its system does or how it impacts a user, it will not be compliant.
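In engineering terms, one way to meet these obligations is to log every significant automated decision together with the context needed to explain it, and to appeal it, later. Below is a minimal sketch in Python; the record structure and field names (such as `decision_basis` and `appeal_status`) are illustrative assumptions, not terms drawn from the regulations.

```python
# A minimal sketch of an ADMT decision record. The structure and field
# names (decision_basis, appeal_status, etc.) are illustrative
# assumptions, not terms defined by the CPPA regulations.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class AppealStatus(Enum):
    NONE_REQUESTED = "none_requested"
    PENDING_HUMAN_REVIEW = "pending_human_review"
    UPHELD = "upheld"
    OVERTURNED = "overturned"


@dataclass
class ADMTDecisionRecord:
    """One significant decision, captured with enough context to explain it."""
    subject_id: str                 # pseudonymous identifier for the consumer
    decision: str                   # e.g., "loan_denied"
    model_version: str              # which model or logic version produced it
    inputs_used: dict               # categories of personal information consumed
    decision_basis: str             # plain-language explanation of the logic
    pre_use_notice_given: bool      # was the required notice delivered first?
    opt_out_checked: bool           # did we honor the consumer's opt-out state?
    appeal_status: AppealStatus = AppealStatus.NONE_REQUESTED
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def request_human_review(record: ADMTDecisionRecord) -> ADMTDecisionRecord:
    """Route an adverse decision into the human review and appeal path."""
    record.appeal_status = AppealStatus.PENDING_HUMAN_REVIEW
    return record
```

A system that emits records like this for every significant decision can answer the access, opt-out, and appeal obligations from its own logs rather than reconstructing them after the fact.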
The rule also creates what many have called a “mini-GDPR for automated processing,” with some requirements that go even further than Article 22 of the EU’s GDPR. California is clearly staking out leadership in algorithmic accountability.
2. High-Risk Privacy Risk Assessments
Any business engaged in high-risk data processing must conduct formal risk assessments that identify:
- The purpose of the processing
- The benefits and potential harms
- The categories of personal information involved
- The feasible alternatives to achieve the purpose
- The safeguards in place
- The residual risks
These assessments must be certified by a senior executive, making the accountability chain explicit and documented. Companies must also retain each assessment for as long as the processing continues, or for five years after the assessment is completed, whichever is later.
Beginning April 1, 2028, companies must submit attestations and summaries of their risk assessments to the CPPA, with annual submissions thereafter.
This is a major shift. It will change how businesses architect data workflows, select vendors, train models, and design internal governance. For many organizations—especially those without a privacy officer or AI governance function—this will require building processes from scratch.
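Teams that treat these assessments as structured records rather than free-form documents will find retention, certification, and eventual submission far easier. Here is a minimal sketch, assuming hypothetical field names that simply mirror the required elements above:

```python
# A minimal sketch of a risk-assessment record mirroring the elements the
# CPPA regulations require. Field names and the certify() helper are
# illustrative assumptions, not regulatory language.
from dataclasses import dataclass
from datetime import date


@dataclass
class PrivacyRiskAssessment:
    processing_activity: str            # what is being done with the data
    purpose: str                        # why the processing is necessary
    benefits: list[str]
    potential_harms: list[str]
    pi_categories: list[str]            # categories of personal information involved
    feasible_alternatives: list[str]    # less-risky ways to achieve the purpose
    safeguards: list[str]
    residual_risks: list[str]
    certified_by: str | None = None     # senior executive who signed off
    certified_on: date | None = None

    def certify(self, executive: str) -> None:
        """Record the executive attestation that makes accountability explicit."""
        self.certified_by = executive
        self.certified_on = date.today()

    def is_complete(self) -> bool:
        """Crude check that every required element has been populated."""
        required = [self.purpose, self.benefits, self.potential_harms,
                    self.pi_categories, self.feasible_alternatives,
                    self.safeguards, self.residual_risks]
        return all(required)
```

Storing assessments this way also makes the five-year retention requirement a database policy rather than a filing-cabinet exercise.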
3. Cybersecurity Audits
The CPPA now requires annual cybersecurity audits for covered businesses, with first-audit deadlines phased in by annual revenue from 2028 through 2030. These audits must be independent, comprehensive, and aligned with recognized security frameworks.
In many ways, this is the cybersecurity equivalent of SOX: a structured, repeatable, evidence-driven process that documents how the company identifies, mitigates, and monitors security risks.
For AI systems, which rely heavily on data integrity, access controls, supply chain trust, and vulnerability management, these audits will quickly become the backbone of responsible deployment.
The Shift: Compliance Is No Longer “Tell Us What You Do.” It’s “Prove It.”
Taken together, the ADMT rules, the risk assessments, and the security audits mark a clear transition. California is moving from principles-based requirements (disclose, notify, offer rights) to evidence-based requirements (document, assess, justify, audit).
It’s not enough to say your model is safe, fair, or privacy-preserving. You have to show that it is.
This is the compliance maturity model applied to AI.
SB-53: Frontier AI Moves From Voluntary Frameworks to Legal Obligations
While the CPPA focused on data-driven systems, the California Legislature simultaneously passed SB-53—the Transparency in Frontier Artificial Intelligence Act—which Governor Newsom signed into law on September 29, 2025. The law takes effect January 1, 2026.
SB-53 is the first state law in the country directly targeting developers of frontier models: large-scale systems trained with compute above a statutory threshold, currently more than 10^26 integer or floating-point operations. In practice, this captures the major model developers and any organization attempting to train next-generation models independently.
The obligations fall into three major buckets.
1. Public Frontier AI Safety and Security Frameworks
Large frontier developers, those with more than $500 million in annual revenue, must publish a comprehensive safety and security framework describing:
- The model’s capabilities
- Foreseeable catastrophic risks
- Evaluation methodologies
- Security measures
- Governance structures
- Alignment with national and international standards
In other words, the internal safety playbook becomes a public artifact. This creates transparency, but also accountability: if a developer fails to follow its own stated framework, the state can treat that as a violation.
2. Transparency Reports and Incident Reporting
Developers must issue regular transparency reports detailing risk evaluations, mitigations, system changes, and known failure modes.
And critically, they must report “critical safety incidents” to California’s Office of Emergency Services on a short clock: within 15 days of discovery, or within 24 hours if an incident poses an imminent risk of death or serious physical injury. This includes incidents that could contribute to catastrophic harm: biological, chemical, cyber, or other large-scale threats.
This is the first time a U.S. jurisdiction has required mandatory safety incident disclosure for AI. For companies, this is a governance and detection challenge. It requires monitoring, triage, escalation pathways, and the ability to distinguish bugs from risks.
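One way to operationalize that distinction is to encode severity tiers and their reporting clocks directly into the incident pipeline. The sketch below is illustrative only: the severity labels and the 15-day and 24-hour windows encoded here are simplified assumptions, and counsel should confirm the actual statutory triggers.

```python
# A minimal triage sketch for distinguishing ordinary bugs from incidents
# that may start SB-53's reporting clock. Severity labels and reporting
# windows are simplified assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum


class Severity(Enum):
    BUG = "bug"                                # ordinary defect; no external report
    SAFETY_CONCERN = "safety_concern"          # escalate internally, keep monitoring
    CRITICAL_INCIDENT = "critical_incident"    # assumed reportable within 15 days
    IMMINENT_HARM = "imminent_harm"            # assumed reportable within 24 hours


# Assumed external reporting windows keyed by severity tier.
REPORTING_WINDOWS = {
    Severity.CRITICAL_INCIDENT: timedelta(days=15),
    Severity.IMMINENT_HARM: timedelta(hours=24),
}


@dataclass
class Incident:
    description: str
    severity: Severity
    discovered_at: datetime


def reporting_deadline(incident: Incident) -> datetime | None:
    """Return the external reporting deadline, or None if not reportable."""
    window = REPORTING_WINDOWS.get(incident.severity)
    return incident.discovered_at + window if window else None


if __name__ == "__main__":
    inc = Incident("model assisted in bypassing a biosafety control",
                   Severity.CRITICAL_INCIDENT, datetime.now(timezone.utc))
    print(reporting_deadline(inc))  # deadline 15 days after discovery
```

The important design choice is that triage happens at discovery time, so the reporting clock starts from a logged timestamp rather than from a later debate about whether something was “really” an incident.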
3. Whistleblower Protections and Enforcement
SB-53 includes broad whistleblower protections for employees who raise catastrophic-risk concerns. It also authorizes civil penalties up to $1 million per violation.
For general counsel and compliance leaders, this signals a shift: AI safety is no longer an internal R&D debate. It is a regulated risk domain with legal exposure.
Together, These Regimes Create a New Compliance Landscape
California’s regulatory evolution mirrors a broader global trend. The EU AI Act, the UK safety frameworks, and NIST’s AI RMF all emphasize the same themes: safety, transparency, governance, and accountability.
But California is the first U.S. jurisdiction to translate these themes into binding obligations with concrete deadlines.
If your organization processes data, deploys automated decision-making, or develops AI models, here’s what these laws mean:
1. Governance Must Be Formalized
Ad hoc processes will not work. The CPPA and SB-53 both require repeatable, documented, auditable governance systems.
This includes:
- Data inventories
- Model lifecycle tracking
- Human oversight mechanisms
- Third-party risk management
- Internal escalation channels
- Executive attestation
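As a concrete illustration, a single model-registry entry can tie several of these elements together. The structure below is an assumption for illustration, not a prescribed format.

```python
# A minimal sketch of a model-registry entry tying governance elements
# together. The structure and field names are illustrative assumptions,
# not a prescribed format.
from dataclasses import dataclass, field


@dataclass
class ModelRegistryEntry:
    model_name: str
    version: str
    owner: str                               # accountable team or executive
    data_sources: list[str]                  # links back to the data inventory
    third_party_components: list[str]        # vendors and upstream models
    human_oversight_contact: str             # who can review or override outputs
    risk_assessment_id: str | None = None    # ties the model to its CPPA assessment
    executive_attestations: list[str] = field(default_factory=list)

    def attest(self, executive: str) -> None:
        """Append an executive sign-off, preserving the accountability trail."""
        self.executive_attestations.append(executive)
```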
2. Technical and Legal Teams Must Work in Tandem
Risk assessments require technical detail. ADMT disclosures require engineering and product support. Safety frameworks require a company to articulate why its systems are safe—a legal, technical, and operational exercise.
The companies that succeed will be those that integrate their legal, engineering, and governance functions.
3. The Burden of Proof Has Shifted
Companies must now show:
- Why they are processing data
- How a model works
- What risks exist
- What safeguards are in place
- Why those safeguards are adequate
This is a fundamental cultural change. Compliance is no longer reactive. It is proactive and evidence-driven.
4. AI Development Is Becoming a Regulated Activity
Training frontier models comes with legal obligations—not just best-practice recommendations. Developers must adopt a safety posture analogous to cybersecurity: continuous monitoring, structured reporting, clear lines of responsibility.
What Businesses Should Do Now
January 2026 is not far away. The companies that will meet these obligations without disruption are those that begin preparing now.
Key first steps include:
- Conducting a gap analysis of existing data and AI governance
- Mapping all automated decision-making systems
- Standing up risk-assessment templates and workflows
- Establishing executive-level signoff paths
- Implementing monitoring and incident escalation systems
- Drafting or refining public-facing AI safety frameworks
These are not small undertakings. They require time, coordination, and executive buy-in.
But they also present an opportunity: organizations that build strong governance programs today will not only meet regulatory obligations—they will differentiate themselves in a market increasingly defined by trust.
The Road Ahead
When State AI Rules Meet Federal Power
California didn’t just pass another tech law. With SB-53, it made a deliberate bet: frontier AI safety, transparency, and incident reporting are no longer theoretical concerns; they are governance obligations.
But almost immediately, that bet collided with Washington.
In early December, the White House issued an executive order aimed at curbing or preempting state-level AI regulation, signaling that the federal government wants to reclaim control over AI policy before states can lock in enforceable standards. The message was unmistakable: fifty different AI rulebooks are bad for innovation, and California may be moving too fast.
That tension matters because SB-53 is not a symbolic law. It does real work.
California now requires developers of frontier models to:
- Publish formal safety and risk governance frameworks
- Report catastrophic or near-catastrophic AI incidents to state authorities
- Protect internal whistleblowers who raise AI safety concerns
- Align governance practices with evolving national and international standards
In other words, SB-53 turns AI safety from a values statement into a compliance function.
Why Preemption Isn’t a Silver Bullet
The federal executive order raises a fundamental question: Can Washington actually override what California has already built?
Legally, preemption is not automatic. Executive orders do not erase state law on their own, especially where states regulate under traditional police powers like consumer protection, public safety, and emergency response. SB-53 is carefully framed around those authorities, not content moderation or speech.
Politically, the optics are just as complicated. California is home to the world’s most advanced AI companies. If federal policy blocks SB-53 outright, it risks looking less like “harmonization” and more like deregulation by force.
And practically, companies are already reacting. Governance teams don’t wait for court decisions; they plan for worst-case exposure. Many will comply with SB-53 regardless of federal uncertainty because the operational cost of building safety infrastructure once is lower than rebuilding it later.
The Real Signal for Companies
This clash isn’t about whether SB-53 survives intact. It’s about what it signals.
We are entering a phase where:
- AI governance is becoming incident-driven, not aspirational
- Transparency obligations are moving from voluntary frameworks into law
- Safety failures are treated like security breaches or environmental disasters
Federal preemption may slow California down. It may reshape enforcement. But it will not reverse the underlying shift.
AI risk is no longer hypothetical, and California just forced that reality into the open.
AI governance is entering a new phase. For years, companies relied on voluntary frameworks, internal principles, and aspirational commitments. Those tools will continue to matter, but they are no longer sufficient on their own.
California has made clear that safety, transparency, privacy, and cybersecurity are not “ethical goals.” They are legal requirements. And as with GDPR in 2018, once one large jurisdiction takes a position, the rest of the country—and the world—tends to follow.
Businesses can view this as a compliance burden. But there is another way to see it: as a forcing mechanism that ensures AI systems are built intentionally, responsibly, and sustainably.
This is the moment to align governance with innovation.
Because in the era of frontier models and automated decision-making, compliant design is not just safer. It is smarter.


