California Enacts Landmark AI Safety and Transparency Law

Oct 16, 2025

On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA) into law, making California the first state to require developers of advanced artificial intelligence (AI) models to publish detailed safety frameworks on their websites and to report critical safety incidents to state authorities. TFAIA, which takes effect January 1, 2026, has the potential to reshape the compliance landscape for AI developers by moving beyond voluntary self-regulation toward defined and enforceable standards intended to address AI safety and public trust.

Scope of TFAIA

TFAIA applies to “frontier developers” — that is, persons who have trained, or initiated the training of, frontier models, defined as foundation AI models trained on a quantity of computing power greater than 10^26 integer or floating-point operations (FLOPs), including any computing power used in subsequent fine-tuning, reinforcement learning, or other material modifications. TFAIA imposes additional obligations on frontier developers with annual gross revenues exceeding $500 million (“large frontier developers”).

Required Disclosure of Frontier AI Frameworks 

Under TFAIA, a large frontier developer must write, implement, comply with, and publicly post a “frontier AI framework” on its website. Among other things, the frontier AI framework must address how the large frontier developer (1) incorporates national standards, international standards, and industry-consensus best practices; (2) assesses whether its frontier model has capabilities that could pose catastrophic risk; (3) mitigates catastrophic risk, including through third parties; (4) determines when updates to its frontier models warrant public disclosure; and (5) uses cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer. Large frontier developers must review and update their frontier AI frameworks at least once per year and must publish any material modification to those frameworks within 30 days of making it.

Transparency Reports

Before, or concurrently with, deploying a new or substantially modified frontier model, a large frontier developer must publish a “transparency report” on its website. That transparency report must include a mechanism “that enables a natural person to communicate with the frontier developer,” the frontier model’s release date, supported languages, intended uses, and any restrictions or conditions on deployment. Large frontier developers must also summarize any catastrophic risk assessments conducted pursuant to their frontier AI frameworks, along with the results of those risk assessments, the extent to which third-party evaluators were involved, and other steps taken to fulfill the frontier AI framework’s requirements.

Critical Safety Incident Reporting

TFAIA requires frontier developers to notify the California Governor’s Office of Emergency Services (OES) of any “critical safety incidents” related to their frontier AI models within 15 days of discovery. “Critical safety incidents” include (1) unauthorized access to, modification of, or exfiltration of, model weights causing death or bodily injury; (2) harm resulting from catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; and (4) a frontier model’s utilization of “deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer” in a manner that demonstrates “materially increased catastrophic risk.” 

If a critical safety incident involves an imminent risk of death or serious physical injury, then the frontier developer must disclose that incident to an appropriate authority within 24 hours. All critical safety incident reports must include (1) the date of the critical safety incident; (2) reasons why the incident qualifies as a critical safety incident; (3) a “short and plain statement describing the critical safety incident;” and (4) whether the incident was associated with internal use of a frontier model.

Beginning January 1, 2027, OES will publish annual reports with anonymized and aggregated information about critical safety incidents. OES will exclude from such annual reports any information “that would compromise the trade secrets or cybersecurity of a frontier developer, public safety, or the national security of the United States.”

Whistleblower Protections

Under TFAIA, frontier developers may not make or enforce any rule, regulation, policy, or contract that prevents an employee responsible for assessing, managing, or addressing risk of critical safety incidents (each a “covered employee”) from disclosing certain information to an authority or supervisor. Specifically, a frontier developer may not retaliate against a covered employee for disclosing information related to such covered employee’s reasonable belief that either (1) the frontier developer’s activities pose a substantial danger to public health or safety resulting from a catastrophic risk, or (2) the frontier developer has violated the TFAIA.

TFAIA also requires frontier developers to provide clear notices to covered employees of their rights and responsibilities under TFAIA. Further, large frontier developers must provide covered employees with a “reasonable internal process” for anonymous reporting. Plaintiffs who successfully show that a frontier developer violated these whistleblower protections may be entitled to attorneys’ fees.

CalCompute Consortium

TFAIA directs the establishment of a 14-member consortium to develop a framework for creating a public cloud computing cluster called “CalCompute.” CalCompute would consist of a fully owned and hosted cloud platform that provides the public with access to safe, ethical, equitable, and sustainable AI. By January 1, 2027, the CalCompute consortium must deliver a report to the California Legislature that addresses CalCompute’s parameters for AI use, governance structure, funding sources, and proposed partnerships with nongovernmental entities.

Penalties and Enforcement

Under TFAIA, large frontier developers that make false statements, fail to submit required reports, fail to report critical safety incidents, or fail to comply with their own frontier AI frameworks are subject to civil actions brought by the California Attorney General. Civil penalties of up to $1 million may accompany each violation.

Key Takeaways

TFAIA signals California’s move toward a risk‑based, documentation‑heavy framework for AI accountability. Organizations building or deploying AI models that touch the California market should not wait for final rules to begin aligning their governance, risk, and compliance practices. By establishing inventories, assessments, testing, monitoring, disclosures, and contracting frameworks now, companies will both reduce risk and position themselves for compliance as TFAIA’s obligations come online.

About Snell & Wilmer

Founded in 1938, Snell & Wilmer is a full-service business law firm with more than 500 attorneys practicing in 17 locations throughout the United States and in Mexico, including Los Angeles, Orange County, Palo Alto and San Diego, California; Phoenix and Tucson, Arizona; Denver, Colorado; Washington, D.C.; Boise, Idaho; Las Vegas and Reno-Tahoe, Nevada; Albuquerque, New Mexico; Portland, Oregon; Dallas, Texas; Salt Lake City, Utah; Seattle, Washington; and Los Cabos, Mexico. The firm represents clients ranging from large, publicly traded corporations to small businesses, individuals and entrepreneurs. For more information, visit swlaw.com.

©2025 Snell & Wilmer L.L.P. All rights reserved. The purpose of this publication is to provide readers with information on current topics of general interest and nothing herein shall be construed to create, offer, or memorialize the existence of an attorney-client relationship. The content should not be considered legal advice or opinion, because it may not apply to the specific facts of a particular matter. As guidance in areas is constantly changing and evolving, you should consider checking for updated guidance, or consult with legal counsel, before making any decisions.

Media Contact

Olivia Nguyen-Quang

Associate Director of Communications
media@swlaw.com

714.427.7490