The California legislature has concluded another AI-inspired legislative session, and Governor Gavin Newsom has signed (or vetoed) bills that will have new impacts on the AI ecosystem. By our analysis, California now leads U.S. states with the most comprehensive set of targeted AI regulations – but now what?
The dominant issues this session included children’s safety, AI transparency, and frontier model safety. Some of these measures apply broadly to AI builders and deployers, while others apply narrowly and primarily to AI builders. But what does that actually mean in practice, and what does compliance actually look like for regulated organizations?
Here’s Trustible’s analysis of the key AI laws enacted this year and our take on how they will impact AI governance professionals. The legislation below does not represent every AI-related bill that passed the legislature this year; rather, it highlights legislation with specific AI governance considerations.
What Was Signed into Law?
SB 53
What is it? The legislature’s landmark AI legislation this session is a revised successor to last year’s SB 1047 (which was vetoed by Governor Newsom).
What does it do? Frontier model providers (FMPs) are required to implement a “frontier AI framework” that describes how FMPs identify, monitor, and mitigate catastrophic risks from their models. The frontier AI framework must address issues such as mitigation assessments, third party reviews, and cybersecurity measures for unreleased model weights. FMPs are also required to publish transparency reports on their websites before deploying new models, which would include information about intended model uses, restrictions, and modalities of output supported by the model. FMPs are required to report “critical safety incidents” to California regulators within 15 days of discovering the incident (or within 24 hours for incidents that pose “an imminent risk of death or serious physical injury”). The law also enacts whistleblower protections for FMP employees.
What should you do today? Make sure you are documenting the risks you encounter with your LLMs and other tools powered by frontier models. If you need to renew commercial agreements with an FMP, review them for new terms that might require additional governance procedures. Understand your AI incident response procedures, or implement those processes if they do not exist.
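To make “documenting risks” concrete, here is a minimal sketch of an internal incident record that tracks the reporting windows described above. SB 53 does not prescribe any record format; the class, field names, and example values are our own assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative sketch only: SB 53 does not prescribe a record format.
# The 15-day / 24-hour windows mirror the reporting deadlines described
# above; the structure and names here are assumptions.

@dataclass
class AIIncidentRecord:
    description: str
    model_or_tool: str            # e.g., the frontier model or internal tool involved
    discovered_at: datetime
    imminent_physical_risk: bool  # triggers the shorter 24-hour window
    mitigations: list[str] = field(default_factory=list)

    @property
    def reporting_deadline(self) -> datetime:
        window = timedelta(hours=24) if self.imminent_physical_risk else timedelta(days=15)
        return self.discovered_at + window

incident = AIIncidentRecord(
    description="Model produced output enabling a restricted capability",
    model_or_tool="hypothetical-frontier-model-v2",
    discovered_at=datetime(2026, 1, 5, 9, 30),
    imminent_physical_risk=False,
)
print(incident.reporting_deadline)  # 2026-01-20 09:30 under the 15-day window
```

A lightweight record like this also makes it easier to hand consistent information to an FMP or regulator if an incident does need to be escalated.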
Our Take: Do not be surprised if there are downstream effects even though the law applies to FMPs. It is possible that commercial contracts will include provisions requiring companies that use these tools to have similar oversight frameworks in place. Moreover, FMPs may look to third parties that use their tools to better understand how catastrophic risks could be realized under real-world conditions.
SB 243
What is it? The new law takes aim at how minors interact with chatbots, specifically “companion chatbots.” Over the past year there have been many concerns about how children use chatbots, including interactions that have been linked to suicides. Companies like Meta caused further controversy after an internal memo revealed that its chatbots engaged in “provocative behavior” with minors; Meta has since changed its policy to allow parents to block such interactions. The Federal Trade Commission has also raised concerns about how children use chatbots and launched an investigation into how seven of the largest AI providers monitor negative impacts on minors.
What does it do? Companies that allow users to engage with a “companion chatbot” must disclose that the person is interacting with an AI tool if the person would otherwise reasonably believe they were talking to a human. These companies must also implement safeguards to prevent the chatbots from discussing suicide and must direct users to self-harm resources if they express suicidal thoughts to the bot. When companies know their users are minors, they must (in addition to the disclosure requirements) implement a three-hour “break” mechanism and measures to stop their bots from producing sexually explicit material or encouraging sexually explicit conduct. The new law includes a private right of action for people harmed by violations of the law.
What should you do today? For organizations using public-facing chatbots, it’s important to understand their outputs and clearly document intended uses so that harmful content is not produced. If your chatbots are aimed at minors, work with your technical teams to implement safeguards that prevent harmful content.
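As a toy illustration of the SB 243-style safeguards discussed above, the sketch below pairs an AI-use disclosure with routing of self-harm language to crisis resources. Real deployments rely on dedicated safety classifiers rather than keyword matching; the function names, phrase list, and messages here are assumptions for illustration only.

```python
# Toy illustration of disclosure plus self-harm routing; not a production safeguard.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
CRISIS_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "Please consider reaching out to a crisis hotline or a trusted person."
)

SELF_HARM_SIGNALS = ("hurt myself", "end my life", "suicide")  # placeholder list

def generate_model_reply(user_message: str) -> str:
    # Stand-in for the actual chatbot call (hypothetical).
    return "..."

def respond(user_message: str, is_new_session: bool) -> str:
    reply_parts = []
    if is_new_session:
        reply_parts.append(AI_DISCLOSURE)  # disclose AI status up front
    if any(signal in user_message.lower() for signal in SELF_HARM_SIGNALS):
        reply_parts.append(CRISIS_MESSAGE)  # redirect instead of engaging
    else:
        reply_parts.append(generate_model_reply(user_message))
    return "\n".join(reply_parts)
```

Even for chatbots not aimed at minors, documenting and testing this kind of guardrail is a useful governance artifact.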
Our Take: While customer service chatbots are exempt, that does not mean companies that use customer-facing chatbots are off the hook. As chatbots become more general purpose, it’s important for AI governance professionals to understand the safeguards in place and to test outputs.
AB 316
What is it? The law prevents defendants from deflecting liability by claiming the AI itself caused the harm. The “blame AI” defense is not novel (Google blamed its AI for racist photo labeling in 2015) but has become more prevalent as AI becomes more pervasive. Anthropic blamed Claude for mistaken legal filings, and even President Trump has said he would “just blame AI” if something bad happened.
What does it do? Defendants that develop, modify, or use AI “alleged to have caused a harm” cannot claim as a defense that the AI caused the harm. This keeps liability for AI harms on the human or entity responsible for the AI and addresses long-standing concerns that companies would try to duck liability by shifting blame to AI tools.
What should you do today? Review your third party and service provider agreements to understand who is liable when AI tools malfunction. (Trustible can automatically review your vendor agreements to help here through our AI Analyzer.)
Our Take: Your company should not blame your AI when something goes wrong. Instead, you need to understand that when AI tools go awry, you will be held responsible. It is also important to think about how you want to allocate liability, as sometimes the harm may originate with a service provider.
AB 853
What is it? The law amends the California AI Transparency Act (SB 942), which requires covered providers to present users with: (i) a free AI detection tool that allows them to assess whether content was created by GenAI; and (ii) an option to include a disclosure in GenAI content.
What does it do? The amendments broaden the scope of the law and impose new obligations around data and content provenance. A “large online platform” that exceeds 2 million monthly users and distributes content not made by its users must include provenance data on that content, disclose whether the content was generated or substantially altered by AI, and allow users to inspect the provenance data. Additionally, manufacturers of “capture devices” (e.g., cameras and mobile phones) must provide users with a disclosure about how and when content was captured, and must embed those disclosures in captured content by default. This part of the law is limited to devices made for sale in California. The new law also delays the effective date of SB 942 by 8 months, to August 2, 2026.
What should you do today? Assess whether your organization could be covered by the updated rules, especially the content capture device requirements. You will also need to understand what types of outputs come from your AI tools and which of them could require provenance updates if you modify the content.
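For a sense of what attaching provenance data can look like in practice, here is a simplified sketch. The industry standard for content provenance is C2PA-style manifests; the schema below is a stand-in we made up for illustration, not the statutory or C2PA format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Simplified, assumed schema for illustration only (not C2PA, not the statutory format).
def build_provenance_record(content: bytes, generated_by_ai: bool, altered_by_ai: bool) -> dict:
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "generated_by_ai": generated_by_ai,
        "substantially_altered_by_ai": altered_by_ai,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = build_provenance_record(b"example image bytes", generated_by_ai=True, altered_by_ai=False)
print(json.dumps(record, indent=2))
```

The key governance question is less about the exact schema and more about ensuring provenance data survives (and is updated through) any downstream edits to the content.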
Our Take: The amended scope could sweep in additional companies that had not originally thought SB 942 would apply to them. There are also potential downstream effects for third parties that alter or modify content, because those alterations would need to be accounted for in the content’s data provenance, meaning some obligations may shift to the third parties doing the altering.
What Was Vetoed?
AB 1064
What is it? The Leading Ethical AI Development (LEAD) for Kids Act would have imposed additional protections for kids interacting with chatbots.
What would it have done? The bill would have prohibited companies from making companion chatbots available to minors unless the chatbot was not foreseeably capable of certain activities, such as encouraging self-harm, offering mental health services, or engaging in sexually explicit interactions. The bill did not include a private right of action, instead leaving enforcement to the Attorney General.
Our Take: Vetoed bills do not mean the idea is dead forever (e.g., SB 1047). In his veto statement, Gov. Newsom said he was interested in working with the legislature to strengthen the protections enacted under SB 243. That means we are likely to see some version of AB 1064 in the 2026 legislative session.
What’s Next?
The legislative session ended with a mixed bag of concerns for AI governance professionals. Companies that do not develop AI (or are not major online platforms) were not the main focus of these bills, but that does not mean they escape regulatory reach. For example, if a company is using a frontier model and it causes catastrophic harm, that business may need to cooperate with the FMP so that the FMP can fulfill its reporting obligations under SB 53. Moreover, when a company’s AI tool causes harm, the company is liable for that harm under AB 316. The key takeaway for the non-tech ecosystem is that strong governance frameworks will help align with the obligations and downstream effects of these new laws.


