Everything You Need to Know About the New California Consumer Privacy Act’s Automated Decision-Making Regulations

Overview of the New Regulations

On July 24, 2025, the California Privacy Protection Agency (CPPA) voted unanimously to finalize rules under the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act. The rules introduce significant obligations for businesses subject to the CCPA: they impose requirements on the use of automated decision-making technologies (ADMT) and mandate cybersecurity audits and risk assessments. The California Office of Administrative Law must review and approve the rules, which is expected by approximately September 5, 2025. Once approved, they will take effect on a phased basis between 2027 and 2030.

Implications of the New Rules

The final regulations are significantly narrower than earlier drafts, yet the requirements remain extensive. Organizations must understand the staggered compliance timelines over the next five years, that is, which obligations take effect when. Notably, the definition of ADMT no longer explicitly includes “AI,” but AI systems are not fully exempt from the new obligations.

Obligations for ADMT

The final definition of ADMT reads: “any technology that processes personal information and uses computation to replace human decision-making or substantially replace human decision-making.” Tools such as web hosting, domain registration, antivirus software, spellchecking, and databases are excluded from this definition. However, despite the removal of “AI” from the definition, AI systems remain subject to the new obligations.

Businesses must notify consumers before using ADMT to make a “significant decision” about them, such as decisions about access to financial, healthcare, or employment services. Consumers have the right to opt out of ADMT being used for significant decisions and can request information about how those decisions are made. Businesses must also update their privacy policies to explain these rights and must not retaliate against consumers for exercising them.

Risk assessments are mandatory when ADMT is used to make significant decisions about consumers. Assessments must be updated at least every three years, or whenever there is a material change in the data processing activity. The regulations clarify that these obligations also apply to insurance companies for activities beyond completing an insurance transaction. The ADMT rules take effect on January 1, 2027, and the first risk assessment summaries are due by April 1, 2028.

What AI Governance Professionals Need to Know

AI governance professionals should take the following steps to comply with the new regulations:

  • Establish Policies and Procedures: Document how ADMT is being used across the business.
  • Conduct Detailed Risk Assessments: Assess risks associated with ADMT that make significant decisions about customers or users.
  • Maintain Documentation: Capture information about consumer opt-out requests and inquiries regarding ADMT usage.
  • Update Privacy Policies: Include language that informs customers about their rights under the new rules.
  • Implement Insight Processes: Gain insight into how third-party vendors and service providers use ADMT to support business operations.
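As a purely illustrative starting point, the inventory, documentation, and opt-out tracking steps above could be captured in a simple record structure. All field names here are hypothetical and not prescribed by the CCPA regulations:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ADMTUseCase:
    """Hypothetical record for documenting one ADMT use case."""
    name: str
    business_purpose: str
    makes_significant_decision: bool  # e.g., finance, healthcare, employment
    personal_info_categories: list[str] = field(default_factory=list)
    third_party_vendors: list[str] = field(default_factory=list)
    opt_out_requests: list[date] = field(default_factory=list)

    def requires_risk_assessment(self) -> bool:
        # ADMT used for significant consumer decisions triggers the
        # risk assessment obligation under the new rules.
        return self.makes_significant_decision
```

A record like this gives governance teams one place to capture the use case, the data involved, the vendors that touch it, and consumer opt-out activity.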

How Can Trustible Help?

The Trustible Platform can assist your business in aligning with these new regulations in several ways:

  • Inventory: Trustible allows you to create an inventory of your AI use cases. You can maintain comprehensive documentation on business goals, values, and data requirements.
  • Risk Assessments: Trustible’s workflow feature enables you to conduct risk assessments for each AI use case. This includes documenting inherent and residual risks, as well as mitigations.
  • Privacy Policy Updates: Trustible’s AI Policy Analyzer helps you assess gaps in your privacy policies, ensuring they reflect the disclosures required under the new rules.
  • Insurance Adherence: Trustible’s US Insurance AI Framework assists you in aligning with requirements specific to the insurance industry. This helps you better understand activities that may fall under the new CCPA rules.

Conclusion

The new regulations from the CPPA represent a significant shift in how businesses must approach automated decision-making technologies. As organizations prepare for compliance, understanding these obligations is crucial. Trustible is here to support you in navigating these changes, ensuring that your AI initiatives are responsible, compliant, and aligned with evolving regulations.

By taking proactive steps now, you can confidently adopt AI, reduce risks, and protect your revenue in this rapidly changing landscape.
