The audit found the algorithm was cheating. Nobody knew how long it had been happening, or how much damage it had done. What they did know: the California Privacy Rights Act (CPRA) didn’t care whether the flaw was an accident. Under CPRA, AI governance isn’t optional. It’s the law.
AI governance under CPRA means controlling how your models collect, process, and use personal data. It means tracking decision logic, managing consent, and keeping outputs accountable. The act restricts automated decision‑making where personal information or sensitive personal information is involved, and directs regulators to give consumers access and opt‑out rights over it. Failure to comply is not just a risk; it carries civil penalties of up to $2,500 per violation, and $7,500 when the violation is intentional.
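To make that concrete, here is a minimal sketch of a consent gate placed in front of an automated decision. Everything in it is illustrative: the `ConsentRecord` fields and function names are hypothetical, not CPRA‑defined APIs, and a real system would read consent state from a dedicated store rather than construct it inline.

```python
from dataclasses import dataclass


# Hypothetical consent record; field names are illustrative, not CPRA-defined.
@dataclass
class ConsentRecord:
    user_id: str
    opted_out_of_automated_decisions: bool
    allows_sensitive_data_use: bool


def can_run_automated_decision(consent: ConsentRecord, uses_sensitive_data: bool) -> bool:
    """Check recorded consumer choices before any model scoring runs."""
    if consent.opted_out_of_automated_decisions:
        # Consumer exercised an opt-out; route to manual review instead.
        return False
    if uses_sensitive_data and not consent.allows_sensitive_data_use:
        # Sensitive personal information needs its own permission.
        return False
    return True


# Example: a consumer who opted out is never scored automatically.
consent = ConsentRecord(
    user_id="user-123",
    opted_out_of_automated_decisions=True,
    allows_sensitive_data_use=False,
)
assert not can_run_automated_decision(consent, uses_sensitive_data=True)
```

The point of the gate is placement: the check runs before inference, so an opt‑out blocks the decision rather than annotating it after the fact.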
Most teams still run blind. Their models pull in training data from unknown sources. Bias slips in. Decision chains get lost in opaque pipelines. Data retention policies are ad hoc or missing entirely. CPRA turns these gaps into liabilities. The law requires clear documentation: what data you have, where it came from, why it’s used, which processes touch it, and when it’s deleted.
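One way to close those documentation gaps is to make the inventory itself a data structure. Below is a minimal sketch mirroring the five questions above, assuming a simple in‑code registry rather than a full governance platform; the record fields and the sample entry are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date


# Illustrative inventory entry answering the five questions:
# what data, where it came from, why it's used, which processes
# touch it, and when it's deleted. Names are hypothetical.
@dataclass
class DatasetRecord:
    name: str       # what data you have
    source: str     # where it came from
    purpose: str    # why it's used
    processing_systems: list[str] = field(default_factory=list)  # which processes touch it
    delete_after: date | None = None  # when it's deleted


inventory = [
    DatasetRecord(
        name="loan_applications_2024",
        source="web intake form (first-party)",
        purpose="credit-risk model training",
        processing_systems=["feature_pipeline", "risk_model_v3"],
        delete_after=date(2026, 1, 1),
    ),
]


def overdue_for_deletion(records: list[DatasetRecord], today: date) -> list[DatasetRecord]:
    """Flag datasets past their retention date so policy becomes an enforceable check."""
    return [r for r in records if r.delete_after is not None and today >= r.delete_after]
```

Running a check like `overdue_for_deletion` on a schedule is what separates a retention policy that exists on paper from one that actually deletes data.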
AI governance is more than a compliance checkbox. Under CPRA, it’s constant due diligence. That includes: