Generative AI is now woven into critical systems that process regulated data. Under the New York Department of Financial Services (NYDFS) Cybersecurity Regulation, this changes the stakes. The 23 NYCRR Part 500 update makes it clear: data security isn’t optional, and AI doesn’t get a free pass. If generative AI handles nonpublic information, financial institutions must apply the same strict controls as any other high‑risk system.
This means data mapping for AI pipelines is no longer a nice‑to‑have. Every token in, every token out, and every API call needs oversight. Controls must align with NYDFS requirements around access governance, encryption at rest and in transit, audit logging, and incident reporting. A chatbot that forgets to mask account data is no different from an unprotected database under the law.
AI risk assessments have to move from paper to code. Deployers must track training data lineage, enforce privacy thresholds, and monitor generated outputs for policy compliance. NYDFS examiners will expect evidence – not promises – that these controls are in place and tested.
Generative models bring a fast feedback loop for attackers. Prompt injection, data poisoning, and hidden prompt extraction can bypass weak guardrails in seconds. NYDFS-regulated firms must prove that AI‑integrated systems can detect, block, and log suspicious prompts with the same rigor as any intrusion detection system.