On October 16, 2024, the New York Department of Financial Services (NYDFS) issued guidance on managing cybersecurity risks associated with the use of Artificial Intelligence (AI) within the framework of 23 NYCRR Part 500. The guidance applies to all covered entities regulated under Part 500 and provides direction for assessing and managing the new cybersecurity risks posed by AI adoption, without introducing new regulatory requirements.
This report consolidates the guidance from NYDFS with the upcoming amendments to Part 500, effective November 1, 2024, and explores key technical and administrative measures for financial institutions to achieve compliance and mitigate AI-related cybersecurity risks.
Key Amendments Effective November 1, 2024
1. Multi-Factor Authentication (MFA)
- Requirement: MFA is required for all individuals accessing the entity's information systems, including internal and remote access, access to third-party applications, and privileged accounts.
- Exemptions: Limited exemptions may apply; where an exemption is relied on, reasonably equivalent compensating controls must be in place.
- Implementation: Organizations should confirm that existing MFA deployments meet the new requirements, especially for privileged accounts and third-party access; a brief coverage-audit sketch follows.
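As a starting point for verifying coverage, an entity might audit an exported account list from its identity provider and flag every account that lacks an enrolled MFA factor, surfacing privileged and third-party accounts first. The sketch below is illustrative only; the CSV columns (`username`, `is_privileged`, `is_third_party`, `mfa_enrolled`) are assumptions, not fields prescribed by Part 500.

```python
import csv

def audit_mfa_coverage(export_path: str) -> list[dict]:
    """Flag accounts without MFA; assumes a CSV export with columns
    username, is_privileged, is_third_party, mfa_enrolled."""
    findings = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["mfa_enrolled"].strip().lower() != "true":
                findings.append({
                    "username": row["username"],
                    "privileged": row["is_privileged"].strip().lower() == "true",
                    "third_party": row["is_third_party"].strip().lower() == "true",
                })
    # Privileged and third-party gaps first: these are the accounts the amendment calls out.
    findings.sort(key=lambda r: (not r["privileged"], not r["third_party"]))
    return findings

if __name__ == "__main__":
    for gap in audit_mfa_coverage("account_export.csv"):
        print(f"MFA gap: {gap['username']} "
              f"(privileged={gap['privileged']}, third_party={gap['third_party']})")
```

Findings from a review like this can also feed the compensating-controls discussion for any account that genuinely cannot be enrolled.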
2. Cybersecurity Training Requirements
- Expanded Scope: Annual cybersecurity training must now include content on AI-enabled social engineering, such as phishing and deepfake techniques.
- Action Steps: Entities should update training materials to include scenarios involving AI-based social engineering attacks and provide ongoing training to reinforce employee vigilance.
3. Enhanced CISO Reporting Obligations
- Annual Report: The CISO must submit a comprehensive annual report to the senior governing body, addressing material inadequacies and significant cybersecurity events.
- Governance: The senior governing body is responsible for overseeing cybersecurity risk management and ensuring executive management develops and maintains the cybersecurity program with sufficient resources.
- Action Steps: CISOs should formalize reporting protocols, using a structured format that covers risk posture, incident response updates, and planned remediation; a simple structural sketch follows.
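One way to keep the annual report consistent from year to year is to capture its required sections as a simple data structure and render the report from it. A minimal sketch follows, covering the topics named above (risk posture, significant events, material inadequacies, planned remediation); the field names and layout are illustrative assumptions, not an NYDFS-prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class CisoAnnualReport:
    """Skeleton for a structured CISO annual report to the senior governing body."""
    reporting_year: int
    risk_posture: str
    significant_events: list[str] = field(default_factory=list)
    material_inadequacies: list[str] = field(default_factory=list)
    remediation_plans: list[str] = field(default_factory=list)

    def render(self) -> str:
        sections = [
            ("Significant cybersecurity events", self.significant_events),
            ("Material inadequacies", self.material_inadequacies),
            ("Planned remediation", self.remediation_plans),
        ]
        lines = [f"CISO Annual Report - {self.reporting_year}",
                 f"Overall risk posture: {self.risk_posture}"]
        for title, items in sections:
            lines.append(f"{title}:")
            lines.extend(f"  - {item}" for item in items or ["none reported"])
        return "\n".join(lines)

print(CisoAnnualReport(2024, "stable, with gaps in vendor oversight",
                       remediation_plans=["complete AI vendor due-diligence reviews"]).render())
```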
4. Information Security Procedures and Encryption Standards
- Encryption: Nonpublic information must be encrypted in transit over external networks and at rest. Where encryption of data at rest is not feasible, compensating controls approved in writing by the CISO are required.
- Compensating Controls: Alternatives such as data masking or tokenization should be considered.
- Implementation: Organizations should review encryption policies against current standards and adjust controls where necessary; a minimal encryption-and-masking sketch follows.
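Where nonpublic information sits in storage that does not already encrypt at rest, one lightweight pattern is symmetric encryption for storage plus masking for display. The sketch below uses the Fernet recipe from the third-party `cryptography` package as one possible building block; key management, rotation, and the choice between masking and tokenization are out of scope here and would follow the entity's own policy.

```python
from cryptography.fernet import Fernet  # pip install cryptography

def mask(value: str, visible: int = 4) -> str:
    """Masking as a display-level compensating control: keep only the last few characters."""
    return "*" * max(len(value) - visible, 0) + value[-visible:]

# In practice the key comes from a key-management system, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

ssn = "123-45-6789"
ciphertext = fernet.encrypt(ssn.encode())          # value encrypted at rest
print(mask(ssn))                                   # masked value for UI or logs
print(fernet.decrypt(ciphertext).decode() == ssn)  # True: round-trip check
```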
5. Incident Response (IR) and Business Continuity and Disaster Recovery (BCDR)
- IR Plan Updates: Incident response plans must include specific goals, recovery from backups, root cause analysis, and updates based on testing outcomes.
- BCDR Requirements: Plans must identify critical data, documents, and personnel, and include offsite storage and timely recovery procedures.
- Action Steps: Organizations should regularly exercise incident response and BCDR plans, including recovery-from-backup drills, and update them based on drill findings.
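Part of a recovery drill can be automated: restore a copy of a critical dataset and confirm its integrity against a checksum recorded at backup time before signing off on the exercise. The sketch below shows only that comparison; the file path and expected digest are placeholders for illustration.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large backups do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(restored_file: str, expected_sha256: str) -> bool:
    """Return True if the restored copy matches the checksum recorded at backup time."""
    return sha256_of(Path(restored_file)) == expected_sha256

if __name__ == "__main__":
    ok = verify_restore("restore/customer_ledger.db", "placeholder-digest")
    print("BCDR drill: restore verified" if ok else "BCDR drill: checksum mismatch - escalate")
```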
AI-Specific Risk Management Under Part 500
1. AI-Related Cybersecurity Risks
The NYDFS guidance categorizes AI-related risks as either external, arising from malicious actors, or internal, stemming from the organization’s own AI use:
- Threat Actors’ Use of AI
- AI-Enabled Social Engineering: Threat actors increasingly use deepfake audio, video, and text to craft highly realistic social engineering attacks targeting specific individuals through phishing, vishing, and smishing.
- AI-Enhanced Cybersecurity Attacks: AI tools enable attackers to develop adaptive malware, bypass defensive measures, and quickly exploit vulnerabilities.
- Risks from Companies’ AI Use
- Exposure of Nonpublic Information: AI applications often handle large datasets, increasing exposure points and creating more data to secure. Sensitive data, including biometrics, can be leveraged in attacks if improperly protected.
- Supply Chain Vulnerabilities: Heavy reliance on third-party vendors for AI solutions introduces additional risks, as vendors may themselves be vulnerable to cyberattacks.
2. Controls for Mitigating AI-Related Cybersecurity Threats
The guidance outlines several key controls under Part 500 to mitigate AI-related risks:
- Risk Assessments: Covered entities should integrate AI-specific risk assessments across internal systems and third-party vendors. Updates to policies and procedures may be necessary as new risks are identified.
- Third-Party Vendor Management: Entities should conduct AI-focused due diligence on third-party providers, assessing cybersecurity practices and requiring incident notification provisions.
- Access Controls: Entities should avoid relying on traditional biometrics for MFA, since deepfakes can defeat them, and should consider AI-resistant methods such as digital certificates or physical security keys.
- Cybersecurity Training: Employees must be trained on AI risks, including recognizing AI-based social engineering tactics, such as deepfakes. Cybersecurity personnel should receive additional training on the use of AI in threat detection and mitigation.
- Data Management and Minimization: Given AI’s data-intensive nature, entities should apply data minimization practices, deleting nonpublic information that is no longer needed and maintaining accurate data inventories; a retention-sweep sketch follows this list.
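Data minimization is easier to sustain when the data inventory itself drives disposal. The sketch below assumes an inventory with a record class and collection date per entry, plus illustrative retention periods; actual record classes, periods, and the deletion mechanism would come from the entity's own retention policy.

```python
from datetime import date, timedelta

# Illustrative retention periods per record class (set by the entity's own policy).
RETENTION_DAYS = {"kyc_document": 5 * 365, "chat_transcript": 365, "model_training_extract": 180}

def records_due_for_disposal(inventory: list[dict], today: date | None = None) -> list[dict]:
    """Return inventory entries older than their class's retention period."""
    today = today or date.today()
    due = []
    for record in inventory:
        limit = RETENTION_DAYS.get(record["record_class"])
        if limit is not None and record["collected_on"] + timedelta(days=limit) < today:
            due.append(record)
    return due

inventory = [
    {"record_id": "r1", "record_class": "chat_transcript", "collected_on": date(2022, 3, 1)},
    {"record_id": "r2", "record_class": "kyc_document", "collected_on": date(2024, 1, 15)},
]
for record in records_due_for_disposal(inventory):
    print(f"Dispose of {record['record_id']} ({record['record_class']})")
```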
3. Practical Considerations for Implementing AI Controls
- AI Governance Committees: Establishing a cross-functional AI governance committee with cybersecurity representation can help in evaluating new AI projects and ensuring compliance with cybersecurity requirements.
- AI Vendor Diligence: Organizations should create standardized diligence questions and contract terms for AI vendors, addressing confidentiality, IP protection, and specific cybersecurity risks.
- Monitoring AI Use Cases: AI tools in production should be monitored regularly to confirm they are used as intended and to detect mission creep or unanticipated cybersecurity risks; a simple usage-log review sketch follows.
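Monitoring for mission creep can start from the usage logs that an internal AI gateway or proxy already emits: compare the purpose declared for each call against the use cases the governance committee has approved. The log fields and the approved-use list below are assumptions for illustration.

```python
APPROVED_USE_CASES = {"fraud_triage", "customer_support_drafting"}  # set by the AI governance committee

def flag_out_of_scope(usage_log: list[dict]) -> list[dict]:
    """Return log entries whose declared purpose is not an approved AI use case."""
    return [entry for entry in usage_log if entry.get("declared_purpose") not in APPROVED_USE_CASES]

usage_log = [
    {"user": "analyst1", "tool": "internal-llm", "declared_purpose": "fraud_triage"},
    {"user": "analyst2", "tool": "internal-llm", "declared_purpose": "credit_decisioning"},
]
for entry in flag_out_of_scope(usage_log):
    print(f"Review: {entry['user']} used {entry['tool']} "
          f"for unapproved purpose '{entry['declared_purpose']}'")
```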
Future Amendments Effective November 1, 2025
NYDFS has outlined additional Part 500 requirements that take effect on November 1, 2025, covering data retention and access management:
- Data Retention: Institutions must implement clear data retention and disposal policies for nonpublic information.
- Access Management: Organizations should adopt role-based access restrictions, periodic access reviews, and other authentication controls; a minimal role-based access sketch follows.
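Role-based restrictions and periodic access reviews both rest on the same mapping of roles to permitted actions. The sketch below shows a deny-by-default check and a review helper; the role names and permissions are illustrative assumptions, not entitlements from any specific IAM product.

```python
# Illustrative role-to-permission mapping; real entitlements would come from the IAM system.
ROLE_PERMISSIONS = {
    "teller": {"read_account"},
    "analyst": {"read_account", "read_npi"},
    "admin": {"read_account", "read_npi", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Role-based access check: deny by default, allow only what the role grants."""
    return action in ROLE_PERMISSIONS.get(role, set())

def access_review(assignments: dict[str, str]) -> list[str]:
    """Periodic review helper: list each user's role and effective permissions for sign-off."""
    return [f"{user}: role={role}, permissions={sorted(ROLE_PERMISSIONS.get(role, set()))}"
            for user, role in assignments.items()]

print(is_allowed("teller", "read_npi"))  # False: denied by default
for line in access_review({"jdoe": "analyst", "asmith": "admin"}):
    print(line)
```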
Recommendations for Compliance and Implementation
- MFA Implementation: Ensure current MFA systems are compliant, particularly for privileged accounts and third-party access.
- Cybersecurity Training: Update training programs to include both traditional and AI-based social engineering attacks.
- Formalize CISO Reporting: Implement structured reporting processes to facilitate regular board updates on cybersecurity health.
- Data and Encryption Protocols: Review data management policies to ensure compliance with encryption and retention standards.
- Test and Update IR and BCDR Plans: Conduct regular simulations to validate and improve response and recovery plans.
Conclusion
The NYDFS amendments and the recent AI risk guidance reflect the need for a comprehensive approach to cybersecurity in financial services. By updating policies, integrating AI risk assessments, and strengthening incident response, institutions can ensure alignment with regulatory expectations and mitigate emerging threats. This proactive approach will enable firms to maintain compliance and safeguard critical data effectively.