How FINRA’s 2026 report reshapes GenAI compliance

Generative AI has rapidly transitioned from a novel experiment to an embedded operational tool across financial services.

According to Saifr, firms are now deploying GenAI for everything from marketing campaigns and customer communications to AML transaction monitoring and KYC verification.

Saifr recently discussed the steps involved in building a GenAI governance framework, along with key takeaways from FINRA’s 2026 oversight report.

While the efficiency gains are considerable, this technological shift carries significant regulatory consequences that compliance teams can no longer afford to overlook.

FINRA’s 2026 Annual Regulatory Oversight Report makes the industry’s position unequivocal: the regulatory frameworks that govern traditional business activities apply just as firmly to GenAI-powered operations. Compliance functions must build governance structures ensuring that GenAI deployment aligns with established supervisory, communications, and recordkeeping obligations — without exception.

Key risks highlighted by FINRA

The report identifies several risk categories that should be front of mind for any compliance professional overseeing GenAI. Accuracy and hallucinations represent perhaps the most immediate concern. GenAI models can produce plausible-sounding but factually incorrect information with striking confidence. When such outputs appear in investor communications, marketing materials, or compliance recommendations, the potential for customer harm, unsuitable product recommendations, or misinterpretations of regulatory requirements is substantial. A chatbot that fabricates performance data or an AI system that misconstrues a regulatory rule could expose firms to enforcement actions and heightened scrutiny.

Bias and concept drift introduce more subtle but equally serious challenges. Models trained on historical data may perpetuate existing biases in areas such as marketing targeting, risk assessments, and modelling. Concept drift compounds matters further, as models trained on older datasets become progressively less reliable — particularly in fast-moving markets. An AML system trained on pre-pandemic transaction behaviour, for instance, may fail to detect emerging fraud patterns or generate an excessive volume of false positives that strain investigative teams.

The autonomy of AI agents represents an emerging frontier of risk. Advanced AI agents can independently execute tasks, make decisions, and act across multiple systems simultaneously. While this offers efficiency benefits, it also creates accountability gaps. Regulators require registered human decision-makers at critical junctures, making it essential for firms to define where human oversight is non-negotiable. Cutting across all these concerns is data sensitivity: GenAI applications often require access to proprietary trading strategies, personally identifiable customer information, and confidential business data. Inadequate data governance can lead to unauthorised disclosures, privacy violations, or cybersecurity incidents.

Existing regulations apply in full

FINRA’s position leaves no room for ambiguity. Rule 3110 supervisory obligations extend to GenAI outputs and model behaviours, and firms cannot delegate supervisory responsibility to algorithms. Rule 2210, which governs marketing content, social media posts, and customer service responses, applies equally to machine-generated material — the fact that content is AI-produced does not reduce a firm’s responsibility for its accuracy and appropriateness.

Recordkeeping obligations apply to GenAI systems as well. Firms must retain records of business-related communications, supervisory activities, and compliance reviews, including logs of AI prompts, outputs, model versions, training data sources, and human oversight actions. The ability to reconstruct decision-making processes and demonstrate supervisory review is likely to prove critical during examinations or enforcement investigations.

Building an effective governance framework

Forward-thinking compliance programmes are moving beyond reactive risk management towards comprehensive GenAI governance. A strong foundation begins with establishing a cross-functional committee to review and approve all GenAI use cases prior to deployment, evaluate ongoing performance, and maintain an enterprise-wide inventory of AI applications. Clear roles, responsibilities, and escalation procedures should be defined within this governance structure, alongside regular reporting to senior management and boards of directors.

Usage policies provide the backbone for consistent GenAI deployment. These should clearly communicate acceptable and prohibited use cases, and ensure that personnel are trained to meet disclosure requirements when AI is used in customer interactions. Branch office supervision merits particular attention: remote locations may adopt GenAI tools without proper approval or oversight, making it essential to specify who can authorise use, what training is required, and how branch managers must monitor AI-assisted activities.

Testing protocols must go well beyond basic functionality checks. Pre-deployment testing should evaluate accuracy across diverse scenarios, assess potential bias across different models, and validate performance under stress conditions. Ongoing testing should detect concept drift, identify emerging bias patterns, and confirm that model updates have not introduced new vulnerabilities. Comprehensive testing documentation — including prompt libraries, reconciliations of expected versus actual outputs, and remediation actions — is essential.
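One way to implement the expected-versus-actual reconciliation described above is a simple harness run against a curated prompt library on a defined schedule. This is a sketch under assumed names (the prompt library, the stub model, and the 5% tolerance are all illustrative):

```python
from typing import Callable

def reconcile(prompt_library: dict[str, str],
              model: Callable[[str], str],
              tolerance: float = 0.05) -> tuple[float, list[str]]:
    """Compare model outputs against expected answers from a curated
    prompt library; return the mismatch rate and the failing prompts."""
    failures = [prompt for prompt, expected in prompt_library.items()
                if model(prompt).strip().lower() != expected.strip().lower()]
    mismatch_rate = len(failures) / max(len(prompt_library), 1)
    return mismatch_rate, failures

# Hypothetical prompt library with known-correct answers.
library = {
    "Is past performance a guarantee of future results?": "no",
    "Can we promise a guaranteed 10% return in marketing copy?": "no",
}

def stub_model(prompt: str) -> str:
    """Stand-in for the firm's deployed model, for demonstration only."""
    return "no"

rate, failed = reconcile(library, stub_model)
print(f"mismatch rate: {rate:.0%}, escalate: {rate > 0.05}")
```

A rising mismatch rate over successive scheduled runs is one practical signal of the concept drift the report warns about.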

Human-in-the-loop oversight serves as a critical control against AI errors, drift, and overreach. In regulated environments where a qualified, licensed human is required for high-risk decisions — such as customer recommendations, AML alerts, complaint responses, and advertising approvals — human review remains indispensable. The reviewer must have sufficient expertise to evaluate AI outputs critically and understand how the application fits within the firm’s established supervisory framework. Procedures should define reviewer qualifications, review standards, documentation requirements, and override authority when human judgement conflicts with AI recommendations.

Cybersecurity integration is another key pillar. Updated security programmes must address AI-specific vulnerabilities, and vendor due diligence must evaluate how third-party AI providers protect firm data, what security certifications they hold, and how breaches will be communicated. Incident response plans should specifically address GenAI breach scenarios, including unauthorised access to training data or malicious manipulation of model outputs.

Documentation requirements extend across the entire GenAI lifecycle. Firms should maintain model cards describing each AI system’s purpose, capabilities, limitations, training data sources, and known biases. Version control becomes essential as models are updated or retrained, and supervisory records should capture who reviewed outputs, what deficiencies were identified, and what corrective actions were taken.
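As an illustration, such a model card could be captured as a simple structured record. The field set below mirrors the elements listed above but is an assumed layout, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ModelCard:
    """Illustrative model card; the field set is an assumption."""
    name: str
    version: str                       # bump on every retrain or update
    purpose: str
    capabilities: list[str]
    limitations: list[str]
    training_data_sources: list[str]
    known_biases: list[str] = field(default_factory=list)

card = ModelCard(
    name="marketing-copy-assistant",   # hypothetical internal system
    version="2.3.0",
    purpose="Draft first-pass marketing copy for compliance review",
    capabilities=["text drafting", "tone adjustment"],
    limitations=["may fabricate performance figures", "no live market data"],
    training_data_sources=["approved marketing archive 2018-2024"],
    known_biases=["over-represents equity products"],
)
print(card.name, card.version)
```

Freezing the record and versioning it on every retrain gives the version-controlled audit trail the report calls for.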

The case for acting now

GenAI’s potential to enhance efficiency, process intelligence, and customer service is undeniable — but these benefits carry commensurate regulatory and operational risks. Firms that rush to deploy AI without adequate governance expose themselves to customer harm, unintended operational consequences, and heightened regulatory scrutiny.

Compliance leaders should begin by inventorying all GenAI applications in use or proposed for use, conducting a comprehensive audit across all business lines with particular focus on marketing and AML/KYC functions. This process should identify who deploys each system, what data it accesses, what decisions it influences, and what oversight currently exists. Written supervisory procedures should be updated to explicitly address GenAI governance, integrating with existing compliance programmes rather than creating parallel structures.

Firms should also implement ongoing monitoring and bias checks, run reconciliations of expected versus actual outputs on defined schedules, and tailor staff training to specific roles — with more intensive programmes for personnel who directly interact with GenAI systems.

The regulatory environment for GenAI will continue to evolve, but the fundamental principle remains constant: firms are responsible for their regulatory obligations regardless of whether humans or machines execute them. Building robust governance frameworks today positions compliance programmes to adapt as both technology and regulation advance.

Read the full Saifr post here. 

Copyright © 2026 FinTech Global
