Publication

FINRA’s 2026 Oversight Report Signals a Supervisory Reckoning for Autonomous AI

Dec 17, 2025

The Financial Industry Regulatory Authority’s (FINRA) 2026 Annual Regulatory Oversight Report (the Report), released on December 9, 2025, lands at a moment when many member firms are shifting from experimental AI deployments to production-level automation. What distinguishes this year’s Report is not its treatment of Reg BI, communications, or cybersecurity, which remains extensive, but its introduction of a regulatory framing for agentic AI systems capable of executing tasks within brokerage workflows. FINRA’s analysis makes clear that once an AI system can take action, rather than merely generate content, the firm’s supervisory, books-and-records, and governance obligations shift materially.

FINRA’s New Focus: Autonomous Workflow Execution

The Report draws a sharp line between traditional GenAI tools used for search, summarization, or drafting and a new class of systems capable of initiating and completing multi-step operational tasks. FINRA describes systems that can interact with internal databases, external data sources, and functional APIs in service of a defined objective.

From a regulatory standpoint, this development moves AI from the realm of communications oversight into the core of Rule 3110 (Supervision), Rule 3120 (Supervisory Control System), and books-and-records preservation. The challenge is not speculative “hallucination,” but the risk that an automated system may perform steps a registered person would have been required to supervise, document, or validate.

The Supervisory Implications: Four Categories of Elevated Risk

FINRA’s discussion of AI automation maps onto traditional supervisory obligations rather than the risk taxonomy familiar from prior GenAI coverage. The core regulatory concerns fall into four categories.

1. Supervisory Substitution Risk (Rules 3110 and 3120)

AI-driven workflow engines may select intermediate actions that are not expressly authorized—querying systems, pulling data, or initiating downstream triggers—in ways that effectively substitute for human supervisory review. FINRA warns that firms must treat these behaviors as subject to the same controls applicable to any associated person performing a comparable function.

2. Books-and-Records Integrity Risk (Rule 4511 and Exchange Act Rule 17a-4)

Once an automated system performs a sequence of actions—data retrieval, analysis, and an eventual recommendation or alert—the firm must be able to reconstruct the full chain of activity. FINRA flags a growing gap between the complexity of automated systems’ internal decision processes and the trace logs firms currently retain. Outputs alone are insufficient; firms must preserve the underlying telemetry that demonstrates how the system reached its end state.

3. Objective-Function Drift (Reg BI and Market Integrity Rules)

Systems optimized for speed, efficiency, or performance can inadvertently take steps inconsistent with customer-specific obligations. FINRA highlights the risk that an automated system may reach a superficially compliant result through noncompliant intermediate conduct. As firms increase automation in surveillance, alert triage, or portfolio-related workflows, objective-function design becomes a supervisory issue with direct consequences under Reg BI.

4. Competence Simulation Risk

FINRA notes that systems may perform domain-specific tasks—tax optimization, trade exception handling, suitability reviews—with a degree of procedural confidence that exceeds their actual expertise. This creates supervisory friction: business units may rely on automated outputs when the system’s underlying reasoning is neither validated nor reproducible.

Cybersecurity: A Converging Threat Surface

The Report connects these automation themes to an increasingly hostile cyber environment. FINRA identifies several trends of supervisory significance:

  • Identity-spoofing attacks, including deepfake-enabled intrusions, which call for enhanced liveness detection and anti-spoofing protocols.
  • QR-based phishing (“quishing”), which exploits channels that circumvent traditional email security controls.
  • Rapidly mutating malware, including AI-generated code that evolves faster than static scanning tools can detect.

For firms deploying AI to automate security controls or incident-response processes, these risks intersect with the supervisory issues above: an automated system that reacts to a threat without preserving its process-level reasoning can compromise both security and regulatory recordkeeping.

Compliance Architecture for 2026: Strategic Recommendations

Firms positioned to expand or initiate automation programs should recalibrate their compliance architecture along five axes that track directly to the Report.

1. Expand Supervisory Programs to Cover Automated Actors

Any system capable of taking operational steps must be incorporated into Rule 3110/3120 supervisory frameworks. This includes defining authorized actions, required escalation points, and supervisory triggers tied to confidence scores or anomaly detection.
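
By way of illustration only, the gating logic might resemble the following Python sketch. Every name in it (AgentAction, SupervisoryGate, the numeric thresholds) is a hypothetical construct for this publication, not a FINRA-prescribed control; the point is simply that unauthorized actions are blocked outright and low-confidence or anomalous ones are routed to a registered principal for review.

    from dataclasses import dataclass

    @dataclass
    class AgentAction:
        name: str             # e.g., "query_account_db" (hypothetical action name)
        confidence: float     # model-reported confidence, 0.0 to 1.0
        anomaly_score: float  # output of the firm's anomaly detector, 0.0 to 1.0

    class SupervisoryGate:
        def __init__(self, authorized_actions, min_confidence=0.90, max_anomaly=0.20):
            self.authorized = set(authorized_actions)
            self.min_confidence = min_confidence
            self.max_anomaly = max_anomaly

        def review(self, action):
            # Actions outside the defined authorization set are blocked outright.
            if action.name not in self.authorized:
                return "BLOCK"
            # Low confidence or anomalous behavior escalates to a human supervisor.
            if (action.confidence < self.min_confidence
                    or action.anomaly_score > self.max_anomaly):
                return "ESCALATE_TO_PRINCIPAL"
            return "ALLOW"

    gate = SupervisoryGate({"query_account_db", "generate_alert"})
    print(gate.review(AgentAction("query_account_db", 0.95, 0.05)))    # ALLOW
    print(gate.review(AgentAction("send_customer_email", 0.99, 0.0)))  # BLOCK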

2. Implement Full-Chain Telemetry (“Process Reconstruction Records”)

Traditional output logging cannot support reconstruction obligations. Firms should implement system-level audit trails capturing intermediate tool calls, data fetches, and decision pathways. These logs should be treated as regulatory records subject to Rule 17a-4 retention requirements.
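
A minimal sketch of what such a record might look like follows, assuming a simple hash-chained, append-only log; the field names and chaining scheme are this publication’s illustration, not a format specified by Rule 17a-4 or the Report.

    import hashlib
    import json
    import time

    class ProcessReconstructionLog:
        """Append-only record of every intermediate step an automated system takes."""

        def __init__(self):
            self.entries = []
            self.prev_hash = "0" * 64  # genesis value for the hash chain

        def record(self, step_type, detail):
            # Each entry is chained to its predecessor so later tampering is detectable.
            entry = {
                "timestamp": time.time(),
                "step_type": step_type,  # e.g., "tool_call", "data_fetch", "decision"
                "detail": detail,
                "prev_hash": self.prev_hash,
            }
            self.prev_hash = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = self.prev_hash
            self.entries.append(entry)

    log = ProcessReconstructionLog()
    log.record("tool_call", {"tool": "account_db.query", "args": {"account": "A-1234"}})
    log.record("decision", {"outcome": "generate_alert", "basis": "threshold_breach"})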

3. Review Objective and Reward Functions Through a Compliance Lens

Firms developing internal models should subject objective-function design to compliance testing. Control groups should evaluate whether the system can reach a compliant result through noncompliant means, particularly in surveillance, exception handling, and customer-facing automations.
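
One way to operationalize that test, sketched below on the assumption that intermediate steps are already captured in an audit log of the kind described above: replay the recorded trajectory and fail the run if any step falls within a prohibited set, regardless of how compliant the final output appears. The step names and rule set here are hypothetical.

    # Steps that must never appear in a trajectory, however compliant the outcome.
    PROHIBITED_STEPS = {"access_restricted_account", "bypass_suitability_check"}

    def trajectory_is_compliant(trajectory):
        """trajectory: ordered list of step names replayed from the audit log."""
        violations = [step for step in trajectory if step in PROHIBITED_STEPS]
        return len(violations) == 0, violations

    # A superficially compliant result reached through a noncompliant shortcut:
    run = ["fetch_holdings", "bypass_suitability_check", "issue_recommendation"]
    ok, violations = trajectory_is_compliant(run)
    print(ok, violations)  # False ['bypass_suitability_check']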

4. Strengthen Vendor Risk Diligence for Automation Capabilities

Third-party platforms increasingly embed autonomous execution features. Vendor diligence must explicitly probe whether any feature can write to internal systems, initiate communications, or trigger downstream actions. Contractual controls should address permitted actions, auditability, and real-time monitoring rights.

5. Reevaluate Incident Response Planning

Automation changes incident response obligations. If an AI system autonomously responds to a security threat, firms must ensure that the system’s actions are logged, supervised, and reversible—otherwise a remediation step may itself create supervisory or recordkeeping exposure.
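
The control can be as simple as pairing each automated response with an undo action and logging both before and after execution, as in the hypothetical sketch below; the function names and firewall example are illustrative only.

    actions_log = []

    def remediate(name, do, undo):
        """Execute an automated remediation step, logging it and retaining its undo."""
        actions_log.append({"action": name, "status": "started"})
        do()
        actions_log.append({"action": name, "status": "completed", "reversible": True})
        return undo  # retained so a supervisor can reverse the step on review

    def block_ip():
        print("firewall: block 203.0.113.7")

    def unblock_ip():
        print("firewall: unblock 203.0.113.7")

    rollback = remediate("block_suspicious_ip", block_ip, unblock_ip)
    rollback()  # if review finds a false positive, the step can be reversed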

Conclusion

FINRA’s 2026 Oversight Report signals a transition point: AI systems are no longer peripheral communications tools but emerging operational actors within the firm’s supervisory environment. As the line between software and supervised personnel blurs, the regulatory burden shifts from content review to governance of automated conduct. Firms that revise their supervisory and recordkeeping frameworks now will be best positioned to deploy automation at scale in 2026 without incurring avoidable regulatory risk.

About Snell & Wilmer

Founded in 1938, Snell & Wilmer is a full-service business law firm with more than 500 attorneys practicing in 17 locations throughout the United States and in Mexico, including Phoenix and Tucson, Arizona; Los Angeles, Orange County, Palo Alto and San Diego, California; Denver, Colorado; Washington, D.C.; Boise, Idaho; Las Vegas and Reno-Tahoe, Nevada; Albuquerque, New Mexico; Portland, Oregon; Dallas, Texas; Salt Lake City, Utah; Seattle, Washington; and Los Cabos, Mexico. The firm represents clients ranging from large, publicly traded corporations to small businesses, individuals and entrepreneurs. For more information, visit swlaw.com.

©2025 Snell & Wilmer L.L.P. All rights reserved. The purpose of this publication is to provide readers with information on current topics of general interest and nothing herein shall be construed to create, offer, or memorialize the existence of an attorney-client relationship. The content should not be considered legal advice or opinion, because it may not apply to the specific facts of a particular matter. As guidance in areas is constantly changing and evolving, you should consider checking for updated guidance, or consult with legal counsel, before making any decisions.

Media Contact

Olivia Nguyen-Quang

Associate Director of Communications
media@swlaw.com
714.427.7490