Artificial intelligence is already embedded across financial services. Employees are drafting emails, summarizing regulations, and creating content with tools that didn’t exist just a few years ago.
That shift hasn’t gone unnoticed by regulators.
FINRA and the SEC have both made it clear that while AI may be new, expectations around supervision, recordkeeping, and investor protection are not. Using AI doesn’t change your compliance obligations. In many cases, it raises the stakes.
The question isn’t whether your firm is using AI; it’s whether that use can stand up to regulatory scrutiny.
What Regulators Are Signaling
There is no single “AI rulebook” yet, but the direction from regulators is clear.
From FINRA’s perspective, it doesn’t matter how content is created; the same rules still apply. Whether a communication is written by a registered representative or generated by AI, it is subject to the same standards. That includes supervision, content review, and recordkeeping. In its 2026 Annual Regulatory Oversight Report, FINRA also called out generative AI as an area requiring strong governance, testing, and oversight.
The SEC is approaching AI from a similar angle, with a strong focus on disclosures and investor protection. One area drawing particular attention is so-called “AI washing,” where firms overstate or misrepresent how they use AI. If a firm claims AI-driven capabilities, those claims must be accurate and supportable.
Across both regulators, the message is consistent: firms are responsible for the outputs of the tools they use, regardless of how those outputs are generated.
Where AI Risk Is Showing Up
AI is often introduced as a productivity tool, but in practice, it touches many areas that fall directly under compliance oversight.
Common use cases already in play include drafting client communications, summarizing regulatory guidance, generating marketing content, and assisting with internal documentation. These are all areas where accuracy, supervision, and record retention matter.
That creates several immediate risks:
- Inaccurate or misleading content: AI-generated responses can sound confident while being incorrect or incomplete
- Unapproved communications: Employees may use AI to draft client-facing content outside of established review processes
- Data privacy concerns: Sensitive firm or client information entered into public tools may be exposed or retained externally (see the sketch at the end of this section)
- Recordkeeping gaps: AI-assisted communications may not be captured or retained in accordance with books and records requirements
- Inconsistent outputs: The same prompt can produce different answers, making standardization and supervision more difficult
There is also a growing concern around AI-enabled fraud, including more sophisticated phishing attempts and impersonation tactics that can bypass traditional controls.
Put simply, AI can produce content that appears compliant without actually being compliant. That’s where firms get into trouble.
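To make the data privacy risk above concrete, some firms route prompts through an internal gateway that screens for client identifiers before anything reaches a public AI tool. The Python sketch below is a minimal, hypothetical version of that pattern; the pattern list and function name are illustrative assumptions rather than a reference to any specific product, and pattern matching alone will not catch every possible disclosure.

```python
import re

# Illustrative patterns only; a real firm would tune these to its own
# data formats (account numbers, CRM identifiers, and so on).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings): block prompts that appear to contain
    client identifiers before they reach an external AI tool."""
    findings = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(prompt)]
    return (len(findings) == 0, findings)

allowed, findings = screen_prompt(
    "Draft a follow-up email for the client with account 123456789"
)
if not allowed:
    print(f"Prompt blocked; possible sensitive data: {findings}")
```

A gateway like this is a screening aid, not a substitute for policy: it flags obvious identifiers so a human or an approved workflow can intervene before data leaves the firm.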
What Firms Should Have in Place
Regulators are not prohibiting the use of AI. They are expecting firms to manage it like any other risk area.
That starts with governance.
Firms should have clearly defined policies outlining how AI tools can be used, who can use them, and under what conditions. This should be reflected in written supervisory procedures and supported by internal controls.
Oversight is equally important; AI outputs should not be treated as final. A human review process should be in place, particularly for anything client-facing or regulatory in nature.
Vendor due diligence is another key area. If your firm is using third-party AI tools, you are still responsible for how those tools operate. That includes understanding how data is handled, what safeguards are in place, and how outputs are generated.
Recordkeeping expectations have not changed. If AI is used to create or assist with communications, those communications may still need to be retained and supervised like any other business-related content.
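As a rough illustration of what capture and retention can look like in practice, the hypothetical Python sketch below wraps calls to an approved AI tool and writes every prompt and response to an append-only log with a timestamp and user ID. The function and field names are assumptions made for illustration; actual books-and-records retention typically requires archival storage that meets regulatory requirements, not a local file.

```python
import json
from datetime import datetime, timezone

def call_with_retention(user_id: str, prompt: str, call_ai_tool) -> str:
    """Call an approved AI tool and archive the exchange.
    call_ai_tool is a placeholder for whatever client the firm uses."""
    response = call_ai_tool(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
    }
    # In practice this would go to a compliant archive, not a local file.
    with open("ai_communications_log.jsonl", "a") as archive:
        archive.write(json.dumps(record) + "\n")
    return response

# Example usage with a stubbed-out tool:
reply = call_with_retention("rep-1042", "Summarize the policy update",
                            lambda p: "stub response")
```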
Training also plays a critical role. Employees need to understand not just how to use AI tools, but how to use them appropriately within a regulated environment.
Practical Ground Rules for AI Use
Most AI risk doesn’t come from strategy; it comes from how these tools are used in real time.
A few practical starting points:
- Do not enter client or sensitive firm data into public AI tools
- Treat AI-generated content as a starting point, not a finished product
- Require review and approval before any client-facing content is used or distributed
- Define which AI tools are approved and make it clear what is off-limits
- Capture and retain AI-assisted communications where required
- Watch for outputs that sound confident but lack accuracy or context
- Train employees on when AI can be helpful and when it introduces risk
These don’t need to be complex, but they do need to be clear, documented, and consistently followed.
Using AI Without Creating New Risk
AI can absolutely provide value when used appropriately.
Firms are already seeing benefits in areas like faster policy drafting, more efficient research, and improved data analysis. Used correctly, AI can support compliance teams by reducing manual workload and surfacing insights more quickly.
But those benefits only hold up if the right controls are in place.
As adoption increases, compliance programs will need to keep pace. That means having systems that support oversight, track activity, and create a clear record of how tools are being used across the firm.
AI doesn’t lower the bar. It changes how the work gets done, but the responsibility stays the same: firms need to be able to demonstrate that their use of AI aligns with the same standards of supervision, transparency, and accountability that have always applied.
For firms looking to get ahead of that curve, training is one of the most immediate levers available. Quest CE recently released new Firm Element courses designed to help financial professionals understand both the risks and practical applications of AI, including:
- AI Note-Taking Tools in Financial Services: Compliance Risks and Responsibilities
- Artificial Intelligence Oversight for Supervisors
These courses are designed to support firms as they build policies, set expectations, and train employees on responsible AI use. Discover our full Firm Element catalog or Schedule a Demo with one of our product experts to see how to implement a Firm Element program for your firm.

