Artificial intelligence (AI) and other emerging technologies like crypto assets are creating new opportunities but also new risks for investors. In response, regulators such as FINRA, NASAA, and the SEC recently issued an investor alert warning of an increase in bad actors seeking to capitalize on the hype around these innovations.

Below are some of the “themes” regulators have picked up on:

1.) Unregistered/unlicensed investment platforms claiming to use AI

The first trend regulators are seeing is an increase in unregistered online investment platforms promoting AI trading systems with unrealistic, guaranteed-return claims. These platforms tout proprietary AI trading systems that “can’t lose” or can pick “guaranteed winners” to lure investors in. Claims like these are a major red flag, indicative of potential fraud seeking to capitalize on AI hype.

2.) Bad actors are using hype around AI to lure investors into schemes

Regulators caution that the excitement around AI enables deception. Unethical companies may exploit AI buzzwords, making inflated claims about guaranteed profits. A common example is the “pump and dump” scheme, which artificially inflates a stock’s price through hype, celebrity endorsements, and misinformation before the schemers sell off their shares. The resulting crash leaves everyday investors with the losses. Regulators warn investors to be wary of outsized AI claims used to manipulate the market.

3.) Social media has become more saturated with financial content

Regulators have also witnessed an increasing number of investors using social media to research investment opportunities and connect with others. Influencers have taken notice, and social media has become more saturated with financial content than ever before, leading to the rise of the financial influencer or “finfluencer.” Regulators remind investors that scrutiny and accountability around social media recommendations are essential.

4.) Fraudsters using AI technology to scam investors

Regulators have also found that fraudsters are now using AI itself to scam investors. Deepfake audio and video can impersonate CEOs, government officials, or even family members in an attempt to manipulate investors. AI-generated content can also contain fabricated or inaccurate information, leading investors to make poor decisions.

Based on these trends, what can compliance do to stay ahead of the curve?

  • Review supervisory procedures to ensure adequate oversight measures are in place around new digital tools like AI analytics or algorithmic trading platforms. Train staff not just on proper usage but also on fraud detection.
  • Closely monitor messaging and marketing content related to AI, digital assets or metaverse offerings. Ensure adherence to advertising compliance standards, with special attention to avoiding overpromising or guarantees.
  • Set up crosscheck monitoring procedures to verify licenses and registrations of any new fintech partners, AI firms, data providers or software vendors.
  • Keep pace with deepfake advancements in audio/video technology to update authentication and cybersecurity protocols as needed to prevent impersonation attempts.
  • Expand social media policies to provide guidance around emerging technologies, managing hype themes, verifying questionable third-party posts, and the proper use of celebrity endorsements.
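As a toy illustration of the marketing-review point above, a compliance team might seed an automated first-pass scan of ad copy with the very phrases regulators call out (“guaranteed winners,” “can’t lose”). The Python sketch below is a hypothetical starting point, not a real surveillance tool; the pattern list is illustrative, and production systems would use far richer lexicons, NLP, and human review:

```python
import re

# Illustrative (not exhaustive) patterns for promissory language that
# advertising-compliance reviews generally flag. These phrases are the
# red-flag examples named in the regulators' alert.
PROHIBITED_PATTERNS = [
    r"\bguaranteed\s+(returns?|profits?|winners?)\b",
    r"\bcan'?t\s+lose\b",
    r"\brisk[- ]free\b",
    r"\bno\s+risk\b",
]

def flag_promissory_claims(text: str) -> list[str]:
    """Return any prohibited phrases found in a piece of marketing copy."""
    hits = []
    for pattern in PROHIBITED_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append(match.group(0))
    return hits

ad_copy = "Our proprietary AI system picks guaranteed winners - it can't lose!"
print(flag_promissory_claims(ad_copy))  # → ['guaranteed winners', "can't lose"]
```

A keyword screen like this only surfaces candidates for human review; it cannot judge context, so flagged copy should route to a compliance analyst rather than be auto-rejected.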

Emerging digital trends can transform financial services in many positive ways, but they also open the door to significant risks. Compliance programs that proactively adapt to the evolving AI and tech landscape will be better positioned to enable innovation while protecting the firm and its clients. Monitor regulatory warnings closely, and don’t hesitate to contact regulators for guidance on mitigating new threats.