Executive Summary: The Rise of AI Washing as an Enforcement Priority
The Securities and Exchange Commission (SEC), Department of Justice (DOJ), and Federal Trade Commission (FTC) have pursued “AI washing” enforcement actions against companies that have made materially false or misleading statements about their artificial intelligence (AI) capabilities. These enforcement actions have applied longstanding antifraud principles to AI-related corporate disclosures—a predictable evolution in regulation whereby authorities engage in a “regulation by enforcement” strategy while comprehensive AI-specific regulations remain under consideration. In the securities regulation arena, the SEC is sending a clear message that it is not waiting for additional regulation and believes that existing securities laws provide ample authority to prosecute misleading AI claims.
Enforcement implications extend broadly across all industries, given that many organizations have been adopting AI capabilities and disclosing their efforts to shareholders. Consequently, chief compliance officers, general counsels, corporate boards, and other advisors to SEC-regulated entities must add AI washing to the list of compliance issues that corporate governance frameworks must address. Specifically, entities must ensure that their policies, procedures, and controls are adequate to validate AI-related claims in an environment where public messaging on technology is highly competitive, rapidly evolving, and often poorly understood, even by senior management.
Evolution of SEC AI Washing Cases (2024-2025)
Currently, AI-related representations are being disseminated broadly to investors in securities filings, earnings calls, and other public statements. Misrepresentations generally involve exaggerated claims about AI system sophistication, mischaracterizations of AI integration into business operations, and unfounded assertions about competitive advantages derived from AI technologies.
The SEC’s AI washing enforcement began in March 2024 with simultaneous actions against two investment advisory firms—Delphia (USA) Inc. and Global Predictions Inc.—both of which settled charges for making false and misleading statements about their use of AI in investment processes. The complaints detailed patterns of exaggeration—not just isolated statements—suggesting that the risk of an SEC action for AI washing will be higher for systemic compliance failures. Both firms agreed to cease-and-desist orders, censures, and civil penalties—typical remedies in enforcement actions—enabling the Commission to establish precedent quickly and setting up the framework for future enforcement cases involving AI-related claims.
The expansion to public companies came soon after. In January 2025, the SEC charged Presto Automation Inc., a formerly Nasdaq-listed company, marking the first AI washing enforcement action against a public company. The complaint alleged that Presto significantly overstated the capabilities and operational effectiveness of its Presto Voice AI product for restaurant drive-throughs. The SEC conducted a detailed analysis comparing public representations against actual system performance data, finding significant gaps between claimed and actual functionality. The SEC also noted that the company failed to disclose that the AI speech recognition technology it deployed was owned and operated by a third party, and that it involved significant human intervention, contrary to what was marketed as an automated AI solution.
In April 2025, in parallel cases, the SEC and DOJ charged the founder and former CEO of Nate Inc. with fraudulently raising over $42 million by falsely claiming Nate’s shopping app used AI to process transactions. In reality, the company relied on manual workers to complete purchases, and the CEO fabricated the app’s success metrics, claiming automation rates above 90% when actual rates were essentially zero.
Enforcement Mechanics and Investigation
The SEC’s AI washing investigations demonstrate an adaptation of traditional securities fraud analysis to AI-related technical contexts. The Commission’s investigative approach is increasingly incorporating digital forensics and analysis of whether a product can truly be classified as an AI solution. This technical validation capability represents an evolution in SEC methodology, requiring internal technical expertise or specialized consultants to evaluate AI architecture, training data, algorithm effectiveness, and system outputs.
Central to these investigations is whether the disclosed information was material—i.e., whether a reasonable investor would consider the relevant representation important in making an investment decision. The SEC has signaled that AI-related claims meet materiality thresholds because they relate to competitive positioning, operational efficiency, and revenue generation capabilities—factors investors weigh when making buy, hold, and sell decisions. The investigations undertaken by the SEC also underscore that companies will not be insulated from prosecution when boilerplate AI risk warnings are coupled with specific, known, but undisclosed risks or limitations in AI systems.
Key Compliance Takeaways
The SEC’s AI washing cases demonstrate that companies should evaluate AI disclosures under existing Sarbanes-Oxley internal and disclosure control frameworks. However, because AI risks extend beyond periodic SEC filings and earnings releases, management and boards should extend disclosure controls to social media and other forms of public communication.
Management is a critical line of defense and must validate AI representations as technically sound and factual before public dissemination. This might also include risk review committees for AI claims with cross-functional participation from legal, compliance, engineering, and finance teams.
Audit committees also have oversight responsibilities and should consider how to monitor the accuracy of AI-related representations as part of their oversight of financial reporting and disclosure controls.
Best Practices
• Policies and Procedures. Companies should develop comprehensive AI governance policies that establish clear authority, responsibility, and accountability for AI-related claims and disclosures. This process should include senior management authorization for any AI-related disclosure, sign-off from technical personnel who can substantiate capabilities, and legal and compliance review to consider materiality and other securities law implications.
• Catalog AI Communications. Identify all past, current, and planned AI-related statements across every public communication channel. The catalog should include every AI capability claim, implementation timelines, and corresponding performance metrics.
• Audit Claims. Validate claims catalogued against documentary evidence and technical reality. Identify any material gaps and determine how to address them—e.g., remediation, corrective disclosure, etc.
• Technical Testing. Regular AI capability validation testing should become routine for systems mentioned in public disclosures. Consider third-party technical assessments to independently validate the performance and security of AI systems.
• Training. Employee training programs should address AI washing risks and remind employees of whistleblowing channels, should concerns arise about AI capability misrepresentations.
Future Enforcement Trends and Risks
SEC AI washing investigations will likely continue, as will parallel or joint investigations with the DOJ. The FTC will also likely continue investigating and filing AI washing cases.
The SEC has already targeted investment advisers, a small public company, and a technology startup. Larger, well-known public companies—especially those in tech, finance, and healthcare that frequently discuss AI in their investor updates—should expect close examination of their AI claims.
Public companies should be especially cautious about AI claims made during earnings calls, as executives may be tempted to answer questions spontaneously after prepared remarks, raising the risk of inadvertent misrepresentations when executives feel pressure to showcase their AI progress to investors.
With new technology comes risk, and AI is one risk area that cannot be ignored. Taking steps today to validate claims can help avoid regulatory scrutiny in the near- and long-term.
If you have any questions or would like to discuss this topic please reach out to Howard Scheck or Luke Tenery.
To receive StoneTurn Insights, sign up for our newsletter.