July 26, 2024

Securities Law / Securities Litigation Advisory | Navigating AI-Related Disclosure Challenges: Securities Filing, SEC Enforcement, and Shareholder Litigation Trends

Executive Summary

Companies’ accelerating reliance on artificial intelligence (AI) has drawn heightened scrutiny from the Securities and Exchange Commission (SEC) and shareholder plaintiffs. Our Securities Litigation Group underscores what companies need to know about disclosing AI use and related risks.

  • The SEC intends to focus on public companies’ disclosures of AI use and risks as those companies increasingly address AI in their routine filings
  • The SEC is concerned about risks to investors, including “AI washing,” or making inflated or false claims about AI in business
  • Shareholder plaintiffs have also turned their attention to companies’ AI-related disclosures 
 

In recent years, as the use of artificial intelligence (AI) has exploded across industries, public companies, their regulators, and their shareholders have begun to focus on the disclosure of AI usage and related risks. A growing number of public companies include AI-related risk factor disclosures in their annual Form 10-K filings, and the U.S. Securities and Exchange Commission (SEC) has likewise indicated its intention to focus on public statements regarding the use of AI.

In March, the SEC brought its first two enforcement actions arising out of disclosures related to AI use by financial institutions. These enforcement actions follow numerous statements by the SEC chair and the director of the Division of Enforcement about the importance of full and accurate disclosures concerning AI, as well as the release of the SEC’s proposed rule on the use of AI by broker-dealers and investment advisers.

Shareholder plaintiffs have followed suit, filing several complaints alleging novel theories related to the accuracy of AI-related disclosures, some of which have survived motions to dismiss.

Given the rapid technological and regulatory developments in the field of AI, companies should seize this moment to assess their AI-related risks and disclosures.

SEC Remarks Regarding AI and AI Washing

Early this year, SEC Chair Gary Gensler spoke about the “tremendous opportunities” made possible by AI but simultaneously warned that AI models may produce unpredictable and inaccurate outcomes (e.g., hallucinations) that can reinforce historical biases. He warned that AI hallucinations may pose problems for the brokers and advisers that regularly use AI to inform their investment decisions and may additionally threaten the stability of public markets.

Gensler also raised concerns about various risks specific to investors, including “AI washing,” the practice of making inflated or false claims about AI in business. In particular, Gensler warned that issuers must have a reasonable basis for claims about their use of AI and must disclose that basis to investors. He also advised against “boilerplate” AI-related risk disclosures and recommended that issuers detail company-specific risks.

In separate remarks, the director of the SEC’s Division of Enforcement, Gurbir Grewal, compared the present fascination with AI to similar waves of investor interest in cryptocurrency, ESG investing, and SPACs, noting that “elevated investor interest in rapidly developing technology or offerings often leads to elevated investor risk.” Grewal further stressed the importance of full and accurate disclosure of AI-related risks and encouraged registrants to (1) educate themselves about AI risks relevant to their business; (2) engage with personnel across business units to learn how AI intersects with their operations, strategies, and risks; and (3) develop and implement appropriate AI-related policies and procedures. Gensler’s and Grewal’s comments suggest the SEC is, and will remain, increasingly focused on issuers’ disclosures about AI use and associated risks.

Most recently, on June 24, 2024, the director of the SEC’s Division of Corporation Finance, Eric Gerding, highlighted AI as a disclosure priority for the SEC, noting that the SEC has “observed a significant increase in the number of companies that mention artificial intelligence in their annual reports.” Gerding explained that the Division of Corporation Finance will consider how companies are describing AI-related opportunities and risks in 2024, including whether the company:

  • clearly defines what it means by artificial intelligence and how the technology could improve the company’s results of operations, financial condition, and future prospects;
  • provides tailored, rather than boilerplate, disclosures, commensurate with its materiality to the company, about material risks and the impact the technology is reasonably likely to have on its business and financial results;
  • focuses on the company’s current or proposed use of artificial intelligence technology rather than generic buzz not relating to its business; and
  • has a reasonable basis for its claims when discussing artificial intelligence prospects.

Analysis of AI-Related Risk Disclosures in Forms 10-K

While the SEC’s focus on AI disclosures will likely encourage more companies to disclose AI-related risks, issuers have already begun incorporating such risk disclosures into their annual Form 10-K filings: over 40% of S&P 500 companies included AI-related risk disclosures in their 2023 Forms 10-K, and mentions of AI during earnings calls rose by 77% in the fourth quarter of 2023.

Additionally, our qualitative analysis of AI-related risk disclosures in fiscal year 2023 Forms 10-K issued by Fortune 100 companies reveals several notable emerging trends. For example, we found that 46% of Fortune 100 companies included AI-related risk disclosures in their Forms 10-K and that such disclosures fall broadly into five buckets: (1) cybersecurity risk; (2) regulatory risk; (3) ethical and reputational risk; (4) operational risk; and (5) competition risk. These risk disclosures were not limited to a certain industry or sector, with a broad range of public companies making risk disclosures that fall into one or more of these buckets.

Cybersecurity and fraud-related risk disclosures detailed novel cyber threats presented by the deployment of AI, such as criminal threat actors’ use of AI technology to perpetrate cyberattacks. For instance, a pharmaceutical company explained that “as Artificial Intelligence … continues to evolve, cyber-attackers could also use AI to develop malicious code and sophisticated phishing attempts.” An investment banking registrant warned that “we are exposed to risks arising from the use of AI technologies by bad actors to commit fraud and misappropriate funds and to facilitate cyberattacks. Generative AI, if used to perpetrate fraud or launch cyberattacks, could result in losses, liquidity outflows or other adverse effects at a particular financial institution or exchange.” Some 54% of the Forms 10-K in our sample highlighted AI-related cybersecurity or fraud risks, more than any other AI-related risk category.

Regulatory risk disclosures outlined potential liabilities related to the regulation of AI by various agencies and jurisdictions. For example, one automotive manufacturer disclosed that “regulatory actions seeking to impose significant financial penalties for noncompliance and/or legal actions (including pursuant to laws providing for private rights of action by consumers) could be brought against us in the event of a data compromise, misuse of consumer information, or perceived or actual non-compliance with data protection, privacy, or artificial intelligence requirements. The rapid evolution and increased adoption of artificial intelligence technologies may intensify these risks.” Perhaps as a result of comments by numerous regulators and President Joe Biden’s October 30, 2023, Executive Order directing agencies to issue guidance and set standards for the procurement and use of AI, more than half of the firms we analyzed detailed risks associated with the evolving legal and regulatory landscape for AI.

Ethical and reputational risk disclosures highlighted social and ethical issues associated with AI, including concerns that the technology may infringe on privacy rights, reinforce existing prejudices and biases, and violate intellectual property rights. These disclosures sometimes acknowledge that generative AI may produce startling results or hallucinate (i.e., present false or misleading information as factual). Companies noted that public debate over the development, use, and potential misuse of AI may in turn harm their reputation among consumers and employees. The risk, a technology company noted, is that “[u]nintended consequences, uses, or customization of our AI tools and systems may negatively affect human rights, privacy, employment, or other social concerns, which may result in claims, lawsuits, brand or reputational harm, and increased regulatory scrutiny, any of which could harm our business, financial condition, and operating results.”

Other companies disclosed a risk that third parties could use AI to impersonate their executives and imitate their communications. For example, an insurance company disclosed that “[m]alicious actors could use AI to create deepfakes of the Company’s executives or manipulate financial documents, leading to loss of customer trust and significant reputational damage. Moreover, the use of AI trained on inaccurate data sets could result in inaccurate or biased decisions.” Just over 41% of the firms detailed ethical and reputational risks associated with AI.

Operational risk disclosures discussed unanticipated disruptions to systems, potential loss or corruption of data, implementation delays, and cost overruns that could stem from underlying defects in the AI tools companies use. For example, one insurance company disclosed that “[t]he development and adoption of artificial intelligence …, including generative artificial intelligence …, and its use and anticipated use by us or by third parties on whom we rely, may increase the [risk that our operations, systems, or data, or those of third parties on whom we rely, may be disrupted] or create new operational risks that we are not currently anticipating.” Notably, the disclosure recognized that risks are not limited to AI use by the registrant itself and that third-party AI use may create operational risks for the registrant as well. Approximately 37% of the Forms 10-K we analyzed included AI-related operational risk disclosures.

Competition risk disclosures warned that rapid adoption of AI may alter competitive advantages and erode issuers’ market share. For example, one consumer electronics company acknowledged that the spread of AI might lead to “the emergence of new products and categories, the rapid maturation of categories, cannibalization of categories, changing price points and product replacement and upgrade cycles,” to the detriment of the business. Roughly a third of the AI-related disclosures we reviewed addressed competition risks.

Given the rapid technological and regulatory developments in the AI space, it remains to be seen how these disclosures will develop over time.

Recent SEC Enforcement Actions

The SEC has recently begun bringing enforcement actions against firms for allegedly making false or misleading statements about their use of AI. To date, it has focused on small firms that represented they were using AI but failed to implement the technology they advertised.

On March 18, 2024, the SEC issued cease-and-desist orders memorializing unrelated settlement agreements it reached with two investment advisers. The SEC asserted that both companies made false and misleading statements about their use of AI in violation of Section 206(2) of the Investment Advisers Act of 1940 (Advisers Act), which makes it unlawful for any investment adviser, directly or indirectly, to “engage in any transaction, practice or course of business which operates as a fraud or deceit upon any client or prospective client.” Scienter (i.e., fraudulent intent) is not required to establish a violation of Section 206(2), and liability may stem from a negligent violation of the law.

According to the orders, both companies’ websites asserted that they used AI models to enhance their consumer products, but when questioned by the SEC, the companies confirmed that they either (1) did not actually utilize an AI model; or (2) could not provide documents supporting their AI usage. The SEC was especially critical of the limited policies each company implemented for the review of advertisements and marketing efforts. Both firms agreed to censure by the SEC and civil penalties of $175,000 and $225,000.

Since the enforcement actions, one company has ceased its advisory services. The second continues to operate but agreed to remove its AI-related advertisements, retain a compliance consultant to review its marketing materials, provide its employees with compliance training, and reimburse the advisory fees its clients paid.

On June 11, 2024, the SEC brought an “AI-washing” enforcement action against the CEO and founder of a now-shuttered AI recruitment startup. The SEC charged the founder with violations of Section 17(a) of the Securities Act of 1933 and Section 10(b) of the Securities Exchange Act of 1934 (Exchange Act) and Rule 10b-5 thereunder. The complaint alleges the startup represented that it used AI to help its customers find diverse and underrepresented candidates in furtherance of the customers’ diversity, equity, and inclusion (DEI) goals. According to the complaint, the company did not actually use the AI-based technology that it advertised. The SEC alleges the founder used the false representations to raise $21 million from investors. The founder is also alleged to have made false and misleading statements about the number of customers and candidates on the platform, as well as about the company’s revenue. Grewal described the alleged scheme as “an old school fraud using new school buzzwords like ‘artificial intelligence’ and ‘automation.’”

Although all three enforcement actions targeted companies that could not provide any support for their claims of AI usage, it remains to be seen whether the SEC will expand its scrutiny to how larger, more established firms use AI for trading, including whether it will focus on “new” AI technologies or well-established ones.

Proposed Rule for Broker-Dealers and Investment Advisers

In addition to bringing a limited number of enforcement actions, the SEC has begun using its rulemaking authority to address AI-related disclosures. In July 2023, the SEC issued a proposed rule concerning the use of AI by broker-dealers and investment advisers. The proposed rule focuses on potential conflicts of interest created by the use of AI. As drafted, it covers a wide range of “covered technology” but applies only to broker-dealers and investment advisers and focuses on investor-facing communications.

Although the comment period formally closed in October 2023, the proposed rule has drawn significant attention and continued to attract comments well into 2024. To date, the SEC has received at least 143 comments from a wide variety of market participants. On June 13, 2024, Gensler indicated that the SEC may revisit the proposed rule in light of the commentary.

Private Shareholder Class Actions

While the SEC has concentrated primarily on investment advisers and broker-dealers, in recent years the plaintiffs’ bar has spearheaded litigation of AI-related claims against issuers. Plaintiffs have generally asserted these claims under Section 10(b) of the Exchange Act and Rule 10b-5 promulgated thereunder, which prohibit misstatements or omissions in connection with the purchase or sale of securities, as well as under Section 20(a) of the Exchange Act, which extends liability to those who control the primary violator.

The underlying themes of these actions involve allegations that the defendants overstated the capabilities of their AI technologies, failed to inform the market of risks inherent in their use of AI, failed to issue prompt disclosures when their technologies did not perform as expected, or were not using or developing the AI models as previously represented. For example, in Jaeger v. Zillow Group Inc., the plaintiffs alleged that Zillow misrepresented the accuracy of the algorithms it utilized in its “Zestimate offer” program. These algorithms were intended to eliminate the need for Zillow to obtain a pricing expert when providing purchase offers to homeowners. According to the plaintiffs, Zillow continued to advertise its AI usage in a positive light while simultaneously engaging in an extensive, “human-driven process” to drive up the number of home purchase offers it made and meet its home-purchasing goals. Attempts to drive up offers in turn led to “overpricing” the value of the homes and allegedly caused Zillow to “significantly overpay for thousands of homes.” These AI-related allegations survived the defendants’ motion to dismiss, and the plaintiffs have moved to certify a class for their remaining claims.

In another example, In re Upstart Holdings Inc. Securities Litigation, the plaintiffs alleged the defendants stated that their AI loan underwriting model provided a “significant advantage” over traditional FICO-based models, including “higher approval rates and lower interest rates at the same loss rate.” In its order denying the defendants’ motion to dismiss the plaintiffs’ claims based on these purported misstatements, the court noted that the plaintiffs “adequately pled that the Upstart model did not provide these [specific] verifiable advantages.” However, the court held that certain ancillary statements that Upstart’s AI model was “magical” or “shined” were too vague to be actionable.

Finally, this year, plaintiffs have filed at least three AI-related securities class actions against publicly traded technology companies. In each of these cases, the plaintiffs allege that the defendants misrepresented their AI prowess and overstated the ability of AI to bolster their future business prospects. These private shareholder actions build on the SEC’s concerns about AI washing and highlight the importance of ensuring that a firm’s actual AI usage and performance match its representations to the public.

Key Takeaways

Although AI-related disclosures have become a new target in securities enforcement and litigation, companies and their counsel can take proactive steps to protect against future liability.

  • Companies should ensure that disclosure counsel understands how the business uses and plans to use AI, and that the business understands the importance of consulting with legal counsel before making public representations about the company’s use of AI.
  • Companies should also be aware that even general “puffery” remarks about the company’s use of AI may be scrutinized by the SEC or private plaintiffs, particularly in the context of “AI washing.” 
  • Public companies should remain aware of continuing SEC guidance, even in the form of SEC remarks, about crafting AI-related disclosures and should consider those remarks when determining how to approach such disclosures.
  • Companies should consider whether and how to educate their boards of directors regarding the company’s use of AI (as well as AI use by competitors, significant vendors, and large customers, as applicable) and the related risks and benefits of such use, including with respect to the company’s disclosure obligations. 
  • Companies should take a broad and deep view of the potential risks posed by AI technologies and consider whether and how to best disclose such risks. 
  • Financial firms in particular should ensure that their policies, procedures, and recordkeeping practices comply with the latest SEC guidance.


If you have any questions or would like additional information, please contact one of the attorneys on our Securities Litigation Team.

