October 1, 2024

Privacy, Cyber & Data Strategy Advisory: AI vs. AI: Recent Developments in the Cyber Landscape

Executive Summary

The ubiquity of artificial intelligence (AI) has heightened companies’ exposure to increasingly sophisticated cyberattacks. Our Privacy, Cyber & Data Strategy Team explores how businesses can enhance their security measures to mitigate the threat.

  • Consider implementing procedures for employees to use in identifying the risks associated with AI systems
  • Consider ongoing training of employees on the evolution in phishing and social-engineering attacks
  • Consider developing a playbook for reporting deepfakes and requesting their removal, including from social media and other internet sites

The artificial intelligence (AI) era is in full swing and the impact on cybersecurity is far-reaching and multifaceted. Cybersecurity defensive tools, such as endpoint detection and response software, have long used AI to enhance their capabilities to thwart cyberattacks. With the recent advancements in AI and machine learning, including chatbots and other generative AI (GenAI) tools, and the increasing number of AI systems within companies’ technology environments used for business initiatives, companies are (or should be) developing additional measures to guard against the unique threats posed by threat actors’ use of AI and by AI systems more generally.

We detail how threat actors are currently using AI and how they might use it in the future, how cyber defenders have used and continue to use AI to counter cyber threats, and the unique cybersecurity risks facing AI systems themselves.

Current Use of AI by Threat Actors

Just how are threat actors using AI to launch cyberattacks? As a general matter, threat actors are using AI, specifically GenAI, to create automated, tailored cyberattacks and to monitor and model user behavior to inform their tactics, techniques, and procedures (TTPs). The offensive use of AI by threat actors to ramp up the development of exploits and zero-day vulnerabilities was one of the five most dangerous cyber threats in 2023 and 2024, according to the SANS Institute. Even so, adversaries appear to remain in an exploratory rather than transformative stage of AI use. For example, Verizon’s 2024 Data Breach Investigations Report (DBIR), published with contributions from the U.S. Secret Service (USSS) and other third parties, found no substantial increase in GenAI-enabled attacks worldwide.

While the DBIR indicates that AI adoption may not yet be widespread among threat actors, there are several confirmed use cases involving deepfakes and GenAI in phishing and social-engineering attacks that industry professionals should be aware of. A 2024 study by the UK National Cyber Security Centre found that “AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access [and this] enhanced access will likely contribute to the global ransomware threat over the next two years.”

Phishing

Phishing has long been a cornerstone of many cybercriminals’ strategies, and GenAI amplifies the threat by allowing less-sophisticated threat actors to conduct increasingly sophisticated phishing attacks. While the basic structure of phishing attacks remains the same, GenAI can increase the believability of phishing emails by accurately mimicking human language. The days of relying on traditional phishing tells like typos, poor syntax, and unnatural phrasing are long behind us. Even as companies have trained employees to be wary of sophisticated phishing attacks, GenAI tools have drastically improved the quality of phishing content – including more accurate translations – resulting in clearer and more believable communications than threat actors can generate without GenAI, the DBIR found.

Further, GenAI can boost the speed and scale of phishing attacks. A Harvard Business Review study found that the “entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates.” Cybercriminals can use GenAI tools to generate and distribute phishing attacks far more rapidly than human actors can, and companies should be prepared for the volume of these attacks to increase.

Deepfakes

Deepfakes are AI-generated, highly realistic synthetic media that can be used in malicious ways, including to threaten a company’s brand, impersonate individuals (e.g., company leaders, such as CEOs and CFOs, and celebrities), and enable access to networks, communications, and sensitive information. As the quality of deepfakes has improved in recent years, their use in cyberattacks has increased. For example, the FBI, NSA, and CISA in a 2023 joint cybersecurity advisory warned of a particularly difficult challenge posed by deepfakes – deepfake voices can be used to leave voicemails or voice memos in targeted spear-phishing campaigns, or to fool companies’ IT desks into resetting passwords in order to gain access to companies’ systems.

Deepfake videos have also been used in a slate of high-profile scams involving video chats, where threat actors use deepfake technology to mimic the faces and voices of trusted parties – including friends and individuals in positions of power within companies – to convince victims to transfer funds to the threat actor. For example, in February 2024, Hong Kong police reported that a finance worker at the Hong Kong office of a UK-based engineering company was deceived by threat actors who used deepfake technology on a video call to impersonate the company’s chief financial officer and multiple other executives, persuading the employee to wire $25 million to a threat actor-controlled account.

Technology to detect deepfakes is evolving, but threat actors’ capabilities currently maintain a slight edge over defensive technology. In the meantime, CISA, the NSA, and the FBI recommend selecting and implementing real-time verification capabilities and passive detection techniques, focusing on protecting high-priority officers and their communications, and planning responses to exploitation attempts.

Use of AI to Enhance Cybersecurity Defense

Even without AI tools, threat actors have been very successful in their attempts to compromise companies’ systems. On May 2, 2024, Avril Haines, the Director of National Intelligence, testified before the Senate Armed Services Committee that the number of ransomware attacks worldwide grew as much as 74% in 2023. For years, companies have been using AI-powered cybersecurity defense tools and machine learning capabilities to guard against cyberattacks. Now, as threat actors begin leveraging AI to enhance their TTPs, an AI arms race is emerging – threat actors’ offensive use of AI to launch cyberattacks vs. companies’ use of AI-powered cybersecurity defensive tools to prevent them.

AI tools excel at reviewing and processing large quantities of data. When embedded in cybersecurity defense tools, they can quickly detect anomalous activity, automate (at least the initial) response processes, such as sending an alert to a technician for review, and identify strategic and tactical trends from aggregated cyberattack data across industries to inform future security measures. Machine learning tools can also be trained on a company’s own data, enabling threat detection tailored to the company’s specific systems and threat profile. While humans can perform these functions, AI tools can cover larger quantities of data faster, freeing cybersecurity professionals for more complex or judgment-intensive tasks. Companies still need to be cautious when delegating these duties to AI-powered tools: false positives are common, so proper training of the AI model with high-quality data is crucial, and human oversight and validation of the results remain necessary.
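
To make the anomaly-detection concept concrete, the following is a minimal, hypothetical Python sketch showing how an unsupervised model, here scikit-learn’s IsolationForest, could be trained on a company’s own historical activity data and used to flag unusual events for analyst review. The feature set, thresholds, and data are illustrative assumptions, not any vendor’s actual detection logic.

    # Minimal, illustrative sketch of ML-based anomaly detection on login telemetry.
    # The features, data, and alerting logic are hypothetical examples.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "baseline" of normal activity the model learns from.
    # Columns: [megabytes transferred, failed logins, login hour (UTC)]
    baseline = np.column_stack([
        rng.normal(12.0, 3.0, 500),   # typical data-transfer volumes
        rng.poisson(1, 500),          # occasional failed logins
        rng.integers(8, 18, 500),     # business-hours logins
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # New events to score: one routine, one suspicious (a large transfer at
    # 3 a.m. after many failed logins).
    new_events = np.array([
        [11.0, 1, 12],
        [950.0, 14, 3],
    ])

    for event, label in zip(new_events, model.predict(new_events)):
        if label == -1:  # -1 = anomalous, 1 = consistent with the baseline
            print("ALERT for analyst review:", event)
        else:
            print("No action:", event)

In practice, flagged events would feed an automated alerting and triage workflow rather than a print statement, and analysts would validate the results, consistent with the human-oversight caveat above.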

Securing AI Systems from Cyberattacks

Like any other component of a company’s technology environment, AI systems need to be secured. AI systems, however, present risks that are not present in traditional systems. According to a report by the National Institute of Standards and Technology (NIST), AI systems face the “potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data.” As a result, NIST says, AI systems are uniquely vulnerable to a variety of attacks, including poisoning, evasion, privacy, and abuse attacks.

  • Poisoning/Prompt-Injection Attacks. During the AI system’s training phase, threat actors with access to the training dataset may manipulate the data by inserting false or misleading data or by modifying or deleting a portion of the dataset, “poisoning” the dataset and causing erroneous outputs, introducing bias, or otherwise influencing the predictive capabilities of the model. For example, a threat actor may poison the training data of a customer service chatbot on a company’s website so that the chatbot shares certain sensitive information when prompted with a certain question (a minimal illustration follows this list). See NIST AI 100-2, § 2.3.
  • Evasion Attacks. After an AI system is deployed, an evasion attack involves a threat actor altering the input to change how the AI system responds. For example, a threat actor might subtly alter a stop sign so that an autonomous vehicle misinterprets it as a speed limit sign. See NIST AI 100-2, § 2.2.
  • Privacy Attacks. After deployment of an AI system, a threat actor attempts, for malicious purposes, to learn sensitive information about the AI model or the data on which it was trained. For example, a threat actor might conduct a privacy attack by asking a chatbot numerous legitimate questions and using the answers to reverse engineer private or sensitive information. See NIST AI 100-2, § 2.4.
  • Abuse Attacks. Unlike poisoning/prompt-injection attacks, abuse attacks involve inserting incorrect information into a legitimate, but compromised, source that the AI system ingests in order to repurpose the AI system’s intended use or output. See NIST AI 100-2, §§ 3.3 and 3.4.
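
To illustrate the poisoning attack described in the first bullet, the hypothetical Python sketch below flips the labels of a targeted slice of a toy classifier’s training data and compares test accuracy before and after. The synthetic dataset and model are illustrative assumptions and are not drawn from NIST AI 100-2.

    # Illustrative sketch of a training-data poisoning (label-flipping) attack
    # on a toy classifier. The synthetic data and model choice are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic two-class dataset standing in for "benign" (0) vs. "malicious" (1).
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def test_accuracy(train_labels):
        """Train on the (possibly poisoned) labels and score on clean test data."""
        model = LogisticRegression().fit(X_train, train_labels)
        return model.score(X_test, y_test)

    print(f"Accuracy with clean training data:    {test_accuracy(y_train):.2f}")

    # A threat actor with write access to the training set relabels every
    # "malicious" sample in a targeted region as "benign", teaching the model a
    # blind spot it will reproduce after deployment.
    poisoned = y_train.copy()
    poisoned[(X_train[:, 0] > 0.5) & (y_train == 1)] = 0

    print(f"Accuracy with poisoned training data: {test_accuracy(poisoned):.2f}")

Controls such as vetting the provenance of training data and benchmarking models against a trusted holdout set are aimed at catching exactly this kind of manipulation.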

AI systems’ unique vulnerabilities demand robust security measures at each stage of the AI lifecycle, including secure architecture during the design, development, and training phases. Such measures include using comprehensive cybersecurity tooling and programs to prevent the circumvention of security controls, and monitoring AI models to confirm that outputs remain valid and that the model does not decay over time as input data changes.
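
As one hypothetical example of the monitoring described above, the Python sketch below compares the distribution of recent production inputs against the data the model was trained on using a two-sample Kolmogorov-Smirnov test, flagging drift that may warrant review or retraining. The feature, window size, and alert threshold are illustrative assumptions.

    # Minimal, hypothetical sketch of monitoring an AI model for input drift.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Feature values the model was trained on (e.g., transaction amounts).
    training_feature = rng.normal(loc=50.0, scale=10.0, size=5000)

    # Recent production inputs; here the distribution has shifted upward.
    recent_inputs = rng.normal(loc=65.0, scale=12.0, size=500)

    # Two-sample Kolmogorov-Smirnov test: a small p-value indicates the recent
    # inputs no longer resemble the training data.
    result = ks_2samp(training_feature, recent_inputs)

    ALERT_THRESHOLD = 0.01  # illustrative; tune to the company's risk tolerance
    if result.pvalue < ALERT_THRESHOLD:
        print(f"Drift detected (KS={result.statistic:.2f}, p={result.pvalue:.3g}): "
              "review model outputs and consider retraining.")
    else:
        print("Input distribution remains consistent with training data.")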

What Can Companies Do?

As the cyber threat landscape becomes increasingly complex due, at least in part, to threat actors’ growing use of AI tools and capabilities and companies’ increasing use of (and reliance on) AI systems, companies should consider taking the following steps to guard against AI-powered cyberattacks:

  • Establish an AI risk management and governance program. Companies should develop policies and procedures for personnel to identify, document, and reduce the known or reasonably foreseeable risks of AI systems, such as algorithmic discrimination and vulnerabilities to attacks. As a general benchmark, companies can leverage NIST’s AI Risk Management Framework, which may become the industry standard and includes four core functions: govern, map, measure, and manage. As part of the program, companies should implement and adapt AI-related policies, as well as their written information security policies, procedures, and standards, and conduct thorough risk assessments before implementing each new AI tool.
  • Implement phishing-resistant multi-factor authentication (MFA) wherever possible. Phishing-resistant MFA fortifies user accounts by applying advanced techniques such as biometric authentication, hardware tokens, and push notifications to trusted devices, adding layers of protection against the increasingly sophisticated phishing attacks enabled by threat actors’ use of AI. It also eliminates the risk of a threat actor obtaining a one-time code sent to a user’s mobile device, email account, or mobile app; threat actors are well versed in obtaining such codes through social engineering and other means. Phishing-resistant MFA is now required for federal government agencies and may become a more pervasive requirement.
  • Educate employees throughout the company about increasingly sophisticated social-engineering and phishing attacks, MFA-bypass techniques, and deepfakes. Cybersecurity tooling alone will not be sufficient to guard against the unique risks posed by AI systems and by threat actors’ use of AI to launch more sophisticated cyberattacks; security awareness training must evolve to address these specific issues.
  • Develop deepfake takedown policies and a playbook. As deepfake technology becomes more widespread and convincing, companies must be prepared to quickly and effectively address impersonation of employees and executives. Policies and a playbook that, for example, specify the steps for requesting a deepfake takedown on mainstream social media sites and on sites hosted by internet service providers and website hosting services can help ensure that deepfakes are removed quickly and their negative effects are minimized.
  • Continue to monitor for adversaries’ use of AI and adjust defense controls accordingly. As threat actors continue to leverage AI to launch cyberattacks, it will become increasingly important to adjust the company’s security controls and tools to address threat actors’ evolving TTPs.

You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form. If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy Team.

