Back to Blog

The OWASP Top 10 for LLM Apps & GenAI

Prompt Team
November 10, 2024
OWASP's Top 10 for LLM Applications & Generative AI lays out the most critical vulnerabilities found in applications that use LLMs.

The Open Worldwide Application Security Project (OWASP) provides guidance on governance, risk management, and compliance for LLM deployment. Led by more than five hundred experts in cybersecurity, AI and IT, the project serves thousands of members – from developers and data scientists to compliance officers and security practitioners – who seek knowledge concerning risks and security solutions for LLM apps and GenAI.

One of OWASP’s most prominent resources for security best practices is its Top 10 for LLM Applications & Generative AI, which lays out the most critical vulnerabilities found in applications that use LLMs. Prompt Security CEO & Co-founder Itamar Golan, an expert in LLM app security, played a significant role in the list’s compilation and continues to contribute to the intermittent release of new OWASP resources on security guidance.

OWASP Top 10 for LLM Applications and GenAI

1. Prompt Injection

When an attacker manipulates a large language model (LLM) through carefully crafted inputs.

Prevention and mitigation:

  • There is no silver bullet for fool-proof prevention. 
  • Measures that can mitigate the impact of prompt injections include enforcing privilege control on LLM access to backend systems, adding a human in the loop for extended functionality, segregating external content from user prompts, and manually monitoring LLM input and output from time to time.

2. Insecure Output Handling

When backend systems are exposed due to an LLM output being accepted and passed downstream without scrutiny. Potential consequences include XSS, CSRF, SSRF, privilege escalation, and remote code execution. 

Prevention and mitigation:

  • Apply proper input validation on responses that head from the model to backend functions.
  • Encode model output back to users.

3. Training Data Poisoning

When data or the fine-tuning process is manipulated so as to enable vulnerabilities that compromise a model’s security, effectiveness or ethical behavior. 

Prevention and mitigation:

  • Verify the supply chain of the training data.
  • Analyze the behavior of trained models on specific test inputs.

4. Model Denial of Service

When an attacker excessively engages with an LLM-based app, leading to substantial resource consumption.

Prevention and mitigation:

  • Monitor how LLMs allocate resources so as to identify potential attacks.
  • Implement API rate limits to prevent overload from individual IP addresses.

5. Supply Chain Vulnerabilities

When third-party datasets, pre-trained models and plugins render LLM applications susceptible to security attacks.

Prevention and mitigation:

  • Vet suppliers and their policies.
  • Ensure that plugins, however reputable, have been tested for the relevant application requirements.
  • Conduct regular updates of component inventory.

6. Sensitive Information Disclosure

When sensitive data is undesirably revealed as a consequence of either LLM integration (i.e., an LLM application revealing sensitive data via its outputs) or LLM usage (i.e., a user feeding sensitive data into an external LLM app).

Prevention and mitigation:

  • Enforce strict access control methods to external data sources.
  • Use data sanitization and cleansing to prevent user data from entering training model data.

7. Insecure Plugin Design

When plugin design is vulnerable to malicious requests, which could result in a wide range of undesired behaviors, up to and including remote code execution.‍

Prevention and mitigation:

  • Inspect plugins and analyze source code to identify security vulnerabilities.
  • Ensure that action taken by sensitive plugins requires manual user authorization.

8. Excessive Agency

When LLMs take action without sufficient human oversight.

Prevention and mitigation:

  • Set clear guidelines and constraints on LLM autonomy, ensuring that LLM tools only have access to required functions and, when possible, that such functions are closed-ended in nature.
  • Where feasible, require human approval.

9. Overreliance

When users fail to employ enough scrutiny before accepting inaccurate or inappropriate information generated by LLMs.

Prevention and mitigation:

  • Make regular monitoring of LLM outputs a workflow staple. This should include examination of multiple model responses for individual prompts.
  • Split complex tasks into subtasks and assign them to different LLMs so as to reduce the risk associated with any one LLM hallucination.

10. Model Theft

When an attacker gains unauthorized access to duplicate or extract LLM models and their data.

Prevention and mitigation:

  • Implement strong authentication mechanisms for access to LLM model repositories and training environments.
  • Restrict LLMs’ access to network resources, internal services, and APIs.

Prompt Security’s Vital Role in the Top Ten for LLMs

To achieve a list that is both concise and dependable, OWASP brought the most relevant and forward-thinking voices into the decision-making process. Together with his fellow contributors, Itamar assessed and refined language on various vulnerabilities before determining which language would advance for further consideration. 

“The OWASP Top 10 for LLM Apps and GenAI empowers organizations to meet first-rate security standards while keeping pace with Generative AI’s rapid adoption and evolution. I am proud to have supported this project from the beginning and remain committed as it deepens and expands its essential and actionable guidance for navigating the complexities of AI security.” Itamar Golan, CEO & Co-founder of Prompt Security

How Prompt Security Helps

Prompt Security safeguards systems against all of these vulnerabilities and threats, helping make interactions with GenAI applications safe and legitimate. We block prompt injections with minimal latency overhead, counter model denial of service attacks by monitoring for abnormal usage, and alert organizations about potentially harmful prompts directed to integrated LLM plugins. Prompt Security is at the forefront of robust GenAI protection and will defend the integrity of your applications.

Let's talk GenAI Security.

Share this post