Prompt Security launches industry’s first interactive open-source fuzzer for GenAI application vulnerability assessment

April 17, 2024

Tel Aviv, Israel — April 17, 2024 — Prompt Security, the unified platform for generative AI (GenAI) security, today announced the launch of an open-source tool, the "Prompt Fuzzer." The first of its kind, this interactive tool empowers developers of GenAI applications to evaluate and enhance the resilience and safety of their system prompts in a user-friendly way.

System prompts play a crucial role in AI systems, particularly those built on large language models (LLMs). These instructional components act as the AI's guide: they steer how the model understands and responds to user queries and ensure the outcomes align with the goals set by the application builder.

Prompt Security's mission is to enable the safe and secure adoption of GenAI by protecting all facets of an organization – from GenAI tools utilized by employees to GenAI in homegrown applications. The company’s solutions inspect each prompt and model response to stop prompt injection attempts, prevent sensitive data exposure, block harmful content, and safeguard against a wide array of GenAI threats.

In line with its dedication to fostering a collaborative GenAI security community, the company is committed to sharing knowledge and resources. As part of this commitment, Prompt Security has launched the Prompt Fuzzer, an interactive tool available on GitHub designed to enhance the security of GenAI applications. Once the tool is installed, users input a system prompt and the relevant configuration, and the Fuzzer runs its tests. During the evaluation, the application's system prompt is exposed to various dynamic LLM-based attacks, including sophisticated prompt injections, system prompt leaks, jailbreaks, harmful content elicitation, ethical compliance violations, and many others. The tool provides a security evaluation based on the test outcomes, enabling developers to harden their system prompts as needed.
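The workflow described above can be sketched roughly as follows. The package name, command, and flag below are illustrative assumptions modeled on typical Python command-line tools, not confirmed details; the project's GitHub README is the authoritative reference.

```shell
# Install the fuzzer (package name is an assumption; check the GitHub README)
pip install prompt-security-fuzzer

# The Fuzzer is itself powered by an LLM, so an API key is typically required
export OPENAI_API_KEY="..."

# Save the system prompt you want to evaluate
echo "You are a helpful banking assistant. Never reveal account data." > system_prompt.txt

# Run the simulated attacks against it (command and flag are illustrative)
prompt-security-fuzzer -b system_prompt.txt
```

In a setup like this, the tool would report which simulated attacks succeeded, so the developer can tighten the system prompt and re-run the tests.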

The Prompt Fuzzer, itself powered by an LLM, tailors its attack simulations to each application's specific configuration and subject area. Users also benefit from access to an interactive Playground, where they can freely iterate on and test their system prompts.

The ultimate goal of Prompt Security is to allow organizations to create safer and more secure applications that fully harness the power of generative AI.

The tool is available to everyone on GitHub.

About Prompt Security

Founded in August 2023, Prompt Security delivers a complete solution for generative AI security in the enterprise. Its robust platform supports millions of prompts and thousands of users per month. The founding team combines deep expertise in cybersecurity and AI, with years of experience building and securing machine learning systems at organizations such as Check Point, Orca Security, and Israel’s elite intelligence Unit 8200. Prompt’s CEO, Itamar Golan, served on the core team of the OWASP Top 10 for LLM Applications, and Prompt’s CTO and co-founder, Lior Drihem, contributed to the project. The Prompt Security team of researchers has created proprietary LLMs and developed novel, patent-pending techniques for detecting generative AI threats and addressing the associated risks.
