Prompt Security’s Firewall for AI - The next big thing in appsec, with F5

April 30, 2024

With the newly launched integration, Prompt Security’s offering for homegrown applications, ‘Firewall for AI’, enables F5 Distributed Cloud Services customers to protect their GenAI applications at every touchpoint. With built-in observability and policy management, organizations can secure their Generative AI applications, improve business productivity, and maintain data governance.

Organizations worldwide are building GenAI applications at a dizzying pace. These homegrown applications have become essential to driving innovation and new efficiencies, but they also expose the organization to brand-new attack surfaces.

Successful attacks such as prompt injections, jailbreaks, or remote code execution could lead to security breaches and data leaks. Similarly, potentially harmful or toxic responses by LLMs to an organization’s stakeholders could bring reputational damage, legal issues, and monetary losses.

F5 is an undisputed leader in Web Application Firewalls (WAFs), serving 85% of Fortune 500 companies. Given the massive adoption of GenAI, addressing GenAI-specific risks is becoming the next big thing in application security. In response to the growing need to protect homegrown GenAI applications, F5 Distributed Cloud Services and Prompt Security have partnered to deliver a Firewall for AI. F5’s Distributed Cloud Platform provides application security capabilities such as WAF, bot defense, DDoS protection, and API discovery and security.

The Prompt Security Firewall for AI inspects inbound GenAI queries and outbound responses, safeguarding organizations from prompt injections, sensitive data disclosure, harmful responses, and other threats.

Some of the benefits customers will get from the Firewall for AI:

Addressing GenAI-specific security risks

As mentioned above, GenAI applications introduce a brand-new array of security risks, so traditional security approaches alone won’t suffice. Prompt Security inspects every prompt and model response to protect against new threats such as prompt injections, jailbreaking, denial of wallet, and more. All traffic to homegrown applications is routed through Prompt Security, providing complete visibility into incoming prompts.
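To make the inspection step concrete, here is a minimal sketch of the pattern of screening inbound prompts before they reach the model. This is illustrative only: the pattern list and function names are hypothetical, and a production detector like Prompt Security’s relies on far richer techniques than static regexes.

```python
import re

# Hypothetical examples of injection phrasing; real detection is model-based,
# not a static pattern list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now in developer mode", re.IGNORECASE),
]

def inspect_prompt(prompt: str) -> dict:
    """Screen an inbound prompt and report any findings before it reaches the LLM."""
    findings = [p.pattern for p in INJECTION_PATTERNS if p.search(prompt)]
    return {"allowed": not findings, "findings": findings}
```

A gateway deployed in front of the application would call a check like this on every request and block or flag prompts that match.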

Moderating content produced by LLMs

Equally as important as inspecting user prompts before they reach an organization’s systems is ensuring that LLM responses are safe and do not contain toxic or harmful content that could damage the organization.
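The outbound side of that check can be sketched in the same spirit. The blocklist and fallback message below are placeholders invented for illustration; real moderation uses trained classifiers rather than term matching.

```python
# Hypothetical placeholder list; a real moderation layer uses ML classifiers.
BLOCKED_TERMS = {"example-slur", "example-threat"}

def moderate_response(text: str) -> str:
    """Replace an LLM response with a safe fallback if it contains disallowed content."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I'm sorry, I can't share that response."
    return text
```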

Ensuring data privacy and preventing leaks

Operational GenAI applications typically have access to an organization’s (or a third party’s) databases and resources in order to provide the most beneficial user experience. Without proper security measures, however, organizational data can be disclosed in the responses generated by LLMs: a user could deceive the application into revealing confidential information, leading to possible legal and reputational consequences.
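One common building block for this protection is a redaction pass over outbound responses. The sketch below shows the idea with two hypothetical detectors (email addresses and US Social Security numbers); production data-leak prevention covers many more data types and uses more robust detection than these simple regexes.

```python
import re

# Hypothetical detectors for illustration; real DLP covers many more data types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask sensitive values in an LLM response before it reaches the user."""
    text = EMAIL.sub("[REDACTED-EMAIL]", text)
    return SSN.sub("[REDACTED-SSN]", text)
```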

Implementing governance and visibility

App developers and security teams need to monitor inbound and outbound traffic from GenAI apps at all possible insertion points. This becomes more significant as an organization’s AI governance policies and the regulations surrounding AI grow in detail and are more strongly enforced. Prompt Security provides full logging of each interaction, including user, prompt, response, findings, and more.
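An audit record of this shape might look like the following sketch. The field names are assumptions for illustration, not Prompt Security’s actual log schema; the point is that each interaction yields one structured, queryable record.

```python
import json
import time
import uuid

def log_interaction(user: str, prompt: str, response: str, findings: list) -> str:
    """Emit one audit record per GenAI interaction as a JSON line (hypothetical schema)."""
    record = {
        "id": str(uuid.uuid4()),   # unique id for this interaction
        "ts": time.time(),         # unix timestamp
        "user": user,
        "prompt": prompt,
        "response": response,
        "findings": findings,      # e.g. detections from prompt/response inspection
    }
    return json.dumps(record)
```

Records like these can be shipped to a SIEM or log store so governance teams can audit who asked what, what the model answered, and what the firewall flagged.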

Prompt Security’s Firewall for AI can now be easily instantiated anywhere within F5's Distributed Cloud. This quick and simple instantiation can be expanded to different regions across the world to address performance or geographic data protection requirements. 
