Blog

Read the latest news, research and insights on AI Security from the team at Prompt Security

Prompt Security Named a Representative Vendor in the Gartner® Innovation Guide for Generative AI TRiSM

Prompt Security was included in Gartner's 2024 Innovation Guide for Generative AI in Trust, Risk and Security Management.

Prompt’s Firewall for AI: the next big thing in AppSec, with F5

Prompt Security’s offering for homegrown applications, ‘Firewall for AI’, enables F5 Distributed Cloud Services customers to protect their GenAI applications.

LLM Jailbreak: Understanding Many-Shot Jailbreaking Vulnerability

LLM jailbreak attacks such as many-shot jailbreaking exploit large language models. Prompt Security explains the risks, gives examples, and outlines defenses against these vulnerabilities.

eBPF at Prompt Security: The first no-code security offering for LLM-based applications

Prompt Security's use of eBPF introduces a new paradigm for application security, offering unprecedented visibility and control at the kernel level.

Quick overview of the EU AI Act: the first regulation on artificial intelligence

The European Parliament approved the EU AI Act, the first regulation on artificial intelligence. This new regulatory framework establishes risk levels and obligations for AI systems.

Zeta Global: GenAI Unleashed and Security Challenges in the Era of User Empowerment

Zeta Global can pride itself on being an early adopter of NLP and LLM technology, having released an LLM-based application called ZOE.

“Hello, World!” We’re Prompt Security, the Singular Platform for GenAI Security. Nice to meet you.

Announcing that we have raised a $5M seed round and emerged from stealth to be the one-stop shop for all of an enterprise's Generative AI security needs.

Denial of Wallet (DoW) Attack on GenAI Apps

Denial of Wallet (DoW) attacks aim to inflict financial damage on a company or to gain unauthorized free access to Large Language Models (LLMs).

Extracting GBs of training data from ChatGPT

What's the risk of your employees or applications accidentally disclosing sensitive data to GenAI tools like ChatGPT, Bard, Jasper, Bing, etc.?