Denial of Wallet (Dow) Attack on GenAI Apps

Itamar Golan
January 7, 2024

TL;DR

We are witnessing a rise in attempts to attack or exploit a company's GenAI applications, either to damage the company or to gain unauthorized free access to Large Language Models (LLMs). This is referred to as 'Denial of Wallet', or DoW.

Intro

LLMs are costly, typically charged per token. A high-scale GenAI app can easily incur thousands of dollars per day in fees to OpenAI, Anthropic, AWS Bedrock, Mistral or other. Alternatively, some choose to self-host these apps, a process that is also expensive and computationally exhaustive. What if you could use another's unprotected GenAI App instead of paying for your own? That's the idea behind the Denial of Wallet attack, which we see more and more.

Denial of Wallet Attack

In this increasingly common attack, you gain access to another's GenAI app, usually through their chat interface. Sometimes, you may need to bypass a rate limit, which is often awkwardly implemented or not implemented at all.

Then, you have access to their LLM, but are still constrained by their system prompt/context. The next step is to 'jailbreak' it, allowing you to bypass the system prompt. Once achieved, you essentially have direct and free access to their LLM's backend.

This is exactly what we did in the script demonstrated in the video below (we stopped after spending $5, but a potential nasty attacker wouldn't 😈.)

Drill Down on the Size of the Risk

This attack can be a way to target your company through this new GenAI 'backdoor', or simply a "naive" attempt by a user to use your OpenAI's credits. You might find yourself with huge bills due to a sudden increase in tokens fed into your 3rd party LLM, sometimes amounting to thousands of dollars per day! In the video above their daily limit was $5k! Imagine if we ran it in a distributed process, we could cause them a $5,000 fine within a few minutes, followed by downtime due to the threshold limit.

Therefore, another risk is indeed reaching your LLM's throughput limit (whether 1st or 3rd party), causing your application to be essentially unavailable to genuine customers trying to use your GenAI app.

In summary: potentially huge bills and unpleasant downtime.

Want to uncover GenAI risks and vulnerabilities in your LLM-based applications?

Identify vulnerabilities in your homegrown applications powered by GenAI with Prompt Security’s Red Teaming.

Share this post