Entering 2024, we can all agree that Generative AI has transformed the world. Its adoption has been broader and faster than anything witnessed before (and yes, receiving ChatGPT screenshots from Grandma is now a common occurrence, no kidding).
What would have been almost unthinkable to most a year ago is now top of mind for Security and AI leaders around the world, who seek to empower their organizations to unleash the power of Generative AI. But as GenAI proliferates throughout organizations, security leaders and those responsible for promoting AI-driven innovation are confronted with a fresh set of security challenges. These risks range from employees sharing enterprise data with GenAI tools, which can inadvertently leak if the tools are trained on it, to malicious actors manipulating models through prompt injection in an organization's customer-facing applications.
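To make the prompt injection risk concrete, here is a minimal illustrative sketch (not Prompt Security's product, and not a real defense): an application that naively concatenates untrusted user input into an LLM prompt, plus a crude keyword heuristic showing why attackers try phrases like "ignore previous instructions". All names and markers are hypothetical, for illustration only.

```python
# Illustrative sketch of how prompt injection arises: untrusted user text
# shares the same channel as the developer's instructions, so it can try
# to override them.

SYSTEM_INSTRUCTIONS = "You are a support bot. Never reveal internal data."

def build_prompt(user_input: str) -> str:
    # Naive concatenation: the model cannot reliably tell where the
    # developer's instructions end and the attacker's text begins.
    return f"{SYSTEM_INSTRUCTIONS}\n\nUser: {user_input}"

# A deliberately crude heuristic filter, for illustration only; real
# defenses layer model-based detection, output scanning, and least
# privilege rather than keyword matching.
INJECTION_MARKERS = ("ignore previous instructions", "disregard the above")

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

A keyword list like this is trivially bypassed, which is precisely why prompt injection is treated as a new attack surface rather than a solved input-validation problem.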
Today, I couldn’t be prouder to announce that we have emerged from stealth mode to be the one-stop shop for all of an enterprise's Generative AI security needs.
Since my high school days, I've been fascinated by math, data, and the early stages of AI. Interestingly, my thesis work focused on transformers in neural networks, which are now foundational to LLMs. Little did we know back in 2017. This passion ran through my academic studies, my military service in the IDF, and later my initial career choices as a data scientist, which soon began to intersect with the world of cybersecurity. I started at Check Point and later joined Orca Security, where I met Lior Drihem, my co-founder and Prompt’s CTO.
A year and a half ago, OpenAI released its first API. Shortly after, during our time at Orca, Lior and I built the first GenAI-powered feature within a security tool, helping users with their workflows within Orca: we leveraged GPT-2/3 to enhance Orca’s ability to generate contextual, actionable remediation steps for security alerts, speeding up customers’ mean time to resolution.
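The pattern behind that feature can be sketched in a few lines: take a structured security alert and hand its context to an LLM to draft remediation steps. This is a hedged illustration of the general approach only; the function names, alert fields, and model choice are my assumptions here, not Orca's actual implementation.

```python
# Illustrative sketch: turning a security alert into an LLM prompt that
# asks for remediation steps. Alert fields and names are hypothetical.

def build_remediation_prompt(alert: dict) -> str:
    """Render a structured security alert as an LLM prompt."""
    return (
        "You are a cloud security assistant. Given the alert below, "
        "write concise, actionable remediation steps.\n\n"
        f"Title: {alert['title']}\n"
        f"Asset: {alert['asset']}\n"
        f"Details: {alert['details']}"
    )

def draft_remediation(alert: dict, client) -> str:
    # `client` is an OpenAI-style client object, passed in so it can be
    # stubbed out in tests; the call shape follows the OpenAI Python SDK.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_remediation_prompt(alert)}],
    )
    return response.choices[0].message.content
```

Separating prompt construction from the API call keeps the alert-to-prompt logic testable without network access, and makes it easy to swap models later.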
Working on this project made it even clearer that a completely new attack surface was emerging: applications of any kind featuring GPT-like capabilities would be vulnerable to an array of new attacks.
And so, fast forward to August 2023, Prompt Security was born.
As part of our ideation, we spoke to dozens of AI and security practitioners to confirm that the new attack vector was real, and that there was both concern about it and interest in finding the right tools to secure GenAI (and that it wasn’t going to be just another transitory buzzword built on technology with unclear goals). We’ve also been involved from the beginning in the research and creation of the OWASP Top 10 for LLM Applications, a list of the most critical vulnerabilities found in applications utilizing LLMs. In the end, I became confident that this would become one of the largest markets we've ever witnessed, and that we were going to be the ones to take the lead in it.
Prompt is the only one-stop security platform designed to protect against all GenAI concerns. With easy onboarding, security leaders can enable GenAI throughout the entire organization within a few hours: from employees using Shadow AI in their browsers, to developers building with Copilot, to product managers shipping customer-facing GenAI features. Organizations get visibility, governance, and real-time protection across all of it.
The traction we’ve seen so far is pretty remarkable: Prompt currently runs on thousands of endpoints of leading organizations, analyzing and securing millions of prompts every month. It seems like we’re on the right path to enable organizations around the globe to safely and securely embrace GenAI company-wide and unlock the endless possibilities it brings with it.
I am beyond excited about the journey ahead of us.
I also want to take the opportunity to thank our investors and advisors for their trust, and our customers, partners, and employees for being part of this adventure so far.