Zeta Global: GenAI Unleashed and Security Challenges in the Era of User Empowerment

Prompt Team
February 27, 2024

A few days ago we hosted Danny Portman, PhD, Head of Generative AI and VP of Data Science at Zeta Global, for a conversation on Generative AI, building customer-facing apps, and their security implications.


Zeta Global is a cloud-based marketing technology company that empowers brands to acquire, grow, and retain customers. Danny and his team are in charge of one of the company's most innovative projects: ZOE, a multi-agent large language model (LLM) marketing assistant powered by GenAI. This multifaceted tool helps Zeta's users get answers to technical questions, execute analytics queries, and build marketing campaigns end to end, turning any user into a super user.


Zeta has been ahead of the curve when it comes to the implementation of GenAI capabilities in its customer-facing tool, ZOE. 

Danny’s team has been following the evolution of Large Language Models (LLMs) since GPT-1 was released in 2018. Zeta Global prides itself on being an early adopter of NLP and LLM technology, and it released an LLM-based application for email subject line generation even before ChatGPT launched in 2022.

In the past year, they’ve been working on a marketing solution named ZOE, first released to production in May 2023. It’s a relatively complex multi-agent LLM system that allows Zeta users to perform a variety of actions, from basic scenarios like knowledge retrieval to more advanced use cases like executing analytics queries, accessing internal Zeta APIs, and even assisting users in end-to-end marketing campaign creation. The app is an example of cutting-edge innovation, with five patents submitted so far related to applications of Generative AI.


There’s an array of new risks associated with LLMs, especially when exposed to users. 

With ZOE, users gain access to tools that were out of their reach even just a year ago. For example, if a marketing account manager needed to answer a complex question about their account data, they would have to ask a dedicated analytics team for help building the SQL query, analyzing the data, and so on. Today, with ZOE, they can simply ask the question in plain English and ZOE takes care of the rest.
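In broad strokes, that kind of natural-language-to-SQL flow is a thin layer over an LLM call: the user's English question is wrapped in a prompt alongside the account's table schema, and the model returns a query to run on the user's behalf. The schema, prompt wording, and `llm` callable below are purely illustrative assumptions for the sketch, not ZOE's actual implementation.

```python
# Hypothetical sketch of a natural-language-to-SQL assistant step.
# The schema, prompt template, and `llm` callable are illustrative
# assumptions -- not ZOE's actual design.

SCHEMA = """
campaigns(id, name, channel, spend_usd, start_date)
conversions(campaign_id, day, count)
"""

PROMPT_TEMPLATE = (
    "You are a marketing analytics assistant.\n"
    "Given this database schema:\n{schema}\n"
    "Write one SQL query that answers: {question}\n"
    "Return only the SQL, nothing else."
)

def build_sql(question: str, llm) -> str:
    """Ask the LLM (any callable str -> str) for a SQL query."""
    prompt = PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)
    return llm(prompt).strip()

# A stub in place of a real model call, just to show the plumbing:
def fake_llm(prompt: str) -> str:
    return "SELECT name, spend_usd FROM campaigns ORDER BY spend_usd DESC LIMIT 5;"

print(build_sql("Which five campaigns spent the most?", fake_llm))
```

Note that in a real deployment the generated SQL would still pass through the roles-and-permissions checks mentioned below before execution, since the model's output is effectively user-controlled.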

However, granting such advanced capabilities means that any user could become a super user, creating possibilities for misuse and even malicious exploitation of the application. Beyond issues such as data exfiltration and attempts to access previously unreachable data compartments, which can be partly mitigated through roles and permissions, the incorporation of LLM technology introduces a whole new array of risks, risks that did not even exist in January 2023.

Additionally, human curiosity plays a significant role: users are inclined to test the boundaries of chatbots and LLMs. With traditional user interfaces such as Outlook or Photoshop, users typically stick to the intended use. However, when interacting with an AI chatbot's open-ended dialogue system, they frequently explore its limitations and points of failure. Any breakage or minor deviation from expected performance can erode user trust, leading to the question, "How can I rely on this tool for my tasks if it's so easily compromised?" 


The risk of accruing security debt: security considerations had to come early in the game.

Unlike other attack surfaces, Generative AI is so widely adopted, and so often part of an organization's offering, that it creates a brand-new shared responsibility: AI and innovation leaders need to consider what they're exposing to LLMs, how they're exposing it, and the risks that come with it.

The team at Zeta Global has been making huge strides to deliver exceptionally fast time-to-market, which is key to staying competitive and delivering industry-first features. But with such fast development, tech debt is inevitable. They address it with scheduled tactical pauses to rethink and refactor often, and by upgrading their open-source packages once a month, so that they can always benefit from the latest developments.

When it comes to security, one can also see the accumulation of security debt as a consequence of striving for rapid innovation aimed at full-scale production. Although establishing an internal red team and automating certain security checks has been considered, it's not feasible for a small AI development team to stay abreast of all security updates concerning LLMs, much less to continuously defend against every emerging vulnerability in their products. Furthermore, unlike a simple software update, switching between LLM versions isn't straightforward: prompts that were effective with one version may not work with another, so you need to go back to prompt engineering and make adjustments.


GenAI security will soon dominate the headlines; better to prepare in advance.

Danny anticipates a growing need for GenAI security, especially as more and more companies reach a mature LLM stage and take their customer-facing products to production. It’s only a matter of time before we start hearing news of successful, high-impact LLM attacks.

“It is great to have a design partner like Prompt Security. This really allows us to focus on bringing value to our clients faster and better.” 
