The new offering is aimed at protecting against prompt injection, data leakage, and training data poisoning in LLM systems.

To address the emerging threats around generative AI (genAI) systems and applications, cybersecurity provider Securiti has launched a firewall offering for large language models (LLMs), Securiti LLM Firewalls. Future applications will be increasingly conversational and will therefore need a layer of in-line checks to detect external attack attempts, according to the company.

"The conversational nature of genAI has opened the door for brand new types of threats and attack vectors, and Securiti LLM Firewalls are designed to protect against them," said Securiti CEO Rehan Jalil. "Internal or public-facing prompt interfaces are a new pathway to enterprise data."

Securiti isn't the first to identify this nascent risk to enterprise genAI applications. In March, Cloudflare announced similar features through a new web application firewall (WAF) offering, Firewall for AI.

"Securiti LLM Firewalls inherently know the context of what they are protecting," Jalil added. "To protect a genAI system, the context of the enterprise data and the use case the genAI system is being designed for can help inspect prompts for relevancy, topics, and jailbreak attempts."

Distributed firewalls for varied genAI threats

Securiti's distributed LLM firewall is designed to be deployed at various stages of a genAI application workflow, such as user prompts, LLM responses, and retrievals from vector databases. It can detect and stop a variety of LLM-based attacks in-line and in real time, the company said, including prompt injection, insecure output handling, sensitive data disclosure, and training data poisoning.
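To make the idea of an in-line prompt check concrete, here is a minimal illustrative sketch, not Securiti's actual implementation: a screening function that matches incoming user prompts against a hypothetical deny-list of phrases commonly associated with prompt-injection attempts. The pattern list and function name are assumptions for illustration only; real products layer ML classifiers and context-aware analysis on top of simple heuristics like these.

```python
import re

# Hypothetical deny-list of phrasings often seen in prompt-injection
# attempts; a real firewall would use a much richer, continuously
# updated detection model rather than a static list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system )?prompt",
    r"you are now (in )?developer mode",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it should be blocked."""
    lowered = prompt.lower()
    # Block the prompt if any injection pattern matches anywhere in it.
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Because the check runs before the prompt reaches the model, a blocked request never consumes LLM tokens or exposes the system prompt, which is the general appeal of deploying such checks in-line rather than post hoc.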
Prompt injection, the most common form of LLM attack, involves bypassing filters or manipulating the LLM into ignoring previous instructions and performing unintended actions, while training data poisoning involves manipulating LLM training data to introduce vulnerabilities, backdoors, and biases.

"The firewall monitors user prompts to pre-emptively identify and mitigate potential malicious use," Jalil said. "At times, users can try to maliciously override LLM behavior, and the firewall blocks such attempts. It also redacts sensitive data, if any, from the prompts, making sure that LLM models do not access any protected information."

Additionally, the offering deploys a firewall that monitors and controls data retrieved during the retrieval-augmented generation (RAG) process, which references an authoritative knowledge base outside the model's training data sources, checking the retrieved data for data poisoning or indirect prompt injection, Jalil added.

Although it's still early days for genAI applications, said John Grady, principal analyst for Enterprise Strategy Group (ESG), "These threats are significant. We've seen some early examples of how genAI apps can inadvertently provide sensitive information. It's all about the data, and as long as there's valuable information behind the app, attackers will look to exploit it. I think we're at the point where, as the number of genAI-powered applications in use begins to rise and gaps exist on the security side, we'll begin to see more of these types of successful attacks in the wild."

This offering, and those like it, fills a significant gap and will become more important as genAI usage expands, Grady added.

Enabling AI compliance

Securiti LLM Firewalls are also aimed at helping enterprises meet compliance goals, whether legislative (such as the EU AI Act) or internally mandated policies (for example, following the NIST AI Risk Management Framework, AI RMF).
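Jalil's description of redacting sensitive data from prompts before they reach the model can be sketched as follows. This is a hypothetical illustration, not Securiti's implementation: the pattern set, placeholder tokens, and function name are all assumptions, and a production system would rely on full data classification rather than two regexes.

```python
import re

# Hypothetical detection rules for two kinds of sensitive data.
# Real deployments would cover many more data types (names, card
# numbers, credentials) using classifiers, not just regexes.
REDACTION_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens so the
    LLM never sees the protected values."""
    for label, pattern in REDACTION_RULES.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

The same substitution step could in principle run on RAG retrievals as well as user prompts, which matches the article's point that checks are deployed at multiple stages of the workflow.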
Organizations working to Gartner's AI Trust, Risk, and Security Management (TRiSM) framework will also be able to use the firewalls for key components, Securiti said.

Securiti expects the firewall offering, combined with existing capabilities in its Data Command Center, to cover all aspects of OWASP's list of the 10 most critical LLM vulnerabilities, extending protection to additional LLM threats such as jailbreaks, authentication phishing, and use of offensive and abusive language.

Securiti LLM Firewalls are available now as part of the company's overall "AI security and governance" solution, announced earlier this year.