
Lasso Security Sets New Standard in LLM Safety













To scale up large language models (LLMs) in support of long-term AI strategies, enterprises are relying on retrieval-augmented generation (RAG) frameworks that need stronger contextual security to meet skyrocketing integration demands.



Protecting RAGs requires contextual intelligence



However, traditional access control techniques aren’t designed to deliver the contextual control RAG requires. RAG’s lack of native access control poses a significant security risk to enterprises, as it could allow unauthorized users to access sensitive information.



Role-Based Access Control (RBAC) lacks the flexibility to adapt to contextual requests, and Attribute-Based Access Control (ABAC) is known for limited scalability and higher maintenance costs. What’s needed is a more contextually intelligent approach to protecting RAG frameworks that won’t hinder speed and scale.



Lasso Security saw these limitations of LLM access control early and developed Context-Based Access Control (CBAC) in response to the challenge of improving contextual access. Lasso Security’s CBAC is noteworthy for its innovative approach of dynamically evaluating the context of every access request to an LLM. The company told VentureBeat that CBAC evaluates access, response, interaction, behavioral, and data modification requests to ensure comprehensive security, prevent unauthorized access, and maintain high security standards in LLM and RAG frameworks. The goal is to ensure that only authorized users can access specific information.



Contextual intelligence helps ensure chatbots don’t divulge the sensitive information that LLMs put at risk of exposure.



“We’re trying to base our solutions on context. The place where role-based access or attribute-based access fails is that it really looks at something very static, something that is inherited from somewhere else, and something that is by design not managed,” Ophir Dror, co-founder and CPO at Lasso Security, told VentureBeat in a recent interview.



“By focusing on the knowledge level and not patterns or attributes, CBAC ensures that only the right information reaches the right users, providing a level of precision and security that traditional methods can’t match,” said Dror. “This innovative approach allows organizations to harness the full power of RAG while maintaining stringent access controls, truly revolutionizing how we manage and protect data,” he continued.
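Lasso has not published CBAC’s internals, but the contrast Dror draws between static role- or attribute-based checks and knowledge-level, per-request evaluation can be made concrete with a rough Python sketch. The field names, sensitivity tags, and heuristics below are illustrative assumptions, not Lasso’s implementation:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    """Context gathered for one LLM/RAG request (hypothetical fields)."""
    user_id: str
    role: str
    question: str
    retrieved_doc_tags: list[str]  # sensitivity tags on the chunks RAG retrieved

# A static RBAC-style table: the decision depends only on the role.
ROLE_SCOPES = {
    "finance-analyst": {"finance", "public"},
    "hr-manager": {"hr", "public"},
}

def rbac_allows(role: str, tag: str) -> bool:
    """Static check: ignores the question and the retrieved content."""
    return tag in ROLE_SCOPES.get(role, set())

def cbac_style_allows(ctx: RequestContext) -> bool:
    """Contextual check: should THIS user be asking THIS question and
    receiving an answer drawn from THESE documents?"""
    # 1. Every retrieved chunk must fall inside the user's scope.
    if not all(rbac_allows(ctx.role, tag) for tag in ctx.retrieved_doc_tags):
        return False
    # 2. Illustrative knowledge-level heuristic: questions about individual
    #    compensation are reserved for HR, whatever the document tags say.
    if "salary" in ctx.question.lower() and ctx.role != "hr-manager":
        return False
    return True

print(cbac_style_allows(RequestContext(
    user_id="u42", role="finance-analyst",
    question="What was the CFO's salary last year?",
    retrieved_doc_tags=["finance"],
)))  # False: the documents are in scope, but the question is not
```

The point of the second check is that the decision depends on the question and the retrieved content together, not on a role table fixed in advance.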



What is Retrieval-Augmented Generation (RAG)?



In 2020, researchers from Facebook AI Research, University College London and New York University published the paper “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” which defines the approach: “We endow pre-trained, parametric-memory generation models with a non-parametric memory through a general-purpose fine-tuning approach which we refer to as retrieval-augmented generation (RAG). We build RAG models where the parametric memory is a pre-trained seq2seq transformer, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever.”



“Retrieval-augmented generation (RAG) is a practical way to overcome the limitations of general large language models (LLMs) by making enterprise data and information available for LLM processing,” writes Gartner in their recent report, Getting Started With Retrieval-Augmented Generation. The following graphic from Gartner explains how a RAG works:





Source: Gartner, Getting Started With Retrieval-Augmented Generation, May 8, 2024
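The retrieve-then-generate flow the paper describes (a neural retriever over a dense vector index, feeding a seq2seq generator) reduces to a short sketch. The hashed bag-of-words encoder and the prompt-only output below are stand-ins, assumed for illustration, for a real embedding model and LLM:

```python
import numpy as np

# Toy "non-parametric memory": documents an enterprise might index for RAG.
DOCS = [
    "Q3 revenue grew 12% year over year.",
    "The VPN root certificate rotates every 90 days.",
    "Employee onboarding requires a signed NDA.",
]

def embed(text: str) -> np.ndarray:
    """Stand-in for a dense encoder: a fixed-size hashed bag-of-words vector.
    A real RAG system would use a neural retriever here."""
    vec = np.zeros(64)
    for tok in text.lower().split():
        vec[hash(tok) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOC_VECS = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = DOC_VECS @ embed(query)
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    """Augment the prompt with retrieved context; the returned prompt would
    be passed to whatever generation model the deployment uses."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(answer("How often does the VPN certificate rotate?"))
```

An enterprise deployment swaps the toy index for the organization’s own documents, which is exactly where the access control questions above arise.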



How Lasso Security designed CBAC with RAG



“We built CBAC to work as a standalone or connected to our products. It can be integrated with Active Directory or used independently with minimal setup. This flexibility ensures that organizations can adopt CBAC without extensive modifications to their LLM infrastructure,” Dror said.



While designed as a standalone solution, Lasso Security has also built CBAC to integrate with its gen AI security suite, which protects employees’ use of gen AI-based chatbots, applications, agents, code assistants, and integrated models in production environments. Regardless of how LLMs are deployed, Lasso Security monitors every interaction involving data transfer to or from the LLM. It also swiftly identifies any anomalies or violations of organizational policies, ensuring a secure and compliant environment at all times.
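Lasso has not detailed how its monitoring layer is built, but the behavior described above (inspecting every payload that moves to or from the model and flagging policy violations) can be sketched as a thin wrapper around the LLM call. The policy patterns, logger, and redaction behavior here are assumptions for illustration:

```python
import logging
import re
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-monitor")

# Illustrative organizational policies: patterns that should never cross the boundary.
POLICY_PATTERNS = {
    "credit_card": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def monitored_call(llm: Callable[[str], str], user: str, prompt: str) -> str:
    """Inspect data moving to and from the model, log violations, redact output."""
    for name, pattern in POLICY_PATTERNS.items():
        if pattern.search(prompt):
            log.warning("outbound violation user=%s policy=%s", user, name)
            raise PermissionError(f"prompt blocked by policy: {name}")
    response = llm(prompt)
    for name, pattern in POLICY_PATTERNS.items():
        if pattern.search(response):
            log.warning("inbound violation user=%s policy=%s", user, name)
            response = pattern.sub("[REDACTED]", response)
    return response

# Example with a stub model: the card number in the reply gets redacted.
print(monitored_call(lambda p: "Use card 4111 1111 1111 1111", "alice", "Which test card do we use?"))
```

In a real deployment the same checks would also run over the retrieved RAG context, not just the raw prompt and response.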



Dror explained that CBAC continually monitors and evaluates a wide variety of contextual cues to determine access control policies, ensuring that only authorized users have access privileges to specific information, even in documents and reports that mix relevant and out-of-scope data.



“There are many different heuristics that we use to determine if it’s an anomaly or if it’s a legit request, and we look at the response both ways as well. But basically, if you think about it, it all comes down to the question of whether this person should be asking this question, and whether this person should be getting an answer to this question from the variety of data that this model is connected to,” Dror said.



Core to CBAC is a series of supervised machine learning (ML) algorithms that continuously learn and adapt based on the contextual insights gained from user behavior patterns and historical data. “The core of our approach is context. Who is the person? What is their role? Should they be asking this question? Should they be getting this answer? By evaluating these factors, we prevent unauthorized access and ensure data security in LLM environments,” Dror told VentureBeat.
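Lasso has not disclosed its models, but a minimal sketch of the supervised approach Dror describes might look like the following: request contexts labeled by reviewers train a classifier that scores whether a new request is legitimate. The features, labels, and use of scikit-learn here are assumptions for illustration only:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy history of requests labeled by reviewers: 1 = legitimate, 0 = should be blocked.
# The features (role, topic, after-hours flag, document domain) are illustrative.
history = [
    ({"role": "finance", "topic": "revenue",  "after_hours": 0, "doc": "finance"}, 1),
    ({"role": "intern",  "topic": "salaries", "after_hours": 1, "doc": "hr"},      0),
    ({"role": "hr",      "topic": "salaries", "after_hours": 0, "doc": "hr"},      1),
    ({"role": "finance", "topic": "salaries", "after_hours": 1, "doc": "hr"},      0),
]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform([features for features, _ in history])
y = [label for _, label in history]

model = LogisticRegression().fit(X, y)

def is_legit(features: dict) -> bool:
    """Score a new request's context; a low probability flags it as an anomaly."""
    prob = model.predict_proba(vectorizer.transform([features]))[0, 1]
    return prob >= 0.5

print(is_legit({"role": "intern", "topic": "revenue", "after_hours": 0, "doc": "finance"}))
```

In practice, behavioral and historical signals would replace the toy labels, with the model retrained as new review decisions come in.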



CBAC takes on security challenges



“We see now a lot of companies who already went the distance and built a RAG, including architecting a RAG chatbot, and they’re now encountering the problems of who can ask what, who can see what, who can get what,” Dror said.



Dror says RAG’s soaring adoption is also making the limitations of LLMs, and the problems they cause, more urgent. Hallucinations and the difficulty of training LLMs on new data have also surfaced, further illustrating how challenging it is to solve RAG’s permissions problem. CBAC was built to take on these challenges and provide the contextual insights needed for a more dynamic approach to access control.



With RAG the cornerstone of organizations’ current and future LLM and broader AI strategies, contextual intelligence will prove to be an inflection point in how those frameworks are protected and scaled without sacrificing performance.




