The Challenges of and Solutions for Enterprise-Wide Adoption of Generative AI Models

Deploying generative artificial intelligence and large language models across the enterprise presents opportunities for both increased productivity and innovation and challenges for managing organizational risk.
Hands typing on keyboard

The Path Taken

In the 10 or so years since artificial intelligence (AI)-dependent tools have become an integral part of the business ecosystem, retail organizations have been among their most enthusiastic adopters. The industry has led the development and deployment of innovative, productivity- and profit-enhancing solutions for issues that have plagued the field for ages, such as meeting market demands, improving customer satisfaction, and preventing theft, as well more recent challenges, such as multi-channel customer options, collecting and analyzing data on consumer buying habits and preferences, and maximizing supply chain performance.

New AI-driven tools have enabled organizations of every size, from multinational chains to one-off specialty shops, to: 

  • Create personalized shopping experiences for both online and in-store customers
  • Aggregate and analyze highly detailed data around shopping and purchasing habits
  • Identify both return customers and known shoplifters via facial recognition
  • Streamline and optimize their supply chain
  • Implement smart fitting-room technologies that enable shoppers to:
    • Find coordinating accessories for the clothes they are trying on
    • Purchase those items on the spot

The Present

The most recent additions to the AI toolbox are large language models (LLMs) and generative AI (GenAI) models. These innovations are being deployed throughout the enterprise for tasks as diverse as developing pricing structures, analyzing data to balance inventory and identify market trends, and generating and executing narrowly targeted marketing campaigns. Across the e-commerce spectrum, LLMs and GenAI models are increasing productivity, driving sales, and reducing fraudulent activity. However, in addition to providing these wow-factor benefits, deploying LLMs and GenAI models can pose significant new challenges.

Organizations adopting these models must acknowledge and prepare for the new layer of risk that accompanies–and can disrupt–successful model adoption and deployment. New additions to the system infrastructure means an expanded attack surface for external actors and new ways for insiders to make mistakes or take deliberately harmful actions. These enterprises must adopt a state-of-the-art response to ensure maximum ROI from these state-of-the-art technologies. We’ve created the following list that identifies the top considerations that accompany LLM deployments, and discusses ways to address them.

Observability and Visibility

When multiple models, including multimodal GenAI models that use voice, images, video, and other channels, are deployed across an organization, they do not integrate well, if at all. When models operate in parallel, security teams must work with a fragmented view of the overall system when what they really need is one tool that provides full visibility into and across all models in use.

Both the number and types of models available are growing at an extremely rapid pace. While the future will undoubtedly provide additional options, right now, every organization can easily deploy:

  • Large external models, such as ChatGPT
  • Models embedded in SaaS applications, as in Salesforce
  • Retrieval-augmented generation (RAG) models
  • Small, internal models fine-tuned on proprietary or confidential company data

Depending on the organization, the combination of models deployed could number in the low dozens or well into the hundreds. Having to access each model manually to monitor performance and other metrics would be very resource-intensive while also being very inefficient.

The answer to this issue involves deploying an automated tool that spans all the models in use, providing unhampered observability at a granular level. Being able to see, evaluate, and leverage insights about model performance, behavior, and use can enhance and streamline decision-making processes, overcome inherent limitations, and provide stability, reliability, efficiency, and added security across the organization.

Data Security

LLMs and GenAI models are trained on enormous datasets and even “small” models trained on proprietary datasets contain a large amount of sensitive information. They must be protected from accidental, as well as deliberate, data leakage. The most common source of data leakage is an unintended exposure contained in a user prompt to the model. This could include something as innocent as an employee who drafts an email that includes detailed account information regarding a vendor or a customer, and then asks the LLM, for instance, ChatGPT, to polish the language and make it sound more professional. The security issues in this scenario are threefold:

  • The private, sensitive, or confidential information in the email is sent outside the organization, which is an unauthorized release.
  • The information becomes the property of a third-party—the model provider—that should not have access to the information and that may or may not have strong security protocols to prevent a data breach.
  • The information, now the property of the third party, could be incorporated into the dataset used to train the next iteration of the model, meaning that the data could be made available to anyone—such as a competitor—who queries that model with a prompt crafted to find such data.

The answer to this issue is also threefold:

  • Employees and other users of the models must be educated as to the risks posed by model use and trained to the models properly.
  • AI security policies must be crafted that describe appropriate and inappropriate use of the models and that align to organizational values.
  • Model usage by individual users must be traceable and auditable.
AI Security

“AI security” is another term that is used more and more frequently lately, but without much explanation. The term refers to the strategic implementation of robust measures and policies to protect an organization’s AI systems, data, and operations from unauthorized access, tampering, malicious attacks, and other digital threats. It goes well beyond traditional cybersecurity because every AI-driven or AI-dependent component linked to an organization’s digital infrastructure adds to the sprawl of pathways into the system.

While many technical solutions exist to address technological vulnerabilities, the most commonly exploited vulnerability in an organization is the user who, as mentioned above, could inadvertently include sensitive information in a prompt or act on information received in a response that they don’t realize is malware, a hallucination, a phishing campaign, or a social engineering attempt. An insider who takes deliberate actions, for instance, by trying to get around security features via prompt injection or “jailbreak” for a singular purpose, and does not realize they could be putting the organization at risk, is another unfortunately common threat vector.

The answer to this issue is to apply strong filters on both outgoing and incoming channels to identify content that is suspicious, malicious, or otherwise misaligned with organizational policies and industry standards.

The Future: Safe, Enterprise-wide Model Adoption

The challenges of deploying AI systems, specifically Gen AI systems, are growing in number and sophistication on par with the number and sophistication of the models themselves. The risks presented by poor or incomplete adoption and deployment plans are also expanding in scope, scale, and nuance. This is why identifying, evaluating, and managing every potential risk is vital for maintaining the models’ integrity, privacy, and reliability, as well as the organization’s reputation and competitive advantage.

Developing the right plan, which includes employee education and training about the role they play in mitigating risk, and deploying the right solution will go a long way toward ensuring the security and stability of your organization’s use of GenAI models.

While having a strong AI security strategy is very important, incorporating the best tools for the situation, such as our first-to-market LLM security solution, Moderator, is also critical to ensuring a safe, secure adoption and rollout. As a model-agnostic, scalable, and “weightless” layer in the security infrastructure, Moderator enables full observability into and across all models in use with no introduced latency issues. When system and security administrators can see who is doing what, how often, and with which models, they are afforded both wide and deep user and system insights that support a strong, transparent deployment posture.

More Recent Blog Posts