A New Class of Digital Operator
Retail organizations are moving quickly to operationalize agentic AI to streamline customer service, create self-serve customer workflows, enable shopping bots, and improve internal employee productivity. This is not just an internal transformation. Your customers will use agents to browse, compare, and buy. Your employees will create their own “mini me” agents that connect to internal applications and automate tasks on their behalf. The result: agentic commerce, where the majority of your traffic, both legitimate and illegitimate, comes not from human-driven browsers and mobile apps but from autonomous agents and bots.
For security leaders, this introduces a fundamentally new challenge. Traditional controls were designed for human users making deliberate decisions. Security technologies that rely on human behavior as the baseline (keyboard input, mouse movements, browsing patterns, when people work and shop) break down when there is no human on the other side of the request. Agentic systems behave differently: they interpret, infer, and act across multiple systems in ways that are inherently difficult to predict. And when these agents carry the same access privileges as the employees who deployed them, the risk compounds.
The Limits of Traditional Identity-Based Security
Many organizations have approached this problem by extending existing identity and access management frameworks. Agents are tied to user credentials, authenticated through enterprise identity providers, and granted access aligned with the user’s role. However, this only solves part of the problem.
Identity-based authentication and authorization answer the question of who can access a system, but they do not define what an autonomous agent should be allowed to do once it is inside: a critical distinction when agents make decisions independently. AI agents can chain API calls, retrieve sensitive data, and execute actions across systems without a human in the loop. When those agents inherit broad permissions, they effectively gain operational reach across the enterprise.
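To make the gap concrete, here is a minimal, self-contained sketch in Python. Every name in it (the role mapping, the tool registry, the hard-coded plan standing in for an LLM's tool selection) is an illustrative assumption, not any particular product's API. The authorization check asks only who is calling, so an agent carrying an employee's identity can chain calls well beyond its intended task.

```python
# A minimal sketch of the gap, using hypothetical tools and roles:
# authorization is checked against the employee's identity, so an agent
# acting with that identity can chain any call the employee could make.

USER_ROLES = {"jsmith": {"support", "billing"}}   # assumed role mapping

TOOLS = {
    "tickets.read":   ("support", lambda: "ticket summaries"),
    "billing.refund": ("billing", lambda: "refund issued"),
}

def authorized(user: str, tool: str) -> bool:
    required_role, _ = TOOLS[tool]
    return required_role in USER_ROLES.get(user, set())

def run_agent(user: str, planned_calls: list[str]) -> None:
    # The hard-coded plan stands in for an LLM's probabilistic tool
    # selection; no human approves each step.
    for tool in planned_calls:
        if authorized(user, tool):                # "who"-based check passes
            print(tool, "->", TOOLS[tool][1]())   # action executes anyway

# The agent was deployed to summarize tickets, yet a refund call also
# succeeds, because the identity check only asks who, never what.
run_agent("jsmith", ["tickets.read", "billing.refund"])
```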
Access Leads to Exposure
The issue is not simply overly broad permissions. Existing security models rely on users exercising judgment, such as knowing when it is appropriate to modify data and when to escalate a decision. AI agents do not possess that restraint; they act on probabilistic reasoning, invoking whatever tools are available whenever those tools appear relevant.
In retail environments, where customer data, financial systems, and operational platforms are tightly linked, this lack of visibility and guardrails becomes a material risk. A customer service agent designed to summarize support tickets might encounter a refund request and, without explicit instruction, invoke billing APIs. A supply chain agent might access financial systems simply because the pathways exist. In each case, authentication and authorization may function as designed, yet the outcome still falls outside acceptable boundaries. Without clear restrictions on what each agent is allowed to do, organizations are left trying to piece together what it did after the fact rather than constraining it up front.
Why Model-Level Controls Fall Short
AI providers are introducing features that help agents dynamically discover or prioritize tools. While these capabilities improve efficiency, they do not address enterprise security requirements. They lack enforceable policy boundaries, administrative control, and consistent governance across environments. Additionally, retail enterprises are not standardized on a single model or vendor, so security controls must exist outside the model itself, governing how agents interact with systems.
The New Control Layer
To safely operationalize agentic AI, organizations need to define what an agent is allowed to do at a much more granular level. Instead of granting agents broad permissions inherited from an employee, access must be scoped to the specific task the agent is intended to perform. A customer service agent should be able to read and summarize support data but not initiate financial transactions. A development assistant may interact with code repositories and ticketing systems, but not with production data. Agents must be constrained to operate within clearly defined boundaries aligned to their purpose.
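One way to express task-scoped access is as an explicit, per-agent policy. The sketch below is a hypothetical illustration, assuming made-up tool names such as tickets.read and repo.write; the point is that each agent's grant is derived from its purpose, and anything not listed (billing, production data) is simply absent.

```python
# A minimal sketch of task-scoped agent policies; tool names are
# illustrative assumptions, not a real product's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    agent_id: str
    purpose: str
    allowed_tools: frozenset[str]   # the only tools this agent may invoke
    read_only: bool = True          # mutating actions denied by default

POLICIES = {
    "support-summarizer": AgentPolicy(
        agent_id="support-summarizer",
        purpose="Read and summarize support tickets",
        allowed_tools=frozenset({"tickets.read", "tickets.search"}),
    ),
    "dev-assistant": AgentPolicy(
        agent_id="dev-assistant",
        purpose="Work with code repositories and ticketing systems",
        allowed_tools=frozenset({"repo.read", "repo.write", "jira.read"}),
        read_only=False,
    ),
}
# Neither policy includes a billing or production-data tool, so those
# calls have nowhere to succeed: the scope is the control.
```

In practice these policies would live in a central policy store rather than in code, but the shape is the same: scope follows the agent's purpose, not the deploying employee's role.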
This requires a control layer that sits between agents and the systems they interact with—one that can broker access to tools and APIs, enforce policies at the point of execution, and dynamically adjust permissions as workflows evolve. Further, when an agent takes action, organizations must be able to trace not just what happened, but which agent acted, on whose behalf, and within what defined scope.
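The sketch below illustrates what such a control layer might look like in Python, building on the AgentPolicy definitions above. The broker, tool registry, and audit format are all assumptions for illustration; the essential behaviors are that every tool call is checked against the agent's scope at the moment of execution, out-of-scope calls fail closed, and every attempt, allowed or not, is written to an audit trail that records which agent acted, on whose behalf, and what was decided.

```python
# A minimal sketch of a policy-enforcing broker between agents and tools.
import json
import time

class PolicyViolation(Exception):
    """Raised when an agent attempts a tool outside its defined scope."""

class ToolBroker:
    def __init__(self, policies, tool_registry, audit_log):
        self.policies = policies     # agent_id -> AgentPolicy (see above)
        self.tools = tool_registry   # tool name -> callable
        self.audit = audit_log       # append-only sink (a list here)

    def invoke(self, agent_id, on_behalf_of, tool, **kwargs):
        policy = self.policies.get(agent_id)
        decision = "deny"
        try:
            if policy is None or tool not in policy.allowed_tools:
                raise PolicyViolation(f"{agent_id} is not scoped for {tool}")
            decision = "allow"
            return self.tools[tool](**kwargs)
        finally:
            # Record every attempt, so each action traces back to the
            # agent, the user it acted for, and the policy decision.
            self.audit.append(json.dumps({
                "ts": time.time(),
                "agent": agent_id,
                "on_behalf_of": on_behalf_of,
                "tool": tool,
                "decision": decision,
            }))

audit: list = []
broker = ToolBroker(POLICIES,
                    {"tickets.read": lambda **kw: "ticket summaries"},
                    audit)
broker.invoke("support-summarizer", "jsmith", "tickets.read")   # allowed
try:
    broker.invoke("support-summarizer", "jsmith", "billing.refund")
except PolicyViolation:
    pass   # denied and logged; the summarizer cannot issue refunds
```

Permissions can then evolve with the workflow: adjusting an agent's scope is a policy change at the broker, not a re-engineering of the agent itself.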
A Defining Moment for Retail Security
The implications are significant; customer trust, financial integrity, and operational resilience are all at stake. Agentic AI will continue to grow across retail environments, driven by competitive pressure and the need for operational scale. The challenge is to ensure that innovation is matched by appropriate governance.
This moment represents a shift in how security must be approached. Controlling system access is no longer sufficient; organizations must also control how autonomous entities operate within them. The enterprises that succeed will invest in infrastructure that enforces boundaries at the agent level, rather than relying solely on identity and model behavior. And they will treat agentic AI not as an extension of existing systems, but as a new operational layer that requires its own security model and governance. In the end, the organizations that lead in AI will not be those that move fastest, but those that have the controls that enable them to build safely and securely.


