AI agents demand new identity frameworks as security stakes rise
Oracle SVP Eleanor Meritt argues that AI agents require unique identity management to ensure secure access and operations.

As intelligent agents begin to act on our behalf—managing investment portfolios, coordinating supply chains, and executing complex business workflows—a pressing security question emerges: Who, or what, are they? From an enterprise security perspective, this is not a philosophical debate. It’s a critical vulnerability. Before an AI agent can be trusted with the keys to the kingdom, it needs an identity, and the infrastructure for managing that identity is not yet ready for prime time.
Agentic AI's rapid rollout is creating a direct conflict with legacy concepts of Identity and Access Management (IAM), the very systems designed to control who can access what. The challenge is forcing a fundamental rethink of digital security, and few are closer to the problem than the experts building the solutions.
We spoke with Eleanor Meritt, Senior Vice President at Oracle, whose perspective is grounded in a nearly 30-year career spent at the core of enterprise technology. Having risen from software developer to executive overseeing the middleware and database systems where these new cloud-native workloads will live, Meritt is now focused on the critical task of building the security fabric for the agentic era. She sees the challenge ahead as nothing less than redefining how identity is created, managed, and secured in a world where software—not just people—will hold the keys to critical systems.
Evolving the blueprint: "The Identity and Access Management world needs to evolve; it's a very important component for the security of AI agents," Meritt says. More than just a technical upgrade, she sees this as a rethink of how security and privilege models are embedded into non-human entities. "We have to be thinking about human behavior and how we want to model that inside of agents, particularly in terms of security and access privileges."
Not so human after all: The common analogy of treating AI agents like human employees, Meritt argues, quickly falls apart under scrutiny. "We can't treat AI agents like humans because, from an identity perspective, the human process is so manual," she explains. "You have approval workflows where people need manager sign-offs, and ultimately, humans are making the decisions on what access is appropriate." This manual, high-friction model is fundamentally incompatible with the speed and scale of AI.
Meritt offers a tangible example: an investment agent that provides advice. Initially, it only needs read-access to a portfolio. But what happens when a user delegates a task and says, "Go make those trades for me"? Suddenly, the agent needs write-access. The security mechanisms to grant, monitor, and revoke that privilege on a transactional basis, especially for agent-to-agent communication, are not fully pressure-tested for these new, dynamic use cases.
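Meritt's scenario maps naturally onto short-lived, scope-bound credentials: the agent starts with read-only access, and a delegation mints a narrow elevation that expires with the task instead of a permanent privilege change. Here is a minimal sketch of that pattern in Python; the names and scope strings are hypothetical, not any vendor's actual API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """A short-lived, scope-bound credential for an AI agent (illustrative model)."""
    agent_id: str
    scopes: set = field(default_factory=set)
    expires_at: float = 0.0

    def allows(self, scope: str) -> bool:
        # A privilege is valid only if it was granted AND has not expired.
        return scope in self.scopes and time.time() < self.expires_at

def issue(agent_id: str, scopes: set, ttl_seconds: float) -> AgentCredential:
    """Grant a credential that expires automatically after ttl_seconds."""
    return AgentCredential(agent_id, set(scopes), time.time() + ttl_seconds)

# The advisory agent starts with read-only access to the portfolio.
advisor = issue("investment-agent-7", {"portfolio:read"}, ttl_seconds=3600)
assert advisor.allows("portfolio:read")
assert not advisor.allows("portfolio:trade")

# "Go make those trades for me": delegation issues a separate,
# narrowly scoped, short-lived grant rather than widening the original one.
trade_grant = issue("investment-agent-7", {"portfolio:trade"}, ttl_seconds=60)
assert trade_grant.allows("portfolio:trade")

# Revocation on task completion: simply expire the credential.
trade_grant.expires_at = time.time()
assert not trade_grant.allows("portfolio:trade")
```

The key design choice is that elevated privilege is transactional: it is a new credential with its own clock, so monitoring and revocation can target the delegated task without touching the agent's baseline access.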
Community first: This is pushing many organizations toward building their own custom solutions, a trend Meritt believes is a critical mistake. "Going forward with AI agents, you can't get away with custom solutions; there is just too much risk built into it," she states. "You're better off adopting community-based standards because they will have a lot more feedback into their weaknesses and strengths, and you'll also be able to coexist better across different environments." She points to an evolving ecosystem of standards like OAuth, AuthZEN, and SPIRE as the necessary foundation. Without a collective, community-driven effort to build on these shared protocols, the industry risks creating a "wild west" of insecure AI implementations.
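To make the standards-based approach concrete: the AuthZEN draft specification frames every access decision as a subject/action/resource question posed to a shared policy decision point (PDP), rather than custom logic baked into each agent. The sketch below shows that request shape in Python; the field names follow the AuthZEN draft as of this writing, and the in-process `toy_pdp` is a stand-in for what would really be a call to a PDP service:

```python
# An AuthZEN-style evaluation request: the agent is the subject, and a
# central policy decision point (PDP) answers allow/deny for each action.
# Field names track the AuthZEN Authorization API draft; treat as illustrative.

def build_evaluation_request(agent_id: str, action: str, resource_id: str) -> dict:
    return {
        "subject":  {"type": "agent", "id": agent_id},
        "action":   {"name": action},
        "resource": {"type": "portfolio", "id": resource_id},
    }

def toy_pdp(request: dict, policy: dict) -> dict:
    """Stand-in decision point; a real deployment would POST the request to a PDP."""
    allowed = request["action"]["name"] in policy.get(request["subject"]["id"], set())
    return {"decision": allowed}

# Policy lives in one place, not inside each agent's custom code.
policy = {"investment-agent-7": {"portfolio.read"}}

req = build_evaluation_request("investment-agent-7", "portfolio.read", "acct-42")
assert toy_pdp(req, policy) == {"decision": True}

# The same agent asking to trade is denied until policy explicitly grants it.
req2 = build_evaluation_request("investment-agent-7", "portfolio.trade", "acct-42")
assert toy_pdp(req2, policy) == {"decision": False}
```

Because every agent asks the same question in the same format, the weaknesses Meritt describes get found and fixed once, at the standard and the PDP, instead of separately in each custom implementation.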
Dormant to dominant: This urgent focus on machine identity is a recent development. Meritt notes that when generative AI first exploded, many in the identity field saw it as a tool for minor optimizations, not something that would upend core security practices. "Now with agentic AI, everybody has realized that IAM is absolutely core to its future and security," Meritt explains. "Agents have to have their own identity. Otherwise, it becomes very difficult to track their security."
Now, IAM has moved from a background function to a top security priority, while also sparking a wave of collaboration to tackle one of the field’s hardest problems. "I haven't seen as much innovation or conversation as I have in the past few months in my entire career in identity—not at this speed, anyway," says Meritt. "It's fantastic how everybody's coming together to contribute to the discussions."