
Non-human identity sprawl is agentic AI’s real risk


Enterprises have long depended on non-human identities such as service accounts, API keys, OAuth tokens and other credentials that allow services to interoperate inside digital environments. In modern cloud architectures and continuous development pipelines, these identities consistently outnumber human users, yet their governance rarely reflects the scale and authority they now hold.

A recent move from NIST is telling. Just weeks into 2026, the agency issued a request for public input on how organizations should securely develop and deploy AI agent systems. The notice comes at a moment when many enterprises are beginning to operationalize agentic AI, embedding systems designed not just to generate outputs but to interpret instructions, make determinations and carry out actions across applications and infrastructure.

Agentic systems are beginning to be used in production, while the security and governance models intended to provide their guardrails are still being defined. In too many cases, controls are added to these systems after the authority to use them has already been granted, creating an avoidable yet immense risk as agentic AI is adopted within organizations.


The quiet rise of non-human authority

Traditional identity programs were built around people. They incorporate structured onboarding, defined roles, periodic reviews and clear accountability to manage human users through the cycle of their access and responsibilities within the enterprise.

But non-human identities (NHIs) are often overlooked by these governance processes. They persist quietly in the background, are typically provisioned as part of routine administrative work to keep systems running, and are frequently granted long-lived credentials with elevated permissions, making them rich targets for attackers. As with human identities, best practices such as least-privilege permission assignments and frequent credential rotation can help secure NHIs. Applying governance to the creation, daily use and ongoing maintenance of NHIs supports secure automation and more effective control.
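As a concrete illustration, the two best practices mentioned above can be combined by making credentials short-lived and narrowly scoped by default. The sketch below is purely illustrative: `ServiceCredential`, `issue_credential` and the scope names are hypothetical, not the API of any real IAM product.

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class ServiceCredential:
    identity: str            # unique NHI identifier, e.g. "svc-invoice-sync"
    scopes: frozenset        # least-privilege permission set
    token: str               # opaque bearer secret
    expires_at: datetime     # built-in expiry forces rotation


def issue_credential(identity: str, scopes: set[str],
                     ttl: timedelta = timedelta(hours=1)) -> ServiceCredential:
    """Issue a credential that expires quickly, so rotation is the default."""
    return ServiceCredential(
        identity=identity,
        scopes=frozenset(scopes),
        token=secrets.token_urlsafe(32),
        expires_at=datetime.now(timezone.utc) + ttl,
    )


def is_allowed(cred: ServiceCredential, scope: str) -> bool:
    """Deny anything outside the granted scopes or past expiry."""
    return scope in cred.scopes and datetime.now(timezone.utc) < cred.expires_at


cred = issue_credential("svc-invoice-sync", {"invoices:read"})
print(is_allowed(cred, "invoices:read"))    # in scope and unexpired
print(is_allowed(cred, "invoices:delete"))  # out of scope: denied
```

Because the credential carries its own expiry, a leaked token loses value quickly, and permission checks fail closed for anything outside the granted scope.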

When automation within enterprises was limited and tightly scoped, this gap may have carried less consequence. Today, it holds far more weight as AI agents are instantiated, execute processes and interact across systems, coordinating workflows and advancing tasks without an integral human role.

When NHIs act, weak controls scale fast

Agentic systems are designed to take action, retrieve data, interact with internal systems and move workstreams forward within the permissions they are granted. A recent report from Deloitte found that nearly three-quarters of 3,325 leaders surveyed plan to deploy agentic AI within two years. As those systems interact across applications and data sets, the scope of their authority matters even more.

When permissions are overly broad or poorly governed, AI agents amplify those weaknesses at machine speed. Sensitive data may have greater exposure than intended, workflows may extend beyond their original design assumptions, and minor configuration gaps can cascade into larger operational risk. The issue is not simply the risk of breach; it’s the scale at which unintended outcomes may occur.

The measures needed to secure AI agents are not conceptually new. Many of the principles applied to human users — least privilege, defined ownership, periodic review — remain directly applicable to NHIs. What changes is the consistency and coordination required when those principles are extended to non-human actors operating continuously and at scale.

In practice, that includes:

  • Define: Assigning each agent a unique identifier and establishing tightly scoped, purpose-driven permissions for both human and non-human actors supporting agent workflows.

  • Assess: Establishing clear ownership and ongoing review processes for NHIs to prevent orphaned identities, stale credentials and permission sprawl.

  • Enforce: Protecting sensitive data through encryption and persistent policy controls that remain enforced, regardless of how or where the data is accessed.

  • Detect: Monitoring access patterns and behavioral changes to surface unusual activity or drift from expected norms.

  • Automate: Enabling automated response capabilities that can restrict access or suspend credentials when risk thresholds are met, without disrupting essential operations.

For security leaders, this is less about inventing new frameworks and more about extending existing governance disciplines to a class of actors that operates continuously at scale. Identity defines what an agent is allowed to do, making disciplined permissions and constant visibility into these identities essential to maintaining control as automation expands.
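The Detect and Automate steps above can be sketched in a few lines: watch each agent's access events for drift from an expected baseline and suspend its credentials once a risk threshold is crossed. `AgentMonitor`, its thresholds and the resource names are illustrative assumptions, not a real product API.

```python
from collections import defaultdict


class AgentMonitor:
    """Illustrative detect-and-respond loop for agent identities."""

    def __init__(self, baseline: dict[str, set[str]], threshold: int = 3):
        self.baseline = baseline           # agent -> resources it should touch
        self.anomalies = defaultdict(int)  # agent -> count of off-baseline hits
        self.suspended: set[str] = set()
        self.threshold = threshold

    def record_access(self, agent: str, resource: str) -> str:
        """Classify one access event and auto-suspend on repeated drift."""
        if agent in self.suspended:
            return "denied"                        # Automate: access revoked
        if resource in self.baseline.get(agent, set()):
            return "ok"                            # matches expected behavior
        self.anomalies[agent] += 1                 # Detect: drift from norm
        if self.anomalies[agent] >= self.threshold:
            self.suspended.add(agent)              # Automate: suspend creds
            return "suspended"
        return "flagged"                           # surface for human review


monitor = AgentMonitor({"agent-reports": {"sales_db", "report_store"}})
print(monitor.record_access("agent-reports", "sales_db"))    # expected access
print(monitor.record_access("agent-reports", "hr_records"))  # off-baseline
```

The design choice worth noting is that suspension is automatic but bounded: the agent keeps working inside its baseline, and only repeated drift, not a single anomaly, triggers revocation, which limits disruption to essential operations.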

Security that doesn’t tax velocity

Enterprises are investing in agentic systems to streamline operations, reduce manual effort and accelerate decision-making. The objective of identity and access management for agents is not to slow that momentum, but to ensure that expansion happens in a controlled, sustainable way that does not scale risk along with it.

When agents are securely developed, provisioned with clearly bounded authority and monitored alongside the data they access, organizations gain the confidence to expand deployment and scale automation alongside the business. Risk doesn't disappear, but it becomes visible and governable rather than compounding quietly until it is too large to contain easily.

NIST’s request for input reflects an industry still formalizing standards around agentic systems, but organizations can’t afford to wait for finalized frameworks before acting. Agentic AI is already advancing into core business processes. How successfully it scales will depend on whether governance evolves in parallel — ensuring agents operate within defined identity boundaries, with data protection intentionally integrated at every stage.


