
Charting the path to the autonomous enterprise


The concept of the autonomous enterprise is a compelling vision for the future of automation, mirroring the enthusiasm and progress seen with self-driving cars — but applied to business and technical processes. 

The concept rests on the principle that each component — and, eventually, the enterprise as a whole — could operate with a high level of self-governance, dynamically adapting to market shifts and operational demands with minimal human intervention. 

Importantly, the goal of this autonomous vision has been to go beyond simple hardcoded API automation and approaches like robotic process automation (RPA), which excelled at automating discrete, repetitive tasks but proved to be brittle and challenging to scale. 

To address these limitations, a few years ago, Gartner championed the idea of hyperautomation — the combined use of AI, process mining, RPA and other technologies to automate end-to-end business processes at scale. 

Recent progress in generative and agentic AI is now pouring fuel on the vision of the autonomous enterprise. Advancements in large language models (LLMs) for processing unstructured data are paving the way for systems that support goal-oriented behavior across entire business functions. Yet despite progress in automating an increasing number of processes, gaps remain in achieving true enterprise autonomy. 


The start-and-stop progress of enterprise autonomy mirrors the development of self-driving cars. Systems are getting better at automating larger portions of the business environment, but, as with self-driving cars, they still rely on a vigilant human ready to take over at a moment’s notice when something goes wrong. 

True autonomy is beginning to emerge, but only in geofenced or highly constrained boundaries. And when these bounded systems face novel or unexpected problems — similar to a self-driving car getting stuck in a cul-de-sac or blocking emergency workers — they often fail in new ways, requiring human intervention. 

The autonomous paradox

The fundamental paradox of the autonomous enterprise is that while autonomy is a clear goal for many enterprise leaders, the use of the term autonomous — particularly by vendors — invites immediate pushback. 

“Plain and simple, for the enterprise, ‘autonomous’ currently equates more to risk than any positive impact. Enterprises do not trust the AI [in these systems] to be autonomous,” said Nick Kramer, leader of applied solutions at SSA & Co., a global consulting firm advising companies on strategic execution. 


Indeed, reports abound of autonomous AI going rogue, with serious consequences: Chatbots are sparking lawsuits over inaccurate advice. CrowdStrike brought down IT systems worldwide with a bad update. Most recently, AWS suffered a massive outage due to the automatic propagation of a DNS misconfiguration. 

Practically speaking, this manifests itself as requiring a human in the loop at frequent points in the augmented process. “Augmentation is a word we use a lot,” Kramer said. “Even emotionally, the connotation has led the conversation away from autonomous to agentic. Agents help us human beings, while autonomous systems replace us.” 


What’s in a name? The autonomous vs. agentic divide

Semantics plays a key role in the journey to self-operating business systems. Nishant Udupa, practice director at Everest Group, explained that while the terms autonomous and agentic are pretty much synonymous, their practical usage has diverged. 

“In general, autonomous systems refers to independent or self-governing entities composed of multiple agents,” Udupa said. The term agentic, in contrast, is used to denote individual agents working in coordination to create these autonomous or self-operating systems.

Usage of the terms is also industry-dependent, Udupa observed. Autonomous has gained traction in physical domains such as self-driving vehicles and robots, he explained, and agentic is more popular in software-driven workflows, including sales, marketing and engineering. 


This divergence in usage is no accident. 

“The idea of a fully autonomous system, while appealing in theory, remains largely impractical today,” said Udupa, noting that nearly 70% of all agentic AI initiatives are still in the proof-of-concept or pilot stage rather than full-scale deployment, according to Everest Group’s recent polling of 123 executives. More than half fail to progress due to factors such as cost concerns, data privacy issues, uncertainty about the right use cases and limited technical expertise.

“What’s more feasible are agentic components — smaller, goal-driven agents capable of executing discrete tasks autonomously within defined boundaries,” Udupa said. 

There’s another issue. 

Armando Franco, director of technology modernization at TEKsystems Global Services, said that for C-suite executives, the autonomous enterprise is simply the wrong branding — the term is too abstract for business leaders. AI, automation and AI agents have become more tangible and outcome-oriented. 

“Autonomy is the result, not the headline,” Franco said. “When you layer GenAI, workflow intelligence and API-first architectures, what you’re really building is an increasingly self-governing operating model.”

Reframing autonomous levels for the enterprise

Efforts to standardize self-driving levels for cars offer valuable lessons for the enterprise, regardless of whether the term autonomous or agentic is used. SAE International has popularized a six-level framework (L0-L5) for characterizing progress in autonomous cars. This model defines the division of labor and responsibility between humans and AI. 

At the bottom rungs of this ladder, L1 and L2 capabilities support features like speed control and lane keeping, with human drivers firmly guiding operations and taking full responsibility. At L3, the AI can take full control, but humans need to be vigilant in case they need to take over on short notice. At L4, the AI can operate the car autonomously, but only within geofenced areas or under specific environmental conditions. In the future, L5 AI will be able to drive a vehicle under any circumstances. 
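The division of responsibility across these levels can be made concrete with a short sketch. This Python snippet is illustrative only — the level descriptions paraphrase SAE J3016, but the `responsible_party` function and its return values are assumptions made for this example, not part of any standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, L0 through L5."""
    L0 = 0  # no automation: human does everything
    L1 = 1  # driver assistance: e.g. speed control OR lane keeping
    L2 = 2  # partial automation: combined features, human fully responsible
    L3 = 3  # conditional automation: AI drives, human must take over on request
    L4 = 4  # high automation: no human needed within a geofenced domain
    L5 = 5  # full automation: AI drives under any circumstances

def responsible_party(level: SAELevel, inside_domain: bool = True) -> str:
    """Who carries operational responsibility at a given level (illustrative)."""
    if level <= SAELevel.L2:
        return "human"                  # human supervises at all times
    if level == SAELevel.L3:
        return "system-until-handoff"   # AI drives but may demand a takeover
    if level == SAELevel.L4:
        # autonomy holds only inside the geofence or approved conditions
        return "system" if inside_domain else "human"
    return "system"                     # L5: no human fallback required
```

The L4 branch is the one the article returns to later: responsibility flips back to the human the moment the system leaves its defined operating domain.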

This model has some value for enterprise discussions, but caution is warranted regarding vendor claims. A cautionary example from the automotive space is the divergence between Tesla’s marketing of full self-driving and the capability it actually delivers, which is only L2 AI — requiring a human to maintain full control at all times. 

“Eventually, we’ll get to some standardization of a similar framework for agentic/autonomous AI, but currently, we view them more as marketing material, unfortunately,” Kramer said. Having some form of the SAE concept is important once everyone is aligned on the fact that AI is key to enterprise automation. 

However, it’s also important to clarify what self-operating levels mean in practice. A common danger and pitfall Kramer has run into is treating all automations like a generative AI or LLM problem. 

“We don’t need agents for everything,” Kramer said. Some automations are simple and effective, akin to RPA-style rule sets and behaviors. So, his team spends a great deal of time helping clients put an objective framework together to determine the best-fit AI solutions. 

Udupa observed that SAE-style self-driving frameworks are gaining traction in some domains, like telecom, for classifying AI progress in network operations. But even here, the framework primarily serves to guide discussions rather than to provide rigid engineering specifications. 

“Such frameworks are more of a taxonomy thing,” Udupa said. They enable an enterprise to communicate the level of autonomy it operates at, making it easier for the media and investors to understand the extent of AI infusion and drive increased funding and positive media attention. However, in terms of the engineering flow, the journey from L0 to L5 is more continuous. 

The human/AI handoff gap: the challenge of assigning responsibility

An interesting gap occurs in the SAE framework between the leap from L2 AI for advanced driver assistance and L3 AI for conditional automation. At L2, the human is fully responsible, even if the system is braking and steering. At L3, the system is fully accountable until it isn’t, and then it may demand the human take back control at any moment. 

For the enterprise, the conditional nature of this handoff creates a legal, technological and human-related nightmare. 

“The handoff problem in autonomous enterprise systems precisely mirrors the SAE Level 2 to 3 gap in autonomous vehicles, where responsibility shifts from human to machine in ways that create profound ambiguity,” Kramer said. 

This ambiguity leads to automation complacency, where the human monitor stops paying attention. When an error occurs, a disengaged human is unprepared to take over. Additionally, human skills can deteriorate over time, leaving the on-call individual unprepared in a critical moment. 

Should a problem occur, the ambiguity of the handoff makes it difficult to assign responsibility. Is it the human’s fault for not catching the error, or the AI’s fault for making it? “Enterprise systems exhibit identical challenges,” Kramer said.

Given this chasm, most enterprises are refusing to make the leap to conditional automation. The current best practice is to improve the human-in-the-loop system. This approach alleviates the risks and can even achieve near-perfect accuracy without hallucinations. The goal is to manage exceptions effectively, with intervention thresholds adjusted based on risk level, customer history and business impact. 
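An intervention threshold of this kind can be sketched as a simple scoring function. The weights, the 0.5 threshold and the normalization constants below are invented for illustration — a real system would calibrate them against its own risk appetite.

```python
# Hypothetical human-in-the-loop escalation check. All weights and the
# threshold are illustrative assumptions, not a prescribed formula.
def needs_human_review(risk_score: float,
                       customer_tenure_years: float,
                       business_impact_usd: float,
                       threshold: float = 0.5) -> bool:
    """Return True when an automated decision should be routed to a human.

    risk_score: model-estimated risk in [0, 1]
    customer_tenure_years: a longer, known history lowers escalation pressure
    business_impact_usd: larger financial exposure raises it
    """
    tenure_discount = min(customer_tenure_years / 10.0, 1.0) * 0.2
    impact_pressure = min(business_impact_usd / 100_000.0, 1.0) * 0.3
    score = risk_score - tenure_discount + impact_pressure
    return score >= threshold
```

The point of the sketch is the shape, not the numbers: the same automated step can run straight through for a long-tenured, low-stakes case and escalate for a high-exposure one.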

Udupa said that the role of the human driver must fundamentally change, with humans removed from routine processes entirely and elevated to a new role. “Essentially, humans in enterprise autonomous systems only focus on governance, exception management and continuous optimization,” he said. 

In this model, an AI orchestration and decisioning layer ensures that human oversight is embedded intelligently within AI-driven processes. Mechanisms for human override need to exist, especially in mission- and safety-critical industries, such as requiring a plant shutdown in a manufacturing setting. 

Practical geofences for enterprise processes

In the automotive industry, numerous pioneers are making incredible progress in high-automation capabilities. Examples include the rollouts of self-driving taxi services from Waymo and Tesla that operate in geofenced areas without steering wheels — importantly, with remote drivers on standby to take over when problems occur. 

In enterprises, these geofenced areas are analogous to cordoned-off aspects of business processes, where some combination of AI and static rules achieves reliability for straight-through processing. 

“The pattern across all sectors shows enterprises deploying autonomous systems within carefully defined boundaries rather than pursuing unrestricted automation,” Kramer said. 

These systems can operate autonomously within specific process boundaries, domain constraints or operational parameters, with explicit handoff points when complexity, risk or uncertainty exceeds thresholds.

For example, in insurance claims processing, Kramer is seeing multi-agent systems use sophisticated geofencing. For simple claims, the system provides fully autonomous, straight-through processing with no human involvement, while complex claims are automatically escalated to human adjusters. The fraud detection boundary operates similarly. AI agents continuously analyze patterns and flag suspicious cases, while human investigators review flagged items in real time.
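A minimal sketch of that routing logic, assuming invented boundaries (the $5,000 straight-through limit and 0.8 fraud threshold are hypothetical, not drawn from any real claims system):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    amount_usd: float
    fraud_score: float   # output of a pattern-analysis agent, in [0, 1]
    has_injuries: bool

# Illustrative geofence boundaries; a real insurer would set these by policy.
STRAIGHT_THROUGH_LIMIT = 5_000.0
FRAUD_FLAG_THRESHOLD = 0.8

def route_claim(claim: Claim) -> str:
    """Route a claim inside or outside the autonomous 'geofence'."""
    if claim.fraud_score >= FRAUD_FLAG_THRESHOLD:
        return "human_fraud_investigator"   # crosses the fraud boundary
    if claim.has_injuries or claim.amount_usd > STRAIGHT_THROUGH_LIMIT:
        return "human_adjuster"             # too complex for straight-through
    return "autonomous_straight_through"    # fully automated settlement
```

Everything inside the final branch is the geofenced zone; the two earlier branches are the explicit handoff points where complexity or risk exceeds the threshold.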

Udupa suggested that this kind of geofencing serves as the basis for AI orchestration. The process of mapping out these geofences involves identifying which business processes agents should handle and which require human oversight and intervention.

“This also seems to me to be a business decision rather than a technology decision,” Udupa said. 

For example, many enterprises are familiar and comfortable with the notion of “dark factories” as fully autonomous factories that can, in theory, operate effectively without humans. Yet businesses also need to be mindful of worker unions and sensitive materials when making decisions to embed autonomy in certain parts of their manufacturing while retaining broader human oversight and control.


From hyperautomation to agentic AI 

The tools and processes for supporting more autonomous enterprises are undergoing a paradigm shift thanks to generative and agentic AI innovations. Kramer observed that the industry has moved from the idea of using hyperautomation to manage workflows with multiple automation tools to increasingly autonomous agentic AI systems that reason, plan and act independently. 

“This wasn’t incremental improvement, but categorical transformation in how enterprises conceptualize automation,” Kramer said. 

Franco observed that the rise of agentic AI architectures is driving the change from passive AI answering prompts to active AI that can take contextually informed, goal-driven actions. In addition, emerging frameworks from leading AI and traditional enterprise vendors are enabling composable micro-agents that integrate with enterprise systems while maintaining governance and traceability.

“CIOs are no longer experimenting with autonomy, they’re operationalizing it,” Franco said. “We’re seeing early autonomous workflows embedded in incident response, software development lifecycles and customer engagement systems.”

Growing an autonomous stack

Figuring out how to derive the most benefit from more capable geofenced and human-in-the-loop systems requires improving the processes and technical architecture to be able to use emerging tools and best practices safely. 

At a process level, Udupa said one approach is a four-step adoption framework that Everest Group organizes around improving systems of execution:

  • Data architecture and integration: The enterprise needs to create a real-time, interoperable data foundation across layers. In manufacturing, this would include IT, operational technology and internet of things systems. In telecom, this would consist of customer, network and service data. This foundational layer is essentially the data that the AI agents will use for decision-making.

  • AI orchestration and decisioning: This involves training AI agents on the data, defining robust governance of decision rules, building guardrails and testing agents. This intelligent layer helps translate the data and analytics into action.

  • Process automation and workflow adaptation: This layer can help teams redesign and evolve existing workflows to become self-adjusting systems for intelligent execution and minimal human intervention. 

  • Talent transformation and governance: This layer equips the workforce to supervise and govern autonomous operations. It must include change management, talent upskilling/reskilling, support for new AI operations roles and training for new AI governance frameworks to mitigate risk.

Building on this, Franco described an emergent autonomy stack organized as a series of five technology layers that parallel the classic cloud stack:

  • Data foundation: Supports trusted, real-time multimodal data and dynamic data pipelines.

  • Model and agent layer: Focuses on foundational models, domain-tuned agents and retrieval augmented generation.

  • Integration and orchestration: This includes secure API gateways, event buses and message queues.

  • Experience and insight layer: Innovations in adaptive interfaces, copilots and autonomous workflows.

  • Governance and ethics layer: Tools for managing policy as code, model risk management and audit-ready platforms. 
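The "policy as code" idea in the governance layer can be illustrated with a toy example. The policy names and rules below are invented for this sketch; real policy-as-code tooling would express these in a dedicated policy language rather than inline Python.

```python
# Toy 'policy as code' check for a governance layer. Policies and field
# names are hypothetical, chosen only to show the pattern: machine-readable
# rules evaluated against each proposed autonomous action.
POLICIES = {
    "max_autonomous_payout_usd": 5_000,
    "pii_fields_forbidden_in_prompts": {"ssn", "dob"},
}

def violates_policy(action: dict) -> list[str]:
    """Return the names of any governance policies the action breaks."""
    violations = []
    if action.get("payout_usd", 0) > POLICIES["max_autonomous_payout_usd"]:
        violations.append("max_autonomous_payout_usd")
    used_fields = set(action.get("prompt_fields", []))
    if used_fields & POLICIES["pii_fields_forbidden_in_prompts"]:
        violations.append("pii_fields_forbidden_in_prompts")
    return violations
```

Because the rules are data rather than tribal knowledge, they can be versioned, audited and enforced before an agent acts — which is what makes the layer "audit-ready."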

The future of the autonomous enterprise

The enterprise of the future is likely to be more autonomous, even if the term is absorbed by more practical, less threatening terms like agentic and augmented AI.

Udupa said he believes the term will continue to follow its current split. “The distinction, in my mind, lies between autonomous physical systems and devices versus agentic software-driven processes and systems,” he said. This means we will increasingly talk about autonomous cars and factories, as well as agentic finance and marketing departments. 

Franco said he suspects the term autonomous enterprise will gradually be absorbed into the language of agentic systems or self-governing operations, much like digital transformation gave way to modernization and AI transformation.

“Enterprises aren’t chasing autonomy as a buzzword, it’s the result of what they are building,” he said. “They’re building self-correcting, continuously learning ecosystems where AI, humans and systems produce business results.”


