Three years after ChatGPT reignited investments in AI, enterprise focus is shifting from improving large language models (LLMs) to building agentic systems on top of them.
Vendors are bolting agentic capabilities onto workflows, spanning copilots, autonomous automations and digital twins used to optimize factory performance. But many of these proofs of concept are colliding with messy realities, including agents gone rogue, gaps in unstructured data quality and new compliance risks.
Over the next year, experts predict four broad trends:
- Growing competition between large action models (LAMs) and other agentic approaches, as vendors and enterprises chart different paths to achieving similar automation goals.
- Shifting agentic development investments, from overcoming LLM limitations to more strategic solutions that extend an enterprise’s competitive advantage.
- Continued maturation of physical AI, improving engineering workflows that will gradually expand across the enterprise.
- Rising investment in metadata, governance and new AI techniques, driven by data quality issues and tightening compliance requirements.
Let’s dive in.
LAMs face competition from other agentic approaches
The excitement over LLMs — the underpinning of ChatGPT’s success — sparked interest in the potential for LAMs that could read screens and take actions on a user’s behalf.
Ashish Vaswani, a lead author on the seminal Google transformer paper that underpins LLMs, for example, cofounded Adept AI to focus on the potential of LAMs. Adept AI launched ACT-1, an “action transformer” designed to translate natural language commands into actions performed in the enterprise, but that effort has yet to gain significant traction. Meanwhile, Salesforce has introduced a family of xLAM models, in concert with simulation and evaluation feedback loops.
But despite the hype around self-driving AI browsers and operating systems, progress is mixed and the market confusing, according to Patrick Anderson, managing director at digital consultancy Protiviti.
“The current players have made good progress toward mimicking what an LAM ultimately seeks to do, but they lack contextual awareness, memory systems and training built into a model of user behavior at an OS level,” Anderson explained. “There is also a misconception surrounding LAMs, versus simply combining LLMs with automation.”
One challenge is the limited availability of true LAMs in the ecosystem. For example, Microsoft has started rolling out AI that can take action on a PC, but Anderson said the LAM aspects are still in the research stage. This disparity across vendors leads to confusion in the market.
On the surface, the vendor offerings appear to be LLMs that can perform automation (e.g., Copilot and Copilot Studio, or Gemini and Google Workspace Studio). Microsoft has also demonstrated “computer use” capabilities within its agent frameworks that preview LAM-type functionality.
“However, these approaches still lack the memory systems and contextual awareness required for adaptive learning and for avoiding repeating mistakes — capabilities that are key to LAMs,” Anderson said.
Vitor Avancini, CTO at Indicium, an AI and data consultancy, cautioned that LAMs — in their current iteration — also carry higher risks. Generating text is one thing. Triggering actions in the physical world introduces real-world safety constraints. That alone slows enterprise adoption.
“That said, LAMs represent a natural next step beyond LLMs, so the rapid rise of LLM adoption will inevitably accelerate LAM research,” Avancini said.
In the meantime, agentic systems are further along. They don’t have the physical capabilities of LAMs, but they already outperform traditional rules-based systems in versatility and adaptability. “With the right orchestration, tools and safeguards, agent-based automation is becoming a powerful platform long before LAMs reach mainstream viability,” Avancini said.
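To make the distinction concrete, the sketch below shows what Anderson calls “simply combining LLMs with automation,” with the orchestration, tools and safeguards Avancini describes: a model proposes a tool call, and a thin layer enforces an allowlist and human-in-the-loop approval before anything executes. It is a minimal sketch; the tool names, the JSON plan format and the stubbed call_llm function are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal sketch of "LLM + automation" orchestration with guardrails.
# The model call is stubbed so the example runs end to end; a real
# implementation would call a hosted LLM and parse its tool-call output.
import json

ALLOWED_TOOLS = {"lookup_order", "draft_email"}   # explicit tool allowlist
REQUIRES_APPROVAL = {"draft_email"}               # human-in-the-loop gate

def lookup_order(order_id: str) -> dict:
    # Placeholder for a real system-of-record lookup.
    return {"order_id": order_id, "status": "shipped"}

def draft_email(to: str, body: str) -> dict:
    # Placeholder for a real outbound-email integration.
    return {"queued": True, "to": to}

TOOLS = {"lookup_order": lookup_order, "draft_email": draft_email}

def call_llm(prompt: str) -> str:
    # Stub: hard-coded plan standing in for a model's tool-call response.
    return json.dumps({"tool": "lookup_order", "args": {"order_id": "A-123"}})

def run_step(prompt: str, approved: bool = False) -> dict:
    plan = json.loads(call_llm(prompt))
    tool, args = plan["tool"], plan["args"]
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not on the allowlist")
    if tool in REQUIRES_APPROVAL and not approved:
        return {"status": "pending_human_approval", "plan": plan}
    return {"status": "done", "result": TOOLS[tool](**args)}

print(run_step("Where is order A-123?"))
```

What this loop lacks is exactly what Anderson says separates such setups from a true LAM: persistent memory and OS-level contextual awareness that would let the system learn from its own mistakes.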
Agentic primitives grow up
One of the primary use cases for early agentic AI tools has been plastering over the intrinsic limitations of LLMs in planning, context management, memory management and orchestration. To date, this has largely been done with “glue code”: manual, brittle scripts that wire different components together. As these capabilities mature, the approach is shifting from custom-built workarounds to standardized infrastructure.
From glue code to standardized primitives
Sreenivas Vemulapalli, senior vice president and chief architect of enterprise AI at digital consultancy Bridgenext, predicted that in the coming year many enterprises will view this manual orchestration as a waste of resources. Vendors will create new “agentic primitives” — agentic building blocks — as commodity offerings in AI platforms and enterprise software suites, he explained.
The strategic value for the enterprise lies not in “building the agent’s ‘brain,’” or the plumbing that connects it, Vemulapalli said, but in defining and standardizing the tools those agents use.
“The true competitive advantage will belong to the enterprises that have meticulously documented, secured and exposed their proprietary business logic and systems as high-quality, agent-callable APIs,” Vemulapalli said.
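Here is a minimal sketch of what exposing business logic as an agent-callable API can look like, assuming the JSON-schema-style tool declarations most agent frameworks accept; the function name, schema and return values are hypothetical.

```python
# Sketch of internal business logic wrapped as an agent-callable tool.
# The declaration below mirrors common function/tool-calling conventions;
# it is illustrative, not any specific vendor's format.
from datetime import date

def get_customer_credit_limit(customer_id: str, as_of: str | None = None) -> dict:
    """Proprietary business logic behind a stable, documented interface."""
    as_of = as_of or date.today().isoformat()
    # Placeholder for the real lookup against internal systems.
    return {"customer_id": customer_id, "credit_limit_usd": 50_000, "as_of": as_of}

# Machine-readable spec an agent platform can load and validate against.
TOOL_SPEC = {
    "name": "get_customer_credit_limit",
    "description": "Return the approved credit limit for a customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string", "description": "Internal customer ID"},
            "as_of": {"type": "string", "format": "date", "description": "Optional ISO date"},
        },
        "required": ["customer_id"],
    },
}

print(get_customer_credit_limit("A-123"))
```

The durable asset in this picture is the documented contract and the business logic behind it, not whichever framework ends up calling it.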
Why orchestration is becoming a temporary advantage
In the meantime, early movers are building temporary internal platforms to fill the current gaps, said Derek Ashmore, agentic AI enablement principal at Asperitas, an AI and data consultancy. He said 10% to 20% of the leading firms he sees are standing up internal “agent platforms” to handle tasks like planning, tool selection, long-running workflows and human-in-the-loop controls, because off-the-shelf copilots don’t yet provide the reliability, auditability and policy control they need today.
Ashmore said he is seeing progress as firms move from ad hoc glue code and “brittle tool wiring” toward reusable patterns. These more mature shops are converging on a small set of primitives: standardized tool interfaces, shared memory/state for agents, policy and guardrail layers, and evaluation harnesses that measure agent behavior in realistic workflows. At the same time, vendors are rapidly productizing those same primitives, making it clear that much of today’s homegrown plumbing will be commoditized.
“The smart move is to treat low-level agent orchestration as a temporary advantage, not a permanent asset,” Ashmore said.
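As one illustration of the primitives Ashmore lists, here is a minimal sketch of a shared memory/state layer for agents, written in plain Python with hypothetical names. It is precisely the kind of plumbing vendors are likely to productize, which is why treating it as a temporary asset makes sense.

```python
# Minimal sketch of a shared memory/state primitive for agents: a keyed,
# workflow-scoped store that multiple agents can read and write without
# ad hoc glue code. In-memory here; a real platform would back this with
# a database plus access policies and audit logging.
from collections import defaultdict
from typing import Any

class AgentStateStore:
    def __init__(self) -> None:
        self._store: dict[str, dict[str, Any]] = defaultdict(dict)

    def write(self, workflow_id: str, key: str, value: Any) -> None:
        self._store[workflow_id][key] = value

    def read(self, workflow_id: str, key: str, default: Any = None) -> Any:
        return self._store[workflow_id].get(key, default)

    def snapshot(self, workflow_id: str) -> dict[str, Any]:
        # Handy for audits and for replaying a workflow in an eval harness.
        return dict(self._store[workflow_id])

store = AgentStateStore()
store.write("wf-42", "customer_id", "A-123")
store.write("wf-42", "last_tool", "lookup_order")
print(store.snapshot("wf-42"))
```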
The advice: Don’t overinvest in bespoke planners and routers that your cloud or platform provider will give you in a year. Instead, put your money where the value will persist, regardless of which agent framework wins. Good investments over the next year include the following:
- High-quality domain knowledge and ontologies.
- Golden data sets and evaluation suites (a minimal sketch follows this list).
- Security and governance policies.
- Integration into your existing SDLC/SOC workflows.
- Metrics you’ll use to decide whether an agentic system is safe and cost-effective enough to trust.
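To make the golden data sets and evaluation suites item concrete, here is a minimal sketch of an evaluation harness: a handful of golden examples, a stubbed agent under test, and the pass-rate and cost metrics used to judge whether the system is safe and cost-effective enough to trust. The case format, cost figures and thresholds are illustrative assumptions.

```python
# Sketch of a golden data set plus evaluation suite for an agentic workflow.
# The agent under test is stubbed; the durable assets are the golden cases,
# the pass/fail criteria and the cost budget, which outlive any framework.
GOLDEN_SET = [
    {"input": "Where is order A-123?", "expected_tool": "lookup_order"},
    {"input": "Email the customer an apology", "expected_tool": "draft_email"},
]

def agent_under_test(prompt: str) -> dict:
    # Stub standing in for the real agent; returns the chosen tool and an
    # estimated per-task cost.
    tool = "lookup_order" if "order" in prompt.lower() else "draft_email"
    return {"tool": tool, "cost_usd": 0.002}

def evaluate(max_cost_per_task: float = 0.01) -> dict:
    passed, total_cost = 0, 0.0
    for case in GOLDEN_SET:
        result = agent_under_test(case["input"])
        total_cost += result["cost_usd"]
        if result["tool"] == case["expected_tool"]:
            passed += 1
    avg_cost = total_cost / len(GOLDEN_SET)
    return {
        "pass_rate": passed / len(GOLDEN_SET),
        "avg_cost_usd": avg_cost,
        "within_budget": avg_cost <= max_cost_per_task,
    }

print(evaluate())
```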
Organizations should also expect the “agent engine” itself to become a replaceable component.
“Use it now to learn what works, but architect your stack so you can swap in vendor innovations as they mature — while your real differentiation lives in the domain models, policies and evaluation data that no platform vendor can ship for you,” Ashmore said.
Physical AI shifts to cloud-based economics
Nvidia CEO Jensen Huang has been promising that physical AI will reshape every aspect of the enterprise, including smart factories, streamlined logistics and product improvement feedback loops. Over the last year, Nvidia has made substantial progress in evolving its Omniverse platform to harmonize 3D data sets across different tools and workflows.
Nvidia’s Apollo frameworks are making it easier to train physical AI systems with faster AI models. Separately, the IEEE has ratified the first spatial web standards, which could further bolster this vision.
Tim Ensor, executive vice president of intelligence services at Cambridge Consultants, said physical AI has matured significantly over the last year, driving a new era of AI development focused on systems that genuinely understand the physical world.
“I imagine that we will see an evolution of how these simulators can deliver what we need for training physical AI systems to allow them to become more efficient and more effective, particularly in the way they interact with the world,” Ensor said.
Avancini predicted that in 2026, the combination of physical AI blueprints — such as Nvidia’s ecosystem — and open interoperability standards (like IEEE P2874) will start to reshape industrial R&D. These ecosystems lower the barrier to building simulations, robotics workflows and digital twins.
What once required heavy capital expenditure and specialized engineering teams will shift to cloud-based, pay-as-you-simulate operating expense models, opening up advanced robotics and simulation capabilities to smaller competitors that previously couldn’t access them.
This shift threatens legacy walled-garden vendors that historically relied on proprietary hardware and high-priced integration services. Avancini said he believes the competitive frontier will move toward managing cloud simulation spend with simulation FinOps practices and adopting open standards like OpenUSD to avoid vendor lock-in.
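For teams taking that open-standards route, the following is a minimal OpenUSD sketch of a vendor-neutral digital twin asset. It assumes the OpenUSD Python bindings (the pxr module, commonly installed via the usd-core package) are available, and the factory contents and attribute names are purely illustrative.

```python
# Minimal OpenUSD sketch: author a small, portable digital twin layer that
# any OpenUSD-aware tool can open, instead of locking the scene into a
# proprietary format. Requires the pxr Python bindings.
from pxr import Gf, Sdf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_line.usda")       # plain-text USD layer
UsdGeom.Xform.Define(stage, "/FactoryLine")            # root transform

# Represent a machine as a placeholder prim; a real twin would reference
# detailed geometry and sensor schemas from other layers.
machine = UsdGeom.Cube.Define(stage, "/FactoryLine/Press01")
machine.AddTranslateOp().Set(Gf.Vec3d(2.0, 0.0, 0.0))

# Attach custom metadata that simulation or analytics tools can read later.
prim = machine.GetPrim()
prim.CreateAttribute("throughputPerHour", Sdf.ValueTypeNames.Int).Set(120)

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```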
Data quality issues stall agentic AI, force new investment
Over the next year, enterprises will increasingly discover new ways that data quality issues are hindering AI initiatives. LLMs enable the integration of unstructured data into new processes and workflows. But organizations face stumbling blocks, as the vast majority of this data was collected across many tools and apps without data quality considerations in mind, said Krishna Subramanian, co-founder of Komprise, an unstructured data management vendor.
“A large reason for the poor quality of unstructured data is data noise from too many copies, irrelevant, outdated versions and conflicting versions,” Subramanian said.
Anderson agreed: While organizations are eager to adopt AI, many “have not fully accounted for the cost and timeline required to improve data quality.” Even when significant cleanup work is done, he said, it often reflects a single moment in time. Without examining upstream inputs, new “leaks” can continue to cause data quality issues.
AI can help, but it is not a magic wand. It can assist with processing documentation, identifying sources of bad data and standardizing formats. A key priority is building metadata and a business glossary with relevant KPIs to establish a semantic layer, which gives LLMs well-defined business concepts to reason over rather than the raw structured data itself. Because LLMs are increasingly used to generate SQL against structured data rather than reason over it directly, that semantic layer matters both now and in the agentic AI future.
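Here is a minimal sketch of what such a semantic layer can look like in code: a small business glossary mapping KPI names to governed SQL definitions, which a hypothetical text-to-SQL prompt builder then hands to an LLM. The metric definitions, table names and prompt shape are illustrative assumptions.

```python
# Sketch of a semantic layer: a business glossary that maps KPIs to governed
# SQL definitions, so an LLM generates SQL against agreed-upon metrics rather
# than guessing at raw column semantics. All names are illustrative.
SEMANTIC_LAYER = {
    "net_revenue": {
        "description": "Gross revenue minus refunds, in USD.",
        "sql": "SUM(order_total) - SUM(refund_total)",
        "source_table": "analytics.orders_daily",
    },
    "active_customers": {
        "description": "Distinct customers with at least one order in the period.",
        "sql": "COUNT(DISTINCT customer_id)",
        "source_table": "analytics.orders_daily",
    },
}

def build_text_to_sql_prompt(question: str) -> str:
    # The LLM sees governed definitions, not guesses about column meanings.
    glossary = "\n".join(
        f"- {name}: {meta['description']} Defined as {meta['sql']} "
        f"on {meta['source_table']}."
        for name, meta in SEMANTIC_LAYER.items()
    )
    return (
        "Use only the metrics defined below when writing SQL.\n"
        f"{glossary}\n\nQuestion: {question}\nSQL:"
    )

print(build_text_to_sql_prompt("What was net revenue by region last month?"))
```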
Indeed, the importance of data quality cannot be overstated, especially if the goal is to enable agents to make recommendations or decisions, according to Anderson. “As we move toward ambient agents that are autonomous, this will introduce significant risk due to data quality leading to poor decisions,” he said.
Data privacy and security guardrails reshape AI architectures
AI vendors have been demonstrating the benefits of training on extremely large data sets. But some of the most useful data for enterprise workflows raises privacy and security concerns. Over the next year, this is likely to drive investment in privacy-preserving machine learning techniques such as secure enclaves, federated learning, homomorphic encryption and multiparty computation.
“We definitely do see some challenges in being able to train AI in enterprise and government-sector settings, as well on the basis of the fact that the data we need to train the models is in some way sensitive,” Ensor said.
Over the next year, federated learning will mature, enabling models to be trained locally at the edge rather than centralizing the underlying data. Innovations in synthetic data will also make it easier to train models on analogous copies without exposing the sensitive originals, and enterprises will explore new approval and authorization processes for accessing that data.
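To illustrate the federated learning piece, here is a toy federated averaging sketch in plain NumPy: each site fits a model on its own data and shares only weights, never raw records. The data, model and hyperparameters are illustrative, and production systems would add secure aggregation and differential privacy on top.

```python
# Toy federated averaging: three sites train locally and share only model
# weights; a coordinator averages them. NumPy linear regression keeps the
# example self-contained.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Three "sites" with private data drawn from the same underlying model.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(5):                               # federated rounds
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_weights, axis=0)    # federated averaging step

print("estimated weights:", np.round(global_w, 2))   # close to [2.0, -1.0]
```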
But all of these privacy-preserving approaches require laborious processes to strike the right balance between building better AI and ensuring compliance and security.
“There isn’t, unfortunately, a silver bullet for how you solve this problem because managing consumer and individual data appropriately is absolutely critical,” Ensor said.

