But here’s the limitation early adopters are discovering: both protocols are stateless by design. Once an agent sends or receives a message, the interaction is gone. There is no durable memory, no record, no lineage. That’s fine for experiments. But try to run a multi-agent workflow in production—say, a financial planning agent coordinating with a risk assessment agent and a compliance agent—and you’ll quickly hit walls trying to troubleshoot, test new agent versions, or audit decision-making.
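To make the gap concrete, here is a minimal sketch of the kind of durable interaction record that neither protocol provides out of the box. Everything in it, from the log path to the field names, is illustrative rather than part of any spec:

```python
import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("agent_interactions.jsonl")  # illustrative durable store

def record_interaction(sender: str, receiver: str, payload: dict,
                       trace_id: str | None = None) -> str:
    """Append one agent-to-agent message to an append-only log.

    A shared trace_id links every message in a workflow so the full
    decision chain can be replayed, tested against, or audited later.
    """
    trace_id = trace_id or str(uuid.uuid4())
    entry = {
        "trace_id": trace_id,
        "timestamp": time.time(),
        "sender": sender,
        "receiver": receiver,
        "payload": payload,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return trace_id

# The planning agent's request and the risk agent's reply share a
# trace_id, so an auditor can reconstruct the exchange after the fact.
trace = record_interaction("financial_planner", "risk_assessor",
                           {"action": "propose_allocation", "equity_pct": 70})
record_interaction("risk_assessor", "financial_planner",
                   {"verdict": "flagged", "reason": "exceeds risk budget"},
                   trace_id=trace)
```

Without some record like this, there is nothing to replay when the compliance agent's verdict needs explaining.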
As a community, we have solved this problem before: microservices could not be widely adopted until we addressed the same lack of transaction records. Stateless services scale easily and are simple to manage, but they offload the burden of memory, history, and coordination onto external systems, creating bottlenecks and blind spots in complex, distributed environments. Today’s stateless agentic AI systems face the same challenge: they cannot retain context across interactions, which makes them brittle and hard to coordinate at enterprise scale.
Data infrastructure must become a first-class participant
Making agentic architectures work in production is not merely a protocol problem; it’s a data architecture problem. Agents are not just consumers of static prompts. They’re autonomous decision-makers that react to their environment. This means your data infrastructure needs to support real-time decisions informed by the business environment, not just batch processing and static queries.
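As a rough sketch of what "real-time" means here, the following assumes an event stream of live business events (the Kafka topic name, broker address, and decision logic are all placeholders, using the confluent-kafka client) that an agent reacts to as events arrive, rather than waiting for a nightly batch job:

```python
import json
from confluent_kafka import Consumer

# Hypothetical topic carrying live business events (orders, returns, alerts).
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "inventory-agent",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["business.events"])

def decide(event: dict) -> None:
    """Placeholder for the agent's decision logic: it acts on each
    event as it arrives instead of querying a stale batch snapshot."""
    if event.get("type") == "demand_spike":
        print(f"Reordering SKU {event['sku']} in response to live demand")

try:
    while True:
        msg = consumer.poll(1.0)  # block up to 1s for the next event
        if msg is None or msg.error():
            continue
        decide(json.loads(msg.value()))
finally:
    consumer.close()
```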
Consider the example of an e-commerce company deploying agents for inventory management, customer service, and fraud detection. These agents need to share context. When the inventory agent sees unusual demand patterns, the fraud detection agent should know immediately. When customer service resolves a complaint, both inventory and fraud agents should have that context for future decisions. Without machinery to share this state, each interaction starts from zero.
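One way to picture that shared context is a publish/subscribe layer that every agent reads from and writes to. The sketch below is an in-process toy, with every name illustrative; in production the bus would be a durable, replayable stream rather than a Python dict:

```python
from collections import defaultdict
from typing import Callable

# Minimal in-process event bus standing in for shared data infrastructure.
_subscribers: defaultdict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    for handler in _subscribers[topic]:
        handler(event)

# The fraud agent learns about demand anomalies the moment the
# inventory agent observes them, instead of starting from zero.
def fraud_agent_handler(event: dict) -> None:
    print(f"Fraud agent raising scrutiny for SKU {event['sku']}")

subscribe("inventory.anomalies", fraud_agent_handler)

# Inventory agent spots an unusual demand pattern and shares it.
publish("inventory.anomalies", {"sku": "A-1042", "demand_multiple": 9.5})
```

The design point is that context flows through shared infrastructure rather than point-to-point calls: an agent added later can subscribe to the same stream of events without any of the existing agents changing.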