SAN DIEGO — In the “year of the AI-native workplace,” where AI agents proliferate across the enterprise, humans-in-the-loop become an enterprise’s most important asset. At Gartner’s Digital Workplace Summit in San Diego Monday, analysts Max Goss and Erin Pierre explained that while employees will hand off tasks to AI agents, humans must remain in control.
“What we in the digital workplace need is a compass that puts the human being as a North Star — the compass that reminds us that successful AI deployments are the ones that focus on enabling their employees,” said Goss, a senior director analyst at Gartner.
To reach the end goal of becoming an AI-native organization, enterprises need to focus on three key areas — trust, governance and empowerment.
Road to AI-native starts with trust
When it comes to trust, organizations have a long way to go in establishing buy-in from employees for AI deployments.
Unsurprisingly, only a few hands went up in the audience when Goss asked: “How many of you think your organization is effective at communicating its AI strategy?”
Without trust in an organization’s AI direction, employees are unlikely to want to invest their time in learning new AI technologies. “One of the biggest things that erodes employee trust is the fear of AI replacing their jobs,” Goss said. “At Gartner, we strongly believe that in the end, AI will transform more jobs than it eliminates, allowing new roles to be created, starting as early as this year.”
Between 2028 and 2029, AI will begin to create more jobs than it eliminates, according to Gartner research.
Still, while 70% of organizations Gartner surveyed say they have a central AI strategy, it’s another thing to clearly communicate that strategy, Goss said. HR and communications teams need to communicate clearly with employees what the role of AI is within the organization, how it will benefit employees and how they fit into the overall strategy, he said.
But it’s not only employee trust that’s important — organizations need to build trust with stakeholders and have trust in their vendors. Organizations can build trust with stakeholders by developing an AI steering or governance group and establishing clear communication channels between different stakeholders.
On the vendor side, that trust is in short supply. “When we look at our most recent survey, only 34% of IT leaders have high trust that their vendors will be able to deliver on their AI roadmap promises. Furthermore, only 21% have high trust that they will provide fair and predictable pricing,” said Pierre, a senior principal analyst at Gartner.
Establishing that trust means organizations need to do their homework and identify vendors with real-world examples of AI success stories, and it will likely also require a multi-vendor approach.
AI governance hinges on policy, guardrails
AI governance is another critical step toward becoming an AI-native organization. “According to our latest data, 70% of organizations say security, governance and compliance are the No. 1 blocker for large-scale AI deployments, ahead of common challenges like change management and proving ROI,” Goss said.
The increase in AI agents, the emergence of AI slop and the use of shadow AI are among the challenges organizations face in governing AI usage. New AI challenges are emerging faster than IT teams can address old ones, which often leads those teams to block or restrict AI use as a remedy. In a Gartner survey of 360 IT leaders, more than half said blocking or restricting AI was their top risk mitigation strategy.
Gartner analyst Max Goss addresses attendees at the Digital Workplace Summit in San Diego Monday. (Kelsey Ziser/InformationWeek)
But good governance enables — rather than restricts — AI deployments, Goss said. Good governance requires policies, guardrails and education. Organizations can achieve good governance by creating an inventory of their AI tools, assessing the risk associated with those technologies and putting appropriate controls in place. Enterprises should also focus on educating employees on how to safely use AI and how to handle sensitive data when using AI.
“We want to turn governance from the No. 1 reason not to do AI to becoming its key enabler,” Goss said.
AI empowerment is more carrot than stick
After establishing trust and governance for AI, organizations can focus on the empowerment piece as they work toward becoming an AI-native company. Organizations need to develop a culture that encourages employees to have a stake in their own AI education and incentivizes learning rather than penalizing them. Leadership modeling, creating an environment where it’s safe to fail and rewarding AI innovation are all critical to developing a culture that empowers employees to use AI.
“We need to create a culture where safe experimentation and failure is accepted — even encouraged — if it builds literacy and awareness of what AI can do and, importantly, what it can’t,” Goss said.