As Microsoft expands Copilot, CIOs face a new AI security gap


Earlier this week, Microsoft expanded its Copilot capabilities with new features designed to provide a persistent AI co-worker across enterprise workflows. These features combine multiple AI models and operate continuously inside the tools that employees already use. At the same time, Google has continued rolling out AI functionality inside its Chrome product that can interpret and act across multiple tabs — effectively turning the browser into an execution layer rather than a passive interface.

Individually, these announcements look like incremental product updates. Taken together, they signal a more meaningful shift. Today’s AI is not confined to discrete tools that users open and close. It is becoming embedded in the environments where work happens — observing, interpreting and increasingly acting on information in real time.

For CIOs, this shift introduces a new kind of security problem — not because AI creates entirely new risks, but because it now operates in a place that most enterprise security programs have not been designed to govern: the interaction layer.

A model built around data movement

Modern enterprise security is built on the assumption that risk can be controlled by managing access and tracking data movement. Identity systems determine who can access what. Data loss prevention (DLP) tools monitor where information goes. Endpoint and network controls enforce boundaries around both.
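
A movement-based control can be sketched in a few lines. The two patterns below are purely illustrative (no DLP product works from two regexes), but they capture the assumption the model rests on: sensitive content looks like sensitive content.

import re

# Illustrative patterns only; production DLP rule sets are far richer.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "classification_tag": re.compile(r"CONFIDENTIAL|INTERNAL ONLY", re.I),
}

def dlp_scan(outbound_text):
    # Return the names of any patterns found in outbound content.
    return [name for name, pattern in DLP_PATTERNS.items()
            if pattern.search(outbound_text)]

# A literal copy of a record trips the rule, exactly as designed:
print(dlp_scan("Customer SSN: 123-45-6789"))   # -> ['ssn']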

That model still holds, but it is no longer complete.

The most immediate concern is also the most familiar. As explained by Dan Lohrmann, field CISO for public sector at Presidio, users are already feeding sensitive information into AI systems as part of everyday work: “Users paste sensitive content — source code, customer records, incident details, internal strategy documents — into chat prompts because it feels fast and informal.” 

In many cases, those interactions happen outside approved workflows, such as when users access personal accounts on company devices, creating what Lohrmann described as a persistent shadow AI problem.

But focusing on what users input into AI systems captures only part of the risk. The more consequential change is what happens next.

Shape-shifting data

AI does not simply move data: It reshapes it. Edward Liebig, CEO of OT SOC Options — a consortium of operational technology cybersecurity professionals — explained that this distinction is often overlooked. Enterprises have spent years building controls around data movement, but AI introduces risk through the transformation of that data; it summarizes, recombines and reinterprets information in ways that are difficult to track.

“What is changing with AI embedded into browsers, email and workflow tools is not just how data moves, but how context is constructed, and how decisions are influenced,” Liebig said.

That shift creates scenarios that fall outside traditional detection models, he warned. A sensitive report summarized into bullet points may no longer match classification rules. Multiple low-risk data sources, when combined, may produce a high-risk conclusion. Outputs may reflect internal strategy or operational logic, even without containing any original data.
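
Reusing the illustrative dlp_scan sketch from earlier makes the gap concrete: once an assistant paraphrases the record, no pattern fires, even though the sensitive fact survives the transformation.

# The same fact, paraphrased by an assistant into a summary:
summary = ("Identity was verified against the customer's "
           "taxpayer number ending in 6789.")
print(dlp_scan(summary))   # -> [] : no rule fires, yet the exposure remains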

“AI doesn’t need to exfiltrate data to create exposure,” Liebig said. “It can infer it.”

Cameron Brown, head of cyber threat and risk analytics at insurance company Ariel Re, is also concerned about this new security gap. Traditional controls are built to detect clear signals: files leaving a system, data being copied or transferred. But AI-generated exposure is subtler.

“AI doesn’t always leak data in obvious ways,” Brown said. “It summarizes, reshapes, hints, infers. Suddenly that ‘leak’ doesn’t look like a leak at all.”

Authorized access, but unintended outcomes

If data transformation were the only issue, existing DLP controls could evolve to address it. But AI introduces a second, more complex problem: risk emerging from activity that is fully authorized.

“At the interaction layer, the primary risk is not unauthorized access,” Liebig said. “It is authorized use producing unintended outcomes.”

Identity and access management (IAM) systems can determine whether a user is allowed to access a data set. They cannot determine how an AI system will interpret that data once accessed, or how it will be combined with other inputs.

“IAM solves for access,” Liebig said. “It does not solve for outcome.”
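
A minimal sketch of the gap Liebig describes, using an illustrative role map rather than any IAM product's actual API:

# Role grants and function names here are illustrative only.
USER_GRANTS = {"analyst": {"crm_read", "tickets_read", "wiki_read"}}

def can_access(role, resource):
    # IAM answers one question: may this role read this resource?
    return resource in USER_GRANTS.get(role, set())

# Every individual read below is authorized...
assert all(can_access("analyst", r)
           for r in ("crm_read", "tickets_read", "wiki_read"))

# ...yet nothing in the check evaluates what an AI system infers by
# combining the three sources: the outcome that IAM cannot see.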

That gap becomes even more significant as AI systems are integrated into enterprise environments. Lohrmann pointed out that linking AI tools to systems such as CRM platforms, ticketing tools or code repositories effectively creates a new operator with the user’s permissions — one capable of querying and synthesizing information across multiple systems.

“The AI is a force multiplier for access,” Lohrmann said.

The implication is not just broader access, but also more powerful and less predictable use of that access. In other words, a security nightmare.
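
The sketch below suggests what that new operator looks like in practice. The connectors and the synthesis step are hypothetical stand-ins, not Copilot's actual integration API:

# Stand-in connectors; the assistant carries the user's delegated
# token into every system it touches.
def make_connector(system):
    def search(user_token, query):
        # Placeholder for a real delegated-permission API call.
        return f"[{system} results for '{query}' as {user_token}]"
    return search

crm, tickets, repos = (make_connector(s)
                       for s in ("CRM", "ticketing", "code-repo"))

def assistant_answer(user_token, question):
    # One prompt exercises the user's entitlements across three
    # systems at once; the cross-system join is the new capability.
    context = [c(user_token, question) for c in (crm, tickets, repos)]
    return " | ".join(context)   # placeholder for LLM synthesis

print(assistant_answer("alice@corp.example", "Q3 churn drivers"))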

The browser as the control gap

Where these interactions take place is just as relevant as how they happen. AI is increasingly embedded in the browser and productivity layer, the same environment where users authenticate into systems, access sensitive data and interact with external content. That makes the browser a central point of exposure, yet one that has historically been overlooked from a security perspective.

“The browser didn’t become the weakest link,” Liebig said. “It simply exposed a layer we never governed.” 

Enterprises have spent years instrumenting networks, endpoints and identity systems. Far fewer have invested in governing the interaction layer where users and AI systems now converge. Brown is blunt about the implications. 

“It’s where most AI interactions happen, yet it’s treated like the least interesting part of the stack,” he said. “That’s backward. It should be ground zero.”

Lohrmann agreed, noting that embedded assistants and extensions often operate with weaker controls and less visibility than traditional enterprise applications.

The problem is compounded when users operate outside of enterprise-managed environments. Employees introduce security risks by using personal accounts on corporate devices, where data shared with AI tools may be stored outside corporate systems and beyond the reach of audit and response processes, Lohrmann said.

A visibility challenge then emerges: “Model histories pile up, business intel gets tangled in them and good luck to any forensic team trying to unwind that overcooked spaghetti,” Brown said.

Extending control beyond access

None of these developments make existing security controls irrelevant. Identity management, endpoint security and DLP remain essential. But they are not sufficient to address the risks introduced by AI.

Traditional monitoring approaches are limited by what they are designed to detect, Brown explained. “Traditional DLP still does its job catching the obvious stuff,” he said. But AI-driven exposure often falls outside those patterns, requiring a shift toward monitoring behavior and intent, rather than just data movement.

Enterprises need a new layer of control, one that extends beyond access into how AI systems use and transform data, Lohrmann said. “IAM generally answers ‘who are you?’ and ‘what can you access?'” he said. “AI adds ‘how is data used and transformed?'”

That shift implies new requirements: visibility into prompts and outputs, tighter control over how AI tools connect to enterprise systems, and more granular oversight of how AI-generated outputs are used in decision-making.
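
The first of those requirements, visibility into prompts and outputs, could take the shape of a logging gateway sitting between users and the model. The sketch below is an assumption about form, not any vendor's implementation:

import json
from datetime import datetime, timezone

AUDIT_LOG = []   # in production: an append-only, access-controlled store

def governed_completion(user, prompt, model_call):
    # Wrap every model call so both the prompt and the output
    # become auditable records.
    output = model_call(prompt)
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,    # visibility into what went in
        "output": output,    # and into what came back
    })
    return output

# Works with any callable model backend:
governed_completion("alice@corp.example",
                    "Summarize the incident report",
                    lambda p: f"(model output for: {p})")
print(json.dumps(AUDIT_LOG[-1], indent=2))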

Taken together, these changes point to a broader evolution in enterprise security, one that does not replace traditional controls but extends them into a layer that has, until now, been largely ungoverned. Monitoring where data goes is no longer enough if its meaning can change without visibility. Controlling access is insufficient if the outcomes of that access cannot be validated.

“We are moving from a world of data protection to a world of decision assurance,” Liebig said.


