
As enterprise leaders look to scale AI adoption, many are beginning to explore “agentic AI”—systems of interconnected AI agents that collaborate to complete complex tasks. But what are the real implications of handing over coordination—and in some cases, decision-making—to autonomous software?
In a recent feature for CIO, ModelOp CTO Jim Olsen shares a clear framework to help CIOs and enterprise tech leaders evaluate both the risks and rewards of agentic AI.
“Each member of the team brings both abilities, or tools, and expertise, or training, to an overall task. Agentic AI is the whole team working together to solve the problem.”
Below, we break down the three most urgent takeaways from the article.
The hidden risks of agent-to-agent data sharing
As enterprise teams experiment with connecting AI agents across systems, the risk of unintended data exposure multiplies. When agents are linked, their individual access controls can be quietly bypassed, enabling sensitive information to move in ways no single agent was ever designed to allow.
Imagine an agent with access to a customer database containing Social Security numbers. On its own, it’s safely restricted. But when it passes data to another agent with access to Slack, suddenly that information could be posted to a public channel—without any malicious intent, just flawed coordination.
"If you truly want to do agent to agent, what data can basically leak through that pathway?"
Cross-agent leakage is exactly the kind of governance blind spot enterprises must address as they explore agentic architectures.
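To make that pathway concrete, here is a minimal, framework-agnostic sketch of the kind of guardrail a governance layer could enforce at every agent-to-agent handoff. The sensitivity labels, agent names, and the hand_off helper are all hypothetical; the point is simply that the check happens at the boundary between agents rather than inside any single one of them.

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels attached to any payload an agent produces.
PUBLIC, INTERNAL, RESTRICTED = "public", "internal", "restricted"
_RANK = {PUBLIC: 0, INTERNAL: 1, RESTRICTED: 2}

@dataclass
class Payload:
    content: str
    sensitivity: str  # one of the labels above

@dataclass
class Agent:
    name: str
    clearance: str  # highest sensitivity this agent's destinations may see

def hand_off(payload: Payload, source: Agent, target: Agent) -> Payload:
    """Gate every agent-to-agent transfer instead of trusting each agent's own ACLs."""
    if _RANK[payload.sensitivity] > _RANK[target.clearance]:
        raise PermissionError(
            f"{source.name} -> {target.name}: blocked {payload.sensitivity} payload "
            f"(target clearance is {target.clearance})"
        )
    return payload

# The database agent may read restricted records, but the Slack-posting agent
# only has public clearance, so the transfer is refused before anything leaks.
db_agent = Agent("customer_db_lookup", clearance=RESTRICTED)
slack_agent = Agent("slack_poster", clearance=PUBLIC)
record = Payload(content="SSN: 123-45-6789", sensitivity=RESTRICTED)

try:
    hand_off(record, db_agent, slack_agent)
except PermissionError as err:
    print(err)
```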
Why “final review” is essential before deployment
Agentic AI isn’t just about automation—it’s about autonomy. These systems are designed to make their own decisions about how to complete a task, choosing their methods and tools without human guidance. That flexibility can drive innovation and efficiency—but it also opens the door to unintended outcomes.
Without oversight, even well-intentioned agents can make choices that fall outside acceptable business or compliance boundaries. That’s why human validation isn’t just a safeguard—it’s a necessity.
“Final review is absolutely recommended, as things can go off the rails.”
Before granting any AI system the freedom to act independently, CIOs must weigh the benefits of autonomy against the operational and reputational risks it can introduce.
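One lightweight way to picture that final review step is an approval gate sitting between an agent’s proposed action and its execution. The sketch below is illustrative only; ProposedAction, final_review, and the console approver are hypothetical stand-ins for whatever review workflow an enterprise already runs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    agent: str
    description: str
    execute: Callable[[], None]  # only runs if the review passes

def final_review(action: ProposedAction, approver: Callable[[ProposedAction], bool]) -> None:
    """Execute the agent's proposed action only after an explicit approval decision."""
    if approver(action):
        action.execute()
    else:
        print(f"Rejected: {action.agent} may not '{action.description}'")

def human_approver(action: ProposedAction) -> bool:
    # Stand-in for a real review queue, ticketing system, or sign-off workflow.
    answer = input(f"Approve {action.agent}: {action.description}? [y/N] ")
    return answer.strip().lower() == "y"

refund = ProposedAction(
    agent="billing_agent",
    description="issue a $4,800 refund to account 1172",
    execute=lambda: print("Refund issued."),
)
final_review(refund, human_approver)
```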
How we’ll move from LLM-powered agents to specialized SLMs trained for expert tasks
While most agents today are powered by large language models (LLMs), Olsen sees a shift coming. He predicts the rise of small language models (SLMs) built for specific use cases, which will allow enterprises to construct smarter, safer, and more reliable agentic systems.
“You’ll start to build up specialized team members, or agents, that will then be very good at performing those specific tasks. Just like you would get a software programmer and an agile expert, and a product manager together.”
The future of agentic AI, in other words, is domain-specific and task-aware—less like a chatbot, and more like a high-functioning team.
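As a rough illustration of that team-of-specialists idea, an orchestrator might simply route each subtask to the agent trained for it. The task types and agent names below are invented for the example and are not tied to any particular product or framework.

```python
# Hypothetical registry mapping task types to specialized small-model agents.
SPECIALISTS = {
    "write_code": "code_slm",     # fine-tuned on the company's codebase
    "plan_sprint": "agile_slm",   # trained on delivery and process data
    "draft_spec": "product_slm",  # trained on product requirements
}

def route_task(task_type: str, payload: str) -> str:
    """Send each subtask to the specialist agent registered for it."""
    agent = SPECIALISTS.get(task_type)
    if agent is None:
        raise ValueError(f"No specialist registered for task type '{task_type}'")
    # A real orchestrator would invoke the specialist model here; we just report the routing.
    return f"{agent} handles: {payload}"

print(route_task("write_code", "add retry logic to the payment client"))
print(route_task("plan_sprint", "break the Q3 roadmap into two-week increments"))
```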
As companies accelerate their adoption of AI agents, Jim Olsen’s insights offer a timely reminder: autonomy and coordination are powerful, but without governance, they can quickly spiral into risk.