
Most enterprises know how much AI activity is happening inside their walls. Far fewer can tell you what it’s actually producing. ModelOp CTO Jim Olsen joined a CNBC report on AI in the workplace to put a name on that gap — and a number behind it.
At a glance
- More than two-thirds of enterprises still rely on estimates — time saved, projected cost reductions — rather than measured financial results to assess AI ROI.
- ModelOp calls the gap between AI activity and measurable return the “AI value illusion.”
- The data: ModelOp’s 2026 survey of 100 senior enterprise AI leaders.
- The takeaway: token usage is not the same as outcomes.
Tracking is not measuring
In the CNBC segment, Olsen surfaced a finding from ModelOp’s 2026 Benchmark Report that should land hard with every CFO and Chief AI Officer: most large enterprises know how much AI activity is happening, but not what, if anything, that activity is delivering.
“Almost every Fortune 500 is tracking overall AI usage. But very few are tracking what the board actually cares about: whether that spending is delivering return on investment.”
— Jim Olsen, CTO, ModelOp
That gap between visible activity and measured outcome is what ModelOp calls the AI value illusion: the assumption that broad AI adoption is the same thing as AI value capture.
Why estimates aren’t enough
Time saved. Projected cost reductions. Productivity uplift. These are the metrics most enterprises lean on today, and according to ModelOp’s 2026 survey of 100 senior enterprise AI leaders, more than two-thirds of organizations are still using them as a proxy for ROI rather than reporting measured financial results.
The reason isn’t that leaders don’t want hard numbers. It’s that they don’t have an operating layer that connects every model, prompt, and agent to the workflow it runs, the policy it’s bound to, and the business outcome it’s supposed to move. Without that connective tissue, the only thing left to count is consumption.
“They see token usage … but not really what those tokens were used for.”
— Jim Olsen, CTO, ModelOp
Closing the gap
Cutting through the AI value illusion requires a system of record for enterprise AI: one place where every ML model, GenAI deployment, agentic system, and vendor-provided AI capability is registered, governed, monitored, and measured against the use case it was meant to deliver.
That’s the operating layer ModelOp builds: the Enterprise AI Command Center that powers workflows and generates the operational intelligence executives need to translate AI activity into AI outcomes — at portfolio scale.