For the past few years, the corporate world has been locked in an AI race. Every company is trying to move faster, invest more and keep up with the pace set by Big Tech.
But speed isn’t the only challenge. We’ve reached a point where capital investment is outpacing organizational confidence.
A new survey from Collibra, in partnership with The Harris Poll, reveals a clear contradiction at the heart of enterprise AI: 84% of technology decision makers say they must increase AI spending this year to remain competitive with Big Tech, yet 88% admit their organizations are still not using AI to its full potential.
More investment, less confidence.
This is the AI trust gap: the fundamental disconnect between the desire to deploy AI and the ability to stand behind its outputs. And it’s not just a boardroom issue. Data from YouGov shows that while more than a third of Americans now use AI weekly, only 5% say they have a lot of confidence in it. The world is using AI without yet trusting it, and spending alone can’t fix that.
Performance Requires Control
AI systems are already capable of generating insights, recommendations and decisions quickly and at scale. The issue is not capability; it’s control.
Our research makes this clear. 89% of leaders say they can’t have full confidence in AI insights until the underlying data is trusted and verified.
If the data behind AI is incomplete, unverified or disconnected from business context, the output becomes unreliable. And when outputs can’t be trusted, they have to be checked. That control doesn’t happen by default; it comes from governance.
Governance defines the data, context and boundaries AI systems operate within, ensuring outputs are accurate, safe and aligned. Without it, AI doesn’t scale decisions; it scales uncertainty.
The Human In The Loop: Tasks Move, Accountability Doesn’t
That lack of trust has a direct cost.
According to our Harris Poll research, more than half of decision makers (55%) say they at least sometimes need to correct or push back on AI-generated outputs. That’s not a minor inconvenience; it’s executive time spent on quality control instead of strategy.
It also highlights a distinction that often gets lost in the debate about AI and jobs: the difference between a task and a job. AI can take on repetitive, data-heavy tasks, but it can’t take on accountability. The public already understands this: 68% of Americans say they would never trust an AI system to act on their behalf without reviewing each action first.
When an executive uses AI to generate a complex report, the technology handles a task that was once a manual burden. But if that executive is still reviewing every line for errors and hallucinations, the burden of responsibility hasn’t moved. The machine completed the labor. The job stayed human. Bridging the AI trust gap means getting past that cycle, so leaders can stop supervising outputs and get back to making the decisions that actually matter.
The New Red Flag For 2026
The AI trust gap is also redefining what competence looks like. Our research found that 64% of decision makers already consider it a red flag when candidates lack familiarity with AI tools. But familiarity is quickly becoming table stakes.
As we move further into 2026, the real red flag will be leaders who can’t tell the difference between outputs that look right and outputs that are right. Knowing how to prompt a model is one thing. Understanding whether the data behind it is reliable, governed and verified is another. That distinction is where AI literacy is heading, and organizations that don’t build this capability into their culture will struggle to assess risk or realize the value of the systems they’re investing in.
Closing The Gap
The path to AI ROI doesn’t start with a bigger budget; it starts with control. The findings from our Harris Poll research are clear: investment without trust creates expensive uncertainty. And trust doesn’t happen by default. It has to be built into how AI systems operate. The organizations that will pull ahead in 2026 are those that pair their AI ambitions with trusted data, strong governance and the ability to evaluate what AI produces.
That’s when the trust gap closes. That’s when AI systems stop stalling and start delivering real performance. And that’s when AI starts being a true competitive advantage.
About the author: Felix Van de Maele is Founder and CEO of Collibra. He has led Collibra for more than ten years of record growth and is responsible for global business strategy. Prior to co-founding Collibra, he served as a researcher at the Semantics Technology and Applications Research Laboratory (STARLab) at the Vrije Universiteit Brussel, where he focused on ontology-focused crawlers for the semantic web and semantic data integration.

