Enterprise leaders have spent two years and hundreds of billions of dollars on AI. The results have been uneven. According to McKinsey’s 2024 global survey, fewer than one in three companies report that their AI investments have generated meaningful, sustained business value. The demos tend to impress, and production tends to disappoint.
The diagnosis offered most often is that the model isn’t good enough, the data infrastructure isn’t mature, or employees haven’t been trained. That diagnosis is largely wrong, or at least incomplete.
The real problem is context: specifically, the absence of a durable, dynamic enterprise context layer sitting between the AI and the data. Until organizations understand what that means and build accordingly, model upgrades and infrastructure investment won’t close the gap.
The Context Gap
Every enterprise AI system, whether a conversational analytics tool, a financial planning agent, or a supply chain optimizer, operates by translating a human question into a machine-executable task. To do that translation accurately, the system needs to understand the business: what the data means, how metrics are defined, which business rules apply, and how those rules have evolved over time.
That understanding is what we mean by context. In most enterprise deployments, it is either absent, incomplete, or decaying faster than anyone is tracking.
Consider a Global 2000 manufacturer deploying an AI system for financial analytics. The system can access the data warehouse and run queries. But can it accurately calculate gross margin across business units when the rules account for intercompany transfers, regional cost allocations, and exceptions carved out during the last two acquisitions? Those rules live in the heads of a handful of senior finance analysts. They exist in spreadsheets, in three-year-old Slack threads, in undocumented institutional memory. When those analysts rotate roles or retire, the knowledge disappears, and the AI system, lacking that context, begins generating answers that are precise but wrong.
This is not a data quality problem. It is a context problem, and it shows up across industries.
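To make the distinction concrete, here is a minimal sketch of the kind of business logic the manufacturer example describes: a gross-margin calculation whose correctness depends on rules no schema documents. Everything here is hypothetical, entity codes, carve-out names, and the rules themselves, and stands in for knowledge that, in practice, lives in analysts' heads.

```python
# Hypothetical sketch: the "context" behind a gross margin number.
# A warehouse can sum revenue and cost; it cannot know these rules.

INTERCOMPANY_ENTITIES = {"EU-HOLDCO", "APAC-TREASURY"}  # assumed entity codes
ACQUISITION_CARVEOUTS = {"ACME-2023"}  # unit excluded after a past acquisition

def gross_margin(rows):
    """Each row: dict with business_unit, counterparty, revenue, cogs."""
    revenue = cogs = 0.0
    for r in rows:
        # Rule 1: intercompany transfers are eliminated, not counted as revenue.
        if r["counterparty"] in INTERCOMPANY_ENTITIES:
            continue
        # Rule 2: units carved out during an acquisition are excluded entirely.
        if r["business_unit"] in ACQUISITION_CARVEOUTS:
            continue
        revenue += r["revenue"]
        cogs += r["cogs"]
    return (revenue - cogs) / revenue if revenue else 0.0
```

An AI system with warehouse access but without the two rules would sum every row and produce a precise, plausible, and wrong margin.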
Four Dimensions Leaders Get Wrong
There are four structural requirements for an effective context layer, and most organizations are failing at all of them.
1. Context must be self-learning
The most common mistake is treating context as a one-time implementation. Organizations invest heavily in an initial context-capture effort (tagging metadata, documenting business definitions, cataloging approved queries) and then treat it as finished, which it never is.
Context decays continuously and often invisibly. Schemas change as engineering teams evolve the data model. Data drifts as upstream sources shift in ways no one formally announces. Business metrics get redefined: “ARR” means something different after an acquisition or a pricing model change. Business processes reorganize, and the logic that powered last quarter’s dashboards becomes silently incorrect. By the time an error surfaces, the context has often been stale for months.
If the context layer depends on humans to maintain it, humans become the bottleneck, and they will always be losing ground. An effective context engine needs to learn continuously from usage patterns, validated answers, and human corrections, improving over time rather than degrading.
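One way to picture a minimal version of that loop: context entries carry a validation timestamp, human corrections refresh it, and anything not recently validated surfaces for review instead of being silently trusted. This is an illustrative sketch, not any vendor's implementation; the class and method names are invented.

```python
import datetime

class ContextEntry:
    """One piece of business context (a metric definition, a rule)."""
    def __init__(self, definition):
        self.definition = definition
        self.last_validated = datetime.date.today()
        self.corrections = 0

class ContextStore:
    def __init__(self):
        self.entries = {}

    def record_correction(self, key, new_definition):
        # A validated human correction replaces the stale definition
        # and refreshes the entry's validation timestamp.
        entry = self.entries.setdefault(key, ContextEntry(new_definition))
        entry.definition = new_definition
        entry.last_validated = datetime.date.today()
        entry.corrections += 1

    def stale(self, max_age_days=90):
        # Entries not validated recently become review candidates
        # rather than silently trusted inputs to the AI system.
        cutoff = datetime.date.today() - datetime.timedelta(days=max_age_days)
        return [k for k, e in self.entries.items() if e.last_validated < cutoff]
```

The point is the shape of the mechanism: corrections and validations feed back into the store automatically, so freshness is a property the system tracks rather than a chore humans remember.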
2. Context is multi-dimensional and cannot be captured in one place
Enterprise knowledge does not reside in a single system. It lives simultaneously in schemas, in the logic encoded in years of validated analyst queries, in formal and informal documentation, in semantic and metadata layers, and in the tacit knowledge that exists only in people’s heads.
The mistake most enterprises make is pursuing a single source of context (a metadata catalog, a semantic layer, a data dictionary) and expecting it to carry the full burden. No single layer can. The approved queries that an expert analyst has refined over five years encode business logic that no documentation effort will fully capture. The metadata layer captures structure but not meaning. The semantic layer captures definitions but not the judgment calls embedded in how those definitions get applied.
An effective context layer has to span all of these dimensions simultaneously and maintain coherence across them as each evolves independently.
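A crude way to see what "spanning dimensions" means is to sketch the record a context layer might keep per metric. The structure below is purely illustrative (not a real product schema); the point is that structure, meaning, validated query logic, and documentation are separate fields that must all be populated and kept coherent.

```python
from dataclasses import dataclass, field

@dataclass
class MetricContext:
    """Illustrative record spanning the context dimensions named above."""
    name: str
    schema_columns: list = field(default_factory=list)      # structure: where it lives
    semantic_definition: str = ""                           # meaning: what it is
    validated_queries: list = field(default_factory=list)   # encoded analyst logic
    documentation_refs: list = field(default_factory=list)  # formal/informal docs

    def coverage(self):
        # Coherence check: how many dimensions are actually populated?
        dims = [self.schema_columns, self.semantic_definition,
                self.validated_queries, self.documentation_refs]
        return sum(1 for d in dims if d)
```

A catalog-only or semantic-layer-only approach fills one field and leaves the rest empty, which is exactly the single-source failure mode described above.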
3. The context layer must be architecturally independent of underlying data platforms
This is the most consequential architectural decision most organizations are not treating seriously enough.
When context gets built inside a specific platform, whether a cloud data warehouse, a lakehouse, or a vendor-specific semantic layer, it becomes entangled with that platform’s proprietary structures and APIs. The context layer is the most valuable intellectual asset a data organization creates. It encodes years of business logic, validated queries, and institutional knowledge. When that asset is platform-dependent, the organization has surrendered its architectural flexibility and negotiating leverage.
This is compounded by a reality most enterprises already live with, which is that data rarely lives in one place. The typical Global 2000 company operates across a heterogeneous landscape: Snowflake for the enterprise warehouse, Databricks for data science workloads, Salesforce for CRM, SAP for ERP, and a long tail of legacy and departmental systems that will not be consolidated anytime soon. A context layer built inside any one of these platforms captures what that platform sees and nothing more. The business questions that matter most, connecting revenue performance to operational data to customer behavior, require context that spans all of them.
Abstraction is therefore not just a hedge against future platform changes; it is the only architecture that can serve the reality of how enterprise data actually exists today. Data stacks evolve, migrations happen, and the platform that is optimal today may not be optimal in three years. Organizations that have abstracted their context layer can serve the full breadth of their data landscape now and make platform transitions without starting over, while those that have not are constrained in both dimensions, often discovering the cost only when a migration is already underway.
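The architectural idea is the familiar one of depending on an interface rather than a platform. A minimal sketch, with invented names and a fake adapter standing in for a real warehouse connector: the context (definitions, validated SQL) lives in portable form, and each platform is reached through a narrow adapter.

```python
from typing import Protocol

class DataPlatform(Protocol):
    """Narrow adapter interface; the context layer depends only on this,
    never on any one warehouse's proprietary API."""
    def run_sql(self, sql: str) -> list: ...

class ContextLayer:
    def __init__(self, definitions: dict):
        # Business logic lives here in portable form (metric names mapped
        # to validated SQL), outside any single platform.
        self.definitions = definitions

    def answer(self, metric: str, platform: DataPlatform) -> list:
        # The same context executes against Snowflake, Databricks, or a
        # legacy system, given an adapter implementing DataPlatform.
        return platform.run_sql(self.definitions[metric])

class FakeWarehouse:
    """Stand-in adapter for illustration; a real one would wrap a driver."""
    def run_sql(self, sql: str) -> list:
        return [("gross_margin", 0.41)] if "margin" in sql else []
```

Swapping platforms then means writing a new adapter, not rebuilding the definitions, which is the portability the migration argument above depends on.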
4. Every AI agent inherits the context problem, and makes it worse
The fourth dimension is only now becoming urgent as enterprises move from copilots and chatbots to autonomous agents.
With a copilot, there is a human in the loop. The analyst reads the answer, applies judgment, catches the error. The feedback cycle is forgiving. The defining characteristic of agentic AI is that it operates without that constant human checkpoint. Agents run queries, synthesize data, generate reports, and trigger downstream workflows autonomously, at scale, continuously.
That autonomy is the value proposition, and it is also why the quality of the underlying context layer becomes non-negotiable.
A poorly configured dashboard delivers a wrong number to one person in one meeting. An agent operating on stale or incomplete context propagates that error across dozens of downstream systems and decisions before anyone realizes something has gone wrong. The autonomy that makes agents valuable is the same property that makes bad context so dangerous. Every agent an organization deploys is only as trustworthy as the context grounding it, and confident but wrong answers delivered at machine speed, embedded in automated workflows, represent a governance failure waiting to happen.
A Framework for Investment Decisions
For senior leaders evaluating AI investments, four questions are worth asking directly.
- Does the system learn, or does it require manual maintenance? A context layer that depends on human curation will decay, so it is worth asking vendors specifically how context is updated over time and what human effort is required to keep it accurate.
- How many dimensions of context does it capture? Solutions that address only one layer in isolation (metadata, semantic definitions, or query history) are worth treating with skepticism. The more defensible systems integrate multiple context dimensions and keep them coherent as each evolves.
- Is the context portable? If the organization needed to migrate data platforms in two years, what happens to the context it has built? The answer reveals how much strategic lock-in is embedded in the architecture.
- What is the governance model for agents? Before deploying autonomous agents, organizations should be able to articulate what context those agents are grounded in, how that context is validated, and what mechanisms exist to detect and correct errors before they propagate.
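The fourth question, the governance model for agents, can be made concrete with a small sketch of a gate an autonomous workflow might pass through before its output propagates. This is a hypothetical mechanism, not a prescription; the function and thresholds are invented for illustration.

```python
def approve_agent_action(context_age_days, human_validated, max_age_days=30):
    """Hypothetical governance gate: block autonomous propagation when the
    grounding context is stale or was never validated by a human.
    Returns (approved, reason)."""
    if not human_validated:
        return (False, "context never validated by a human")
    if context_age_days > max_age_days:
        return (False, f"context is {context_age_days} days old; needs review")
    return (True, "ok")
```

Whatever form such a gate takes, the leadership-level test is the same: if no one can state what check an agent's output passes before it triggers downstream workflows, the organization is relying on machine-speed confidence rather than governance.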
The Strategic Implication
The pattern emerging across successful enterprise AI deployments is consistent. The organizations generating durable value are not necessarily the ones with the largest models or the most data. They are the ones that have invested in a living, multi-dimensional, platform-independent context layer and treated it as a strategic asset rather than an implementation detail.
For enterprises operating at scale, building and maintaining that context layer is the AI investment. The organizations that recognize this now will build a compounding advantage, while those that continue to treat it as a footnote will find themselves in an expensive and recurring cycle of pilots that impress in demos and disappoint in production.
About the Author: Soham Mazumdar is a serial entrepreneur and technology leader, currently the Co-Founder and CEO of WisdomAI, an AI-powered data insights platform that helps enterprises query their data using natural language. Previously, he co-founded Rubrik, where he served as Chief Architect and helped scale the company to IPO. Soham also co-founded Tagtile (acquired by Facebook), and led core search infrastructure at Google.
The post Why Enterprise AI Keeps Failing, and It’s Not the Model’s Fault appeared first on BigDATAwire.


