Artificial intelligence dominated discussions at Davos this year, but inside enterprises the conversation has shifted from excitement to execution. Organizations are moving quickly to deploy agents and automation, yet many are discovering that technology alone does not guarantee operational value.
Against that backdrop, Neal Ramasamy, Chief Information Officer at Cognizant, shared his perspective with BigDATAwire on where enterprise AI adoption stands today and what separates early experimentation from sustainable scale. In this Q&A feature, he explains why context engineering is emerging as a critical discipline for enterprise AI success.
From what you saw coming out of Davos, what’s the biggest gap between enterprise AI ambition and reality right now?
Most enterprises are still treating AI as a technology rollout. They’re deploying agents before designing the environments those agents need to operate in. That’s where the gap shows up.
AI succeeds or fails based on how clearly an organization defines decision ownership, operating boundaries, and accountability for its agents. When guardrails and escalation paths are implicit or fragmented, agents underperform or break under real operating pressure once they leave a controlled setting. What’s working is treating agents as part of the core operating model, not an overlay. The organizations pulling ahead are redesigning how work runs and how decisions are made and governed before they scale automation.
You’ve said “context engineering” matters more than prompt engineering. In practical terms, what does that look like inside a Fortune 500 stack?
Prompt engineering optimizes an interaction between a user and the system. Context engineering defines the operating system that interaction runs on.
In a Fortune 500 environment, that means clearly defining who makes decisions, what authority they have, and how exceptions are handled. Much of that context has traditionally lived in people’s heads. AI systems don’t have access to it unless it’s intentionally designed into the environment. In practice, this becomes a persistent context layer aligned to workflows, policies, and data lineage, with governance embedded into execution rather than applied after the fact. When that foundation is in place, agents behave consistently across teams and use cases and become an extension of the team rather than just a tool. That’s what makes it operationally dependable at scale.
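To make the idea of a persistent context layer concrete, here is a minimal sketch of what encoding decision ownership, authority limits, and escalation paths might look like in code. This is an illustration only, not Cognizant's implementation; all names here (the workflow steps, roles, and limits) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionContext:
    """One entry in the context layer: who owns a decision and within what limits."""
    owner: str               # accountable role, not an individual
    authority_limit: float   # e.g. maximum amount the agent may approve on its own
    escalation_path: str     # where exceptions go when the limit is exceeded

# Hypothetical context layer, keyed by workflow step
CONTEXT_LAYER = {
    "invoice.approve": DecisionContext("AP Manager", 10_000.0, "finance-review"),
    "refund.issue":    DecisionContext("Support Lead", 500.0, "support-escalations"),
}

def authorize(step: str, amount: float) -> str:
    """Return 'execute' if the agent may act, otherwise the escalation path."""
    ctx = CONTEXT_LAYER.get(step)
    if ctx is None:
        return "human-review"  # no context defined: never let the agent act implicitly
    return "execute" if amount <= ctx.authority_limit else ctx.escalation_path

print(authorize("refund.issue", 200))  # execute
print(authorize("refund.issue", 900))  # support-escalations
```

The point of the sketch is that the rules live in an explicit, queryable layer rather than in people's heads, so every agent consults the same governance at runtime.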
For data and analytics teams specifically, what’s the hardest part of operationalizing AI agents?
For data and analytics teams, the challenge shows up after agents leave controlled environments and enter live workflows – and that’s when tacit knowledge becomes a constraint. Teams must surface judgments that were never formalized, determine the context agents need at runtime, and ensure that outputs remain explainable as agents operate across datasets and organizational boundaries.
This is also where another gap becomes visible: the velocity gap. Infrastructure and proofs of concept move quickly, but operational value lags as teams work through governance, accountability, and cross-silo consistency. When people, processes, and policies aren’t aligned, agents behave unpredictably, so these are the areas enterprises need to focus on.
What’s one concrete metric enterprises should track to know their AI agents are actually working, not just demoing well?
I look at cycle time in a core operational process. Cycle time tells you whether agents are actually removing friction across the full flow of work, not just accelerating a single step. It’s easy to baseline, it’s owned by the business, and it reflects the impact that operators and customers feel directly—whether that’s resolving a case, processing an exception, or moving a decision through the system.
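Baselining cycle time is straightforward in practice: measure end-to-end elapsed time per unit of work, not per automated step. A minimal sketch, assuming a hypothetical case log of opened/resolved timestamps:

```python
from datetime import datetime

def cycle_time_hours(opened: str, resolved: str) -> float:
    """End-to-end elapsed hours for one unit of work (e.g. resolving a case)."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(resolved, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

# Hypothetical case log: (opened, resolved) pairs
cases = [
    ("2025-06-01 09:00", "2025-06-02 09:00"),  # 24 hours
    ("2025-06-01 10:00", "2025-06-01 22:00"),  # 12 hours
]
avg = sum(cycle_time_hours(o, r) for o, r in cases) / len(cases)
print(f"avg cycle time: {avg:.1f}h")  # avg cycle time: 18.0h
```

Comparing this average before and after agent deployment shows whether friction was removed from the full flow of work rather than from a single step.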
I also expect traceability alongside it. Leaders need to understand why an agent produced a recommendation. To your point, at this stage, the bar has moved past prototypes. What matters is whether agents are driving real productivity and more consistent outcomes, with accountability built into the decision trail.
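One common way to build that accountability into the decision trail is to record, for each agent recommendation, the inputs it saw and the policy that authorized it. The sketch below is illustrative; the field names and policy reference are hypothetical, not a specific product's schema.

```python
import time

def record_decision(trail: list, agent: str, recommendation: str,
                    inputs: dict, policy_ref: str) -> None:
    """Append an auditable record of why the agent produced this recommendation."""
    trail.append({
        "ts": time.time(),
        "agent": agent,
        "recommendation": recommendation,
        "inputs": inputs,          # the evidence the agent acted on
        "policy_ref": policy_ref,  # the rule that authorized the action
    })

trail: list = []
record_decision(trail, "claims-agent", "approve",
                {"claim_amount": 320, "limit": 500}, "CLM-POLICY-7")
print(trail[-1]["recommendation"])  # approve
```

With a trail like this, a leader can answer "why did the agent recommend that?" by pointing at recorded inputs and a named policy rather than reconstructing the reasoning after the fact.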
If a data team has six months to prepare for agentic AI, what’s the single highest-impact change they should make first?
Spend six months building your context backbone so the organization is legible to agents. Again, that means getting clear on how decisions are made, where accountability sits, and how policies and exceptions actually play out in day-to-day work.
Teams that take the time to codify that context, implement runtime lineage, and build responsible governance into execution will move faster with fewer surprises as they scale agents. Those who skip this work will likely end up compensating later with controls and rework. Treating context as a first-class asset early is what allows agent initiatives to deliver sustained value over time.
The post Context Engineering Will Decide Enterprise AI Success, Says Cognizant CIO Neal Ramasamy appeared first on BigDATAwire.
Author: Ali Azhar
