
Data & Control: Essential Ingredients for Effective Agentic AI Adoption
Agentic AI offers a path to unmatched operational efficiency, but we need controls. A robust “undo” capability is essential not only for cyber resilience but for enabling the safe, confident adoption of autonomous AI.
Since the beginning, technology has been about enabling efficiency. In IT, AI promises to increase our productivity to a degree not seen since the arrival of the printing press, steam engine, or personal computer.
Today’s LLM-powered AI assistants are a good start, but when it comes to efficiency, automation is the gold standard. And truly automating complex, multi-step tasks requires a move to agentic AI.
Agentic AI differentiates itself from generative AI by being capable of perceiving its environment, acting on instructions, setting goals, pursuing courses of action to achieve them, and refining its approach throughout the process.
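To make that distinction concrete, here is a minimal sketch of the agentic loop in Python. The `observe`, `plan`, and `execute` callables stand in for whatever environment sensors, LLM planner, and tools a real framework provides; the names are illustrative assumptions, not any particular library’s API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Action:
    name: str               # what the agent intends to do next
    done: bool = False      # True once the goal is judged achieved
    result: Any = None      # final answer when done

def run_agent(goal: str,
              observe: Callable[[], str],
              plan: Callable[[str, str, list], Action],
              execute: Callable[[Action], Any],
              max_steps: int = 10) -> Any:
    """Pursue a goal by repeatedly perceiving, acting, and refining."""
    history: list = []
    for _ in range(max_steps):
        observation = observe()                    # perceive the environment
        action = plan(goal, observation, history)  # choose a course of action
        if action.done:                            # goal achieved: stop
            return action.result
        outcome = execute(action)                  # act on the plan
        history.append((action, outcome))          # refine using feedback
    raise RuntimeError("step budget exhausted before the goal was met")
```

The loop, not the model, is what makes the system “agentic”: each pass feeds outcomes back into the next planning step until the goal is met or a budget runs out.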
It’s easy to see the benefits of a technology that, for example, promises to automate multi-step customer service requests from the time support is requested all the way through to resolution without involving a single human. The IT advisory firm Gartner predicts that agentic AI may make as much as 15% of day-to-day work decisions automatically by 2028, up from essentially zero percent today.
But, it must be noted, Gartner also estimates that 40% of agentic AI initiatives will be canceled before achieving any ROI due to “escalating costs, unclear business value or inadequate risk controls.”
What will separate the initiatives that deliver on that promise from the 40% that are abandoned early? Two critical enablers are robust, well-governed data and clearly defined AI resilience guardrails like escalation thresholds, human-in-the-loop triggers, and response rollback capabilities.
Clean data is necessary for AI autonomy
Even with today’s LLM-based AI assistants, the evolutionary precursors of more autonomous agents, successful implementations are far from certain. In fact, according to one widely cited study from MIT’s NANDA initiative, up to 95% of generative AI initiatives fail to drive measurable business value before being shelved.
According to the report, one of the top reasons for failure isn’t insufficient training; it’s models’ limited access to relevant, workflow-specific business data. This drives home the importance of training LLMs on data pertaining to the tasks they will ultimately be performing, in addition to the broad training data that provides their language facility.
Without robust, broad training data, LLMs will lack the structural underpinning to return useful answers to prompts. But without relevant, context-aware business data, they will also fail to meaningfully enhance the productivity of departments or business units. The ability to quickly and efficiently train large language models on nuanced business application and process data is critical to accelerating time-to-value in AI implementations.
CIOs aiming to accelerate time-to-value from AI must prioritize efficiently training or augmenting LLMs and SLMs with curated, application-specific datasets. By pruning and aligning training data to key use cases and systems, organizations can improve AI’s ability to deliver cost savings, productivity gains, and risk mitigation.
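As a rough illustration of what “pruning and aligning” can mean in practice, the sketch below filters a JSONL fine-tuning corpus down to records tied to the workflows an agent will actually perform. The field names (`workflow`, `text`), the workflow labels, and the length threshold are assumptions for the example, not a prescribed schema.

```python
import json

# Workflows the agent will actually perform; illustrative names only.
TARGET_WORKFLOWS = {"customer_support", "order_management"}

def curate(path_in: str, path_out: str, min_len: int = 50) -> int:
    """Keep only well-formed records aligned to the target workflows."""
    kept = 0
    with open(path_in) as src, open(path_out, "w") as dst:
        for line in src:
            record = json.loads(line)
            if record.get("workflow") not in TARGET_WORKFLOWS:
                continue                              # prune off-task data
            if len(record.get("text", "")) < min_len:
                continue                              # drop low-signal fragments
            dst.write(json.dumps(record) + "\n")
            kept += 1
    return kept
```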
Ensuring agentic AI security
As firms like Gartner and Deloitte track rising interest in agentic AI, they also caution that uncertainty and “inadequate risk management” threaten to derail its adoption. Today’s agents show a worrying tendency to become “confused,” make questionable decisions, and fail at even simple multi-step tasks.
To follow the progression of tasks from end to end, agents also require broad access permissions, an approach at odds with zero trust principles like least privilege. There is also the concern that, when agents begin interacting with one another (“multi-agent”) and directly with LLMs (“multi-modal”), it becomes difficult for humans to untangle the chains of causality in these increasingly complex systems.
Two risks in particular—rogue decision-making and sensitive data disclosure—will need to be addressed directly. To realize agentic AI’s benefits without assuming unnecessary risk, organizations must enforce:
Traceability – maintaining complete visibility into prompt chains, tool use, and decisions across the agentic system. This enables forensic investigation and auditability.
Reversibility – ensuring every agent-initiated change (to files, configurations, databases, code, etc.) can be rolled back instantly to a clean, immutable snapshot.
Together, these controls form the digital equivalent of an “undo button” for AI, a safeguard that transforms unbounded autonomy into controlled, accountable automation and reversible units of work.
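Here is a minimal sketch of those two controls working together, assuming file-level changes, a simple copy-based snapshot, and a JSON audit log. A production system would use immutable storage and a proper audit pipeline; every name below is illustrative.

```python
import json
import logging
import shutil
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

def snapshot(path: str) -> str:
    """Copy the file aside before the agent touches it; return the snapshot path."""
    snap = f"{path}.{uuid.uuid4().hex}.snap"
    shutil.copy2(path, snap)
    return snap

def traced_change(agent_id: str, path: str, apply_change) -> None:
    """Run an agent-initiated file change as an audited, reversible unit of work."""
    snap = snapshot(path)
    audit.info(json.dumps({"agent": agent_id, "target": path,
                           "snapshot": snap, "ts": time.time()}))  # traceability
    try:
        apply_change(path)            # the agent's proposed action
    except Exception:
        shutil.copy2(snap, path)      # reversibility: restore the clean copy
        audit.info(json.dumps({"agent": agent_id, "rolled_back_to": snap}))
        raise
```

Every change leaves a log entry pointing at the snapshot it can be rolled back to, so a human can audit or undo any step after the fact, not just when an exception fires.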
Human-in-the-loop & snapshots
Agentic AI security best practices reinforce the need to keep humans in the loop. This is especially critical where actions carry financial, reputational, or regulatory consequences. Certain actions, like deleting large datasets or attempting to tamper with backups, are cause for immediate alarm.
Organizations can protect themselves while scaling automation by combining the controls below (sketched in code after the list):
Immutable backups for all critical systems
Near-instant rollback capabilities
Real-time visibility into agent workflows
Guardrails that can easily undo high-risk actions
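A hedged sketch of such a guardrail, assuming a simple keyword-based risk classifier and an `approve` callback that routes to a human reviewer; real systems would use richer policy engines, and these rules mirror the examples above rather than an exhaustive taxonomy.

```python
# Risk rules mirroring the examples above; illustrative, not exhaustive.
HIGH_RISK_VERBS = ("delete", "drop", "truncate")
PROTECTED_TARGETS = ("backup", "snapshot")

def risk_of(action: str, target: str) -> str:
    if any(p in target.lower() for p in PROTECTED_TARGETS):
        return "critical"               # tampering with backups: immediate alarm
    if any(v in action.lower() for v in HIGH_RISK_VERBS):
        return "high"                   # large destructive changes
    return "low"

def gate(action: str, target: str, approve) -> bool:
    """Let low-risk actions through; escalate or block everything else."""
    level = risk_of(action, target)
    if level == "critical":
        return False                    # block outright and alert
    if level == "high":
        return approve(action, target)  # human-in-the-loop trigger
    return True                         # autonomous execution allowed
```

With this in place, a routine action runs autonomously, a request to delete a large dataset waits for a human, and anything touching backup storage is refused outright.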
Especially in the short term, we should make peace with a balancing act between increased autonomy and the need for human oversight. Agentic AI offers a path to unmatched operational efficiency, but we need controls.
A robust “undo” capability, powered by immutable architecture and transparent telemetry, is essential not only for cyber resilience but for enabling the safe, confident adoption of autonomous AI.