Genpact’s report “Autonomy requires trust in AI: Four leadership decisions that determine whether AI scales” explores how the promise of agentic AI—systems capable of autonomous action and decision-making—represents the next frontier in enterprise transformation. While generative AI (GenAI) has demonstrated its capacity to accelerate tasks and improve productivity, a significant gap exists between enterprise ambition for agentic AI and organizational readiness for its full-scale deployment. This challenge is not primarily technological; rather, it centers on critical leadership decisions concerning trust, accountability, measurement, and process design. Without resolving these fundamental constraints, enterprises risk costly pilot programs that fail to scale, ultimately hindering true autonomous execution.
The Trust Deficit: Why Autonomy Stalls at the Edges
Despite widespread recognition of agentic AI’s transformative potential, most organizations remain hesitant to grant these systems full autonomy. The transition from AI assisting human judgment to AI independently executing actions introduces a profound trust challenge, largely stemming from unresolved questions of accountability and measurable value.
The data reveals a stark contrast: 92% of senior executives surveyed by HFS Research in partnership with Genpact (2026) anticipate agentic AI will fundamentally change how work is executed. Yet nearly 80% continue to operate these systems in supervised or assisted modes, retaining human sign-off on consequential decisions. This reluctance is not a technical limitation; rather, it reflects an organizational “risk appetite problem.” When an agentic system takes an action that cannot be quietly reversed, enterprise leaders face critical questions about operational and legal consequences, explainability, and regulatory exposure. Compliance or regulatory exposure (35%), reputational damage (34%), and blurred lines of accountability (31%) are cited as the top concerns hindering trust in autonomous action.
A related challenge is the prevailing measurement paradigm. Enterprises expect rapid returns from agentic AI, with 71% believing it will deliver ROI faster than previous technology waves. However, 53% report lacking accurate KPIs for autonomous systems, and 67% still rely on productivity-based metrics designed for earlier automation. These traditional metrics, such as cost savings or reduced manual effort (prioritized by 32%), fail to capture the value of adaptive, decision-driven systems. Investment is accelerating rapidly—an average 38% increase in agentic AI budgets over the next 12 months—but without a corresponding shift in accountability frameworks and outcome measurement, capital allocation remains misdirected (HFS Research, 2026, p. 12).
What to do:
- Define accountability upfront: Explicitly determine who owns an agent, who is responsible for its failures, and how escalations (e.g., RAG statuses for critical incidents, defined SLA thresholds) are managed before deployment.
- Establish agent-native metrics: Move beyond productivity. Measure autonomous execution, such as end-to-end workflow completion rates (e.g., 85% fully autonomous claim processing), decision latency reduction (e.g., 50% faster credit approval decisions without human intervention), independent exception handling (e.g., 70% resolution of billing discrepancies), and reduction in human intervention points (e.g., first-contact resolution for complex customer service queries increases by 20% due to autonomous diagnosis).
- Implement auditable governance: Ensure agent actions are logged, explainable (e.g., a “reasoning trace” for each decision in a CRM system), and reversible (e.g., a 24-hour window to reverse a system-initiated order); a minimal sketch combining such a log with an agent-native metric follows this list.
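As an illustration of what auditable, metric-ready agent actions might look like, here is a minimal Python sketch. The `AgentAction` record, its field names, and the example claims-processing log are assumptions for illustration, not structures prescribed by the report; the point is that every action carries a reasoning trace, an autonomy flag, and a reversal deadline, from which agent-native metrics can be computed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentAction:
    """One logged agent decision; field names are illustrative assumptions."""
    agent_id: str
    action: str              # e.g., "approve_claim"
    reasoning_trace: str     # human-readable "why", kept for auditors
    autonomous: bool         # True if no human sign-off occurred
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def reversal_deadline(self, window_hours: int = 24) -> datetime:
        """Actions remain reversible inside a fixed window (24h here)."""
        return self.timestamp + timedelta(hours=window_hours)

def autonomous_completion_rate(log: list[AgentAction]) -> float:
    """Agent-native metric: share of actions completed without human sign-off."""
    if not log:
        return 0.0
    return sum(a.autonomous for a in log) / len(log)

# Usage: a hypothetical claims agent with three autonomous actions out of four.
log = [
    AgentAction("claims-agent-01", "approve_claim", "policy active; amount under limit", True),
    AgentAction("claims-agent-01", "approve_claim", "matched prior claim pattern", True),
    AgentAction("claims-agent-01", "escalate_claim", "fraud score above threshold", False),
    AgentAction("claims-agent-01", "approve_claim", "routine renewal", True),
]
print(f"Autonomous completion rate: {autonomous_completion_rate(log):.0%}")  # 75%
```

Keeping the reasoning trace and reversal deadline on the record itself is what makes post-hoc explainability enforceable rather than aspirational.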
What to avoid:
- Deferring accountability decisions: Assuming these issues will resolve themselves or can be addressed post-deployment.
- Applying legacy ROI frameworks: Judging agentic AI solely on traditional efficiency gains risks misrepresenting its strategic value and stifling investment in transformational capabilities.
- Ambiguous oversight: Lack of clear ownership in a multi-agent system (e.g., where an inventory agent interacts with a pricing agent) will create operational and legal exposure.
Organizational Readiness: Redefining Roles and Processes for Autonomy
The shift to agentic AI necessitates a fundamental redesign of both organizational structures and underlying business processes. Enterprise adoption is currently outrunning the ability to adapt human roles and operational frameworks, creating a significant impediment to scalable autonomy.
Structural changes are already visible, with 44% of enterprises expecting agentic AI to flatten hierarchies by reducing management layers, and 36% anticipating the elimination of specific roles. This rapid restructuring, however, is outpacing the deliberate redesign of human roles. The workforce often experiences anxiety stemming from uncertainty about job security, unclear governance mechanisms, and discomfort with the pace of change, leading to largely neutral or negative sentiment towards agentic AI. This indicates a lack of clear communication and concrete frameworks for how human responsibilities will evolve alongside autonomous systems.
Furthermore, the skills required for agentic AI success are shifting dramatically. The priority is no longer just building AI models but operating alongside them. Enterprises now prioritize skills such as workflow orchestration and integration (42%), data engineering and API development (39%), and monitoring and observability systems (36%). These capabilities are crucial for connecting, monitoring, and governing autonomous systems embedded in live workflows.
Critically, agentic AI does not fix broken processes; it exposes them. Unprepared business processes are the leading barrier to agentic AI adoption, cited by 33% of enterprises. Simply layering autonomy onto legacy process logic, characterized by sequential approvals, manual handoffs, and unclear ownership, leads to brittle deployments and hard-to-demonstrate ROI. Functions with high decision volumes, frequent exceptions, and visible manual coordination costs are seeing the most significant workflow redesign efforts: customer service (44%), IT infrastructure (34%), and finance and accounting (33%). Autonomy scales only when processes are explicitly redesigned for system execution, removing human dependency points and clarifying decision ownership across functions.
What to do:
- Make human transition a design constraint: Integrate role clarity, escalation paths (e.g., defined thresholds for human intervention in an automated customer onboarding process), and oversight responsibilities directly into agentic system design. For example, a “CX Agent Supervisor” role monitors agent performance dashboards and intervenes when CSAT drops below 70% or complaint rates exceed 5% (a minimal threshold check is sketched after this list).
- Prioritize end-to-end process redesign: Before deploying agents, identify and re-engineer workflows that currently rely on sequential human approvals, manual handoffs (e.g., handoff from an automated fraud detection system to a human investigator with clear trigger conditions), and ambiguous ownership.
- Invest in new skill sets: Focus on developing expertise in AI governance, data engineering, workflow orchestration, and human-agent interaction design, not just model development. This includes training IT operations teams on monitoring agent performance and data integrity (e.g., using RAG status reporting).
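To make the supervisor example above concrete, the following Python sketch shows the kind of threshold check such a role might run against an agent performance dashboard. The 70% CSAT floor and 5% complaint-rate ceiling echo the illustrative figures in the text; the function and its inputs are assumptions.

```python
# Hypothetical supervisor thresholds, mirroring the illustrative figures above.
CSAT_FLOOR = 0.70
COMPLAINT_CEILING = 0.05

def needs_human_intervention(csat: float, complaint_rate: float) -> bool:
    """True when agent performance breaches either threshold."""
    return csat < CSAT_FLOOR or complaint_rate > COMPLAINT_CEILING

# A dashboard poll: CSAT at 68% trips the floor, so the supervisor steps in.
print(needs_human_intervention(csat=0.68, complaint_rate=0.03))  # True
```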
What to avoid:
- Retrofitting agents to existing broken processes: Automating inefficiencies will compound them, leading to poor outcomes and distrust.
- Ignoring workforce anxiety: A lack of clarity on new roles and responsibilities will generate resistance and undermine adoption.
- Underestimating the complexity of integration: Autonomous agents require robust data foundations and API connectivity to operate effectively across enterprise systems (e.g., CRM, ERP, billing).
Strategic Imperatives for Scaling Agentic AI: Governance, Metrics, and Operating Models
Scaling agentic AI from isolated pilots to enterprise-wide autonomous execution requires a deliberate shift in an organization’s operating model. The leaders in this space are not simply moving fast; they are systematically resolving the foundational constraints of accountability, measurement, people, and process.
Immediate Priorities (First 90 Days):
- Establish an AI Governance Council: Form a cross-functional body including legal, compliance, risk, IT, and business leaders. Define the mandate for agentic AI, acceptable levels of autonomy, and initial policies for oversight.
- Pilot agent-native metrics: Identify one high-impact, low-risk agentic AI pilot. Develop and implement a tailored measurement framework that captures autonomous execution outcomes, not just productivity.
- Begin workforce impact assessment: Map existing roles to anticipated agentic AI deployments. Identify roles most likely to be augmented or redesigned, and initiate communication strategies around skill development.
Operating Model and Roles:
- AI Governance Lead: Responsible for establishing and enforcing AI policies, ethical guidelines, and compliance frameworks for agentic systems. This role collaborates with legal and regulatory teams.
- AI Operations Engineer: Focuses on monitoring agent performance, ensuring system reliability, managing data pipelines, and implementing escalation protocols. They operate monitoring dashboards (e.g., tracking autonomous action rates vs. human intervention rates).
- Process Re-engineering Specialist (AI-focused): Works with business units to redesign end-to-end workflows explicitly for autonomous execution, removing manual handoffs and clarifying decision logic.
- Human-Agent Interaction Designer: Develops intuitive interfaces and protocols for human oversight, intervention, and collaboration with agentic systems (e.g., designing “function calling” mechanisms for agents to request specific human input; one possible request shape is sketched below).
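As a hedged illustration of the “function calling” idea, the sketch below shows one shape a human-input request emitted by an agent might take. The payload schema, field names, and the finance example are assumptions modeled loosely on common tool-calling conventions, not a specific vendor API.

```python
import json

def request_human_input(reason: str, context: dict, response_sla_minutes: int) -> str:
    """Serialize a human-input request; the schema is an assumption, not a vendor API."""
    return json.dumps({
        "function": "request_human_input",
        "arguments": {
            "reason": reason,
            "context": context,
            "response_sla_minutes": response_sla_minutes,
        },
    })

# Usage: a hypothetical finance agent asks a human to confirm an out-of-policy refund.
print(request_human_input(
    reason="refund exceeds domain-autonomy limit",
    context={"order_id": "SO-1042", "amount_usd": 7200},
    response_sla_minutes=5,
))
```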
Governance and Risk Controls:
- Layered Autonomy Thresholds: Implement granular controls over agent actions (e.g., a “supervised autonomy” mode where all high-value transactions above $5,000 require human review; “domain autonomy” where agents can act independently within predefined parameters but escalate exceptions). A routing sketch follows this list.
- Pre-defined Escalation Paths: Clearly document scenarios requiring human intervention, the responsible role, and the communication channels (e.g., a critical error in a financial transaction agent triggers an alert to the Financial Operations team with a 5-minute response SLA).
- Audit Trails and Explainability: All agent actions, decisions, and data inputs must be recorded in an immutable log within the CRM or ERP system for post-hoc analysis and regulatory compliance. Implement tools that explain an agent’s reasoning (e.g., “Why was this customer offer generated?”).
- Red-Teaming and Adversarial Testing: Proactively test agentic systems for vulnerabilities, biases, and unintended consequences to build resilience before broad deployment.
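Below is a minimal Python sketch of layered autonomy thresholds and pre-defined escalation, assuming a simple transaction-routing rule: the $5,000 review threshold mirrors the illustrative figure above, while the exception flag and the three-way routing are assumptions.

```python
from enum import Enum

class Route(Enum):
    AUTO_EXECUTE = "auto_execute"   # domain autonomy: act independently
    HUMAN_REVIEW = "human_review"   # supervised autonomy: sign-off required
    ESCALATE = "escalate"           # exception: alert the responsible team per SLA

def route_transaction(amount_usd: float, is_exception: bool,
                      review_threshold: float = 5_000) -> Route:
    """Route one transaction; threshold and exception flag are illustrative."""
    if is_exception:
        return Route.ESCALATE       # pre-defined escalation path
    if amount_usd > review_threshold:
        return Route.HUMAN_REVIEW   # high-value: human retains sign-off
    return Route.AUTO_EXECUTE       # within parameters: agent acts alone

print(route_transaction(1_200, is_exception=False))  # Route.AUTO_EXECUTE
print(route_transaction(9_500, is_exception=False))  # Route.HUMAN_REVIEW
print(route_transaction(1_200, is_exception=True))   # Route.ESCALATE
```

In practice, the review threshold and escalation SLA would live in governed configuration rather than code, so the AI Governance Council can tighten or relax autonomy without redeployment.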
What “Good” Looks Like:
- Clear accountability chains: Every autonomous action can be traced to a responsible individual or team.
- Value-driven metrics: Business leaders can demonstrate the measurable impact of agentic AI beyond efficiency, linking it directly to outcomes like increased customer lifetime value, accelerated time-to-market, or reduced operational risk.
- Integrated human-AI workflows: Human employees understand their evolved roles—focusing on oversight, complex problem-solving, and exception management—and seamlessly collaborate with autonomous systems.
- Resilient, re-engineered processes: Workflows are designed from the ground up to support autonomous execution, minimizing manual handoffs and enabling seamless coordination across systems (e.g., 90% straight-through processing for routine order fulfillment).
Summary
The evolution from GenAI to agentic AI marks a pivotal shift, moving beyond mere augmentation to true autonomous execution. While technological capabilities are advancing rapidly, the ability of large enterprises to scale agentic AI hinges not on speed, but on a deliberate and thoughtful resolution of four interconnected constraints: accountability, measurement, people, and process. Organizations that successfully navigate this transition will be those that fundamentally redefine how decisions are governed, how responsibility is assigned, and where control sits when systems begin to act independently. This requires an enterprise-wide operating model redesign, robust governance, tailored metrics, and a proactive approach to workforce transformation, ensuring that trust in AI is earned, not just assumed.
Source: HFS Research, in partnership with Genpact. (2026). Autonomy requires trust in AI: Four leadership decisions that determine whether AI scales.