AI That Acts, Not Just Answers

Artificial intelligence in Indonesia is undergoing a qualitative shift. Systems are moving beyond answering questions to autonomously planning tasks, calling tools, updating records, and executing decisions across enterprise workflows—without waiting for human instruction at every step. The OECD defines agentic AI as coordinated systems that decompose goals, collaborate, and operate with higher autonomy in open-ended environments.

The distinction is not incremental; it is structural. A flawed summary is inconvenient. An AI agent with access to internal systems can generate operational, financial, legal, and reputational consequences. ISACA’s 2026 analysis identifies four emerging risk areas: agent security, organizational liability, data integrity, and the widening gap between AI ambition and governance readiness.

The central question for boards has therefore changed: it is no longer only whether AI outputs are accurate, but under what conditions AI may act on the organization’s behalf. Governance must shift from output assurance to action assurance.

Figure 1. Agentic AI moves beyond static copilots by planning tasks, calling tools, and executing actions across enterprise systems

Indonesia’s Regulatory Trajectory

This capability shift arrives as Indonesia enters a regulatory transition. In January 2026, the Minister of Communication and Digital stated that AI regulation had become a presidential priority, with a presidential regulation on the national AI roadmap and safety-and-ethics guidelines under preparation. Derivative sector rules and mandatory labelling of AI-generated content will follow.

Three regulatory pillars are converging. The forthcoming presidential regulation will formalize national AI governance. Stranas KA 2020–2045 and the Komdigi National AI Roadmap already provide the normative foundation—framing AI around ethics, human-centric decision-making, transparency, non-discrimination, and alignment with Pancasila values. The UU PDP supplies the compliance backbone: clear consent, purpose limitation, and data minimization, with full enforcement expected by late 2026. In financial services, OJK strengthened its AI ethics guidance in late 2025, sharpening focus on consumer protection, data reliability, inclusion, cyber resilience, and fairness.

Figure 2. Agentic AI adds an "action governance" layer that defines what the system may do, where it can act, and how those actions are controlled and audited.

From Model Governance to Action Governance

Traditional AI governance focuses on model quality—fairness, robustness, explainability, and validation. Those controls remain essential. But agentic AI demands an additional layer: action governance—defining what the AI may do, which systems it can access, when human approval is mandatory, what is logged, how anomalies are escalated, and how actions are reversed when things go wrong.

The analogy is straightforward: model governance is like checking the quality of an analyst’s recommendation; action governance is like defining that analyst’s delegated authority, approval limits, and audit trail. Agentic AI turns that distinction from useful theory into an operational necessity.
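In software terms, the delegated-authority analogy can be made concrete. The sketch below is a minimal, hypothetical illustration—the class and field names (`ActionPolicy`, `approval_threshold`, and so on) are invented for this example, not drawn from any standard or product—showing how least-privilege scoping, an approval threshold, and an audit log might gate every action an agent proposes:

```python
from dataclasses import dataclass, field

# Hypothetical action-governance gate: every action the agent proposes is
# checked against its delegated authority BEFORE execution, and every
# decision is logged for audit.

@dataclass
class ActionPolicy:
    allowed_actions: set           # least-privilege: what the agent may do
    approval_threshold: float      # amounts above this need human sign-off
    audit_log: list = field(default_factory=list)

    def evaluate(self, action: str, amount: float = 0.0) -> str:
        """Return 'execute', 'escalate', or 'deny', and log the decision."""
        if action not in self.allowed_actions:
            decision = "deny"       # outside delegated authority
        elif amount > self.approval_threshold:
            decision = "escalate"   # human-in-the-loop checkpoint
        else:
            decision = "execute"    # within autonomous limits
        self.audit_log.append(
            {"action": action, "amount": amount, "decision": decision}
        )
        return decision

policy = ActionPolicy(allowed_actions={"update_record", "issue_refund"},
                      approval_threshold=1_000_000)  # e.g. IDR
print(policy.evaluate("issue_refund", 250_000))      # execute
print(policy.evaluate("issue_refund", 5_000_000))    # escalate
print(policy.evaluate("delete_account"))             # deny
```

The point is not the code itself but the control pattern: autonomy is bounded by an explicit allow-list, escalation is triggered by a defined threshold rather than the agent's own judgment, and the audit trail is produced as a side effect of every decision, not reconstructed afterwards.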

Global Standards, Local Implementation

Global frameworks are highly relevant but must not be imported mechanically. ISO/IEC 42001 provides the AI management system backbone; ISO/IEC 42005 extends it with structured impact assessments; ISO/IEC 42006 strengthens certification credibility; and the NIST AI RMF offers practical guidance for governing, mapping, measuring, and managing AI risk.

The challenge is translation, not imitation. Global standards describe sound control architecture; Indonesian law determines how those controls are operationalized locally. Organizations must weave Pancasila-based ethics, UU PDP obligations, labelling expectations, and sector-specific requirements into one coherent governance model. Global standards supply the chassis and safety engineering; Indonesian law supplies the road rules and cultural expectations. Companies need both.

Figure 3. A phased roadmap helps Indonesian organizations move from isolated experiments to a structured, certifiable model for governing agentic AI.

Five Priorities for Boards

  • First, inventory AI use cases and distinguish advisory AI from action-capable AI—higher autonomy demands stronger controls.
  • Second, establish a cross-functional AI governance forum spanning risk, compliance, legal, technology, cybersecurity, data, audit, and business ownership.
  • Third, implement an AI management system aligned with ISO/IEC 42001, adapted to local requirements and sector obligations.
  • Fourth, design action-governance controls: least-privilege access, approval thresholds, human-in-the-loop checkpoints, comprehensive logging, and incident response.
  • Fifth, invest in workforce readiness across both lines of defense—agentic AI does not eliminate the need for human judgment; it changes where that judgment sits.

Indonesia has a genuine opportunity to shape an AI governance model that is both globally credible and locally grounded. The organizations that will benefit most are those that recognize the next challenge early: governing what AI does, not just what it says. 

The question for leadership is no longer whether agentic AI will arrive, but whether the governance architecture is ready to deploy it safely, lawfully, and at scale.


by Sindhu Wardhana, Consulting Practice