– Extract –

The conversation around artificial intelligence continues to evolve. Institutions and regulatory bodies are becoming less concerned with what AI says and produces, and more concerned with what AI is actively doing. That shift from outputs to actions marks a turning point for every organization deploying intelligent systems, placing communication leaders at the center of a new and consequential challenge.

As this rapid change takes hold, it leaves a gap between what agents can do and what institutions have built to contain them.

This is the core insight behind recent United Nations University research on agentic AI: systems capable of autonomous action require boundaries before they are granted freedom. The problem has moved upstream. Governing what a model says is manageable. Governing what an agent does, across connected systems, data stores, and irreversible transactions, is a structural challenge of an entirely different order.

The UNU framing of “bounded autonomy” draws a direct line from technical design to institutional accountability. Its argument: institutional deployment of agentic systems should begin from a minimum necessary privilege and a clearly delimited scope, with accountable oversight embedded from the start rather than bolted on later. Agents perform best when they operate in isolated environments with explicit permissions, approval gates for high-risk behaviors, and rollback capabilities treated as core features rather than optional extras.
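The UNU principles described above, minimum necessary privilege, explicit permissions, approval gates for high-risk behaviors, and rollback as a core feature, can be made concrete in code. The sketch below is illustrative only: the class and method names are our own, not part of any framework or standard.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class BoundedAgent:
    """Minimal sketch of bounded autonomy: the agent may only invoke
    explicitly granted actions, high-risk actions sit behind a human
    approval gate, and every action registers a rollback first."""
    allowed: set[str] = field(default_factory=set)     # minimum necessary privilege
    high_risk: set[str] = field(default_factory=set)   # actions behind approval gates
    approver: Callable[[str], bool] = lambda action: False
    _undo_stack: list[Callable[[], None]] = field(default_factory=list)

    def act(self, action: str, run: Callable[[], None],
            undo: Callable[[], None]) -> str:
        if action not in self.allowed:
            return "denied: outside granted scope"
        if action in self.high_risk and not self.approver(action):
            return "blocked: awaiting human approval"
        self._undo_stack.append(undo)   # rollback registered as a core feature
        run()
        return "executed"

    def rollback(self) -> None:
        # Unwind executed actions in reverse order.
        while self._undo_stack:
            self._undo_stack.pop()()
```

The design choice worth noting is the default-deny posture: anything not explicitly granted is refused, which mirrors the "boundaries before freedom" framing rather than an allow-by-default one.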

This is part of a wider scaffolding being built across the UN family. The Global Digital Compact calls for AI governance that supports an inclusive, open, safe, and secure digital future. The UN General Assembly’s 2024 resolution emphasized the need for safe, secure, and trustworthy AI systems for sustainable development. The High-level Advisory Body’s report, Governing AI for Humanity, reinforced the case for globally inclusive governance that protects human rights while enabling innovation. Together, they are establishing the normative architecture within which national and sectoral rules are taking shape.

The UN framing has found resonance across other actors moving independently toward similar conclusions. In January 2026, Singapore’s Infocomm Media Development Authority released what it described as the world’s first comprehensive governance framework dedicated to agentic AI. Announced at the World Economic Forum in Davos, the Model AI Governance Framework for Agentic AI provides structured guidance across four dimensions: assessing and bounding risks at the outset by placing limits on agents’ autonomy and access to tools and data; defining meaningful human oversight checkpoints; implementing technical safeguards throughout the agent lifecycle; and enabling end-user responsibility through transparency and training.

Industry bodies are converging on the same grammar. The TM Forum’s autonomy governance framework calls for organizations to move beyond model governance to govern agents’ action space, requiring human sign-off for irreversible or legally binding decisions. The practical language emerging from Singapore, the UN system, and industry frameworks aligns on defining the scope of agent action before deployment, building human checkpoints into the architecture, and treating monitoring and intervention capabilities as non-negotiable rather than optional features.

“Bounded autonomy” is becoming the shared vocabulary that allows policymakers, engineers, and communicators to discuss the same underlying problem from different angles, a common grammar for governing AI that acts.

Alongside these frameworks, one regulatory instrument is now creating hard deadlines that concentrate minds. The EU AI Act, which entered into force in August 2024, is moving through a phased implementation schedule with a critical milestone approaching on 2 August 2026. On that date, the bulk of the regulation’s remaining provisions become enforceable, including full requirements for high-risk AI systems under Annex III, transparency obligations under Article 50, and the activation of national AI regulatory sandboxes across all Member States.

The Act’s treatment of agentic AI carries specific weight. While the legislation does not define “AI agent” as a separate legal category, the European Commission has confirmed that existing definitions fully cover agentic systems. An agent that receives input, processes it, and takes actions in the world falls under the Act’s definition of an AI system. If it runs on a foundation model, that model triggers General Purpose AI model obligations as well. For high-risk agentic systems, those operating in employment, credit, healthcare, education, or critical infrastructure, the Act mandates six compliance areas from August 2026: continuous risk management (Article 9), technical documentation (Article 11), traceability and record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), and incident reporting (Article 73).
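Of the six compliance areas listed above, traceability and record-keeping (Article 12) is the one most directly expressible in code: every agent action needs a record sufficient to reconstruct what happened and under whose authority. The sketch below is a minimal illustration; the field names are ours and are not mandated by the Act.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ActionRecord:
    """One entry in an Article 12-style action trail (illustrative schema)."""
    timestamp: float
    agent_id: str
    action: str
    inputs_digest: str   # hash of inputs, not raw personal data
    authorized_by: str   # human or policy that permitted the action
    outcome: str

class AuditLog:
    """Append-only trail of agent actions, exportable for auditors
    and for incident reporting under Article 73."""
    def __init__(self) -> None:
        self._records: list[ActionRecord] = []

    def record(self, **kwargs) -> None:
        self._records.append(ActionRecord(timestamp=time.time(), **kwargs))

    def export(self) -> str:
        # Serialized, human-readable trail for review and disclosure.
        return json.dumps([asdict(r) for r in self._records], indent=2)
```

Storing a digest of inputs rather than raw data keeps the trail auditable without turning the log itself into a privacy liability.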

Researchers have identified a structural challenge at the heart of the Act’s application to agents: the regulation assumes that the roles of provider, deployer, and operator are stable. In agentic systems, they frequently are not. A deployer who configures an agent with broad tool-calling rights or the ability to spawn sub-agents may carry provider-level obligations regardless of how contracts are written. Recent analysis concludes that high-risk agentic systems with untraceable behavioral drift currently cannot satisfy the Act’s essential requirements, and that the foundational compliance task is an exhaustive inventory of the agent’s external actions, data flows, connected systems, and affected persons. Non-compliance carries significant financial exposure: fines of up to EUR 15 million, or 3% of global annual turnover, for high-risk violations, and up to EUR 35 million, or 7%, for prohibited AI practices.

The technology to build capable AI agents exists. The governance infrastructure to deploy them at scale, with confidence, largely does not. KPMG’s most recent Global AI Pulse Survey finds that 91% of business leaders say data security, privacy, and risk will actively shape their AI strategies. The same survey shows that, for the second consecutive quarter, 65% of leaders cite agentic system complexity as the top barrier to demonstrating return on investment. Governance maturity, rather than budget or technical capability, is the constraint.

The more revealing finding is directional: organizations that have embedded governance into their deployment pipelines (model cards, automated output monitoring, explainability tooling, human-in-the-loop escalation for low-confidence decisions) are the ones operating with the confidence to scale. As KPMG’s Global Head of AI and Digital Innovation puts it: “There is no agentic future without trust, and no trust without governance that keeps pace.” Governance infrastructure, in other words, is what converts AI agents from a capability into an organizational asset.

This recognition is driving a new category of offerings. ISO/IEC 42001:2023, the first internationally recognized management system standard for AI, provides 38 structured controls across nine governance areas designed to help organizations operationalize responsible AI across the full system lifecycle. It functions as what some practitioners call the Rosetta Stone of AI governance, translating abstract legal and ethical requirements into concrete, auditable processes, roles, and documentation. The arrival of certifiable standards signals that governance has graduated from PowerPoint principles to operating architecture.

The practical implication of this moment is that governance cannot be applied as a compliance layer after deployment. It must be built into the systems themselves. This means defining what an agent is authorized to do before it acts, not reviewing what it did after it acts. It means designing kill switches and approval gates as core features. It means ensuring that human oversight is meaningful, not a checkbox, particularly for decisions that are irreversible, legally significant, or capable of affecting people’s rights.

Singapore’s framework explicitly warns against automation bias in supervisory roles: the presence of a human reviewer is insufficient if that reviewer lacks the context, tools, or authority to intervene effectively. The TM Forum governance framework adds a related discipline: tiered autonomy, where agents operating in low-stakes, reversible environments require lighter oversight, while those capable of moving money, changing records, or triggering legal consequences require mandatory human checkpoints and documented accountability chains.
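The tiered-autonomy discipline described above lends itself to a simple policy table: reversible, low-stakes actions run autonomously, while irreversible or legally binding ones require a mandatory human checkpoint. The tier names and policy shape below are our own illustration, not taken from the TM Forum framework itself.

```python
# Illustrative tiered-autonomy policy. Unknown tiers are treated as
# highest risk (default-deny), so a misclassified action fails safe.
TIERS: dict[str, dict[str, bool]] = {
    "observe":       {"reversible": True,  "human_checkpoint": False},
    "draft":         {"reversible": True,  "human_checkpoint": False},
    "modify_record": {"reversible": False, "human_checkpoint": True},
    "move_money":    {"reversible": False, "human_checkpoint": True},
}

def oversight_required(action_tier: str) -> bool:
    """Return True when a human checkpoint is mandatory for this tier."""
    policy = TIERS.get(action_tier, {"human_checkpoint": True})
    return policy["human_checkpoint"]
```

The fail-safe default matters as much as the table itself: an agent attempting an action the policy has never seen should meet the strictest oversight, not the lightest.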

The shape of a responsible agentic operating model is becoming clearer. It involves cognitive layers (reasoning and planning), coordination layers (multi-agent orchestration), control layers (monitoring, logging, intervention), and governance layers (accountability, policy alignment, human sign-off), each aligned so that the system functions reliably and the organization can explain, audit, and correct it.

This governance evolution carries direct consequences for communication functions, which sit at the intersection of operational decisions and the narratives those decisions generate. Communication leaders are no longer simply explaining AI to stakeholders; they are part of the infrastructure that makes governance visible, credible, and coherent.

Three roles have become concrete and urgent.

  1. Translating global norms into internal language. UN principles, EU regulation, and Singapore’s framework are written for policymakers and engineers. Someone inside the organization must convert them into the rituals, expectations, and training that shape how employees and partners actually behave. Communication leaders who understand the normative landscape can build internal cultures of bounded autonomy rather than waiting for a compliance crisis to force the conversation.
  2. Making governance visible. Governance builds trust only when stakeholders can see it in action. The move from disclosure of AI-generated content to disclosure of AI-involved decisions, specifying which workflows involved agents, under what constraints, and with what human oversight, represents a fundamental shift in transparency practice. This is the work of communication functions, and it requires investment in the design of disclosures, not just in their legal review.
  3. Acting as an early-warning system. The most dangerous gap in agentic AI deployment is the one between what an organization says it does and what its agents actually do. Communication leaders who are close enough to operations to detect when the narrative diverges from reality provide an internal audit that legal and technical teams alone cannot provide.

For organizations that want to move ahead of regulatory pressure rather than race to catch up, several design principles are coming into focus.

Boundary-first storytelling starts with what agents cannot do and who is accountable before leading with capability claims. This mirrors the UNU principle of bounded autonomy and positions the organization’s public narrative in honest alignment with its operational reality.

Action-focused transparency shifts disclosure from “this content was AI-generated” to “this decision involved AI agents operating under the following constraints and with the following human oversight.” This is the disclosure architecture that the EU AI Act’s Article 50 transparency obligations are pushing organizations toward.
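What such an action-focused disclosure might look like can be sketched as a small template. This is a hypothetical schema of our own, not a legal formulation and not prescribed by Article 50.

```python
def decision_disclosure(workflow: str, agents: list[str],
                        constraints: list[str], oversight: str) -> str:
    """Render an 'AI-involved decision' disclosure: which workflow,
    which agents, under what constraints, with what human oversight.
    The wording is illustrative, not a compliance template."""
    return (
        f"This decision in '{workflow}' involved AI agents "
        f"({', '.join(agents)}) operating under the following constraints: "
        f"{'; '.join(constraints)}. Human oversight: {oversight}."
    )
```

A disclosure generated this way, for example for a hypothetical loan-triage workflow, names the agents and their limits up front rather than appending a generic "AI was used" footnote.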

Multi-level governance narrative connects the global layer (UNESCO, UNU, UN General Assembly resolutions), the sectoral layer (EU AI Act, ISO 42001, Singapore’s MGF, TM Forum frameworks), and the internal layer (policies, training, accountability structures) into a coherent operating story that employees and external stakeholders can actually navigate.

A read-first, write-rarely posture defaults agentic autonomy to observational and low-risk functions, reserving higher autonomy for contexts where governance architecture has been explicitly designed to support it. When agents can change records, move money, or affect individual rights, human checkpoints and documented accountability chains are prerequisites, not enhancements.

Organizations that align their communication operating model with this emerging ecosystem of norms will do more than satisfy auditors. They will earn something more durable: legitimacy in a world where AI systems act on people’s lives rather than simply talk to them.

The regulatory clock is ticking. The EU AI Act’s August 2026 deadline applies full high-risk requirements to agentic systems operating in sensitive domains, with penalties designed to concentrate executive attention. Singapore’s framework is a living document, already drawing ASEAN-wide engagement. ISO 42001 is moving from early adoption to market expectation. The normative architecture being built by the UN system will outlast any individual regulation and shape the standards by which organizations are judged by governments, civil society, and the public for years to come.

The organizations that treat governance as architecture, designed in, made visible, and narrated coherently, will be positioned to deploy agentic AI with the confidence and legitimacy that this moment demands.

This article draws on original research, customer discussions and interviews, and primary source analysis. Intradiegetic supports leaders and organizations in translating evolving UN and multilateral guidance on agentic AI into concrete narratives, governance playbooks, and decision frameworks.

This is an extract of a larger report that is available upon request.

References

Why Agentic AI Needs Boundaries Before Freedom – As AI agents evolve from chat tools to actionable systems, the core challenge is how they can be con…

Prepare for EU AI Act High-Risk Obligations in 2026 – The EU AI Act’s high-risk obligations become enforceable on 2 August 2026. For development teams and…

Singapore Launches World’s First Governance Framework for … – Minister for Digital Development and Information Josephine Teo announced the Model AI Governance Fra…

New Model AI Governance Framework for Agentic AI to guide … – The framework covers four key steps: assessing risks and setting limits on agents’ powers, ensuring …

AI autonomy governance (a governance framework for agentic AI ) – The framework supplements existing enterprise policies relating to information security, data privac…

Singapore debuts world’s first governance framework for agentic AI – Singapore has launched a governance framework for agentic artificial intelligence (AI) systems, whic…

AI Act | Shaping Europe’s digital future – European Union – The AI Act entered into force on 1 August 2024, and will be fully applicable 2 years later on 2 Augu…

Implementation Timeline | EU Artificial Intelligence Act – This page lists all of the key dates you need to be aware of relating to the implementation of the E…

Timeline for the Implementation of the EU AI Act | AI Act Service Desk – The EU’s AI Act legislation applies progressively, with a full roll-out foreseen by 2 August 2027. 0…

A Deep Dive in the European AI Act: What it Means for Your … – Nestr – The most significant deadline for organisations running AI agents is August 2, 2026, when the rules …

Frequently Asked Questions | AI Act Service Desk – The AI Act entered into force on 1 August 2024. It follows a staggered entry into application, with …

AI Agents and EU AI Act: What’s Changing? – eyreACT – If your agent touches any of these use cases, Chapter III applies from August 2026 regardless of whe…

[2604.04604] AI Agents Under EU Law – arXiv – We conclude that high-risk agentic systems with untraceable behavioral drift cannot currently satisf…

EU AI Act August 2026: key obligations for high-risk AI systems – EU AI Act deadline August 2, 2026: high-risk AI obligations, conformity assessment, fines up to EUR …

AI at Scale: How 2025 Set the Stage for Agent-Driven Enterprise … – Insights from the KPMG Q4 AI Pulse Survey reveal business leader priorities that will drive agents i…

Investment and AI Agent Deployment Surge as Execution Becomes … – This quarter, KPMG International launched its first Global AI Pulse Survey, expanding the lens to ca…

KPMG: Inside the AI agent playbook driving enterprise margin gains – “The survey makes clear that sustained investment in people, training and change management is what …

ISO 42001 Controls AI Governance: 9 Key Areas for 2025 – Master ISO 42001 controls AI governance in 2025 with 38 essential controls to ensure trust, complian…

From the AI Act to NIST to ISO 42001: The State of GenAI … – LinkedIn – Discover why autonomous Agentic AI breaks traditional compliance and how to build a dynamic, future-…

Governing AI That Acts: Singapore’s New Framework for Agentic AI – On 22 January 2026, Singapore’s Infocomm Media Development Authority (IMDA) released its Model AI Go…

ASEAN leaders convene to operationalise Singapore’s new Model … – “Singapore’s Model AI Governance Framework for Agentic AI (MGF) recognises what we’ve been telling c…

