On 22 January 2026, Singapore unveiled the Model AI Governance Framework for Agentic AI (MGF) at the World Economic Forum 2026, reinforcing its commitment to keeping pace with rapid advances in artificial intelligence (AI). The framework is the world's first dedicated governance model for agentic AI: systems capable of independently reasoning, planning, and executing tasks on behalf of humans.
Although the MGF does not impose binding legal obligations, it provides a strong indication of Singapore's regulatory trajectory and establishes practical best practices for industry adoption. The framework builds upon Singapore's established suite of AI governance initiatives, including the 2019 Model AI Governance Framework, the AI Verify testing framework, and the Global AI Assurance Pilot launched in 2025. What sets this new framework apart from previous initiatives, however, is its focus on the unique risks posed by increasingly autonomous AI tools, such as unauthorised actions, data misuse, biased decision-making, and systemic disruptions.
Agentic AI refers to systems that can plan, reason, and act across multiple steps to achieve objectives with minimal human intervention. Unlike generative AI, which produces outputs in response to prompts, agentic AI can initiate actions, adapt to new information, and interact with other agents or systems to complete tasks autonomously.
At the core of many agentic systems are language models that act as the central brain of the agent. These models interpret natural language instructions, devise strategies to achieve goals, and then activate connected tools such as calculators, calendars, and application interfaces. These tools enable the agent to perform tasks like updating records, processing payments, or controlling devices.
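To make this pattern concrete, below is a minimal Python sketch of the "model as brain, tools as hands" loop described above. Everything in it is illustrative: `call_model` stands in for whatever chat-completion API the system uses, and the two tools are toy stubs, not real integrations.

```python
# Illustrative only: a minimal agent loop in which a language model
# interprets an instruction and chooses which connected tool to invoke.
import json

def update_record(record_id: str, status: str) -> str:
    return f"record {record_id} set to {status}"

def process_payment(amount: float, currency: str) -> str:
    return f"paid {amount} {currency}"

TOOLS = {"update_record": update_record, "process_payment": process_payment}

def call_model(instruction: str) -> str:
    # Placeholder: a real system would send the instruction plus tool
    # schemas to a language model and receive a structured tool call back.
    return json.dumps({"tool": "update_record",
                       "args": {"record_id": "INV-42", "status": "closed"}})

def run_agent(instruction: str) -> str:
    decision = json.loads(call_model(instruction))
    tool = TOOLS[decision["tool"]]    # dispatch to the chosen tool
    return tool(**decision["args"])   # the agent acts on its environment

print(run_agent("Close invoice INV-42"))
```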
Agentic systems may be deterministic, producing consistent outputs for identical inputs, or non-deterministic, where results vary even with the same input. The latter introduces unpredictability and requires stronger oversight and governance. In practice, agentic AI often involves multiple agents working in parallel, each specialising in a task, which increases efficiency but also compounds risk if errors cascade across the system.
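The deterministic versus non-deterministic distinction can be illustrated with a toy planner; the scoring and temperature mechanics below are a generic sampling sketch, not anything prescribed by the MGF.

```python
# Toy planner: pick the agent's next action from scored candidates.
import math
import random

def choose_action(candidates, temperature=0.0):
    if temperature == 0:
        # Deterministic: identical inputs always yield the same action.
        return max(candidates, key=lambda c: c[1])[0]
    # Non-deterministic: sample, so identical inputs can yield different
    # actions across runs; higher temperature flattens the distribution.
    weights = [math.exp(score / temperature) for _, score in candidates]
    return random.choices([a for a, _ in candidates], weights=weights)[0]

actions = [("approve_refund", 0.9), ("escalate_to_human", 0.7), ("reject", 0.1)]
print(choose_action(actions))                   # same result every run
print(choose_action(actions, temperature=1.0))  # may vary run to run
```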
While risks such as hallucination and bias are already associated with AI, they can cause greater harm in an agentic AI context because errors may replicate across multiple outputs and processes.
The autonomous nature of agentic AI introduces several unique risk categories that organisations must address. The MGF identifies five categories:
Because agentic AI systems are adaptive and capable of acting directly on their environment, organisations must evaluate whether a proposed use case is appropriate before deployment. Risk assessment should consider both the impact (severity if something goes wrong) and the likelihood (probability of error). Key factors include:
Once a suitable use case is identified, risks should be bounded through design. This includes limiting agents to the minimum tools and data required, enforcing standard operating procedures for workflows, and creating mechanisms to disable agents if they malfunction. Organisations should also implement agent identity management, assigning unique identities to each agent, linking them to accountable human supervisors, and ensuring permissions cannot exceed those of the human user.
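As a rough sketch of how agent identity management and least-privilege permissioning might look in code, consider the following; the class names and permission strings are hypothetical and not drawn from the framework.

```python
# Illustrative: each agent has a unique identity, is linked to an
# accountable human supervisor, and can never hold permissions the
# supervisor lacks. A kill switch allows disabling a malfunctioning agent.
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanUser:
    name: str
    permissions: frozenset

@dataclass
class AgentIdentity:
    agent_id: str
    supervisor: HumanUser
    requested: frozenset
    enabled: bool = True  # kill switch: set False to disable the agent

    @property
    def permissions(self) -> frozenset:
        # Least privilege: the agent can never exceed its supervisor's rights.
        return self.requested & self.supervisor.permissions

    def authorise(self, action: str) -> bool:
        return self.enabled and action in self.permissions

alice = HumanUser("alice", frozenset({"read_crm", "update_crm"}))
agent = AgentIdentity("crm-agent-01", supervisor=alice,
                      requested=frozenset({"update_crm", "process_payments"}))

print(agent.authorise("update_crm"))        # True: within both grants
print(agent.authorise("process_payments"))  # False: supervisor lacks this right
agent.enabled = False                       # disable mechanism on malfunction
print(agent.authorise("update_crm"))        # False: agent switched off
```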
The framework makes clear that responsibility ultimately lies with the organisations and individuals overseeing agentic AI. Accountability should be distributed across teams:
To ensure meaningful oversight, organisations should establish checkpoints requiring human approval before sensitive or irreversible actions (for example, payments or deletions) or when an agent behaves unusually. Oversight should be audited regularly, with training to help humans recognise common failure modes and avoid automation bias.
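A checkpoint of this kind might be implemented as a policy gate like the Python sketch below; the action categories and payment threshold are invented for the example.

```python
# Illustrative checkpoint pattern: sensitive or irreversible actions pause
# for human sign-off; routine actions proceed automatically.
SENSITIVE_ACTIONS = {"payment", "deletion"}
PAYMENT_LIMIT = 100.0  # hypothetical policy threshold

def requires_human_approval(action: str, amount: float = 0.0) -> bool:
    return action in SENSITIVE_ACTIONS or amount > PAYMENT_LIMIT

def execute(action: str, amount: float = 0.0, approver=None) -> str:
    if requires_human_approval(action, amount):
        if approver is None:
            return f"BLOCKED: '{action}' held for human approval"
        # A real system would also log the approver's identity for audit.
        return f"{action} executed after approval by {approver}"
    return f"{action} executed automatically"

print(execute("status_update"))
print(execute("payment", amount=250.0))
print(execute("payment", amount=250.0, approver="j.tan"))
```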
Technical safeguards must be embedded at every stage:
Organisations must empower end users to interact responsibly with agentic AI. Transparency is critical: users should be informed when they are engaging with an agent, what actions it can perform, what data it can access, and how to escalate issues.
For external users, such as customers, clear communication about limitations and human escalation points is essential; for internal users, such as employees integrating agents into workflows, training should cover best practices, oversight techniques, and common pitfalls. As agents take over routine tasks, organisations should ensure employees continue to develop and maintain core skills through training and exposure.
The MGF sets out clear parameters for the responsible use of agentic AI, offering organisations practical guidance to build trust in the deployment of advanced AI technologies. While the framework is not legally binding, it signals Singapore’s regulatory direction and establishes best practices that companies can adopt today.
Organisations should begin by reviewing their governance structures against the framework’s four dimensions, addressing any gaps, and strengthening policies and oversight mechanisms. As the MGF is designed to evolve, businesses should stay engaged with future updates and industry developments. For support in assessing the impact of this framework, including aligning vendor contracts, governance protocols, or user policies, please contact the authors or your usual Hogan Lovells representative.
Authored by Charmian Aw and Ciara O'Leary.