This weekly update highlights major developments in how countries and international organizations are building legal frameworks for AI agents: sophisticated computer systems that can make decisions and take actions without constant human control.

The biggest news came from the United Nations, which announced the creation of the first truly global governance system for artificial intelligence. The UN established two important new bodies: the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI. This matters because, until now, most international AI rules only included wealthy countries. In fact, only seven developed nations participated in all existing AI agreements, while 118 countries were left out completely.

The UN's new approach includes all 193 member countries, giving every nation a voice in how AI gets regulated around the world. The Scientific Panel will provide expert advice about AI risks and capabilities, helping governments make better decisions based on facts rather than guesses. For organizations using AI across different countries, this could eventually mean clearer, more consistent rules. However, experts think it will take about 5 to 7 years before these international standards become official requirements that companies must follow.

Singapore provided detailed answers about how it plans to regulate agentic AI systems. When a lawmaker asked about the risks of AI that can act autonomously and behave in unexpected ways, government officials explained their two-part strategy. First, they pointed out that many existing laws already apply to AI agents. These include the Personal Data Protection Act, the Workplace Fairness Act, and sector-specific rules for healthcare, finance, and legal services. The principle is simple: humans must remain accountable for what AI systems do, and organizations must put proper safeguards in place.

Second, Singapore is building new capabilities specifically for agentic AI. The government is running careful experiments to understand how these systems work in real situations. For example, GovTech, Singapore's technology agency, is testing agentic AI for government services. Officials emphasized they are doing this slowly and carefully, learning as they go while watching what other countries are doing.

A groundbreaking framework called "The Agentic State: Rethinking Government for the Era of Agentic AI" was released at a major digital summit. This comprehensive guide, written by more than 20 global digital government leaders, explains how autonomous AI systems can transform how governments operate. The authors include Luukas Ilves, who previously served as Chief Information Officer of Estonia, and Manuel Kilian from the Global Government Technology Centre Berlin.

The framework introduces a twelve-layer model showing how AI agents could handle core government functions. Unlike earlier technology that just automated existing paperwork, agentic AI can understand complex situations, think through problems, and take action within set boundaries. Contributors to the report include Ukraine's First Deputy Prime Minister, Mykhailo Fedorov, who noted that Ukraine is already moving toward a system where citizens can get government services with just one request or voice message.

For professionals working in cybersecurity, information governance, and legal discovery, this framework highlights both opportunities and challenges. The report addresses security risks like adversarial attacks where bad actors try to trick AI agents, and situations where AI agents might be impersonated. It emphasizes the need for transparency, accountability, and digital trust as governments begin pilot programs using agentic AI.

The Bank of England in the United Kingdom published its approach to innovation in artificial intelligence and other emerging technologies. While specific details were not fully described in available reports, the publication signals that financial regulators are actively working on frameworks for AI in banking and finance.

These developments show a clear pattern: governments worldwide recognize that agentic AI represents a fundamental shift requiring new thinking about regulation. As one report noted, industry leaders have dubbed 2025 "the year of the AI agent." Unlike chatbots that simply answer questions, these systems can set their own goals and act with far less human supervision. They can book appointments, make purchases, write computer code, and interact with external systems through application programming interfaces (APIs).
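To make the idea of agents acting through programming interfaces more concrete, here is a minimal, self-contained sketch. All names are hypothetical, no real agent framework or vendor API is used, and the example simply illustrates the pattern regulators describe: an agent selects a registered tool to act on, and higher-impact actions are blocked unless a human has explicitly approved them.

```python
# Hypothetical sketch: an agent acting through registered "tools" (programming
# interfaces), with a human-approval safeguard for higher-impact actions.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    run: Callable[..., str]
    requires_approval: bool  # safeguard: flag actions needing human sign-off


def book_appointment(date: str) -> str:
    return f"Appointment booked for {date}"


def make_purchase(item: str, amount: float) -> str:
    return f"Purchased {item} for ${amount:.2f}"


# Registry of external interfaces the agent is allowed to call.
TOOLS: Dict[str, Tool] = {
    "book_appointment": Tool("book_appointment", book_appointment, requires_approval=False),
    "make_purchase": Tool("make_purchase", make_purchase, requires_approval=True),
}


def agent_step(tool_name: str, human_approved: bool = False, **kwargs) -> str:
    """Execute one agent action, enforcing the human-accountability safeguard."""
    tool = TOOLS[tool_name]
    if tool.requires_approval and not human_approved:
        return f"Blocked: '{tool_name}' requires explicit human approval"
    return tool.run(**kwargs)


if __name__ == "__main__":
    print(agent_step("book_appointment", date="2025-10-01"))
    print(agent_step("make_purchase", item="laptop", amount=1200.0))  # blocked
    print(agent_step("make_purchase", item="laptop", amount=1200.0, human_approved=True))
```

The design choice mirrors the "human in the loop" principle described above: the agent can act routinely on low-risk tasks, but actions with real-world consequences require a recorded human decision.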

The challenge for regulators is that AI agent actions can easily cross borders since they operate online. This makes it difficult for any single country to manage the risks alone, which is why international cooperation through bodies like the UN's new groups becomes so important. Experts emphasize that while new AI-specific rules are being developed, existing international law and principles about transparency, safety, and human rights already provide an important foundation for governing these technologies.
