Agentic AI on the Global Regulatory Radar

Artificial intelligence is changing rapidly, and governments around the world are racing to keep up. Agentic AI is a newer class of artificial intelligence that can reason, plan, and take action without being told exactly what to do at every step. Unlike conventional AI that waits for instructions, an agentic system behaves more like a proactive coworker who notices problems, makes plans, and starts fixing them. That autonomy makes agentic AI valuable for business, but it also raises new concerns about safety and fairness.

The United Kingdom Takes Action

The United Kingdom has taken concrete steps to help businesses understand how the rules apply to agentic AI. On October 29, 2025, UK regulators announced plans for regulatory sandboxes: supervised environments where companies can test new AI technology while getting guidance from the government. The Digital Regulation Cooperation Forum (DRCF), which brings together four major UK regulators, asked businesses and experts to share their views on the regulatory challenges agentic AI raises. The DRCF wants to understand which parts of current regulation help or hinder agentic AI development, and what new rules might be needed. The consultation closes on November 6, 2025, giving companies only a short window to respond. The UK drew on lessons from supporting 20 early-stage AI businesses over the past year, concluding that "regulation isn't a roadblock—it's a roadmap": following the rules early saves time and builds customer trust.

Europe's AI Act Moves Forward

The European Union continues making progress on its groundbreaking AI Act, the world's first comprehensive AI law. The EU sorts AI systems into risk tiers, from minimal risk up to unacceptable risk, with different obligations at each tier. At the unacceptable-risk tier, the AI Act outright bans practices such as social scoring (ranking people by their behavior) and manipulative AI (systems designed to deceive people). Companies that violate these prohibitions face fines of up to €35 million or 7% of global annual revenue, whichever is larger. In July 2025, the European Commission introduced three supporting tools: guidelines that explain the rules, a voluntary Code of Practice with practical steps, and a template companies can use to disclose the data used to train their AI models. The tools are designed to work together, reducing paperwork while keeping protections in place. The AI Act applies in full to all companies from August 2, 2026, although some provisions, such as the prohibitions and the requirements for general-purpose AI models, took effect earlier.
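As a quick illustration of that "whichever is larger" penalty ceiling, here is a minimal sketch in Python. The function name, structure, and example revenue figure are ours, not taken from the Act or the Commission.

```python
def max_ai_act_fine(global_revenue_eur: float) -> float:
    """Sketch of the AI Act's penalty ceiling for prohibited practices:
    the greater of a fixed EUR 35 million or 7% of global annual revenue.
    (Function name and structure are illustrative, not official.)"""
    FIXED_CAP_EUR = 35_000_000   # fixed ceiling stated in the Act
    TURNOVER_SHARE = 0.07        # 7% of worldwide annual revenue
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_revenue_eur)

# A company with EUR 1 billion in revenue: 7% is EUR 70 million,
# which exceeds the EUR 35 million floor, so the higher figure applies.
print(f"EUR {max_ai_act_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```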

Singapore Releases Agentic AI Guidelines

Across the world in Singapore, regulators released draft guidelines aimed specifically at agentic AI and invited feedback from businesses through December 31, 2025. Singapore's approach centers on accountability, data quality, security, and human oversight, making sure that humans stay in charge of important decisions. The framework is voluntary: companies choose to adopt it rather than being compelled by law. That contrasts with the EU's mandatory approach and shows that countries are trying different strategies to balance innovation and protection.

Different Approaches Around the World

Countries are taking markedly different paths to regulating agentic AI. The United States has moved toward lighter-touch regulation, with the 2025 AI Action Plan favoring sector-specific oversight over a single law covering all AI. In practice, that means banking regulators set the rules for banking AI, healthcare regulators for healthcare AI, and so on. Canada, meanwhile, is still waiting for its proposed Artificial Intelligence and Data Act to become law, though the act may align with EU rules if it passes. Brazil is developing a three-tiered system that classifies AI systems as banned, regulated, or low-risk according to the harm they could cause, as sketched below. For companies operating across borders, these divergent approaches mean complying with many different sets of rules.
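To make Brazil's three-tier idea concrete, here is a hypothetical sketch in Python. The tier names mirror the description above, but the enum, the function, and the obligation strings are our own invention, not drawn from any draft of the Brazilian law.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical model of the three tiers described for Brazil's
    draft framework; everything here is illustrative."""
    BANNED = "banned"        # uses judged too dangerous to allow at all
    REGULATED = "regulated"  # permitted, but subject to obligations
    LOW_RISK = "low_risk"    # permitted with minimal requirements

def obligations_for(tier: RiskTier) -> str:
    # Placeholder obligations; the real ones would come from the final
    # text of the Brazilian law, which has not yet been enacted.
    return {
        RiskTier.BANNED: "prohibited outright",
        RiskTier.REGULATED: "registration, testing, and human oversight",
        RiskTier.LOW_RISK: "basic transparency only",
    }[tier]

print(obligations_for(RiskTier.REGULATED))
```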

The Challenge of Moving Fast Enough

The biggest problem regulators face is that agentic AI is advancing faster than laws can adapt. When an AI system can make decisions and act on its own, new questions arise about who is responsible when something goes wrong. Governments are experimenting with three main options: traditional government regulation, which sets clear rules and penalizes bad actors; industry self-regulation, where companies write their own codes of conduct; and co-regulation, which combines the two. Each approach has trade-offs, and the right answer may differ across types of agentic AI. What is clear is that business leaders have an important role in ensuring agentic AI is developed safely and fairly.
