Legal & Regulatory Frameworks Weekly AI News

November 3 - November 11, 2025

This week in AI regulation, attention centered on a widening gap: agentic AI still has no clear rules. Agentic AI differs from conventional AI in that it can make decisions on its own and carry out multi-step tasks without step-by-step human direction. Companies want to deploy the technology, but they do not know exactly which rules apply to it.

What is agentic AI? Agentic AI systems can reason about a goal, plan the steps needed, and complete tasks on their own. For example, an agentic AI might read incoming emails, draft replies, and send them without a human handling each step. Microsoft, Google, Amazon (AWS), and OpenAI are all building platforms so other businesses can create their own agentic AI systems. Spending on agentic AI is projected to exceed 50 billion dollars by 2028.
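To make that autonomy concrete, here is a minimal sketch of an agentic loop in plain Python. Every name in it (plan_steps, execute_step, the email scenario) is a hypothetical illustration, not any vendor's actual API; the point is simply that once the system is given a goal, no human approves the individual steps.

```python
# Minimal sketch of an agentic loop. All names are hypothetical
# illustrations, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class Step:
    action: str    # e.g. "read_inbox", "draft_reply", "send_email"
    payload: dict  # whatever the action needs

def plan_steps(goal: str) -> list[Step]:
    """Stand-in for a model call that breaks a goal into steps."""
    return [
        Step("read_inbox", {}),
        Step("draft_reply", {"tone": "polite"}),
        Step("send_email", {}),
    ]

def execute_step(step: Step) -> str:
    """Stand-in for tool use (email client, browser, other APIs)."""
    return f"executed {step.action}"

def run_agent(goal: str) -> list[str]:
    log = []
    for step in plan_steps(goal):
        # Note what is missing here: no human checkpoint between
        # planning and acting. That autonomy is the regulatory gap.
        log.append(execute_step(step))
    return log

if __name__ == "__main__":
    for entry in run_agent("answer today's customer emails"):
        print(entry)
```

That missing checkpoint between planning and acting is precisely where the regulatory questions below arise.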

The regulatory gap is a real problem. When companies build conventional AI systems, they can follow established rules such as Europe's AI Act. But agentic AI is new enough that regulators have not yet written specific rules for it, leaving companies in a "no man's land" where their obligations are uncertain. Some experts warn that this gap creates safety risks, because agentic AI can act with no one watching.

Europe's AI rules are changing. Europe has the world's first comprehensive AI law, the AI Act, which came into force in August 2024. The law sorts AI systems into risk tiers: some uses are banned outright, high-risk systems face strict checks, and lower-risk systems carry lighter obligations. This week, however, reports suggest the EU may weaken parts of the law under pressure from large technology companies. How these rules are applied could matter greatly for agentic AI in the future.

What experts are recommending. Government bodies and AI experts are urging lawmakers to plan for agentic AI now. They suggest national strategies that spell out how governments intend to support agentic AI while keeping people safe. They also argue that companies should be required to tell people when agentic AI is making decisions that affect them, as sketched below.
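As a concrete illustration of that disclosure recommendation, here is a sketch of a decision record an operator might produce whenever an agent decides something about a person. The format and every field name are assumptions made for illustration; no regulator has prescribed one.

```python
# Hypothetical decision-disclosure record. No regulator currently
# prescribes this format; it only illustrates the idea of telling
# people when an agent, not a person, decided something about them.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision: str        # what was decided
    affected_party: str  # who the decision affects
    automated: bool      # True when no human made the call
    system_id: str       # which agent decided
    timestamp: str       # when, in UTC

def disclose(decision: str, affected_party: str, system_id: str) -> str:
    """Build a disclosure notice for one agent-made decision."""
    record = DecisionRecord(
        decision=decision,
        affected_party=affected_party,
        automated=True,
        system_id=system_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this would be shown to the affected person
    # and kept for audit; here we just serialize it.
    return json.dumps(asdict(record), indent=2)

if __name__ == "__main__":
    print(disclose("loan application declined", "applicant-042", "agent-v1"))
```

Attaching a record like this to each automated decision gives affected people, and auditors, something concrete to inspect.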

Australia's approach. In October 2025 the Australian government released new guidance called the "Guidance for AI Adoption". It helps companies use AI responsibly, including by having humans review what AI systems do and by making sure those systems do not cause harm. Australia is also preparing a National AI Strategy, due by the end of 2025, which may include specific rules for agentic AI.
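One way to read the guidance's human-oversight point is as an approval gate in front of consequential actions. The sketch below is an assumption about how such a gate might look, not anything the guidance itself prescribes; the action names are invented.

```python
# Sketch of a human-in-the-loop approval gate. The design and the
# action names are assumptions, not anything the Guidance for AI
# Adoption itself prescribes.

RISKY_ACTIONS = {"send_email", "make_payment", "delete_record"}

def needs_review(action: str) -> bool:
    """Flag actions consequential enough to pause for a human."""
    return action in RISKY_ACTIONS

def run_with_oversight(actions: list[str]) -> None:
    for action in actions:
        if needs_review(action):
            answer = input(f"Agent wants to '{action}'. Approve? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"blocked:  {action}")
                continue
        print(f"executed: {action}")

if __name__ == "__main__":
    run_with_oversight(["read_inbox", "draft_reply", "send_email"])
```

The design choice is simply where to draw the line: low-stakes steps run freely, while anything on the risky list pauses for a person.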

The global picture. Countries are taking different approaches to AI rules. The United States released an AI Action Plan, and California enacted a law on frontier AI models. Canada, Brazil, and the United Kingdom are developing their own rules. Agreement across borders is hard, though, because countries hold different values and weigh the trade-offs differently.

What comes next. As more companies deploy agentic AI through 2025 and 2026, the need for clear rules will grow urgent. Regulators will likely need to write guidance specific to agentic AI to ensure it is safe, trustworthy, and does not harm people. Companies are hoping governments set clear rules soon so they know what to do. The challenge is moving fast enough to keep pace with how quickly the technology is changing.

Weekly Highlights