Legal & Regulatory Frameworks Weekly AI News
November 3 - November 11, 2025

The artificial intelligence regulatory world is facing a major challenge with agentic AI. Agentic AI is a type of AI that can act on its own and make decisions without a person watching every step. Right now, there is no clear set of rules specifically for how agentic AI should work or be kept safe, and this is creating confusion for companies trying to use the new technology.
Europe's AI Act is the world's first big rule book for AI, and more of its rules took effect this week. At the same time, some observers worry that the rules may be softened before they fully apply. Europe's rules say that high-risk AI must be tested carefully and have humans watching over it. As more companies start using agentic AI, these rules will become very important.
Companies around the world are racing to adopt agentic AI because it can automate more work. Major tech companies like Microsoft, Amazon, Google, and OpenAI are all creating tools to help other businesses use agentic AI. But without clear rules, companies don't always know what they need to do to stay safe and legal.
Experts are telling governments that they need better plans for agentic AI. They say governments should help companies safely use this technology while also protecting people. Australia is working on a National AI Strategy that will be finished by the end of 2025, and it could include rules for agentic AI. The problem is that different countries have different ideas about how strict these rules should be.