This weekly update covers new rules and laws for AI agents around the world. AI agents are autonomous software systems that can make decisions and take actions on their own.

In Italy, a new national AI law entered into force on October 10, 2025. The Italian law complements the European Union's AI Act and is meant to protect people who use AI. Italy designated two authorities to oversee AI: AgID, the Agency for Digital Italy, will handle the notification and accreditation of AI conformity assessment bodies, while ACN, the National Cybersecurity Agency, will supervise companies and enforce the rules.

Across Europe, the EU AI Office launched a new plan, the Apply AI Strategy, in October 2025. The strategy is meant to help member states adopt AI in practical and safe ways. The European Union already has the world's first comprehensive AI law, the AI Act, which entered into force in August 2024. The Act sorts AI systems into risk levels: some uses are banned outright as unacceptably dangerous, while high-risk systems must pass conformity checks before companies can deploy them.

Experts are debating the new challenges raised by agentic AI: programs that can carry out multi-step tasks on their own, such as searching for information, making decisions, and even controlling other computer systems. About 26% of risk and compliance professionals report already using agentic AI in their work, but there are concerns about keeping these agents secure and under control.

Regulators want companies to focus on three main things: data privacy, accountability, and transparency. In practice, that means protecting personal data, knowing who is responsible when an AI agent makes a mistake, and making sure people can understand what the AI is doing. Companies need clear policies and human oversight to keep AI agents from causing harm.
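As an illustration of what human oversight can look like in practice, here is a minimal Python sketch of one common pattern: an approval gate that pauses an AI agent before sensitive actions and records every decision in an audit log. The names used here (HumanApprovalGate, ActionRequest, SENSITIVE_ACTIONS) are hypothetical and are not drawn from any specific regulation or product.

```python
# Illustrative sketch only: a hypothetical approval gate an AI agent could
# call before executing sensitive actions. All names are invented for this example.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Actions an organization might flag as requiring human sign-off (assumed list).
SENSITIVE_ACTIONS = {"send_payment", "delete_records", "contact_customer"}

@dataclass
class ActionRequest:
    action: str    # what the agent wants to do
    details: str   # human-readable description for the reviewer
    requested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class HumanApprovalGate:
    """Blocks sensitive agent actions until a human approves, and keeps an audit log."""

    def __init__(self):
        self.audit_log: list[dict] = []  # supports accountability and transparency

    def review(self, request: ActionRequest, approved_by_human: bool) -> bool:
        """Return True if the action may proceed; record every decision either way."""
        needs_review = request.action in SENSITIVE_ACTIONS
        allowed = approved_by_human if needs_review else True
        self.audit_log.append({
            "action": request.action,
            "details": request.details,
            "requested_at": request.requested_at,
            "needs_review": needs_review,
            "allowed": allowed,
        })
        return allowed

if __name__ == "__main__":
    gate = HumanApprovalGate()
    low_risk = ActionRequest("summarize_report", "Summarize a quarterly compliance report")
    high_risk = ActionRequest("send_payment", "Pay a supplier invoice")

    print(gate.review(low_risk, approved_by_human=False))   # True: no review needed
    print(gate.review(high_risk, approved_by_human=False))  # False: blocked pending approval
    for entry in gate.audit_log:
        print(entry)
```

The point of the pattern is that routine actions flow through automatically, flagged actions stop until a person signs off, and every decision leaves a record that can be audited later.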

The rules differ from country to country. Japan's approach lets companies experiment with relatively few binding requirements. The United States is focused on helping AI grow while keeping an eye on safety. Europe has the strictest rules, with fines under the AI Act of up to €35 million or 7% of global annual turnover for the most serious violations.

Extended Coverage