This weekly update covers major developments in laws and rules for AI agents and agentic AI systems, meaning software that can work on its own to make decisions and complete tasks.

Italy made news this week when its new artificial intelligence law entered into force on October 10, 2025. The law is designed to keep people safe while leaving room for AI to grow, and it works hand in hand with the European Union's AI Act, so Italy follows the same basic rules as other European countries. Italy has put two government bodies in charge. The first, the Agency for Digital Italy (AgID), will approve and check AI providers to make sure they follow the rules. The second, the National Cybersecurity Agency (ACN), will handle market surveillance, making sure companies keep following the rules after their AI products are on the market.

The European Union also made progress this week. On October 7, 2025, the European Commission launched the Apply AI Strategy, a plan to help European countries and businesses adopt AI in ways that are useful and safe. It builds on the AI Continent Action Plan published earlier in 2025, as Europe works to position itself as a world leader in safe and trustworthy AI.

The EU AI Act is the world's first comprehensive law on artificial intelligence. It entered into force on August 1, 2024, and its provisions take effect in stages. The bans on prohibited AI practices applied from February 2, 2025, obligations for general-purpose AI models began on August 2, 2025, and most remaining rules apply by August 2, 2026. Companies that break these rules can face fines of up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher.
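To see what that cap means in practice, here is a minimal sketch, assuming the higher of the two amounts applies (as stated above for the most serious violations); the function name and example figures are illustrative only.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on the largest EU AI Act fines: 35 million euros
    or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with 2 billion euros in annual turnover
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```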

Countries around the world are taking different approaches to AI rules. Japan passed its own AI law in May 2025, and its approach is very different from Europe's. Japan wants companies to be free to try new ideas without too many rules getting in the way: the law defines no risk categories, bans no AI uses, and instead relies on companies to follow good practices voluntarily. Japan has also set up regulatory sandboxes, controlled testing environments where companies can try new AI ideas safely.

The United States is taking yet another path with its AI Action Plan, which aims to speed up AI development and help U.S. companies lead the world in AI technology. The plan focuses on working with private companies, funding research, and cutting back rules that might slow progress, and the U.S. government wants American AI standards to be adopted around the world.

A big topic this week is agentic AI and the new problems it creates for regulators. Unlike simple chatbots that just answer questions, agentic AI can plan ahead, use tools and programs, work with other AI agents, and take real actions in computer systems. These agents can now fill out forms, update records, search for information, and complete whole business processes from start to finish. Big technology companies such as OpenAI, Google, and Amazon now offer platforms that let organizations build and deploy these AI agents.
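To make the distinction concrete, the sketch below shows the control loop that separates an agent from a chatbot: the system repeatedly decides on a next action, runs a tool, observes the result, and continues until the task is done. Everything here is illustrative; the planner is a stub, and none of the names correspond to a real vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    action: str            # name of a tool to call, or "finish"
    argument: str = ""     # input for that tool
    summary: str = ""      # final answer when action == "finish"

def stub_planner(history: list[str], tool_names: list[str]) -> Step:
    """Stand-in for the model that chooses the next action. A real agent
    would ask a language model to pick from tool_names given the history."""
    if not any(line.startswith("search") for line in history):
        return Step(action="search", argument="shipping status for order 123")
    return Step(action="finish", summary="Order 123 has shipped.")

def run_agent(task: str, tools: dict[str, Callable[[str], str]], max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = stub_planner(history, list(tools))       # plan the next action
        if step.action == "finish":
            return step.summary
        result = tools[step.action](step.argument)      # act: run the chosen tool
        history.append(f"{step.action} -> {result}")    # observe and remember the result
    return "Stopped: step limit reached"

tools = {"search": lambda query: f"found records matching '{query}'"}
print(run_agent("Check whether order 123 has shipped", tools))
```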

A survey of 600 risk and compliance professionals around the world found that 40% are now familiar with agentic AI and 26% are actively using it in their work. Companies use agentic AI mainly to automate routine, repetitive tasks, but also to help human workers make better decisions by surfacing information and suggestions.

The biggest worry about agentic AI is keeping it under control. When AI agents can take actions on their own, mistakes can cause bigger problems. Experts say the traditional zero trust security model, which means verifying every person or program that tries to access a system, is no longer enough on its own for agentic AI. With AI agents, systems also need to understand why the agent wants to take an action and what it is trying to accomplish.
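A rough sketch of what that extra layer might look like follows: identity is verified as in ordinary zero trust, and then the requested action is checked against the agent's declared goal. The policy table, goal names, and function are assumptions made for illustration, not an established standard or a real product's API.

```python
# Layer 1: classic zero trust (is this a known, authenticated agent?)
# Layer 2: intent (does the requested action fit the agent's declared goal?)
# The goals, actions, and policy structure below are illustrative assumptions.

ALLOWED_ACTIONS_BY_GOAL = {
    "customer_support": {"read_order", "update_shipping_address"},
    "reporting": {"read_order"},
}

def authorize(agent_id: str, goal: str, action: str, verified_agents: set[str]) -> bool:
    if agent_id not in verified_agents:                         # identity check
        return False
    return action in ALLOWED_ACTIONS_BY_GOAL.get(goal, set())  # intent check

verified = {"support-agent-7"}
print(authorize("support-agent-7", "customer_support", "update_shipping_address", verified))  # True
print(authorize("support-agent-7", "reporting", "update_shipping_address", verified))         # False
```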

Regulators and companies agree on three main priorities for agentic AI. First is data privacy and protection: AI agents handle sensitive information, so strong safeguards are needed. Second is accountability: knowing who is responsible when an AI agent makes a mistake or causes harm. Third is transparency and explainability: making sure people can understand what AI agents are doing and how they make decisions.
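One common way to ground the second and third priorities in practice is to record every agent action in a structured audit log that links the action to a responsible owner and a stated reason. The record fields below are an illustrative assumption about what such a log might contain, not a format required by any regulation.

```python
import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, owner: str, action: str,
                     reason: str, data_touched: list[str]) -> str:
    """Build one audit record; field names are illustrative assumptions."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,          # which agent acted
        "responsible_owner": owner,    # accountability: who answers for this agent
        "action": action,              # what the agent did
        "reason": reason,              # explainability: the agent's stated rationale
        "data_touched": data_touched,  # privacy: which sensitive fields were involved
    }
    return json.dumps(record)

print(log_agent_action("support-agent-7", "customer-ops-team",
                       "update_shipping_address", "customer requested a change",
                       ["order_id", "postal_address"]))
```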

Experts say companies need to act now rather than wait for perfect rules to be written. They recommend five steps, the first and third of which are sketched below: test AI agents in safe environments, track what agents do, write clear rules about what agents can and cannot do, work with standards groups, and be ready to prove to auditors that AI is being used safely. Organizations that prepare now will avoid costly problems later, when regulators start demanding proof of safe AI use.
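As a concrete example of those two steps, here is a minimal, hypothetical pre-deployment check: a written policy of what an agent may do, plus a sandbox test that the configuration respects it. All names and the policy format are illustrative assumptions.

```python
# Illustrative written policy: which actions each agent is allowed to take.
AGENT_POLICY = {
    "support-agent-7": {"read_order", "update_shipping_address"},
}

def is_permitted(agent_id: str, action: str) -> bool:
    """True only if the written policy explicitly allows this action."""
    return action in AGENT_POLICY.get(agent_id, set())

def run_sandbox_checks() -> None:
    # Run these checks in a safe environment before the agent touches production.
    assert is_permitted("support-agent-7", "read_order")            # allowed by policy
    assert not is_permitted("support-agent-7", "delete_order")      # never written into policy
    assert not is_permitted("unknown-agent", "read_order")          # unknown agents get nothing
    print("Sandbox policy checks passed; keep this result for future audits.")

run_sandbox_checks()
```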

The message from regulators is clear: AI agents offer great opportunities, but only if companies address the challenges around oversight, transparency, and compliance. As one expert put it, companies deploying agents without proper governance will likely face expensive fixes in the coming years. The key is keeping human expertise at the center of decision-making, even as AI becomes more capable of acting on its own.

Weekly Highlights