Legal & Regulatory Frameworks Weekly AI News
September 8 - September 16, 2025

This week marked a turning point for AI agent regulations worldwide. Several major developments showed how governments and organizations are working to balance innovation with safety.
The biggest news came from the United States on September 10th, when Senator Ted Cruz unveiled his plan for AI rules. Cruz, who chairs the Senate Commerce Committee, wants America to take a light-touch approach to AI regulation. His plan includes five main ideas for keeping America ahead in AI development.
The most interesting part of Cruz's plan is something called the SANDBOX Act. This would create a special program where AI companies can test their agents with fewer government rules. Companies could apply to waive certain regulations for up to two years, with the possibility of extending this for up to ten years total. The idea is that innovation happens faster when there are fewer barriers.
Under this sandbox program, companies would still need to report any problems within 72 hours. They would also need to tell customers when they are using experimental AI systems. While companies would be protected from government punishment, regular people could still sue them if something goes wrong.
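The 72-hour reporting window is a hard deadline, so a sandbox participant would need to track it from the moment an incident is detected. A minimal sketch of that calculation (the function name and the example timestamp are hypothetical, not from the proposed Act):

```python
from datetime import datetime, timedelta, timezone

# Under a 72-hour rule, the filing deadline is simply detection time plus 72 hours.
REPORTING_WINDOW = timedelta(hours=72)

def report_deadline(detected_at: datetime) -> datetime:
    """Return the latest time a report could be filed for an incident
    detected at `detected_at` (timezone-aware)."""
    return detected_at + REPORTING_WINDOW

# Example: an incident detected midday UTC on September 10th
incident = datetime(2025, 9, 10, 14, 30, tzinfo=timezone.utc)
deadline = report_deadline(incident)
print(deadline.isoformat())  # 2025-09-13T14:30:00+00:00
```

Using timezone-aware timestamps matters here: a deadline computed in local "wall clock" time could drift across daylight-saving changes, while UTC arithmetic stays exact.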
Europe is taking a different path. The EU AI Act's rules began taking effect this year, creating strict requirements for AI agents. The European system sorts AI into different risk levels. AI agents used in healthcare, transportation, and education are considered high-risk and need special approval before companies can use them.
Companies that break EU AI rules face serious consequences. For the most serious violations, fines can reach 7% of a company's worldwide annual turnover. This has made many businesses pay close attention to following the rules properly.
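EU-style penalty caps are typically defined as the greater of a flat euro amount and a share of worldwide annual turnover, with the exact figures varying by violation tier. A minimal sketch of that calculation (the rate and floor in the usage example are illustrative placeholders, not legal advice):

```python
def max_fine(annual_turnover_eur: float, cap_rate: float, floor_eur: float) -> float:
    """Penalty cap modeled as the higher of a flat floor and a
    percentage of worldwide annual turnover."""
    return max(floor_eur, cap_rate * annual_turnover_eur)

# Illustrative tier: 7% of turnover or EUR 35 million, whichever is higher.
# For a company with EUR 1 billion turnover, the turnover-based cap dominates:
print(max_fine(1_000_000_000, 0.07, 35_000_000))  # 70000000.0

# For a smaller company (EUR 100 million turnover), the flat floor dominates:
print(max_fine(100_000_000, 0.07, 35_000_000))  # 35000000.0
```

The "whichever is higher" structure is why large firms pay close attention: the cap scales with revenue rather than stopping at a fixed amount.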
The EU approach focuses on building trust through certification. The World Economic Forum explained this week that companies with proper AI certificates are getting more investment money. Government buyers are also starting to require proof that AI systems are safe before they will purchase them.
Auditing AI agents is becoming a major challenge for organizations. On September 2nd, the professional group ISACA warned that traditional auditing methods don't work well for AI agents. Unlike regular computer programs that follow set rules, AI agents make their own decisions based on complex reasoning that's hard to trace.
This creates problems when something goes wrong. If a regular computer program makes a mistake, auditors can usually figure out exactly what happened by looking at the code. But when an AI agent makes a bad decision, it's much harder to understand why it happened or who should be responsible.
Businesses are adapting to these new challenges. On September 1st, NTT DATA described how AI agents are changing business services. Instead of following rigid scripts, these agents can make decisions and adapt on their own. This is making some traditional business processes much more efficient.
However, this also creates new regulatory questions. The COUNTER organization started a working group on September 5th to figure out how AI agents might change digital content tracking. When AI agents download articles or access databases on behalf of users, it's not clear how this should be counted or reported.
Healthcare is getting special attention from regulators. Many countries are creating specific rules for AI agents in medical settings. The US Food and Drug Administration announced two new AI councils in July to handle the growing number of medical AI applications. Japan has also created a comprehensive framework for AI in healthcare through its AI Promotion Act.
The focus on governance and trust is changing how companies build AI systems. Instead of just trying to make AI agents as powerful as possible, companies now need to make them trustworthy and explainable. This means building in ways to track decisions, explain reasoning, and ensure compliance with various regulations.
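One concrete way teams build in decision tracking is an append-only audit trail: every agent decision is recorded with its inputs, the choice made, and the stated rationale, so an auditor can later reconstruct what happened. The sketch below is one hypothetical design, not a standard or mandated format; all class and field names are assumptions:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    # One logged agent decision: when it happened, what the agent saw,
    # what it chose, and why (in the agent's own stated reasoning).
    timestamp: str
    inputs: dict
    decision: str
    rationale: str

class AuditTrail:
    """Append-only log of agent decisions for later review."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def log(self, inputs: dict, decision: str, rationale: str) -> None:
        self._records.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            inputs=inputs,
            decision=decision,
            rationale=rationale,
        ))

    def export(self) -> str:
        # Serialize the whole trail as JSON for an external auditor.
        return json.dumps([asdict(r) for r in self._records], indent=2)

trail = AuditTrail()
trail.log({"claim_id": "A-123"}, "approve", "matched policy threshold")
print(trail.export())
```

Even a simple structure like this addresses the auditing gap described above: instead of reverse-engineering an opaque decision after the fact, reviewers get a timestamped record of each one.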
Looking ahead, the difference between American and European approaches could shape the global AI industry. The US sandbox model emphasizes innovation and flexibility, while the EU certification system prioritizes safety and trust. Companies operating globally will need to navigate both systems successfully.