Legal & Regulatory Frameworks Weekly AI News
March 24 - April 1, 2025

The United States saw mixed progress on AI agent rules this week. Texas lawmakers introduced the Texas Responsible AI Governance Act (TRAIGA), which would ban AI systems from being used for social scoring (like rating people's behavior) and require companies to file risk assessments for AI tools used in employment or housing decisions. However, experts doubt the bill will pass, since Texas legislators generally favor business-friendly regulation. Meanwhile, California began testing its new AI disclosure tools, which will help people spot computer-generated images and videos in ads starting in 2026.
In Europe, major carmakers including Volkswagen shared their first AI safety reports showing how they check self-driving systems for errors, as required by the EU's upcoming AI Act. The reports focus on preventing accidents and protecting user data. The UK's National Cyber Security Centre also released guidance on making AI chatbots less vulnerable to hacking.
China updated its AI voice rules this week, requiring apps such as TikTok to add watermarks to videos that use synthetic voices. Game companies must now tell players when AI agents (such as non-human characters) are making decisions during gameplay. The update follows recent cases in which fake celebrity voices were used in scam ads.
Major tech firms are responding to these changes. IBM launched a free AI Governance Toolkit with templates for risk assessments and bias checks, aiming to help small businesses comply with new state laws in Colorado and California. Meanwhile, Walmart reported training over 10,000 workers on AI monitoring tools to avoid legal issues when using AI for inventory management.
Lawyers warned that hiring algorithms now face stricter rules. A new Colorado law taking effect in 2026 requires companies to notify job applicants when AI scores their resumes and lets applicants request human review. Similar laws are under discussion in New York and Illinois, prompting many HR departments to slow their AI adoption plans.
Finally, the United Nations held its first meeting about global AI agent standards, with over 50 countries agreeing to share data about AI safety incidents like chatbot failures or biased decisions. However, they couldn’t agree on rules for military AI robots, which will be discussed again in June.