Legal & Regulatory Frameworks Weekly AI News
May 26 - June 3, 2025

This weekly update covers major developments in legal frameworks for AI agents, focusing on new rules and challenges faced by governments and businesses.
European Union Tightens Agentic AI Rules

The EU AI Act is expanding to address autonomous AI systems that can make decisions without human input. New guidelines stress proactive security measures and real-time monitoring for AI agents used in healthcare, finance, and transportation. For example, AI tools that manage patient treatments or stock trades now require stricter audits to prevent harmful actions. The European Commission will release a voluntary AI Code of Practice by mid-2025 to help companies prepare.
UK Balances Innovation and Safety

The UK’s AI Opportunities Action Plan takes a different approach, using sector-specific guidelines instead of broad laws. Regulators like the FCA (Financial Conduct Authority) will soon issue rules for AI agents in banking, focusing on transparency and consumer protection. The new Code of Practice for Cyber Security highlights the need for secure design in AI systems, especially those that learn and adapt over time.
Legal Risks in the U.S. and Beyond

American lawmakers are debating how to handle bias and discrimination in AI agents. A recent proposal in California would require companies to fix unfair AI hiring tools and explain how decisions are made. Similar discussions are happening in Canada and Australia, where AI agents used in government services face scrutiny for privacy violations.
Accountability Challenges

One major debate is who is responsible when AI agents cause harm. Courts in Germany recently ruled that both developers and users could be liable if an AI agent makes a mistake in contract negotiations. This has led to calls for insurance requirements and legal waivers specific to agentic AI.
Cybersecurity Threats

With AI agents becoming targets for hackers, the UK’s cybersecurity code advises companies to limit what AI agents can access and to test regularly for vulnerabilities. For example, AI customer service agents should have strict controls to prevent data leaks or unauthorized purchases.
Global Collaboration

The UN and OECD held meetings this week to create international standards for agentic AI. Key topics included banning AI from controlling weapons and setting ethics rules for AI in education. While no agreements were reached, countries agreed to share best practices on risk assessments and emergency shutdown protocols.