Legal & Regulatory Frameworks Weekly AI News
June 16 - June 24, 2025

The European Union's AI Act continues to shape global rules for artificial intelligence. Starting in August 2025, new requirements will take effect for general-purpose AI models, which form the foundation for many agentic AI systems that can act independently. Companies must ensure transparency in how these systems operate and comply with copyright laws. For models carrying potential systemic risks—those that could cause widespread harm—providers must conduct risk assessments and implement safety measures.
Significant uncertainty persists because the EU's AI Office has not yet clarified which AI systems fall under the "unacceptable risk" category. This ambiguity has led major technology companies to delay deploying advanced AI agents in European markets. Businesses are awaiting a forthcoming Code of Practice, expected to provide detailed compliance guidance before the August deadline.
EU member states are establishing national enforcement frameworks aligned with the AI Act. Ireland has announced its Regulation of Artificial Intelligence Bill, which will create specific governance rules and penalties. This development is significant because Ireland hosts the European headquarters of tech giants like Meta, TikTok, and Google. Penalties for violations could reach €35 million or 7% of global annual revenue, whichever is higher.
Globally, regulatory approaches remain fragmented. At least 69 countries have proposed over 1,000 distinct AI policy initiatives. This patchwork of regulations complicates compliance for companies operating across borders. Some nations prioritize innovation while others emphasize control, creating a challenging environment for deploying agentic AI internationally.
In the United States, state-level legislation dominates AI governance. The 2025 legislative session has seen proposals targeting AI bias in hiring, personal data privacy, and government AI applications. Multiple states are establishing task forces to study AI's impact and recommend safeguards, particularly for high-risk uses like law enforcement and financial services.
For agentic AI developers, the EU's upcoming rules mean increased scrutiny of autonomous decision-making capabilities. Systems using general-purpose models must demonstrate robust risk management protocols, especially for applications in healthcare, finance, and critical infrastructure. The requirement for human oversight mechanisms will directly affect how autonomous agents operate in real-world scenarios.
The Code of Practice under development by the EU's AI Office will be critical for clarifying compliance pathways. This document will outline practical steps for implementing the AI Act's requirements, serving as the primary reference for companies building agentic AI systems. Timely publication is essential to avoid disruptions in AI innovation across Europe.
Looking ahead, the August 2025 deadline adds urgency to regulatory alignment worldwide. Companies developing agentic AI must navigate varying international standards while preparing for the EU's stringent framework. How smoothly these new rules take effect will depend largely on clear guidance from the AI Office in the coming weeks.