Legal & Regulatory Frameworks Weekly AI News
June 23 - July 1, 2025

The European Union is getting ready for new rules about general-purpose AI models. Starting August 2, companies must follow strict safety requirements for AI tools like chatbots and virtual assistants. The EU is creating a Code of Practice to help companies understand these new rules. Companies that don't follow the rules could pay fines of up to €35 million.
Right now, there is confusion about which AI systems are banned in Europe. The bans took effect in February 2025, but many companies are unsure what counts as "unacceptable risk". Big tech companies like Meta, TikTok, and Google are being cautious about deploying AI in Europe until they receive clearer guidance.
Countries around the world are making their own AI rules. South Korea passed an AI Basic Act that looks similar to Europe's law. At least 69 countries have created over 1,000 AI policy plans to address safety concerns.
The EU AI Act also requires companies to provide AI literacy training to workers. This means teaching employees how AI works and how to use it safely. These training obligations took effect in February 2025 and now apply across Europe.
High-risk AI systems (like those used in hospitals or banks) have more time to follow the rules. They don't need to meet all requirements until August 2027. The EU is setting up special groups to help enforce these rules fairly across all 27 member countries.
Companies worldwide are watching these rules closely. The new laws will affect how AI agents (like customer service bots or smart assistants) are built and used. Companies that make AI tools must now think more carefully about safety and compliance before releasing new products.