Legal & Regulatory Frameworks Weekly AI News
September 22 - September 30, 2025

This weekly update covers major developments in AI regulation as governments worldwide rush to create new rules for artificial intelligence agents - smart computer systems that can work independently and make their own decisions.
California Takes Action Against AI Bosses
California made the biggest news this week when Governor Gavin Newsom announced on September 24 that he plans to sign the "No Robo Bosses" Act into law. This groundbreaking law will bar companies from using AI systems as the sole basis for important job decisions such as firing or disciplining workers.
The new California law also requires companies to give workers written notice at least 30 days before they start using AI systems to make decisions about hiring, work performance, or scheduling. Companies already using these systems have until April 1, 2026, to notify their workers. Governor Newsom said California has a responsibility to lead the way in making AI rules that balance new technology with protecting workers.
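For teams tracking these requirements, the two timing rules reduce to simple date arithmetic. The sketch below is a minimal illustration, not legal advice: the function and variable names are hypothetical, and the statute's actual text governs.

```python
from datetime import date, timedelta

# Illustrative encoding of the two deadlines described above (hypothetical helper).
NOTICE_LEAD_TIME = timedelta(days=30)          # written notice at least 30 days ahead
EXISTING_SYSTEMS_DEADLINE = date(2026, 4, 1)   # cutoff for systems already in use

def notice_is_timely(notice_sent: date, ai_system_start: date, today: date) -> bool:
    """Check worker-notice timing under the rules described above."""
    if ai_system_start > today:
        # New deployment: notice must precede the start date by at least 30 days.
        return notice_sent + NOTICE_LEAD_TIME <= ai_system_start
    # System already in use: notice must go out by April 1, 2026.
    return notice_sent <= EXISTING_SYSTEMS_DEADLINE

# Example: notice sent January 2, 2026 for a system starting March 1, 2026 is timely.
print(notice_is_timely(date(2026, 1, 2), date(2026, 3, 1), today=date(2026, 1, 2)))  # True
```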
United Nations Launches Global AI Control Project
On September 25, the United Nations took a huge step toward controlling AI worldwide by creating two new international groups. All 193 member countries agreed to form the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. These groups will work together to make sure AI develops in ways that respect human rights and support sustainable development.
UN Secretary-General António Guterres explained that these new mechanisms represent critical steps toward building a global AI ecosystem that advances technology while protecting human rights. The UN is particularly worried about AI being used in weapons and wants a ban in place by 2026 on lethal autonomous weapons that can kill without human control.
However, not all countries agree on how to control AI. While some nations want the UN Security Council to lead AI oversight, others like Russia and several African countries think the discussions should happen in broader forums to make sure developing nations can help shape the rules.
The Challenge of Agentic AI
Experts are especially concerned about "agentic AI": AI systems that can pursue goals, make decisions, and adapt without constant human supervision. Unlike conventional AI that follows specific instructions, agentic AI can act more independently and change its approach based on what it learns.
A survey found that 69% of AI experts believe agentic AI needs completely new management approaches because it represents such a big change from previous AI technology. The main worries include blurred accountability (not knowing who's responsible when something goes wrong), increased security risks, and inconsistent results if proper safeguards aren't put in place.
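To make the safeguard discussion concrete, here is a minimal sketch of one common pattern: an agent loop that logs every step for accountability and pauses for human sign-off before high-impact actions. All names here (plan_next_action, execute, human_approves) are hypothetical placeholders, not any real framework's API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

# Actions that should never run on the agent's authority alone (illustrative list).
HIGH_IMPACT = {"terminate_employee", "transfer_funds", "delete_records"}

def run_agent(goal, plan_next_action, execute, human_approves, max_steps=10):
    """Pursue a goal step by step, pausing for human approval on risky actions."""
    for step in range(max_steps):
        action, args = plan_next_action(goal)                    # agent decides what to do next
        log.info("step %d: proposed %s %s", step, action, args)  # audit trail for accountability
        if action == "done":
            return
        if action in HIGH_IMPACT and not human_approves(action, args):
            log.info("step %d: %s blocked by human reviewer", step, action)
            continue                                             # agent must pick another path
        execute(action, args)
    log.warning("stopped after %d steps without finishing", max_steps)
```

The design choice this sketch illustrates is exactly the one the survey respondents raise: the log answers "who decided what, and when," and the approval gate keeps a human in the loop for the decisions that matter most.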
Europe Sets the Standard with Heavy Fines
The European Union continues to lead global AI regulation with its AI Act, whose obligations are phasing in through 2026, with key provisions already in force this year. The law uses a risk-based approach, meaning more dangerous AI systems face stricter rules. Companies that break these rules can face massive fines of up to 35 million euros or 7% of their global annual revenue, whichever is higher.
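To put that ceiling in perspective, the cap works out to the greater of the fixed amount and the revenue share. A minimal worked example, using only the top-tier figures quoted above:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on a top-tier AI Act fine: EUR 35M or 7% of revenue, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with EUR 2 billion in global annual revenue faces up to EUR 140 million,
# because 7% of revenue exceeds the EUR 35 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```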
These enormous penalties are already changing how tech companies around the world design their AI systems. Major companies like Microsoft, Google, and Amazon are building ethical AI principles into their platforms to prepare for future compliance requirements.
United Kingdom Takes Different Approach
Unlike the EU's detailed laws, the United Kingdom published guidance in September showing it will take a different path. Instead of creating one comprehensive AI law, the UK will use existing government agencies to regulate AI in their specific areas under five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The UK approach allows for more flexibility but also creates uncertainty for businesses because different agencies might interpret the rules differently. British officials say they might need to create specific laws later to address gaps, especially for complex General Purpose AI systems, but the first legislation probably won't come before late 2026.
Market Impact and Future Outlook
These regulatory changes are already affecting the technology industry. Companies developing AI systems with unclear or "black box" operations may struggle to meet new transparency requirements, potentially leading to industry consolidation as smaller companies get bought by larger ones that can afford compliance costs.
Meanwhile, cybersecurity and governance solution providers like IBM, CrowdStrike, and Darktrace are expanding their services to help with AI model security. Consulting firms are also offering new services that combine compliance, ethics, and strategy to help businesses navigate the rapidly changing regulatory landscape.