Ethics & Safety Weekly AI News
September 29 - October 7, 2025

This weekly update covers major developments in AI agent safety and in the rules that will change how AI systems are built and governed around the world.
The most important news comes from California. On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law. That makes California the first US state to require companies to be open about how they build and control their most powerful AI systems. The law targets what experts call frontier AI models: the most capable systems, able to reason and act across a wide range of tasks at or near human level.
The law's strictest rules apply to very large AI developers: any company earning more than $500 million a year must comply. That includes well-known companies such as OpenAI (maker of ChatGPT), Google, and Meta (Facebook). These companies must now publish detailed reports describing how they keep their frontier models safe, available for anyone to read.
The reporting deadlines are strict. When a dangerous incident occurs with an AI system, companies have 15 days to notify the state's Office of Emergency Services. If the problem poses an imminent risk of harm, they have just 24 hours. Companies that break these rules face fines of up to $1 million per violation, and penalties for repeated violations can reach $10 million.
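To make the two timelines concrete, here is a toy sketch of the deadline logic in Python. The function name and the simplified imminent-harm flag are illustrative assumptions, not language from the statute, and this is not legal guidance.

```python
from datetime import datetime, timedelta

def report_deadline(incident_time: datetime, imminent_harm: bool) -> datetime:
    """24 hours if the incident poses imminent harm, otherwise 15 days."""
    window = timedelta(hours=24) if imminent_harm else timedelta(days=15)
    return incident_time + window

incident = datetime(2025, 10, 1, 9, 0)
print(report_deadline(incident, imminent_harm=True))   # 2025-10-02 09:00:00
print(report_deadline(incident, imminent_harm=False))  # 2025-10-16 09:00:00
```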
Agentic AI systems sit at the center of these new safety concerns. Unlike a regular AI that just answers questions, agentic AI can take real actions in the world. These systems can send emails, buy things online, control other software, and make decisions without asking humans first. That autonomy makes them very powerful, and much more dangerous if something goes wrong.
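A minimal sketch can make the difference concrete. Everything below (call_model, the TOOLS table, the tool names) is a hypothetical stand-in for illustration, not any vendor's real API: a regular assistant stops after producing text for a human to read, while an agentic system parses the model's output into an action and executes it directly.

```python
def call_model(prompt: str) -> dict:
    """Stand-in for a real model call; returns a canned 'requested action'."""
    return {"tool": "place_order", "args": {"item": "analytics-pro", "cost": 4800}}

# Tools the agent may invoke: real side effects, simulated here with prints.
TOOLS = {
    "send_email": lambda to, body: print(f"[email sent to {to}]"),
    "place_order": lambda item, cost: print(f"[purchased {item} for ${cost}]"),
}

# A chat-style assistant would stop after producing text. An agentic system
# turns the model's output into an action and runs it with no human review:
request = call_model("Get us the software the sales team asked about.")
TOOLS[request["tool"]](**request["args"])  # -> [purchased analytics-pro for $4800]
```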
Safety experts explain that agentic AI creates new classes of risk. A regular model might give a wrong answer; an agent might take a wrong action. A chatbot might suggest buying expensive software, for instance, while an agent might actually buy it without permission. This is why companies need dedicated safeguards, including ways to stop these systems quickly if they start doing harmful things.
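One common mitigation pattern, sketched below using the same hypothetical names as the previous example, is to route every requested action through a policy check and a global kill switch before it runs. The $100 spending threshold is an invented example, not anything from the law.

```python
TOOLS = {"place_order": lambda item, cost: print(f"[purchased {item} for ${cost}]")}

AGENT_ENABLED = True   # kill switch: set False to halt all agent actions
SPEND_LIMIT = 100      # dollars the agent may spend without a human sign-off

def needs_human_approval(request: dict) -> bool:
    """Flag actions outside the safe envelope for manual review."""
    if request["tool"] == "place_order":
        return request["args"]["cost"] > SPEND_LIMIT
    return False

def execute(request: dict) -> None:
    if not AGENT_ENABLED:
        print("blocked: agent is disabled")           # hard stop
    elif needs_human_approval(request):
        print(f"held for review: {request['tool']}")  # human in the loop
    else:
        TOOLS[request["tool"]](**request["args"])     # within policy, run it

execute({"tool": "place_order", "args": {"item": "analytics-pro", "cost": 4800}})
# prints: held for review: place_order
```

The key design choice is that the gate wraps execution, not the model itself: whatever the model asks for, nothing happens until the policy check passes.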
The new California law also requires covered companies to publish something called a frontier AI framework: in effect, a safety manual explaining how the company will prevent its AI from causing catastrophic harm. In addition, the law establishes CalCompute, a public computing cluster meant to give researchers outside the companies the resources to study AI safety, so independent experts can check whether these systems are really safe.
Workers at AI companies also gain new protections under the law. An employee who believes the company's AI might be dangerous can report it to the government without being fired. This whistleblower protection makes it harder for companies to hide safety problems.
Not everyone is happy about the new law. Supporters argue it will prevent serious accidents: when AI systems can act on their own, the public deserves to know how companies control them. Critics worry that compliance will make building new AI systems too expensive and difficult, slowing helpful development or pushing companies out of California.
The law's effects will reach far beyond California. Many of the biggest AI companies are headquartered in the state, so they must follow these rules no matter where they sell their systems. And once the safety reports are published online, anyone in the world can read them. Other US states and other countries are watching California's law carefully and may adopt similar rules of their own.
Experts believe this is only the beginning of safety regulation for agentic AI. As these systems become more powerful and widespread, governments everywhere will likely pass further laws to make sure they stay safe and helpful rather than harmful.