Ethics & Safety Weekly AI News

September 29 - October 7, 2025

This weekly update covers the latest developments in AI agent safety and governance.

The biggest story comes from California. On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, the first law of its kind in the United States. It requires companies building the most advanced AI systems to disclose publicly how they keep those systems safe.

The law focuses on what it calls frontier AI models, the largest and most capable systems at the cutting edge of the field. Developers such as OpenAI, Google, and Meta must now publish reports describing how they prevent their models from causing serious harm.

Companies that break the rules face fines of up to $1 million per violation. They must report dangerous incidents involving their AI to state authorities within 15 days, and within 24 hours when an incident poses an imminent threat.

Agentic AI systems differ from conventional AI because they can take actions on their own: sending emails, making purchases, or changing computer settings without asking a person first. That autonomy makes them more helpful, but also more risky.
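
To make "taking actions on their own" concrete, here is a minimal Python sketch of an agent loop. It is illustrative only: the TOOLS registry, the plan_next_action stub, and the tool functions are hypothetical stand-ins for what a real agent framework would provide.

    # Minimal sketch of an agentic loop: a planner picks a tool and its
    # arguments, and the harness executes them with no human in the loop.
    # TOOLS and plan_next_action are hypothetical stand-ins, not a real API.

    def send_email(to: str, body: str) -> str:
        return f"email sent to {to}"              # placeholder side effect

    def make_purchase(item: str, usd: float) -> str:
        return f"bought {item} for ${usd:.2f}"    # placeholder side effect

    TOOLS = {"send_email": send_email, "make_purchase": make_purchase}

    def plan_next_action(goal: str, step: int):
        """Stand-in for a model call returning (tool_name, kwargs) or None."""
        scripted = [
            ("send_email", {"to": "vendor@example.com", "body": goal}),
            ("make_purchase", {"item": "license", "usd": 49.0}),
            None,                                  # None means the agent is done
        ]
        return scripted[step]

    def run_agent(goal: str) -> None:
        step = 0
        while (action := plan_next_action(goal, step)) is not None:
            tool_name, kwargs = action
            result = TOOLS[tool_name](**kwargs)    # acts immediately, unattended
            print(f"step {step}: {tool_name} -> {result}")
            step += 1

    run_agent("renew the software license")

The point of the sketch is the line inside the loop: the tool runs as soon as the planner names it, which is exactly the property the new safety rules are concerned with.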

Safety experts argue that these systems need their own safety rules. Because they work autonomously, they can cause larger problems than conventional AI before anyone notices. Companies deploying them need to monitor them closely and have reliable ways to stop them when something goes wrong.
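
One way to read "monitor them closely and have ways to stop them" in engineering terms is a supervision layer between the agent and its tools. The sketch below is a hedged illustration under assumed names (RISKY_TOOLS, KillSwitch, and guarded_call are all hypothetical): risky actions are escalated to a human, a step budget caps unattended activity, a kill switch halts execution, and every call is logged for review.

    # Hedged sketch of agent oversight: approval gate, step budget,
    # kill switch, and an audit trail. All names here are illustrative.

    RISKY_TOOLS = {"make_purchase"}    # hypothetical policy: purchases need sign-off
    MAX_AUTONOMOUS_STEPS = 20          # cap on actions taken without human review

    class KillSwitch:
        """Operator-controlled flag checked before every tool call."""
        def __init__(self) -> None:
            self.tripped = False

        def trip(self) -> None:
            self.tripped = True

    def guarded_call(tool_name: str, kwargs: dict, tools: dict,
                     switch: KillSwitch, audit_log: list, step: int):
        if switch.tripped:
            raise RuntimeError("halted: operator tripped the kill switch")
        if step >= MAX_AUTONOMOUS_STEPS:
            raise RuntimeError("halted: autonomous step budget exhausted")
        if tool_name in RISKY_TOOLS:   # escalate to a human instead of acting
            if input(f"approve {tool_name}{kwargs}? [y/N] ").strip().lower() != "y":
                audit_log.append((step, tool_name, kwargs, "denied"))
                return None
        result = tools[tool_name](**kwargs)
        audit_log.append((step, tool_name, kwargs, result))  # reviewable trail
        return result

    # Usage: the same kind of hypothetical tool, now routed through the guard.
    tools = {"make_purchase": lambda item, usd: f"bought {item} for ${usd:.2f}"}
    log: list = []
    guarded_call("make_purchase", {"item": "license", "usd": 49.0},
                 tools, KillSwitch(), log, step=0)
    print(log)

A real deployment would replace the input() prompt with a proper review queue, but the structure, a checkpoint in front of every side effect, is the core idea.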

Reaction to the law is mixed. Supporters believe it will make AI meaningfully safer. Critics worry the rules are too strict and too costly to follow, and that they could slow the development of new AI systems.

The law's reach extends well beyond California: most major AI companies have offices in the state and will have to comply. Other states and countries may follow with similar legislation.

Extended Coverage