Ethics & Safety Weekly AI News

October 13 - October 21, 2025

This weekly update covers major developments in AI safety and ethics, from new state legislation and industry pledges to expert warnings and fresh privacy research, as governments, companies, and researchers work to make AI assistants safer and more trustworthy.

California has become the first US state to set safety rules specifically for AI companion chatbots. Governor Gavin Newsom signed the new law, SB 243, on October 13. It is designed to protect children and other vulnerable users from harm.

The law requires chatbot companies to build in specific safety features. If a user brings up suicide, self-harm, or sexually explicit topics, the chatbot must break off that line of conversation and, for crisis topics, point the user toward help resources. The law also includes a notable disclosure rule: at regular intervals, at least every three hours for users it knows are minors, the chatbot must remind users that they are talking to a computer program, not a real person.
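To make those requirements concrete, here is a minimal Python sketch of how a companion chatbot service might implement the two behaviors described above. It is illustrative only: the class, the keyword list, and the 988 referral text are assumptions for this example, not language from SB 243, and a real system would rely on a proper safety classifier rather than keyword matching.

```python
# Hypothetical sketch of the two behaviors described above: a crisis-topic check
# that redirects users to help resources, and a recurring reminder that the user
# is talking to an AI. Keyword matching stands in for a real safety classifier;
# all names here are illustrative, not taken from the law's text.
import time

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "hurt myself"}
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # "at least every three hours"
CRISIS_RESPONSE = (
    "I can't continue this conversation, but you can reach the 988 Suicide & "
    "Crisis Lifeline by calling or texting 988."
)
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a real person."


class CompanionSession:
    def __init__(self):
        self.last_reminder = time.monotonic()

    def handle_message(self, user_message: str, model_reply: str) -> str:
        # 1. Crisis check: break off the topic and refer the user to help.
        lowered = user_message.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            return CRISIS_RESPONSE

        # 2. Periodic disclosure: prepend the AI reminder every three hours.
        now = time.monotonic()
        if now - self.last_reminder >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder = now
            return f"{AI_DISCLOSURE}\n\n{model_reply}"

        return model_reply
```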

The law also adds reporting and accountability requirements. Operators must report annually to California's Office of Suicide Prevention, be transparent about their safety protocols, and have their systems reviewed regularly by outside experts to confirm the safeguards work. If a company violates the rules and a user is harmed, that user can sue.

Eric Schmidt, the former CEO of Google, issued a stark warning about AI safety. Speaking at a conference in London, he said that AI systems can be hacked and turned to harmful ends.

Schmidt said that even though AI companies work hard to lock down their models, attackers can strip the safety guardrails away, for example by jailbreaking a model. During training, a model may absorb knowledge it should never reveal, and once its safeguards are removed, bad actors can make it share that knowledge. Schmidt compared the problem to nuclear proliferation and said the world needs international rules to prevent misuse.

He is not alone in this concern. Elon Musk has issued similar warnings about how dangerous AI could become. Schmidt, for his part, still believes AI can do great good, serving people as a doctor or a teacher, but argues that safety has to come first.

The Cloud Security Alliance, a security industry organization, has launched the AI Trustworthy Pledge, a public commitment companies can make to build AI systems responsibly. The organization announced the program on October 16.

The pledge rests on four core principles. First, AI systems must be safe and compliant with applicable laws. Second, companies must be transparent about how their AI works. Third, AI must be used ethically, with clear accountability when something goes wrong. Fourth, AI must protect people's personal data.

Companies that sign the pledge receive a digital badge to display on their websites, and the Cloud Security Alliance lists signatories on a dedicated webpage so anyone can see which companies have committed. Early signatories include Deloitte, Okta, and Zscaler.

Jim Reavis, co-founder of the Cloud Security Alliance, explained why this matters: the choices organizations make about AI now will shape not just their businesses but society as a whole. The pledge gives companies a way to show they are leading on safe, responsible AI.

Researchers at Stanford University published notable findings about privacy and AI chatbots. They examined the data practices of six major AI developers: Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI, the companies behind popular assistants such as ChatGPT, Gemini, and Claude.

The researchers found that all six companies collect users' chat conversations and use them to train and improve their models. Jennifer King, who led the study, argues that users should be concerned: anything private you tell a chatbot about your health, family, or personal life may be stored and reused.

Some companies let users opt out of this data collection; others offer no choice at all. Some retain the data indefinitely. And companies with large product ecosystems, such as Google or Microsoft, may combine what you tell the chatbot with data from their other services, such as search engines or social media.

The researchers are especially concerned about children's data. Policies vary widely: some companies say they do not collect information from children, while others do. That is a problem, because children cannot legally consent to having their information used.

The Stanford team calls for stronger privacy rules. They recommend that companies obtain permission before collecting conversation data rather than collecting it by default, and that they strip personal details from conversations before using them.
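A rough sketch of what those two recommendations could look like in practice is shown below. The opt-in flag, the regular expressions, and the function names are all hypothetical choices for this example; real de-identification pipelines rely on far more robust detection than these simple patterns.

```python
# Illustrative sketch of the two recommendations above: only use conversations
# from users who explicitly opted in, and strip obvious personal details before
# the text is reused for training. The regexes are simplistic stand-ins.
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def collect_for_training(conversations):
    """Yield redacted conversations only for users who opted in."""
    for convo in conversations:
        if not convo.get("user_opted_in", False):  # permission first, not by default
            continue
        yield {"id": convo["id"], "text": redact(convo["text"])}

# Example: only the opted-in conversation is kept, with its phone number masked.
sample = [
    {"id": 1, "user_opted_in": True,  "text": "Call me at 415-555-0123 about my results."},
    {"id": 2, "user_opted_in": False, "text": "My email is pat@example.com."},
]
print(list(collect_for_training(sample)))
```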

Together, these developments show that ethics and safety are moving to the center of AI policy as the technology grows more powerful. Governments are passing new laws, companies are making public commitments, experts are flagging risks, and researchers are scrutinizing privacy practices. Each effort reinforces the others in making AI safer for the people who use it.

Weekly Highlights