Ethics & Safety Weekly AI News

October 13 - October 21, 2025

This weekly update brings important news about keeping AI safe and ethical. Several major developments this week affect how AI assistants work and how they protect users.

California became the first US state to pass a law about AI companion chatbots. The new law, SB 243, requires companies to add safety features to their chatbots. These chatbots must block conversations about suicide, self-harm, or sexually explicit content, especially with children. For users who are minors, the chatbot must also send a reminder every three hours that they are talking to a computer, not a real person.

Eric Schmidt, the former CEO of Google, warned that AI systems can be hacked and turned dangerous. He explained that bad actors can remove safety rules from AI systems. This means AI that was built to be helpful could be changed to do harmful things. Schmidt said we need global rules to stop this from happening, similar to the rules about nuclear weapons.

A group called the Cloud Security Alliance created something called the AI Trustworthy Pledge. This pledge lets companies promise to build AI systems that are safe, fair, and protect privacy. Companies that sign the pledge get a special badge to show customers they care about safety. Big companies like Okta, Deloitte, and Zscaler have already signed.

Researchers at Stanford University found that AI chatbots collect private information from conversations. Six major AI companies use what people tell their chatbots to train their systems. This means personal details you share might be stored and used without your knowledge. The researchers said people should be very careful about what they tell AI assistants.

Extended Coverage