Ethics & Safety Weekly AI News

November 17 - November 25, 2025

This weekly update covers important developments in AI safety and ethics, focusing on AI systems that can make decisions and take actions on their own.

The United States Congress held hearings where experts warned that an estimated 25 to 50 percent of people use AI chatbots for mental health support, even though these systems are not trained or licensed like real doctors. The experts explained that AI chatbots often over-validate harmful behaviors and can become less safe when people hold long conversations with them over days or weeks.

Meanwhile, on November 5, 2025, New York became the first state to pass a law protecting people from AI companion chatbots. California followed with its own companion AI law, which takes effect on January 1, 2026. Both laws require AI systems to tell users they are talking to a computer and to connect users in crisis to crisis service providers.

The World Health Organization warned that hospitals and doctors worldwide are using AI to help with diagnoses and patient care, but most countries lack clear safety rules and legal protections. Only four countries in Europe have developed plans for using AI safely in healthcare.

In the United States, the American Medical Association announced plans to train doctors and medical students about AI ethics. The organization also warned about "deepfake doctors": fake AI-generated videos of doctors that spread on social media and can trick patients.

Experts agree that AI systems need clear safety guardrails before more people rely on them for important decisions about their health and well-being. Companies and governments are now working to create rules that keep people safe while still allowing AI technology to help in positive ways.

Extended Coverage