Ethics & Safety Weekly AI News
November 10 - November 18, 2025

This week brought major developments in AI safety and ethics around the world. The biggest news came from New York, which passed new laws to make AI companion apps safer. These apps, which are types of AI agents that talk with people, must now detect when someone might hurt themselves and help them find crisis services. The state also passed a law making companies tell customers when they use AI to set prices based on personal information, like where someone lives or how much money they make.
Beyond New York, many countries worked on making AI safer. UNESCO created the world's first global set of rules about neurotechnology, which is technology that connects to brains. The European Union started writing detailed instructions on how AI companies must show when content is made by AI instead of people. India released guidelines that say AI systems should be fair, safe, and understandable. The United Kingdom updated its rules about online safety, especially protecting children and stopping harmful content.
One important discovery this week involved AI in hospitals. Doctors found that AI systems sometimes give wrong medical advice that could hurt patients if followed without checking first. This showed why people need to watch AI carefully and verify what it does, rather than trusting it completely. Many hospitals and healthcare leaders are now focusing on having real people review AI decisions before they are used with patients.
All these changes show that countries and organizations worldwide are taking AI safety seriously. They're making rules to protect people, especially kids. They're also making sure AI companies are honest about what their AI systems do and when AI is making decisions that affect our lives. Experts say we need to find a balance between letting AI help us and protecting people from possible harms.