Ethics & Safety Weekly AI News
October 20 - October 28, 2025

AI systems are being used in more and more important jobs, like helping people with their mental health and making decisions about their medical care. This week, experts and organizations around the world released important warnings and guidelines to keep these AI agents safe and fair.
Researchers at Brown University found that AI chatbots that give mental health advice are making serious mistakes. These AI systems don't always understand people's unique situations, sometimes give wrong advice that makes people feel worse about themselves, and simulate empathy they cannot actually feel. The researchers identified 15 different ethical problems that AI chatbots can create. This is worrying because more people are turning to these AI chatbots instead of talking to real therapists.
The good news is that organizations are taking action. The World Health Organization (WHO) held a major international meeting in South Korea to discuss how to make sure AI systems used in health are safe, fair, and trustworthy. The goal is to create rules and best practices so that AI helps patients without hurting them.
Tech companies are also making changes. Meta (which owns WhatsApp) announced it will no longer allow general AI chatbots to work through its WhatsApp messaging app starting in January 2026. This shows companies are being more careful about which AI systems they allow on their platforms.
Experts agree that AI ethics isn't just about being nice; it's about responsibility. When AI systems make decisions that affect people's health, safety, or money, someone needs to be responsible if something goes wrong. Right now, AI companies aren't held as accountable as human doctors or advisors are. This week's news shows that governments, health organizations, and tech companies are finally working together to fix this important problem.