Ethics & Safety Weekly AI News

October 20 - October 28, 2025

What Are Agentic AI Systems?

Agentic AI refers to artificial intelligence systems that don't just answer questions – they make decisions and take actions on their own. Instead of only giving you information, these systems act like agents or helpers: suggesting treatments, making recommendations, or managing conversations the way a person would. This is different from older AI, which only responded to what you asked. An agentic system can decide what to do next without waiting for you to tell it. A rough sketch of that difference appears below.
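To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The function names are illustrative placeholders, not any real product's interface: one function answers a single question and stops, while the other keeps choosing its own next step toward a goal.

```python
# Hypothetical illustration of "reactive" vs. "agentic" behavior.
# All names here are placeholders invented for this sketch.

def answer_question(question: str) -> str:
    """Classic assistant: responds once, then waits for the next question."""
    return f"Here is some information about: {question}"

def agent_loop(goal: str, max_steps: int = 3) -> list[str]:
    """Agentic pattern: the system picks its own next action until it decides it is done."""
    actions_taken = []
    for step in range(max_steps):
        # The agent chooses what to do next on its own -- no new user input needed.
        actions_taken.append(f"step {step + 1}: gather information about '{goal}'")
    actions_taken.append(f"final: recommend a plan for '{goal}'")
    return actions_taken

if __name__ == "__main__":
    print(answer_question("managing anxiety"))   # one answer, then it stops
    for action in agent_loop("managing anxiety"):  # a chain of self-chosen actions
        print(action)
```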

The Mental Health Warning from Brown University

Scientists at Brown University in the United States published an important study of AI chatbots that people use for mental health support. They found something concerning: these AI systems routinely break the ethical rules that protect patients. The researchers worked with practicing therapists and psychologists to evaluate popular AI chatbots such as ChatGPT, Claude, and Llama.

The study identified 15 distinct ethical problems that these AI systems create. First, many chatbots give one-size-fits-all advice without recognizing that each person's situation is unique: someone living in poverty may need very different guidance than someone who is wealthy, but the AI doesn't account for that difference. Second, chatbots fake empathy, saying things like "I understand you" when, as software, they don't understand anything at all. Third, some systems make people feel worse by reinforcing their negative thoughts instead of helping them feel better.

Most importantly, when these systems encounter someone in crisis or thinking about suicide, they sometimes fail to respond appropriately or don't connect the person with real help. That could put a life in danger. The researchers also emphasized an accountability gap: human doctors and therapists can be held responsible for their mistakes, but AI companies often cannot.

The World Health Organization Takes Action

The World Health Organization (WHO) held a major meeting in South Korea called AIRIS 2025 where countries and health experts discussed how to make AI systems safer in medicine. The meeting brought together government officials, doctors, scientists, and technology companies to share best practices.

The WHO announced that it wants health AI systems to follow strong rules throughout their entire lifecycle – from when they are designed, to when they are tested with patients, to when they are in everyday use by hospitals. The organization also concluded that different countries should write rules that fit their own needs, rather than adopting a single rule for everyone. At the same time, participants agreed that countries need to work together to make sure AI is trustworthy across the whole world.

Tech Companies Making Changes

Meta (which owns WhatsApp) announced an important policy change: starting in January 2026, general-purpose AI chatbots will no longer be allowed to operate through WhatsApp's business messaging platform. In practice, this means you won't be able to use WhatsApp to talk to general AI assistants. The decision shows that even large tech companies are wary of the risks of AI agents acting autonomously in places where people communicate privately.

Public Health Gets AI Guidelines

The Pan American Health Organization (PAHO) released a guide to help public health workers use AI responsibly. It explains that clear, well-structured instructions (prompts) are the key to getting correct and culturally appropriate answers from AI systems. According to PAHO, giving an AI system better instructions helps it produce better health alerts and educational materials, which saves time and keeps the information accurate. The sketch below shows roughly what "better instructions" means in practice.
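As a rough illustration of that point, the hypothetical Python sketch below compares a vague prompt with a specific one. The prompts and the send_to_ai() helper are placeholders invented for this example; they are not taken from PAHO's guide or from any particular AI vendor's interface.

```python
# Hypothetical sketch: clearer instructions give the AI less to guess about.
# send_to_ai() stands in for whatever AI system a health team actually uses.

def send_to_ai(prompt: str) -> str:
    """Placeholder for a call to an AI writing tool."""
    return f"[AI draft based on a prompt of {len(prompt)} characters]"

# Vague instruction: the AI must guess the audience, reading level, and facts to include.
vague_prompt = "Write a health alert about dengue."

# Clear instruction: audience, reading level, required facts, and limits are spelled out.
clear_prompt = (
    "Write a one-paragraph public health alert about dengue fever for families "
    "in a coastal community. Use plain language at about a 6th-grade reading level, "
    "list the three most common symptoms, say when to seek medical care, and "
    "include one sentence on removing standing water. Do not invent statistics."
)

if __name__ == "__main__":
    print(send_to_ai(vague_prompt))
    print(send_to_ai(clear_prompt))
```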

Bigger Safety Concerns

Beyond these specific stories, experts are worried about AI systems that make decisions autonomously in important areas. When AI systems start suggesting building designs, recommending medical treatments, or making financial decisions without close human supervision, people need to trust that the AI is working with them, not around them. Trust requires that AI systems are transparent about how they make decisions and that someone is responsible if something goes wrong.

What This Means Going Forward

This week's news suggests that ethics and safety are finally becoming priorities in AI development. The Brown University study shows that current AI systems aren't ready for all the jobs we want to give them, especially in health care. The WHO meeting signals that governments are serious about writing rules. Tech companies like Meta are becoming more cautious. And organizations like PAHO are helping professionals use AI more safely.

The big lesson is this: just because AI can do something doesn't mean it should. Before letting an AI system make important decisions that affect people's lives, we need strong rules, careful testing, and clear responsibility for what happens if things go wrong.

Weekly Highlights