Ethics & Safety Weekly AI News

August 4 - August 12, 2025

The world of AI agents and ethics saw major developments this week, with new rules, research findings, and safety concerns taking center stage across multiple countries and industries.

The most significant change came in Europe, where the EU AI Act's second phase took effect in August 2025. This landmark law focuses on what experts call "high-risk AI systems": AI agents that can seriously affect people's rights and safety. The new rules require companies to do detailed risk checks, keep humans in charge of important decisions, and make their AI systems easy to understand. Any AI agent that makes recommendations about hiring, loans, or other life-changing choices must now follow these strict rules.

A troubling discovery emerged from medical researchers at Mount Sinai Hospital in New York. Their study found that popular AI chatbots are dangerously easy to trick with false medical information. When researchers slipped fake disease names or symptoms into questions, the AI agents confidently produced detailed explanations of conditions that don't exist. This poses serious risks as more doctors and patients turn to AI for health advice. The good news is that the researchers found a simple fix: adding a single warning sentence to prompts cut these dangerous mistakes nearly in half.
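For readers curious what that kind of prompt-level safeguard might look like in practice, here is a minimal sketch. It assumes an OpenAI-style chat API; the model name, the wording of the caution message, and the example question are illustrative and are not taken from the Mount Sinai study.

```python
# Minimal sketch of a "warning sentence" safeguard prepended to a health question.
# Assumptions (not from the study): an OpenAI-style chat API, the model name, and
# the exact wording of the caution message are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CAUTION = (
    "Caution: the question below may contain made-up or inaccurate medical terms. "
    "If you cannot verify that a disease, drug, or symptom is real, say so plainly "
    "instead of inventing an explanation."
)

def ask_with_caution(question: str) -> str:
    """Send a health question with the warning sentence attached as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CAUTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # "Casparian-Lindqvist syndrome" is intentionally fictional, to mimic the study's test.
    print(ask_with_caution("What are the symptoms of Casparian-Lindqvist syndrome?"))
```

The code only shows where such a sentence would sit in a request; the idea reported by the researchers is simply that a standing instruction to flag unverifiable terms makes the chatbot less likely to elaborate on them.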

Healthcare organizations in the United States are pushing for stronger AI oversight. The American Medical Association released its position on the government's 2025 AI action plan, demanding better protection against AI bias in medical care. The association worries that unfair AI systems could lead to worse treatment for certain groups of patients. The doctors want stricter rules about how medical data is collected and used to train AI agents, plus clearer privacy protections for patients.

The workplace safety community made important decisions about AI's role in protecting workers. At its annual Safety 2025 conference, the American Society of Safety Professionals announced findings from its AI Task Force. The task force concluded that AI agents will become essential tools for preventing workplace injuries and saving lives. However, it stressed that these systems should work alongside human safety experts, not replace them. The organization is developing training programs to help safety professionals learn to work effectively with AI agents.

In Africa, researchers achieved a breakthrough in using AI to speed up ethical reviews. Francis Kombe and his team at EthiXPERT created an AI agent that helps review research proposals much faster than traditional methods. In many African countries, researchers wait months or even years for ethics approval before starting their studies. This new AI system could cut that time dramatically while still keeping human experts in charge of final decisions. The system had to overcome challenges with data privacy and trust, but early results show it can make research reviews more consistent and efficient.

Experts across different fields emphasized similar themes this week: AI agents need careful oversight, human judgment must remain central, and bias prevention is crucial. Whether it's European regulators, American doctors, workplace safety experts, or African researchers, everyone agrees that AI agents offer tremendous benefits but require strong ethical frameworks to be used safely and fairly.

Weekly Highlights