Ethics & Safety Weekly AI News

February 16 - February 24, 2026

This weekly update covers three major stories about AI safety and ethics. First, technology company Axon shared its view that AI should help people make decisions rather than make decisions for them. The company explained that AI should never decide whether someone is guilty or innocent, because those are serious choices that require human judgment. Instead, AI works best when it helps people see information more clearly and act on it more quickly.

Second, researchers at the University of Surrey identified a real problem: people trust AI systems too much without questioning them. These systems make important choices about how ambulances are routed, how packages are delivered, and how drones operate. The researchers showed that when an AI system makes a strange or risky choice, people often just accept it without asking why. They created a new method that explains the reasoning behind each choice, so humans can understand it and challenge it if something seems wrong.

Third, the United States Department of Defense created a secure platform where military personnel can practice using AI safely. It helps both soldiers and civilian office staff learn what AI tools can do well and what they cannot do. Together, these three stories show that experts worldwide are thinking carefully about how to use AI safely while keeping humans in control of important decisions.

Extended Coverage