Ethics & Safety Weekly AI News
February 16 - February 24, 2026
AI Must Help People, Not Replace Them
A major technology company called Axon released an important statement about the future of AI in public safety. The company's leader, Rick Smith, shared his thoughts after watching a science fiction movie called "Mercy." In the movie, an AI system acts like a judge and decides whether people are guilty or innocent. However, Smith explained that in the real world this would be wrong. He said that AI should never decide guilt, innocence, or punishment, because these decisions involve morality and fairness that only humans can truly understand.
Instead, Axon believes AI should be a helper, not a boss. The company uses AI to help police officers write reports faster and to help people who speak different languages understand each other during emergencies. But in every case, humans still make the final decision: officers review and approve the reports themselves, and real people check every choice the AI makes. This approach keeps humans firmly in control of important decisions while letting AI handle tasks like finding patterns in information and speeding up routine work.
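Axon has not published how its report tools work internally, but the "AI drafts, a human approves" pattern the company describes can be sketched in a few lines of Python. Everything below, including the function names and the sample report text, is a hypothetical illustration of that pattern, not Axon's real system.

```python
# Hypothetical human-in-the-loop sketch (not Axon's actual system):
# the AI only drafts; a named human must review and approve before filing.

from dataclasses import dataclass

@dataclass
class Report:
    draft_text: str          # produced by an AI assistant
    approved_by: str = ""    # stays empty until a human signs off

def ai_draft_report(notes: str) -> Report:
    """Stand-in for an AI drafting step; here it just wraps the officer's notes."""
    return Report(draft_text=f"Incident summary (draft): {notes}")

def human_approve(report: Report, officer: str, edited_text: str) -> Report:
    """The officer reads, edits, and takes responsibility for the final text."""
    report.draft_text = edited_text
    report.approved_by = officer
    return report

def file_report(report: Report) -> None:
    """Refuse to file anything a human has not approved."""
    if not report.approved_by:
        raise ValueError("Report cannot be filed: no human has approved it.")
    print(f"Filed report approved by {report.approved_by}.")

draft = ai_draft_report("Minor traffic collision, no injuries.")
final = human_approve(draft, officer="Officer Lee",
                      edited_text="Minor traffic collision at 5th and Main; no injuries.")
file_report(final)
```

The key design choice in this sketch is that the filing step refuses to run unless a human name is attached, so the AI can speed up the writing but can never finish the job on its own.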
The Problem with Trusting AI Too Much
Scientists at the University of Surrey discovered something troubling: people often trust AI without questioning it. Many AI systems around the world make major decisions every day. These systems decide how ambulances should be sent to sick people, how delivery trucks should plan their routes, and how drones should carry packages. They are called optimization algorithms: programs that work out the best way to do something by weighing factors like speed, cost, and weight.
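To picture what one of these optimization algorithms does, here is a minimal sketch in Python. It chooses which packages a drone should carry by scoring every possible plan on value and weight. The package names, weights, and scoring numbers are all invented for this example; they do not come from the Surrey research.

```python
# A toy "optimization algorithm" for choosing a drone delivery plan.
# All package names and numbers are made up for illustration only.

from itertools import combinations

packages = [
    {"name": "medicine", "weight_kg": 0.5, "value": 9},
    {"name": "groceries", "weight_kg": 4.0, "value": 5},
    {"name": "books", "weight_kg": 2.5, "value": 3},
]

MAX_PAYLOAD_KG = 5.0  # assumed drone payload limit

def plan_score(plan):
    """Score a plan: reward delivered value, penalize heavy loads (battery use)."""
    weight = sum(p["weight_kg"] for p in plan)
    value = sum(p["value"] for p in plan)
    if weight > MAX_PAYLOAD_KG:
        return float("-inf")  # plan is impossible: too heavy to fly
    return value - 0.8 * weight  # heavier loads drain the battery faster

def best_plan(packages):
    """Try every combination of packages and keep the highest-scoring one."""
    best, best_score = [], float("-inf")
    for r in range(len(packages) + 1):
        for combo in combinations(packages, r):
            score = plan_score(combo)
            if score > best_score:
                best, best_score = list(combo), score
    return best

chosen = best_plan(packages)
print("Deliver:", [p["name"] for p in chosen])
```

With these made-up numbers the drone carries the medicine and the groceries and leaves the books behind. The choice is mathematically sound, but nothing in the program says why, which is exactly the gap the researchers are worried about.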
The problem is that these AI choices are mathematically correct but hard to understand. When an AI system decides to do something unusual, like sending an ambulance on a longer route or leaving a package undelivered, people often cannot explain why. The scientists worry that this "blind trust" creates serious safety and accountability risks: if something goes wrong, nobody can explain what the AI was thinking.
To fix this problem, the scientists created a new tool that uses machine learning to explain AI decisions in simple language. Instead of showing confusing math, the system can say things like: "We chose this option because heavy items would use too much battery power." This lets humans see the logic, question it, and stop it before something bad happens. For example, with delivery drones, regulators and companies can now understand and defend the choices the AI makes about which packages to carry.
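The Surrey tool itself is not shown as code in the report, so the sketch below only illustrates the general idea: after the optimizer decides, a small function checks which limit ruled a package out and turns that into a plain sentence. The function name, numbers, and wording are assumptions made for this example, not the researchers' actual system.

```python
# Hypothetical explanation layer (not the real Surrey tool): turn an
# optimizer's decision into a plain-language reason a package was left out.
# All names, numbers, and wording are invented for illustration.

MAX_PAYLOAD_KG = 5.0  # assumed drone payload limit

chosen_plan = [
    {"name": "medicine", "weight_kg": 0.5},
    {"name": "groceries", "weight_kg": 4.0},
]
left_out = {"name": "books", "weight_kg": 2.5}

def explain_omission(package, plan, max_payload_kg=MAX_PAYLOAD_KG):
    """Say in plain words why a package was not carried."""
    carried = sum(p["weight_kg"] for p in plan)
    if carried + package["weight_kg"] > max_payload_kg:
        return (f"We left the {package['name']} behind because its "
                f"{package['weight_kg']} kg would push the load past the "
                f"{max_payload_kg} kg limit and use too much battery power.")
    return (f"We left the {package['name']} behind because other packages "
            f"gave more value for the weight they added.")

print(explain_omission(left_out, chosen_plan))
```

Printed out, the reason reads like a sentence a regulator or a customer could actually question, which is the point: the math stays the same, but the logic becomes visible.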
The Military Learns to Use AI Safely
The United States Department of Defense created something new: a secure computer space where military workers can practice with AI. This is important because soldiers and military office workers need to learn what AI tools can do well and what they cannot do. According to expert Emelia Probasco, this secure space lets people experiment with AI tools and really understand their strengths and weaknesses.
This approach matches what other experts are saying: AI is most helpful when people understand it and use it carefully. By giving military workers a safe place to learn, the Department of Defense is helping ensure that when AI tools are used in real situations, the people using them will know how to use them responsibly.
Why This Matters
These three stories show that experts everywhere are thinking hard about AI safety. From police work to package delivery to military operations, people are learning that AI works best when it helps humans make better decisions, not when it makes decisions for them. Whether it is explaining what AI is thinking or letting people practice with AI safely, the goal is the same: keep humans in charge while getting the benefits of AI's speed and pattern-finding power.