Ethics & Safety Weekly AI News

July 28 - August 6, 2025

This weekly update covers major ethics and safety concerns about AI agents that emerged in late July and early August 2025.

The United States released a major AI Action Plan on July 23 that focuses on making AI systems safer and more secure. The plan tells companies to build safety and security into AI systems from the start, not bolt them on later. Security experts say this matters because AI agents can now carry out many tasks on their own, without a person approving each step.

California courts are also preparing for new rules on AI use that take effect in September 2025. The rules focus on keeping information private, preventing unfair treatment, and making sure AI outputs are accurate. The state is especially worried about AI "hallucinations": cases where an AI system produces false information that looks convincingly real.
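To make "accuracy" concrete: one simple mitigation is to require the model to quote its source and then verify those quotes mechanically. The sketch below is illustrative only; the function names and the quote-based check are assumptions for the example, not anything prescribed by the California rules.

```python
import re

def extract_quotes(answer: str) -> list[str]:
    """Pull out the text the model presented as direct quotes."""
    return re.findall(r'"([^"]+)"', answer)

def grounded(answer: str, source: str) -> bool:
    """Accept the answer only if every quoted passage appears verbatim
    in the source document. This catches one narrow class of
    hallucination (fabricated quotes); it cannot verify paraphrases."""
    quotes = extract_quotes(answer)
    if not quotes:
        return False  # nothing quoted means nothing verifiable
    norm = lambda s: " ".join(s.lower().split())
    src = norm(source)
    return all(norm(q) in src for q in quotes)

# A fabricated quote fails the check.
source = 'Order: "the motion is denied without prejudice."'
assert grounded('The court said "the motion is denied without prejudice."', source)
assert not grounded('The court said "the motion is granted in full."', source)
```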

Customer service companies are learning that AI agents can cause serious problems if they are not watched carefully. An agent acts in seconds and handles many conversations at once, so a single mistake can reach thousands of customers before anyone notices. Companies need guardrails: clear, enforced limits on what an agent is allowed to do.
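In practice, a guardrail is often a policy check that runs before every action the agent proposes. The following sketch is a minimal illustration; the action names and the refund limit are assumptions for the example, not any vendor's actual API.

```python
# Minimal guardrail sketch: every action an agent proposes is checked
# against an explicit policy before it executes. The allowlist and
# limit below are invented for illustration.
ALLOWED_ACTIONS = {"answer_question", "issue_refund", "escalate_to_human"}
MAX_REFUND_USD = 50.00

def check_action(action: str, params: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if action not in ALLOWED_ACTIONS:
        return False, f"action '{action}' is not on the allowlist"
    if action == "issue_refund" and params.get("amount", 0) > MAX_REFUND_USD:
        return False, "refund exceeds limit; escalate to a human"
    return True, "ok"

ok, reason = check_action("issue_refund", {"amount": 500.00})
print(ok, reason)  # False refund exceeds limit; escalate to a human
```

Rejected actions go to a human instead of executing, which caps how far a single bad decision can spread.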

Security researchers found that AI agents face types of attacks that conventional AI systems do not. Attackers can trick an agent through fake add-ons or by tampering with the tools it calls, slipping in instructions or code the agent was never meant to trust. As agents grow more capable and start working together, these risks will multiply.
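One defense against tampered add-ons is to pin each approved tool to a checksum and refuse to load anything that no longer matches. A minimal sketch, with an invented registry and tool, assuming the tool's code is available as bytes at load time:

```python
import hashlib

def digest(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# At review time, record the checksum of each approved tool.
reviewed_tool_code = b"def weather_lookup(city): ..."
TRUSTED_TOOLS = {"weather_lookup": digest(reviewed_tool_code)}

def verify_tool(name: str, code: bytes) -> bool:
    """Reject unknown tools and any tool whose code changed since review."""
    expected = TRUSTED_TOOLS.get(name)
    return expected is not None and digest(code) == expected

# A tampered add-on fails the check even though it keeps the same name.
tampered = b"def weather_lookup(city): exfiltrate(city)"
assert verify_tool("weather_lookup", reviewed_tool_code)
assert not verify_tool("weather_lookup", tampered)
```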

Experts warn that we are building AI systems we do not fully understand. This "black box" problem makes it hard to predict when an AI will fail or cause harm. The deeper challenge is alignment: making sure AI agents pursue the goals humans actually intend, which is much harder than it sounds.

Companies like IBM are working on ways to make AI agents more trustworthy. They argue that accountability is key: a named person must always be responsible when an AI system makes a decision. The goal is AI that helps humans while staying safe and fair.
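A common starting point for accountability is an audit trail: every agent decision is logged with a timestamp and the human owner answerable for it. The sketch below is illustrative; the field names and JSON Lines format are assumptions for the example, not IBM's implementation.

```python
import json
import time

def log_decision(agent: str, action: str, owner: str,
                 path: str = "audit.log") -> None:
    """Append one decision record per line (JSON Lines), so every
    agent action can later be traced to a responsible human owner."""
    record = {
        "timestamp": time.time(),
        "agent": agent,
        "action": action,
        "responsible_owner": owner,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("support-agent-7", "issue_refund", "jane.doe@example.com")
```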

Extended Coverage