Ethics & Safety Weekly AI News

May 12 - May 20, 2025

This week saw major developments in AI ethics and safety across government policy and workplace applications. In the U.S., the Federal Trade Commission (FTC) warned companies about deceptive AI marketing claims and unfair hiring tools, signaling stricter enforcement even as broader federal rules are relaxed. Meanwhile, the European Union continued phasing in its AI Act, which will require transparency and human oversight for high-risk AI systems such as those used in healthcare or law enforcement.

A U.S. lawsuit against GitHub Copilot over copyright issues highlighted legal risks for developers using AI coding tools, even after Microsoft prevailed on earlier claims in the case. Researchers also published new methods for detecting AI-generated deepfakes, which can be used to spread fabricated images or videos that harass individuals.

In workplace safety news, experts argued that using AI for hazard detection is not only efficient but also an ethical obligation to protect workers. Companies reported fewer injuries after adopting AI tools that predict equipment failures. However, critics warned that biased safety algorithms could unfairly target certain employee groups if they are not properly tested.

Extended Coverage