Ethics & Safety Weekly AI News
May 12 - May 20, 2025

U.S. Shifts AI Policy While EU Tightens Rules

The U.S. government scaled back some federal AI regulations this week but maintained key safeguards. The National Institute of Standards and Technology (NIST) continued developing AI risk frameworks with partners like OpenAI, focusing on public safety protections. The FTC specifically targeted AI-powered hiring tools and marketing claims, vowing to penalize companies that mislead consumers. However, these steps contrasted with the European Union's progress on its AI Act, which classifies AI systems by risk level. High-risk applications like facial recognition or medical diagnostics will need detailed documentation and human oversight, a challenge for global businesses.
Legal Battles Over AI Copyrights

Microsoft faced renewed scrutiny as developers sued GitHub Copilot, claiming the AI coding tool violates open-source licenses by replicating protected code. While a previous lawsuit against Microsoft was dismissed, this case raises questions about liability for partners and resellers using AI tools. Legal experts noted that similar copyright disputes are emerging worldwide, particularly around AI-generated art and music.
AI Safety in the Workplace

Dr. Julia Penfield published a widely shared essay arguing that AI-driven safety systems are ethical necessities, not just business tools. She highlighted how AI, combined with human oversight, has helped reduce workplace injuries by 80% since the 1970s. Construction firms and manufacturers reported using AI to predict equipment failures and alert workers to hazards in real time. However, Ethics Unwrapped researchers cautioned that poorly designed algorithms could reinforce biases, for example by falsely flagging certain workers as safety risks based on flawed data.
Combating AI-Generated Harassment

New studies revealed alarming growth in deepfake attacks, in which AI creates fake videos or images to damage reputations. A European cybersecurity firm demonstrated tools that detect 98% of AI-generated faces, but warned that harassment campaigns are becoming harder to spot. Meanwhile, AI governance startups launched services to verify content origins, helping social media platforms remove harmful deepfakes faster.
Global Calls for Ethical AI Design

Citing parallels to building safety codes, more than 200 AI engineers signed an open letter urging companies to treat safety testing as a moral duty. The letter referenced historic failures like the Boeing 737 MAX crashes, arguing that AI systems need similarly rigorous checks. In response, Microsoft and OpenAI announced expanded partnerships with the U.S. AI Safety Institute to develop transparency standards for medical and financial AI models.