Ethics & Safety Weekly AI News
May 5 - May 14, 2025

Global AI policy shifts dominated this week's ethics and safety landscape, blending regulatory action with proactive initiatives. U.S. policymakers advanced the RAISE Act, which mandates third-party safety audits and public incident disclosures for companies developing high-impact AI systems. The bill avoids creating new regulatory bodies and instead targets severe risks, such as incidents causing more than $1 billion in damages or large-scale injuries. It also protects whistleblowers who report unsafe practices, addressing concerns about profit-driven shortcuts.
Across the Atlantic, the EU AI Act moved through its phased rollout, which sorts systems into risk tiers. High-risk AI (e.g., in critical infrastructure or law enforcement) must meet stringent transparency, auditability, and human oversight requirements. This divergence from lighter-touch U.S. rules complicates compliance for multinational enterprises, particularly those in regulated sectors like healthcare and finance.
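As a loose illustration only (not legal guidance), the Act's tiered structure maps naturally onto a simple lookup. The tier names below follow the Act's public summaries; the obligation lists and all identifiers are invented for this sketch.

```python
# Hypothetical sketch: the EU AI Act's risk tiers as a simple lookup.
# Obligation lists are illustrative, not a compliance checklist.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright (e.g., social scoring)
    HIGH = "high"                   # critical infrastructure, law enforcement
    LIMITED = "limited"             # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"             # largely unregulated

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "risk-management system",
        "transparency and technical documentation",
        "auditability / event logging",
        "human oversight",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for duty in obligations_for(RiskTier.HIGH):
        print(duty)
```

For a multinational deployer, the compliance question becomes which tier each system lands in per jurisdiction, which is exactly where the U.S.-EU divergence bites.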
In industry applications, AI-driven workplace safety tools gained ethical momentum. Researchers argue that deploying AI for injury prevention is not just practical but a moral duty. Advances like real-time hazard detection and predictive risk modeling aim to push occupational injuries, already down roughly 80% since the 1970s, lower still. Proponents emphasize that these tools enhance human oversight rather than replace it, balancing efficiency with accountability.
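To make "predictive risk modeling" concrete, here is a minimal sketch of how such a score might be computed. Everything in it is an assumption: the feature names, weights, and threshold are invented for illustration, and a real system would learn them from historical incident data rather than hard-coding them.

```python
import math

# Illustrative predictive risk scoring for workplace hazards.
# Feature names and weights are invented for this sketch.
WEIGHTS = {
    "hours_since_break": 0.08,
    "noise_db_over_limit": 0.05,
    "machine_vibration_anomaly": 1.2,
    "ppe_detected": -1.5,   # protective equipment lowers the score
}
BIAS = -1.5

def injury_risk(features: dict[str, float]) -> float:
    """Logistic score in [0, 1]; higher means escalate to a human."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

reading = {"hours_since_break": 5, "noise_db_over_limit": 4,
           "machine_vibration_anomaly": 1, "ppe_detected": 0}
score = injury_risk(reading)
if score > 0.5:
    print(f"risk {score:.2f}: alert supervisor")  # human decides next step
else:
    print(f"risk {score:.2f}: log and continue monitoring")
```

The threshold branch is the point: the model flags, a person decides, which matches the oversight-first framing proponents describe.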
Legal uncertainties lingered as GitHub Copilot faced renewed scrutiny over copyright use. Though recent lawsuits were largely dismissed, the remaining ambiguity around ownership and liability poses risks for developers and partners relying on AI code generation tools, underscoring the need for clearer guidelines on AI-assisted creation.
On the research front, the Center for AI Safety (CAIS) continued its February-to-May training program, offering participants free compute to study AI risks. The program's focus ranges from technical challenges, such as aligning advanced systems' goals with human values, to philosophical inquiries into societal threats. Such initiatives emphasize collaboration to address risks like misaligned incentives or scalable harm from powerful AI.
These developments underscore a pattern: while innovation pushes boundaries, sustained pressure from regulators, researchers, and ethical advocates shapes AI’s responsible deployment. The tension between permissive U.S. policies and restrictive EU regulations reflects broader debates about balancing progress with caution, while sector-specific use cases illustrate localized ethical imperatives.