Ethics & Safety Weekly AI News

July 14 - July 22, 2025

New AI Rules in Europe The European Union took big steps in AI safety this week. On July 17, the EU gave its first fines under the new AI Act. Two job websites were fined €4.5 million for using emotion-recognition AI during hiring without proper checks. This shows the EU is serious about stopping unfair AI.

On July 18, the European Commission released compliance guidelines for companies before the AI Act fully starts on August 1. These rules focus on "high-risk" AI systems that could affect health, rights, or democracy. Companies must now: - Do adversarial testing to find weaknesses - Report serious incidents - Prove cybersecurity against misuse - Document training data sources

Companies breaking these rules face huge fines—up to 7% of global revenue. Microsoft plans to follow the EU's voluntary code, but Meta refused, showing tech companies are split on these rules.

Global Watchdog for AI Problems On July 19, UNESCO and Anthropic launched a Global AI Ethics Observatory. This new tool is like a worldwide dashboard that tracks AI harms in over 120 countries. It looks for problems like: - Deepfake misuse - Algorithmic bias - Other unfair AI practices

The system creates risk heatmaps to help leaders fix AI issues faster. Some experts worry it uses self-reported data, but many praise it as a big step for AI accountability.

Safety Plans for AI Agents As more companies use agentic AI (AI that acts independently), experts shared new safety plans. A Deloitte report warned about serious risks like: - Shadow AI: Hidden agents creating security holes - Overprivileged agents: AI with unnecessary data access - Prompt injection: Hackers tricking AI into bad actions - Data leaks: Agents accidentally sharing secrets

HCLTech proposed a three-tiered safety framework for agentic AI: 1. Foundational guardrails: Basic privacy and security rules 2. Risk-based guardrails: Extra protections for dangerous AI 3. Societal guardrails: Training programs and emergency shut-offs

They stressed that human oversight remains crucial even with advanced AI.

New Tech, New Concerns Meta revealed "Project Synapse" on July 16—a headset that reads brain signals to control AI. While it helps people with disabilities communicate 50% faster, it raised major privacy questions. Experts demand open-source brain data rules to prevent misuse.

Challenges Ahead Studies show agentic AI could create $450 billion by 2028, but only 2% of companies use it safely at full scale. Key problems include: - Ensuring reliability in unpredictable situations - Building public trust after AI accidents - Creating explainable AI so humans understand decisions

Car companies like Waymo use simulations to test self-driving cars, but real-world safety remains a work in progress.

Looking Forward This week proved that ethics and safety are catching up with AI's rapid growth. From EU regulations to global monitoring, leaders are building guardrails for powerful AI. Success will depend on balancing innovation with strong protections for everyone.

Weekly Highlights