Meta's AI Training Controversy
Meta will begin using public posts from European Facebook and Instagram users to train its AI models on May 27. Rather than asking users for permission (opt-in), the company is relying on a "legitimate interest" legal basis under the GDPR. The privacy group noyb is threatening lawsuits, arguing that this approach violates European privacy law, and notes that even a 10% user opt-in rate would give Meta enough data to learn European languages.

New Privacy Rules Down Under
New Zealand has updated its privacy law to require specific notice when companies obtain personal data from third parties. Starting in May 2026, vague privacy notices will no longer suffice: companies must clearly tell people how their indirectly collected data is used. The change helps New Zealand retain its E.U. data-sharing approval.

Brazil Fights AI Abuse
Brazil has created new penalties for AI-assisted violence against women: using deepfakes or otherwise altered media can now increase prison sentences by up to 50%. The move parallels global efforts, such as the E.U. AI Act, to rein in harmful uses of AI.

AI Security Risks Grow
Check Point's latest report finds that AI chatbots are a growing source of data leaks: roughly 1 in 13 prompts contains sensitive information. Companies face exposure from both approved and unauthorized AI tools, which may pass data to outside parties. Experts recommend deploying AI-driven security systems to counter AI-powered attackers.

These developments show governments scrambling to contain AI privacy risks while companies balance innovation with user protection. From Europe's GDPR battles to Brazil's deepfake crackdown, 2025 is shaping up to be a pivotal year for AI regulation worldwide.

Weekly Highlights