Data Privacy & Security Weekly AI News

September 8 - September 16, 2025

This weekly update shows big changes in AI security and privacy. Google released a new AI model called VaultGemma that keeps private data safe. This model uses special math to make sure it cannot remember or leak sensitive information.

Meanwhile, AI-powered attacks are getting worse. Criminals used fake videos to trick a UK company into losing $25 million. These deepfake attacks fooled workers who thought they were talking to their real bosses on video calls.

The US government is taking action too. The FTC made Disney pay $10 million for collecting children's data without permission. They also started looking into AI chatbots that talk to kids and teens.

Companies are struggling to keep their AI systems secure. A new report found that most businesses use AI for important work, but their security has not kept up. About one-third of companies using AI have already been attacked.

Experts are worried about new types of AI vulnerabilities. Hackers can trick AI systems by feeding them bad data or asking tricky questions. They can also steal private information that was used to train the AI models.

The UK government is thinking about new ways to protect AI systems. They want to treat AI security problems like regular computer bugs that get reported and fixed.

Extended Coverage