Data Privacy & Security Weekly AI News

August 18 - August 26, 2025

This weekly update covers major developments in AI security and privacy from around the world. These stories show how quickly AI technology is changing and the new challenges that change creates.

The most significant news came from the United States military. An AI agent called GARY, made by a company called CORAS, received a Department of Defense security authorization known as "Impact Level 5." This is the highest level available for unclassified systems, and it means GARY can work with the Department's most sensitive unclassified information, the kind that supports national security systems. This is groundbreaking because it's reportedly the first time any AI agent has been trusted at this level. Military experts say it shows AI is now considered safe enough for some of the most important government work.

New browser AI assistants are creating serious privacy concerns. These are AI helpers that live inside web browsers and can see everything users do online. Privacy experts worry because these assistants can watch users browse websites, read their emails, and see their personal information. The concern is that this data might be sent to AI companies without users really understanding what's happening.

Cloudflare, a major internet company, launched a new tool called "Crawl Control" during its AI Week. The tool helps website owners stop AI companies from automatically copying their content. Many AI companies have been scraping articles, images, and other content from websites without permission and using that content to train their AI models. Content creators have been very upset about this practice, so Cloudflare's new tool gives them a way to fight back.
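To make the basic mechanism concrete, here is a minimal sketch of server-side crawler filtering, the kind of check a tool like Crawl Control automates at scale. The bot list and the should_block helper are illustrative assumptions for this sketch, not Cloudflare's actual implementation.

```python
# Illustrative sketch: reject requests whose User-Agent matches a known
# AI crawler. The bot list and helper name are assumptions for this
# example, not Cloudflare's implementation.
AI_CRAWLER_AGENTS = {"GPTBot", "CCBot", "ClaudeBot", "Bytespider"}

def should_block(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AI crawler."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in AI_CRAWLER_AGENTS)

# A request identifying itself as OpenAI's GPTBot would be blocked,
# while an ordinary browser request would pass through.
print(should_block("Mozilla/5.0 (compatible; GPTBot/1.1)"))             # True
print(should_block("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome"))  # False
```

In practice, site owners can also publish robots.txt rules naming these same crawlers, though unlike server-side blocking, robots.txt relies on crawlers choosing to obey it.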

American workers are secretly using AI tools against their company rules. A survey found that about half of office workers use AI even when their companies forbid it. The survey showed that 28% of workers put sensitive company information into AI tools to help with their jobs. This includes workers in important industries like banking and security. Many workers said they don't feel guilty about breaking these rules because AI makes their work much easier.

Meta, the company that owns Facebook and Instagram, faced new criticism over AI privacy. A survey in Germany found that only 7% of Meta users actually want the company to use their personal data for AI training. However, Meta has been using this data anyway, claiming a "legitimate interest" under European data protection law. Most users said they either didn't see Meta's announcement about this or didn't understand what it meant.

U.S. Senators are investigating Meta's AI chatbots and how they might harm children. Senator Josh Hawley asked Meta to provide documents about its AI safety policies. There are worries that AI chatbots might give children harmful advice, especially about health topics. Meta says it focuses on user protection, but lawmakers want to see proof.

AI security experts identified major vulnerabilities in 2025. These include "adversarial inputs," where attackers craft inputs that trick AI systems into making wrong decisions or revealing secret information, and "data poisoning," where bad actors corrupt the data used to train AI models so the models learn the wrong behavior. These attacks are becoming more common as more companies deploy AI.
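To show what data poisoning means in practice, here is a toy Python sketch, assuming a training set of (features, label) pairs with 0/1 labels. The poison_labels function is a hypothetical illustration of the concept, not a real attack tool.

```python
import random

# Toy illustration of label-flipping data poisoning: an attacker who can
# tamper with training data flips a small fraction of labels so a model
# trained on that data learns the wrong behavior. The function name and
# dataset format are assumptions for this sketch.
def poison_labels(dataset, fraction=0.1, seed=0):
    """Return a copy of the dataset with a random fraction of labels flipped."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_poison = int(len(poisoned) * fraction)
    for i in rng.sample(range(len(poisoned)), n_poison):
        features, label = poisoned[i]
        poisoned[i] = (features, 1 - label)  # flip 0 <-> 1
    return poisoned

clean = [([0.2, 0.7], 1), ([0.9, 0.1], 0), ([0.4, 0.6], 1), ([0.8, 0.3], 0)]
print(poison_labels(clean, fraction=0.5))  # two of the four labels flipped
```

Adversarial inputs work the other way around: instead of corrupting the training data, the attacker perturbs inputs at inference time so that a correctly trained model still misbehaves.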

The European Union published a list of AI companies that signed its new voluntary Code of Practice for general-purpose AI. Big companies like Amazon, Google, Microsoft, and OpenAI agreed to follow the code, which requires them to be more transparent about how their AI models work and to respect copyright law.

Application security surveys showed that companies are struggling to keep up with AI development. Many organizations now manage hundreds of applications, making it very hard to maintain consistent security. One survey found that 39% of security leaders are most worried about ensuring AI is used safely in software development.

These stories reveal that AI privacy and security are becoming critical issues as AI tools become part of daily life for millions of people around the world.

Weekly Highlights