Data Privacy & Security Weekly AI News
September 15 - September 23, 2025

This weekly update reveals how artificial intelligence is changing privacy and security in ways that affect people worldwide.
AI surveillance systems are becoming more powerful and widespread. These smart camera networks can now do much more than just record video. They can recognize individual faces, track people as they move between different cameras, and automatically flag unusual behavior. The technology has grown so much that the global video surveillance industry was worth $73.75 billion in 2024 and is expected to double to $147.66 billion by 2030.
However, these AI systems are not perfect and can make serious mistakes. A Washington Post investigation found several cases where police wrongly arrested innocent people because they relied only on AI facial recognition. The software compares faces to huge databases of photos taken from social media and public websites, which means anyone with a photo online could be falsely identified as a suspect.
Different countries are taking different approaches to regulating AI surveillance. The European Union has passed the AI Act, the first comprehensive law governing artificial intelligence. This law bans mass real-time facial recognition in public spaces, except in narrow cases such as searching for victims of serious crimes or preventing terrorist attacks. Even then, the use requires prior authorization from a judicial or independent administrative authority. In contrast, the United States has no single federal law governing AI video surveillance, leaving regulation up to individual states and local governments.
Businesses using AI tools are facing new security headaches. Popular AI assistants like ChatGPT and DeepSeek have become daily tools for marketing, sales, and product teams. But this means sensitive company information is flowing into these platforms faster than most security leaders can track. Traditional security controls were not designed for this new world where AI systems need access to vast amounts of data.
Companies need new types of security tools to handle AI safely. Security experts now recommend Data Security Posture Management (DSPM) as essential for any organization using AI. These tools discover and classify sensitive data across different systems, monitor how AI models interact with company information, and flag unauthorized data transfers before they turn into breaches. Companies are also moving toward synthetic data in place of real customer information to train AI models, which provides stronger privacy protection.
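As a rough illustration of what the discovery-and-classification piece of a DSPM workflow can look like, here is a minimal Python sketch that walks a folder of files and flags any that contain patterns resembling personal data. The folder name, regex patterns, and output format are illustrative assumptions, not any specific vendor's implementation.

```python
import re
from pathlib import Path

# Illustrative patterns only; real DSPM products use far richer classifiers.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def classify_file(path: Path) -> dict[str, int]:
    """Return a count of sensitive-looking matches per category in one file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return {}
    return {
        label: len(pattern.findall(text))
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    }

def scan_directory(root: str) -> None:
    """Flag files that would need review before being shared with an AI tool."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            hits = classify_file(path)
            if hits:
                print(f"REVIEW {path}: {hits}")

if __name__ == "__main__":
    scan_directory("./shared_documents")  # hypothetical folder name
```

A production tool would add context-aware classification, track where flagged data flows once an AI assistant touches it, and feed alerts into existing security monitoring rather than printing to the console.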
The focus is shifting from protecting databases to protecting all types of files. While traditional security focused on structured databases, AI systems work best with unstructured data like emails, contracts, chat logs, and media files. This creates new challenges because sensitive information can be hidden in places security teams rarely check. The DataSecAI Conference 2025 in Dallas will bring together leaders to discuss these challenges in November.
OpenAI has announced new policies that prioritize teen safety over privacy. The company explained that it faces a difficult balance between three principles: protecting privacy, giving users freedom, and keeping teenagers safe. For adults, OpenAI wants to treat users like adults and give them broad freedom to use AI tools as they wish. However, for users under 18, safety comes before both privacy and freedom.
OpenAI is building systems to identify teenage users automatically. The company is developing age-prediction technology that estimates how old someone is based on how they use ChatGPT. When there is doubt, the system will default to treating the user as under 18. In some cases, OpenAI may ask for ID verification, which the company admits is a privacy compromise for adults but believes is necessary to protect minors.
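OpenAI has not published how its age-prediction system works, so the sketch below only illustrates the "default to under 18 when in doubt" rule described above. The confidence threshold, data structures, and the ID-verification override are made-up assumptions for illustration.

```python
from dataclasses import dataclass

ADULT_THRESHOLD = 0.90  # made-up confidence bar, not an OpenAI value

@dataclass
class AgeSignal:
    """Hypothetical output of an age-prediction model: probability the user is 18+."""
    prob_adult: float

def resolve_age_mode(signal: AgeSignal, verified_adult_id: bool = False) -> str:
    """Return which experience to serve; ambiguity defaults to the under-18 mode."""
    if verified_adult_id:
        return "adult"  # ID verification overrides the model's estimate
    if signal.prob_adult >= ADULT_THRESHOLD:
        return "adult"
    # Anything below a high confidence bar is treated as a potential minor.
    return "under_18"

print(resolve_age_mode(AgeSignal(prob_adult=0.55)))  # -> "under_18"
```

The key design choice the policy implies is asymmetry: the cost of wrongly treating an adult as a teenager is a privacy and convenience loss, while the cost of wrongly treating a teenager as an adult is a safety failure, so uncertainty resolves toward the stricter mode.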
Teenagers will face different rules when using AI services. For example, ChatGPT will refuse to engage in flirtatious conversations or discuss suicide and self-harm, even in creative writing scenarios. Most importantly, if an under-18 user shows signs of wanting to harm themselves, OpenAI will try to contact their parents and may contact authorities if there is immediate danger.
Security experts warn that AI agents are creating entirely new types of risks. As AI systems become more independent and can work for hours or days without human supervision, companies struggle to figure out what kind of digital identity these agents should have. Traditional security systems assume everything is either a human user or a simple service account, but AI agents don't fit neatly into either category.
The problem is that AI agents need extensive access to company systems. They might need to read thousands of files, collaborate with team members through messaging apps, spawn additional AI helpers, and maintain their own notes and memories between work sessions. This recreates the classic "insider risk" problem, with AI agents as the potential insider.
New detection systems are needed to monitor AI agent behavior. These systems must understand not just what an AI agent is doing, but why it is doing it and whether those actions match its assigned tasks. Organizations that solve this challenge will be able to deploy AI agents safely and aggressively, while those that don't may either limit their AI use or face serious security breaches.
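To make that idea concrete, here is a minimal sketch of task-scoped monitoring: each agent gets an allow-list of actions and resources tied to its assigned task, and anything outside that scope is flagged for human review. All names and structures here are assumptions for illustration, not a description of any existing product.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    """Hypothetical record of what an AI agent was assigned to do."""
    task_id: str
    allowed_actions: set[str] = field(default_factory=set)
    allowed_resources: set[str] = field(default_factory=set)

@dataclass
class AgentAction:
    """One observed action taken by an agent."""
    agent_id: str
    action: str    # e.g. "read_file", "send_message", "spawn_agent"
    resource: str  # e.g. a file path or chat channel

def review_action(task: AgentTask, event: AgentAction) -> str:
    """Compare what an agent did against what its assigned task allows."""
    if event.action not in task.allowed_actions:
        return f"ALERT: {event.agent_id} performed out-of-scope action {event.action!r}"
    if event.resource not in task.allowed_resources:
        return f"ALERT: {event.agent_id} touched unexpected resource {event.resource!r}"
    return "ok"

task = AgentTask(
    task_id="contract-review-42",
    allowed_actions={"read_file"},
    allowed_resources={"contracts/q3_vendor.pdf"},
)
print(review_action(task, AgentAction("agent-7", "read_file", "hr/salaries.xlsx")))
```

A real system would go further, reasoning about whether a sequence of individually permitted actions still serves the assigned task, which is exactly the "why, not just what" problem the experts describe.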