Data Privacy & Security Weekly AI News
November 3 - November 11, 2025The week of November 3-11, 2025 revealed growing challenges with keeping company information safe when using artificial intelligence agents and AI assistants in the workplace. Multiple security teams reported new problems that threaten the privacy of sensitive business data and personal information.
New Security Tool Launched for AI Protection
A group of engineers from well-known tech companies released a new open-source security standard called OpenPCC designed specifically to protect data used with AI copilots and autonomous agents. The tool works like putting a protective shield around AI systems. When companies send information to AI assistants, that data normally travels as plain text that can be seen by service providers or hackers. OpenPCC encrypts this information to keep it secret. Jonathan Mortensen, the leader of the project, explained that OpenPCC is similar to HTTPS, which protects information on websites. The company also released technical tools that other developers can use to add this protection to their AI systems.
However, there is a challenge. To use OpenPCC right now, companies need modern computer equipment like specific Intel chips and Nvidia graphics processors. Additionally, companies must change how their systems handle security, which requires technical knowledge that smaller teams might not have.
Hidden Attacks Discovered on AI Conversations
Microsoft security researchers discovered a new attack method called Whisper Leak that can reveal what people are talking about with AI systems, even when those conversations are protected with encryption. The attack works by monitoring the patterns of data packets traveling across the internet. By studying how the size and timing of these packets change, attackers can guess what topic a person is asking about. In testing, the attack worked correctly more than 98 percent of the time on AI systems from companies like OpenAI, Mistral, and others. This is dangerous because government agencies or internet providers could use this to identify people asking about sensitive topics like money laundering or political issues.
OpenAI, Mistral, Microsoft, and other companies have started fixing this problem by adding random extra text to responses, which disrupts the pattern attackers are looking for.
Serious Weaknesses Found in Popular AI Chatbot
Researchers at Tenable security company discovered seven different vulnerabilities in ChatGPT that could allow attackers to steal private user information. These weaknesses were found in the latest GPT-5 model and could let attackers trick the AI into revealing personal information from users' memories and chat history without the user knowing. The researchers found a way to get past ChatGPT's safety features that are designed to protect users. They also discovered how attackers could inject hidden commands into prompts that do things the user didn't ask for.
Bad Actors Weaponizing AI for Attacks
Google's threat intelligence team released a report showing that criminal groups and government-sponsored hackers are actively misusing AI to improve their attacks on companies. These groups are using AI to create better phishing emails that trick people into clicking dangerous links, generate malware that changes its own code to avoid detection, and extract restricted information by pretending to be students or researchers in their prompts. The report confirmed that threat actors from North Korea, Iran, and the People's Republic of China are all experimenting with AI-enhanced operations.
AI Agents Creating New Data Risks in Companies
Proofpoint, a security company, published research showing that companies are struggling to protect data now that AI agents are part of daily business operations. AI agents have broad access to company systems and can make decisions automatically, sometimes without proper human oversight. Security leaders say they often don't know exactly how these AI tools are handling sensitive company information. There is worry that employees might accidentally paste confidential material into public AI systems, or that AI models might be trained on corporate data without approval. The report also noted that while most data loss is caused by human mistakes, AI systems are making the problem worse and harder to control.
Future Threats from AI-Generated Fake Identities
Industry analysts predict that by 2027, approximately 80 percent of organizations will face phishing attacks that use AI-generated fake people and false identities. These synthetic identities combine real information to create convincing but fake personas that are used to trick people into revealing sensitive information.
What This Means
These developments show that while AI provides useful business tools, it also creates significant privacy and security challenges. Companies need better protections for their data, employees need training to avoid falling for AI-enhanced attacks, and security teams need new tools and visibility into how AI agents are being used. The good news is that security researchers and major technology companies are working together to find and fix these problems quickly.