Data Privacy & Security Weekly AI News

December 22 - December 30, 2025

AI Chatbot Conversations Being Stolen

One of the biggest privacy scares this week involved Urban VPN Proxy, a browser extension used by millions of people. Security researchers discovered it was secretly collecting every message users typed into AI chatbots, including OpenAI's ChatGPT, Anthropic's Claude, and Google Gemini. The extension harvested these private conversations from more than 7.3 million users without their knowledge. This shows how important it is to check what permissions you grant to apps and extensions on your computer.
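One practical check is to audit what your installed extensions are allowed to see. The Python sketch below scans Chrome extension manifests for host permissions broad enough to read every page you visit, including chatbot sites. The profile path is an assumption (it differs by operating system and browser), so treat this as a starting point rather than a complete audit.

```python
import json
from pathlib import Path

# Assumed location of Chrome extensions on Linux; macOS and Windows use
# different profile paths, and other browsers differ again.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Host-permission patterns broad enough to read every page you visit,
# including anything you type into chatbot sites.
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

for manifest_path in EXT_DIR.glob("*/*/manifest.json"):
    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    # Manifest V2 mixes host patterns into "permissions"; V3 moves them
    # to "host_permissions", so check both.
    grants = set(manifest.get("permissions", [])) | set(
        manifest.get("host_permissions", [])
    )
    if grants & BROAD:
        # "name" may be a localization placeholder like "__MSG_appName__".
        print(manifest.get("name", "?"), "->", sorted(grants & BROAD))
```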

Agentic AI Creating Office Security Problems

Agentic AI refers to AI systems that can work independently without asking humans for permission at each step. These tools can be very helpful; Claude Code, for example, helps programmers write code faster. However, security experts warn that the same tools can cause huge problems. When employees use AI assistants at work, they sometimes accidentally share secret company information, passwords, or customer data. The danger is that once an AI system has this information, it could be exposed to hackers or other bad actors. Companies need to teach employees which AI tools are safe to use and which data is okay to share.
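One concrete safeguard is to screen text before it ever reaches an AI assistant. The sketch below shows the general idea, assuming a small set of illustrative secret patterns; a real filter would need far broader coverage and would typically run in a proxy or gateway rather than rely on employees invoking it by hand.

```python
import re

# Illustrative patterns only; real filters need much broader coverage.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),        # PEM key header
    re.compile(r"(?i)(?:password|passwd|pwd)\s*[:=]\s*\S+"),  # inline credentials
]

def redact(text: str) -> str:
    """Mask anything matching a known secret pattern before it leaves the machine."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Deploy failed, config had password=hunter2 and key AKIAABCDEFGHIJKLMNOP"
print(redact(prompt))
# Deploy failed, config had [REDACTED] and key [REDACTED]
```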

Criminals Using AI to Attack Faster

Bad actors are using large language models, powerful AI systems, to make their hacking campaigns faster and farther-reaching. AI can help criminals write convincing phishing emails in many languages, create malware code, and sift through stolen data to find the most valuable information. What used to take hackers days or weeks to do manually, AI can now do in hours. This lowers the barrier to entry for cybercrime: even small-time criminals can now launch attacks that previously only large hacking groups could pull off.

FBI Warns About AI-Generated Voice Messages

The Federal Bureau of Investigation (FBI) in the United States warned about a new threat this week: criminals using AI-generated voice messages that sound like government officials. These voice messages, combined with fake text messages, trick people into giving away passwords and personal information. The scam works like this: criminals call someone while pretending to be an official, build trust by talking about something the person cares about, then ask them to move the conversation to an encrypted messaging app such as WhatsApp. Once there, they persuade the victim to share passwords or personal documents.

New York State Creates AI Safety Rules

New York State passed an important new law called the RAISE Act to make AI companies take safety more seriously. This law affects the very largest, most powerful AI models—ones that cost over $100 million to build. Companies that build these advanced AI models must write down their safety plans and share them publicly, and they must report any serious safety problems to the New York Attorney General within 72 hours. The law takes effect on January 1, 2027. This is one of the first state-level laws in America specifically designed to control the most advanced AI systems.

Encryption Under Attack in the United Kingdom

In the United Kingdom, the government reportedly ordered Apple to weaken a security protection called end-to-end encryption. Apple makes iPhones and other devices, and this encryption keeps people's private information safe by scrambling it so only the owner can read it. Rather than fight the order, Apple disabled a strong security feature called Advanced Data Protection for UK users. This decision is concerning because it makes it easier for hackers or government agencies to access people's private information.
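To make the "scrambling" concrete, here is a toy Python example using the cryptography package's Fernet recipe. It illustrates the basic idea of symmetric encryption only; it is not how Apple's Advanced Data Protection actually works.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # whoever holds this key can read the data
cipher = Fernet(key)

secret = b"my private photos and notes"
token = cipher.encrypt(secret)     # scrambled bytes, unreadable without the key

print(token[:32], b"...")          # looks like random noise
print(cipher.decrypt(token))       # b'my private photos and notes'
```

With end-to-end encryption, the equivalent of that key lives only on the user's own devices, which is exactly what keeps the data unreadable to anyone else, including the provider.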

What Companies Need to Do Now

Businesses need to prepare for stricter privacy rules. Companies should make sure employees know which AI tools are approved at work and teach them not to share sensitive information with AI assistants. Organizations must also strengthen their security measures, keep careful records of how they use data, and have plans in place to respond quickly to security incidents. The lesson is clear: as AI becomes more powerful and independent, companies need stronger defenses, better employee training, and clear rules about how AI can be used safely.
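As one small illustration of what "approved AI tools" can look like in practice, the sketch below checks outbound requests against an allowlist of AI endpoints. The host names and policy here are hypothetical placeholders; a real deployment would enforce this at a network proxy or firewall rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical policy: only these AI endpoints are approved for company use.
APPROVED_AI_HOSTS = {"api.openai.com", "api.anthropic.com"}

def is_approved(url: str) -> bool:
    """Allow a request only if its destination host is on the approved list."""
    return (urlparse(url).hostname or "") in APPROVED_AI_HOSTS

for url in [
    "https://api.anthropic.com/v1/messages",
    "https://sketchy-ai-tool.example/chat",
]:
    print(url, "->", "allowed" if is_approved(url) else "blocked")
```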

Weekly Highlights