Data Privacy & Security Weekly AI News

October 13 - October 21, 2025

This weekly update brings important news about keeping your information safe when using AI chatbots and agents.

Google announced new security tools to protect people from AI-powered cyber attacks this October. The company says it now protects more people online than anyone else in the world. These new features help stop bad actors who use artificial intelligence to create better phishing emails and scams.

A Stanford University study found that six major AI companies are using your conversations with their chatbots to train their systems. Companies like OpenAI (ChatGPT), Google (Gemini), and Anthropic (Claude) collect what you type and use it to make their AI smarter. The study warns that if you share private health information or personal secrets with an AI chatbot, that information might be kept forever and used for training.

OpenAI released a report showing how criminals try to misuse ChatGPT. Bad actors from Russia, China, and other countries attempted to use the chatbot to write malware, create phishing messages, and run scams. However, OpenAI says these attackers are mostly using AI to make their old tricks work faster, not inventing completely new attacks.

California passed America's first law to protect children using AI chatbots. Governor Gavin Newsom signed the legislation after tragic cases where teenagers had dangerous conversations with AI companions. Companies must now meet safety standards or face legal consequences.

Security experts warn that AI agents with system access are creating new risks. These autonomous agents can execute code and access sensitive data without asking permission first. Fewer than 40% of AI agents have proper security policies protecting them, leaving companies vulnerable to attacks.
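One common safeguard against this kind of risk is a permission gate: the agent's tool calls are checked against an explicit allowlist before anything runs. Here is a minimal sketch of that idea in Python. All names here (`AgentPolicy`, `run_tool`, the tool names) are hypothetical illustrations, not part of any real agent framework.

```python
# Hypothetical sketch: gating an AI agent's tool calls behind an explicit
# allowlist, so the agent cannot silently run arbitrary actions.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Allowlist of tool names the agent may invoke without human review."""
    allowed_tools: set = field(default_factory=set)

    def permits(self, tool_name: str) -> bool:
        return tool_name in self.allowed_tools


def run_tool(policy: AgentPolicy, tool_name: str, action, *args):
    """Execute a tool only if the policy allows it; otherwise refuse."""
    if not policy.permits(tool_name):
        raise PermissionError(f"Tool '{tool_name}' blocked by agent policy")
    return action(*args)


# Only a harmless search tool is allowed in this example policy.
policy = AgentPolicy(allowed_tools={"search_docs"})

# An allowed tool runs normally.
result = run_tool(policy, "search_docs",
                  lambda query: f"results for {query}", "privacy")

# A destructive tool not on the allowlist is refused.
try:
    run_tool(policy, "delete_files", lambda path: None, "/tmp")
    blocked = ""
except PermissionError as exc:
    blocked = str(exc)
```

The point of the sketch is the default-deny posture: anything not explicitly allowed is refused, which is the opposite of the "access everything without asking" behavior the experts warn about.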

Extended Coverage