Data Privacy & Security Weekly AI News
October 13 - October 21, 2025This weekly update covers major developments in AI security and privacy that affect how we use chatbots and automated AI systems.
Google launched new security features designed to fight AI-powered cyber threats during October, which is Cybersecurity Awareness Month. The tech giant says criminals are now using artificial intelligence to create more convincing phishing emails and sophisticated scams. Google claims it protects more people online than any other company in the world. The company plans to roll out five separate security announcements throughout the month, though not all details have been shared yet. These tools follow what Google calls "private by design and secure by default" principles.
Researchers at Stanford University discovered that leading AI companies are collecting your conversations with chatbots and using them to train their systems. The study looked at six major companies: Amazon (Nova), Anthropic (Claude), Google (Gemini), Meta (Meta AI), Microsoft (Copilot), and OpenAI (ChatGPT). All six companies use customer chat data by default to train their models. Some keep this information in their systems forever. If you ask a chatbot for low-sugar recipes, the AI might decide you are health-vulnerable. This information could spread through the company's other products, leading to medication ads or even reaching insurance companies.
The Stanford team found several worrying practices. Some companies let human workers read your chat conversations. Companies with multiple products, like Google and Microsoft, combine your chatbot conversations with information from their other services you use. Most companies are not taking steps to protect children's privacy. Google said it would train its models on data from teenagers if they agree. Anthropic says it does not allow users under 18, but does not check ages. The researchers recommend that the government create better privacy laws and that companies should ask permission before using chat data for training.
OpenAI published a report detailing seven incidents where threat actors tried to abuse ChatGPT for malicious purposes. Criminal groups from Russia, Korea, and China attempted to use the chatbot to refine malware code, create phishing content in multiple languages, and debug hacking tools. Individuals connected to the Chinese government tried to design systems for large-scale social media monitoring, including tracking Uyghurs. Scam networks in Cambodia, Myanmar, and Nigeria used ChatGPT to scale fraud operations by translating messages and creating fake online personas. OpenAI says these attackers are "bolting AI onto old playbooks to move faster" rather than creating entirely new types of attacks.
Researchers also discovered a concerning security flaw in how AI language models can be poisoned. Scientists at Anthropic found that inserting a trigger phrase into just 250 training documents was enough to create a backdoor in AI models of any size. This means attackers could potentially manipulate AI systems by sneaking harmful instructions into their training data. The researchers shared these findings to encourage more work on defending against data-poisoning attacks.
California became the first American state to regulate AI companion chatbots with a new law signed by Governor Gavin Newsom. The legislation, called SB 243, holds companies legally responsible if their chatbots fail to meet safety and transparency standards. This law came after several tragic cases, including the death of a teenager who had suicidal conversations with an AI chatbot. Leaked documents also revealed that some AI systems allowed inappropriate exchanges with children.
Cybersecurity experts are warning about risks from autonomous AI agents that can act with serious system privileges. These AI agents can execute code, handle complex tasks, and access sensitive data without human supervision. They do not sleep, do not ask questions, and do not always wait for permission. A major report found that fewer than four in ten AI agents are governed by proper identity security policies. Organizations with mature identity security programs are four times more likely to have AI-enabled protection capabilities. The report emphasizes that identity management has become the central control point for modern security, especially as AI agents and non-human identities dramatically expand attack surfaces.
The timing of these developments is significant as October is National Cybersecurity Awareness Month. Companies and government agencies are using this month to promote stronger digital safety practices and release important security updates. The message from experts is clear: as AI chatbots and agents become more common in our daily lives, protecting our privacy and securing these systems must be everyone's responsibility.