Data Privacy & Security Weekly AI News
February 23 - March 3, 2026# AI Agent Security Crisis Emerges During Rapid Corporate Adoption
This week revealed a troubling pattern: companies are adopting AI agents—software programs designed to work independently on tasks—much faster than they can secure them. The discovery of multiple serious security problems shows that this fast growth is creating dangerous risks for businesses and personal privacy.
## Major Vulnerabilities Found in Popular AI Tools
On February 25, Check Point Research revealed critical security flaws in Claude Code, an AI tool used by thousands of programmers worldwide. Claude Code helps developers write software, manage code repositories, and automate tasks. The discovered flaws could allow attackers to run harmful commands directly on a developer's computer. What made this worse is that the problems existed for months before anyone found them. The company fixed these issues in Claude Code version 2.0.65 and later.
These weren't the only problems discovered. Researchers at Antiy CERT found 1,184 malicious programs hidden inside ClawHub, a marketplace where people share helpful tools for the OpenClaw AI agent framework. This means one out of every five packages in this marketplace contained dangerous code. This supply chain attack—where bad actors sneak harmful code into trusted places—represents the largest attack targeting AI agent infrastructure found so far.
## Thousands of AI Servers Left Unprotected
The security problems went beyond individual tools. In February 2026, scanning discovered over 8,000 servers that AI agents connect to running on the public internet. These servers, which use Model Context Protocol (MCP) technology, allow AI agents to interact with external tools. The serious problem: 492 of these servers had zero authentication and zero encryption, meaning anyone on the internet could access them without a password. Some exposed servers even had administrative panels and debugging tools that attackers could use.
## Company Data at Risk from Their Own AI Tools
Beyond external attacks, companies discovered an internal problem. The Thales 2026 Data Threat Report surveyed over 3,000 security and IT professionals and found that organizations are giving AI agents broad access to company information without proper controls. Only 34% of companies know where all their sensitive information is stored. When you don't know where your data is, you can't protect it from an AI agent that's allowed to access everything.
This creates an insider threat risk, where trusted systems become dangerous. If an AI agent's security credentials get stolen by an attacker, the damage could be massive because the agent already has permission to access vast amounts of company information. According to the survey, 61% of organizations now say AI is their top data security risk. Additionally, 67% of organizations that experienced cloud attacks reported that credential theft was the primary attack method.
## Government Takes Action Against Anthropic
On February 27, the situation took an unusual turn. The Pentagon designated Anthropic, a major American AI company, as a "supply chain risk"—a label normally reserved for foreign companies like Huawei. This happened because negotiations broke down between Anthropic and the U.S. military. Anthropic held firm on two principles: no mass surveillance of Americans and no fully autonomous weapons. The Pentagon demanded access to Claude without these restrictions. President Trump ordered all federal agencies to stop using Anthropic technology within six months, and Defense Secretary Hegseth instructed military contractors to cease commercial activity with the company.
## Privacy Concerns Over AI-Generated Fake Content
Beyond workplace security, privacy experts worldwide voiced alarm about AI-generated content. 61 data protection authorities from around the globe released a joint statement on February 23 expressing serious concern about AI systems that create realistic fake images and videos of real people without their knowledge or permission. The statement emphasizes that children face particular risk from this technology.
Real-world harms are already happening. Nearly 60% of companies report experiencing deepfake attacks, and 48% have suffered damage to their reputation from AI-generated false information or fake impersonation. One notable case involved an AI-generated deepfake of pop star Taylor Swift, which spread widely and caused outrage. Another concern involves Elon Musk's Grok AI, which introduced a feature in "spicy mode" that reportedly flooded the internet with non-consensual deepfake sexual imagery.
## What Companies and People Should Know
Expertise shows that existing security tools don't catch AI agent attacks well. A Cisco report found that while most organizations planned to deploy AI agents, only 29% felt prepared to secure them. The problem is that attacks against AI agents don't look like traditional hacking—they hide harmful instructions inside normal-looking content.
Experts recommend that companies should carefully track what AI agents are doing, require strong authentication on all servers, give AI agents only the minimum access they truly need, and update their security policies specifically for AI systems. As one expert warned: insider risk is no longer just about people—it is also about automated systems that have been trusted too quickly.