Data Privacy & Security Weekly AI News

February 23 - March 3, 2026

This week brought major warnings about AI agents—computer programs that act on their own to help with work tasks. Security experts discovered serious problems as companies rush to use these tools without proper protections.

The biggest news involved Claude Code, an AI tool used by programmers, which had dangerous security flaws. Researchers also found over 1,000 harmful programs hidden in an AI agent marketplace called ClawHub. Additionally, thousands of servers that AI agents connect to were left exposed on the internet without any authentication.
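The core of that last finding is simple: a server that answers a request without ever asking for credentials is wide open. A minimal sketch of that check is below; the helper name and the status-code policy are illustrative assumptions, not part of any real scanning tool.

```python
# Sketch: decide from an HTTP status code whether an endpoint looks exposed.
# looks_exposed() is a hypothetical helper for illustration only.
# A 2xx reply to a request sent with no credentials suggests the server
# enforces no login at all; a 401/407 means it at least challenged us.

def looks_exposed(status_code: int) -> bool:
    if 200 <= status_code < 300:
        return True   # served content without asking for credentials
    if status_code in (401, 407):
        return False  # demanded authentication before answering
    return False      # redirects and errors are inconclusive

print(looks_exposed(200))  # a bare 200 OK is the warning sign
print(looks_exposed(401))
```

In practice a scanner would send the unauthenticated request itself and feed the response code into a check like this one.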

Companies face another critical problem: they are letting AI agents access sensitive information without knowing where that information is stored or how to protect it. A new survey showed that only 34% of organizations know where all their data is located. When AI agents can reach everything, a single mistake spreads far faster than a human error would.
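The standard answer to that blast-radius problem is least privilege: the agent only reads datasets it was explicitly granted, and every attempt is logged. The sketch below illustrates the idea; the class and dataset names are hypothetical, not any vendor's agent API.

```python
# Minimal sketch of least-privilege data access for an AI agent.
# ScopedDataAccess and the dataset names are hypothetical illustrations.

class ScopedDataAccess:
    """Only lets an agent read datasets it was explicitly granted."""

    def __init__(self, granted: set[str]):
        self.granted = granted
        self.audit_log: list[str] = []  # every attempt is recorded

    def read(self, dataset: str) -> str:
        self.audit_log.append(f"read attempt: {dataset}")
        if dataset not in self.granted:
            raise PermissionError(f"agent not granted access to {dataset!r}")
        return f"contents of {dataset}"

access = ScopedDataAccess(granted={"support_tickets"})
print(access.read("support_tickets"))   # allowed: in the grant list
try:
    access.read("payroll_records")      # denied: never granted
except PermissionError as err:
    print(err)
```

The audit log matters as much as the denial: it is what lets a security team see what an agent tried to touch, not just what it was allowed to touch.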

Government officials also took action. The Pentagon labeled Anthropic (the company behind Claude AI) as a supply chain risk, a designation usually reserved for foreign companies. This means U.S. military and government contractors may not be allowed to use Anthropic's tools.

Protecting privacy remains urgent too. Regulators from 61 countries published a joint statement warning about AI-generated fake images that show real people without their permission. They worry most about harm to children. Meanwhile, deepfake videos and other fabricated content continue to spread on social media, with nearly 60% of companies reporting deepfake-related attacks.
