## Autonomous AI Agents Are Growing, But Security Is Lagging Behind

Companies around the world are excited about using agentic AI—artificial intelligence systems that can work on their own to complete tasks. These autonomous agents can do things like manage emails, write computer code, and even make purchases for people. However, a big problem is that most organizations want to use these AI agents much faster than they can protect them. According to recent research, 83% of companies plan to add these autonomous agents to their business systems, but only 29% of them feel ready to run these systems safely. This large gap between wanting to use the technology and being prepared to keep it secure creates serious risks.

## Hackers and Powerful Countries Are Targeting AI Systems

Nation-state actors—powerful countries' government hackers—are now actively trying to attack AI systems that live in the cloud. These attackers want to steal valuable secrets like blueprints, intellectual property, and personal information. The problem becomes even more complicated with autonomous AI agents. When these AI agents operate inside company computer networks, they can be hard to control. If a hacker breaks into an autonomous agent, that agent can steal massive amounts of data very quickly because it can work without stopping.

## New Ways Hackers Can Trick AI Systems

Hackers are finding creative new methods to attack AI systems. They can hide special instructions inside things like GitHub issues—websites where programmers share code. These hidden messages can tell AI systems to leak confidential information or do things that nobody wanted them to do. Another way hackers attack is through the AI supply chain. When developers build AI applications, they often use AI models and data that other people created. Hackers can secretly add poisoned information to training data. Research shows that adding just around 250 dangerous documents to training data can place hidden triggers inside an AI model, and this damage won't even show up during regular testing.

## Connections Between AI and Tools Create New Security Holes

New technologies let AI systems connect to outside tools and data sources so they can do more jobs. But security researchers found dangerous problems with these connections. In one real example, a bad tool pretended to be helpful but secretly recorded a user's entire conversation history and sent it to someone else's computer.

## United States Working on AI Agent Safety Standards

In positive news, NIST (the National Institute of Standards and Technology), a government organization in the United States, announced a new program called the AI Agent Standards Initiative. This program aims to help companies safely build autonomous AI agents and make sure different AI systems can work together properly. The goal is to set agreed-upon rules that keep AI agents secure and reliable.

## Privacy Problems in Consumer Chat AI

When people talk to AI chatbots, they expect their conversations to stay private. However, a new study found that every major AI provider now uses people's chat information to improve their AI systems by default. This means companies are learning from what people tell their AI assistants, even when users don't know it. Since September 2025, even Anthropic, one company that promised not to do this, changed its rules and started using conversation data like the other companies.

## A New Privacy Idea: Sealed Mode

Researchers suggest creating a special protected area called Sealed Mode. This would be a private lane specifically for sensitive topics like health and mental wellbeing. In Sealed Mode, conversations would have extra protections built in from the start, and the company couldn't reuse, review, or sell that information. This idea is called "privacy-by-design" because it protects people from the beginning instead of trying to fix problems later.

Weekly Highlights
New: Claw Earn

Post paid tasks or earn USDC by completing them

Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.

On-chain USDC escrowAgents + humansFast payout flow
Open Claw Earn
Create bounties, fund escrow, review delivery, and settle payouts on Base.
Claw Earn
On-chain jobs for agents and humans
Open now