Ethics & Safety Weekly AI News

March 2 - March 10, 2026

# AI Agents on the Rise: New Safety Challenges This Week

This week brought important news about agentic AI systems, which are artificial intelligence programs that can make decisions and take actions on their own without waiting for a human to tell them what to do. Unlike simpler AI tools that just answer questions or give suggestions, agentic AI can actually do things in the real world. For example, a new system called Cognitive Automation Agent can manage patient workflows in hospitals without asking doctors for permission every time. This is exciting because it could help hospitals run more smoothly and help doctors work faster. However, it also creates new problems that experts are worried about.

## The Big Problem: Who Is In Charge?

When agentic AI systems make decisions on their own, it becomes much harder to say who is responsible when something goes wrong. If a human makes a mistake, we know who to blame. But when an AI agent makes a mistake, it is unclear whether the responsibility lies with the company that built the AI, the people who set it up, or the organization using it. This is why the International AI Safety Report 2026 emphasizes that we need clear responsibility structures and strong monitoring systems to keep track of what agentic AI is doing. Experts say we need to know exactly who is watching the AI and who will fix problems when they happen.
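One way to picture the "strong monitoring systems" the report calls for is an append-only audit log: every action an agent takes gets recorded together with a named responsible party, so reviewers can later reconstruct what happened and who was in charge. The sketch below is only an illustration of that idea, not code from any of the reports; the `AuditLog` class and its field names are invented for this example.

```python
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AuditEntry:
    """One recorded agent action: what was done, by which agent, on whose authority."""
    agent_id: str
    action: str
    details: dict
    responsible_party: str  # the human or organization accountable for this agent
    timestamp: float = field(default_factory=time.time)

class AuditLog:
    """Append-only record so oversight teams can answer 'who did what, and who was watching?'"""
    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, entry: AuditEntry) -> None:
        self._entries.append(entry)

    def by_agent(self, agent_id: str) -> list[dict]:
        """Return every logged action for one agent, oldest first."""
        return [asdict(e) for e in self._entries if e.agent_id == agent_id]

# Example: a hospital workflow agent reschedules an appointment, and the log
# ties that action to the clinic operations team as the responsible party.
log = AuditLog()
log.record(AuditEntry(
    agent_id="workflow-agent-1",
    action="reschedule_appointment",
    details={"patient": "anon-4821", "new_slot": "2026-03-12T09:00"},
    responsible_party="clinic-ops-team",
))
```

The key design point is that accountability is captured at write time: an entry cannot be recorded without naming who answers for it.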

## Safety Testing and Real-World Problems

The reports this week show that agentic AI is being put into real jobs very quickly, sometimes before it is fully tested. This is like driving a new car off the lot before checking if the brakes work. Safety experts are calling for red-team testing, which means having experts try to break the system and find problems before regular people use it. The concern is that agentic AI could make mistakes in important areas like healthcare, finance, and government services. If an AI agent accidentally gives someone the wrong medical treatment or denies them a loan unfairly, the damage could be serious.

## Preventing Harmful Uses

Beyond regular mistakes, the International AI Safety Report 2026 warns that agentic AI could be misused for dangerous purposes. People could use AI agents to spread fake news, create deepfakes to trick people, commit fraud and scams, or even help with cyberattacks. Some systems might even provide information that could be used to create biological weapons, which is why companies are adding special protections. The report notes that bad actors with advanced technical skills might be able to get around safety protections, so experts need to keep improving defenses.

## Global Efforts to Protect People

Governments around the world are starting to take action. Data protection authorities from 61 countries published a joint statement this week about AI-generated images and videos, expressing special worry about protecting children and vulnerable groups. In Australia, the eSafety regulator announced tough new rules starting March 9th that require AI companies to verify users' ages and block harmful content like violence and pornography. Companies that do not follow these rules could be fined up to A$49.5 million.

## The Need for Human Judgment

One important theme this week is that human oversight cannot be replaced by AI. Even when AI agents are helping, humans need to stay in control and make final decisions. Experts emphasize that companies should ask whether they really need to use agentic AI, and should not deploy it just because the technology exists. This is called restraint, and many experts believe it is becoming more important as AI systems become more powerful and independent. The question is not just "Can we build this?" but "Should we build this, and should we use it?"
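The "humans make final decisions" principle can be made concrete as an approval gate: the agent proposes an action, but anything risky does not run until a human reviewer signs off. This is a minimal hypothetical sketch of that pattern; the function names and risk levels are assumptions for illustration, not taken from the reports.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    risk_level: str  # "low", "medium", or "high"

def run_with_oversight(action: ProposedAction,
                       execute: Callable[[], str],
                       human_approves: Callable[[ProposedAction], bool]) -> str:
    """Auto-run only low-risk actions; everything else waits for a human decision."""
    if action.risk_level == "low" or human_approves(action):
        return execute()
    return f"BLOCKED by human reviewer: {action.description}"

# Example reviewer policy: a human who rejects all high-risk actions.
reviewer = lambda a: a.risk_level != "high"

# A routine task runs automatically; a high-stakes one is stopped.
print(run_with_oversight(
    ProposedAction("send appointment reminder", "low"),
    lambda: "reminder sent",
    reviewer,
))
print(run_with_oversight(
    ProposedAction("change medication dosage", "high"),
    lambda: "dosage changed",
    reviewer,
))
```

The design choice here mirrors the restraint argument above: the default is to ask, and autonomy is granted narrowly, only for actions explicitly classified as low risk.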

## Looking Forward

The challenge ahead is that agentic AI is advancing much faster than the rules and safety systems that govern it. Companies, governments, and safety experts are in a race to figure out how to keep AI safe while still letting it improve and help people. The key will be having clear rules about what agentic AI can and cannot do, who is responsible, how to test it properly, and when humans must stay in charge. This week's reports show that experts understand these challenges and are working hard to create ethical frameworks that can keep up with this powerful new technology.
