Ethics & Safety Weekly AI News

February 23 - March 3, 2026

This week saw major developments in AI safety and ethics in two countries. In Canada, government officials are weighing new laws that would require AI companies like OpenAI to report dangerous content to police. The push follows an incident in which OpenAI blocked a person from using ChatGPT over posts about gun violence but did not alert authorities before the person went on to harm others. In the United States, a major disagreement has emerged between the military and Anthropic, a company that makes AI tools. The military wants to use the AI for anything it chooses, while Anthropic insists on keeping its safety rules in place. Anthropic's leader, Dario Amodei, announced that the company will hold to its safety values even if that means losing the military as a customer. Together, these stories show the world wrestling with two big questions: should companies be required to report dangerous behavior their AI uncovers, and how much control should militaries have over how AI is used? The debates highlight growing concern about what happens when AI systems find information about potential harm but don't tell the right people.

Extended Coverage