Ethics & Safety Weekly AI News

February 23 - March 3, 2026

This week brought two major stories about keeping AI systems safe and responsible, with consequences for people around the world.

First, the government of Canada is considering new rules for artificial intelligence companies. These rules would require AI companies to alert police when they discover dangerous posts or threats online. The idea arose after OpenAI, a major AI company, blocked a person from using its ChatGPT service because that person had posted alarming things about guns and violence. The problem: OpenAI did not tell the police about these posts until after the person had already hurt eight people in the small town of Tumbler Ridge, British Columbia. The delay upset many people and started conversations about what AI companies should do when they find threatening content.

In the United States, a different kind of safety problem is unfolding between the military and an AI company called Anthropic. The U.S. Department of Defense (the country's military leadership) wanted Anthropic to remove safety limits from its Claude AI tool so the military could use it for any purpose. Defense leaders gave Anthropic just a few days to agree. If the company refused, the military threatened to ban Anthropic from working with the government, and even suggested it might force the company to comply using old wartime powers.

Anthropic's leader, Dario Amodei, did something brave on Thursday: he publicly said his company would NOT remove these safety protections, even if it meant losing military business. Amodei explained that Anthropic would not allow its AI to be used for fully autonomous weapons that fire on their own, or for spying on American citizens without permission. This was a big decision because the military is a powerful and important customer, and losing it could hurt the company's revenue and growth.

Experts who study business and ethics say Anthropic's decision shows real leadership because the company is choosing safety values over money. One expert from a well-known business school explained that when companies must choose between making more profit and doing the right thing, it is very important for leaders to think carefully about everyone involved: not just the company, but also employees, customers, and society. The expert also pointed out that the situation is complicated, because AI can actually help make weapons and military tools safer, not just more dangerous.

These two stories show that the world is having big conversations about responsibility and safety with new AI technology. People are asking: Should companies be required to tell authorities about danger? Should governments be able to control how companies use their own technology? And most importantly, how do we make sure AI systems are used safely and ethically by everyone, whether they are private companies or military organizations?
