Ethics & Safety Weekly AI News
February 23 - March 3, 2026

This week saw major developments in AI safety and ethics across two countries. In Canada, government officials are considering new laws that would require AI companies such as OpenAI to notify police when they find dangerous content. The proposal follows an incident in which OpenAI blocked a person from using ChatGPT over posts about gun violence but did not alert authorities before the person went on to harm others. Meanwhile, in the United States, a major disagreement has emerged between the military and Anthropic, a company that makes AI tools. The military wants to use the AI however it sees fit, while Anthropic insists on keeping its safety rules in place. Anthropic's leader, Dario Amodei, announced that the company will stick to its safety values even if that means losing the military as a customer.

These stories show the world wrestling with a big question: should companies be required to report dangerous behavior their AI uncovers, and how much control should military organizations have over how AI is used? The debates highlight growing concern about what happens when AI systems find information about potential harm but do not tell the right people.