Ethics & Safety Weekly AI News
March 16 - March 24, 2026

This week, the United States government made major changes to how AI systems will be regulated and kept safe. On March 20, 2026, the White House released a new plan for AI regulation that sets a single set of rules for the entire country instead of leaving each state to write its own. The plan focuses on protecting children, stopping harmful AI-generated fake videos, and requiring AI companies to follow safety rules when building their tools.
A major dispute this week involved Anthropic, a company that makes AI assistants. The U.S. military wanted unrestricted use of Anthropic's AI, but Anthropic refused, citing ethical concerns. In response, the government said it would no longer do business with Anthropic. Anthropic then went to court to challenge the decision, arguing that the government was punishing the company for its safety principles.
The new White House plan also sets out rules that AI developers must follow to prevent harm. Companies that build AI tools will be expected to assess how their systems might hurt people. The plan also addresses AI-enabled systems used in military and government work, saying these systems need strong cybersecurity protection. Overall, this week showed that the government is taking AI safety more seriously than ever before, even though companies and the government do not always agree on what safe AI means.