Ethics & Safety Weekly AI News
March 16 - March 24, 2026

## Big Changes Coming for AI Safety Rules
This week brought important news about how artificial intelligence will be controlled in the United States. On March 20, 2026, President Trump's administration released a new national plan for AI regulation. This plan is like a rulebook that tells Congress what new laws to make about AI. Before this plan, each state had different rules about AI, which made things confusing for companies.
## What the New Plan Says About AI Safety
The White House plan focuses on keeping AI safe and stopping bad things from happening. One big part of the plan is protecting children from harmful AI. The plan says AI companies must build in safety features to keep kids safe online. Another important part is about fake AI videos that look like real people but are actually made by computers. The plan wants rules so people know when a video is fake and made by AI.
The plan also says AI developers – the people who create AI tools – must think carefully about how their AI might hurt people. This is called duty of care. It means companies must design their AI to prevent problems before they happen.
## The Anthropic Story: When Ethics Meets Power
This week also showed that AI safety can cause big fights between companies and the government. Anthropic is a company that makes AI assistants that millions of people use. The U.S. Department of Defense wanted to use Anthropic's AI for "any lawful use," but Anthropic said no. The company had ethical concerns – it was worried that its AI might be used in ways that could hurt people.
When Anthropic refused, something unusual happened. The government said Anthropic was a security risk and banned it from all government work. This was a big punishment because Anthropic could lose billions of dollars. Anthropic did not give up. On March 9, the company went to federal court and said the government's decision was unfair. Anthropic argued that the government was punishing them because they had different beliefs about AI safety, not because Anthropic was actually dangerous.
## Why This Matters for Everyone
These events show that AI ethics – doing the right thing with AI – is becoming really important to governments and companies. The new White House plan says that AI-enabled systems used by the military and government need strong cybersecurity and safety checks. This means AI tools used for important work need to be very carefully designed and watched.
## Looking Ahead
Experts are watching to see if Congress will turn the White House plan into actual laws. They are also watching the court case between Anthropic and the government to see how much power companies have to refuse certain uses of their AI. This week made it clear that AI safety is not just a technical problem – it is a question about ethics, power, and who gets to decide how AI is used.