Ethics & Safety Weekly AI News

March 30 - April 7, 2026

Governments and organizations worldwide are tightening oversight to ensure AI systems are safe and fair. This weekly update covers major developments in AI regulation and safety.

In March 2026, many countries released new AI rules. These rules require companies to label AI-generated content, such as images and videos, so people can tell it is synthetic. Companies must also explain how they collect and use personal data when training AI systems. Some countries are restricting high-risk AI uses, such as facial recognition without proper approval.

California took strong action by requiring companies that sell AI to the state to follow strict safety rules. China created new review processes to ensure AI systems are used ethically, including rules to prevent workers from being treated unfairly by AI algorithms. China also released rules for interactive AI chat systems to protect young people and elderly citizens from harm.

A major report found that most companies are not doing enough to keep AI safe. Only 12% of companies have policies ensuring human review of AI system outputs, and most do not measure the impact of their AI on the environment or human rights.

New rules also mandate AI watermarking by November 2026 and ban certain harmful AI systems, such as those that generate fake explicit images. These changes show that ethics and safety are now top priorities for AI development worldwide.
