# Ethics & Safety Weekly AI News

March 23 - March 31, 2026

## Major Federal AI Safety Framework in the United States

On March 20, 2026, the Trump Administration released a National Policy Framework for Artificial Intelligence. It is significant as the first unified federal framework that could govern how AI is developed and used across the entire country, consolidating rules that were previously scattered across different states and agencies. The framework focuses on keeping children safe online, supporting power generation for data centers, and preventing bias in AI systems. It also includes protections so that companies cannot use people's voices, faces, or other identifiable features to create fake videos or audio without permission. There are exceptions for parody, satire, and news reporting, which means comedians and journalists can still use AI in their work.

## State-by-State Safety Bills Protect Children

Across the United States, many states are racing to pass their own chatbot and AI safety bills. In Arizona, a children's chatbot safety bill, HB 2311, passed the House in February and is moving through the Senate. Oklahoma advanced two chatbot safety bills, HB 3544 and SB 1521, through their respective chambers. These bills share one main goal: keeping children safe from AI chatbots that could encourage harmful behavior.

California is leading with multiple bills addressing different safety concerns. One bill, SB 1015, expands laws to include threats or extortion using AI-generated deepfake images, especially those targeting minors. Another bill, SB 928, specifies that teachers at California State University must be human, not AI. A third bill, SB 300, requires chatbot companies to prevent their products from creating or sharing sexually explicit material.

## Protecting Children Across Multiple States

Many other states are passing similar protective measures. Hawaii's SB 540 passed both the Senate and House; it requires AI companies to disclose that users are talking to AI, to provide privacy tools, and to maintain protocols for responding if someone mentions suicide or self-harm. Illinois also has chatbot safety bills, including HB 1263, which bans sexually explicit content, prevents AI from encouraging unhealthy emotional dependence, and stops the gamification of chatbot engagement for minors.

Kansas passed HB 2671, the Kansas Community Harmed by AI Technology Act, which requires age verification for AI chatbot access and parental consent for minors. Massachusetts has several bills in progress, including S 243 and S 264, which require companies to tell consumers when they are using software that simulates human conversation. Missouri advanced HB 1913, a deepfake bill protecting against intimate AI-generated images, out of committee on a 10-0 vote.

Nebraska is considering LB 939, which prevents AI chatbots from looking human-like to minors, and LB 1185, which requires safety protocols in AI chatbots. New York introduced S 9051, another kids chatbot safety bill that is moving through committees.

## Ethical Frameworks and Research

Beyond laws, scientists and organizations are developing ethics frameworks for AI. Researchers at Carnegie Mellon University and the University of Michigan created a new approach called the capabilities approach-contextual integrity (CA-CI) framework. This framework helps evaluate whether AI systems respect people's privacy and dignity in different situations. For example, an AI system might be appropriate in one setting but not in another. The researchers show how this framework can help enforce the European Union's AI Act, passed in 2024, which requires fundamental rights impact assessments before using high-risk AI systems.

Canada is investing in AI safety research with over $1 million in funding through the Canadian AI Safety Institute. Eight research projects will examine AI safety through social sciences and humanities perspectives. These projects address challenges like Indigenous data sovereignty, certification of AI for professional roles, and protecting democratic processes from AI bias.

## Workplace and Professional Safety

Connecticut passed significant legislation called SB 5, which requires whistleblower protections for people who work at AI companies that create frontier AI models. The bill also requires companies to disclose when synthetic content is being used and expands rules for how AI is used in employment decisions. Additionally, subscription-based AI providers must clearly explain their pricing and terms.

California has a bill, SB 947, establishing worker protections regarding the use of AI and automated decision systems in employment. This protects workers from unfair automated decisions about hiring, firing, or other job matters.

## Global Standards and Healthcare

The World Health Organization released guidance on March 25, 2026, addressing ethics and governance of AI in healthcare, specifically focusing on large multi-modal models. This reflects growing concern about how AI systems are used in medical settings. Meanwhile, the IEEE Standards Association is developing certifications and standards for AI systems to ensure they meet cybersecurity, privacy, and ethical requirements. These certifications help prove that AI systems are trustworthy and secure, with protections against quantum computing threats and advanced privacy safeguards.
