The White House Released a Big Plan for AI Rules

On March 20, 2026, the White House released an important document called the National Policy Framework for Artificial Intelligence. The document does not create new laws right away; instead, it recommends what laws Congress should create. It covers seven areas where the government wants new rules. The main idea is that America needs one national set of AI rules instead of a different set in every state. The White House says that if each state makes its own AI laws, it will be harder and more expensive for companies to do business.

Why One National Standard Matters

The White House explains that a patchwork of state laws would hurt America's ability to lead the world in artificial intelligence. Right now, some states like Colorado have already made their own AI rules. The federal government wants to make sure these state laws do not stop companies from building and testing new AI systems. The framework wants to give companies the freedom to innovate while still protecting important things like children's safety and creator rights.

What the Framework Wants for Innovation

The National Policy Framework wants America to remove barriers to innovation and become the world leader in AI. The government plans to help companies in several ways, including letting them test AI in special safe areas called regulatory sandboxes. The framework also wants to use existing government data to help companies build better AI systems. Instead of creating a brand new government agency just for AI, the government wants to use the agencies that already exist. The White House also trusts companies to create their own industry-led standards for safe AI.

Rules for AI That Help Customers

In the United Kingdom, the Competition and Markets Authority (CMA) released guidance on March 9, 2026 about AI agents that work with customers. An AI agent is an AI system that can do tasks like answering customer questions, processing refunds, or recommending products without a human telling it what to do each time. The CMA made it clear that companies cannot use AI as an excuse to break consumer protection laws. Companies must tell customers when they are talking to an AI instead of a human. Companies also need to make sure their AI agents follow all the rules about treating customers fairly.
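The disclosure rule described above can be pictured in code. This is a minimal sketch, not anything from the CMA guidance itself: the class and function names (CustomerAgent, respond) are made up for illustration, and the model call is a stand-in.

```python
# Hypothetical sketch: a customer-facing agent that tells the customer it
# is an AI before answering, in the spirit of the disclosure rule above.
# CustomerAgent and its methods are illustrative, not a real library.

AI_DISCLOSURE = "You are chatting with an automated AI assistant."

class CustomerAgent:
    def __init__(self):
        self.disclosed = False  # track whether this customer was told

    def respond(self, message: str) -> str:
        reply = self._answer(message)
        if not self.disclosed:
            # Disclose once, at the start of the conversation.
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n{reply}"
        return reply

    def _answer(self, message: str) -> str:
        # Stand-in for a real model or tool call.
        return f"Here is help with: {message}"

agent = CustomerAgent()
first = agent.respond("Where is my refund?")
second = agent.respond("Thanks")
```

The point of the sketch is that disclosure happens in code before the first reply goes out, so it cannot be forgotten by a human operator.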

Keeping Watch on AI Agents

The CMA's guidance says that putting an AI agent to work is not like a "set it and forget it" situation. Companies must have real people watching to make sure the AI agent is doing the right thing. If an AI agent makes a mistake or breaks a rule, the company must fix it very quickly because AI agents can affect thousands of customers at the same time. Companies that use AI agents should test them carefully before letting them help real customers. The guidance also says companies should make sure their AI agents have compliance by design, which means the rules about treating people fairly are built into how the AI works from the start.
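One way to picture compliance by design is a policy check that runs before an agent's action executes, with escalation to a human when the action falls outside what the agent is allowed to do on its own. This is a hypothetical sketch: the refund limit, function name, and return values are all made up for illustration.

```python
# Hypothetical sketch of "compliance by design": the rule is checked in
# code before the action runs, instead of being audited afterwards.
# MAX_AUTO_REFUND and execute_refund are illustrative, not a real API.

MAX_AUTO_REFUND = 100.00  # assumed policy limit for unattended refunds

def execute_refund(amount: float, human_approved: bool = False) -> str:
    """Process a refund only if it passes the built-in policy check."""
    if amount <= 0:
        raise ValueError("refund amount must be positive")
    if amount > MAX_AUTO_REFUND and not human_approved:
        # Escalate instead of acting: a person reviews large refunds.
        return "escalated_to_human"
    return "refund_processed"

assert execute_refund(25.0) == "refund_processed"
assert execute_refund(500.0) == "escalated_to_human"
assert execute_refund(500.0, human_approved=True) == "refund_processed"
```

Because the limit is enforced inside the function, an agent that calls it thousands of times a day can never skip the check, which is the "built in from the start" idea the guidance describes.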

The Big Problem: AI Agent Use Is Growing Faster Than Safety Plans

According to research from the consulting company McKinsey, companies are using more and more AI agents, but they are not ready to manage all the risks. Only about 30 percent of companies have good governance and safety plans for their AI agents. The research shows that security and risk concerns are the top reason companies are worried about using AI agents widely. Many companies are moving their AI agents into real business use very quickly, but they do not have enough people trained to handle the problems.

What Experts Say Organizations Need to Do

Experts who spoke at a big security conference in March 2026 said that companies need four main things to use AI agents safely. First, companies need to know exactly what AI agents are running in their business. Second, someone needs to be responsible for each AI agent and what it does. Third, companies need clear rules about what each AI agent is allowed to do. Finally, companies need to make sure their leaders at the highest level know about AI agent risks. Experts said that companies are putting AI agents to work much faster than they are building systems to control and manage them.
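The four needs above map naturally onto a simple agent registry: an inventory of every agent, a named owner for each one, a list of allowed actions, and a summary that leadership can review. The sketch below is hypothetical; every class and field name is made up for illustration.

```python
# Hypothetical agent-governance registry covering the four needs above:
# inventory, a responsible owner, allowed actions, and a leadership
# report. AgentRecord and AgentRegistry are illustrative names.

from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    owner: str                       # person accountable for this agent
    allowed_actions: set = field(default_factory=set)

class AgentRegistry:
    def __init__(self):
        self._agents = {}  # inventory: every agent must be registered

    def register(self, record: AgentRecord) -> None:
        self._agents[record.name] = record

    def is_allowed(self, agent_name: str, action: str) -> bool:
        # Unregistered agents are allowed to do nothing.
        record = self._agents.get(agent_name)
        return record is not None and action in record.allowed_actions

    def board_report(self) -> list:
        # One line per agent for leadership review.
        return [f"{r.name}: owner={r.owner}, actions={sorted(r.allowed_actions)}"
                for r in self._agents.values()]

registry = AgentRegistry()
registry.register(AgentRecord("refund-bot", "jane.doe", {"issue_refund"}))
```

A registry like this makes the experts' point concrete: an agent that is not in the inventory, or that tries an action outside its list, is blocked by default rather than discovered after the fact.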

What Comes Next

The National Institute of Standards and Technology (NIST) is planning to hold a meeting in April 2026 where companies, government workers, and experts will talk about what rules and standards AI agents need. This meeting will help decide what rules should be changed so that AI agents can work better. Industries like healthcare, food, and banking are all asking how to use AI agents safely because their work is heavily regulated. Right now, it is not completely clear how all the old rules about human decision-making will apply to AI agents. Companies in these industries will need to figure this out as they start using more AI agents.
