Legal & Regulatory Frameworks Weekly AI News
February 9 - February 17, 2026

Contracts May Not Protect Companies from AI Agent Mistakes
This week, legal experts at major law firms issued an important warning: most companies are using old contracts that do not protect them from the new risks of agentic AI. These contracts were written for software that worked in a simple way—a human told the computer what to do, and it did it. But AI agents work differently. These AI systems can make decisions on their own, take actions without being told to do so, and even cause problems that no one expected.
When problems happen, most contracts say the company using the AI (the customer) must pay, not the company that made the AI (the supplier). For example, if an AI agent makes a mistake and sends money to the wrong supplier, or charges the wrong price to customers, the company that bought the AI usually has to pay for the mistake. Many old contracts even say that companies should not trust what the AI does. This creates a big problem: companies are responsible for fixing mistakes made by AI they do not fully control.
Compliance Rules Are Becoming More Strict
Governments around the world are making new rules about how companies must use AI. In Europe, the EU AI Act is already law and is being phased in, changing how companies build and use AI systems. The biggest deadline is August 2, 2026, when the strict rules begin for high-risk AI systems—like AI that helps decide who gets a job or a loan. Companies using AI for these important decisions must make sure their AI is transparent, meaning people can understand why the AI made its decision.
In the United States, different states are passing their own laws because the federal government has not made one rule for everyone. California passed a transparency law that went into effect on January 1, 2026. Texas passed a "Responsible AI Governance Act" that also started January 1, 2026. Colorado passed the AI Act, which starts on June 30, 2026. Each of these laws has different rules, which makes it complicated for companies that work in multiple states.
The SEC Is Watching for Companies That Lie About AI
In early February 2026, the SEC (the agency that regulates U.S. securities markets) announced that "AI washing" is now their top concern. AI washing means when companies say they use AI, but they really do not. Or they make AI sound more powerful and smart than it actually is. The SEC is now asking companies to prove that they really use AI, and they want to see the computer code and data to check if it is true. This is a big change from just trusting what companies say.
New Tools Help Companies Follow the Rules
Companies are developing new ways to make sure their AI agents follow the rules. One popular approach is "policy as code". This means writing rules in computer language that the AI agent must follow automatically. For example, a rule might say an AI agent can only approve spending up to $1,000, and for anything more, a human must say yes first. This helps companies make sure their AI stays within the boundaries. Companies like HSBC have shown that it is possible to run hundreds of AI systems while keeping strict control.
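The spending rule described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the policy-as-code idea, not any company's real system; the function name, the $1,000 limit, and the approval flag are all assumptions for the example.

```python
# A minimal policy-as-code sketch: a spending rule the AI agent must
# pass before acting. Names and thresholds are illustrative only.

SPENDING_LIMIT = 1_000  # dollars; anything above this needs a human


def check_spending_policy(amount: float, human_approved: bool = False) -> bool:
    """Return True if the agent may proceed with a purchase of `amount`."""
    if amount <= SPENDING_LIMIT:
        return True  # within the agent's autonomous limit
    return human_approved  # above the limit: only with explicit human sign-off


# The agent would call the policy check before every purchase.
print(check_spending_policy(250))          # True  — under the limit
print(check_spending_policy(5_000))        # False — blocked until approved
print(check_spending_policy(5_000, True))  # True  — a human said yes
```

Because the rule lives in code, it is enforced automatically on every action, instead of relying on the agent to remember a written policy.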
Another important idea is human-in-the-loop governance, which means important decisions must be approved by a human before the AI can do them. This is especially important for decisions that affect people's lives, like hiring decisions or credit decisions. Companies need to clearly write down which decisions are too important to leave to AI alone.
What Companies Should Do Now
Legal experts are giving companies clear advice: do not wait. Companies should look at their old contracts and see if they are protected from AI agent mistakes. They should also test their AI systems to see what could go wrong. Companies should limit what their AI agents are allowed to do—for example, not letting AI agents make very expensive decisions or access very private information. Finally, companies must update their contracts to clearly say who is responsible when AI makes a mistake, especially for decisions that affect customers or break the law.