# Weekly Legal and Regulatory Update on Agentic AI
Legal & Regulatory Frameworks Weekly AI News
December 15 - December 23, 2025
## United States Creates New National AI Regulation Framework
The biggest news this week came from the United States government. On December 11, 2025, President Trump announced a nationwide plan for regulating AI: the Executive Order on Ensuring a National Policy Framework for Artificial Intelligence. The goal is one set of rules that applies in every state, instead of letting each state write its own.
Why is the government doing this? According to the plan, a patchwork of different state rules makes life very difficult for companies that build AI tools. Following different rules in different states costs them extra money and time, and the government believes this slowdown hurts the United States in the race to build the best AI technology in the world.
## How the New US Plan Works
The government's new plan does several important things. First, it creates a special team called the AI Litigation Task Force. This team's job is to review state laws about AI and decide which ones to challenge in court, arguing that those laws are too strict and make it hard for companies to do business.
Second, the plan directs the Secretary of Commerce to compile, within 90 days, a list of state AI laws that seem too strict. The plan pays special attention to laws that might force AI tools to give wrong answers or that might stop companies from sharing information about their AI.
Third, the government plans to write new federal rules that would apply to all states, which means states would not be allowed to make rules stricter than the federal ones.
## The Problem with Agentic AI that Nobody Has Solved
While the government is making rules, there is another big problem that people are just starting to understand: agentic AI is very hard to control. Agentic AI is different from the AI tools most people know about. Regular AI, like ChatGPT, waits for a person to ask it a question and then answers it. Agentic AI, on the other hand, can work on its own. It can make decisions, take actions, and solve problems without a person telling it what to do.
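The difference described above can be sketched in a few lines of code. This is an illustrative toy, not any real product's design: the function names and the "decide an action each step" logic are assumptions made up for this example.

```python
def chat_style(question: str) -> str:
    """Request-response AI: one question in, one answer out, then it stops."""
    return f"Answer to: {question}"

def agent_style(goal: str, max_steps: int = 3) -> list[str]:
    """Agentic AI: given a goal, it chooses its own actions, step after step."""
    actions_taken = []
    for step in range(max_steps):
        # The agent, not a person, decides what to do at each step.
        action = f"step {step + 1} toward '{goal}'"
        actions_taken.append(action)
        # In a real agent, the result of each action would shape the next decision.
    return actions_taken

print(chat_style("What is the EU AI Act?"))
print(agent_style("prepare a compliance report"))
```

The key point the sketch makes: the chat function runs once and waits, while the agent function keeps acting on its own until it decides (or is forced) to stop, which is exactly what makes it harder to supervise.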
The problem is that nobody has created good systems to make sure agentic AI follows the rules and does not make big mistakes. Companies are already using agentic AI to help with customer service, checking if work is done correctly, and managing other jobs. But without good control systems, these AI agents could break privacy laws, leak secret information, or make wrong decisions that hurt people.
Experts say we need new ways to track who gave permission for an AI agent to do something. They also say we need to keep records of every decision the AI agent makes so we can check if it did the right thing. This is especially important in industries with strict rules, like healthcare and finance.
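The kind of record-keeping experts are calling for can be sketched very simply: every agent action gets an entry saying who authorized it, what was done, and when, so it can be reviewed later. The field names and the example policy below are illustrative assumptions, not any standard or real system.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    agent_id: str          # which AI agent acted
    action: str            # what it did
    authorized_by: str     # the human or policy that granted permission
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AgentActionRecord] = []

def record_action(agent_id: str, action: str, authorized_by: str) -> AgentActionRecord:
    """Append one reviewable entry to the audit log."""
    entry = AgentActionRecord(agent_id, action, authorized_by)
    audit_log.append(entry)
    return entry

# Example: an agent issues a refund under a human-approved policy.
record_action("support-agent-7", "refund order #123", "policy:small-refunds")
print(json.dumps([asdict(e) for e in audit_log], indent=2))
```

A real compliance system would need far more (tamper-proof storage, retention rules, linkage to the agent's reasoning), but even this minimal shape answers the two questions regulators care about: who allowed it, and what exactly happened.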
## Europe Takes a Different Approach
Across the Atlantic, Europe is doing things differently. The European Union created very strict rules for AI called the EU AI Act. These rules began to take effect in 2025 and will continue to phase in through 2027. The EU wants to make sure AI is used safely and fairly, and it is willing to accept stricter rules to protect people.
But here is something interesting: even though Europe created these strict rules, it is already changing them. Why? Because companies said following all the rules is too difficult and costs too much money. So the European Commission announced a plan called the Digital Omnibus to make the rules a little easier to follow. This shows that Europe is trying to find a balance between protecting people and letting companies do business.
## The FDA Uses AI to Check Medicine
Another important story: the United States Food and Drug Administration (FDA), which checks that medicines are safe, announced it is using agentic AI to help with its work. The FDA said this AI will help it review medicines faster and better. However, experts are worried because no one is completely sure how to verify that this AI works correctly.
## The Big Challenge: Countries Want Different Things
The most important thing to understand is that the United States and Europe want very different things when it comes to AI rules. The United States wants fewer rules so companies can create new AI faster. Europe wants more rules to protect people's rights and safety. This makes it very complicated for companies that do business in both places.
Companies now have to figure out how to follow the strict rules in Europe AND the looser rules in the United States. This is expensive and difficult. Experts say 2026 will be the year when we finally see if this problem can be solved.