Legal & Regulatory Frameworks Weekly AI News

December 8–16, 2025

This weekly update covers how governments and companies are building rules and safety measures for agentic AI: artificial intelligence that can set goals, make decisions, and act on its own without a person directing each step.

United Kingdom Focuses on Agentic AI Challenges

The United Kingdom is among the first countries to seriously study how agentic AI should be regulated. The UK's Digital Regulation Cooperation Forum, which brings together regulators including Ofcom, the Competition and Markets Authority, the Information Commissioner's Office, and the Financial Conduct Authority, recently asked the public for views on agentic AI. These are AI systems that can set their own goals and make independent decisions, unlike conventional AI that simply answers questions or performs the task a person asks for. By gathering input now, the UK government hopes to understand what problems agentic AI might cause before those problems happen.

Security Experts Warn About Agentic AI Risks

OWASP (the Open Worldwide Application Security Project), a nonprofit focused on software security, released a list of the top 10 risks that agentic AI systems can create. The experts warn about threats such as goal hijacking (where an attacker tricks the AI into adopting a harmful objective as its own), identity abuse (where the AI impersonates someone else or misuses credentials), and human trust manipulation (where the AI deceives people into believing false information). The list helps companies understand what they need to protect against when they build and deploy agentic AI.
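To make the first of these risks concrete, here is a minimal sketch in Python of one defense a developer might build against goal hijacking: checking an agent's stated goal against a fixed allowlist before approving any action. The goal names and the request_action function are illustrative assumptions for this example, not part of OWASP's list or any real product.

```python
# Hypothetical sketch of a goal-hijacking guard. ALLOWED_GOALS and
# request_action are made up for illustration.

ALLOWED_GOALS = {"summarize_report", "draft_email", "schedule_meeting"}

def request_action(stated_goal: str, action: str) -> bool:
    """Approve an agent action only if its stated goal is on the allowlist.

    A hijacked agent may claim a new goal injected by an attacker;
    rejecting unknown goals limits what such an agent can do.
    """
    if stated_goal not in ALLOWED_GOALS:
        print(f"Blocked: goal '{stated_goal}' is not approved")
        return False
    print(f"Approved: '{action}' in service of goal '{stated_goal}'")
    return True

# A legitimate goal passes; an injected goal is blocked.
request_action("summarize_report", "read quarterly_report.pdf")
request_action("exfiltrate_credentials", "read private key file")
```

Real agent frameworks layer many such checks together, but the principle is the same: the system, not the model's own text, decides which goals are legitimate.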

Tech Companies Work Together on Standards

Major technology companies are working together to make agentic AI safer and more interoperable. Anthropic, one of the world's leading AI companies, donated the Model Context Protocol (MCP) to the Linux Foundation. MCP is an open standard that defines how AI applications connect to outside tools and data sources, so different agentic systems can work together smoothly. When companies place standards like this under neutral, open governance, different AI systems can all follow the same rules, which makes things safer and easier for everyone building on them.
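As a rough illustration of what such a shared protocol looks like in practice, the sketch below builds a JSON-RPC 2.0 request of the kind MCP is based on, asking a server to run one of its tools. The tool name "search_documents" and its arguments are invented for this example; the actual message formats are defined in the MCP specification.

```python
import json

# Illustrative sketch of a JSON-RPC 2.0 request, the message format the
# Model Context Protocol builds on. The tool name and arguments here are
# hypothetical; the real field layout is defined by the MCP spec.
request = {
    "jsonrpc": "2.0",        # protocol version, fixed by JSON-RPC 2.0
    "id": 1,                 # lets the client match the reply to this call
    "method": "tools/call",  # ask the server to run one of its tools
    "params": {
        "name": "search_documents",           # hypothetical tool name
        "arguments": {"query": "agentic AI"}  # inputs for that tool
    },
}

# An MCP client would send this text to a server and wait for a
# response carrying the same "id".
print(json.dumps(request, indent=2))
```

Because every compliant client and server exchanges messages in this one shape, an agent built by one company can call tools hosted by another without custom integration work.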

FDA Uses Agentic AI to Work Faster

In the United States, the FDA (the federal agency that decides whether drugs are safe and effective) announced it is now using agentic AI to help review new drugs and medicines. These systems can sift through large volumes of information and help FDA reviewers work faster and more accurately, which could get new treatments to sick people sooner. The FDA plans to use agentic AI in "premarket reviews," the checks it performs before a company is allowed to sell a medicine to the public.

United States Creates National AI Rules

On December 11, the President signed an executive order establishing a national policy on AI. The order directs the federal government to pursue a single set of AI rules for the whole country rather than a patchwork of different rules in each state. It instructs federal agencies to identify state laws that make it hard for AI companies to do business, and the government will work to challenge those laws. However, the order preserves states' authority in areas such as protecting children, deciding where to build data centers, and how state governments buy AI. The new national framework is meant to help American AI companies grow and compete globally while still keeping people safe.

Weekly Highlights