Legal & Regulatory Frameworks Weekly AI News
November 17 - November 25, 2025
Major Government Action on AI Laws
The most significant development this week comes from the United States government. President Trump is preparing to sign an executive order that would change how artificial intelligence is regulated across the country. The order, titled "Eliminating State Law Obstruction of National AI Policy," is designed to stop individual states from creating their own rules for AI systems, especially autonomous agents that act without direct human control.
This is important because several states have already passed laws requiring AI companies to be transparent about their safety practices. California and Oregon have led the way, asking companies to explain their risk management plans. The executive order would counter these state laws by directing the U.S. Attorney General to create a team dedicated to challenging state AI regulations in court. According to the order, these laws should be challenged on the grounds that they harm interstate commerce, conflict with existing federal rules, or are otherwise unlawful.
The order also contemplates withholding federal grants from states with strict AI rules. These grants fund internet infrastructure across America, so losing that money would seriously hurt states that created their own AI protections. The U.S. Commerce Department would have 90 days to identify state AI laws that conflict with federal policy, and both the Federal Trade Commission and the Federal Communications Commission would support the effort by looking for ways to preempt state laws with federal rules.
Legal Battles Over AI Agents Begin
Another critical story shows that companies are starting to use the courts to fight back against AI agents they don't like. Amazon, one of the world's largest online retailers, sent a cease-and-desist letter to Perplexity AI over an AI agent called Comet, an autonomous shopping agent designed to search for products on the internet automatically. Amazon claims the agent violates its terms of service and possibly its intellectual property rights.
This legal action raises an important question: Who has the power to say what AI agents can and cannot do online? As more companies develop AI agents that act independently, traditional companies like Amazon are using the legal system to protect their interests. This suggests we will see more of these legal battles as AI agents become more common.
Growing Need for Governance and Rules
Businesses are racing to create AI governance policies to protect themselves legally and to use AI responsibly. An AI governance policy is a formal set of rules that explains how a company will use artificial intelligence safely, honestly, fairly, and within the law. These policies should cover how AI tools are developed, bought, and put into use by the company.
Experts recommend that companies follow seven steps when creating these policies. First, companies need to audit their AI usage: find out where and how the business is already using AI. This might include automated marketing tools, systems that screen job applicants, AI-generated financial reports, or customer-service chatbots. Companies should record who uses each AI system, what information it consumes, and what decisions it affects.
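To make the audit step concrete, the sketch below shows what one entry in such an inventory might look like. The record structure, field names, and example values are illustrative assumptions, not part of any published standard.

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    """One entry in a company's AI usage inventory (illustrative fields)."""
    system_name: str           # which AI tool or system this entry covers
    business_function: str     # where in the business it is used
    users: list[str]           # who operates or relies on the system
    data_inputs: list[str]     # what information the system consumes
    decisions_affected: str    # what decisions its output influences
    human_reviewer: str        # who checks the output before it is acted on

# A hypothetical entry for a resume-screening system.
record = AIUsageRecord(
    system_name="resume-screener",
    business_function="recruiting",
    users=["HR team"],
    data_inputs=["applicant resumes", "job descriptions"],
    decisions_affected="which applicants advance to interviews",
    human_reviewer="hiring manager",
)
```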
One critical rule in these policies is that humans must stay in control of important work done by AI. The policy should clearly state that employees must check and approve any content created by AI, review automated reports, and keep human judgment part of every significant decision. Companies also need to set standards for how the data used by AI systems is collected, stored, and shared, paying special attention to who owns the information and the intellectual property.
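As a simple illustration of that human-in-the-loop rule, the sketch below refuses to release AI-generated content until a human reviewer has signed off. The function and workflow are hypothetical, assumed only for illustration.

```python
def publish_ai_content(draft: str, reviewer_approved: bool) -> str:
    """Release AI-generated content only after a human has approved it.

    Enforces the policy rule that employees must check and approve any
    AI-created content before it is used (illustrative sketch).
    """
    if not reviewer_approved:
        raise PermissionError(
            "AI-generated content requires human approval before release."
        )
    return draft

# Usage: a human reads the draft, then the approval flag is set.
draft = "Quarterly summary produced by the reporting assistant."
approved_copy = publish_ai_content(draft, reviewer_approved=True)
```

In practice the approval flag would come from a review workflow rather than a hard-coded boolean, but the gate itself is the policy point: no AI output moves forward without a named human decision.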
New Security Frameworks for Autonomous AI
AWS, Amazon's cloud computing division, released a detailed Agentic AI Security Scoping Matrix that helps companies understand and manage the risks created by AI agents. The framework explains that AI agents are very different from conventional AI systems: while a regular AI model answers questions in a back-and-forth conversation, agentic AI systems can work for long periods, make decisions, and take actions that change other systems on their own.
The framework describes four levels of AI agent freedom. At the lowest level, agents are tightly limited and humans must approve everything they do. At higher levels, agents can make more decisions on their own. At the highest level, called Scope 4, AI agents can initiate their own activities based on what is happening around them and execute complex tasks with almost no human involvement.
Each level of freedom requires different security protections. At lower levels, companies need strong approval systems and safeguards that prevent agents from getting around human approval. At higher levels, companies need advanced monitoring, AI-powered anomaly detection, and automated responses when agents behave strangely. Companies also need to keep detailed records of everything agents do and be able to explain why agents made their decisions.
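One rough way to picture how protections tighten as agent freedom grows is a mapping from each scope level to its minimum controls. The scope names echo the framework, but the specific control lists below are a simplified, assumed reading, not AWS's official definitions.

```python
from enum import IntEnum

class AgentScope(IntEnum):
    """Levels of agent autonomy, loosely following the scoping-matrix idea."""
    SCOPE_1 = 1  # tightly limited; a human approves every action
    SCOPE_2 = 2  # some independent decisions within narrow bounds
    SCOPE_3 = 3  # broad autonomy over multi-step tasks
    SCOPE_4 = 4  # self-initiating; almost no human involvement

# Illustrative mapping from autonomy level to minimum security controls.
REQUIRED_CONTROLS: dict[AgentScope, list[str]] = {
    AgentScope.SCOPE_1: ["human approval gates", "approval-bypass prevention"],
    AgentScope.SCOPE_2: ["human approval gates", "action allow-lists",
                         "audit logging"],
    AgentScope.SCOPE_3: ["continuous monitoring", "anomaly detection",
                         "audit logging"],
    AgentScope.SCOPE_4: ["AI-powered anomaly detection", "automated response",
                         "full action audit trail", "decision explainability"],
}

def controls_for(scope: AgentScope) -> list[str]:
    """Look up the baseline controls a deployment at this scope should carry."""
    return REQUIRED_CONTROLS[scope]

print(controls_for(AgentScope.SCOPE_4))
```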
Problems and Industry Warnings
However, there are warning signs about agentic AI adoption. Gartner, a technology research firm, predicts that over 40 percent of agentic AI projects will be cancelled by the end of 2027, citing rapidly rising costs, unclear business value, and inadequate risk controls. This suggests that many companies are rushing into AI agents without proper planning, safety measures, or a clear understanding of the benefits they will receive.
New Legal Protections in Contracts
Finally, the legal world is adapting by adding AI-specific clauses to contracts: special sections in legal agreements that deal directly with artificial intelligence. These clauses include promises about how AI will be used, how the company will keep data safe, who is responsible if the AI makes mistakes or harms someone, and whether employees may use AI tools from outside the company. As AI systems become more powerful and more independent, contracts are becoming an important way for companies to protect themselves and clarify who is responsible when things go wrong.