Legal & Regulatory Frameworks Weekly AI News

November 17 - November 25, 2025

This weekly update covers important legal and regulatory developments around autonomous AI agents, also called agentic AI: artificial intelligence systems that can make decisions and take actions on their own with minimal human oversight.

The biggest news is that the United States President is planning to sign an executive order that would stop individual states from making their own AI rules. This matters because states such as California and Oregon have already passed laws requiring AI companies to explain how they keep people safe. The executive order would create a new task force to challenge these state laws in court, and it would withhold certain federal funding from states with strict AI rules.

Another major story involves Amazon taking legal action against Perplexity AI. Amazon sent a cease-and-desist letter over an AI agent called Comet that automatically searches for products on the internet. The dispute shows that companies are worried about AI agents acting on their websites without permission.

Meanwhile, businesses are scrambling to write AI governance policies: rules for how a company should use artificial intelligence safely and legally. These policies help companies track where AI is being used, check whether vendors are following the rules, and make sure humans still control important decisions.

Technology companies are also creating security frameworks to protect AI agents. For example, AWS released a new guide that describes different levels of AI agent freedom, from agents that need human approval for every action to agents that operate almost entirely on their own. These security plans include monitoring what agents do, making sure they stay within their allowed limits, and keeping detailed records of their actions.
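The ideas in that kind of framework can be illustrated with a small sketch. The code below is a hypothetical example, not AWS's actual guide or API: the autonomy levels, action names, and gating rules are all invented here to show how approval tiers, scope limits, and an audit log might fit together.

```python
import time
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative autonomy tiers (invented for this sketch)."""
    HUMAN_APPROVAL = 1   # every action needs explicit human sign-off
    SUPERVISED = 2       # routine actions run; risky ones need sign-off
    AUTONOMOUS = 3       # agent acts freely, but only within allowed scope

# Hypothetical scope limits for an agent.
ALLOWED_ACTIONS = {"search_catalog", "draft_email", "place_order"}
RISKY_ACTIONS = {"place_order"}

audit_log = []  # detailed record of every attempted action

def gate_action(level, action, approved_by=None):
    """Decide whether an agent action may run, and record the decision."""
    if action not in ALLOWED_ACTIONS:
        decision = "blocked"   # outside the agent's allowed limits
    elif level is AutonomyLevel.HUMAN_APPROVAL and approved_by is None:
        decision = "pending"   # human approval required for every action
    elif (level is AutonomyLevel.SUPERVISED
          and action in RISKY_ACTIONS and approved_by is None):
        decision = "pending"   # risky action needs sign-off at this tier
    else:
        decision = "allowed"
    audit_log.append({
        "time": time.time(),
        "level": level.name,
        "action": action,
        "approved_by": approved_by,
        "decision": decision,
    })
    return decision

print(gate_action(AutonomyLevel.AUTONOMOUS, "search_catalog"))  # allowed
print(gate_action(AutonomyLevel.SUPERVISED, "place_order"))     # pending
print(gate_action(AutonomyLevel.AUTONOMOUS, "delete_account"))  # blocked
```

The key design point is that every attempt, allowed or not, lands in the audit log, which mirrors the "detailed records" requirement described above.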

Experts warn that many AI projects may fail. Gartner predicts that more than 40 percent of agentic AI projects will be canceled by the end of 2027 because they cost too much, fail to deliver clear business value, or lack adequate safety controls.

Finally, companies are adding AI-specific clauses to contracts to protect themselves. These agreements now include commitments about how AI will be used, how data will be kept safe, and who is responsible if something goes wrong with the AI. As AI agents become more powerful and independent, the legal world is racing to catch up with new rules and protections.

Extended Coverage