Legal & Regulatory Frameworks Weekly AI News
November 10 - November 18, 2025The global community is working hard to create fair rules for agentic AI systems - a new kind of artificial intelligence that can think through problems step-by-step and make decisions without a human telling it exactly what to do at every moment. This is important because these AI systems can do a lot of good, but they can also cause problems if they are not managed carefully.
The Information Technology Industry Council, which represents major technology companies, published an important paper this week warning policymakers about potential dangers of agentic AI. The paper explains that even though these AI systems are trained on huge amounts of information, they can fail unexpectedly at simple tasks - a problem called jagged intelligence. When an AI system makes a mistake, it can create a chain reaction of errors in automated work processes. Additionally, agentic AI can be targeted by hackers who use techniques like prompt injection (tricking the AI into doing something bad) and data poisoning (feeding the AI false information). The ITI recommended that governments use a risk-based approach to make rules, which means focusing extra attention on the most dangerous uses of AI.
Across different parts of the world, countries are taking different approaches to regulating agentic AI based on their values and goals. In North America, including the United States and Canada, the focus is on encouraging companies to innovate and develop new AI technology while making sure there are safety guardrails in place. The U.S. government issued an Executive Order 14179 to promote responsible development of agentic AI, and organizations like the National Institute of Standards and Technology (NIST) are creating standards that companies can follow. Europe is taking a stricter approach with the EU AI Act, which is the world's first comprehensive law about artificial intelligence. European rules focus on making sure AI systems are trustworthy and transparent, and that people understand how AI is making decisions that affect them. Countries in Asia-Pacific, including Japan, China, and India, are pushing forward quickly to develop agentic AI technology while gradually creating governance frameworks to keep these systems safe and responsible. Meanwhile, Middle Eastern countries like Saudi Arabia and the United Arab Emirates are using government funding to make their nations leaders in AI technology, and African countries are focusing on education and building skills so their people can participate in the AI revolution.
A significant legal decision happened in the United Kingdom on November 4, 2025, when a court made its first major ruling about whether artificial intelligence systems can copy and use artwork and text from other sources without permission - a case called Getty v Stability AI. This judgment will help clarify important questions about intellectual property rights and artificial intelligence. Court decisions like this one help establish legal precedents that guide how companies can legally develop and use AI systems.
Organizations working in finance are creating practical guidelines to help companies use agentic AI safely. The FINOS AI Governance Framework version 2.0 is an updated set of rules and best practices that financial institutions can follow. This framework helps risk managers and compliance teams understand how to protect their organizations when using agentic AI. The framework covers 46 different types of risks and shows concrete ways to protect against each risk. By using frameworks like this, companies can develop new AI technology while making sure they are not breaking laws or putting their customers at risk.
Experts agree that successful governance of agentic AI must include strong cybersecurity measures to prevent hackers from manipulating these AI systems. This means using techniques like zero-trust architecture (never automatically trusting any user or system) and adversarial testing (actively trying to break the AI to find weaknesses). Additionally, companies need to make sure that AI systems are transparent about how they make decisions and accountable when things go wrong. Establishing strong governance frameworks with clear policies and rules helps organizations use agentic AI responsibly while driving innovation and efficiency.
The overall message from this week's regulatory developments is that the world is taking agentic AI seriously and working together to make sure these powerful technologies help humanity rather than harm it. Whether through government regulations, industry standards, or company policies, the focus remains consistent: create rules that allow innovation while protecting safety, ethics, and human values.