# Legal & Regulatory Frameworks Weekly AI News

November 24 - December 2, 2025

## Major Regulatory Changes in the European Union

The European Union made significant updates to its AI Act this week. On November 19, 2025, the European Commission announced that it would delay the compliance deadline for high-risk AI systems until December 2027, giving companies more time to prepare. These rules were originally scheduled to apply from August 2, 2026, although the Commission may bring the date forward if it judges that companies are ready. The delay is part of a broader package, the "Digital Omnibus Regulation," intended to simplify the EU's AI rules and make them easier to understand.

The EU also recommended measures to make compliance easier, including standardized templates to reduce confusion and clearer guidance on marking content generated by AI systems. In addition, it is working to align its digital laws with one another, including the GDPR (which protects personal data) and the Data Act (which governs data rights).

## United States Proposes National AI Standards

The White House proposed a new executive order that would establish uniform national AI standards across all U.S. states. This matters because states such as California and Colorado have already enacted their own AI laws. The proposed federal approach would preempt state regulations in favor of a single nationwide framework, and the White House plans to form an "AI Litigation Task Force" to challenge state-specific AI rules it believes slow down innovation.

The proposed order focuses on security, data access, and governance for AI systems used by the government and private companies. It emphasizes protecting national security and ensuring that AI development aligns with American leadership in technology. However, some experts worry this could make it harder for states to protect their residents from AI risks.

## Financial Sector Gets New AI Rules

In the European Union, the Digital Operational Resilience Act (DORA) has applied since January 17, 2025. The law requires EU banks and other financial firms to maintain strong cybersecurity and withstand digital attacks. On November 18, 2025, the European Supervisory Authorities designated the critical technology providers that will be placed under enhanced supervision.

The financial sector is adopting AI rapidly, with about 85% of global banks expected to use AI by the end of 2025. AI systems help banks detect fraud, manage risk, and serve customers better. However, only 38% of AI projects are meeting their expected financial goals, and over 60% report delays. The European Parliament adopted a resolution on November 25, 2025, to examine how AI affects banks and financial stability.

## New Tools for Building AI Agents

AWS (a major cloud computing company) announced new Agentic AI categories for its technology partners on November 30, 2025. These categories help companies build "agentic AI" systems—AI that can work independently and make decisions without constantly asking humans for help. About 23% of organizations expect to have fully working agentic AI systems within the next year, and 65% expect to have them by 2027.

AWS is offering extra marketing support and faster validation processes to help technology partners succeed with agentic AI projects. Google also released the Agent Payments Protocol in September 2025, which allows AI agents to conduct secure transactions within limits set by users.
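To make the idea of user-set limits concrete, here is a minimal sketch of an agent purchase being checked against spending caps a user granted in advance. The class and function names (SpendingMandate, PurchaseRequest, authorize) are illustrative assumptions and do not come from the actual Agent Payments Protocol specification.

```python
# Hypothetical sketch: an AI agent purchase request checked against a
# user-defined spending limit before it is allowed to proceed.
# These names are illustrative and not the real Agent Payments Protocol API.
from dataclasses import dataclass


@dataclass
class SpendingMandate:
    """Limits a user grants to an agent in advance."""
    max_per_transaction: float
    max_total: float
    spent_so_far: float = 0.0


@dataclass
class PurchaseRequest:
    merchant: str
    amount: float


def authorize(mandate: SpendingMandate, request: PurchaseRequest) -> bool:
    """Approve the purchase only if it stays within the user's limits."""
    within_single = request.amount <= mandate.max_per_transaction
    within_total = mandate.spent_so_far + request.amount <= mandate.max_total
    if within_single and within_total:
        mandate.spent_so_far += request.amount
        return True
    return False


if __name__ == "__main__":
    mandate = SpendingMandate(max_per_transaction=50.0, max_total=200.0)
    print(authorize(mandate, PurchaseRequest("example-store", 30.0)))   # True: within both caps
    print(authorize(mandate, PurchaseRequest("example-store", 500.0)))  # False: exceeds per-transaction cap
```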

## Growing Security Concerns About AI Agents

Experts are increasingly worried about security threats from AI-driven cyberattacks. In 2025, about one in six data breaches involved AI-powered attacks. In one notable incident, hackers from China used AI to automate roughly 80 to 90% of a multi-stage attack campaign against 30 organizations. These attacks are becoming smarter and more autonomous, meaning AI can plan and execute them with less human direction.

The White House draft executive order specifically mentions concerns about agentic AI threats, including attacks involving privilege escalation, hallucination (when AI makes up false information), and memory manipulation. Security experts recommend that companies add agentic AI risk to their cybersecurity strategies.
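Folding agentic AI risk into a security program can start with something as simple as a policy gate in front of an agent's tool calls. The sketch below is a hypothetical illustration in Python; the tool names and the is_allowed helper are assumptions rather than part of any specific product or standard, and a real deployment would also need logging, authentication, and human review for sensitive actions.

```python
# Hypothetical sketch: a simple allow-list policy gate in front of an AI
# agent's tool calls, one way a company might fold agentic AI risk into
# its security controls. All names here are illustrative assumptions.

# Tools the agent is permitted to use.
ALLOWED_TOOLS = {"search_docs", "create_ticket", "send_summary_email"}

# Calls that would expand the agent's privileges are rejected outright.
PRIVILEGED_ACTIONS = {"grant_admin_role", "disable_audit_log", "modify_permissions"}


def is_allowed(tool_name: str) -> bool:
    """Allow a tool call only if it is on the allow-list and not privileged."""
    if tool_name in PRIVILEGED_ACTIONS:
        return False
    return tool_name in ALLOWED_TOOLS


def run_agent_action(tool_name: str) -> str:
    """Route an agent's requested action through the policy gate."""
    if not is_allowed(tool_name):
        return f"blocked: {tool_name} is outside the agent's permitted scope"
    return f"executed: {tool_name}"


if __name__ == "__main__":
    print(run_agent_action("search_docs"))       # executed: on the allow-list
    print(run_agent_action("grant_admin_role"))  # blocked: privilege escalation attempt
```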

## Global Privacy and Oversight Focus

On November 25, 2025, global privacy regulators highlighted the importance of human oversight of AI systems that make important decisions. Regulators worldwide are asking companies to ensure that people understand how AI affects their rights and have ways to challenge AI decisions.

The Council of Europe created the first legally binding international AI treaty, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law." The agreement requires signatory countries to protect human rights while still encouraging innovation.

## Investment and Strategic Partnerships

Microsoft and NVIDIA announced a combined $15 billion commitment to Anthropic, the company behind the Claude AI models. The investment underscores how seriously major technology companies are taking reliable AI for business use, and such partnerships are turning AI into core infrastructure that businesses depend on.
