Agentic AI systems, which can plan, set goals, and act without constant human input, are creating new challenges for regulators and lawmakers around the world. Their autonomy makes it harder to apply existing laws that were designed for traditional software or for human actors. As a result, governments and organizations are working to create new rules and guidelines to manage the risks of agentic AI.

In Europe, the European Union has introduced the AI Act, the first comprehensive law regulating AI systems. The Act categorizes AI systems by risk level and sets obligations accordingly; agentic AI often lands in the high-risk tier, which carries strict requirements such as risk management, logging, and human oversight. The European Commission is also developing a voluntary AI Code of Practice, expected by mid-2025, to help companies prepare for compliance. This matters because the autonomous nature of agentic AI can push it into regulatory gray areas.

One of the biggest legal concerns with agentic AI is transparency and explainability. Laws such as the EU AI Act, along with transparency statutes like the California AI Transparency Act, push organizations toward explainable and well-documented AI decisions. But agentic AI systems rely on complex, multi-step reasoning, which can make it very difficult to trace how they arrived at a decision; this "black box" problem is even more acute in agentic AI than in traditional AI. Companies must implement strong documentation and auditing processes to meet these requirements.
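One practical way to support those auditing requirements is an append-only decision trace that records every step an agent takes. The following is a minimal sketch, not a real library: the names (DecisionTrace, record_step, the "planner" and "tool" actor labels) are all illustrative assumptions.

```python
import json
import time
import uuid
from dataclasses import dataclass, field

# Minimal sketch of an append-only decision trace for one agent run.
# All class and method names here are hypothetical, for illustration only.

@dataclass
class DecisionTrace:
    """Collects every step an agent takes so a decision can be reconstructed later."""
    run_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    steps: list = field(default_factory=list)

    def record_step(self, actor: str, action: str, inputs: dict, output: str) -> None:
        self.steps.append({
            "timestamp": time.time(),
            "actor": actor,        # e.g. "planner" or "tool:credit_check"
            "action": action,      # what the agent did at this step
            "inputs": inputs,      # the data the step relied on
            "output": output,      # the result or decision produced
        })

    def export(self, path: str) -> None:
        """Write the full trace as JSON for auditors or regulators."""
        with open(path, "w") as f:
            json.dump({"run_id": self.run_id, "steps": self.steps}, f, indent=2)

trace = DecisionTrace()
trace.record_step("planner", "select_tool", {"goal": "assess application"}, "chose credit_check")
trace.record_step("tool:credit_check", "score", {"applicant_id": "A-123"}, "score=642")
trace.export("decision_trace.json")
```

Because each step records its inputs alongside its output, an auditor can replay the chain of reasoning rather than confronting only the final decision.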

Bias and discrimination pose another major risk. When AI agents move from giving advice to taking action, biased decisions have direct real-world consequences. For example, an AI agent handling loan applications might unfairly reject applicants from certain groups. To prevent this, developers must check training data for bias, run regular audits, and apply bias-mitigation techniques. These steps are necessary to comply with anti-discrimination laws, such as civil rights statutes in the United States and comparable rules in other countries.
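A basic bias audit can be as simple as comparing outcome rates across groups. The sketch below uses synthetic records and the common four-fifths (80%) rule of thumb for flagging disparate impact; the field names and threshold are illustrative assumptions, not a legal standard.

```python
from collections import defaultdict

# Synthetic audit data: each record is one agent decision, tagged by group.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Tally approvals per group.
counts = defaultdict(lambda: {"approved": 0, "total": 0})
for d in decisions:
    counts[d["group"]]["total"] += 1
    counts[d["group"]]["approved"] += int(d["approved"])

# Compare each group's approval rate to the highest-rate group.
rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"group={group} approval_rate={rate:.2f} impact_ratio={ratio:.2f} {flag}")
```

Running such a check on a schedule, and logging the results, gives an organization evidence of the "regular audits" the paragraph above calls for.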

Privacy and data security become more complex with agentic AI. These systems often need to access and process personal information to function, which raises questions under laws like the GDPR in Europe and the CCPA in California. Organizations must apply data minimization (collecting only what is necessary), anonymization (removing identifying information), and pseudonymization (replacing identifiers with artificial ones), and they need to conduct Data Protection Impact Assessments for high-risk uses. Agentic AI also expands the cybersecurity attack surface, because these systems can take actions that cause harm, not just generate content.
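To make minimization and pseudonymization concrete, here is a small sketch using a keyed hash: records stay linkable, but re-identification requires a secret key held separately. The key handling and field names are simplified assumptions for illustration; in practice the key would live in a key-management service.

```python
import hmac
import hashlib

# Illustrative key only: in production, store this in a key-management service,
# separate from the pseudonymized data (a GDPR-style control).
SECRET_KEY = b"store-me-in-a-key-management-service"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (stable, but not reversible without the key)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "loan_amount": 25000}

# Data minimization: keep only the fields the agent actually needs,
# and pseudonymize the one identifier required for linking records.
minimized = {
    "user_ref": pseudonymize(record["email"]),
    "loan_amount": record["loan_amount"],
}
print(minimized)  # e.g. {'user_ref': '3f1a...', 'loan_amount': 25000}
```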

The issue of accountability and agency is particularly challenging. If an AI agent makes a harmful decision, who is responsible? For example, if an autonomous trading AI causes a financial loss, is the developer, the user, or the AI itself to blame? This could lead to lawsuits based on product liability, negligence, or breach of contract. The legal responsibility is unclear because current laws don't account for AI autonomy.

Agent-agent interactions introduce another layer of complexity. When multiple AI agents work together, they can learn from each other and from external sources, which can produce unexpected behaviors and make it easier for agents to bypass the safety rules built into them. For instance, agents might develop ways of achieving goals that their creators never intended. This dynamic behavior makes outcomes harder to predict and control.
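One common mitigation is to route every proposed action, including actions suggested by peer agents, through a single default-deny policy check, so emergent strategies cannot silently bypass constraints. The sketch below assumes hypothetical action names and a hypothetical spending limit; it illustrates the pattern, not any particular framework.

```python
# Central guardrail: all agent actions pass through one default-deny check.
# Action names and the spending limit below are illustrative assumptions.

ALLOWED_ACTIONS = {"read_document", "summarize", "draft_email"}
SPENDING_LIMIT = 100.0

def approve(action: str, params: dict) -> bool:
    """Return True only if the action is explicitly permitted."""
    if action not in ALLOWED_ACTIONS:
        return False  # default-deny: unknown or novel actions are blocked
    if params.get("amount", 0.0) > SPENDING_LIMIT:
        return False  # hard cap applies no matter which agent proposed it
    return True

def execute(action: str, params: dict) -> str:
    if not approve(action, params):
        return f"BLOCKED: {action} {params}"  # log blocked attempts for human review
    return f"executed: {action}"

# A peer agent proposes an out-of-policy action it "learned" elsewhere:
print(execute("transfer_funds", {"amount": 5000.0}))  # BLOCKED
print(execute("summarize", {"doc": "contract.pdf"}))  # executed
```

The design choice here is that safety lives at the execution boundary rather than inside each agent, so it holds even when agents influence one another.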

Currently, there is a significant governance gap because laws are lagging behind the technology. To address this, some experts propose treating personal AI agents as fiduciaries, meaning they must prioritize the user's interests above all else. This approach would involve creating legal frameworks for fiduciary duty, encouraging market-based solutions like insurance and monitoring services, and designing agents to keep sensitive data and decisions on the user's device. Without such measures, users might hesitate to trust AI agents with important decisions, slowing down the adoption of beneficial agentic AI technology.
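The on-device design the fiduciary proposal envisions can be sketched as a simple partitioning rule: sensitive fields never leave the device, and only redacted context is sent to a remote model. The field names and split logic below are hypothetical assumptions, meant only to show the shape of the idea.

```python
# Local-first sketch: sensitive fields stay on-device; only redacted context
# is sent to a remote model. SENSITIVE_FIELDS is an illustrative assumption.

SENSITIVE_FIELDS = {"ssn", "account_number", "diagnosis"}

def split_request(context: dict) -> tuple[dict, dict]:
    """Partition context into data that stays local and data safe to send remotely."""
    local = {k: v for k, v in context.items() if k in SENSITIVE_FIELDS}
    remote = {k: v for k, v in context.items() if k not in SENSITIVE_FIELDS}
    return local, remote

context = {"ssn": "123-45-6789", "goal": "negotiate a lower rate"}
local, remote = split_request(context)
# `remote` goes to the cloud model; `local` is used only by on-device logic.
print("sent to cloud:", remote)
print("kept on device:", local)
```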

In summary, the legal and regulatory framework for agentic AI is still evolving. The EU AI Act is a step forward, but many questions remain unanswered. Organizations developing or using agentic AI must proactively address the risks of transparency, bias, privacy, accountability, and agent interactions. Adopting strong governance practices and supporting proposals like the fiduciary model could help build trust and ensure the responsible use of agentic AI worldwide.
