This week brought significant regulatory developments for AI agents worldwide.

In Europe, new guidance is being developed under the EU AI Act for agentic AI systems. The guidance emphasizes security and human oversight, reflecting the growing ability of these systems to act autonomously. Companies must assess whether their AI tools comply with the Act's risk-based requirements, particularly when those tools interact with other AI agents or make decisions with real-world consequences.

The UK published its AI Opportunities Action Plan, urging businesses to conduct sector-specific risk assessments for AI agents. In contrast to the EU's binding legislation, the UK favors flexible, principles-based guidance on safety and fairness while encouraging innovation. Both jurisdictions stressed the need for clear accountability when AI agents cause harm, such as financial losses or privacy breaches.

In the U.S., legal practitioners examined bias risks in AI agents handling consequential tasks such as hiring and lending. Emerging state laws may soon require companies to audit AI tools for discriminatory outcomes and to explain how automated decisions are reached.

Globally, experts agreed that cybersecurity remains a top concern for agentic AI. The UK's new Code of Practice for the Cyber Security of AI calls on companies to build safeguards against attackers targeting AI systems.
