This weekly update covers important decisions about artificial intelligence agents (also called agentic AI) around the world. An AI agent is a computer program that can plan, reason, and carry out tasks on its own, with little or no help from people.

The biggest news comes from the United States. On December 1st, the Food and Drug Administration (FDA) announced that all of its workers can now use agentic AI tools to help them do their jobs. These tools will help staff review new medicines, check whether products are safe, and handle other important work. The FDA says the tools are optional, and workers can choose whether to use them.

In Europe, the European Commission published new plans to make its AI rules simpler and easier to follow. The European Union has very strict rules about AI, called the AI Act. The new plans give companies more time to follow the rules and offer smaller companies special help. The changes also let companies test AI in supervised testing spaces, called regulatory sandboxes, without breaking the rules.

Another big story is about how regulators are becoming more open to AI agents. In banking and finance, for example, AI agents can catch fraud and other bad behavior much better than older computer systems. These AI agents can explain why they make decisions, which helps people trust them. Banks say the agents help them follow rules and catch problems faster, with fewer mistakes.

Also this week, groups from around the world met to talk about AI standards and the rules everyone should follow. UNESCO, the United Nations organization for education, science, and culture, also published new guidelines for using AI in courts and the law.

In China, the government continues to enforce strict rules about AI. It requires all AI-made content to be clearly labeled so people know a computer made it.

All of this shows that governments around the world are working hard to make sure AI is used safely and fairly. They want people to understand how AI makes decisions, and they want to protect people from harm. At the same time, they want companies to be able to build new AI tools without too many rules getting in the way. This balance between safety and innovation is the biggest challenge everyone faces right now.

Extended Coverage