Governments, companies, and health organizations around the world are working together to create safety rules for a powerful new kind of AI called agentic AI. Unlike regular AI that answers questions or creates pictures, agentic AI can understand situations, make plans, and then take actions all by itself to reach goals. This makes it more powerful but also more risky.

Singapore took an important step toward securing these advanced AI systems. The Cyber Security Agency of Singapore announced a new set of safety guidelines written specifically for agentic AI, and it is inviting people from companies, universities, and governments around the world to help improve them by sharing their thoughts and suggestions. People have until December 31 to send in ideas. The guidelines explain how to spot dangers in agentic AI systems and what safety controls companies should put in place, with examples drawn from real-world situations like using AI to help with coding, checking whether customers are trustworthy, and spotting fraud automatically.

Meanwhile, the United Kingdom released a new blueprint for how AI should be governed in the country. The Technology Secretary said the plan is meant to help businesses grow while keeping people safe: clearer rules should make it faster for companies to get approval for building projects and could even help hospitals work through patient waiting lists more quickly. The UK's argument is that clear, fair rules actually help innovation, because companies know exactly what is expected of them.

International health organizations also focused on AI safety this week. The World Health Organization held a major conference called AIRIS 2025 where leaders from many countries talked about how to use AI safely in healthcare. The message from this meeting was clear: countries need to work together on AI rules instead of each country making completely different rules. When countries collaborate, AI can be used in fair ways that help everyone, especially people in poorer countries.

Experts say that regular AI safety frameworks are not enough for agentic AI because it can do so much more on its own; the older rules were designed for simpler AI systems. Standards such as ISO/IEC 42001, which covers AI management systems, are being extended to help companies manage agentic AI systems properly from start to finish. Companies are also adapting existing rules like PCI DSS (which protects payment information) for agentic AI systems. It is like how schools created rules for the internet when computers first connected to networks, but then had to write new rules again when smartphones became common.

Businesses that want to use agentic AI early need to think carefully about safety and follow regulations closely. Companies cannot just wait for new rules to be finished and then follow them. Instead, forward-thinking organizations are setting up strong safety controls now. They are treating AI systems almost like employees who need permission to access information and do certain jobs. They are also keeping detailed records of what their AI systems do, testing AI systems by trying to break them on purpose, and explaining how the AI made important decisions.
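The controls described above can be sketched in a few lines of code. The following is a minimal, hypothetical Python sketch of how an organization might treat an agent like an employee: the agent may only call tools it has been explicitly granted, and every attempt, allowed or not, is written to an audit log. The names (`AgentGateway`, `read_transactions`, `transfer_funds`) are illustrative assumptions, not part of any real framework.

```python
import datetime

class AgentGateway:
    """Hypothetical wrapper enforcing least-privilege tool access
    for an AI agent, with a detailed audit log of every attempt."""

    def __init__(self, agent_id, allowed_tools):
        self.agent_id = agent_id
        self.allowed_tools = set(allowed_tools)  # explicit grants only
        self.audit_log = []  # record of everything the agent tried to do

    def call(self, tool_name, tool_fn, *args, **kwargs):
        allowed = tool_name in self.allowed_tools
        # Log the attempt whether or not it is permitted, so reviewers
        # can later explain and replay what the agent did.
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": self.agent_id,
            "tool": tool_name,
            "allowed": allowed,
            "args": repr(args),
        })
        if not allowed:
            raise PermissionError(f"{self.agent_id} may not use {tool_name}")
        return tool_fn(*args, **kwargs)

# Usage: a fraud-screening agent may read transactions but not move money.
gateway = AgentGateway("fraud-screener-01", allowed_tools={"read_transactions"})
txns = gateway.call("read_transactions", lambda: [{"id": 1, "amount": 250}])
try:
    gateway.call("transfer_funds", lambda amt: amt, 250)
except PermissionError:
    pass  # the denied attempt is still recorded in the audit log
```

The same audit log doubles as the raw material for the "explain how the AI made important decisions" requirement, and deliberately calling forbidden tools (as in the `try` block) is a simple form of the break-it-on-purpose testing the paragraph mentions.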

The real challenge is that AI technology is changing and improving much faster than governments can create new laws. A new rule might take months or years to create and put into place, but new AI capabilities might arrive in just weeks. This means organizations should not just follow the minimum rules—they should think about what safety steps make sense for their specific situation. Companies working in finance, healthcare, and other important areas face extra pressure because their mistakes could really hurt people. These industries are working hard to create safety standards that go beyond what governments currently require.

All of these announcements this week show that the whole world recognizes agentic AI needs special attention. No single country or company is working alone—everyone is sharing ideas about best practices and common safety checks. As agentic AI becomes more common in workplaces around the world, this teamwork on creating good rules will become even more important. The goal is to let this amazing new technology help businesses and help people, while making sure it stays under control and cannot cause harm.

Weekly Highlights