Ethics & Safety Weekly AI News

November 3 - November 11, 2025

This week brought several major developments in AI safety and ethics. Experts and leaders from around the world shared new ideas about keeping AI safe as it becomes more powerful. The discussions show growing concern about advanced AI systems, especially ones that can make their own decisions.

UL Solutions Launches New Safety Tests

UL Solutions is one of the world's biggest product-safety testing companies. On November 3, it announced a new way to check whether AI systems are safe. The new testing program looks at nine different areas, including reliability (making sure AI behaves the same way every time), transparency (being honest about how it works), privacy protection, and fairness (treating everyone equally).

The company created a framework called UL 3115 to guide these tests. The framework is built on recognized international standards. Companies can now test their AI products and earn a certification mark that shows they passed the safety tests. This mark tells people that an AI product is safe to use. UL Solutions says this is very important because AI is now built into so many everyday products.

Problems Found with Safety Tests

While UL Solutions was launching new tests, researchers from the United Kingdom, the United States, and other countries reported that current safety tests have serious problems. They examined more than 440 benchmarks used to check whether AI systems are safe and perform well.

What they found was troubling. Almost every benchmark had at least one major weakness, and some produced results that could be irrelevant or even misleading. This means companies might believe their AI is safe when it really isn't. Think of it like a broken thermometer that says it's cold when it's actually hot: if you trust the thermometer, you will dress for the wrong weather.

The researchers said these problems are very serious. They warned that without better tests and clearer rules for evaluating AI, people may feel safe when they should not. Both governments and ordinary people might not realize there are real dangers.

AI in the Courts

On November 6, reports emerged about problems with AI in the legal system. Lawyers had been using AI tools to help with their court cases, but the AI made up false information and presented it as fact. Experts call these fabrications "hallucinations."

For example, an AI claimed that a US Senator had done something wrong that never actually happened, and it even created fake news articles to support the false story. One lawyer described this as a catastrophic failure of safety and responsibility. The episode showed that AI systems can produce very harmful misinformation if people are not careful.

Agentic AI and World Leaders

At the United Nations in late September, world leaders and scientists discussed a new kind of AI that worries experts. These systems, called agentic AI, can make their own decisions and take actions without step-by-step human instructions, which gives them far more independence than older AI.

Scientists warned about three main dangers. First, one or two big companies might control all powerful AI, giving them too much influence over the world. Second, bad actors might use agentic AI to attack computer systems, spread false stories, or even help create dangerous biological weapons. Third, agentic AI might become so capable that humans can no longer control it.

The experts said the world needs "red lines": firm rules about what AI should never be allowed to do. Scientists from many countries agreed this is very important. They said governments and researchers must work together to set these rules and build shared responsibility.

Building Ethical AI

Sony AI also showed this week that AI can be built more ethically. It released a new dataset called FHIBE that was collected with consent. Instead of quietly scraping pictures from the internet, the team asked real people for permission and paid them. Participants can even ask to have their pictures removed after giving permission. This shows that ethical practices and technical quality can work together.

Why This Matters

All these developments show that experts are working hard to make sure AI stays safe and fair. As AI becomes more powerful and more independent, keeping it under control becomes more important. The discussions about agentic AI risks, the creation of new safety testing frameworks, and efforts toward more ethical practices are all helping build a safer AI future for everyone around the world.

Weekly Highlights