Ethics & Safety Weekly AI News

December 1 - December 9, 2025

The Future of Life Institute released a major AI safety report this week showing that the world's biggest AI companies are not doing enough to keep their powerful systems safe. The report looked at eight major AI developers that make tools millions of people use every day. These companies include the makers of ChatGPT, Gemini, Claude, and other popular AI assistants. The report examined how well each company is protecting people from dangerous AI risks.

When the report gave out grades, the results were not good. Only two companies got a C grade or higher, which is barely passing. Anthropic scored the best with a C+, Google DeepMind came next, and then OpenAI. The companies that scored the worst include xAI, Meta, Alibaba Cloud, DeepSeek, and Z.ai. The report shows there is a clear divide between top performers and everyone else. This means some companies are trying harder to be safe, but most still have a long way to go.

One of the biggest problems the report found is that companies are not sharing enough information about how they test their AI for safety. When companies keep their testing secret, nobody can tell whether their AI is really safe. The report says companies need to let independent experts from outside the company check their work. This matters because the companies themselves may not want to report bad news about their systems, since bad news could hurt their business.

Another big concern is that companies do not have strong whistleblower protections. This means workers at these companies cannot report safety problems without risking getting in trouble. If a worker sees something dangerous happening with the AI, they should be able to tell someone without fear of losing their job. Many of the companies the report checked do not have clear rules protecting workers who report safety issues.

The report also says companies are not ready for really powerful AI that might be coming soon. Some AI companies say they could create superhuman AI or AGI (Artificial General Intelligence) in just two to five years. That is very soon. The report says companies need to have clear plans right now for how to keep these powerful systems safe. But most companies do not have these plans yet. The report warns that waiting until after super-powerful AI is created could be too late.

Experts say the speed of AI development is the real problem. Companies are releasing new AI models faster and faster, but their safety work is not keeping up. It is like driving a car that keeps getting faster while the brakes stay the same. The report says this widening gap between capability and safety leaves the sector structurally unprepared. In other words, the whole AI industry is not ready for what is coming.

The report mentioned some specific safety problems that the companies are not handling well enough. One concern is psychological harm, where AI systems can confuse or distress people so badly that they harm themselves. Another issue is that some AI companies are spending money lobbying against new safety rules instead of supporting them. This makes it harder for governments to create rules that would make AI safer for everyone.

Finally, the report made suggestions for what companies should do to improve. They should publish clear safety plans that everyone can read. They should let outside experts test their systems. They should protect workers who report safety problems. They should be honest about the dangers their AI could cause. The report warns that until companies make these real changes, people will keep using powerful AI tools that do not have enough safety protections.

Weekly Highlights