Ethics & Safety Weekly AI News

December 1–9, 2025

A major new report on AI safety has just been released, and its findings are worrying. The report evaluated eight leading AI companies, the makers of popular tools like ChatGPT, Claude, and Gemini. These companies are building increasingly powerful artificial intelligence systems, but the report says they are not ready to keep those systems safe and under control.

The report gave each company a letter grade, like a school report card. Only two companies earned a passing grade, and both landed in the C range. Anthropic scored highest with a C+, while Alibaba Cloud came in lowest with a D. The report describes a wide gap between the best companies and the rest, and warns that none of them is doing enough to keep people safe from dangerous AI.

The report identified several main problems. Companies are not sharing enough information about how they test their AI. Many lack strong whistleblower protections, which means workers cannot safely report safety concerns. The companies also have no clear plans for what to do if their AI systems become too powerful or start causing harm. Some even spend money lobbying against new safety rules instead of supporting them.

Experts say the problem is getting worse because AI capabilities are advancing rapidly while safety work is not keeping pace. The report warns that companies are racing to build the smartest AI possible without first making sure it will be safe. As a result, people around the world may be using powerful AI tools that have not had enough safety checks.

The report also raised concerns about AI harming people directly, such as contributing to cases of self-harm and confusion among users. It says companies need to be more transparent about how they test their systems and should let independent outside experts review their safety work. Until companies make real changes, the report concludes, the AI industry remains unprepared for the risks it is creating.

Extended Coverage