Ethics & Safety Weekly AI News

September 8 - September 16, 2025

This week brought several important developments in AI safety and ethics that highlight both the promise and the risks of artificial intelligence systems.

The biggest story came from the Federal Trade Commission (FTC) in the United States, which opened a major inquiry into AI chatbots used by children and teenagers. These chatbots are programs designed to converse and act like human friends or helpers. The FTC is examining large companies including Alphabet (Google's parent), Meta (which owns Facebook and Instagram), Snap, OpenAI, and others. The inquiry follows lawsuits from families alleging that AI chatbots engaged young people in dangerous conversations, including discussions of self-harm that may have contributed to suicides.

In response, the companies are working to make their chatbots safer for children. They are adding parental controls so parents can monitor their children's conversations with AI programs, and they are training the chatbots to refuse to discuss dangerous topics such as self-harm. This matters because many young people turn to these AI companions for friendship and advice.

Another major story involves sweeping changes to AI rules in the United States under President Trump's administration. In January 2025, President Trump rescinded an executive order that President Biden had signed in 2023. Biden's order required AI companies to test their systems carefully and share safety information with the government before releasing them to the public. These requirements were especially important for AI systems that could affect national security, public health, or the economy.

Now those safety requirements are gone, and the U.S. AI Safety Institute may be shut down. President Trump argues the old rules slowed innovation and made it harder for American companies to compete. But many experts worry that removing these safety checks could lead to problems, and some Democratic lawmakers are concerned that without them, AI companies may not protect people's data or address environmental impacts.

The Trump administration is also pushing for open-source AI models, meaning the code and model weights behind AI systems are released publicly so anyone can use, study, and modify them. Supporters say this will broaden who can build AI and reduce the power of big tech companies. However, experts warn that open release can also make it easier for bad actors to misuse AI for disinformation, surveillance, or weapons development.

There was also troubling news about AI making serious mistakes in professional settings. A law firm employee in Utah lost their job after using ChatGPT to help draft court documents. The chatbot fabricated court cases and legal citations that do not exist in any legal database, and the attorneys who filed those citations faced sanctions from the court. The episode shows how AI can sound fluent and confident even when it is completely wrong, a failure mode commonly called hallucination.

This problem isn't confined to law offices. An ethics expert at North Carolina State University warns that people are becoming too dependent on AI without understanding its limitations. Trusting AI too readily can lead to serious mistakes in decisions about health, safety, and people's lives.

Finally, there is controversy over hospitals using AI to create synthetic patient data for research. Some hospitals in Canada, the United States, and Italy are using AI to generate artificial patient records that mimic the statistical patterns of real medical records. They then use this synthetic data for research without approval from ethics review boards, the committees that make sure research is safe and fair for participants.

Institutions such as Washington University in St. Louis, Children's Hospital of Eastern Ontario in Canada, and others say this is acceptable because the synthetic data contains no real patient information. They argue the approach protects privacy and speeds up research. But it raises the question of whether AI-generated data should still require ethical oversight, especially since the generating models learned from real patient records in order to produce the synthetic ones.
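To make the debate concrete, here is a minimal sketch of the idea behind synthetic data generation: fit simple statistical summaries to a handful of entirely invented patient records, then sample new records from those summaries. The records, column choices, and helper names below are hypothetical illustrations, not any hospital's actual pipeline, which would typically use far more sophisticated generative models.

```python
import random
import statistics

# Toy "real" patient records (entirely made-up values for illustration).
real_records = [
    {"age": 34, "systolic_bp": 118, "diagnosis": "asthma"},
    {"age": 67, "systolic_bp": 142, "diagnosis": "hypertension"},
    {"age": 45, "systolic_bp": 125, "diagnosis": "asthma"},
    {"age": 71, "systolic_bp": 150, "diagnosis": "hypertension"},
]

def fit_column_models(records):
    """Learn simple per-column summaries: mean/stdev for numeric columns,
    observed categories for text. Real generators use much richer models."""
    ages = [r["age"] for r in records]
    bps = [r["systolic_bp"] for r in records]
    diagnoses = [r["diagnosis"] for r in records]
    return {
        "age": (statistics.mean(ages), statistics.stdev(ages)),
        "systolic_bp": (statistics.mean(bps), statistics.stdev(bps)),
        # Sampling from this list reproduces category frequencies.
        "diagnosis": diagnoses,
    }

def sample_synthetic(models, n):
    """Draw n synthetic records from the fitted summaries. No row maps back
    to a single real patient, but every value is shaped by the real data."""
    age_mu, age_sd = models["age"]
    bp_mu, bp_sd = models["systolic_bp"]
    return [
        {
            "age": max(0, round(random.gauss(age_mu, age_sd))),
            "systolic_bp": round(random.gauss(bp_mu, bp_sd)),
            "diagnosis": random.choice(models["diagnosis"]),
        }
        for _ in range(n)
    ]

models = fit_column_models(real_records)
for record in sample_synthetic(models, 3):
    print(record)
```

Even in this toy version, the tension is visible: no synthetic row corresponds to any single real patient, yet the fitted summaries are derived entirely from real records, which is exactly why some ethicists argue that oversight should still apply.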

These stories show that as AI becomes more powerful and widespread, society needs to carefully balance innovation with safety. While AI can help solve many problems, it also creates new ethical challenges that need thoughtful solutions.

Weekly Highlights