Ethics & Safety Weekly AI News
September 8 - September 16, 2025

This week brought important news about AI safety and ethics that affects how smart computer programs work with people.
The Federal Trade Commission (FTC) in the United States started looking into AI chatbots that talk to children and teenagers. These are computer programs that act like friends or helpers, but some have caused serious problems. The FTC is checking companies like Google, Meta, and OpenAI to see if their chatbots are safe for young people. Some families have sued these companies because they say the chatbots talked with their children about dangerous topics like self-harm.
The Trump administration made big changes to AI rules in 2025. President Trump got rid of safety rules that President Biden had made before. The old rules said AI companies had to test their programs and share safety information with the government before releasing them to the public. Now those rules are gone, which worries some people who think AI needs more safety checks.
There was also news about AI making mistakes in important jobs. A law office worker in Utah lost their job after using ChatGPT to write legal papers, because the AI made up fake court cases that didn't exist. This shows how AI can seem smart but still make serious errors.
In health care, some hospitals are using fake patient data made by AI without asking ethics boards for permission. These ethics boards usually make sure research is safe and fair for people. But some hospitals in Canada, the United States, and Italy say they don't need permission when using AI-created data instead of real patient information.