Ethics & Safety Weekly AI News

November 10 - November 18, 2025

This week was an important one for AI safety and ethics around the world. Governments, companies, and organizations announced new ways to make AI agents and AI systems safer and more honest. AI agents are computer programs that can make decisions and hold conversations on their own, such as chatbots and AI companions. The week's news showed that many people are worried about these systems and want them to work fairly and safely.

The biggest news came from New York State in the United States. New York passed two important AI laws and started a training program to keep AI safe and ethical. The first law covers AI companion apps, which are AI agents that people can have conversations with. These apps must now be able to notice when someone might be thinking about hurting themselves, and they must point those users to crisis help services they can contact. This matters because people have used AI companions as friends, and some have been harmed by these conversations. Companies that make these apps also have to tell users clearly that they are talking to a computer, not a real person.
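To give a rough picture of how such a rule could look in software, here is a small sketch in Python. It is not code from any real app: the keyword list and wording are made up for illustration, and a real companion app would use trained safety models and region-appropriate crisis services rather than a simple word list.

```python
# Illustrative sketch only: a real companion app would use a trained
# classifier and clinically reviewed resources, not a keyword list.

CRISIS_KEYWORDS = {"hurt myself", "end my life", "kill myself", "suicide"}

# Placeholder wording; a real app would show local crisis services.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can contact a crisis support service for help right now."
)
AI_DISCLOSURE = "Reminder: you are talking to an AI program, not a person."

def respond(user_message: str, normal_reply: str) -> str:
    """Return the app's reply, adding a crisis referral if the message looks worrying."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return f"{CRISIS_REFERRAL}\n{AI_DISCLOSURE}"
    return f"{normal_reply}\n{AI_DISCLOSURE}"

print(respond("I want to hurt myself", "Here is a fun fact about space..."))
```

Note that the reply always ends with the AI disclosure line, which mirrors the law's second requirement that users be told they are talking to a computer.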

New York's second new law is called the Algorithmic Pricing Disclosure Act. It covers how companies use AI to decide what price to charge you. The law says that companies must tell customers when AI uses personal information, like their location, age, or income, to set prices. Sometimes different people see different prices for the same thing because the AI predicts that some people will pay more. Companies that don't follow this rule can be fined $1,000 for each violation.
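Here is a minimal sketch, in Python, of the kind of check a store's checkout page might run to decide whether to show such a disclosure. The data field names and the notice wording are hypothetical, not taken from the law's text.

```python
# Illustrative sketch: show a disclosure when personal data influenced a price.
# Field names and wording are hypothetical, not from the law itself.

PERSONAL_DATA_FIELDS = {"location", "age", "income", "browsing_history"}

def pricing_disclosure(inputs_used_by_pricing_model: set[str]) -> str | None:
    """Return a disclosure notice if any personal data influenced the price."""
    personal_inputs = inputs_used_by_pricing_model & PERSONAL_DATA_FIELDS
    if personal_inputs:
        return ("This price was set using your personal data: "
                + ", ".join(sorted(personal_inputs)))
    return None  # no personal data used, so no disclosure needed

print(pricing_disclosure({"location", "age", "item_cost"}))
print(pricing_disclosure({"item_cost", "season"}))
```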

New York's court system also gave judges and court staff strict new AI rules. The courts said that AI systems cannot use most personal information about people going through the court system, and anything an AI creates for a court must be checked by a real person before anyone can use it. This is because AI can make mistakes that could hurt people's cases in court.

Other countries are also working on AI safety and ethics. UNESCO, a United Nations organization, announced the world's first global set of rules on neurotechnology ethics. Neurotechnology is technology that can connect to people's brains. UNESCO said this technology must respect human rights and protect people's privacy, including the privacy of their thoughts. This might sound like science fiction, but brain-machine technology is being developed right now.

The European Union is also making new rules for AI transparency. On November 5, the European AI Office began writing detailed instructions on how companies must show when content is made by AI. These rules will apply from August 2026, and companies will have to use machine-readable markers that show something was made by AI. They will also have to label deepfakes (fake videos or audio that look real) so people know they are AI-made. The goal is to reduce misinformation and stop people from being tricked by fake AI content.
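The exact technical standards for these markers are still being written, so here is only a rough sketch of the idea: attaching a machine-readable "made by AI" label to a piece of generated content. The field names below are assumptions for illustration, not the EU's actual format.

```python
# Illustrative sketch: wrap generated content in a machine-readable
# provenance label. The field names are hypothetical; the EU AI Office
# has not finalized the real marking standards.

import json
from datetime import datetime, timezone

def label_ai_content(content: str, model_name: str, is_deepfake: bool = False) -> dict:
    """Attach a machine-readable provenance label to generated content."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "deepfake": is_deepfake,
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labeled = label_ai_content("A synthetic news photo caption", "example-model")
print(json.dumps(labeled, indent=2))
```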

India released new AI governance guidelines this week. The guidelines say AI systems should follow seven main principles, including being trustworthy, fair, safe, and understandable. They also cover how to build AI safely, with good infrastructure and training for the people who work with AI. India is creating an AI Safety Institute to help make sure AI systems work properly.

Some worrying news came from investigations into AI systems and online safety. In the United Kingdom, Ofcom (the Office of Communications) is investigating an online forum where people discuss suicide. Ofcom found that even though the forum tried to block people from the UK, some were still able to reach it. This shows how hard it is to control what people do online, even with safety rules. In Paris, France, police are investigating TikTok's algorithm. An algorithm is the set of rules an AI uses to decide what to show people. Investigators want to know whether TikTok's algorithm pushes content that encourages people to hurt themselves.

In healthcare, experts found something very concerning. They studied an AI system that drafts messages to patients and found that in about 7% of cases, following the AI's advice without changes could have seriously harmed the patient. Only about one-third of the AI messages were checked by doctors before being sent to patients, and many hospitals did not even tell patients that AI wrote their message. This shows why people need to check AI's work instead of simply trusting it.
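One common answer to this problem is a "human in the loop" gate, where no AI-drafted message can go out until a clinician has reviewed it and the patient is told AI was involved. The sketch below shows the idea in Python; the workflow, names, and notice wording are assumptions for illustration, not any hospital's real system.

```python
# Illustrative sketch: block sending until a clinician reviews the AI draft,
# and always disclose that AI helped write the message. Details are hypothetical.

from dataclasses import dataclass

@dataclass
class DraftMessage:
    patient_id: str
    ai_draft: str
    reviewed_by_clinician: bool = False
    clinician_edits: str | None = None

AI_NOTICE = ("This message was drafted with the help of an AI tool "
             "and reviewed by your care team.")

def send_to_patient(draft: DraftMessage) -> str:
    """Only send after clinician review, and disclose the use of AI."""
    if not draft.reviewed_by_clinician:
        raise ValueError("AI draft must be reviewed by a clinician before sending.")
    body = draft.clinician_edits or draft.ai_draft
    return f"{body}\n\n{AI_NOTICE}"

msg = DraftMessage("patient-001", "Your lab results look normal.",
                   reviewed_by_clinician=True)
print(send_to_patient(msg))
```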

Healthcare leaders are now talking about "Prudent Vigilance," which means checking how AI is really affecting patients after it starts being used, not just before. One hospital leader said that "quarterly board meetings don't match the speed that AI changes," which means boards need to check on AI more often than they normally meet. Experts say we need to find a balance between using AI to help and protecting people from AI mistakes.

The American Medical Association told the United States Congress that doctors must be part of decisions about AI in hospitals. They said that data used to train AI must be free from bias so the AI works fairly for all patients. They also said hospitals must be transparent with patients about how AI is being used and protect patient privacy.

All of these changes show that the world is taking AI safety and ethics very seriously. Countries are making laws, organizations are writing guidelines, and companies are being told to explain how they use AI and protect people. The pattern shows that AI safety is becoming as important as other kinds of safety, like airplane safety or medicine safety. Experts say that companies should plan ahead for AI safety problems instead of waiting for problems to happen.

Weekly Highlights