Ethics & Safety Weekly AI News
January 19 - January 27, 2026
Major Changes Coming for AI Safety and Rules
This week made clear that artificial intelligence safety is being taken more seriously than ever before. Governments, companies, and experts are all focused on making sure AI systems, especially agentic AI that works on its own, don't cause harm. Agentic AI is different from regular AI because it can make decisions and take actions without a human telling it exactly what to do at every step. That independence makes keeping it safe even more important.
New Laws and Rules for AI Developers
In the United States, lawmakers are working on the TRUMP AMERICA AI Act, which would create new rules for anyone building AI systems. The bill matters because it would require companies to test their AI carefully and report what they find. If an AI system hurts someone, whether by stealing their personal information or giving them bad advice, the company that built it could face lawsuits, much as car companies are responsible when their cars have safety problems. The bill also says that companies using AI for important decisions (like hiring people or lending money) must check their systems regularly to make sure they don't unfairly favor or hurt certain groups of people.
AI Gets a New "Constitution"
One of the most interesting developments this week was Anthropic's new constitution for Claude, their AI helper system. A constitution for AI might sound strange, but think of it like a guidebook that helps the AI understand what's right and wrong. The old version was only 2,700 words long — about the length of a short story. The new version is much longer: 84 pages and 23,000 words. Why so long? Because the company believes AI systems need to understand *why* something matters, not just follow rules. For example, an old rule might say "don't share passwords." The new approach teaches Claude to understand that privacy is important for people's safety, so Claude will make better choices when faced with new situations.
AI Helping Other AI Stay Safe
Security experts say that protecting AI systems requires advanced technology and human judgment working together. As companies use more agentic AI, meaning AI that makes choices on its own, they need to watch out for new types of attacks. Bad actors can use AI to create more sophisticated attacks, while defenders use AI to catch threats faster. However, experts warn that artificial intelligence alone cannot keep systems safe. For really important systems, such as the computers that keep hospitals, power plants, or transportation systems running, humans need to stay in control and make the final decisions. An AI agent deciding on its own to turn off a laptop might be fine, but one deciding to shut down a hospital's main system could be dangerous.
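To make that idea concrete, here is a small Python sketch of how a company might sort an agent's actions into risk tiers, letting routine steps run automatically while holding anything critical for a human decision. The names here (ActionRequest, handle_action, the risk labels) are made up for illustration and don't come from any particular AI product or security tool.

```python
# Rough sketch: risk-tiered control for an agentic AI system.
# Routine actions run automatically; actions that touch critical
# systems wait for a human to say yes or no.
from dataclasses import dataclass

# Hypothetical risk tiers; a real deployment would define its own.
LOW_RISK = "low"        # e.g. turn off a single laptop
CRITICAL = "critical"   # e.g. hospital, power plant, or transport systems

@dataclass
class ActionRequest:
    description: str
    target_system: str
    risk_tier: str

def ask_person(action: ActionRequest) -> bool:
    """Stand-in for a human reviewer making the final call."""
    answer = input(f"Allow '{action.description}'? (y/n) ")
    return answer.strip().lower() == "y"

def handle_action(action: ActionRequest, human_approves) -> str:
    """Run low-risk actions automatically; escalate critical ones to a person."""
    if action.risk_tier == CRITICAL:
        # The human stays in control of the final decision.
        if human_approves(action):
            return f"APPROVED by human: {action.description}"
        return f"BLOCKED by human: {action.description}"
    return f"AUTO-EXECUTED: {action.description}"

if __name__ == "__main__":
    print(handle_action(
        ActionRequest("Turn off one employee laptop", "laptop-042", LOW_RISK),
        ask_person))
    print(handle_action(
        ActionRequest("Shut down hospital records server", "hosp-main-01", CRITICAL),
        ask_person))
```

The key design choice is that the human check happens before a critical action runs, not after, so a bad decision about a hospital or power-plant system can be caught in time.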
Getting Ready for the Future
Organizations everywhere are realizing they need to get serious about AI safety and governance. Instead of just experimenting with AI, companies are now focused on how to use it reliably and responsibly. Experts predict that in 2026, agentic AI will start working in carefully controlled environments where companies can manage the risks. Teams will need training to work well with these new systems. Instead of AI doing everything automatically, humans will set goals and watch over what the AI does — a style called "human-on-the-loop".
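Here is one way a "human-on-the-loop" setup could look in practice, written as a rough Python sketch rather than a real product: the person sets the goal, the agent works through its steps on its own, and every step is written to a log that a supervisor can review and use to stop the run. The planner function and the stop rule are stand-ins invented for this example.

```python
# Rough sketch of a "human-on-the-loop" pattern: the agent acts on its own,
# but every step is logged and a human supervisor can halt the run.
from datetime import datetime, timezone

def plan_steps(goal: str) -> list[str]:
    # Stand-in for an AI planner; a real agent would generate these itself.
    return [f"Collect data for: {goal}",
            f"Draft report for: {goal}",
            f"Send summary for: {goal}"]

def run_with_oversight(goal: str, supervisor_wants_to_stop) -> list[dict]:
    """Execute the agent's plan while keeping a reviewable audit log."""
    audit_log: list[dict] = []
    for step in plan_steps(goal):
        # Before each step, check whether the human supervisor has stepped in.
        if supervisor_wants_to_stop(audit_log):
            audit_log.append({"time": datetime.now(timezone.utc).isoformat(),
                              "event": "HALTED by human supervisor"})
            break
        # The agent carries out the step on its own (simulated here).
        audit_log.append({"time": datetime.now(timezone.utc).isoformat(),
                          "event": f"Agent executed: {step}"})
    return audit_log

if __name__ == "__main__":
    # The human sets the goal and reviews the log; the agent does the work.
    # Here the supervisor stops the run after two logged steps.
    log = run_with_oversight("quarterly AI-safety review",
                             supervisor_wants_to_stop=lambda entries: len(entries) >= 2)
    for entry in log:
        print(entry["time"], "-", entry["event"])
```

Unlike the approval gate sketched earlier, the human here does not sign off on every step; they watch the record of what the agent is doing and step in only when something looks wrong.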
Real-World Challenges with AI Ethics
Not all AI safety questions have easy answers. For example, engineers building self-driving cars are struggling with ethical questions about how to program cars to handle dangerous situations. Should a self-driving car protect the person inside or people walking on the sidewalk? There's no perfect answer, and different cultures around the world have different ideas about what's right. Scientists say we need to think carefully about these questions now, before self-driving cars become common on roads.
What This Means for Everyone
The bottom line is that AI safety and ethics are no longer optional for companies building and using AI systems. Whether it's new government laws, company policies, or fresh approaches to how AI systems think, 2026 is shaping up to be the year when AI safety becomes a core responsibility. Companies that prepare now, by understanding their AI systems, testing them carefully, and planning for problems, will be much better positioned for success in the future.