Ethics & Safety Weekly AI News

February 2 - February 10, 2026

This weekly update covers major developments in artificial intelligence ethics and safety that will shape how AI is deployed worldwide. The leading story concerns agentic AI systems, programs that can reason and act autonomously, being applied to worker safety. Paired with digital twins (virtual replicas of real factory floors), these systems allow companies to spot hazards before they cause harm.

In the United States, several significant changes are underway. California enacted SB 53, a law requiring companies that build the most powerful AI models to disclose safety information to the public. Colorado passed a law requiring employers to audit AI hiring tools to ensure they do not unfairly screen out certain groups of people. The federal government also issued an Executive Order seeking to establish a single national set of AI rules in place of a patchwork of differing state regulations.

A major court case established that AI vendors can be held responsible for bias in hiring tools even when they only supply the software, which means companies selling AI must take greater care to prevent discrimination. The Department of Justice also updated its guidelines to state that companies need strong AI risk management programs to avoid legal trouble.

The International AI Safety Report, published in February 2026, shows that experts worldwide are studying how to keep AI safe and ethical. Universities such as Marist are now teaching students about AI ethics to prepare them for careers in this rapidly changing field.

Together, these developments show governments, companies, and experts working to ensure that AI agents and automated systems serve people safely and fairly.

Extended Coverage