Ethics & Safety Weekly AI News

February 2 - February 10, 2026

# This Week's Update on AI Ethics and Safety

The week of February 2-10, 2026 brought significant developments in how artificial intelligence is governed and made safer. As AI systems grow more capable and take on higher-stakes work, governments and companies around the world are stepping up efforts to keep the technology ethical and safe.

## New AI Agents Help Keep Workers Safe

One of the biggest stories this week involves agentic AI systems: AI programs that can plan and make decisions on their own to help companies stay safe. Unlike older AI that only answers questions, agentic AI can carry out work and suggest solutions. Companies are now running these agents inside digital twins, virtual replicas of real factory floors and work areas. These live models let companies watch operations in real time and spot dangers before anyone gets hurt.

These safety-focused agents can analyze many data streams at once and recognize patterns that humans might miss. For example, an agent might notice that a machine is starting to behave abnormally and recommend maintenance before it breaks down and injures someone. This kind of predictive safety management moves companies from investigating incidents after the fact to preventing them before they happen.
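
To make the idea concrete, here is a minimal sketch of one way such a predictive check could work: keep a rolling baseline of recent sensor readings and flag anything that drifts far outside it. The window size, threshold, and vibration values below are illustrative assumptions, not details of any real vendor's system.

```python
from collections import deque
from statistics import mean, stdev

def make_monitor(window: int = 50, threshold: float = 3.0):
    """Flag sensor readings that drift far from their recent baseline."""
    history: deque[float] = deque(maxlen=window)

    def check(reading: float) -> bool:
        anomalous = False
        if len(history) >= 10:  # wait for a stable baseline before judging
            mu, sigma = mean(history), stdev(history)
            # Flag readings more than `threshold` standard deviations out.
            anomalous = sigma > 0 and abs(reading - mu) / sigma > threshold
        history.append(reading)
        return anomalous

    return check

# Hypothetical vibration readings: steady, then a sudden spike.
check = make_monitor()
for t, vibration in enumerate([0.51, 0.49, 0.50, 0.52] * 5 + [1.80]):
    if check(vibration):
        print(f"t={t}: vibration {vibration} looks abnormal; schedule inspection")
```

Real deployments feed many sensors into far richer models, but the flag-before-failure pattern is the same.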

## United States: New Laws Require Careful Testing of AI

The United States is creating new rules to make sure AI is used fairly and safely. California's SB 53, which took effect on January 1, 2026, is the first major U.S. law focused on the safety of the most powerful AI models. It requires the largest AI companies to publicly explain how they keep these models safe and to report serious safety incidents to the California government.

In Colorado, a law taking effect on June 30, 2026 will require employers to use care when screening workers with AI. Companies must regularly test these tools to make sure they do not reject applicants because of race, age, or disability. And if a company uses AI in hiring, a real person must review the final decision; the AI cannot make the choice alone.
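
Laws like Colorado's generally leave the choice of bias test to the employer, but one widely used benchmark is the EEOC's "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate may indicate adverse impact. The sketch below applies that rule to invented audit numbers; the statute does not prescribe this exact formula.

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]], floor: float = 0.8):
    """outcomes maps group -> (selected, total applicants).

    Returns each group's impact ratio (its selection rate divided by the
    highest group's rate) and whether it falls below the four-fifths floor.
    """
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: (rate / best, rate / best < floor) for g, rate in rates.items()}

# Invented audit numbers: (candidates advanced, candidates screened).
audit = {"under_40": (120, 400), "40_and_over": (45, 300)}
for group, (ratio, flagged) in impact_ratios(audit).items():
    status = "POSSIBLE ADVERSE IMPACT" if flagged else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

The 0.8 floor is the EEOC's traditional rule of thumb rather than a legal bright line; auditors often pair it with statistical significance tests.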

Illinois has passed a similar law banning employers from using AI in ways that unfairly harm employees based on protected characteristics. These state-by-state rules show that governments are serious about preventing AI discrimination.

## Federal Government Pushes for One National AI Rule

On December 11, 2025, President Trump signed an Executive Order arguing that the growing patchwork of state AI laws is confusing companies. The federal government now wants one set of AI rules for the entire country that would replace the state laws. However, some states, such as Florida, say they will create their own AI laws anyway, especially to protect children from harmful AI chatbots.

The standoff shows there is still real disagreement over whether the national government or individual states should write AI rules.

## AI Companies Can Be Held Responsible for Bias

An important court case, Mobley v. Workday, shows that companies selling AI tools can be held legally responsible if their AI unfairly rejects job applicants. The plaintiff claims that Workday's AI hiring tool rejected older, Black, and disabled applicants at disproportionately high rates. The judge allowed the case to continue, which means the company could be forced to pay damages if the claims are proven.

This decision matters because it shows that companies selling AI cannot simply say "we're not responsible for what the AI does" - they can be sued for discrimination their AI causes.

## Companies Need Strong AI Risk Management

The Department of Justice has made clear that companies need strong AI governance programs to avoid legal trouble. Companies must understand the risks their AI systems create, put controls in place to stop harm before it happens, and train their workers on responsible AI use. A good governance program can even help a company avoid fines and other penalties if something goes wrong.
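
As one illustration of the kind of control regulators describe, the sketch below refuses to record an AI hiring recommendation as final without a named human reviewer, echoing the human-review requirement in laws like Colorado's. The class, fields, and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    ai_score: float            # model's suitability score, 0.0 to 1.0
    ai_suggests_advance: bool  # the tool's recommendation, never the decision

def finalize(rec: Recommendation, reviewer: str | None, approve: bool | None) -> str:
    """Refuse to record a final hiring decision without a named human reviewer."""
    if reviewer is None or approve is None:
        raise RuntimeError("human review required: the AI cannot decide alone")
    decision = "advance" if approve else "reject"
    return f"{rec.candidate_id}: {decision} (reviewed by {reviewer})"

rec = Recommendation("cand-042", ai_score=0.81, ai_suggests_advance=True)
print(finalize(rec, reviewer="j.lopez", approve=True))  # a person signed off
```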

Safety experts say that building good agentic AI systems requires four things:

1. High-quality data the AI can learn from.
2. Strong governance and ethics frameworks so the AI is used responsibly.
3. Alignment with business goals, so workers will actually use the system.
4. Designs where the AI works together with human workers instead of replacing them.

## The World Studies AI Safety and Ethics

The International AI Safety Report, published in February 2026, shows that scientists around the world are studying how to keep AI safe. Universities such as Marist in the United States now offer dedicated courses on AI ethics so students learn how to build AI systems that treat people fairly.

Together, these actions show that the whole world is focused on making sure AI agents and agentic systems help people while protecting them from harm. As AI becomes smarter and more independent, keeping it safe and ethical is one of the most important jobs we have.
