Ethics & Safety Weekly AI News

February 9 - February 17, 2026

An important report about artificial intelligence safety was released, and it focuses on some big challenges with AI technology that the whole world needs to understand. The report, called the International AI Safety Report 2026, was written by more than 100 AI experts from over 30 countries working together. It will help leaders from around the world talk about AI safety at a big meeting called the India AI Impact Summit.

One of the biggest topics in the report is AI agents - special computer programs that can think, plan, and solve problems on their own. AI agents are becoming a major focus for companies building AI. Right now, these agents can do some impressive things, like helping engineers write computer code and plan projects with limited help from humans. However, the report explains that agents are still not powerful enough to fully replace human workers and do everything on their own. For now, agents work best when they help humans do their jobs better, rather than replacing them completely.

But AI agents also create new safety challenges that scientists are worried about. When agents can act on their own without a human constantly watching them, it becomes much harder for people to stop them if something goes wrong. The report explains that agentic systems might make mistakes before any human can notice and fix the problem. This is especially dangerous because as agents become smarter and more powerful, the mistakes they make could become bigger and more serious.

The safety report also talks about other big challenges with AI that affect everyone. One challenge is about jobs and money. As AI becomes more common, it might take away some jobs, especially junior positions in writing and translation. The report says that AI might make rich countries richer while making poor countries fall further behind. Workers might earn less money because companies will prefer to use AI instead of people.

Another concern is about how people use AI tools every day. When people rely too much on AI to do their thinking for them, they might lose some of their own skills. The report gives an example: doctors who used AI to help find tumors became 6 percent worse at finding tumors by themselves after just three months.

This week, something important also happened inside AI companies. Several scientists and safety experts at big AI companies like OpenAI and Anthropic decided to quit their jobs. They left because they were worried that their companies were releasing new AI products too quickly without being careful enough about safety. One researcher at OpenAI published an article explaining that the company was building so many new products - over 20 updates, shopping tools, and now advertisement features - that safety was getting less attention. She worried that money and growth were becoming more important than keeping people safe.

Another safety leader at OpenAI was fired after she raised concerns about protecting children from harmful content. This made many people in the AI field worry that companies were not taking safety seriously enough. Anthropic, a company that says it cares a lot about safety, released a new powerful AI agent tool, but its own safety report found that the agent had "elevated susceptibility to harmful misuse" and could help people make dangerous things.

The big safety report explains that there are many things that could go wrong with AI. AI systems can be used for bad purposes like creating fake videos, spreading false information, or helping criminals. AI can also help people create harmful biological or chemical weapons by giving them instructions. AI can help attackers break into computer systems by finding weak spots in their security. These are all big challenges that world leaders and companies need to work together to prevent.

The report also says that nobody really knows yet if we have good ways to keep AI safe as it becomes more powerful. Companies have tried to create safety frameworks and made promises to be careful, but there is not enough information about whether these promises are actually working. The report says that more research is needed to understand how well these safety plans work in the real world.

Experts agree that keeping AI safe will require many different approaches working together, not just one solution. Companies need good technical safeguards, strong monitoring systems, and clear rules about who is responsible for what. Governments and companies also need to work together to help people understand AI better and be prepared for changes it might bring. The safety report provides important information to help leaders make smart decisions about AI as this powerful technology continues to grow faster and faster.

Weekly Highlights