Ethics & Safety Weekly AI News

December 15 - December 23, 2025

AI agents are changing how companies handle rules and regulations, but safety became a major concern this week. AI agents are computer programs that can make decisions and take actions on their own, without waiting for a human to tell them what to do. Many companies now use them to screen for financial crimes and speed up customer onboarding, but problems are appearing. Reports show that AI-related incidents rose 21 percent from 2024 to 2025, a sign that these risks are real rather than hypothetical. In one example, an AI agent invented fake restaurant names when it could not read expense receipts.
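Incidents like the invented restaurant names are one reason experts recommend validating an agent's output before acting on it. The snippet below is a minimal, hypothetical sketch (the vendor list, field names, and thresholds are assumptions, not any real company's system): merchant names extracted by an agent are checked against a known-vendor list before an expense is auto-approved.

```python
# Hypothetical sketch: reject agent-extracted receipt data that names unknown vendors,
# instead of letting the agent "fill in" a plausible-sounding restaurant on its own.
# The vendor list and receipt fields are illustrative assumptions.
KNOWN_VENDORS = {"Cafe Roma", "Delta Air Lines", "Hilton Hotels"}

def validate_expense(extracted: dict) -> tuple[bool, str]:
    """Return (ok, reason). Anything suspicious goes to a human reviewer."""
    vendor = extracted.get("vendor", "").strip()
    if vendor not in KNOWN_VENDORS:
        return False, f"Unknown vendor '{vendor}': route to a human reviewer."
    if extracted.get("amount", 0) <= 0:
        return False, "Missing or invalid amount: route to a human reviewer."
    return True, "OK to auto-approve."

print(validate_expense({"vendor": "Luigi's Imaginary Bistro", "amount": 57.20}))
print(validate_expense({"vendor": "Cafe Roma", "amount": 57.20}))
```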

Safety experts say better ways are needed to keep AI agents under control. Leading AI companies such as OpenAI, Google, and Anthropic received grades of C on their safety work, while most other companies received Ds and Fs, which shows how much work remains. Companies are also waking up to a newer problem: traditional security systems were built around human employees, so AI agents do not fit into the normal permission model. An agent might, for example, hand the same confidential financial information to both a company executive and a brand-new employee, which is dangerous.
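The permission gap is easiest to see in code. Below is a minimal, hypothetical sketch (the clearance levels, report names, and functions are invented for illustration) of the fix experts describe: the agent checks the clearance of the human who asked, instead of answering with its own broad system access.

```python
# Hypothetical sketch: scope an AI agent's data access to the human requester,
# not to the agent's own service account. All names and data are illustrative.
from dataclasses import dataclass

@dataclass
class Requester:
    name: str
    clearance: int  # e.g. 0 = new employee, 3 = executive

FINANCIAL_REPORTS = {
    "q4_summary": {"required_clearance": 1, "body": "Revenue up 8 percent..."},
    "merger_plans": {"required_clearance": 3, "body": "Confidential merger details..."},
}

def agent_fetch_report(report_id: str, requester: Requester) -> str:
    """Return a report only if the *human asking* is cleared to see it."""
    report = FINANCIAL_REPORTS.get(report_id)
    if report is None:
        return "No such report."
    if requester.clearance < report["required_clearance"]:
        return "Access denied for this requester."
    return report["body"]

print(agent_fetch_report("merger_plans", Requester("new hire", clearance=0)))  # denied
print(agent_fetch_report("merger_plans", Requester("CFO", clearance=3)))       # allowed
```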

Companies are also worried about losing control of AI agents, which work fast and can make thousands of decisions every minute. Leaders need to think about this now, because many more companies plan to let AI agents make decisions on their behalf within the next three years. The good news: experts say companies can stay safe if they plan carefully, keep humans in the loop, and log and review everything the AI does. Trust and transparency are this week's key words for making sure AI agents help companies without causing harm.
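What "human oversight plus checking everything the AI does" can look like in practice is sketched below. This is a hypothetical illustration, not a real product API: the approval threshold, field names, and in-memory log are assumptions. Risky agent actions are held for a human sign-off, and every decision is recorded for later review.

```python
# Hypothetical sketch: gate high-risk agent actions behind human approval
# and keep an audit trail of every decision. All names/thresholds are assumptions.
import json
from datetime import datetime, timezone

AUDIT_LOG = []                # in practice this would be durable, append-only storage
APPROVAL_THRESHOLD = 1000.0   # actions above this value require a human sign-off

def log_event(event: dict) -> None:
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    AUDIT_LOG.append(event)

def execute_agent_action(action: dict, human_approver=None) -> str:
    """Run an agent-proposed action only if it is low-risk or a human approves it."""
    if action.get("amount", 0) > APPROVAL_THRESHOLD:
        approved = bool(human_approver and human_approver(action))
        log_event({"action": action, "approved_by_human": approved})
        if not approved:
            return "blocked: awaiting human approval"
    else:
        log_event({"action": action, "approved_by_human": None})
    return "executed"

# Example: an expense reimbursement proposed by the agent
print(execute_agent_action({"type": "reimburse", "amount": 4200.0}))  # blocked
print(json.dumps(AUDIT_LOG, indent=2))                                # reviewable trail
```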

Extended Coverage