Ethics & Safety Weekly AI News
September 22 - September 30, 2025

This weekly update covers major developments in AI agent safety and ethics as companies and governments work to control these powerful new tools.
A new report from JumpCloud warns that most organizations are not ready for the risks of agentic AI. While 82% of companies already use AI agents, only 44% have formal policies in place to govern them. That gap creates serious security risks.
Agentic AI acts like a digital coworker that makes its own decisions. Unlike regular software that follows exact commands, these agents can plan and act on their own. That independence makes them very useful but also dangerous.
Experts say the biggest problem is transparency. When AI agents make decisions by themselves, it becomes hard to understand why they chose certain actions. This creates problems for accountability and trust.
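The report doesn't prescribe a fix for this, but one common way to restore some transparency is to make every agent decision leave an audit trail that humans can review afterward. Here is a minimal Python sketch of that idea; the function name, log file, and fields are hypothetical, not taken from any specific product:

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("agent_audit.jsonl")  # hypothetical log location

def log_agent_decision(agent_id: str, action: str, inputs: dict, rationale: str) -> None:
    """Append one agent decision to an append-only JSON Lines audit log."""
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "rationale": rationale,  # the agent's own stated reason, kept for later review
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record why an agent escalated a support ticket.
log_agent_decision(
    agent_id="support-agent-7",
    action="escalate_ticket",
    inputs={"ticket_id": 1042, "priority": "high"},
    rationale="Reported billing error exceeds the auto-refund limit.",
)
```

Even a simple log like this gives auditors something concrete to examine when asking why an agent chose a certain action.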
In California, new laws are being considered to control AI use in workplaces. These rules would require companies to tell workers when AI systems make decisions about hiring or firing.
Scientists in China created a new safety system called SciGuard. This tool prevents AI agents from helping with dangerous chemistry experiments while still allowing helpful research.
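The article doesn't describe SciGuard's internals, so the following is only a schematic illustration of how a guardrail of this kind can work: it sits between the user request and the agent, screens the request against a hazard policy first, and only passes it through if it looks safe. The keyword list and function names below are invented for the example, and a real system would use far more sophisticated checks:

```python
# Deliberately simplified hazard screen; SciGuard's actual policy and
# detection methods are not described in the source.
HAZARD_TERMS = {"nerve agent", "sarin synthesis", "explosive precursor"}  # illustrative only

def screen_request(request: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a chemistry-related agent request."""
    lowered = request.lower()
    for term in HAZARD_TERMS:
        if term in lowered:
            return False, f"blocked: matches hazard term '{term}'"
    return True, "allowed: no hazard terms found"

print(screen_request("Suggest a safer solvent for an esterification"))
# (True, 'allowed: no hazard terms found')
```

The design goal is the same one the researchers describe: block clearly dangerous requests while leaving ordinary, helpful research untouched.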
The main message from all these developments is clear: AI agents need stronger controls. Companies must treat AI agents like digital employees, with proper oversight and rules. Without these safeguards, AI agents could cause data breaches, make costly financial mistakes, or even break the law.
Experts recommend identity-first governance, where each AI agent gets a unique identity and is continuously monitored. They also suggest keeping humans in the loop for important decisions rather than letting AI agents work completely alone.
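As a rough sketch of both recommendations (the experts name the practices, not an implementation), each agent could carry a unique identity record, and any high-impact action could be held until a human approves it. All names, actions, and thresholds below are hypothetical:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class AgentIdentity:
    """Identity-first governance: each agent has its own unique, trackable ID."""
    name: str
    allowed_actions: set
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))

HIGH_IMPACT = {"fire_employee", "transfer_funds"}  # actions that need human sign-off

def execute(agent: AgentIdentity, action: str, human_approves) -> str:
    """Run an action only if the agent is authorized, with a human in the
    loop for high-impact actions instead of letting the agent act alone."""
    if action not in agent.allowed_actions:
        return f"denied: {agent.name} ({agent.agent_id}) lacks permission for {action}"
    if action in HIGH_IMPACT and not human_approves(agent, action):
        return f"held: {action} is waiting for human approval"
    return f"executed: {action} by {agent.name}"

payroll = AgentIdentity(name="payroll-agent", allowed_actions={"transfer_funds"})
print(execute(payroll, "transfer_funds", human_approves=lambda a, act: False))
# held: transfer_funds is waiting for human approval
```

The key point of the pattern is that every action is traceable to one named agent, and the riskiest actions never complete without a person signing off.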