Ethics & Safety Weekly AI News
August 11 - August 19, 2025

This week revealed major concerns about the safety and ethics of AI agents as companies rush to adopt these powerful new technologies. Several important reports showed that businesses are not prepared for the risks that come with letting AI systems make decisions on their own.
Salt Security released a groundbreaking study that highlights serious problems with how companies protect their AI agents. The report found that over half of organizations are already using AI agents to talk with customers or plan to do so soon. However, most companies are not doing enough to keep these systems safe from cyber attacks.
The main problem is with Application Programming Interfaces (APIs) - the digital pathways that let AI agents communicate with other computer systems. These APIs are like doorways that hackers can use to break into company networks. Only 32% of companies check these doorways for problems every day, and just 37% have special security tools to protect them. This creates serious risks for both businesses and the people who use their services.
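To make "checking these doorways every day" more concrete, here is a minimal sketch of an automated daily API hygiene probe. This is an illustration only, not Salt Security's methodology: the endpoint URLs are hypothetical placeholders, and real API security programs cover far more than these checks (authentication, authorization, rate limiting, schema validation, anomaly detection, and more).

```python
# Minimal illustration of a daily API hygiene check.
# The endpoint URLs below are hypothetical placeholders.
import requests

ENDPOINTS = [
    "https://api.example.com/v1/agent/chat",
    "https://api.example.com/v1/agent/tools",
]

def audit_endpoint(url: str) -> list[str]:
    """Return a list of basic findings for one endpoint."""
    findings = []
    if not url.startswith("https://"):
        findings.append("not served over HTTPS")
    try:
        # An unauthenticated probe should be rejected; a 200 response
        # suggests the endpoint may be missing an auth check.
        resp = requests.get(url, timeout=5)
        if resp.status_code == 200:
            findings.append("responds to unauthenticated requests")
        if "Strict-Transport-Security" not in resp.headers:
            findings.append("missing HSTS header")
    except requests.RequestException as exc:
        findings.append(f"probe failed: {exc}")
    return findings

if __name__ == "__main__":
    for url in ENDPOINTS:
        issues = audit_endpoint(url)
        status = "; ".join(issues) if issues else "no basic issues found"
        print(f"{url}: {status}")
```

A script like this could run on a schedule and alert a security team, which is the kind of routine monitoring the report says most companies still lack.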
A major trust problem is growing between companies and their customers. People are becoming more worried about sharing personal information with AI systems because they don't trust that companies will keep their data safe. The Salt Security report suggests that once companies fix their security problems, people will feel more comfortable using AI services.
Security experts are sounding alarm bells about how quickly companies are adopting AI agents without proper safety measures. Unlike regular computer programs that follow set rules, AI agents can learn and adapt on their own, making them much harder to control. This creates two types of risks: internal risks from the AI tools companies use themselves, and external risks from criminals who might use AI to attack organizations.
In Asia, the situation is particularly concerning as organizations race to use AI for competitive advantage. Speed has become more important than safety, but experts warn that unprotected velocity is a liability. When AI systems fail, they don't just cause technical problems - they damage trust, bring government scrutiny, and hurt company reputations.
The most shocking findings came from Infosys, a major technology consulting company that surveyed over 1,500 business executives. Their research found that 95% of senior business leaders have experienced problems with AI systems in their companies during the past two years. These problems included privacy violations, unfair bias, wrong predictions, and failure to follow government rules.
Almost 40% of executives said the damage from AI problems was severe or extremely severe. About 77% of organizations reported financial losses, and 53% suffered damage to their reputation. Despite these serious problems, only 2% of companies have put in place the proper controls to prevent AI-related disasters.
The gap between AI adoption and safety preparation is enormous. While 86% of executives who are familiar with AI agents believe these agents will create new risks, most companies are not investing enough in responsible AI practices. On average, companies estimate they are spending 30% less than they should on AI safety measures.
However, there is hope for companies that take safety seriously. The Infosys study found that organizations with strong responsible AI practices experienced 39% lower financial losses and 18% less severe damage when problems occurred. These AI leaders do several things better: they make sure their AI systems can explain their decisions, they actively look for and fix unfair bias, they test their systems carefully, and they have clear plans for when things go wrong.
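As one concrete example of what "actively looking for unfair bias" can mean in practice, the sketch below computes a disparate impact ratio, a widely used screening metric, on made-up decision data. It is illustrative only and is not drawn from the Infosys study; real bias audits use many metrics and much larger datasets.

```python
# Hypothetical sketch of one basic fairness check: the disparate impact ratio
# (selection rate of one group divided by that of a reference group).
# The data below is invented for illustration.
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of group A's selection rate to group B's; below 0.8 is a common warning threshold."""
    return selection_rate(group_a) / selection_rate(group_b)

# Example: approval decisions (1 = approved) produced by a hypothetical AI agent.
group_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact; review the model's decisions.")
```

Running a check like this on every model release is one small piece of the testing discipline the study associates with lower losses.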
New governance frameworks are being developed to help companies manage AI agents more safely. The year 2025 marks a critical shift from AI systems that simply help people to those that can act on their own. Organizations worldwide are deploying AI agents that can make complex decisions, execute multi-step plans, and work together in networks without human supervision.
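To make the idea of a governance framework more tangible, here is a minimal, hypothetical sketch of one common control: a policy gate that stops an agent from executing high-risk actions without human sign-off. The action names and risk categories are invented for illustration and do not come from any specific standard or vendor.

```python
# Illustrative sketch of a human-in-the-loop policy gate for agent actions.
# Action names and risk tiers are hypothetical.
HIGH_RISK_ACTIONS = {"transfer_funds", "delete_records", "send_external_email"}

def execute_action(action: str, payload: dict, approved_by: str | None = None) -> str:
    """Run an agent-proposed action, blocking high-risk ones without human approval."""
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        return f"BLOCKED: '{action}' requires human approval before execution."
    # In a real system this would dispatch to the relevant tool or API.
    return f"EXECUTED: {action} with payload {payload} (approved_by={approved_by})"

# The agent proposes actions; only the low-risk one runs without sign-off.
print(execute_action("summarize_ticket", {"ticket_id": 123}))
print(execute_action("transfer_funds", {"amount": 5000}))
print(execute_action("transfer_funds", {"amount": 5000}, approved_by="ops-lead"))
```

Gates like this are one way organizations keep a human in the loop even as agents take on multi-step, autonomous work.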
Government regulations are also evolving quickly to keep up with AI developments. The European Union has established comprehensive requirements for autonomous AI systems, while the United States is implementing safety guidelines across different industries like finance, healthcare, and defense. International standards organizations are developing frameworks that will shape how companies around the world approach AI deployment.
The message from experts is clear: organizations that establish strong safety frameworks now will gain significant advantages, while those that wait face mounting risks and potential regulatory penalties. Companies must move beyond treating security as an afterthought and instead build it into the foundation of their AI systems from the very beginning.