Human-Agent Trust Weekly AI News

July 28 - August 5, 2025

This week brought major news about trust in AI agents that can act on their own. Companies are working hard on the problem because many businesses are wary of letting AI make decisions without human oversight.

A company called HUMAN Security launched a new system called HUMAN Sightline on July 30th. The system distinguishes real people, automated bots, and AI agents visiting a website, so companies know who is actually using their services and can block harmful automated traffic.

Another company, Cyata, released a similar tool. It monitors AI agents working inside a company and makes sure they stay within their assigned permissions. This matters because 96% of business leaders plan to expand their use of AI agents in 2025.

A large Deloitte study found that trust is the biggest barrier stopping companies from using AI agents. About 21% of finance and accounting professionals said they do not trust AI agents enough to let them make important decisions, and a majority (60%) want AI agents to work only within strict rules, with humans making the final call.
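The idea of agents working within strict rules while humans keep the final say can be sketched as a simple approval gate. This is a minimal, hypothetical illustration (the action names and helper functions are assumptions, not any vendor's actual product):

```python
# Hypothetical sketch: an agent acts freely on low-risk tasks,
# but any high-risk action must be approved by a human first.

HIGH_RISK_ACTIONS = {"transfer_funds", "delete_records", "sign_contract"}

def handle_action(action, execute, request_human_approval):
    """Run low-risk actions automatically; defer high-risk ones to a human."""
    if action in HIGH_RISK_ACTIONS:
        if request_human_approval(action):
            return execute(action)
        return "rejected by human reviewer"
    return execute(action)

# Example usage with stand-in functions:
result = handle_action(
    "transfer_funds",
    execute=lambda a: f"executed {a}",
    request_human_approval=lambda a: False,  # the human declines
)
# result == "rejected by human reviewer"
```

Real deployments would route the approval request to a review queue rather than a callback, but the shape is the same: the agent proposes, a person disposes.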

Experts say AI has broken older ways of verifying that someone is real. Companies once relied on voice prints and similar checks to confirm identity. Now that AI can convincingly mimic human voices and behavior, those methods no longer hold up.

The United States government also released a new plan for AI security. The plan centers on Zero Trust principles: verify every user and every request before granting access to important systems, instead of trusting anything by default.
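In a Zero Trust model, no request is trusted because of where it comes from; identity and permission are checked on every call. A minimal sketch of that idea (the token and permission tables here are invented for illustration, not part of the government plan):

```python
# Hypothetical Zero Trust sketch: verify identity and authorization
# on every request, and deny by default.

VALID_TOKENS = {"token-alice": "alice"}        # issued credentials -> user
PERMISSIONS = {"alice": {"read_reports"}}      # per-user allowed actions

def authorize(token, action):
    """Return True only if the token maps to a known user who may do `action`."""
    user = VALID_TOKENS.get(token)
    if user is None:
        return False                           # unknown identity: deny
    return action in PERMISSIONS.get(user, set())  # least privilege

# Example usage:
authorize("token-alice", "read_reports")    # allowed
authorize("token-alice", "delete_reports")  # denied: not permitted
authorize("stolen-token", "read_reports")   # denied: unknown token
```

The key design choice is the default: anything not explicitly known and permitted is refused, which is the opposite of older perimeter-based security that trusted everything already inside the network.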

All these developments show that as AI agents become more common, keeping them trustworthy and safe is becoming a huge challenge for businesses worldwide.

Extended Coverage