Human-Agent Trust Weekly AI News
July 28 - August 5, 2025

This week saw major developments in the ongoing challenge of building trust between humans and AI agents. As AI agents become more powerful and independent, companies and governments are scrambling to create new systems that keep these digital workers safe and trustworthy.
The biggest news came from HUMAN Security, which announced a groundbreaking new system on July 30th. Their product, called HUMAN Sightline with AgenticTrust, is being called the first adaptive trust layer designed specifically for the age of AI agents. This system can identify and track three different types of digital actors: real humans, simple computer bots, and sophisticated AI agents.
What makes this system special is that it doesn't just block bad actors - it governs every digital interaction. The technology can tell when an AI agent is pretending to be human, prevent AI from copying or spoofing real identities, and manage situations where AI agents overstep their permissions or act outside their intended scope. Dave DeWalt, a cybersecurity expert, explained that in today's world, not every digital interaction involves a real person, making it critical to tell the difference.
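The core idea of distinguishing humans, simple bots, and AI agents can be illustrated with a toy classifier. This is a minimal sketch, not HUMAN's actual technology; the signal names (`declares_automation`, `uses_llm_tooling`, `interaction_latency_ms`) are illustrative assumptions.

```python
from enum import Enum

class ActorType(Enum):
    HUMAN = "human"
    BOT = "bot"
    AI_AGENT = "ai_agent"

def classify_actor(signals: dict) -> ActorType:
    """Toy classifier: sort a request into one of three actor types
    using made-up behavioral signals (for illustration only)."""
    # Declared automation, e.g. a token or user-agent that self-identifies
    if signals.get("declares_automation"):
        # Agents that reason over tasks differ from simple scripted bots
        if signals.get("uses_llm_tooling"):
            return ActorType.AI_AGENT
        return ActorType.BOT
    # Human-like interaction timing is one (imperfect) humanity signal
    if signals.get("interaction_latency_ms", 0) > 100:
        return ActorType.HUMAN
    return ActorType.BOT
```

A real system would combine many weak signals (device fingerprints, behavioral biometrics, cryptographic attestations) rather than a single rule, but the three-way output is the same idea.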
Just one day later, another company called Cyata launched a competing platform. Their system focuses on agentic identities - basically, digital IDs for AI agents that work in companies. With 96% of business technology leaders planning to increase their use of AI agents in 2025, according to Cloudera research, this type of identity management is becoming essential for workplace security.
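The concept of an agentic identity - a digital ID for an AI agent, like an employee badge with an explicit list of permissions - can be sketched in a few lines. This is a generic illustration, not Cyata's product; all field names and scopes are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str    # unique ID issued to the agent
    owner: str       # the human or team accountable for the agent
    scopes: tuple    # the only actions the agent may perform
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def is_authorized(identity: AgentIdentity, action: str) -> bool:
    # Deny by default: anything outside the explicit scopes is refused
    return action in identity.scopes
```

Tying every agent to an accountable owner and a fixed scope list is what lets a company audit what its "digital workers" did and on whose behalf.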
The timing of these product launches makes sense when you look at research from Deloitte released on July 29th. Their study of finance and accounting professionals in the United States found that trust is the number one barrier preventing companies from adopting AI agents. About 21% of respondents said they don't trust AI agents enough to use them for important financial work.
The trust problem goes deeper than just general worry. When asked how much independence AI agents should have, nearly 60% of respondents said they trust AI agents to make decisions only within a defined framework, with humans still making the judgment calls. Just 2.7% said they would trust AI agents to make decisions entirely on their own, including important judgment calls.
Court Watson from Deloitte explained that trust must be built into AI tools from the very beginning. This means creating clear policies, processes, and controls throughout the entire life cycle of AI systems, especially when dealing with financial statements and other critical business information.
The trust challenge isn't just about business operations - it's also about basic security and authentication. An article from Pindrop published on July 28th highlighted how AI has broken traditional authentication methods. The company pointed to comments from Sam Altman, who warned that AI has defeated most current ways people prove their identity, except for passwords.
This has created a fundamental shift in how we think about digital trust. Before AI became so advanced, the main question was "Is this the right person?" Now, companies first have to ask an even more basic question: "Is this a real human at all?" This change is forcing a complete rethinking of security systems for phone calls, video conferences, and other remote interactions.
The United States government is also taking action on AI trust issues. In August, the White House released "America's AI Action Plan," which emphasizes Zero Trust security as central to the country's AI strategy. Zero Trust means never automatically trusting any user or device, even if they're inside a company's network, and always verifying identities before granting access.
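The Zero Trust principle - verify every request, trust nothing by default - can be shown in a small request handler. This is an illustrative sketch of the pattern, not the government's or any vendor's implementation; `verify_token` and `check_device` are hypothetical stand-ins for real identity and device checks.

```python
def handle_request(request: dict, verify_token, check_device) -> dict:
    """Zero Trust sketch: identity and device are checked on EVERY
    request - being "inside" the network grants nothing."""
    identity = verify_token(request.get("token"))  # e.g. a signed token check
    if identity is None:
        return {"status": 401, "body": "identity not verified"}
    if not check_device(request.get("device_id")):
        return {"status": 403, "body": "device not trusted"}
    # Note: there is no branch for "internal network" -
    # in Zero Trust, network location is never a credential
    return {"status": 200, "body": f"hello {identity}"}
```

The key design choice is what the code *lacks*: no allowlist for internal IP ranges and no session that skips re-verification, which is exactly the "never trust, always verify" stance the Action Plan emphasizes.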
Despite all the concern about trust, there are signs that AI agents are starting to move from experimental projects to real-world applications. A PYMNTS Intelligence report from August found that while nearly all Chief Financial Officers know about agentic AI, only 15% are currently interested in deploying it. This suggests that while awareness is high, practical adoption is still limited by trust and other concerns.
The week's developments paint a picture of an industry in transition. As AI agents become more capable and autonomous, the infrastructure around them - including security, identity management, and governance systems - is rapidly evolving to keep pace. Companies that can successfully balance the benefits of AI agents with robust trust and security measures are likely to gain significant competitive advantages in the coming years.
What all these developments have in common is recognition that the relationship between humans and AI agents is fundamentally changing how we work, communicate, and conduct business. Building systems that maintain trust while allowing AI agents to operate effectively has become one of the most important technology challenges of our time.