Human-Agent Trust Weekly AI News

September 22 - September 30, 2025

This weekly update covers major developments in human-agent trust as AI systems become more common in our daily lives.

Trust in AI agents has dropped significantly over the past year. A comprehensive study by the Capgemini Research Institute found that trust in fully autonomous AI agents fell from 43% to 27% in a single year, a decline driven by growing concerns about data protection and the ethics of AI systems.

Despite falling trust, business expectations for AI agent adoption remain high. Companies predict that AI agents will play important roles in most business processes over the next three years. Business leaders expect substantial gains when humans and AI agents collaborate effectively: 65% more engagement in value-adding tasks, 53% more creativity, and 49% higher employee satisfaction.

Current adoption levels show we're still early in the AI agent revolution. Only 2% of companies have fully scaled AI agents across their operations. About 24% are testing AI agents in pilot projects, while 14% are actively implementing them. Importantly, 93% of decision-makers believe they must scale AI agents within twelve months to maintain competitive advantage, yet almost half lack a clear implementation strategy.

Consumer protection groups are taking action to address trust concerns. Consumer Reports and the GliaNet Alliance hosted discussions in New York City about creating governance structures and trust signals for AI agents. They focused on a critical question: how do we ensure AI agents act in ways that align with our interests as they gain more ability to act on our behalf? The groups explored how to define "loyalty" in AI agent behavior and how to measure and encourage trustworthy actions.

The economic potential remains enormous despite trust challenges. Agent-based AI systems could generate up to $450 billion in economic value by 2028, but how much of that value is captured depends heavily on implementation maturity: companies with well-developed AI agent systems could gain an average of $382 million over three years, while less mature adopters might see only $76 million.

Major infrastructure companies are emphasizing human oversight in their AI agent strategies. Kyndryl, which manages critical technology infrastructure, stresses that AI agents bring speed, scale and adaptability, while humans bring judgment, context and trust. In critical environments like hospitals, AI agents can predict patient needs and suggest bed assignments, but doctors and nurses still supervise and make final decisions.

The AI agent market shows explosive growth despite trust concerns. Market value reached $5.1 billion in 2024 with projections indicating it will exceed $47 billion within the next few years, growing at a remarkable 44% annual rate. Research firm Gartner predicts that by 2028, 33% of enterprise software applications will include AI agent capabilities, compared to almost none in 2023.
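The growth figures above can be sanity-checked with simple compound-growth arithmetic. The sketch below is illustrative only; the source does not state a target year, so the loop just counts how many years of flat 44% annual growth would carry the 2024 market value of $5.1 billion past $47 billion:

```python
def years_to_exceed(start, target, rate):
    """Count whole years of compound growth at `rate` needed
    for `start` to first exceed `target`."""
    years = 0
    value = start
    while value <= target:
        value *= 1 + rate
        years += 1
    return years, value

# At a flat 44% CAGR, $5.1B first exceeds $47B after 7 years (~$65B).
years, value = years_to_exceed(5.1, 47.0, 0.44)
print(years, round(value, 1))
```

The rate and dollar figures here are taken directly from the paragraph above; published market projections vary, and small changes in the assumed growth rate shift the crossover year by a year or more in either direction.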

Different levels of AI agent sophistication are emerging in the market. Industry analysis identifies four levels: Level 1 focuses on information retrieval, Level 2 handles single-task workflows, Level 3 manages cross-system workflow coordination, and Level 4 involves multiple agents working together. Most companies successfully implemented Level 1 tools in recent years, while Levels 2 and 3 are where innovation and investment are now concentrated.

Building organizational trust requires intentional effort according to business leaders. Companies must prepare for both technological and cultural changes as AI agents move from experimental tools to core business functions. Success depends on establishing firm guardrails for AI use, emphasizing human control, and investing in training workers to collaborate effectively with AI agents.

The path forward emphasizes partnership over replacement. As noted by infrastructure management experts, AI agents are not meant to replace human workers but to amplify them. The future success of human-agent collaboration will depend on developing new communication norms, governance models, and ways of measuring success that honor both technological capabilities and human judgment.

Weekly Highlights