Human-Agent Trust Weekly AI News
February 9 - February 17, 2026

# Weekly Update: Understanding Trust Between Humans and AI Agents
## The Growing Challenge of AI Agent Security
Artificial intelligence agents are becoming more common in businesses around the world, bringing new challenges around trust and security. An article from CIO.com this week explains that by the end of 2025, more than 45 billion AI agents were deployed in companies, more than 12 times the size of the entire human workforce on Earth. These agents perform many important tasks, including processing employee payroll, managing supply chain deliveries, handling customer service, and even making financial decisions. Unlike simple chatbots that only answer questions, agentic AI systems can plan, make decisions, and take actions across multiple computer systems with very little human supervision.
The problem with having so many powerful AI agents is that they create new security risks. According to Okta, an identity and access management company, 23% of IT professionals reported that their AI agents had been tricked into revealing passwords and access information. Attackers can also steal credentials at scale, meaning many passwords harvested at once. In a recent real-world attack, hackers created fake AI chat agents that impersonated Salesforce's Data Loader, a real tool used by administrators. By reusing the legitimate software's identification code, the attackers were able to skip security screens and obtain valid access tokens without anyone noticing. This attack affected more than one million customers, including customers of major cybersecurity companies.
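To see why a known identification code is not proof of identity, consider a minimal Python sketch. The token endpoint and client ID below are hypothetical placeholders, not Salesforce's real values or the attackers' actual traffic; the point is only that a public OAuth client ID can be presented by anyone who knows it.

```python
# Minimal sketch: a public OAuth client ID is not a secret.
# The URL and ID below are hypothetical placeholders.
import requests

TOKEN_URL = "https://login.example.com/oauth2/token"  # hypothetical authorization server
CLIENT_ID = "3MVG9-known-public-app-id"               # client IDs are published, not secret

def request_token(auth_code: str, redirect_uri: str) -> dict:
    """Exchange an authorization code for an access token.

    For a "public" OAuth client there is no client_secret: anyone
    who knows the client_id can present it. That is why an attacker
    impersonating a well-known app's client_id can walk through
    screens that users already trust.
    """
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": CLIENT_ID,       # the only "identity" shown here
        "redirect_uri": redirect_uri,
    })
    resp.raise_for_status()
    return resp.json()  # contains access_token, refresh_token, ...
```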
## The Authentication Solution
Experts are pointing to a solution called authentication, which means verifying who someone or something really is before allowing them access. The CIO article compares this to how email security works today. Years ago, anyone could send an email claiming to be from a real bank like Wells Fargo, but authentication standards such as SPF, DKIM, and DMARC now help verify that emails actually come from the real bank. Experts suggest that AI agents need the same kind of protection: before an AI agent performs any task, the system should first verify that the agent was created by someone trustworthy, and that the person giving the agent instructions is authorized to do so.
One way to do this verification is through DNS, which is like the internet's phone book that helps computers find each other. Every AI agent could be attached to a DNS record, which would create a natural and efficient way to verify who created the agent and whether that creator can be trusted. According to the CIO article, this authentication should happen first, before the system even considers what task the agent is supposed to perform. This matters because even an agent that is doing its job correctly might be doing that job for an attacker.
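As an illustration, here is a minimal Python sketch of what a DNS-based check could look like, assuming a hypothetical convention where each agent's creator publishes a TXT record under an `_agent` subdomain. No such standard is settled; the record name, field format, and allow-list here are all assumptions for illustration (requires the `dnspython` package).

```python
# Sketch of DNS-based agent verification under an assumed
# "_agent.<domain>" TXT record convention (not a real standard).
import dns.resolver  # pip install dnspython

TRUSTED_ISSUERS = {"payroll-vendor.example", "supply-chain.example"}

def verify_agent(agent_domain: str) -> bool:
    """Check the agent's published issuer against an allow-list
    *before* even looking at what task the agent wants to run."""
    try:
        answers = dns.resolver.resolve(f"_agent.{agent_domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # no record published: do not trust the agent
    for record in answers:
        txt = b"".join(record.strings).decode()
        for field in txt.split(";"):
            key, _, value = field.strip().partition("=")
            if key == "issuer" and value in TRUSTED_ISSUERS:
                return True
    return False
```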
## Companies Taking Action on Trust
Several large technology companies are responding to these trust challenges by building new tools and protocols for AI agents. Coinbase, a major U.S. cryptocurrency company, introduced something called "Agentic Wallets" on its Base network this week. These special digital wallets are designed specifically for AI agents to make transactions safely. The wallets keep private keys, the secret codes that control the money, inside secure areas where AI agents cannot reach them directly. The wallets also enforce spending caps and restrictions on which actions agents are allowed to perform, similar to how a parent might give their child a debit card with a limited amount of money. This design helps prevent AI agents from being exploited through prompt injection attacks, where hackers hide malicious instructions in text that the AI reads.
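The spending-cap idea can be shown in a few lines. This is a sketch of the general pattern, not Coinbase's actual Agentic Wallets API: the policy object stands in for the trusted layer that holds the private key, and the agent can only submit requests to it.

```python
# Sketch of a spend-limit policy layer; names and units are assumed.
from dataclasses import dataclass, field

@dataclass
class WalletPolicy:
    daily_cap: float                      # max spend per day (assumed unit: USDC)
    allowed_actions: set = field(default_factory=lambda: {"transfer", "swap"})
    spent_today: float = 0.0

    def authorize(self, action: str, amount: float) -> bool:
        """Gate every agent request before the key is ever used."""
        if action not in self.allowed_actions:
            return False                  # action type not permitted for agents
        if self.spent_today + amount > self.daily_cap:
            return False                  # would exceed the spending cap
        self.spent_today += amount
        return True

policy = WalletPolicy(daily_cap=100.0)
print(policy.authorize("transfer", 30.0))   # True: within the cap
print(policy.authorize("transfer", 80.0))   # False: 30 + 80 > 100
print(policy.authorize("mint_nft", 1.0))    # False: action not on the allow-list
```

Because the cap is enforced outside the agent, a prompt-injected instruction like "send everything to this address" fails the same check as any other over-limit request.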
OpenAI also upgraded its Responses API this week to give AI agents more power and better safety features. The upgraded system allows agents to run long and complicated tasks without losing track of information, and agents can now operate inside managed computer environments with persistent storage and networking. These upgrades reduce the need for custom-built computer infrastructure while also raising important questions about governance and security around agent authorization.
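For readers unfamiliar with the Responses API, the basic call pattern looks like the Python sketch below, which chains two turns with `previous_response_id` so the second call keeps the context of the first. The model name and task are illustrative, and the managed execution environment described above is configured separately rather than shown here.

```python
# Sketch of chaining turns with the OpenAI Responses API (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First step of a longer task.
first = client.responses.create(
    model="gpt-4.1",  # model name is illustrative
    input="Summarize the open invoices that need review.",
)

# A later step references the earlier response, so the agent
# does not lose track of what it already worked out.
follow_up = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,
    input="Draft an email to the vendor for the largest one.",
)
print(follow_up.output_text)
```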
## Human-Agent Collaboration is the Key
A Harvard Business Review article published this week explains that the most successful companies are not trying to replace humans with AI agents entirely. Instead, they are redesigning their workflows so that humans and AI agents work together. For example, a U.S. mortgage company rebuilt its business processes around multiple specialist agents that work together under an orchestrator agent coordinating their work, with additional governance agents checking for accuracy. This human-agent collaboration approach creates value that neither humans nor AI agents could achieve alone.
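A toy version of that orchestrator pattern might look like the following Python sketch. The specialist and governance agents here are plain functions standing in for LLM-backed agents, and all of the names are hypothetical.

```python
# Sketch of the orchestrator pattern with stubbed specialist agents.
def credit_agent(application: dict) -> dict:
    return {"credit_ok": application.get("score", 0) >= 650}

def document_agent(application: dict) -> dict:
    return {"docs_complete": "income_proof" in application}

def governance_agent(results: dict) -> bool:
    """Check the specialists' outputs before anything ships."""
    return all(results.values())

def orchestrator(application: dict) -> str:
    # Fan the work out to specialists, then gate on the governance check.
    results = {}
    results.update(credit_agent(application))
    results.update(document_agent(application))
    if not governance_agent(results):
        return "escalate to human underwriter"   # human stays in the loop
    return "proceed to automated processing"

print(orchestrator({"score": 700, "income_proof": "W-2"}))  # proceeds
print(orchestrator({"score": 600}))                         # escalated to a human
```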
Companies using this collaboration approach are seeing real business results. Harvard Business Review reports that more than 74% of executives whose organizations have introduced agentic AI see returns on their investment within the first year. One example mentioned in the article is a retail pricing analytics company whose multi-agent system was approved for production in less than four months because it directly helped the company make faster market decisions and reduced manual errors.
## Planning for the AI Agent Future
A Thomson Reuters report released this week found that AI adoption in professional services, which includes law firms, accounting firms, and consultancies, has reached a tipping point. About 15% of professional services organizations have already adopted some type of agentic AI tool, and an additional 53% report that they are either actively planning to adopt agentic AI or considering it. Taken together, this indicates that adoption of AI agents is accelerating rapidly.
The key to making this transition successfully appears to be building trust through authentication, clear rules, and human oversight. As more AI agents enter the workforce and take on increasingly important business tasks, companies need to verify who created each agent, make sure authorized people are giving the instructions, maintain clear audit trails of what each agent does, and keep humans involved in important decisions. The technology and standards for this are already emerging, but experts warn that the window to implement these safeguards is closing quickly, measured in months rather than years.
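One concrete piece of that checklist, the audit trail, can be sketched in a few lines. The field names below are assumptions rather than an established standard; the point is simply that every agent action records which agent acted, who authorized it, and whether a human approved it.

```python
# Sketch of an agent audit-trail entry; field names are assumed.
import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, instructed_by: str,
                     action: str, approved: bool) -> str:
    """Record who acted, who authorized it, and whether a human
    signed off, so every agent action can be traced afterwards."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,            # which agent acted
        "instructed_by": instructed_by,  # which person gave the instruction
        "action": action,
        "human_approved": approved,      # was a person in the loop?
    }
    return json.dumps(entry)

print(log_agent_action("payroll-bot-7", "alice@example.com", "run_payroll", True))
```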