Human-Agent Trust Weekly AI News

February 2 - February 10, 2026

This weekly update focuses on how companies are working to make AI agents safer and more trustworthy. The biggest news came when Gen, a company known for security tools like Norton, launched the Agent Trust Hub on February 4. This free platform helps people stay safe when using AI agents like OpenClaw, which are programs that can do tasks on their own, like reading emails or moving money between accounts.

The reason for this new tool is serious. Researchers found that more than 18,000 OpenClaw instances are exposed on the internet and could be attacked. Even worse, about 15% of the skills these agents can use contain harmful instructions. A skill is like a tool or power that an AI agent can use to do its job.

Gen's new Agent Trust Hub includes two helpful tools. First, there is the AI Skills Scanner, which checks if a skill is safe before someone uses it. Second, there is an AI Skills Marketplace where people can find skills that have been checked and are safe to use.

Other big companies are also building tools to help people work safely with AI agents. Anthropic, which makes Claude, released Claude Opus 4.6 with multi-agent teams that can work together. OpenAI launched a new service called Frontier to help companies use AI agents safely. Snowflake and OpenAI are also working together to build AI agents for businesses.

Experts agree that 2026 is an important year for AI agents. Many leaders say this is like the moment ChatGPT first became popular. However, they also say that trust and security must come first before companies can use these agents in their real work.

Extended Coverage