This week's updates on human-agent trust highlight growing concerns over AI security risks. A report revealed that more than 23.7 million secrets were leaked on GitHub in 2024, with AI agents contributing to non-human identity (NHI) sprawl: companies now manage 45 machine identities for every human user, a ratio that creates complex security challenges. Repositories using tools like GitHub Copilot saw 40% more leaks, showing how AI coding tools can inadvertently increase exposure.
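The kind of credential leak described above is typically caught by pattern-based secret scanning before code reaches a shared repository. The sketch below illustrates the general idea with a handful of assumed regex rules; real scanners such as GitHub's secret scanning use far larger, provider-maintained rule sets, and these specific patterns and names are illustrative only.

```python
import re

# Illustrative rules only -- production scanners maintain hundreds of
# provider-specific patterns with validity checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]", re.IGNORECASE
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for every hit in text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings
```

A hook like this would run on each commit diff and block the push when `scan_text` returns any findings, which is how leaks from both human and machine identities can be stopped at the source.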

Businesses are adopting governance frameworks to build trust. SAS introduced AI agents in its Viya platform that balance human oversight with AI autonomy. These agents explain decisions and follow ethical rules, helping companies feel confident in AI-driven actions.

Experts urge careful supervision of AI agents. Salesforce uses agents to handle customer queries but ensures they escalate uncertain cases to humans. Future personal AI agents could manage schedules and purchases but face open questions about loyalty and control.
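The escalation pattern described above is commonly implemented as a confidence-threshold router. A minimal sketch, assuming a self-reported confidence score and an illustrative 0.8 cutoff (neither is a documented Salesforce setting):

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff, tuned per deployment

@dataclass
class AgentAnswer:
    reply: str
    confidence: float  # agent's self-reported confidence in [0, 1]

def route(answer: AgentAnswer) -> str:
    """Send confident answers automatically; hand the rest to a person."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-reply: {answer.reply}"
    return "escalated to human agent"
```

Keeping the threshold conservative trades some automation for trust: borderline cases cost a human review instead of risking a wrong automated answer.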

Surveys show 99% of developers are exploring AI agents, signaling rapid growth. However, real-world agent capabilities still lag behind expectations, underscoring the need for clearer trust-building measures.

Extended Coverage