Ethics & Safety Weekly AI News
April 28 - May 6, 2025

Companies are focusing on trust-building measures for agentic AI systems this week. Dr. Eoghan Casey, a leader at Salesforce, stressed that data governance (keeping the information AI works with accurate and fair) is key to making these systems reliable. He compared safety guardrails to training wheels: they help AI stay within legal and ethical boundaries set by humans. For example, an AI handling medical records could use guardrails to avoid sharing private patient details.
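To picture what such a guardrail might look like in practice, here is a minimal, hypothetical Python sketch: a rule that redacts anything resembling a patient identifier from an agent's draft reply before it goes out, and reports which rules fired. The patterns, names, and policy here are illustrative assumptions, not Salesforce's actual system.

```python
import re

# Toy guardrail: redact obvious patient identifiers before an agent's
# draft reply is released. Illustrative sketch only; the patterns and
# policy below are assumptions, not any vendor's real implementation.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def apply_guardrail(draft_reply: str) -> tuple[str, list[str]]:
    """Redact text matching known identifier patterns and report which
    rules fired, so a human can review what the agent tried to say."""
    violations = []
    cleaned = draft_reply
    for name, pattern in PATTERNS.items():
        if pattern.search(cleaned):
            violations.append(name)
            cleaned = pattern.sub("[REDACTED]", cleaned)
    return cleaned, violations

if __name__ == "__main__":
    reply = "Patient MRN: 00123456 (DOB 04/12/1987) is cleared for discharge."
    safe_reply, fired = apply_guardrail(reply)
    print(safe_reply)  # identifiers replaced with [REDACTED]
    print(fired)       # ['mrn', 'dob']
```

The point of the training-wheels analogy is that the boundary is set by humans ahead of time; the agent's output simply cannot leave the allowed lane, no matter what it generates.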
In the U.S. insurance industry, agentic AI sparked debates about customer focus. Independent agents like those at IMA Financial Group fear carriers (big insurance companies) will use AI to cut human agents out of the process. Max Kane from Novella, an insurance wholesaler, noted that agents care less about flashy AI tools and more about finding easy solutions for clients. Meanwhile, Jason Wrather at Grange Insurance emphasized the need for checks and balances, like double-checking AI decisions about claims or pricing.
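For a rough sense of what "double-checking AI decisions" could mean in code, the hypothetical sketch below flags a claim recommendation for human review whenever the payout is large or the model's confidence is low. The threshold values and data fields are assumptions for illustration, not Grange Insurance's actual workflow.

```python
from dataclasses import dataclass

# Illustrative "checks and balances" sketch: route an agent's claim
# decision to a human reviewer instead of auto-approving it when the
# stakes are high or the model is unsure. Thresholds and fields are
# hypothetical.

@dataclass
class ClaimDecision:
    claim_id: str
    recommended_payout: float
    confidence: float  # model's self-reported confidence, 0.0-1.0

def needs_human_review(decision: ClaimDecision,
                       payout_limit: float = 10_000.0,
                       min_confidence: float = 0.90) -> bool:
    """Flag decisions a person should double-check before they take effect."""
    return (decision.recommended_payout > payout_limit
            or decision.confidence < min_confidence)

if __name__ == "__main__":
    d = ClaimDecision(claim_id="C-481", recommended_payout=12_500.0, confidence=0.97)
    if needs_human_review(d):
        print(f"Claim {d.claim_id}: hold for human adjuster review")
    else:
        print(f"Claim {d.claim_id}: auto-approve")
```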
Globally, experts agree that human oversight remains vital. While AI can handle tasks like sorting resumes or managing supplies faster than people, mistakes—like a Canadian airline’s AI bot overpromising refunds last year—show why humans must stay in charge. Dr. Casey summed it up: “Trust isn’t a bonus feature; it’s the foundation” for AI to succeed.