Ethics & Safety Weekly AI News
May 26 – June 3, 2025

The RSA Cybersecurity Conference became a hotspot for debating agentic AI ethics this week. Major companies like Google and SentinelOne demonstrated security agents that can automatically block hackers and analyze malware. While these tools work faster than humans, experts warned about the trust gap – many people still don't believe AI should make security decisions alone.
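To make that trust gap concrete, here is a minimal sketch of how a security agent might act alone only when it is very confident, and otherwise hand the case to a human analyst. This is purely illustrative – the `Detection` class, `block_ip` helper, and 0.98 threshold are assumptions for the example, not any vendor's actual product logic.

```python
from dataclasses import dataclass

# Illustrative sketch only -- not Google's or SentinelOne's actual design.
AUTO_BLOCK_THRESHOLD = 0.98  # assumed: act autonomously only when very confident


@dataclass
class Detection:
    source_ip: str
    threat_type: str
    confidence: float  # the model's confidence that this is a real attack


def block_ip(ip: str) -> None:
    print(f"[agent] blocked {ip}")  # stand-in for a real firewall API call


def escalate_to_analyst(det: Detection) -> None:
    print(f"[agent] escalating {det.source_ip} ({det.threat_type}, "
          f"confidence {det.confidence:.2f}) for human review")


def handle(det: Detection) -> None:
    """Act autonomously above the threshold; otherwise keep a human in the loop."""
    if det.confidence >= AUTO_BLOCK_THRESHOLD:
        block_ip(det.source_ip)
    else:
        escalate_to_analyst(det)


handle(Detection("203.0.113.7", "credential stuffing", 0.99))  # blocked automatically
handle(Detection("198.51.100.4", "anomalous scan", 0.62))      # sent to a human
```

The design choice here mirrors the debate itself: the higher the threshold, the fewer decisions the AI makes alone, at the cost of slower responses.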
In customer service news, Cisco’s new report forecasts agentic AI handling nearly 70% of support chats by 2028. This rapid adoption is making governments nervous – the European Union just proposed strict testing requirements for any AI that gives financial or medical advice. They want companies to prove their systems won’t accidentally harm users.
Privacy took center stage when Apple revealed plans for local AI processing. Their new system keeps sensitive decisions on your device instead of sending data to company servers. Microsoft and Google are working on similar ‘personal cloud’ systems where AI agents can access stronger computers without risking your private information.
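As a rough illustration of this on-device-first pattern (not Apple's, Microsoft's, or Google's actual design), an agent could classify each request's sensitivity and offload only non-sensitive work to remote compute. The keyword check and both handlers below are assumed placeholders.

```python
SENSITIVE_KEYWORDS = {"health", "password", "bank", "ssn", "diagnosis"}


def contains_sensitive_data(request: str) -> bool:
    # Assumed placeholder: a real system would use an on-device classifier,
    # not simple keyword matching.
    return any(word in request.lower() for word in SENSITIVE_KEYWORDS)


def run_on_device(request: str) -> str:
    return f"[local model] handled: {request}"


def run_in_private_cloud(request: str) -> str:
    # Stand-in for offloading to stronger remote compute,
    # the 'personal cloud' idea described above.
    return f"[remote compute] handled: {request}"


def route(request: str) -> str:
    """On-device-first routing: sensitive requests never leave the device."""
    if contains_sensitive_data(request):
        return run_on_device(request)
    return run_in_private_cloud(request)


print(route("Summarize my bank statement"))  # stays on the device
print(route("Plan a weekend hiking trip"))   # may be offloaded
```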
Harvard researchers published a guide for building trustworthy AI agents, comparing them to teenage drivers who need supervision. They recommend three key safeguards: 1) independent audits by third-party reviewers, 2) usage limits that keep an agent from making too many big decisions, and 3) a clear emergency stop that humans can trigger. The report highlights Apple's Private Cloud Compute as a model for others to follow.
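The second and third safeguards are easy to picture in code. Below is a minimal sketch, assuming a hypothetical agent wrapper; the budget of five actions per day, the class name, and the methods are all illustrative and do not come from the Harvard report itself.

```python
class SafeguardedAgent:
    """Hypothetical wrapper enforcing two of the safeguards above:
    a cap on big decisions and a human-triggered emergency stop."""

    def __init__(self, daily_action_budget: int = 5):
        self.daily_action_budget = daily_action_budget
        self.actions_today = 0
        self.stopped = False

    def emergency_stop(self) -> None:
        # Safeguard 3: a human can halt the agent at any time.
        self.stopped = True

    def take_high_stakes_action(self, action: str) -> str:
        if self.stopped:
            return f"refused '{action}': emergency stop is active"
        if self.actions_today >= self.daily_action_budget:
            # Safeguard 2: budget exhausted, so defer to a human instead of acting.
            return f"deferred '{action}': daily budget reached, human approval needed"
        self.actions_today += 1
        return f"executed '{action}' ({self.actions_today}/{self.daily_action_budget} today)"


agent = SafeguardedAgent(daily_action_budget=2)
print(agent.take_high_stakes_action("refund $500"))
print(agent.take_high_stakes_action("close account"))
print(agent.take_high_stakes_action("wire transfer"))  # deferred to a human
agent.emergency_stop()
print(agent.take_high_stakes_action("refund $20"))     # refused: agent is stopped
```

Independent audits, the first safeguard, live outside the code: an external reviewer would inspect logs of exactly these deferred and refused actions.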
Legal experts are scrambling to update laws for the agentic AI era. A California proposal would make companies legally responsible if their AI agents break privacy laws, even if the system acted unexpectedly. Meanwhile, insurance companies are creating special policies for AI mistakes – similar to car insurance for self-driving vehicles.
Parents' groups raised concerns about AI companions for children after tests showed some agents could bypass content filters. In response, the UK announced plans to rate AI systems like video games – with age restrictions and content warnings. Japan is testing an AI safety mark program where approved systems get special certification seals.
The biggest debate came from security professionals. While agentic AI can respond to cyberattacks in milliseconds, some worry about machines making life-or-death decisions in hospitals or power plants. A coalition of tech companies pledged to develop shared safety standards by 2026, but consumer advocates argue the timeline is too slow given how fast these systems are spreading.
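One way to frame the compromise being debated: let agents respond automatically in low-stakes environments while requiring human sign-off in safety-critical ones. The tiers and names in this sketch are illustrative assumptions, not part of any pledged standard.

```python
from enum import Enum


class Criticality(Enum):
    LOW = "low"    # e.g., a staging web server
    HIGH = "high"  # e.g., hospital or power-plant systems


def respond_to_attack(environment: str, criticality: Criticality, action: str) -> str:
    """Illustrative tiered autonomy: automatic response is allowed only
    outside safety-critical environments."""
    if criticality is Criticality.HIGH:
        # Where lives are at stake, the machine never acts alone.
        return f"{environment}: queued '{action}' for human sign-off"
    return f"{environment}: executed '{action}' automatically (milliseconds)"


print(respond_to_attack("staging web server", Criticality.LOW, "isolate host"))
print(respond_to_attack("hospital ICU network", Criticality.HIGH, "isolate host"))
```

The open question for the 2026 standards effort is where each system draws that line between LOW and HIGH.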