Ethics & Safety Weekly AI News
December 8 - December 16, 2025This weekly update highlights major developments in AI agent safety and ethics from key announcements and expert discussions.
The Linux Foundation announced the Agentic AI Foundation (AAIF) on December 9, bringing together tech giants like Anthropic, AWS, Google, Microsoft, and IBM to create shared safety standards for AI agents. This new group aims to make AI agents work better together and prevent companies from creating their own separate systems that don't talk to each other. The foundation will focus on how agents authenticate, share information, and take actions safely across different systems.
Meanwhile, safety and fraud concerns are growing rapidly. According to recent reports, fraudsters are using AI agents more effectively than legitimate businesses are. The technology is being misused to create what experts call "machine deception" - automated fraud systems that learn and adapt in real time. Researchers have identified specific vulnerabilities like prompt injection attacks, where bad actors trick AI agents into following harmful instructions hidden in normal-looking text.
Government agencies are taking action too. The FDA launched its own agentic AI platform on December 1 to help employees work more efficiently. The HHS (Department of Health and Human Services) released a strategy emphasizing robust oversight, risk controls, and public transparency about how AI is being used in healthcare. By April 3, 2026, all high-impact AI systems in HHS must meet strict safety requirements, including bias testing and human oversight, or they will be shut down.
Experts gathered at a Technology Salon in New York on December 8 to discuss critical governance questions. They emphasized that no single company controls agentic AI systems end-to-end, creating confusion about who is responsible when something goes wrong. The discussion revealed major gaps in how people consent to AI agents accessing their personal data and performing tasks on their behalf.