Ethics & Safety Weekly AI News

December 8 - December 16, 2025

This weekly update covers important news about keeping AI agents safe and ethical as the technology grows rapidly around the world.

## New Global Standards for AI Agent Safety

On December 9, the Linux Foundation announced a major new organization called the Agentic AI Foundation (AAIF). This group includes some of the world's biggest technology companies: Anthropic, Amazon Web Services (AWS), Google, Microsoft, and IBM. These companies are working together to create shared rules and standards for how AI agents should work and stay safe. Think of it like establishing traffic rules for a new highway - everyone needs to follow the same safety guidelines.

The foundation will focus on three main areas: making agents work together smoothly, keeping them safe, and establishing best practices that everyone can use. One important contribution is the Model Context Protocol (MCP), which is like a common language that helps different AI agents understand each other. Without these shared standards, each company would create its own system, making it harder for AI agents from different platforms to work together safely.

## Serious Fraud and Security Challenges

However, not everyone is using AI agents for good purposes. Recent reports show that fraudsters are actually better at using AI agents than legitimate businesses. According to fraud prevention experts, 2025 was really the "Year of Machine Deception" because fraud has become automated and adaptive. This means that when criminals try to commit fraud, the AI learns from failures and tries again in slightly different ways, like a living creature that evolves.

Experts have identified dangerous vulnerabilities in AI agents. One major threat is called prompt injection attacks, where someone tricks an AI agent by hiding harmful instructions in normal-looking text. For example, a bad actor could put invisible instructions on a website, and when an AI agent reads that page, it could unknowingly follow those hidden commands.

## Government Steps In to Protect People

Government health agencies are taking safety seriously. On December 1, the FDA (Food and Drug Administration) launched its own agentic AI platform to help employees work more efficiently while staying safe. The United States Department of Health and Human Services (HHS) released a comprehensive strategy on December 1 to make sure AI is used responsibly in healthcare.

The HHS strategy requires all departments to identify which AI systems could affect people's health or personal information. By April 3, 2026, every high-impact AI system must have strong safety protections, including tests for unfair bias and human oversight. If an AI system cannot meet these requirements by the deadline, it must be stopped. This approach, called "governance with teeth," means the government will actively enforce these safety rules.

## Expert Discussions on Accountability and Ethics

On December 8, technology experts gathered in New York City at a special discussion called the Technology Salon to talk about agentic AI challenges. They raised important questions about who is responsible when an AI agent makes a mistake or causes harm.

Experts explained that responsibility is now shared among three groups: the provider who sells the AI service, the model vendor who creates the underlying AI, and the company or organization actually using the agent. This creates confusion because if something goes wrong, it's unclear who should be held accountable.

Another major concern discussed was consent and privacy. When AI agents access your email, calendar, or business documents to do their jobs, it raises serious questions about data protection. Current consent agreements are often not clear and don't really give people meaningful choices about how their data is used. The experts said we need to completely rethink how people agree to let AI agents access their information.

Weekly Highlights