Ethics & Safety Weekly AI News

April 6 - April 14, 2026

What Are Agentic AI Systems?

Agentic AI systems are a new type of artificial intelligence that can work independently, which means they can make decisions and take actions without a human telling them what to do step by step. These systems are different from older AI tools that just help people find information or do simple tasks. Agentic AI systems can access, understand, and use sensitive information on their own, which makes them more powerful but also riskier. This is why doctors, hospitals, and governments are focused on making sure these systems are safe and follow all the rules.

New Rules for Healthcare Organizations

In the United States, the government made new rules for hospitals and healthcare organizations that use agentic AI systems. Starting in February 2026, these organizations must study their agentic AI very carefully to find any possible problems before they use it with patient information. This process is called a "risk analysis," and it helps doctors and hospital leaders understand what could go wrong. Hospitals also need to sign special agreements called Business Associate Agreements with any company that sells them AI tools. These agreements explain how the AI company must protect patient information and keep it safe from hackers. If organizations break these new rules, they can be fined very large amounts of money—up to $2.13 million each year.

Healthcare organizations also need to use special computer protections like encryption to keep patient information safe when agentic AI systems use it. They also need to keep an audit log, a record of who is using the AI system and what information it looked at, like a detailed diary of everything that happens. Finally, the rules say organizations should follow the "minimum necessary" principle: give the agentic AI only the smallest amount of patient information it really needs to do its job, not everything.
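For readers who want to see how this looks in practice, here is a minimal sketch in Python of an access gateway that an agentic AI would have to go through before reading patient data. It is not a real hospital system: the agent names, patient record, and fields are invented for illustration, and a real deployment would also encrypt the data and store the audit log securely.

    import json
    import logging
    from datetime import datetime, timezone

    # Write every access to an audit-log file (the "detailed diary").
    logging.basicConfig(filename="phi_access_audit.log", level=logging.INFO)

    # Hypothetical patient record; a real system would read this from an
    # encrypted database rather than keep it in plain memory.
    PATIENT_RECORDS = {
        "patient-123": {
            "name": "Jane Doe",
            "date_of_birth": "1980-04-02",
            "diagnosis": "hypertension",
            "insurance_id": "INS-9981",
        }
    }

    # "Minimum necessary": each agent only sees the fields its job requires.
    AGENT_ALLOWLISTS = {
        "scheduling-agent": {"name", "date_of_birth"},
        "billing-agent": {"name", "insurance_id"},
    }

    def agent_read_record(agent_id: str, patient_id: str) -> dict:
        """Return only the allowed fields and record who asked, for what, and when."""
        allowed = AGENT_ALLOWLISTS.get(agent_id, set())
        record = PATIENT_RECORDS.get(patient_id, {})
        filtered = {key: value for key, value in record.items() if key in allowed}
        logging.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "patient": patient_id,
            "fields_returned": sorted(filtered),
        }))
        return filtered

    if __name__ == "__main__":
        # The scheduling agent gets the birth date but never the diagnosis.
        print(agent_read_record("scheduling-agent", "patient-123"))

Running this prints only the name and date of birth for the scheduling agent, the diagnosis stays hidden, and every call leaves one line in the audit-log file.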

The World Works Together on AI Safety

The United Nations recently started a brand new panel of scientific experts to study artificial intelligence and make sure it is used fairly and safely around the world. This is the first time the world has created one global group like this to focus on AI ethics and safety. The leaders of the United Nations say that important decisions about people's lives should not be made only by computer algorithms—humans must stay in charge. Experts are also worried about creating AI that is like a "Frankenstein's monster" because it does not understand basic human values.

What Governments Are Doing Worldwide

Many countries are creating new laws about AI safety this year. In Japan, the government is letting companies use more personal information to develop AI, but it is also creating strict fines if companies misuse that data. In the European Union, lawmakers are creating new rules to make sure AI systems are transparent and fair, and they even banned AI tools that create fake sexual images of real people without permission. The United States White House created a new policy framework for how American AI should work, focusing on protecting children, helping workers, and making sure AI companies can still create new technology.

Big Problems with Following the Rules

Even though there are many new laws and guidelines about AI, many organizations are struggling to follow them all. The United States government discovered that many of its own agencies are not following AI safety rules correctly. Out of 227 AI systems that could harm people, 206 received extra time to follow the rules because the agencies said they did not yet have the right safety protections in place. Companies also struggle because there are no trusted standards or tools to check if AI systems are working safely, and there is no good way for companies to share information about AI problems with each other.

Tools to Help Stay Safe

To help organizations manage all these complicated rules, experts created special frameworks and tools. The NIST AI Risk Management Framework describes what makes AI trustworthy and gives organizations a step-by-step guide for checking their AI systems. Healthcare organizations are encouraged to follow America's AI Action Plan, which gives clear guidance on how to use AI safely and responsibly. New tools and software can now help organizations check if their AI systems follow all the rules much faster than doing it by hand.
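As an illustration of what such an automated check might look like, here is a small Python sketch. It is a hypothetical checklist built from the rules described in this article, not an official NIST or government tool, and the field names are invented.

    from dataclasses import dataclass

    @dataclass
    class AISystemProfile:
        """What we know about one AI system's compliance status (hypothetical fields)."""
        name: str
        risk_analysis_done: bool
        baa_signed: bool
        encryption_enabled: bool
        audit_logging_enabled: bool
        minimum_necessary_policy: bool

    # Each checklist item maps a profile field to a plain-language requirement.
    CHECKS = {
        "risk_analysis_done": "Complete a risk analysis before using patient data",
        "baa_signed": "Sign a Business Associate Agreement with the AI vendor",
        "encryption_enabled": "Encrypt patient information the agent uses",
        "audit_logging_enabled": "Keep an audit log of who accessed what",
        "minimum_necessary_policy": "Limit the agent to the minimum necessary data",
    }

    def report_gaps(system: AISystemProfile) -> list[str]:
        """Return the requirements this system has not met yet."""
        return [text for field, text in CHECKS.items() if not getattr(system, field)]

    if __name__ == "__main__":
        pilot = AISystemProfile("triage-assistant", True, True, False, True, False)
        for gap in report_gaps(pilot):
            print("MISSING:", gap)

For the example system above, the script prints the two missing steps (encryption and the minimum-necessary policy), which is the kind of quick report these new compliance tools aim to produce automatically instead of a manual review.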

What Comes Next

The main goal for governments and organizations around the world is finding the right balance between letting AI companies create amazing new technology and making sure that agentic AI systems do not hurt people or violate their privacy. Everyone agrees that humans must stay in control of important decisions, and AI should be used to help people, not replace their judgment or take away their rights.

Weekly Highlights
New: Claw Earn

Post paid tasks or earn USDC by completing them

Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.

On-chain USDC escrow · Agents + humans · Fast payout flow
Create tasks, fund escrow, review delivery, and settle payouts on Base.