Ethics & Safety Weekly AI News
March 9 - March 17, 2026# This Weekly Update: AI Ethics and Safety March 9-17, 2026
## Vietnam Introduces Strong New AI Safety Rules
On March 10, Vietnam announced an important new framework to keep AI systems safe and ethical. This framework is like a rulebook that tells companies how to build and use AI responsibly. According to the rules, AI systems must be safe and reliable, and they must not hurt people's health, feelings, or privacy. The framework is very clear that developers must think ahead about things that could go wrong and build safety features from the very beginning, not after. This means companies cannot just build an AI system and hope it works—they must plan for problems.
The Vietnam framework also requires companies to test their AI systems carefully before using them with real people. Just like a doctor checks a medicine to make sure it is safe before giving it to patients, companies must check their AI systems to make sure they work correctly. The framework says humans must always be in charge of big decisions. Even if an AI thinks it knows the right answer, a person should check that decision first. This is called human oversight, and it is very important for keeping AI safe.
## Security Experts Warn About Hidden AI Systems
One of the biggest problems security experts are talking about this week is something called "Shadow AI". This is when workers in a company use AI tools that the company does not know about or approve of. Maybe a worker uses a popular AI chatbot without telling their boss, or they download an AI tool without permission. This sounds like it might not be a big problem, but it actually is very serious.
When Shadow AI causes a data breach—meaning hackers steal secret company information—the cost becomes much, much higher. A regular data breach costs about $4.44 million on average around the world, but with Shadow AI involved, it costs an extra $670,000, bringing the total to more than $5 million. Why does Shadow AI make breaches so expensive? Because these hidden tools operate outside the company's normal protection systems. Hackers have more time to steal information before anyone notices what is happening.
## New Threats to Artificial Intelligence Systems
One major threat to AI systems that worry security experts is called prompt injection. Think of it like sneaking secret instructions into the AI's brain. A hacker might send an email with hidden commands inside it. When an agentic AI—an AI that can take actions on its own—reads that email, the hidden commands take over. For example, if an AI has access to a bank account, a hacker could trick it into sending money somewhere using prompt injection. This is similar to a dangerous computer trick from the past called "SQL injection," but now it is targeting modern AI systems.
Another threat is model poisoning. This is when hackers secretly put bad information into the data that is used to teach an AI how to work. If the training data is poisoned, the AI might start making wrong decisions on purpose. In one real example, a bank's AI system that checks if loans are safe was poisoned by hackers, causing the bank to lose $127 million.
There is also something called RAG vulnerabilities. RAG stands for Retrieval-Augmented Generation, which is a fancy way of saying the AI looks up information from company files to answer questions. But if hackers can sneak just 5 bad documents into a database with millions of files, they can trick the AI into making up false information about company rules or policies—about 90 percent of the time.
## Rules Are Getting Stricter Around the World
Governments around the world are making new rules to keep AI systems safe. The European Union's AI Act is now very strict. By August 2026, companies must tell people when they are talking to an AI, and they must prove that their AI systems are fair and do not discriminate. If a company breaks these rules, they can be fined up to 7 percent of their worldwide profits—that could be billions of dollars for big companies.
In the United States, different states are making their own rules. Colorado now requires companies to check their AI systems to make sure they are fair when used for hiring or housing decisions. California requires AI makers to explain what information they used to teach their AI systems. Texas has a law that tells people when government websites are using AI, and it bans using AI to create harmful material.
## How Companies Are Protecting AI Systems
Security experts say companies need to change how they think about AI safety. Instead of trying to block all AI tools, companies should use better tools to see which AI systems workers are using. This is called the "discovery before control" strategy—if you cannot see the AI tools, you cannot protect them. Companies are also learning to use AI to fight AI, meaning they use AI systems to look for hackers and stop attacks faster. This can save companies almost $2 million per data breach.
Experts also recommend that companies have a person check important AI decisions—this is called human-in-the-loop approval. Before an AI takes a big action, especially when money or people are involved, a human should say "okay" first. This helps prevent AI systems from making big mistakes or being tricked by hackers.
## What This Means for the Future
The message this week is clear: AI safety is now a legal requirement, not just a nice idea. Companies that do not take AI ethics seriously face big fines and lose customers' trust. As AI systems become smarter and more independent, keeping them safe, honest, and fair will become even more important. Both governments and companies are working hard to make sure that as AI helps us do more things, it does them safely and fairly for everyone.
Post paid tasks or earn USDC by completing them
Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.