Data Privacy & Security Weekly AI News

February 16 - February 24, 2026

## Weekly Update: Data Privacy and Security News About AI Agents

### Hidden Instructions in Links Are Tricking AI Chatbots

Microsoft security researchers found a new problem with AI chatbots. Hackers are hiding secret instructions inside regular-looking links. When you click on these links, the AI chatbot remembers the hidden instructions and changes how it behaves. Scientists call this hidden prompt injection. For example, a link might look normal when you see it, but it actually tells the AI to always recommend a certain company's products. The AI doesn't know it's being tricked.

Microsoft found that over 30 different organizations have already tried this trick. These organizations work in many different industries, like banking, health care, law, and software companies. The hackers hide their instructions in something called URL parameters, which are parts of web addresses that most people never see. This is very sneaky because users can't see what's happening.

### Companies Are Getting In Trouble For Not Protecting Kids

California, a large state in the United States, is investigating an AI company called xAI. The company created a chatbot called Grok that was creating pictures of real people without their permission. These pictures showed people doing things they never agreed to. California's Attorney General Rob Bonta sent xAI a letter telling them to stop this behavior immediately. This shows that state governments are starting to enforce rules about AI safety.

### The United Kingdom Is Making New Rules For AI Chatbots

The United Kingdom government is working on very strict new rules for AI chatbots. They might even ban children under 16 from using social media completely. The government is also closing loopholes that let some AI chatbots avoid safety rules that protect kids. Leaders in the UK say that AI chatbots are forming unsafe relationships with young people, and they want to stop this. These changes could happen within just a few months.

### Many European Countries Are Taking Action Together

Multiple countries in Europe are upset with big technology companies. Spain, Ireland, France, Greece, Denmark, Slovenia, and the Czech Republic are all investigating big tech companies or planning bans on social media for kids. Germany and Britain are also thinking about doing the same thing. The governments say they're doing this to protect children from AI-created harmful content and to keep people safe online.

### A Disagreement About AI Transparency

The White House, which is where the President of the United States works, is trying to stop Utah from making a new law. Utah wanted to require AI companies to publish their safety plans and share information about how they protect kids. But the White House doesn't want states to make their own AI rules. This disagreement shows that there's confusion about who should make rules for AI in America.

### Understanding AI Agent Problems

A security company called HUMAN Security explained something important about AI agents. While many news stories make AI agents sound very dangerous, the company says this might not be totally accurate. HUMAN Security says they see tens of millions of AI-related requests every week, but most of these are not harmful. The company uses special technology to tell the difference between normal AI activity and harmful activity.

HUMAN Security says the real problem isn't that AI agents are brand new dangers. Instead, the problem is that we don't understand them very well yet. The company watches AI agents throughout their entire journey on websites, not just where they see ads. By doing this, they can measure what AI agents are actually doing and spot real problems.

### What This Means For Everyone

These stories show that AI safety and privacy are becoming more important every day. Governments around the world are making new rules to protect people. Companies are discovering new ways that hackers can trick AI. And security experts are learning how to measure and understand what AI agents actually do. The key to making this all work is transparency, which means being honest about how AI works and what problems it can cause. When companies share information about their AI safety plans, and when governments create clear rules, people can trust AI more.

Weekly Highlights