Data Privacy & Security Weekly AI News
February 16 - February 24, 2026This weekly update covers important data privacy and security concerns related to AI agents around the world.
Microsoft recently discovered a sneaky new way hackers can trick AI chatbots. They call it hidden prompt injection, which happens when hackers hide secret instructions in links. When people click these links, the AI remembers the bad instructions and gives biased answers in the future. This trick was used by more than 30 companies trying to trick people into buying their products.
Governments are also getting more serious about protecting people from AI dangers. California is investigating an AI company called xAI because its chatbot was creating harmful images without permission. Meanwhile, the United Kingdom is planning strict new rules for AI chatbots that talk to kids, and might even ban kids under 16 from using social media altogether. Many countries in Europe, including Spain, Ireland, and France, are also taking action to keep kids safe from harmful AI content.
There's also a disagreement between the White House and the state of Utah. Utah wanted companies to share their AI safety plans, but the White House tried to stop this rule from happening.
A company called HUMAN Security explained that AI agents do cause some problems, but they're not as big of a problem as older types of computer attacks. They said the real challenge is measuring and understanding how AI agents work, not just being scared of them. Most importantly, keeping AI agents safe will require transparency and honest communication between companies and the people who use their products.