Data Privacy & Security Weekly AI News

December 29, 2025 – January 6, 2026

# Data Privacy and Security Weekly Update: AI Agents and Agentic AI Take Center Stage

## Understanding Agentic AI and Why It Matters

This week's biggest privacy and security story concerns agentic AI and agentic browsers: AI systems that act on a user's behalf rather than simply answering questions. Instead of a person clicking buttons and typing commands, the agent browses the web, sends emails, and completes multi-step tasks with minimal supervision. That autonomy is powerful and convenient, but it also opens an attack surface that did not exist when a human performed every action.

## How Attackers Are Tricking Agentic Browsers

The most serious weakness in agentic browsers is the prompt injection attack, in which an attacker hides instructions inside content the AI reads, such as links, pages, or messages, and the AI mistakes those instructions for commands from its own user. Even OpenAI, the company behind ChatGPT, has struggled to secure its agentic browser, Atlas. Researchers found that a specially crafted string pasted into the browser's address bar could fail to parse as a web address and instead be interpreted as a trusted instruction from the user, effectively granting attacker-supplied text the user's full authority.
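The reported failure mode can be sketched in a few lines. This is an illustrative toy, not Atlas's actual code: the function names (`handle_omnibox`, `is_valid_url`) are assumptions, and the point is the design pattern, in which input that fails URL parsing must be treated as untrusted data rather than falling through to the agent as a trusted command.

```python
from urllib.parse import urlparse

# Hypothetical sketch of an agentic browser's address-bar handler.
# None of these names come from a real browser codebase.

def is_valid_url(text: str) -> bool:
    """Accept only strings that parse cleanly as http(s) URLs."""
    parsed = urlparse(text)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

def handle_omnibox(text: str) -> str:
    if is_valid_url(text):
        # Well-formed URL: ordinary navigation.
        return f"NAVIGATE {text}"
    # The vulnerable pattern hands parse failures to the agent as a
    # *trusted* user prompt. The safer pattern shown here labels them
    # untrusted so the agent never runs them with the user's authority.
    return f"UNTRUSTED_INPUT {text!r}"
```

The design choice being illustrated: trust should attach to where input came from, not to what it looks like, so anything that is not provably a navigation target gets demoted to untrusted data.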

## Private Conversations Are Being Leaked

AI systems are also leaking private information. Companies deploying AI assistants do not always protect users' messages adequately, and several AI companion apps have exposed private conversations online. In some cases, users did not realize that enabling certain sharing settings made their chats publicly searchable or available for advertising use. The risk is now material enough that roughly two dozen large U.S. companies have flagged AI-related privacy exposure in their investor disclosures.

## New U.S. Rules Taking Effect Now

Major changes to U.S. AI regulation took effect on January 1, 2026. California's Transparency in Frontier Artificial Intelligence Act (TFAIA) and the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) both aim to protect the public by requiring companies to be transparent about how they build AI and to guard against discriminatory outcomes. The picture is complicated, however, by an executive action President Trump signed on December 11, 2025, which may preempt parts of these state laws; attorneys are still working out which provisions will survive.

## Europe Is Making Everyone Follow New Rules

In Europe, 2025 was the year the EU AI Act moved from text to enforcement. The law itself was adopted earlier, but the European Commission is now spelling out what it requires in practice, and companies must demonstrate compliance or face penalties. Meeting the Act's requirements has proved harder than many expected and is reshaping how companies operate in Europe. The United Kingdom, meanwhile, issued new privacy rules aligning its own data protection regime with AI risks, aimed at shielding individuals from harmful AI-driven decisions.

## Training Data Is the New Privacy Battleground

The sharpest privacy fight in Europe concerns training data, the information used to teach AI models. European regulators have taken the position that training on personal data requires permission: companies cannot simply harvest everyone's information without asking. Meta, Facebook's parent company, is under investigation over plans to train its models on users' posts, and X (formerly Twitter) faces a similar inquiry into its use of public tweets to train its Grok model.

## What Security Experts Predict for 2026

Security researchers expect 2026 to be a difficult year for staying safe online. Their consensus is not that attackers will invent fundamentally new techniques, but that AI will make existing ones faster and cheaper to run at scale: more convincing phishing emails sent in far greater volume, and automated attempts to break into many systems at once. The corresponding defenses remain familiar: strong, unique passwords, prompt software updates, and healthy skepticism toward unexpected links.

## What Companies and Countries Need to Do

These changes mean companies and governments must act now rather than wait for the dust to settle. Organizations need to build compliance into their systems: documenting exactly where their training data came from, protecting it properly, and maintaining an "off switch" that can remove a person's data when they ask. Because the United States, the European Union, and the United Kingdom are regulating differently, compliance plans must work across all three jurisdictions at once. That is a significant shift from how most companies have operated, but a necessary one for protecting users' privacy.
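The two compliance steps above, provenance tracking and a per-person "off switch", can be sketched as a tiny data registry. This is a minimal illustration under assumed names (`TrainingRecord`, `DataRegistry`, `erase_subject`); it is not any real compliance framework, just the shape such a system might take.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingRecord:
    """One piece of training data with its provenance attached."""
    record_id: str
    source: str      # where the data came from, e.g. "user_upload"
    subject_id: str  # the person the data is about

@dataclass
class DataRegistry:
    records: dict = field(default_factory=dict)

    def ingest(self, record: TrainingRecord) -> None:
        # Provenance is recorded at ingestion time, not reconstructed later.
        self.records[record.record_id] = record

    def erase_subject(self, subject_id: str) -> int:
        """The 'off switch': remove every record tied to one person."""
        doomed = [rid for rid, r in self.records.items()
                  if r.subject_id == subject_id]
        for rid in doomed:
            del self.records[rid]
        return len(doomed)
```

The design point: an erasure request is only cheap to honor if every record already carries a subject identifier, which is why provenance has to be captured when data enters the pipeline rather than bolted on afterward.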
