Data Privacy & Security Weekly AI News

October 20 - October 28, 2025

This week brought exciting news and serious warnings about artificial intelligence agents—smart AI systems that can do tasks on the internet for you. OpenAI, the company that makes ChatGPT, launched a brand new browser called ChatGPT Atlas that uses AI to help people search the web, make plans, and even book flights automatically.

Understanding How AI Browsers Work is important because they're different from regular web browsers like Chrome or Firefox. When you use ChatGPT Atlas, you're giving the AI permission to look at your websites, read your emails, and even use your passwords to do things on the internet for you. The AI can remember things about you to give better suggestions, and it has something called "agent mode" where it takes over your screen to complete tasks step by step.

But security experts immediately raised alarm bells about how dangerous these AI browsers could be. The main problem is called prompt injection attacks, which is a sneaky way to trick AI into misbehaving. Think of it like this: if someone hides secret instructions in a picture or a button on a website, the AI might follow those hidden instructions instead of what you wanted it to do.

Real-world examples show how serious these attacks can be. One security expert showed that they could trick ChatGPT Atlas by putting hidden "copy to clipboard" commands in website buttons, so when the AI visited the site, it would secretly copy bad links into your computer. Later, if you accidentally pasted one of those links, you might get sent to a fake website that steals your passwords and security codes. Another example showed that attackers could hide commands in pictures, and the AI would follow them when taking screenshots.

Privacy experts worry about what information gets shared with OpenAI's servers. The browser asks users if they want to import their passwords and browsing history from their old browser, which means the AI could see really private information. Many people don't understand they're sharing this much information, according to security researchers. If someone hacks into OpenAI's computers or tricks the AI into leaking data, personal information like bank account details could be exposed.

Companies are trying different safety solutions. OpenAI created a feature called "Watch Mode" that shows you what the AI is doing before it does something important. They also have "logged out mode" where the AI doesn't log into your account, which is more secure but makes the AI less helpful. Perplexity's Comet browser built a system to detect prompt injection attacks as they happen. However, security experts from a company called Brave said that even with these protections, the problem is still not completely solved.

OpenAI's head of security admitted this is a big challenge. He said the company is "very thoughtfully researching" ways to stop these attacks, and they did lots of testing before launching ChatGPT Atlas. They trained the AI to ignore bad instructions and added extra safety walls to protect users. But he also said that "prompt injection remains a frontier, unsolved security problem," which means hackers will keep trying to find new ways to trick these AI agents.

Other companies are releasing similar AI browsers, which means this problem is growing. Perplexity has Comet, Google added AI to Chrome, and Microsoft is working on AI browser features too. Each one has the same security challenges because they all work the same way—giving AI permission to do things on the internet for you.

Security experts say we need new ways of thinking about browser safety. Regular web browser security isn't enough for AI agents because the AI is actively reading and making decisions. A researcher at MIT explained that "if you want the AI to be useful, you need to give it access to your data and your privileges, and if attackers trick the AI, it's like they tricked you." This means the safety of AI agents is more about controlling what information they see and what they're allowed to do than just protecting them from hackers.

The bottom line is that AI browsers are powerful tools with exciting possibilities, but they also come with real dangers that nobody has completely figured out how to fix yet. People should think carefully about what information they share with these browsers and watch what the AI is doing before it makes important decisions.

Weekly Highlights