Data Privacy & Security Weekly AI News

November 17–25, 2025

This week's biggest AI security story involves a hacking group from China that used artificial intelligence to break into computer systems at technology companies and banks. Rather than recycling old techniques, the attackers used AI to automatically scan targets for weak spots, letting them work far faster than human hackers could on their own. The group found ways into many important companies and stole valuable information, a clear sign that bad actors are now using AI as a weapon against businesses.

Another worry this week is how police are using technology to watch people. Police departments are deploying automated license plate reader (ALPR) cameras, which scan license plates and record where cars travel. Law enforcement officers have been using this technology to track protesters and activists. Many people think this is unfair, arguing that people should be able to protest without being watched. The practice raises big questions about freedom and privacy in America.

Companies had serious problems this week too. DoorDash, the food delivery company, announced that hackers broke into its systems. The attackers tricked an employee by impersonating someone else, a classic social engineering attack, and used that access to get inside DoorDash's network. The company said millions of customers' names, email addresses, phone numbers, and home addresses were stolen. This is the third time DoorDash has been breached in six years. Importantly, the hackers did NOT get credit card numbers or driver's license information, but customers are still worried.

Oracle, one of the world's largest software companies, was also attacked this week. A hacking group called Cl0p exploited a serious vulnerability in Oracle's E-Business Suite software and broke in. They stole information from almost 30 well-known organizations, including The Washington Post, Logitech, Harvard University, and Cox Enterprises. The flaw they used, tracked as CVE-2025-61882, acted like a secret entrance into Oracle's software. Nearly 10,000 people may have had their information stolen in the attack.

The European Union proposed new rules this week to update its AI and privacy laws. The EU is trying to make the rules easier for companies to follow while still protecting people's privacy. The changes include letting companies use AI for specific, clearly defined purposes, simplifying cookie consent banners, and creating a single, streamlined channel for reporting data breaches to regulators. The EU wants to help businesses put AI to good use without weakening existing privacy protections.

In the United States, states are writing their own AI rules. California enacted a law protecting health information collected near family planning clinics. Texas passed a law requiring companies to get permission before using someone's face or fingerprints in AI systems. Utah created the first consumer protection law aimed specifically at AI. States including California and North Dakota are also making rules about AI-generated deepfake videos that could mislead voters in elections. More than 250 AI-related bills were proposed across the country this year.

In Congress, lawmakers introduced a bill called the Health Information Privacy Reform Act that would protect people's health information much more strongly. It would make companies that handle health information follow the same strict rules that hospitals and doctors must follow. The bill also says companies must explain how they use AI with health data and cannot sell health information without asking permission first.

The World Health Organization (WHO), a United Nations agency, warned this week that hospitals need better rules for using AI. WHO said that AI is helping doctors find diseases faster, but patients and health workers need to be protected. It urged countries to invest in AI training for health workers, pass stronger laws governing AI in hospitals, and involve the public in conversations about how AI is changing care.

All these stories show that 2025 is a defining year for data privacy and security. Companies, governments, and ordinary people alike are learning that AI can be powerful and helpful, but it also needs strong rules to keep people safe.

Weekly Highlights