Data Privacy & Security Weekly AI News
February 9 - February 17, 2026

### AI Privacy Becomes More Important
One of the biggest stories this week is that privacy protection in AI is becoming a serious requirement for all companies. As AI tools are used more and more in everyday work, organizations need to build in safety protections from the start. This is called "privacy-by-design," which means thinking about how to protect information before building the AI tool, not after. Companies should use tools like encryption (which scrambles information so bad actors cannot read it), data minimization (using only the information you really need), and access controls (deciding who can see what).
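As a rough illustration of what "data minimization" and "access controls" can look like in practice, here is a minimal sketch that drops fields an AI tool does not need and checks a caller's role before anything is sent. The field names, roles, and the `send_to_ai_tool` function are hypothetical examples, not any company's real setup.

```python
# Minimal sketch of privacy-by-design checks before data reaches an AI tool.
# All field names, roles, and the send_to_ai_tool() stub are hypothetical.

ALLOWED_FIELDS = {"ticket_id", "product", "issue_summary"}  # data minimization: only what's needed
ALLOWED_ROLES = {"support_agent", "support_manager"}        # access control: who may use the tool


def minimize(record: dict) -> dict:
    """Keep only the fields the AI tool actually needs."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


def send_to_ai_tool(user_role: str, record: dict) -> dict:
    """Apply access control, then minimization, before any data leaves."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"Role '{user_role}' may not use the AI tool")
    payload = minimize(record)
    # In a real system the payload would also be encrypted in transit (e.g. TLS).
    return payload


if __name__ == "__main__":
    ticket = {
        "ticket_id": "T-1001",
        "product": "router",
        "issue_summary": "cannot connect",
        "customer_ssn": "000-00-0000",  # sensitive field that gets dropped
    }
    print(send_to_ai_tool("support_agent", ticket))
    # {'ticket_id': 'T-1001', 'product': 'router', 'issue_summary': 'cannot connect'}
```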
Why is this important? Because AI tools can touch a lot of sensitive information that should stay private. The challenge many companies face is that they are adopting AI faster than they can create rules for it. This sometimes creates "shadow AI," where teams experiment with AI tools without asking permission, which can accidentally leak private information. To fix this, organizations need repeatable controls they can apply the same way every time, and they need to keep records that prove they are following privacy rules.
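One simple way to make a control repeatable and provable is to route every AI request through the same wrapper that writes an audit record of who sent what to which tool. The sketch below is a hypothetical, simplified example; the log format and file name are assumptions, not a standard.

```python
# Hypothetical sketch of a repeatable control: every AI request goes through
# one wrapper that writes an audit record, so compliance can be shown later.
import json
import time


def log_ai_request(user: str, tool: str, fields_sent: list[str],
                   path: str = "ai_audit_log.jsonl") -> None:
    """Append one audit record per AI request (who, which tool, which fields, when)."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "tool": tool,
        "fields_sent": fields_sent,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record that a support agent sent three non-sensitive fields to a chatbot.
log_ai_request("agent_42", "support_chatbot", ["ticket_id", "product", "issue_summary"])
```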
### New Rules from the Government
Governments around the world are creating new rules about AI. In the United States, a new "bulk data transfer" rule from the Department of Justice stops companies from sending large amounts of sensitive personal information to certain countries: China, Russia, Iran, North Korea, Cuba, and Venezuela. The rule is meant to protect national security, and companies that break it can face serious penalties, including fines and criminal charges.
Another important development is that NIST (the National Institute of Standards and Technology) released new guidance to help organizations keep their AI systems safe from attacks. The guidance came out in December 2025 and was open for public feedback until January 30, 2026. It helps companies understand the special cybersecurity problems that come with AI systems, and companies that already follow NIST's other security guidance can add AI-specific protections to what they are already doing.
### States Create Different AI Laws
In the United States, different states are creating different AI rules, which makes compliance complicated for companies. For example, Colorado was the first state to pass a major AI law, the Colorado Artificial Intelligence Act, which will be enforced starting June 30, 2026. This law focuses on AI that makes big decisions about people's jobs, housing, and health. California also passed several AI laws about transparency (telling people when AI is being used) and preventing unfairness; these California laws take effect in 2026.
However, the White House is saying that there should be fewer rules, not more. This creates confusion because states want strict rules but the federal government wants fewer rules. Companies now have to figure out how to follow both state AI laws and the federal government's wishes, which is very tricky.
### Protecting Children from AI Chatbots
Many states are creating new rules specifically about AI chatbots (computer programs that talk to people like a human). Arizona, Florida, California, Nebraska, Massachusetts, Pennsylvania, Virginia, and many other states have introduced bills about chatbot safety. Most of these bills focus on protecting children and require chatbots to check the age of users before letting them use the service.
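Requirements like these usually translate into some kind of age check before a chatbot session starts. The sketch below shows only the general idea; the cutoff age of 18 and the way the birth date is collected are illustrative assumptions, since the actual requirements vary from bill to bill.

```python
# Hypothetical sketch of an age gate in front of a chatbot session.
# The cutoff age (18) and date handling are illustrative, not legal guidance.
from datetime import date
from typing import Optional


def is_allowed(birth_date: date, minimum_age: int = 18,
               today: Optional[date] = None) -> bool:
    """Return True if the user meets the minimum age on the given day."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= minimum_age


def start_chat_session(birth_date: date) -> str:
    if not is_allowed(birth_date):
        return "Access denied: this chatbot is not available to users under 18."
    return "Chat session started."


print(start_chat_session(date(2012, 5, 1)))  # under 18 -> denied
print(start_chat_session(date(1990, 5, 1)))  # adult -> allowed
```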
Additionally, the Federal Trade Commission (FTC) finalized updated rules under COPPA (the Children's Online Privacy Protection Act) in January 2025. The updated rules expanded the definition of personal information to include biometrics (like fingerprints and face recognition). The FTC has also been taking legal action against companies, including a messaging app company that was collecting children's information without permission.
### Fake Videos and AI Misuse are Growing Dangers
A major new international report called the International AI Safety Report 2026 came out on February 3, 2026, and it warns about dangerous ways that AI can be misused. The report was created by over 100 AI experts from more than 30 countries. It warns that AI-generated deepfakes (fake videos that look real) are getting better and harder to spot, and bad people are using them for crimes like scams, blackmail, and creating fake intimate images. The report says this problem disproportionately affects women and girls.
The report also warns that AI can help criminals execute cyberattacks by finding weaknesses in computer systems and writing code to attack them. However, it points out that AI is not yet running cyberattacks completely on its own—it mostly helps prepare attacks. There is also a tricky problem called a "dual-use challenge," which means that tools used to find security problems can also be used to cause harm.
### Companies are Building Safety Frameworks
Many large AI companies are now publishing "Frontier AI Safety Frameworks" that explain how they plan to keep their AI systems safe as they make them more powerful. In 2025, 12 companies published or updated these frameworks. Companies are also using other safety practices like documentation, incident reporting, risk management, whistleblower protection, and transparency reports. However, there is not yet one agreed-upon way for all companies to do this.
To keep AI safe, companies are using multiple layers of protection. This includes things done before the AI goes live (like filtering bad content and having humans check the work) and things done after it goes live (like identifying fake content made by AI). Even though companies have made it harder to get around AI safety protections, bad actors keep finding new ways to attack them.
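The "layers" idea can be pictured as a simple pipeline around a single model call: a filter before the model answers, a human-review step for risky topics, and a label on what the AI produced. The blocklist, review rule, and `model_generate` stub below are deliberately naive placeholders, not how any particular company implements its safeguards.

```python
# Simplified sketch of layered safeguards around an AI model.
# The blocklist, review rule, and model_generate() stub are hypothetical placeholders.

BLOCKED_TERMS = {"credit card number", "home address"}  # naive content filter


def model_generate(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"Draft answer to: {prompt}"


def needs_human_review(prompt: str) -> bool:
    """Flag prompts that should be checked by a person before release."""
    return "legal" in prompt.lower() or "medical" in prompt.lower()


def answer(prompt: str) -> str:
    # Layer 1: filter clearly unsafe requests before the model runs.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "Request blocked by content filter."
    # Layer 2: route sensitive topics to human review instead of auto-replying.
    if needs_human_review(prompt):
        return "Queued for human review."
    # Layer 3: label the output so downstream systems know it is AI-generated.
    return "[AI-generated] " + model_generate(prompt)


print(answer("What is your home address?"))
print(answer("Summarize this legal contract."))
print(answer("How do I reset my router?"))
```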