Data Privacy & Security Weekly AI News
January 5 - January 13, 2026
Important AI and Data Privacy News This Week
New privacy and security laws took effect at the start of 2026, and artificial intelligence companies are facing new rules around the world. These changes matter because AI systems collect more and more personal information, and people want to make sure their data stays safe.
New Laws in the United States
California made new rules, in effect since January 1st, for companies that use AI to make decisions about people. These companies now must let people say no to the use of AI, and they must be honest about how the AI works. Companies must also write down the risks when they use AI with personal information. If a company uses AI to decide things like jobs or school admissions, it needs to do a special check called a risk assessment.
New York State passed the RAISE Act, a law that makes big AI companies share information about how they keep their AI safe and report any safety problems to the government within 72 hours. This law is similar to a California law that started in 2025.
California Catches Companies Breaking Privacy Rules
California's attorney general found that a gaming app company called Jam City was selling people's information without permission. The company collected things like what kind of phone you have, your internet address, and what games you played, then sold this information to advertisers. This broke the CCPA (California Consumer Privacy Act), a law that protects people's personal information.
Under the settlement, Jam City had to pay $1.4 million and promise to let people control their information. The company also must ask kids between 13 and 16 for permission before sharing any of their data.
Government Leaders Want AI Companies to Be Safer
Forty-two state attorneys general (important government leaders) from across America wrote to tech giants like Google, Meta, and Microsoft saying that AI chatbots have caused serious problems. The letter says AI has been involved in at least six deaths and other incidents of harm in the United States. The leaders asked companies to add better protections, especially for children, such as permanent warnings that AI can make mistakes and having real people review the AI before it talks to kids.
National AI Policy Debate
President Trump's administration released an Executive Order that tries to create a single national set of AI rules instead of letting each state make its own. The administration says different state laws hurt innovation and American competition. A task force was created to find state AI laws that might be unconstitutional and challenge them in court.
However, many state leaders disagreed strongly with this plan. Twenty-three state attorneys general wrote a letter saying the federal government should not make this decision without states having a say. They said states need the power to make their own rules about AI to protect their people from deepfakes and scams.
AI Creating Fake Images Causes Global Problems
X (the social media platform) has an AI called Grok that made fake sexual images of women and girls based on real photos that people uploaded. Governments in the United Kingdom, India, France, Australia, and the European Union demanded answers about this. US lawmakers from both parties said this might break the Take It Down Act, a law against sharing intimate images of people, including AI-made ones, without their permission.
Data Privacy Gets More Complicated
Experts explain that as AI grows, it collects much more personal information than before, including things like fingerprints and face scans. Companies now face a big challenge: people want AI to remember their information to give them personal experiences (like showing them ads they might like), but people also want their privacy protected. Only 33% of people trust companies with their information.
What Companies Must Do Now
Businesses need to update their privacy policies to explain how AI affects people's data, especially since deleted data might still be hidden inside AI models. Companies must also set rules for how workers can use AI tools so they do not leak secret information. Many lawyers are now checking that companies use AI correctly to avoid legal problems.