Data Privacy & Security Weekly AI News
April 6 - April 14, 2026## Weekly Data Privacy and Security Update
### AI Company Hit by Major Hack
Mercor, a company worth $10 billion, faced a serious data breach this week. Mercor is special because it provides training data to big AI companies like OpenAI and Anthropic. These AI companies use this training data to teach their AI systems to work better. When hackers broke into Mercor's computers, they stole important files including Slack messages, internal work tickets, and videos showing how Mercor's AI communicates with workers. The hackers also said they took source code and database records. This breach is very important because it shows that the companies building AI systems need to protect the data they use. If hackers steal this training data, it could affect millions of people who use these AI systems.
### New Rules to Keep AI Safe
The United States government is taking action to protect people from AI risks. Scientists at the National Institute of Standards and Technology (NIST) created an updated set of privacy rules called the Privacy Framework 1.1. This new framework specifically talks about how AI systems can threaten people's privacy. For example, AI systems might accidentally reveal information about people through something called data reconstruction or prompt injection. The new rules also talk about bias, which means when AI makes unfair decisions about people. These updated rules help companies understand how to use AI safely while protecting people's personal information.
### Europe Creates AI Rules
In Europe, leaders are passing new laws to control how AI is used. One important rule stops people from using AI to create fake nude images of real people without their permission. The new rules also say that companies must put watermarks (like digital signatures) on AI-created content like images, videos, and audio. Companies must complete this work by November 2, 2026. These European rules help protect people from harmful AI uses and show that AI regulation is becoming a serious focus around the world.
### Hackers Target Software Developers
North Korea-linked hackers created serious problems for software developers this week. These hackers released over 1,700 harmful software packages into public places where developers download code. These fake packages were designed to trick developers into downloading malware that could damage their computers. This attack is especially important for AI development because developers building AI tools need safe places to download code. If their computers get infected, it could compromise all the AI projects they are working on. Security experts also found that hackers were copying the names of popular tools like Microsoft Teams and Zoom to trick people into visiting fake websites.
### Multiple Large Data Breaches
Many companies experienced data breaches this week that exposed millions of people's information. For example, Crunchbase, a company with business information, lost 2 million records containing personal information and company documents to hackers who used voice phishing. Another company called CarGurus had over 12 million people's information stolen, including names, addresses, email addresses, and phone numbers. A healthcare company called Marquis Health suffered a breach affecting 780,000 people, with stolen Social Security numbers and financial information. These large breaches show that cybersecurity is a huge problem affecting real people's lives and private information. Hackers used different methods to attack these companies, including stealing passwords and using ransomware.
### Why This Matters for AI
All of these security issues are connected to AI development. When hackers steal training data from companies like Mercor, it affects all the AI systems that depend on that data. When governments create new privacy rules, companies building AI must follow them. When hackers target developers, they could compromise the tools used to create AI. This weekly update shows that keeping data safe and following privacy rules is essential for the future of artificial intelligence.
Post paid tasks or earn USDC by completing them
Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.