Data Privacy & Security Weekly AI News

October 6 - October 14, 2025

This weekly update brings troubling news about security and privacy problems affecting artificial intelligence technology. Multiple reports show that AI systems create new ways for hackers to steal information and for company secrets to leak out accidentally.

Researchers from North Carolina State University discovered a hardware security flaw that affects AI systems: a vulnerability called GATEBLEED in Intel chips that include special AI accelerators. These accelerators are parts of the processor designed to make AI run faster while using less power. The problem is that attackers can use this flaw to figure out what data was used to train an AI model and even steal private information from users.

What makes GATEBLEED particularly dangerous is that it cannot be fixed with a regular software update. Because the problem exists in the physical hardware itself, chip makers would need to redesign their processors, which takes years. The flaw affects systems built on Intel's 4th Generation Xeon Scalable CPUs, which include the Advanced Matrix Extensions (AMX) technology. The researchers showed that an attack exploiting the flaw can determine with 81% accuracy whether specific data was used to train an AI model.
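To give a rough sense of how this class of attack works (this is not the researchers' actual exploit), here is a small, purely conceptual Python sketch of a timing-based membership-inference probe. Everything in it, including the stand-in model, the baseline calibration, and the threshold, is a hypothetical assumption for illustration only.

```python
# Conceptual sketch only: NOT the GATEBLEED exploit. It illustrates the broad
# idea that timing differences on shared hardware can hint at whether a model
# has seen a particular input. All names and thresholds are hypothetical.

import statistics
import time

def median_latency(model, sample, repeats=100):
    """Median wall-clock time to run `model` on `sample`."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        model(sample)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

def guess_training_member(model, sample, baseline, threshold=0.95):
    """
    Hypothetical decision rule: if inference is noticeably faster than a
    calibrated baseline (for example, because an accelerator stayed powered
    up), guess that the sample resembles data the model was trained on.
    """
    return median_latency(model, sample) < baseline * threshold

# Example with a stand-in "model" that is deliberately slower on unseen input.
def toy_model(x):
    time.sleep(0.001 if x == "seen example" else 0.002)

baseline = median_latency(toy_model, "unseen example")
print(guess_training_member(toy_model, "seen example", baseline))    # True
print(guess_training_member(toy_model, "unseen example", baseline))  # False
```

In the real attack, the measurable signal comes from the hardware rather than from an artificially slowed function, but the overall pattern of comparing observed timing against a calibrated baseline is the same.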

At the same time, a separate study by a company called LayerX reveals that AI tools have become the biggest source of data leaks from companies. The report analyzed real-world data from businesses and found shocking statistics. Almost half of all employees (45%) now use AI tools like ChatGPT, Claude, and Copilot. However, 67% of this AI usage happens through personal accounts that company security teams cannot see or control.

The most alarming finding concerns how employees share information with AI systems. The study found that 40% of files uploaded to AI tools contain personally identifiable information or payment card data. Even worse, the main leak channel is copy-and-paste rather than file uploads. Workers paste information into AI tools through personal accounts 82% of the time, averaging 14 pastes per day, with at least three of those containing sensitive data. This makes copy-paste into AI systems the number one way corporate information leaves company control.
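As an illustration of the kind of check a security team might apply before text is pasted into an external AI tool, here is a minimal sketch that flags card-like numbers and email addresses. The regular expressions and the Luhn checksum are standard techniques, but the specific function names and rules are assumptions for illustration and are not drawn from the LayerX report.

```python
# Minimal, illustrative pre-paste check for obviously sensitive text.
# Real data-loss-prevention tools use far broader and more reliable detection.

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum (card-like)."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_sensitive_data(text: str) -> bool:
    """Flag text that looks like it contains payment card numbers or emails."""
    if EMAIL_PATTERN.search(text):
        return True
    for match in CARD_PATTERN.finditer(text):
        if luhn_valid(match.group()):
            return True
    return False

print(contains_sensitive_data("Invoice for 4111 1111 1111 1111"))  # True
print(contains_sensitive_data("Summarize this meeting agenda"))    # False
```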

Major corporations are starting to recognize these dangers. According to Cybersecurity Dive, more than 70% of companies in the S&P 500 now identify AI as a material risk in their public disclosure documents. This represents a dramatic increase from just 12% in 2023. Companies are no longer treating AI as experimental technology. Instead, they are embedding it in important business systems including product design, logistics, credit decisions, and customer service.

These companies identify three main areas of concern. Reputational risk leads the list, with more than one-third of companies worried about brand damage from AI failures or privacy mishaps. Cybersecurity risk ranks second, with one in five companies explicitly citing security concerns about their AI deployments. Companies face threats both from their own AI systems and from third-party AI applications they use. Regulatory risk completes the top three, as state and federal governments work to create new rules for AI technology.

Another study focused specifically on how AI data is becoming a prime target for cyber attackers. Roughly a quarter of survey respondents (24-25%) reported at least one exposure in their AI infrastructure. The leading hotspot is data at inference, meaning the point at which AI systems actively process information to make predictions or decisions. This is especially concerning as more companies deploy agentic AI systems, where artificial intelligence agents connect to other agents and make decisions automatically.

Experts warn that as agentic AI systems become more autonomous, the security stakes grow higher. Organizations will need to prove how AI systems reached their decisions, not just what decisions they made. Security professionals note that data privacy and compliance concerns are slowing down the adoption of agentic AI in many industries. The challenge is that attackers themselves now use AI tools, creating an arms race where companies must use AI to defend against AI-powered threats.
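To make the idea of proving how a decision was reached more concrete, here is a minimal, hypothetical sketch of a tamper-evident audit record an organization might keep for each agent decision. The field names, the hash-chaining approach, and the example values are illustrative assumptions, not a description of any specific product or standard.

```python
# Hypothetical sketch: an append-only, hash-chained audit log for AI agent
# decisions, so each record can later show which inputs, model version, and
# rationale produced a given action. All field names are illustrative.

import hashlib
import json
import time

class DecisionAuditLog:
    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, agent_id, inputs, model_version, decision, rationale):
        entry = {
            "timestamp": time.time(),
            "agent_id": agent_id,
            "inputs": inputs,
            "model_version": model_version,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        # Chain each record to the previous one so later tampering is detectable.
        serialized = json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(serialized.encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.records.append(entry)
        return entry

log = DecisionAuditLog()
log.record(
    agent_id="credit-review-agent",
    inputs={"application_id": "12345"},
    model_version="v2.1",
    decision="escalate_to_human",
    rationale="income verification confidence below threshold",
)
```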

These developments show that AI security and privacy have moved from future concerns to present-day challenges. Companies and individuals using AI tools need to be aware of these risks and take steps to protect sensitive information. The problems affect systems across multiple countries and require both technical solutions and careful policies about how people use AI tools at work and at home.

Weekly Highlights