Data Privacy & Security Weekly AI News

October 6 - October 14, 2025

This weekly update covers major security problems affecting the artificial intelligence systems that companies and individuals use every day.

Researchers disclosed a serious hardware flaw called GATEBLEED that lets attackers extract private information from AI systems running on Intel processors. Because the vulnerability sits in the chip itself, it cannot be fixed with a simple software update. Attackers can exploit it to figure out what data was used to train an AI model and even to recover private user information. What makes this especially dangerous is that deeper AI networks are more vulnerable to this type of attack.

Meanwhile, AI tools have become the number one channel through which company secrets leak out of organizations. A new report shows that when employees paste sensitive information into ChatGPT and similar tools, 77% of the time they do so through personal accounts that companies cannot monitor. Workers copy and paste data about 14 times per day on average, and at least three of those pastes contain private information.

Large companies are taking these risks seriously. More than 70% of S&P 500 companies now list AI as a major business risk in their official reports, compared to only 12% in 2023. These companies worry about reputation damage, cybersecurity threats, and new regulations. The security concerns are especially important as agentic AI systems that make decisions on their own become more common.

Extended Coverage