Orion Security made waves this week with its new AI threat-detection tool designed to catch insider risks. The system learns normal employee behavior patterns and flags unusual file access or data transfers. Early tests show it reduces false alarms by 60% compared to older methods.
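
Orion hasn't published its internals, but behavioral baselining of this kind is commonly built on per-user statistical anomaly detection: learn each employee's normal activity level, then flag days that deviate sharply from it. Here is a minimal sketch of that idea in Python; the class name, threshold, and numbers are purely illustrative, not Orion's actual method:

```python
from collections import defaultdict
from statistics import mean, stdev

class BehaviorBaseline:
    """Tracks per-user daily file-access counts and flags outliers."""

    def __init__(self, z_threshold: float = 3.0):
        self.history = defaultdict(list)   # user -> list of daily counts
        self.z_threshold = z_threshold     # std-devs from normal that counts as anomalous

    def record(self, user: str, daily_file_accesses: int) -> None:
        """Add one day of observed activity to the user's baseline."""
        self.history[user].append(daily_file_accesses)

    def is_anomalous(self, user: str, todays_count: int) -> bool:
        """Flag activity that deviates sharply from the user's own baseline."""
        counts = self.history[user]
        if len(counts) < 7:                # too little history to judge
            return False
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:                     # perfectly flat history
            return todays_count != mu
        return abs(todays_count - mu) / sigma > self.z_threshold

# Example: a user who normally touches ~20 files a day suddenly pulls 400.
baseline = BehaviorBaseline()
for day_count in [18, 22, 19, 21, 20, 23, 17, 20]:
    baseline.record("alice", day_count)
print(baseline.is_anomalous("alice", 400))  # True -> flag for review
```

Production systems track far more signals (access times, destinations, transfer volumes) and often replace the z-score with learned models, but the baseline-then-deviate structure is the same.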

US states are taking the lead on privacy laws while federal rules stall. The proposed state rules focus on three key areas: stricter controls for health apps, bans on selling kids' location data, and new transparency requirements for data brokers. California's latest bill would let people sue companies that recklessly expose sensitive information.

A shocking study revealed how much private data flows into generative AI tools. When businesses use AI for tasks like customer service, nearly half of all prompts inadvertently include personal details. Security credentials appear in 13% of AI interactions, creating easy targets for hackers.
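
The study's methodology isn't detailed here, but figures like these are typically produced by scanning prompt logs against pattern libraries for personal data and credentials. A hedged sketch of such a scanner follows; the regexes are illustrative and nowhere near exhaustive:

```python
import re

# Illustrative patterns only; real scanners use far broader rule sets.
PATTERNS = {
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key":   re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "api_token": re.compile(r"\b(?:api|secret)[_-]?key\s*[:=]\s*\S+", re.I),
}

def findings(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in one prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompts = [
    "Refund order 1182 for jane.doe@example.com",
    "Deploy with api_key = sk-live-abc123",
    "Summarize this quarter's roadmap",
]
flagged = [p for p in prompts if findings(p)]
print(f"{len(flagged)}/{len(prompts)} prompts contain sensitive data")
```

Commercial data-loss-prevention tools layer hundreds of detectors and ML classifiers on top of this basic pattern-matching loop.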

AI-powered cyberattacks are growing smarter. Bad actors now use machine learning to scan networks for vulnerabilities up to 100 times faster than manual probing allows. Security teams must upgrade their defenses with AI detection systems that evolve alongside these new threats.

Companies like Darktrace and Palo Alto Networks are promoting built-in data protection for AI workflows. Their solutions automatically remove sensitive info from training data and monitor AI outputs for leaks. This "sanitize first" approach helps prevent accidental exposure of secrets.
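
Neither vendor's implementation is public in this report, but a "sanitize first" wrapper generally redacts sensitive spans before text reaches a model, a log, or a training corpus, and applies the same scrubbing to outputs. A minimal sketch, where the redaction rules and the ask_model helper are hypothetical rather than Darktrace's or Palo Alto Networks' actual API:

```python
import re

# Hypothetical, intentionally minimal redaction rules.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       "[SSN]"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"),        "[AWS_KEY]"),
]

def sanitize(text: str) -> str:
    """Replace sensitive spans with placeholder tokens before the text
    is logged, sent to a model, or added to a training corpus."""
    for rx, placeholder in REDACTIONS:
        text = rx.sub(placeholder, text)
    return text

def ask_model(prompt: str) -> str:
    """Hypothetical model call: sanitize inputs on the way in, and run
    the same scrubbing over outputs to catch leaks on the way out."""
    clean_prompt = sanitize(prompt)
    response = f"(model response to: {clean_prompt})"  # stand-in for a real API call
    return sanitize(response)

print(ask_model("Contact jane.doe@example.com, key AKIAABCDEFGHIJKLMNOP"))
# -> (model response to: Contact [EMAIL], key [AWS_KEY])
```

The design point is that redaction happens at a single choke point in the pipeline, so no downstream component ever sees the raw secrets.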

Global experts agree that regulation must keep pace with AI advances. The EU's upcoming AI Act and US state laws show a shift toward requiring clear documentation of data practices. Businesses worldwide will need to audit their AI systems and prove they protect user privacy.

Weekly Highlights