A new company called Confident Security came out of stealth mode with $4.2 million to build special locks for AI data. Their technology uses end-to-end encryption to stop AI companies from saving your questions or personal details. This helps companies follow the European Union's strict rules, called the AI Act, which protect people's safety and privacy. In the United States, President Trump signed the One Big Beautiful Bill Act on July 4, 2025. The new law gives money to American AI companies but sets strong rules about working with foreign countries, especially China. It also requires AI companies to use more materials made in the U.S.

Security experts warned everyone to think twice before letting AI helpers access personal information. When you let an AI book a restaurant table, it might ask for your calendar, contacts, and credit card, which is like handing over a snapshot of your private life. Signal app president Meredith Whittaker said using these AI helpers is like "putting your brain in a jar" because you lose control of your secrets. Perplexity AI says it stores data on your device, but you still agree to let the company use your information to improve its models.

McDonald's had a security problem with its AI hiring chatbot, McHire. Security researchers found weak default passwords in the system (including "123456") that let them view five U.S. job applicants' private details, such as names and phone numbers. The chatbot was made by a company called Paradox.ai, which fixed the problem within hours. McDonald's said it was disappointed and made the company solve the issue immediately. The incident shows why companies must protect AI tools with strong, unique passwords.

AI is becoming a double-edged sword for cybersecurity. On the good side, autonomous response systems can stop cyber attacks instantly instead of waiting for humans. For example, they can block ransomware attacks right away, saving companies an average of $2.22 million. Another helpful tool is federated learning, where AI learns from data without moving private information from its original location. On the bad side, 1 in 4 security bosses reported AI-powered attacks on their companies this year. These attacks are hard to spot because they mimic human behavior. Security leaders now worry more about AI risks than other dangers.
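To make the federated learning idea concrete, here is a minimal sketch of federated averaging in plain NumPy. The three clients, their data, and the tiny linear model are hypothetical placeholders, and real systems use frameworks such as Flower or TensorFlow Federated, but the core idea is the same: only model weights travel between clients and server, never the raw data.

```python
# A minimal federated-averaging (FedAvg) sketch. All names and data here
# are invented for illustration; this is not any vendor's implementation.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a tiny linear model on ONE client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client's raw data stays on that client; it is never uploaded.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Server sends the global model; clients train locally on private data.
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    # Server aggregates by averaging weights (equal-sized clients assumed).
    global_w = np.mean(local_weights, axis=0)

print(global_w)  # approaches true_w although no raw data left any client
```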

Companies are being told to create clear rules for using AI tools safely. About 25% of American workers use AI weekly, with over 10% using it daily for tasks like writing or data analysis. Lawyer Andrew Tibbetts says every company needs policies about what information can be shared with which AI tools to avoid leaking secrets. Almost 70% of companies already use AI helpers, with 23% planning to start next year. Security bosses want simple "allow-by-default" controls to safely manage these tools.
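As a rough illustration of the kind of policy Tibbetts describes, here is a hedged sketch of an allowlist check that pairs approved AI tools with the data classifications they may receive. The tool names, data categories, and policy table are invented for this example and do not reflect any specific vendor's controls.

```python
# A hypothetical allowlist policy: which AI tools may receive which data.
# Tool names and data categories are placeholders for illustration only.
APPROVED_TOOLS = {
    "internal-copilot": {"public", "internal"},  # vetted, broader access
    "public-chatbot": {"public"},                # no internal data allowed
}

def may_share(tool: str, data_classification: str) -> bool:
    """Only explicitly approved tool/data pairs pass the check."""
    allowed = APPROVED_TOOLS.get(tool)
    return allowed is not None and data_classification in allowed

print(may_share("public-chatbot", "public"))    # True
print(may_share("public-chatbot", "internal"))  # False: data too sensitive
print(may_share("unknown-tool", "public"))      # False: tool not approved
```

A check like this keeps new, unreviewed AI helpers from receiving company data until someone explicitly adds them to the policy table.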

New protections are being developed for future threats. AI is helping create quantum-resistant cryptography to prepare for super-powerful quantum computers that could break today's digital locks. In California, a $1.55 million settlement against Healthline Media showed that states are strictly enforcing privacy rules, especially for medical information. Lawyer Heather Egan called the settlement a "wake-up call" for all websites to protect consumer data properly.
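For readers curious what quantum-resistant cryptography looks like in practice, below is a minimal key-exchange sketch using the open-source liboqs-python bindings. It assumes the liboqs C library and its Python wrapper are installed, and the available algorithm names depend on the liboqs build ("Kyber512" on older builds, the NIST-standardized "ML-KEM-768" on newer ones).

```python
# A minimal post-quantum key-encapsulation sketch with liboqs-python.
# Assumes `pip install liboqs-python` plus the liboqs C library; the
# algorithm name below may need adjusting for your liboqs version.
import oqs

kem_alg = "Kyber512"

with oqs.KeyEncapsulation(kem_alg) as client:
    with oqs.KeyEncapsulation(kem_alg) as server:
        # Client generates a key pair and publishes the public key.
        public_key = client.generate_keypair()
        # Server encapsulates a shared secret against that public key.
        ciphertext, shared_secret_server = server.encap_secret(public_key)
        # Client decapsulates the ciphertext to recover the same secret.
        shared_secret_client = client.decap_secret(ciphertext)

# Both sides now hold the same secret, derived from a lattice-based
# scheme believed to resist attacks by quantum computers.
assert shared_secret_client == shared_secret_server
```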

Weekly Highlights