Data Privacy & Security Weekly AI News
March 24 - April 1, 2025Microsoft unveiled Security Copilot agents, AI tools designed to tackle phishing attacks by automating threat detection and response. These agents can process over 30 billion phishing emails monthly, letting human experts focus on advanced threats. The company also added real-time protection in Microsoft Teams, blocking malicious links and tracking incidents in Defender. New features like browser data loss prevention in Edge aim to stop sensitive info from entering AI chatbots like ChatGPT.
In the EU, the European Data Protection Board (EDPB) cracked down on AI data scraping, ruling that even public data used in AI models must follow GDPR privacy laws. This means companies must prove their AI doesn’t expose personal details through inference or re-identification. For example, a model trained on public social media posts could still violate privacy if it predicts users’ addresses. The AI Act now requires stricter audits for high-risk AI systems, especially in healthcare and finance.
The U.S. saw mixed AI developments. OpenAI partnered with National Laboratories to enhance nuclear security through better threat detection. Meanwhile, Chinese rival DeepSeek triggered a tech stock sell-off by releasing an AI app rivaling GPT-4, highlighting global AI competition.
Compliance hurdles dominated enterprise concerns. A survey found 51% of developers rank security as their top challenge, followed by AI code reliability (45%) and data privacy (41%). Firms like Cloudera noted that overlapping rules (e.g., GDPR vs. AI Act) create confusion, delaying projects. One CISO warned that strict data controls for AI tools don’t match looser rules for legacy systems, creating unfair barriers.
To address shadow AI risks, Microsoft rolled out Entra internet access filters to block unauthorized AI apps, while Purview DLP in Edge prevents data leaks into tools like Gemini. Experts urged vendors to simplify compliance docs and focus on common regulations to speed up approvals.
Globally, the push for ethical AI intensified. The OWASP listed top AI risks (e.g., prompt injection), prompting Microsoft to add specialized Defender alerts by May 2025. Alibaba’s open-source Qwen2 model aimed to democratize AI agent development, but experts stressed the need for built-in safeguards.
In China, leaked documents revealed government AI censorship tools that auto-detect sensitive content, raising concerns about surveillance. Meanwhile, the U.S. and EU debated balancing innovation with controls, as 73% of tech leaders prioritized expanding AI despite risks.