The United States Congress made waves this week with a proposed 10-year moratorium on new state and local AI regulations. Tucked into a large budget bill, the measure would bar states and localities from enacting AI-specific laws until 2035. Supporters argue it would create consistent national standards; critics warn it could leave gaps in AI consumer protections. Legal experts predict court challenges if the bill passes, since states like Massachusetts have already begun applying existing privacy laws to AI systems.

In parallel, the Federal Trade Commission (FTC) updated its children's online privacy rules to explicitly cover AI training. Companies must now obtain verified parental consent before using data from users under 13 to develop AI tools. The FTC emphasized that biometric data (such as voice or face scans) used in AI systems poses special risks for tracking children. It also reminded companies to delete old data promptly, noting that over-retention fines now reach $53,000 per violation.
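The retention duty behind those fines reduces to a simple check: compare each record's collection timestamp against a retention limit and flag anything overdue. Here is a minimal sketch; the one-year limit, the record field names, and the `records_due_for_deletion` helper are illustrative assumptions, not anything the FTC prescribes:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: records older than the limit are flagged for deletion.
RETENTION_LIMIT = timedelta(days=365)

def records_due_for_deletion(records, now=None):
    """Return records whose collection timestamp exceeds the retention limit."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] > RETENTION_LIMIT]

records = [
    {"user_id": "a1", "collected_at": datetime(2023, 1, 5, tzinfo=timezone.utc)},
    {"user_id": "b2", "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
]
expired = records_due_for_deletion(records)
print([r["user_id"] for r in expired])  # → ['a1']
```

A real deployment would run a check like this on a schedule and delete (not just flag) the expired records, keeping an audit log of what was purged and when.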

A global coalition of cybersecurity agencies led by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) identified data flaws as the Achilles' heel of AI systems. Their 23-page guide stresses that contaminated training data can lead to dangerous AI errors in fields like healthcare and transportation. The report recommends encrypting all AI-related data streams and strictly limiting access during development. It also warns that hackers are increasingly targeting AI training pipelines to manipulate outcomes.
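One concrete defense against tampering in a training pipeline, consistent with (though not prescribed by) the report's advice, is to checksum every data shard and verify the manifest before each training run. A minimal sketch: `build_manifest` and `verify_manifest` are hypothetical names, and a production pipeline would also sign the manifest so an attacker cannot rewrite it along with the data:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large training shards fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Record a checksum for every file in the training data directory."""
    return {p.name: sha256_of(p) for p in sorted(data_dir.iterdir()) if p.is_file()}

def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Return names of manifest files that are missing or whose contents changed."""
    current = build_manifest(data_dir)
    return [name for name in manifest if current.get(name) != manifest[name]]
```

Verifying the manifest at the start of every training job turns silent data poisoning into a loud, pre-training failure.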

European data regulators published final guidance on international data transfers involving AI systems. The guidelines help companies legally share information across borders while complying with the EU's strict privacy laws. A new training program teaches AI developers how to conduct risk assessments for data transfers, particularly when working with cloud providers in different countries.

India's Digital Personal Data Protection Act (DPDPA) enters into force next month with strict rules for AI companies. The law requires immediate breach reporting and limits how long AI systems can retain personal data. Foreign tech firms must appoint local representatives and follow data localization rules when processing Indian users' information through AI tools. Meanwhile, U.S. states like Montana and Iowa began enforcing updated privacy laws that affect AI developers working with sensitive data such as location history and online identifiers.

These global developments highlight the growing tension between AI innovation and privacy rights. While the U.S. debates freezing state-level AI rules, other regions are racing to implement guardrails. The FTC's new consent requirements for children's data in AI and India's stringent new enforcement regime show regulators taking a hard line on irresponsible data practices. Cybersecurity experts urge companies to treat data security as critical infrastructure for trustworthy AI, not just a compliance checkbox. As AI becomes central to more products, these privacy frameworks will shape how companies design, train, and deploy intelligent systems worldwide.

Weekly Highlights