AI Systems Getting More Regulation Worldwide

This week brought important changes to how artificial intelligence is regulated around the globe. In the United States, the government is paying close attention to how companies use AI to make decisions about customers. The European Union is also tightening rules on AI systems, with new transparency requirements taking full effect on August 2, 2026. Australia is requiring companies to explain more clearly how they use automated decision-making systems, starting December 10, 2026. These changes show that governments worldwide share the same concerns about people's privacy and about fairness when AI systems make decisions. Companies that sell products or services in these regions must now prepare for these new rules or face penalties.

New Laws in Multiple US States

The United States is adding more privacy protection laws this year. Three new states—Indiana, Kentucky, and Rhode Island—now have comprehensive privacy laws that took effect in 2026. This brings the total number of US states with privacy laws to more than 20. Connecticut is making its privacy rules even stronger on July 1, 2026, with new requirements about what personal information companies can collect and keep. These states are requiring companies to collect only the information they truly need and to get special permission before selling sensitive personal information. California, Colorado, and Maryland are also updating their privacy rules to give people more control over their data.

AI Being Used in Dangerous Hacking Attacks

Hackers are becoming more skilled at using artificial intelligence to attack companies and steal information. Phishing attacks—where criminals send fake emails to trick people into revealing passwords or personal details—have become the number one way hackers get into company systems, accounting for 16% of all data breaches. What makes these attacks more dangerous in 2026 is that criminals are using AI to write personalized fake emails that copy the writing style of company leaders or reference recent company events. These AI-powered phishing messages look so real that employees often cannot tell them apart from genuine company communications. To defend against such sophisticated attacks, companies need to train their workers better and deploy stronger security systems that can recognize them.

Privacy Experts Send Clear Message: Real Protection Required

This week, the International Association of Privacy Professionals held an important conference where privacy experts and government regulators shared a consistent message: having good privacy policies written down is not enough. Companies must actually put these policies into practice and protect real people's information every single day. Regulators explained that companies need to focus on basic privacy principles like data minimization (collecting only necessary information), transparency (telling people what data is collected), and storage limitation (not keeping data longer than needed). Enforcement agencies are becoming more serious about checking whether companies are truly following these principles or just pretending to follow them on paper. The regulators made clear that companies will face stricter enforcement, which could mean bigger fines and legal consequences.
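As a simple illustration of the storage-limitation principle regulators emphasized, here is a minimal sketch of a retention check that flags records held longer than allowed. The record layout, categories, and retention periods are illustrative assumptions, not taken from any specific law or company policy:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention periods per data category (assumed values,
# not drawn from any actual regulation).
RETENTION = {
    "marketing_email": timedelta(days=365),
    "support_ticket": timedelta(days=730),
}

def records_to_delete(records, now=None):
    """Return records held longer than their category's retention period."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        if limit is not None and now - rec["collected_at"] > limit:
            expired.append(rec)
    return expired
```

A scheduled job running a check like this is one way to show regulators that storage limitation is practiced every day, not just written into a policy document.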

Special Rules for Protecting Children's Information

Government agencies have updated the rules for protecting children's privacy, and companies have until April 22, 2026 to comply with these new requirements. The updated COPPA rule (which stands for Children's Online Privacy Protection Act) has new requirements about how companies collect, use, and share information from children. This means that companies creating products or services for children must be extra careful to protect their information and get proper permission from parents. Companies that fail to follow these children's privacy rules can face very large fines and penalties.
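One practical building block for this kind of compliance is an age gate that blocks data collection from children until verifiable parental consent is on file. The sketch below is a hypothetical illustration: the age threshold of 13 reflects COPPA's general definition of a child, but the field names and user structure are assumptions for this example only:

```python
# Hypothetical COPPA-style age gate: users under 13 are treated as
# children, and data collection is blocked until verifiable parental
# consent has been recorded. Field names are illustrative assumptions.
COPPA_AGE_THRESHOLD = 13

def may_collect_data(user):
    """Allow collection for adults, or for children with recorded consent."""
    if user["age"] >= COPPA_AGE_THRESHOLD:
        return True
    return bool(user.get("parental_consent_verified"))
```

In a real service, the consent flag would come from one of the verifiable consent methods the rule permits, and the check would run before any personal information is stored.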

Multiple Data Breaches Continue This Week

Even as privacy rules are getting stricter, companies continue to experience data breaches where hackers steal personal information. Several major companies experienced breaches this week, including Booking.com (a travel reservation website), Basic-Fit (a large gym chain in Europe), and McGraw-Hill (an education company). These breaches exposed millions of people's personal information including names, email addresses, phone numbers, and in some cases, bank account details. Fashion retailer Express accidentally left customer information exposed on the internet because of a simple security mistake. These real-world examples show why companies must take privacy protection seriously.

What Companies Must Do Now

Experts recommend that companies take several important steps immediately to protect their customers' information. First, companies should update their privacy policies to explain their new obligations under the latest laws and regulations. Second, companies need to maintain clear inventories of what personal data they collect and how they use it, especially when using AI or automated systems. Third, companies must test their systems to make sure customers can actually opt out of data sharing when they ask to. Fourth, companies should strengthen their oversight of any third-party technology providers they use, since many breaches happen through outside vendors. Finally, companies must create formal processes to check for risks and make sure they are genuinely protecting people's information rather than just following rules on paper. These steps are becoming urgent because government enforcement is accelerating and penalties are expected to increase significantly.
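To make the data-inventory step concrete, here is a minimal sketch of what one inventory entry might look like, along with a helper that flags entries deserving extra review because they feed automated decisions or flow to outside vendors. All field names and categories here are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class DataInventoryEntry:
    """One row in a hypothetical personal-data inventory."""
    data_element: str                  # e.g. "email address"
    purpose: str                       # why the data is collected
    uses_automated_decisions: bool     # fed into AI/automated systems?
    third_party_processors: list = field(default_factory=list)

def entries_needing_review(inventory):
    """Flag entries that feed automated decisions or leave the company."""
    return [e for e in inventory
            if e.uses_automated_decisions or e.third_party_processors]
```

Even a spreadsheet can serve the same purpose; the point is that the inventory exists, stays current, and makes AI use and vendor sharing visible at a glance.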
