U.S. State Laws Target AI Transparency and Fairness
California's new AI Transparency Act requires clear labels on AI-generated content such as images and videos; for example, AI-made TikTok filters must embed hidden data identifying them as artificial. Companies that break these rules face fines of up to $10,000 per violation. Colorado updated its AI Risk Management Framework, requiring businesses that use AI for hiring or lending decisions to submit yearly bias reports; a local bank that uses AI to approve loans must now prove its system does not discriminate against minority groups. Texas proposed the Responsible AI Governance Act, which bans AI from creating social scores but allows humans to do the same; critics say this loophole makes the law unfair.
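To illustrate the kind of metric a yearly bias report might contain, the sketch below computes a disparate-impact ratio using the "four-fifths rule," a common U.S. benchmark for adverse impact. The function names and the applicant counts are hypothetical; this is a minimal illustration, not a description of any bank's actual audit.

```python
def selection_rate(approved: int, total: int) -> float:
    """Fraction of applicants in a group whose loans were approved."""
    return approved / total

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's approval rate to the reference
    group's. Under the four-fifths rule, a ratio below 0.8 is often
    treated as evidence of adverse impact."""
    return rate_protected / rate_reference

# Hypothetical audit numbers: 120 of 200 protected-group applicants
# approved, versus 300 of 400 reference-group applicants.
protected = selection_rate(120, 200)   # 0.60
reference = selection_rate(300, 400)   # 0.75
ratio = disparate_impact_ratio(protected, reference)
print(round(ratio, 2))  # 0.8, right at the four-fifths threshold
```

A real compliance report would compute this across many demographic slices and time windows, but the core arithmetic is this simple ratio.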

Global AI Safety Efforts
The European Union began enforcing its AI Act with strict checks on high-risk systems. A German car factory using AI to inspect parts must now document safety tests and let workers challenge automated decisions. The EU also launched a $500 million fund to improve tools that explain how AI decisions work. In a surprise move, the U.S. and China agreed to create a shared AI emergency hotline to prevent accidents involving advanced systems such as military drones or hospital robots.

Corporate Compliance Challenges
Many companies are scrambling to comply with the new rules. One survey found that 60% of U.S. firms lack proper AI audit systems, leaving them at risk of fines under laws like Colorado's. Startups such as Credo AI sell "compliance checkers" that scan AI code for bias and privacy issues, at costs of $50,000 or more per year. Lawyers warn of rising lawsuits, citing a case in which a New York hiring company was sued because its AI tool allegedly rejected older applicants; the court ruled that the company must share its training data, a first in U.S. law.

FCC Rules and Senate Bills
The Federal Communications Commission (FCC) delayed new consent-revocation rules for AI telemarketing calls until 2026 after pushback from banks and businesses. A proposed Senate bill (S.1025) would let the FCC collect fines directly from companies breaking AI call laws, which could speed up penalties for illegal robocalls.

Legal Liability for AI Agents
New guidelines clarify that AI agents (like chatbots making purchases) can form legally binding contracts under existing U.S. law. If an AI makes a mistake, such as buying the wrong item, the human user can sometimes cancel the transaction if the error is caught quickly.

AI in Legal Services
Law firms are using agentic AI to draft contracts and check compliance. These tools learn from past cases and flag risks automatically, but experts warn they still need human lawyers to review their output.

Weekly Highlights