Ethics & Safety Weekly AI News
January 5 - January 13, 2026

This weekly update covers major developments in AI ethics and safety regulation across the United States. New York became one of the first states to enact strong AI safety laws, while federal actions aim to reshape how AI is governed nationwide. Regulators are cracking down on companies that misuse AI or fail to protect consumer data. States are also taking action against social media platforms and shopping apps that use AI in potentially harmful ways. These changes mean that companies using AI must now focus more on safety, transparency, and accountability.
New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law. It requires large AI developers to publish information about their safety practices and to report serious safety incidents to the state within 72 hours, making New York's approach similar to the AI safety law California enacted in September 2025. Meanwhile, President Trump issued an executive order to create a national AI policy that limits what states can regulate about AI. The order directs federal agencies to challenge state AI laws they consider too strict.
Regulators are also punishing companies that break the rules. The California Attorney General settled a case with Jam City, a mobile gaming company, for $1.4 million because it illegally collected and shared personal information from millions of users. Texas sued Chinese companies Hisense and TCL for using hidden technology to track what Americans watch on TV without permission. Arizona sued Temu, a Chinese shopping app, for secretly harvesting user data, including location, microphone, and camera access. Together, these actions show that protecting consumer privacy has become a top priority for state enforcers.