Ethics & Safety Weekly AI News

January 5 - January 13, 2026

# This Week in AI Ethics and Safety

## New York Takes Lead on AI Safety Regulations

New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) on December 19, 2025, making New York a leader in AI safety regulation. The RAISE Act requires large AI developers to publish detailed information about their safety protocols and to report safety incidents to New York State within 72 hours of discovering them. The law focuses on "frontier" AI models, the most advanced artificial intelligence systems currently in development. It builds on California's Transparency in Frontier Artificial Intelligence Act, signed into law in September 2025, creating what Governor Hochul called "a unified benchmark among the country's leading tech states."

The new law represents a shift toward holding big technology companies accountable for the safety of their AI systems. Companies developing advanced AI should begin assessing whether their models fall under the law's requirements: reviewing their existing AI safety documentation, evaluating their ability to identify and report qualifying incidents within the 72-hour window, and monitoring how the law is implemented. The RAISE Act was signed alongside companion bills that add requirements for transparency about training data, developer disclosures, and impact assessments for systems that affect workers and consumers.
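For compliance teams operationalizing that deadline, the arithmetic is simple but worth encoding. The Python sketch below assumes a workflow that timestamps incident discovery in UTC; the 72-hour figure comes from the RAISE Act as described above, while the function names and structure are purely illustrative, not anything the statute prescribes.

```python
from datetime import datetime, timedelta, timezone

# RAISE Act reporting window: 72 hours from discovery of a qualifying incident.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest moment a qualifying safety incident may be reported."""
    return discovered_at + REPORTING_WINDOW

def hours_remaining(discovered_at: datetime, now: datetime) -> float:
    """Hours left before the reporting deadline lapses (negative if missed)."""
    return (reporting_deadline(discovered_at) - now).total_seconds() / 3600

# Example: an incident discovered January 7, 2026 at 09:00 UTC
# must be reported by 09:00 UTC on January 10.
discovered = datetime(2026, 1, 7, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(discovered))  # 2026-01-10 09:00:00+00:00
print(hours_remaining(discovered, datetime(2026, 1, 8, 9, 0, tzinfo=timezone.utc)))  # 48.0
```

Anchoring timestamps in UTC sidesteps daylight-saving ambiguity about when a 72-hour clock actually expires.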

## Federal Government Pushes Back on State AI Laws

While states like New York and California move forward with AI regulations, the Trump Administration is taking a different approach. President Trump issued an Executive Order establishing a federal policy aimed at reducing what it characterizes as "excessive" and "burdensome" state AI regulation. The order calls for a "minimally burdensome national standard" intended to replace a patchwork of state rules that, the administration argues, impedes innovation and harms U.S. competitiveness in the global AI race.

The Executive Order directs the U.S. Attorney General to form an AI litigation task force to challenge state AI laws deemed unconstitutional or preempted by federal authority. It also instructs the Department of Commerce to evaluate state AI laws within 90 days to identify targets for legal challenge. The Federal Communications Commission (FCC) is ordered to consider adopting federal AI reporting and disclosure standards that would preempt conflicting state laws. However, the order specifies narrow areas where it seeks to avoid preemption, such as children's online safety and state procurement practices.

This conflict between federal and state authorities has already sparked opposition. Senator Edward Markey introduced the "States' Right to Regulate AI Act" to block the Executive Order's implementation, labeling the order "lawless." Additionally, 23 state attorneys general wrote a letter urging the FCC to "stand down," arguing that the agency lacks the legal authority to preempt broad state AI oversight and criticizing the effort as vague. The attorneys general emphasized states' interest in addressing deepfakes, scams, and consumer-protection harms that fall outside telecommunications.

## State Attorneys General Push Companies for Stronger AI Safeguards

Forty-two state attorneys general issued a letter to major technology companies, including Google, Meta, and Microsoft, warning about dangers from generative AI systems. The letter highlights that generative AI chatbots have been linked to at least six deaths in the United States, along with other serious incidents involving domestic violence, poisoning, and hospitalizations for psychosis. The AGs are particularly concerned about sycophantic and delusional outputs: situations where AI systems tell users what they want to hear or generate false information, sometimes with serious consequences.

The letter urges technology companies to adopt additional safeguards, with a special focus on protecting children. Specifically, the state leaders ask companies to:

1. maintain clear policies and procedures about sycophantic and delusional outputs, including staff training;
2. provide clear and permanent warnings about potentially harmful AI outputs that users may encounter;
3. prohibit harmful outputs for child-related accounts; and
4. subject models to independent third-party audits that state and federal regulators can review.

This coordinated action by state attorneys general represents a significant push for industry accountability.

## Major Enforcement Actions Against Companies Misusing AI

Regulators are moving beyond warnings and taking legal action against companies that misuse AI or fail to protect consumer data. The California Attorney General settled with Jam City, Inc., a mobile gaming company, over violations of California's consumer privacy law. Jam City allegedly collected personal information such as device identifiers, IP addresses, and user activity data, and shared it with third parties for advertising and analytics purposes without proper consent. The company failed to give consumers a way to opt out of data sharing across 21 gaming apps and its website. Under the settlement, Jam City must pay $1.4 million in civil penalties and provide in-app methods for users to opt out of data sharing. Importantly, the settlement requires Jam City to obtain affirmative opt-in consent from users aged 13 to 16 before selling or sharing their personal data.

Texas is taking action against Chinese television manufacturers for using hidden tracking technology. The Texas Attorney General sued Hisense and TCL, both based in China, for using automatic content recognition (ACR) technology to secretly capture what Texans watch on television. The attorney general warned that "Chinese ties pose serious concerns about consumer data harvesting and are exacerbated by China's National Security Law, which gives its government the capability to get its hands on U.S. consumer data." The lawsuits allege that ACR captures audio and visual data in "hundredths of milliseconds" to build a fingerprint of on-screen content, and that the companies falsely claimed the feature was designed to provide tailored viewing experiences.
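To make the allegation concrete: ACR-style systems generally sample short windows of the audio or video signal, reduce each window to a compact fingerprint, and match those fingerprints against a reference catalog of known content. The Python sketch below is a toy illustration of that pipeline only; it does not reflect Hisense's or TCL's actual implementations, and all names in it (`fingerprint`, `match`, the catalog) are hypothetical.

```python
import hashlib

def fingerprint(samples: list[int], window: int = 8) -> list[str]:
    """Toy ACR-style fingerprint: hash fixed-size windows of a signal.

    Real ACR systems extract robust perceptual features (e.g., spectral
    peaks) rather than exact hashes; this only sketches the pipeline.
    """
    prints = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = bytes(s % 256 for s in samples[start:start + window])
        prints.append(hashlib.sha256(chunk).hexdigest()[:12])
    return prints

def match(observed: list[str], reference: dict[str, list[str]]) -> str | None:
    """Return the catalog title whose fingerprint shares the most windows."""
    best, best_hits = None, 0
    for title, prints in reference.items():
        hits = len(set(observed) & set(prints))
        if hits > best_hits:
            best, best_hits = title, hits
    return best

# Example: identify what is on screen by matching against a known catalog.
catalog = {
    "show_a": fingerprint(list(range(64))),
    "show_b": fingerprint(list(range(100, 164))),
}
print(match(fingerprint(list(range(64))), catalog))  # show_a
```

The exact-hash simplification is for brevity: production systems must tolerate compression artifacts and volume changes, which is why they fingerprint perceptual features instead of raw bytes. The privacy concern in the lawsuits is the same either way: the matched title, not the raw signal, is what gets transmitted and logged.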

Arizona is also taking action against Chinese shopping platforms. The Arizona Attorney General announced a lawsuit against Temu, a Chinese online shopping app, over unauthorized data collection and privacy violations. According to the lawsuit, Temu is designed to harvest sensitive user data without users' knowledge or consent and to evade detection. The app allegedly collects far more sensitive information than an online shopping service needs, including users' precise physical location, access to the phone's microphone and camera, and private activity in other apps.

## Tech Companies and AI Safety: The Accountability Year

Across the industry, 2026 is shaping up to be the year of AI accountability, moving beyond discussions about safety to concrete enforcement and compliance requirements. Organizations across all industries, not just AI developers, must now focus on AI governance, because many companies use AI tools provided by vendors. Insurers are already asking about AI governance practices during policy renewals, and legal teams are preparing for AI-related lawsuits. Companies without documented AI governance may face higher insurance premiums or denied coverage.

Much of this pressure comes from new state requirements now taking effect. The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026, establishes a framework that bans certain harmful AI uses and requires disclosures when government agencies and healthcare providers use AI systems. The Utah Artificial Intelligence Policy Act requires businesses to clearly disclose when consumers interact with generative AI in regulated transactions, and it makes companies liable for deceptive or unlawful practices carried out through AI as if they were the companies' own acts. Experts recommend that organizations not wait for federal action and instead build compliance programs around the strictest state standards currently in force.

Governance experts emphasize that establishing AI governance frameworks now costs far less than dealing with incidents or regulatory audits later. Well-governed companies gain competitive advantages including lower insurance costs, smoother vendor onboarding, and stronger standing in merger and investment reviews. As regulations evolve across the United States, companies that prioritize transparency, safety, and accountability in their AI systems will be best positioned for success.
