Ethics & Safety Weekly AI News
December 29 - January 6, 2026

New York Takes Lead on Advanced AI Safety
New York has enacted a historic law called the RAISE Act that specifically targets the most powerful AI systems. The law focuses on frontier models, the largest and most advanced artificial intelligence systems, which cost over $100 million to build. Companies creating these systems must now document their safety plans, assess whether their AI could cause serious harm, and report safety incidents to New York's Attorney General within 72 hours. The law takes effect on January 1, 2027. It matters because it is one of the first state-level rules aimed specifically at the most advanced AI systems rather than at all AI tools.
Different Approaches to AI Rules Create New Debate
At the same time, President Trump signed an executive order in December 2025 that seeks to replace the patchwork of different state AI rules with a single national rule. The order says that AI companies must be free to innovate without heavy regulation, but this creates a conflict. States like Colorado had passed their own AI fairness laws to protect people from algorithmic discrimination, which happens when AI systems treat people unfairly based on their race or income. Experts worry that a single national rule might remove protections that communities created locally, and some believe states should keep the power to enforce child safety laws and protect their own communities.
Blockchain Technology Tested for AI Transparency
Blockchain is emerging as a tool to make AI more transparent and trustworthy. In 2025, several companies tested blockchain systems to track where AI models come from and how they were built. The European Union discussed using blockchain to keep tamper-resistant records of companies' compliance with AI rules. This technology could help prove that AI systems are being used safely and that their training data came from ethical sources. However, blockchain alone does not yet solve AI's transparency problems. A simplified sketch of the underlying idea appears below.
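To make the provenance idea concrete, here is a minimal Python sketch of the kind of append-only record-keeping these pilots rely on. It is an illustration under stated assumptions, not any company's actual system: it uses a simple local hash chain rather than a real distributed ledger, and the names (ProvenanceLedger, record_artifact) are hypothetical. Each record commits to the hash of the previous record, so altering any past entry breaks the chain and is detectable.

```python
# Minimal sketch of blockchain-style provenance tracking for AI artifacts.
# Illustration only: a local hash chain, not a real distributed ledger.
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceLedger:
    """Append-only chain of records; each entry commits to the previous
    entry's hash, so later tampering with any record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record_artifact(self, name: str, artifact: bytes, metadata: dict) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "name": name,
            "artifact_hash": sha256_hex(artifact),  # fingerprint of model/data
            "metadata": metadata,                   # e.g. training-data source
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the entry itself (including prev_hash) to link the chain.
        entry["entry_hash"] = sha256_hex(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; returns False if any record was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev:
                return False
            if sha256_hex(json.dumps(body, sort_keys=True).encode()) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True


if __name__ == "__main__":
    ledger = ProvenanceLedger()
    ledger.record_artifact("model-v1", b"fake model weights",
                           {"training_data": "licensed corpus A"})
    ledger.record_artifact("model-v2", b"updated weights",
                           {"training_data": "licensed corpus A+B"})
    print("chain valid:", ledger.verify())  # prints: chain valid: True
```

The design choice that matters here is the chained hashing: transparency comes not from storing the model itself on a ledger, but from publishing fingerprints that anyone can later verify against the artifact they received.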
AI Automation Cuts HR Jobs Deeply
One of the biggest surprises in 2025 was how agentic AI, automated systems that make decisions without human help, eliminated HR jobs across major companies. Reports showed that Fortune 500 companies using AI screening tools and automated reviews cut their HR departments by 25 to 40 percent. HR became the new target for AI automation, just as customer service was before it. This shows how AI agents that can handle recruiting, performance reviews, and compliance checks are reshaping the job market.
Child Safety Concerns Rise
One of the biggest safety problems of the period emerged when AI systems generated inappropriate images of children. This shows why safety rules matter: even the companies building AI need clear limits on what their systems can create. Experts are calling for stronger protections, especially as AI image generation becomes more powerful.
Moving Toward Transparent, Ethical AI
Experts and religious leaders argue that innovation and safety don't have to conflict. They say transparent AI systems are essential for building public trust, and they point to OpenAI's safety leadership team as an example of how companies can prioritize risk management. The debate this week showed that real progress requires asking not just "How fast can we move?" but "What kind of society are we building?" Going forward, accountability and human oversight matter as much as innovation speed.