Ethics & Safety Weekly AI News

April 21 - April 29, 2025

In the United States, Nevada’s Department of Education shared a major update for schools using AI agents. The state’s new guide, “Nevada’s STELLAR Pathway,” gives teachers tips on using AI tools like chatbots and learning apps in classrooms. The guidance focuses on three big ideas: keeping student data private, making sure AI tools work fairly for all kids, and letting teachers stay in control of lessons. The state worked with parents, students, and tech experts for months to create these rules.

Dr. Steve Canavero, Nevada’s Interim Superintendent, said the goal is to prepare students for a future full of technology while keeping schools safe and fair. The guide also warns about AI bias—like apps that might accidentally favor some students over others—and asks schools to test AI tools before using them.

Around the world, people are still talking about how to manage military AI, such as drones and robots used in wars. Earlier in April, a global report stressed that armies need clear ethical rules to prevent AI mistakes that could harm civilians. Though this wasn’t news from this week, it shows how groups as different as schools and armies are racing to set AI safety standards.

In other news, universities and companies keep pushing for global AI ethics. For example, a March conference at Carnegie Mellon University brought together 500 experts to discuss generative AI risks, such as fake news and stolen art. Though that event is older, such talks remind us that AI governance is a hot topic everywhere, not just in schools.

Overall, this week proved that AI ethics isn’t just for scientists—it’s something teachers, parents, and leaders worldwide need to tackle together. Nevada’s school rules are a small but important piece of this bigger puzzle.

Weekly Highlights