Ethics & Safety Weekly AI News
September 1 - September 9, 2025

This weekly update covers major developments in AI ethics and safety from around the world. Scientists, government leaders, and safety experts are all working to make sure artificial intelligence helps people safely.
In Europe, the European Union Aviation Safety Agency (EASA) released important research on September 4th about AI in aviation. The agency surveyed aviation workers across Europe about their attitudes toward using AI in flying, and the results showed mixed feelings. On average, workers rated their acceptance of AI at 4.4 out of 7, meaning they are somewhat open to AI but still worried.
The biggest concerns among aviation workers were AI's performance limits, data privacy, and who is responsible if something goes wrong. About two-thirds of respondents said they would reject at least one type of AI use in aviation. They especially worried that humans might lose important skills if AI takes over too many tasks, and they strongly want aviation authorities to set strict rules for AI use.
In the United States, North Carolina became a leader in AI governance when Governor Cooper signed Executive Order No. 24 on September 2nd. The order creates an "AI Accelerator" team to oversee how the state uses artificial intelligence. The team will make sure AI is fair, accountable, and protects people's rights. It will also create standard definitions for AI and check all AI projects for risks before they start.
The North Carolina order focuses on trustworthy AI that helps all citizens equally. The state will work with colleges, companies, and nonprofit groups to test AI safely. They will also protect sensitive information and make sure people know when AI is being used to make decisions about their lives.
A major investment in multi-agent AI safety came from Wake Forest University, where Professor Sarra Alqahtani received a $598,609 grant from the National Science Foundation. She will spend five years creating the first safety standards for collaborative AI systems: groups of AI agents that work together, like multiple robots coordinating a rescue mission or AI systems jointly managing a patient's pacemaker and insulin pump.
Currently, there are no established safety standards for these multi-agent systems. Professor Alqahtani's research has shown that AI teams are vulnerable: if one agent is hacked, it can compromise the whole system. Her work will create algorithms that help AI agents collaborate safely even if one of them fails or comes under attack.
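To see why a single compromised agent matters, here is a minimal sketch of one common fault-tolerance idea: agents cross-check each other's proposed actions and follow the majority, so one hacked agent cannot steer the team on its own. This is an illustrative assumption, not Professor Alqahtani's actual algorithm, and all agent names and actions below are hypothetical.

```python
# Minimal sketch of majority-vote coordination among agents.
# Hypothetical illustration only, not the NSF-funded algorithm itself.
from collections import Counter

def coordinate(proposals: dict[str, str]) -> str:
    """Each agent proposes an action; the team follows the majority.

    With 2f+1 agents, up to f compromised or failed agents cannot
    force a bad action on their own.
    """
    counts = Counter(proposals.values())
    action, votes = counts.most_common(1)[0]
    if votes <= len(proposals) // 2:
        raise RuntimeError("no majority -- fall back to a safe default")
    return action

# Three rescue robots agree on a plan; one hacked robot is outvoted.
proposals = {
    "robot_1": "search_north_sector",
    "robot_2": "search_north_sector",
    "robot_3": "search_north_sector",
    "robot_4_hacked": "abort_mission",
}
print(coordinate(proposals))  # -> search_north_sector
```

Real collaborative systems face harder problems than this sketch suggests, such as agents that lie differently to different teammates, which is part of why dedicated safety standards are needed.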
The most alarming predictions this week came from Dr. Roman Yampolskiy, a University of Louisville professor who coined the term "AI safety." On a popular podcast, he warned that artificial intelligence could become humanity's "last invention." Unlike tools such as fire or the wheel, which need human operators, AI can invent new things on its own.
Yampolskiy predicts that artificial general intelligence will arrive by 2027, and by 2030, it could cause 99% unemployment. He believes AI will first automate all computer-based jobs, then humanoid robots will take over physical work within five years. Even jobs once thought safe, like coding and prompt engineering, won't survive because "AI is way better at designing prompts for other AIs than any human."
The professor argues that this job loss will create problems beyond lost income. People get structure, status, and community from work, so if jobs disappear, society will need to create new ways to provide meaning in people's lives. This might include universal basic income, civic service programs, and new ways to earn recognition and build community.
Other experts disagree with Yampolskiy's timeline. Geoffrey Hinton, often called the "Godfather of AI," thinks manual jobs like plumbing will stay safe longer. Adam Dorr of the think tank RethinkX predicts mass job loss by 2045, not 2030. These differing forecasts show that even experts aren't sure how fast AI will change the job market.
These developments highlight the urgent need for proactive AI governance. From aviation safety in Europe to state-level policies in North Carolina, leaders are trying to create rules before AI becomes too advanced to control safely. The focus on multi-agent AI safety is especially important as these systems become more common in critical areas like healthcare and emergency response.