Ethics & Safety Weekly AI News
April 7 - April 15, 2025

Global organizations took big steps to manage AI risks this week. The OECD's new reporting system asks companies to answer 7 key questions about how they handle AI safety, including checking for hidden biases and protecting user privacy. Over 150 tech firms promised to submit their first reports by April 15.
Kenya made history with its 2025-2030 AI plan. The country will create Africa's first AI safety institute and update its laws on computer crimes. Other East African nations plan to adopt Kenya's model for cross-border data rules.
The UN Security Council held special talks about AI’s role in wars. Greece showed how AI helped track illegal weapons shipments, but France warned about hackers using AI to spread false battle reports. A UN team demonstrated tools to spot AI-made fake videos during elections.
Military AI experts met in Geneva to discuss safety. The Dutch army shared its "human confirm" system, in which soldiers must approve every AI drone strike before it happens. New tools were also shown that can detect when AI weapons try to hide their real targets.
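The report gives no technical details of the Dutch system. Purely as an illustration, a human-in-the-loop gate like this can be sketched in a few lines of Python; every name below is hypothetical, and the core idea is simply that the AI may only recommend, while anything other than an explicit human confirmation fails safe:

```python
from dataclasses import dataclass

@dataclass
class StrikeRecommendation:
    target_id: str
    confidence: float  # the model's confidence in its target classification

def human_confirm_gate(rec: StrikeRecommendation) -> bool:
    """Block execution until a human operator explicitly approves.

    Hypothetical sketch: the AI only *recommends*; any input other
    than 'CONFIRM' aborts the action (fail-safe default).
    """
    print(f"AI recommendation: target={rec.target_id}, confidence={rec.confidence:.0%}")
    answer = input("Type CONFIRM to approve, anything else to abort: ")
    return answer.strip() == "CONFIRM"

recommendation = StrikeRecommendation(target_id="T-042", confidence=0.93)
if human_confirm_gate(recommendation):
    print("Action approved by a human operator.")
else:
    print("Action aborted; no autonomous execution.")
```

The design choice worth noticing is the fail-safe default: silence, a typo, or a timeout all count as "no", never as approval.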
Bioethicist Dr. David Resnik warned that AI-generated data could undermine scientific research. After finding that 1 in 5 scientists had accidentally used AI-fabricated numbers in recent studies, his team wants all synthetic data watermarked. Separately, China introduced face scan rules that require non-AI login options.
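The article does not describe any particular watermarking scheme. Real schemes embed signals in the data itself; as a minimal sketch of the underlying idea, one could simply attach a loud, tamper-evident provenance header to every synthetic dataset (all function and field names below are hypothetical):

```python
import hashlib
import json

def watermark_synthetic_dataset(records: list[dict], generator: str) -> dict:
    """Wrap a synthetic dataset in explicit provenance metadata.

    Hypothetical sketch: a machine-readable 'synthetic' flag plus a
    hash of the records, so later edits to the data are detectable.
    """
    payload = json.dumps(records, sort_keys=True).encode()
    return {
        "synthetic": True,                              # unmistakable flag
        "generator": generator,                         # which model made the data
        "sha256": hashlib.sha256(payload).hexdigest(),  # detects tampering
        "records": records,
    }

dataset = watermark_synthetic_dataset(
    [{"subject": 1, "measurement": 4.2}], generator="example-llm-v1"
)
assert dataset["synthetic"]  # downstream pipelines can refuse unflagged data
```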
The Schmidt Foundation offered $500,000 grants to study super-smart AI safety. Top projects will look at how to turn off AI systems that outthink humans. Google engineers revealed new methods to check AI "thought patterns" during medical diagnoses.
Civil engineers learned key safety lessons from IBM's $4 billion AI failure. New guidelines say engineers must double-check all AI building designs against traditional methods. A US policy now requires human engineers to sign off on every AI-generated bridge plan.
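The guidelines themselves are not quoted, but the cross-check idea is straightforward: accept an AI-produced figure only when it agrees with an independent classical calculation. A minimal sketch, with hypothetical numbers and a tolerance chosen purely for illustration:

```python
def cross_check(ai_value: float, classical_value: float, tolerance: float = 0.05) -> bool:
    """Accept an AI-produced figure only if it agrees with a classical
    hand calculation to within the given relative tolerance."""
    deviation = abs(ai_value - classical_value) / classical_value
    return deviation <= tolerance

# Hypothetical example: the AI predicts a beam capacity of 480 kN, the
# classical method gives 500 kN -> 4% deviation, within a 5% tolerance.
ai_capacity_kn = 480.0
classical_capacity_kn = 500.0
if not cross_check(ai_capacity_kn, classical_capacity_kn):
    raise ValueError("AI design deviates from the classical method; needs review")
```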
The UK announced strict cyber rules for power plants using AI: companies must report any AI security errors within 2 hours. Japan started testing AI "safety locks" that stop robots from moving if they detect system errors.
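No details of Japan's locks are given. As a rough sketch of the general interlock pattern, assuming hypothetical class and method names, motion is permitted only while recent self-checks pass, and any detected fault latches the system into a stop:

```python
import time

class SafetyInterlock:
    """Hypothetical 'safety lock': motion is allowed only while fresh
    self-checks pass; any detected error latches the system stopped."""

    def __init__(self, heartbeat_timeout_s: float = 0.5):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.last_ok = 0.0        # no heartbeat yet means locked by default
        self.latched_fault = False

    def report_self_check(self, ok: bool) -> None:
        if ok:
            self.last_ok = time.monotonic()
        else:
            self.latched_fault = True  # faults latch until a manual reset

    def motion_allowed(self) -> bool:
        fresh = (time.monotonic() - self.last_ok) < self.heartbeat_timeout_s
        return fresh and not self.latched_fault

lock = SafetyInterlock()
lock.report_self_check(ok=True)
assert lock.motion_allowed()
lock.report_self_check(ok=False)   # a system error is detected...
assert not lock.motion_allowed()   # ...so the robot must stop
```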
Over 100 universities joined a global AI ethics training program. Students will learn to spot biased algorithms and to protect worker privacy in factory robots. The program includes real cases, such as an AI system that wrongly fired workers based on faulty data.
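The article does not say which methods the program teaches. One standard classroom example for spotting biased algorithms is the disparate impact ratio, where values below 0.8 (the "four-fifths rule") are a common red flag; the outcome data below is invented for illustration:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable decisions (1 = favorable, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower group selection rate to the higher one;
    values below 0.8 commonly trigger a bias review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical hiring-algorithm outputs for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 37.5% favorable
ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flag for review
```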