Ethics & Safety Weekly AI News
April 14 - April 24, 2025

The push for AI safety faced challenges this week. Dr. Sean McGregor compared today's AI risks to early airplane crashes, warning that we are deploying AI everywhere before fully understanding its dangers. At the Paris AI Summit, 60 nations pledged to make AI "open and trustworthy," but the US and UK declined to sign, prioritizing economic growth over safety. Critics such as David Leslie argued the summit sidestepped real risks in favor of profit.
OpenAI stirred controversy by dropping tests of AI's ability to persuade or deceive people from its safety evaluations. Experts fear this could let harmful AI spread faster. Meanwhile, military AI tools are being built to speed battlefield decisions, though questions remain about who controls lethal machines.
In Hyderabad, India, UNESCO’s Ethics by Design meeting brought together tech experts and lawmakers. They discussed rules to prevent AI bias and protect privacy, especially for apps used in schools and hospitals.
Cybersecurity guides highlighted encryption as key to safe AI in healthcare. One hospital's AI now operates only on encrypted patient records, so hackers who intercept the data cannot read it. The AI Bill of Rights gained support as a blueprint for fair, secure systems.
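As a rough illustration of the idea behind that hospital setup, here is a minimal de-identification sketch: personal identifiers are replaced with irreversible tokens before a record ever reaches an AI pipeline. This is a hypothetical example, not the hospital's actual system; the field names, the keyed-hash approach, and the secret key are all illustrative assumptions.

```python
import hmac
import hashlib

# Hypothetical secret; in practice this would come from a key-management service.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, irreversible token (keyed hash)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict, id_fields=("name", "mrn")) -> dict:
    """Return a copy safe to hand to an AI pipeline: identifiers are tokenized,
    clinical measurements pass through unchanged."""
    return {k: pseudonymize(v) if k in id_fields else v
            for k, v in record.items()}

patient = {"name": "Jane Doe", "mrn": "12345", "glucose_mg_dl": 104}
safe = scrub_record(patient)
```

Because the same identifier always maps to the same token, the AI can still link records for one patient without ever seeing the real name.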
Globally, companies are urged to run risk assessments and train staff to spot AI threats such as fake or poisoned data inputs. China's new AI models, such as DeepSeek, added to worries that a tech race is overriding safety. Experts agree that making AI safe requires teamwork among governments, companies, and watchdogs to balance innovation with human rights.
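Spotting the kind of fake data inputs staff are being trained to catch can start with simple sanity checks. The hypothetical sketch below flags records whose values are missing, non-numeric, or outside plausible ranges; the field names and ranges are illustrative assumptions, not a real screening standard.

```python
# Hypothetical plausibility ranges for incoming data fields.
EXPECTED_RANGES = {
    "glucose_mg_dl": (40, 600),
    "heart_rate": (20, 250),
}

def flag_suspect(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks plausible."""
    issues = []
    for field, (lo, hi) in EXPECTED_RANGES.items():
        value = record.get(field)
        if not isinstance(value, (int, float)):
            issues.append(f"{field}: missing or non-numeric")
        elif not lo <= value <= hi:
            issues.append(f"{field}: {value} outside plausible range [{lo}, {hi}]")
    return issues
```

Checks like these are only a first line of defense; they catch crude fakes, while subtler poisoning requires statistical monitoring and provenance tracking.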