Ethics & Safety Weekly AI News

July 7 - July 15, 2025

New safety rules for AI helpers were announced at a United Nations meeting in Geneva. The World Digital Technology Academy (WDTA) created special tests to check whether single AI agents work safely. These tests act like safety belts for AI helpers used in cars, hospitals, factories, and banks. WDTA's leader Yale Li said these rules help "put ethics into every stage of AI's life". The group is already trying these tests with finance and health companies first.

In the United States, Texas passed a new law about AI. It says companies must show how their AI works, check for unfairness, and allow safety inspections. This is one of America's strongest state AI laws so far. Michigan lawmakers proposed the AI Safety and Security Transparency Act. If passed, it would make big AI companies share safety plans, undergo yearly outside checks, and protect workers who report problems.

California is discussing several AI safety bills. One would require people to watch over AI running important things like trains, power systems, and emergency services. Another would protect workers who report risks from powerful AI models. These laws aim to prevent AI accidents in critical areas.

Doctors got important advice about using AI helpers. A study in a medical journal said hospitals must get clear patient permission before AI helps with health decisions. The authors warned that vague consent rules hurt trust between doctors and patients. This matches WDTA's work on health AI safety testing.

At the UN's AI for Good Summit in Geneva, experts shared worries about rushing AI into daily life. ITU leader Doreen Bogdan-Martin warned: "The biggest risk is putting AI everywhere without understanding what it means for people and our planet". She asked everyone to help make AI safe and useful for all.

Countries in the BRICS group (including Brazil, Russia, India, China, and South Africa) asked the United Nations to lead global AI rules. They want fair access to AI technology and ethical oversight. This push for worldwide cooperation comes as AI helpers spread quickly.

Businesses got reminders about ethical AI use. Experts said companies must measure AI trustworthiness and understand where biases come from. They suggested using numbers to track fairness and setting clear lines of responsibility. Another article advised companies to build privacy protections into AI from the start. This includes using less personal data and strong security locks.

An opinion piece compared using AI in hiring to past tech changes like calculators. The author argued that calling AI assistance "cheating" might miss how technology naturally changes work. This debate shows how AI helpers are forcing new conversations about fairness in everyday life.

Weekly Highlights