The ICLR 2025 workshop in Singapore brought together AI researchers to tackle open questions about agentic AI in science. Speakers identified four main challenges: enabling AI systems to share specialized domain knowledge (such as chemistry or physics), making collaboration among multiple AI agents more efficient, ensuring results are reproducible, and keeping models current with new discoveries. One example showed how AI could combine weather data with pollution studies to predict climate change impacts.

Clarivate expanded its Academic AI Platform with pre-built research assistant agents and a no-code Agent Builder tool. These help librarians organize digital collections and let students pose complex questions, such as "Show me recent breakthroughs in renewable energy," across multiple databases. Early trials at 3,000 institutions reportedly cut time spent on literature reviews by 40%.

Google’s new AI co-scientist system, built with Gemini 2.0, demonstrated how AI can propose novel research directions. In one case, it suggested combining cancer drug research with nanotechnology studies, an approach later validated by human scientists. Separately, Google published a paper in which AI agents generated synthetic training data for studying rare diseases, addressing the scarcity of real-world medical datasets.

Ethical concerns took center stage, with panels debating how to prevent AI from "hallucinating" fabricated data in papers. Proposed solutions included automated fact-checking tools and requiring AI systems to cite sources, as human researchers do. The Singapore government announced plans to pilot AI research validation standards later this year.

Looking ahead, teams from MIT and ETH Zurich announced partnerships to deploy climate-modeling AI agents that simulate glacier melt and carbon capture scenarios. These systems are intended to help policymakers draft disaster response plans, and they underscore the need for global collaboration in AI-driven science.

Weekly Highlights