Healthcare Weekly AI News
November 10 - November 18, 2025

Healthcare is changing quickly thanks to artificial intelligence agents that can work like doctors. These special AI systems don't just follow instructions - they can think, make choices, and take actions on their own. They can talk to patients, understand their problems, figure out what's wrong with them, and suggest treatments. This is very different from regular AI that just does one specific job.
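To make that difference concrete, here is a minimal, generic sketch of the "agent loop" that sets agentic AI apart from single-task tools. All of the function names are hypothetical, and nothing here comes from any real medical product.

```python
# A generic agent loop: observe, decide, act, repeat until a goal is met.
# Everything here is an illustrative assumption, not a real product's code.
def run_agent(observe, decide, act, goal_met, max_steps=10):
    """Loop until the agent's goal is met or a step budget runs out."""
    for _ in range(max_steps):
        state = observe()          # e.g. read the latest patient answers
        if goal_met(state):        # e.g. enough information gathered
            return state
        action = decide(state)     # choose the next question to ask
        act(action)                # carry it out and record the result

# Toy usage: keep asking questions until three answers are collected.
answers = []
result = run_agent(
    observe=lambda: list(answers),
    decide=lambda s: f"question {len(s) + 1}",
    act=lambda a: answers.append(a),
    goal_met=lambda s: len(s) >= 3,
)
print(result)  # ['question 1', 'question 2', 'question 3']
```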
One of the most exciting examples is Scope AI, created by a company called Akido Labs. Scope AI is described as an "omni-specialty" system, meaning it can help with many different types of medical problems. When a patient talks to Scope AI, it asks questions, learns about their health, and compares that information against millions of real patient cases to suggest a likely diagnosis. Then it suggests tests to run, writes notes about the patient, and even recommends medicines or lifestyle changes. The really important part is that a real human doctor always reviews everything Scope AI suggests before anything is given to the patient. In Akido's clinics in Southern California, doctors say they can see four times more patients each day because Scope AI helps them work faster. By the end of 2025, more than 250,000 patients are expected to have used Scope AI for their medical visits.
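That review step is the key safety property. Below is a minimal sketch of such a human-in-the-loop gate, assuming a simple suggestion-and-approval structure; the names are hypothetical and nothing here is drawn from Akido's actual system.

```python
# A hypothetical human-in-the-loop gate: AI suggestions are held in a queue
# until a physician explicitly approves them. The Suggestion class and flow
# are illustrative assumptions, not Akido's implementation.
from dataclasses import dataclass

@dataclass
class Suggestion:
    kind: str                 # e.g. "test", "note", "medication"
    detail: str
    approved: bool = False    # set True only by a reviewing physician

def release_to_patient(queue: list) -> list:
    """Only physician-approved suggestions ever leave the review queue."""
    return [s for s in queue if s.approved]

queue = [
    Suggestion("test", "order lipid panel", approved=True),
    Suggestion("medication", "start statin"),  # still awaiting review
]
print([s.detail for s in release_to_patient(queue)])  # ['order lipid panel']
```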
Because AI is becoming such a big part of healthcare, important medical groups are creating rules to make sure it is used safely. The American Heart Association released guidance this week that tells hospitals how to pick, test, and watch AI tools to make sure they help patients. The guidance rests on four main principles: making sure AI matches what the hospital wants to do, checking that AI is fair and does not treat some patients worse than others, making sure AI actually helps patients get better, and thinking about costs. The Association also found a worrying gap: only 61 percent of hospitals test their AI tools on their own local patients before using them, and even fewer check whether the AI treats all patients fairly. Small, rural, and non-academic hospitals lag furthest behind.
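One way to picture the guidance is as a deployment checklist that blocks a tool until every principle has been checked. The sketch below encodes the four principles that way; the field names and gating logic are illustrative assumptions, not part of the AHA document.

```python
# A hypothetical pre-deployment checklist encoding the AHA's four principles.
# Field names and gating logic are illustrative, not from the guidance itself.
AHA_PRINCIPLES = {
    "alignment": "Does the tool match the hospital's clinical goals?",
    "fairness": "Has performance been checked across patient subgroups?",
    "effectiveness": "Is there evidence patients actually do better?",
    "cost": "Do the benefits justify the total cost?",
}

def ready_to_deploy(evaluation: dict) -> bool:
    """Deploy only if every principle has been affirmatively checked."""
    return all(evaluation.get(principle, False) for principle in AHA_PRINCIPLES)

# Example: a tool validated locally but never audited for fairness is blocked,
# mirroring the gap the Association found in practice.
print(ready_to_deploy({"alignment": True, "fairness": False,
                       "effectiveness": True, "cost": True}))  # False
```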
The FDA, which is the government group in the United States that approves new medicines and medical devices, is also paying attention to AI. On November 6, 2025, the FDA held a meeting to talk about AI systems that help with mental health. The FDA is trying to figure out the right way to check these AI systems to make sure they are safe and helpful. One of their big worries is that AI keeps changing and learning, so doctors and the FDA need new ways to test it and make sure it stays safe over time.
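A common answer to that worry is ongoing performance monitoring: periodically re-scoring the deployed model on fresh, labeled cases and flagging any drop below its cleared baseline. Here is a minimal sketch of that idea; the threshold and names are assumptions, not FDA requirements.

```python
# A hypothetical post-deployment drift check: re-score the model on fresh
# labeled cases and flag it when accuracy falls below the cleared baseline.
# The tolerance and names are illustrative assumptions, not FDA rules.
def performance_has_drifted(baseline_accuracy, predictions, truth,
                            tolerance=0.05):
    """Return True if recent accuracy drops below baseline minus tolerance."""
    correct = sum(p == t for p, t in zip(predictions, truth))
    recent_accuracy = correct / len(truth)
    return recent_accuracy < baseline_accuracy - tolerance

# Example: a model cleared at 92% accuracy that now scores 70% gets flagged.
preds = [True, False, True, True, False, True, False, True, True, False]
truth = [True, True, True, True, False, False, False, True, True, True]
print(performance_has_drifted(0.92, preds, truth))  # True (7/10 = 70% < 87%)
```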
The American Medical Association, which is a big group of doctors in the United States, just started a new center to help with AI and digital health. They say that doctors must be part of every step of making and using AI in healthcare. The AMA thinks the government needs to create one clear set of rules instead of many different rules, so everyone knows what to do. They also say doctors need training to use AI properly and that AI should help doctors do their job better, not get in the way.
Meanwhile, China has announced a big national plan to use AI in healthcare. Between November 2025 and March 2026, China will plan how to use AI, then test it in 50 hospitals and 500 clinics. This shows that AI in healthcare is becoming important everywhere in the world, not just in the United States.
The money flowing into healthcare AI companies is huge. The worldwide AI healthcare market is expected to grow from 37.09 billion dollars in 2025 to 701.79 billion dollars by 2034. However, some investors are worried because hospitals are slow to adopt AI. Only about 3 out of every 10 AI projects that hospitals pilot end up being used with real patients. The problem is that hospitals move slowly because they need to be very careful about patient safety, while AI companies want to move fast.
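For scale, those two figures imply a compound annual growth rate of roughly 38.6 percent over the nine years. The short sketch below derives that rate from the 2025 and 2034 numbers quoted above; the formula is standard CAGR arithmetic, and the result is computed here rather than taken from the source.

```python
# Implied compound annual growth rate (CAGR) from the quoted market figures.
# The 2025 and 2034 values come from the article; the rate is derived here.
start_value = 37.09    # billions of dollars, 2025
end_value = 701.79     # billions of dollars, 2034
years = 2034 - 2025    # nine years of growth

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 38.6%
```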
Experts warn that not all AI tools for mental health and wellness have enough proof that they really work. The American Psychological Association says that AI chatbots and wellness apps need more testing and better rules before people rely on them. This shows that while AI can help healthcare in many ways, we still need to be careful and test everything thoroughly to keep patients safe.