Wednesday, February 25, 2026

Musk predicts everyone will have better health care than the president does now; says med school is pointless

Elon Musk recently predicted that in the not-too-distant future, “everyone will have access to medical care that surpasses what the President currently receives,” tying the development to rapid advances in AI and robotics. He also forecasts that AI-powered robot surgeons and medical systems will make ultra-elite care cheap and widely available by 2030.

His underlying argument is that once high-performing medical AI and robot platforms exist, they can be replicated at low marginal cost, turning scarce expert labor into abundant computing capacity.

And get this, he insists that going to medical school these days is pointless.

What bold predictions!

His predictions have sparked backlash from medical professionals and educators, who argue that timelines for safe, fully autonomous surgical robots at scale are highly optimistic and that medicine involves complex human judgment and context. Critics also note that even if the technology worked, achieving universal access at “president-level” quality would require massive changes in regulation, liability, infrastructure, and insurance—changes that Musk’s short clips largely gloss over.

Additionally, safety and ethics groups point out the ongoing dangers of over-reliance on AI: its outputs can be unreliable, biased, and opaque, and doctors need training to understand its limits and catch its mistakes—especially when these tools are introduced quickly.

A 2025 meta-analysis of generative AI for diagnosis found no significant difference in accuracy versus non-expert physicians, but a clear gap versus experts: expert doctors were about 16 percentage points more accurate on average.

AI has undoubtedly changed the way doctors work; however, the current body of literature suggests augmentation, rather than complete replacement, for at least the next ten years. Doctor training and credentialing will likely evolve to emphasize effective collaboration with AI agents—i.e., maintaining core clinical skills without becoming overly reliant on the tools. Studies already indicate that over-reliance is a non-trivial concern.

Doctors will likely shift much more toward supervision: coordinating multiple AI agents, validating their outputs, ensuring data quality, explaining the potential risks of an AI agent’s decision-making, and focusing on cases that are complex or ambiguous rather than amenable to rote pattern matching.

So, while many physicians believe AI will reduce their workload and help identify findings they would otherwise miss, they still envision themselves as the final authority—making judgment calls and communicating with patients and families about care options—rather than simply “accepting” AI’s outputs at the press of a button.

Multiple studies have found that clinicians who lean heavily on AI show a drop-off in detection performance when the AI is removed from their workflow—a phenomenon known as “deskilling.” Findings like these have led regulatory agencies and hospital administrators to be cautious about fully removing humans from the decision-making loop.

Lee Cleveland
Lee is the Editor-in-Chief and founder of 2026PREDICT.com (predictionsandodds.com)—a cutting-edge platform dedicated to analyzing and tracking the accuracy of prediction markets and forecasting models.
