Beyond Eye Rolls: Why AI Belongs in Clinical Reasoning and the Power of Open Evidence, a Medical Provider's Platform
My usual topics are cheerful subjects, such as environmental toxins and epidemics. Recent experiences with patients and family members using ChatGPT made me think that a column like this might be particularly relevant for our times. It is addressed to my fellow medical providers, but may be of interest to all.
*******************************************************
AI isn’t just for your patients anymore. It may soon be your most indispensable clinical colleague.
Not long ago, hearing a patient say, “Well, I read online…” made many of us brace ourselves. “Dr. Google,” as we half-joked, was flooding the medical world with misinformation: worst-case scenarios from obscure websites, sketchy forum posts, and pseudoscience in blog format. It became harder and harder to separate credible concerns from digital hypochondria. The tension grew: white coat versus search engine.
But something fundamental has changed. And we, as medical care providers, should take notice.
Enter “Dr. AI”: artificial intelligence platforms such as ChatGPT and, more importantly, Open Evidence, a physician-oriented AI platform designed specifically to support clinical reasoning and evidence-based decision-making. If ChatGPT is a powerful new tool patients can wield, Open Evidence is the AI partner built for us. Think of it as your gateway to responsible AI use in medicine, one that can augment or even replace general-purpose tools like ChatGPT when the stakes are high.
Many of us are already seeing patients walk in with AI-generated differential diagnoses. At first glance, this feels like another round of internet printouts. But here’s the twist: this time, the AI might well be right. And if we dismiss it too quickly, we risk missing something important.
A 2023 study in JAMA Internal Medicine found that, in head-to-head comparisons, AI responses to patient questions were rated not only higher in quality but also more empathetic than physicians’ responses. In radiology, AI models now outperform many radiologists in detecting breast cancer on mammograms. The FDA has authorized more than 1,000 AI-enabled medical devices, from autonomous retinal scanners to echocardiogram interpreters. One system even beat expert dermatologists in diagnosing malignant skin lesions.
AI isn’t replacing clinical expertise; it’s extending it. Open Evidence, for example, isn’t a black-box algorithm. It provides transparent citations, up-to-date clinical guidelines, and interpretable reasoning tailored for professional users. It doesn’t just deliver answers; it sharpens your judgment, jogs your memory, and helps close the cognitive gaps that lead to errors.
One of AI’s most profound contributions may lie in its ability to think across medical silos. Unlike human clinicians, who are often trained within the boundaries of specific specialties and are understandably constrained by time, cognitive bandwidth, and institutional hierarchies, AI can draw connections across disciplines without prejudice or fatigue. A patient with dermatologic, endocrine, and neurological symptoms might be shuffled from one specialist to another in the traditional system, each silo offering its own narrow lens. AI can integrate these findings simultaneously, cross-referencing literature and pattern-matching across specialties to suggest diagnoses or interactions that might otherwise be missed. In this sense, AI offers not just more information, but a fundamentally different—and often broader—mode of reasoning. That’s a clinical superpower worth paying attention to.
Let’s consider a real risk: if a patient presents with a ChatGPT-generated concern that turns out to be correct, and you dismiss it out of hand, you may not only miss a diagnosis but also expose yourself to medical liability. Patients now have access to tools that can challenge our reasoning. Ignoring them isn’t just arrogant; it’s risky.
Instead, imagine this: a patient mentions what ChatGPT suggested. You take a breath. Then, you consult Open Evidence, a platform designed specifically for clinical use, to validate, refute, or expand on the concern. You’re not relinquishing your expertise; you’re anchoring it in the most up-to-date and probabilistically sound information available.
Think of these platforms not as threats, but as hyper-literate colleagues. ChatGPT may be your always-available, sleep-free medical student. Open Evidence is your seasoned consultant: transparent, grounded in evidence, and ready to elevate your diagnostic process.
In the end, adopting AI tools isn’t a concession; it’s an evolution. The physicians of the future won’t just rely on stethoscopes and lab tests. They’ll learn to interrogate large language models, interpret predictive analytics, and use platforms like Open Evidence to practice safer, smarter medicine.
So the next time a patient says, “I asked ChatGPT and it said…,” listen to them. Ask questions. Then open your laptop, and this time, instead of rolling your eyes, open Open Evidence. You might save time. You might avoid an error. You might even avoid a lawsuit.
In the era of AI-assisted medicine, humility, paired with the right tools, may be the most powerful diagnostic instrument we have.
A Word of Caution
Despite their power and utility, AI platforms like ChatGPT and Open Evidence are not infallible. Just like physicians, they can be wrong. These tools are best viewed as intelligent assistants, not decision-makers. The responsibility for clinical judgment and patient care still rests with the physician, who must weigh AI-generated input against the unique context of the patient in front of them. AI can support, but it must not replace, human discernment.