
    ChatGPT wants to play doctor with a special mode, but I’ll skip



    A few weeks ago, a medical researcher reported a curious case of AI-induced harm: an individual who consulted ChatGPT went into a psychotic spiral due to bromide poisoning. Experts have been raising such concerns for a while now, and it seems OpenAI aims to counter them with a dedicated ChatGPT mode for medical advice.

    The tall hope

    Tibor Blaho, an engineer at the AI-focused firm AIPRM, shared an interesting snippet they spotted in the code of ChatGPT’s web app. The strings mention a new feature called “Clinician Mode.” The code doesn’t reveal how the feature works, but it appears to be a dedicated mode for seeking health-related advice, much like the safety guardrails implemented for teen accounts.

    ChatGPT web app now includes new references to a “Clinician Mode”

    Plus, the experiment config includes a new “model speaks first” prompt (likely when using voice mode): “Greet the user in a friendly way. The user’s locale is {locale}, use their language. Be brief – no more than… pic.twitter.com/hhrLppQ1ub

    — Tibor Blaho (@btibor91) October 9, 2025

    Notably, OpenAI hasn’t made any official announcement about such a feature, so take this news with the proverbial pinch of salt. Still, a few theories are floating around about how it could work. Justin Angel, a developer well known in the Apple and Windows communities, suggested on X that it could be a protected mode that limits the information sources to medical research papers.

    w00t. @ConsensusNLP just announced “Medical mode” limiting RAG sources for 8M trusted 10/50 medical publications.

    Just ran this query for depression treatments and it checks out. Is this what the new “Clinician mode” from ChatGPT going to do? Limit results to medical science?… pic.twitter.com/GlpS5rgT5x

    — Justin Angel (@JustinAngel) October 10, 2025

    For example, if you ask for medical advice about symptoms or a wellness issue, ChatGPT would only respond based on information retrieved from trusted medical sources, leaving fewer chances of it doling out misleading health-related advice.
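    To make that concrete, here is a minimal, purely speculative Python sketch of how source-restricted retrieval can work. OpenAI hasn’t described any implementation, so every name here (TRUSTED_SOURCES, Document, retrieve, answer) is hypothetical, and the toy keyword matching stands in for whatever retrieval a real system would use.

```python
from dataclasses import dataclass

# Hypothetical allowlist of vetted medical domains (illustrative only;
# nothing here reflects OpenAI's or Consensus's actual source list).
TRUSTED_SOURCES = {"pubmed.ncbi.nlm.nih.gov", "cochranelibrary.com"}

@dataclass
class Document:
    text: str
    domain: str

def retrieve(query: str, corpus: list[Document]) -> list[Document]:
    """Toy keyword match; a real system would use embedding search."""
    hits = [doc for doc in corpus if query.lower() in doc.text.lower()]
    # The key idea: discard anything not on the medical allowlist.
    return [doc for doc in hits if doc.domain in TRUSTED_SOURCES]

def answer(query: str, corpus: list[Document]) -> str:
    docs = retrieve(query, corpus)
    if not docs:
        # Refuse rather than fall back to model priors or the open web.
        return "No vetted medical source found; please consult a clinician."
    context = "\n".join(doc.text for doc in docs)
    return f"Answer grounded only in vetted sources:\n{context}"

# Usage with a tiny mock corpus: only the allowlisted document survives.
corpus = [
    Document("SSRIs are a first-line treatment for depression.",
             "pubmed.ncbi.nlm.nih.gov"),
    Document("Miracle cure for depression, click here!",
             "sketchy-blog.example"),
]
print(answer("depression", corpus))
```

    The design choice worth noting in this sketch is the refusal path: when nothing from the allowlist matches, the system declines to answer instead of falling back on the model’s general training data, which is where misleading advice tends to creep in.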

    The blunt reality

    The idea behind something like “Clinician Mode” isn’t far-fetched. Just a day earlier, Consensus launched a feature called “Medical Mode.” When it receives a health-related query, the conversational AI bases its answers “exclusively on the highest-quality medical evidence,” a corpus that includes over eight million papers and thousands of vetted clinical guidelines. The approach sounds safe on the surface, but risks persist.

    Medicine moves fast. Evidence should move faster.

    Medical Mode is built for questions where evidence quality can’t be compromised.

    Join the millions of researchers and clinicians, try Medical Mode in Consensus today: https://t.co/7OrmoDufgw

    — Consensus (@ConsensusNLP) October 9, 2025

    A paper published in Scientific Reports last month highlighted the pitfalls of relying on ChatGPT in a medical context. “Caution should be exercised, and education provided to colleagues and patients, regarding the risk of hallucination and incorporation of technical jargon which may make the results challenging to interpret,” the paper warned. Even so, the industry is moving steadily toward AI.

    The Stanford School of Medicine is testing AI-powered software called ChatEHR that lets doctors interact with medical records in a more conversational fashion. OpenAI also tested a “medical AI assistant” in partnership with Penda Health a few months ago and announced that the tool reduced diagnostic errors by 16%.





