Physician and AI Responses to Cancer Questions (2025)

JAMA Oncology

    May 16, 2024

    David Chen, BMSc1,2; Rod Parsa, MSc1,3; Andrew Hope, MD1,4; Breffni Hannon, MBChB5,6; Ernie Mak, MD5,7; Lawson Eng, MD8,9; Fei-Fei Liu, MD1,4; Nazanin Fallah-Rad, MD8; Ann M. Heesters, PhD10,11,12; Srinivas Raman, MD, MASc1,4

    Author Affiliations

    • 1Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Ontario, Canada

    • 2Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada

    • 3Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada

    • 4Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada

    • 5Department of Supportive Care, University Health Network, Toronto, Ontario, Canada

    • 6Department of Medicine, University of Toronto, Toronto, Ontario, Canada

    • 7Department of Family & Community Medicine, University of Toronto, Toronto, Ontario, Canada

    • 8Division of Medical Oncology and Hematology, Department of Medicine, Princess Margaret Cancer Centre/University Health Network Toronto, Toronto, Ontario, Canada

    • 9Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, Ontario, Canada

    • 10Department of Clinical and Organizational Ethics, University Health Network, Toronto, Ontario, Canada

    • 11The Institute for Education Research, University Health Network, Toronto, Ontario, Canada

    • 12Dalla Lana School of Public Health and Joint Centre for Bioethics, University of Toronto, Toronto, Ontario, Canada

    JAMA Oncol. 2024;10(7):956-960. doi:10.1001/jamaoncol.2024.0836


    Key Points

    Question: In response to patient questions about cancer on an online forum, how do conversational artificial intelligence chatbots compare with licensed physicians across measures of empathy, response quality, and readability?

    Findings: In this equivalence trial, after controlling for response length to 200 patient questions, 6 oncology physician evaluators consistently rated chatbot responses higher in empathy, quality, and readability of writing style. The mean reading grade level of physician responses was lower than that of responses from 2 of the 3 chatbots, suggesting that chatbot responses may be more difficult to read based on word and sentence length.

    Meaning: The results of this study may motivate future development of physician-chatbot collaborations in clinical practice to expand access to care for more patients and decrease physician burnout, wherein chatbots may provide empathetic response templates for physicians to edit for medical accuracy using their expertise and clinical judgment.

    Abstract

    Importance: Artificial intelligence (AI) chatbots pose the opportunity to draft template responses to patient questions. However, the ability of chatbots to generate responses based on domain-specific knowledge of cancer remains to be tested.

    Objective: To evaluate the competency of AI chatbots (GPT-3.5 [chatbot 1], GPT-4 [chatbot 2], and Claude AI [chatbot 3]) to generate high-quality, empathetic, and readable responses to patient questions about cancer.

    Design, Setting, and Participants: This equivalence study compared the AI chatbot responses and responses by 6 verified oncologists to 200 patient questions about cancer from a public online forum. Data were collected on May 31, 2023.

    Exposures: Random sample of 200 patient questions related to cancer from a public online forum (Reddit r/AskDocs) spanning from January 1, 2018, to May 31, 2023, was posed to 3 AI chatbots.

    Main Outcomes and Measures: The primary outcomes were pilot ratings of the quality, empathy, and readability on a Likert scale from 1 (very poor) to 5 (very good). Two teams of attending oncology specialists evaluated each response based on pilot measures of quality, empathy, and readability in triplicate. The secondary outcome was readability assessed using Flesch-Kincaid Grade Level.
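
    For context, the Flesch-Kincaid Grade Level used as the secondary outcome is a standard readability formula based on average sentence length and average syllables per word. The sketch below applies that formula to an invented example sentence; the naive syllable counter is a rough approximation and is not the tooling used by the study authors.

```python
import re


def count_syllables(word: str) -> int:
    # Crude approximation: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(1, len(sentences)))
            + 11.8 * (syllables / max(1, len(words)))
            - 15.59)


if __name__ == "__main__":
    sample = ("Your scan results look stable. We will repeat the blood work "
              "in three months and call you if anything changes.")
    print(round(flesch_kincaid_grade(sample), 2))
```

    Running this on a longer, more technical reply quickly shows how longer words and sentences push the estimated grade level up.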

    Results: Responses to 200 questions generated by chatbot 3, the best-performing AI chatbot, were rated consistently higher in overall measures of quality (mean, 3.56 [95% CI, 3.48-3.63] vs 3.00 [95% CI, 2.91-3.09]; P < .001), empathy (mean, 3.62 [95% CI, 3.53-3.70] vs 2.43 [95% CI, 2.32-2.53]; P < .001), and readability (mean, 3.79 [95% CI, 3.72-3.87] vs 3.07 [95% CI, 3.00-3.15]; P < .001) compared with physician responses. The mean Flesch-Kincaid Grade Level of physician responses (mean, 10.11 [95% CI, 9.21-11.03]) was not significantly different from chatbot 3 responses (mean, 10.31 [95% CI, 9.89-10.72]; P > .99) but was lower than those from chatbot 1 (mean, 12.33 [95% CI, 11.84-12.83]; P < .001) and chatbot 2 (mean, 11.32 [95% CI, 11.05-11.79]; P = .01).
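
    As a rough illustration of the kind of summary statistics reported above (means of 1-5 Likert ratings with 95% confidence intervals), here is a minimal sketch using invented ratings; the study's actual analysis may have used different or additional methods (for example, tests controlling for response length).

```python
import math
import statistics


def mean_with_ci(ratings, z=1.96):
    # Mean and approximate 95% CI using a normal approximation.
    m = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return m, (m - z * se, m + z * se)


# Hypothetical 1-5 Likert quality ratings (not data from the study).
chatbot_quality = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]
physician_quality = [3, 2, 3, 3, 4, 2, 3, 3, 2, 3]

for label, ratings in [("chatbot", chatbot_quality),
                       ("physician", physician_quality)]:
    m, (lo, hi) = mean_with_ci(ratings)
    print(f"{label}: mean {m:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```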

    Conclusions and Relevance: The findings of this study suggest that chatbots can generate quality, empathetic, and readable responses to patient questions comparable to physician responses sourced from an online forum. Further research is required to assess the scope, process integration, and patient and physician outcomes of chatbot-facilitated interactions.


    Citation

    Chen D, Parsa R, Hope A, et al. Physician and Artificial Intelligence Chatbot Responses to Cancer Questions From Social Media. JAMA Oncol. 2024;10(7):956–960. doi:10.1001/jamaoncol.2024.0836


        FAQs

        How do physician and AI chatbot responses to cancer questions compare? ›

        The study found that chatbot 1, chatbot 2, and chatbot 3 can provide quality, empathetic, and readable responses to patients' cancer questions compared with physician responses in a public online forum; this result is consistent with previous findings in the general medicine setting.

        Does AI detect cancer better than doctors? ›

        An AI tool may spot prostate cancer with 84% accuracy, according to UCLA research, outperforming doctors, who had a 67% accuracy rate. Unfold AI, an AI program created by Avenda Health, visualizes the likelihood of cancer by analyzing clinical data with an algorithm.

        Can artificial intelligence-driven chatbots correctly answer questions about cancer? ›

        Results from two studies have found that, although AI chatbots can gather cancer information from reputable sources, their responses to questions about treatment and other topics can include errors and omissions.

        How is AI used in cancer diagnosis? ›

        “The AI model recognizes patterns that represent cells and tissue types and the way those components interact,” better enabling the pathologist to assess cancer risk. The AI “doesn't say there's no chance of this being cancer,” Margolies says of an AI-assisted mammogram.

        Is ChatGPT better than physicians? ›

        Outcomes: The chatbot's responses were preferred to those of the physician and rated significantly higher in quality and empathy. In 78.6% of cases, the chatbot's responses were considered better than the doctor's. More specifically, the doctors' answers were significantly shorter than the chatbot's answers (52 words vs. ...).

        How accurate is AI cancer detection? ›

        The detection and diagnosis of malignant tumors with the help of AI seem to be feasible and accurate with the use of different technologies, such as CAD systems, deep learning and machine learning algorithms, and radiomic analysis, when compared with the traditional model, although these technologies are not capable of ...

        What are the limitations of AI in cancer? ›

        However, data-related concerns and human biases that seep into algorithms during development and post-deployment phases affect performance in real-world settings, limiting the utility and safety of AI technology in oncology clinics.

        What should you not ask AI? ›

        What else should you never ask an AI assistant?
        • Don't ask voice assistants like Siri, Alexa, or Google Assistant to handle your banking tasks. ...
        • Don't rely on voice assistants to find and dial phone numbers for you. ...
        • Avoid using voice assistants for medical advice.
        Aug 19, 2024

        How accurate is ChatGPT, as a percentage? ›

        ChatGPT achieved more than 50% accuracy across all US Medical Licensing Examination exams (medRxiv).

        Where AI should not be used? ›

        If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm to humans.

        How accurate is AI in diagnosing patients? ›

        The study analyzed responses by 457 clinicians who diagnosed at least one fictional patient; 418 diagnosed all nine. Without an AI helper, the clinicians' diagnoses were accurate about 73% of the time. With the standard, unbiased AI, this percentage jumped to 75.9%.

        What to expect from AI in oncology? ›

        Pathologists trained to scrutinise tumour slides to determine the type of cancer and stage find that AI can now tell you not only the type and stage, but can also molecularly classify the cancer and suggest treatment decisions and other clinical management decisions based on that.

        What is the market for AI in cancer diagnosis? ›

        In 2023, the software solutions segment led the global AI in cancer market, capturing a 58% market share. The breast cancer segment is the largest user of AI-based solutions for early diagnosis and treatment, holding a 41% share.

        Is ChatGPT accurate for medical advice? ›

        WEISSMAN: ChatGPT should not be used to support clinical decision-making. There is no evidence that it is safe, equitable or effective for this purpose. As far as I know, there is also no authorization from the Food and Drug Administration for its use in this way.

        Is AI outperforming doctors? ›

        The AI system matched or surpassed the physicians' diagnostic accuracy in all six medical specialties considered.

        Does ChatGPT lack empathy? ›

        The questionnaires indicated that ChatGPT is able to interpret the emotions of others and take their perspective but still has some difficulty showing the same level of empathy as healthy humans.

        Is AI better at diagnosing than doctors? ›

        The AI system matched or surpassed the physicians' diagnostic accuracy in all six medical specialties considered. The bot outperformed physicians in 24 of 26 criteria for conversation quality, including politeness, explaining the condition and treatment, coming across as honest, and expressing care and commitment.

        How accurate is the AI medical diagnosis? ›

        The average agreement rate for half of all presenting symptoms was greater than or equal to 90 percent overall. In cases where adjudication was necessary, the consensus diagnosis—reached in 58.2 percent of cases—was always present in the AI's differential diagnosis.

        Is AI more accurate than radiologists? ›

        The first large, rigorous studies testing AI-assisted radiologists against those working alone give hints at the potential improvements. Initial results from a Swedish study of 80,000 women showed that AI-supported breast screening detected 20 per cent more cancers compared to two radiologists.

        What is the most accurate scan for cancer? ›

        In particular, MRI scans, CT scans and blood tests are often the preferred choices to detect cancer because of their accuracy, low risk levels, and reliability.
