Abstract

The rapid adoption of large language models (LLMs) in healthcare has created opportunities for innovation but has also raised critical concerns about scientific rigor. This article provides a toolbox for clinicians, researchers, and reviewers involved with LLM studies, highlighting the importance of methodologic transparency, reproducibility, and ethical considerations. It addresses foundational aspects of how LLMs function, including their training data, inherent biases, and black-box nature. Prompt engineering strategies are reviewed to help readers understand and optimize model interaction, with emphasis on the need for systematic evaluation of these methods. Key challenges in interpreting model outputs are discussed, and a case is made for explainability and fairness. The article also stresses clear reporting of computational resources and environmental impacts, as well as the risk that rapid model iteration will render studies obsolete. Given the pace at which LLMs evolve, traditional peer-review practices are often outpaced, requiring new guidelines and rigorous qualitative assessment to ensure validity, fairness, and clinical utility. Recommendations to enhance reporting and reproducibility standards are provided.
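To make the abstract's call for systematic, reproducible prompt evaluation concrete, here is a minimal illustrative sketch (not taken from the article itself): it runs a set of prompt variants repeatedly under fixed decoding parameters and logs the details a reviewer would need to audit or reproduce the experiment. The call_model function, the model name, and the prompt variants are all hypothetical placeholders, not anything the article specifies; substitute your own LLM client.

```python
"""Illustrative sketch of reproducible prompt evaluation (assumptions, not
the article's method): log model version, decoding parameters, and prompt
text for every run so the study can be reported and audited."""

import hashlib
import json
from datetime import datetime, timezone


def call_model(prompt: str, temperature: float) -> str:
    # Hypothetical stand-in for a real LLM API call; replace with your client.
    return f"[stub completion for: {prompt[:40]}...]"


def run_evaluation(prompt_variants, model_name="example-model-v1",
                   temperature=0.0, n_repeats=3):
    """Run each prompt variant several times, recording everything a
    reader would need to reproduce or scrutinize the experiment."""
    records = []
    for prompt in prompt_variants:
        for repeat in range(n_repeats):
            output = call_model(prompt, temperature)
            records.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,          # report the exact model version
                "temperature": temperature,   # report decoding parameters
                "repeat": repeat,             # repeated runs expose output variability
                "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
                "prompt": prompt,
                "output": output,
            })
    return records


if __name__ == "__main__":
    variants = [
        "Summarize the anesthetic considerations for this case: ...",
        "As a board-certified anesthesiologist, summarize the anesthetic considerations for this case: ...",
    ]
    print(json.dumps(run_evaluation(variants), indent=2))
```

Recording the prompt hash, model version, and sampling parameters alongside each output is one simple way to operationalize the kind of reporting and reproducibility standards the article recommends, since results from one model snapshot may not hold after the next iteration.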

Publication Info

Year: 2025
Type: article
Citations: 0
Access: Closed

Cite This

Nicole E. Dundas, Tyler J. Law, Teva D. Brender et al. (2025). All That Shines Is Not Gold: Maintaining Scientific Rigor When Evaluating, Interpreting, and Reviewing Studies Using Large Language Models. Anesthesiology. https://doi.org/10.1097/aln.0000000000005795

Identifiers

DOI
10.1097/aln.0000000000005795