Abstract
Many scientific manuscripts submitted for publication are limited by fundamental mistakes in their preparation, leading to rejection. We describe how to write a well-organized radiological research manuscript containing all of the important ingredients for effective communication of a hypothesis-driven scientific study in the context of medical imaging.
Science is often romanticised as a flawless system of knowledge building, in which scientists work together to systematically find answers. In reality, this is not always the case. Dissemination of results is straightforward when the findings are positive, but what happens when you obtain results that support the null hypothesis, or that do not fit with current scientific thinking? In this Editorial, we discuss the issues surrounding publication bias and the difficulty of communicating negative results. Negative findings are a valuable component of the scientific literature because they force us to critically evaluate and validate our current thinking and, fundamentally, move us towards unabridged science.
The aim of the Standards for Reporting of Diagnostic Accuracy (STARD) initiative is to improve the accuracy and completeness of reporting of studies of diagnostic accuracy, allowing readers to assess the potential for bias in a study and to evaluate its generalisability. The STARD steering group searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, using evidence whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve, to the advantage of clinicians, researchers, reviewers, journals, and the public.
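For readers unfamiliar with the quantities such studies report, the sketch below shows how the basic accuracy measures follow from the 2x2 cross-tabulation of an index test against a reference standard. It is illustrative only; the function name and the counts are hypothetical and are not taken from the STARD statement.

# Minimal sketch (not from the STARD paper): standard diagnostic accuracy
# measures from the 2x2 table of an index test vs. a reference standard.

def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return common accuracy measures for a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv}

# Hypothetical counts: 90 true positives, 10 false positives,
# 15 false negatives, 185 true negatives.
print(diagnostic_accuracy(tp=90, fp=10, fn=15, tn=185))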
Poor reporting of diagnostic accuracy studies impedes an objective appraisal of the clinical performance of diagnostic tests. The Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement, first published in 2003, aims to improve the reporting quality of such studies. We investigated to what extent published diagnostic accuracy studies adhere to the 25-item STARD checklist, whether reporting quality has improved since STARD's launch, and whether any factors are associated with adherence. We performed a systematic review and searched MEDLINE, EMBASE and the Methodology Register of the Cochrane Library for studies that primarily aimed to examine the reporting quality of articles on diagnostic accuracy studies in humans by evaluating adherence to STARD. Study selection was performed in duplicate; data were extracted by one author and verified by the second author. We included 16 studies, analysing 1496 articles in total. Three studies investigated adherence in a general sample of diagnostic accuracy studies; the others did so in a specific field of research. The overall mean number of items reported varied from 9.1 to 14.3 across the 13 evaluations that assessed all 25 STARD items. Six studies quantitatively compared post-STARD with pre-STARD articles. Combining these results in a random-effects meta-analysis revealed a modest but significant increase in adherence after STARD's introduction (mean difference 1.41 items, 95% CI 0.65 to 2.18). The reporting quality of diagnostic accuracy studies was consistently moderate, at least until the mid-2000s. Our results suggest a small improvement in the years after the introduction of STARD. Adherence to STARD should be further promoted among researchers, editors and peer reviewers.
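As a rough illustration of how a pooled mean difference and its 95% CI are obtained in a random-effects meta-analysis, here is a minimal sketch using the DerSimonian-Laird estimator. The choice of estimator and all input numbers are assumptions for illustration; they are not the review's actual data or method.

import math

def dersimonian_laird(effects, variances):
    """Pool per-study mean differences with a DerSimonian-Laird
    random-effects model; returns (pooled estimate, 95% CI)."""
    w = [1.0 / v for v in variances]               # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study mean differences in items reported (post- minus
# pre-STARD) and their variances; not the six studies analysed above.
effects = [0.8, 1.2, 2.0, 1.5, 0.9, 1.7]
variances = [0.30, 0.25, 0.40, 0.35, 0.20, 0.45]
print(dersimonian_laird(effects, variances))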