      Online learning developments in undergraduate medical education in response to the COVID-19 pandemic: A BEME systematic review: BEME Guide No. 69


Most cited references (91)


          ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

          Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.

            Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement because it fails to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. What level of kappa should be considered acceptable for health research is also questioned: Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
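The two quantities described above can be computed directly from paired rater scores. A minimal Python sketch (not from the cited article; the function names and example ratings are illustrative): percent agreement is the fraction of identical scores, and Cohen's kappa corrects that observed agreement for the agreement expected by chance from each rater's marginal score frequencies.

```python
from collections import Counter

def percent_agreement(r1, r2):
    # Fraction of items on which both raters assigned the same score.
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # Observed agreement.
    po = percent_agreement(r1, r2)
    n = len(r1)
    # Expected chance agreement from each rater's marginal frequencies:
    # for each category, the probability both raters pick it independently.
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    # Kappa: agreement beyond chance, scaled to the maximum possible
    # agreement beyond chance; ranges from -1 to +1.
    return (po - pe) / (1 - pe)

# Illustrative binary ratings from two hypothetical raters.
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
rater2 = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]

print(percent_agreement(rater1, rater2))  # 0.8
print(round(cohens_kappa(rater1, rater2), 3))  # 0.583
```

Note how kappa (about 0.58) is lower than raw percent agreement (0.80), illustrating the abstract's point that percent agreement overstates reliability by crediting agreements that would occur by chance.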

              The answer is 17 years, what is the question: understanding time lags in translational research

This study aimed to review the literature describing and quantifying time lags in the health research translation process. Papers were included in the review if they quantified time lags in the development of health interventions. The study identified 23 papers. Few were comparable, as different studies use different measures, of different things, at different time points. We concluded that the current state of knowledge of time lags is of limited use to those responsible for R&D and knowledge transfer, who face difficulties in knowing what they should or can do to reduce time lags. This effectively 'blindfolds' investment decisions and risks wasting effort. The study concludes that understanding lags first requires agreeing on models, definitions and measures which can be applied in practice. A second task would be to develop a process by which to gather these data.

                Author and article information

Journal: Medical Teacher
Publisher: Informa UK Limited
ISSN: 0142-159X (print); 1466-187X (electronic)
Published online: October 28 2021
Published in issue: February 01 2022
Volume 44, Issue 2, Pages 109-129
                Affiliations
                [1 ]Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
                [2 ]Department of Pediatrics, Texas Children’s Hospital and Baylor College of Medicine, Houston, TX, USA
                [3 ]Family Medicine and Public Health, University of California San Diego School of Medicine, La Jolla, CA, USA
                [4 ]RCSI University of Medicine and Health Sciences, Dublin, Ireland
                [5 ]McGovern Medical School, University of Texas Health Science Center, Houston, TX, USA
                [6 ]School of Medicine, University of Leicester, Leicester, UK
                [7 ]Blackpool Victoria Hospital, Blackpool, UK
                [8 ]School of Medicine, University of Central Lancashire, Preston, UK
Article
DOI: 10.1080/0142159X.2021.1992373
PMID: 34709949
                © 2022
