      Revised standards for statistical evidence

      Proceedings of the National Academy of Sciences

          Abstract

          Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25–50:1, and to 100–200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
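
The correspondence described in the abstract can be made concrete for the simplest case, a one-sided z-test, where the evidence threshold γ that gives a uniformly most powerful Bayesian test the same rejection region as a size-α classical test reduces to γ = exp(z_α²/2), equivalently α = 1 − Φ(√(2 ln γ)). The Python sketch below assumes this z-test form of the correspondence (the function names are illustrative, not from the paper) and reproduces the thresholds quoted above: α = 0.005 maps to γ ≈ 28:1 and α = 0.001 to γ ≈ 119:1, inside the 25–50:1 and 100–200:1 ranges.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def evidence_threshold(alpha: float) -> float:
    """Bayes-factor threshold gamma whose UMPBT rejection region matches a
    one-sided z-test of size alpha (assumed form: gamma = exp(z_alpha**2 / 2))."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return exp(z_alpha ** 2 / 2)

def significance_level(gamma: float) -> float:
    """Classical test size alpha matching evidence threshold gamma:
    alpha = 1 - Phi(sqrt(2 * ln(gamma)))."""
    return 1 - NormalDist().cdf(sqrt(2 * log(gamma)))

if __name__ == "__main__":
    for alpha in (0.05, 0.005, 0.001):
        print(f"alpha = {alpha:<6} -> gamma ~ {evidence_threshold(alpha):5.1f}:1")
    for gamma in (25, 50, 100, 200):
        print(f"gamma = {gamma:>3}:1 -> alpha ~ {significance_level(gamma):.4f}")
```

Under the same assumed correspondence, the conventional α = 0.05 maps to odds of only about 4:1, which is the sense in which the abstract describes common significance tests as conducted at unjustifiably high levels of significance.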

          Most cited references (5)

          Testing Precise Hypotheses

            What is the probability of replicating a statistically significant effect?

            If an initial experiment produces a statistically significant effect, what is the probability that this effect will be replicated in a follow-up experiment? I argue that this seemingly fundamental question can be interpreted in two very different ways and that its answer is, in practice, virtually unknowable under either interpretation. Although the data from an initial experiment can be used to estimate one type of replication probability, this estimate will rarely be precise enough to be of any use. The other type of replication probability is also unknowable, because it depends on unknown aspects of the research context. Thus, although it would be nice to know the probability of replicating a significant effect, researchers must accept the fact that they generally cannot determine this information, whichever type of replication probability they seek.
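
As a hypothetical illustration of the first kind of estimate the abstract mentions (it is not Miller's own calculation), one can treat the observed standardized effect as if it were the true effect and compute the power of an identical one-sided replication; evaluating the same formula across the plausible range for the true effect shows how imprecise the resulting estimate is.

```python
from statistics import NormalDist

def naive_replication_probability(z_obs: float, alpha: float = 0.05) -> float:
    """Naive 'aggregate' replication estimate: treat the observed z-score as the
    true standardized effect and return the power of an identical one-sided
    replication at level alpha, i.e. Phi(z_obs - z_alpha)."""
    nd = NormalDist()
    return nd.cdf(z_obs - nd.inv_cdf(1 - alpha))

if __name__ == "__main__":
    # A just-significant initial result (z = 1.96, two-sided p = 0.05)
    # yields an estimate of roughly 0.62 ...
    print(round(naive_replication_probability(1.96), 2))
    # ... but true-effect values well within the initial estimate's
    # uncertainty give answers anywhere from ~0.13 to ~0.96:
    for z_true in (0.5, 1.96, 3.4):
        print(z_true, round(naive_replication_probability(z_true), 2))
```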

              Evidence that publication bias contaminated studies relating social class and unethical behavior.

                Author and article information

                 Journal: Proceedings of the National Academy of Sciences
                 ISSN: 0027-8424 (print); 1091-6490 (electronic)
                 Published online: November 11, 2013
                 Issue date: November 26, 2013
                 Volume 110, Issue 48, Pages 19313-19317
                 DOI: 10.1073/pnas.1313476110
                 PMC: PMC3845140
                 PMID: 24218581
                 © 2013
