
      Eine Einführung in die Plausible-Values-Technik für die psychologische Forschung

      research-article


          Abstract


          An Introduction to the Plausible Value Technique for Psychological Research

Abstract. In psychological research, the assessment of most constructs is affected by measurement error. Measurement error results in biased estimates of population parameters and their standard errors. In the past few decades, in the area of large-scale assessment studies, the plausible values technique has been established as a procedure for correcting relationships between latent variables and covariates. The present article introduces this complex statistical technique using a simple example from classical test theory. It shows that alternative procedures for estimating person parameters result in biased estimates of relationships at the population level. A simulation study was conducted to demonstrate that these findings also hold for an item response model in the case of dichotomous indicators. The results highlight that plausible values should not be used for estimating individual person parameters and are not appropriate for individual psychological assessment. Finally, we discuss methodological challenges in the application of the plausible value technique and the potential of this technique for psychological research.
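The correction the abstract describes can be illustrated with a minimal simulation in the classical test theory setting: regressing a criterion on an error-prone observed score attenuates the latent slope, while plausible values drawn from the posterior of the latent variable given the score and the criterion recover it. This is only a sketch under simplifying assumptions (normal variables, measurement and population parameters treated as known, whereas in practice they are estimated, e.g., by marginal maximum likelihood); all names are illustrative and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b = 200_000, 0.5          # sample size and true latent regression slope
s2_x, s2_y = 1.0, 1.0        # measurement-error and outcome residual variances

theta = rng.normal(0.0, 1.0, n)                      # latent variable, Var = 1
y = b * theta + rng.normal(0.0, np.sqrt(s2_y), n)    # observed criterion
x = theta + rng.normal(0.0, np.sqrt(s2_x), n)        # fallible observed score

def slope(pred, crit):
    """OLS slope of crit regressed on pred."""
    return np.cov(pred, crit)[0, 1] / np.var(pred)

# Using the observed score attenuates the slope by the reliability 1/(1 + s2_x).
naive_slope = slope(x, y)                            # ≈ b / (1 + s2_x) = 0.25

# Plausible values: draws from the posterior of theta given BOTH x and y
# (conditioning on the covariate/criterion is essential to the technique).
precision = 1.0 + 1.0 / s2_x + b**2 / s2_y
post_mean = (x / s2_x + b * y / s2_y) / precision
post_sd = np.sqrt(1.0 / precision)

M = 10                                               # number of plausible values
pv_slopes = [slope(post_mean + rng.normal(0.0, post_sd, n), y) for _ in range(M)]
pooled_pv_slope = np.mean(pv_slopes)                 # ≈ b = 0.5, bias corrected
```

Note that averaging the analysis over several plausible values is what restores the population-level relationship; any single posterior draw is a poor estimate of an individual's ability, which is why the article stresses that plausible values are not suited to individual diagnosis.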

          Related collections

Most cited references (56)


mice: Multivariate Imputation by Chained Equations in R


            Missing data: our view of the state of the art.

            Statistical procedures for missing data have vastly improved, yet misconception and unsound practice still abound. The authors frame the missing-data problem, review methods, offer advice, and raise issues that remain unresolved. They clear up common misunderstandings regarding the missing at random (MAR) concept. They summarize the evidence against older procedures and, with few exceptions, discourage their use. They present, in both technical and practical language, 2 general approaches that come highly recommended: maximum likelihood (ML) and Bayesian multiple imputation (MI). Newer developments are discussed, including some for dealing with missing data that are not MAR. Although not yet in the mainstream, these procedures may eventually extend the ML and MI methods that currently represent the state of the art.
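The multiple imputation (MI) approach recommended in that abstract can be sketched as follows: fill in each missing value M times from a model given the observed data, analyze each completed data set, and combine the results with Rubin's pooling rules. The sketch below uses simple (improper) regression imputation for brevity; proper MI would also draw the regression parameters from their posterior. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 0.6 * x + rng.normal(0.0, 0.8, n)    # true mean of y is 1.0

# MAR missingness: y is more likely to be missing when x is large.
miss = (x > 0) & (rng.random(n) < 0.5)
y_obs = np.where(miss, np.nan, y)

# The complete-case mean is biased because missingness depends on x.
cc_mean = np.nanmean(y_obs)

# Regression imputation model fitted to the observed cases.
obs = ~miss
beta1, beta0 = np.polyfit(x[obs], y[obs], 1)
resid_sd = np.std(y[obs] - (beta0 + beta1 * x[obs]))

M = 20
means, variances = [], []
for _ in range(M):
    y_imp = y_obs.copy()
    y_imp[miss] = beta0 + beta1 * x[miss] + rng.normal(0.0, resid_sd, miss.sum())
    means.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / n)    # within-imputation variance of the mean

# Rubin's rules: pooled estimate, and total variance W + (1 + 1/M) * B.
pooled_mean = np.mean(means)
W, B = np.mean(variances), np.var(means, ddof=1)
total_var = W + (1 + 1 / M) * B
```

The between-imputation component B is what propagates the uncertainty due to the missing values into the pooled standard error, the same mechanism by which plausible values propagate measurement uncertainty.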

              Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety.

              Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb. This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs. Across a series of simulations, we systematically varied key model properties, including number of indicators and factors, magnitude of factor loadings and path coefficients, and amount of missing data. We investigated how changes in these parameters affected sample size requirements with respect to statistical power, bias in the parameter estimates, and overall solution propriety. Results revealed a range of sample size requirements (i.e., from 30 to 460 cases), meaningful patterns of association between parameters and sample size, and highlight the limitations of commonly cited rules-of-thumb. The broad "lessons learned" for determining SEM sample size requirements are discussed.
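The Monte Carlo logic of such sample-size studies can be sketched compactly: simulate data from a known model at several sample sizes, refit the analysis to each replication, and record how often the effect is detected. For brevity this sketch uses a single correlation in place of a full SEM, with a normal approximation to the critical value; the effect size, sample sizes, and replication count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
true_r, reps = 0.2, 2_000      # population correlation and Monte Carlo replications

def power_at(n):
    """Fraction of replications in which the effect is detected at alpha = .05."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
        r = np.corrcoef(x, y)[0, 1]
        t = r * np.sqrt((n - 2) / (1 - r**2))  # t statistic for the correlation
        hits += abs(t) > 1.96                  # normal approximation to the critical value
    return hits / reps

powers = {n: power_at(n) for n in (30, 100, 300)}  # power rises with n
```

The same loop generalizes to SEM by replacing the data-generating step with draws from a population covariance structure and the test with a fit of the hypothesized model, which is essentially the design the cited study varies across model properties.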

                Author and article information

                Contributors
Journal
Diagnostica (dia)
Hogrefe Verlag, Göttingen
ISSN 0012-1924, 2190-622X
2017, Volume 63, Issue 3, pp. 193–205
                Affiliations
[1] Leibniz-Institut für Pädagogik der Naturwissenschaften und Mathematik, Kiel
[2] Zentrum für internationale Bildungsvergleichsstudien (ZIB), München
Author notes
Prof. Dr. Oliver Lüdtke, Dr. Alexander Robitzsch, Leibniz-Institut für Pädagogik der Naturwissenschaften und Mathematik, Olshausenstraße 62, 24118 Kiel, E-mail: oluedtke@ipn.uni-kiel.de
Prof. Dr. Oliver Lüdtke, Dr. Alexander Robitzsch, Zentrum für internationale Bildungsvergleichsstudien (ZIB), Arcisstraße 21, 80333 München
                Article
                dia_63_3_193
                10.1026/0012-1924/a000175
                62b592d3-6551-4678-89f1-2ec6a8d7d5d5
Copyright © 2017
                History
                Categories
Originalarbeit (Original Article)

                Psychology,Clinical Psychology & Psychiatry
Plausible Values,missing data,reliability,latent variables,large-scale assessment,plausible values,Reliabilität,Missing Data,latente Variablen,Large-Scale-Assessment

