      • Record: found
      • Abstract: found
      • Article: found
      Is Open Access

      Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research

      research-article
      PLoS ONE
      Public Library of Science


          Abstract

          Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiments differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, with participants recruited through AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web browser, others revealed disparities between laboratory and online results. A number of important lessons encountered in the course of these replications should be of value to other researchers.
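The timing concern is concrete: in a browser, both stimulus onset and response capture pass through layers the experimenter does not control. A minimal sketch (not from the paper; names are illustrative) of how a browser-based trial commonly records reaction times with the standard `performance.now()` API:

```javascript
// Illustrative sketch of per-trial reaction-time recording in a browser.
// performance.now() returns a high-resolution timestamp in milliseconds.
function makeTrial() {
  let stimulusOnset = null;
  return {
    showStimulus() {
      // In a real task this call would also draw the stimulus; the true
      // on-screen onset still depends on the display refresh cycle, not
      // just on when this timestamp is taken.
      stimulusOnset = performance.now();
    },
    recordResponse() {
      // Called from a keypress/click handler; returns RT in milliseconds.
      if (stimulusOnset === null) throw new Error("stimulus not shown");
      return performance.now() - stimulusOnset;
    },
  };
}
```

Even with high-resolution timestamps, the recorded onset can lag the true screen onset by up to one display refresh (about 16 ms at 60 Hz), which is one reason to check whether qualitative effects survive online rather than trusting absolute latencies.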


          Most cited references (26)

          • Record: found
          • Abstract: found
          • Article: not found

          Conducting behavioral research on Amazon's Mechanical Turk.

          Amazon's Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.
            • Record: found
            • Abstract: not found
            • Article: not found

            Temporary suppression of visual processing in an RSVP task: An attentional blink?

              • Record: found
              • Abstract: found
              • Article: not found

              The influence of irrelevant location information on performance: A review of the Simon and spatial Stroop effects.

              The purpose of this paper is to investigate the effect of irrelevant location information on performance of visual choice-reaction tasks. We review empirical findings and theoretical explanations from two domains, those of the Simon effect and the spatial Stroop effect, in which stimulus location has been shown to affect reaction time when irrelevant to the task. We then integrate the findings and explanations from the two domains to clarify how and why stimulus location influences performance even when it is uninformative to the correct response. Factors that influence the processing of irrelevant location information include response modality, relative timing with respect to the relevant information, spatial coding, and allocation of attention. The most promising accounts are offered by models in which response selection is a function of (1) strength of association of the irrelevant stimulus information with the response and (2) temporal overlap of the resulting response activation with that produced by the relevant stimulus information.

                Author and article information

                Contributors
                Role: Editor
                Journal
                PLoS ONE
                Public Library of Science (San Francisco, USA)
                ISSN: 1932-6203
                Published: 13 March 2013
                Volume 8, Issue 3: e57410
                Affiliations
                [1 ]Department of Psychology, Brooklyn College of CUNY, Brooklyn, New York, United States of America
                [2 ]Department of Psychology, New York University, New York, New York, United States of America
                University College London, United Kingdom
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                Conceived and designed the experiments: MC JM TG. Performed the experiments: MC JM TG. Analyzed the data: MC JM TG. Contributed reagents/materials/analysis tools: MC JM TG. Wrote the paper: MC JM TG.

                Article
                Manuscript ID: PONE-D-12-36621
                DOI: 10.1371/journal.pone.0057410
                PMC: PMC3596391
                PMID: 23516406
                Record ID: 41d9e91b-c33c-414b-a77c-3f8a16946889
                Copyright © 2013

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 20 November 2012
                Accepted: 21 January 2013
                Page count
                Pages: 18
                Funding
                The authors have no support or funding to report.
                Categories
                Research Article
                Computer Science
                Computer Applications
                Web-Based Applications
                Medicine
                Mental Health
                Psychology
                Behavior
                Human Performance
                Science Policy
                Research Assessment
                Research Validity
                Reproducibility
                Social and Behavioral Sciences
                Psychology
                Behavior
                Human Performance
                Cognitive Psychology
                Experimental Psychology
                Psychometrics

                Uncategorized
