
      Bot, or not? Comparing three methods for detecting social bots in five political discourses


          Abstract

          Social bots – partially or fully automated accounts on social media platforms – have not only been widely discussed, but have also entered political, media and research agendas. However, bot detection is not an exact science. Quantitative estimates of bot prevalence vary considerably and comparative research is rare. We show that findings on the prevalence and activity of bots on Twitter depend strongly on the methods used to identify automated accounts. We search for bots in political discourses on Twitter, using three different bot detection methods: Botometer, Tweetbotornot and “heavy automation”. We drew a sample of 122,884 unique user Twitter accounts that had produced 263,821 tweets contributing to five political discourses in five Western democracies. While all three bot detection methods classified accounts as bots in all our cases, the comparison shows that the three approaches produce very different results. We discuss why neither manual validation nor triangulation resolves the basic problems, and conclude that social scientists studying the influence of social bots on (political) communication and discourse dynamics should be careful with easy-to-use methods, and consider interdisciplinary research.
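
          As a rough illustration of what such a comparison involves in practice, the sketch below scores a single account with the public botometer Python client and with a simple high-volume ("heavy automation") heuristic. It is not code from the article: the credentials are placeholders, the handle is hypothetical, and both the CAP cutoff and the 50-tweets-per-day rule are illustrative assumptions rather than thresholds used by the authors.

          # Minimal sketch (assumptions noted above): score an account with
          # Botometer and with a simple "heavy automation" heuristic.
          import botometer

          rapidapi_key = "YOUR_RAPIDAPI_KEY"  # placeholder credential
          twitter_app_auth = {
              "consumer_key": "YOUR_CONSUMER_KEY",                 # placeholder
              "consumer_secret": "YOUR_CONSUMER_SECRET",           # placeholder
              "access_token": "YOUR_ACCESS_TOKEN",                 # placeholder
              "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",   # placeholder
          }

          bom = botometer.Botometer(
              wait_on_ratelimit=True,
              rapidapi_key=rapidapi_key,
              **twitter_app_auth,
          )

          def botometer_says_bot(screen_name, cap_threshold=0.5):
              """Flag an account if its Complete Automation Probability (CAP)
              exceeds cap_threshold (an assumed cutoff, not the paper's)."""
              result = bom.check_account(screen_name)
              return result["cap"]["universal"] >= cap_threshold

          def heavy_automation_says_bot(tweet_count, account_age_days, per_day=50):
              """Flag an account as 'heavily automated' if it averages more than
              per_day tweets per day over its lifetime (illustrative rule)."""
              if account_age_days <= 0:
                  return False
              return tweet_count / account_age_days > per_day

          if __name__ == "__main__":
              print(botometer_says_bot("@some_account"))     # hypothetical handle
              print(heavy_automation_says_bot(120000, 900))  # ~133 tweets/day -> True

          Tweetbotornot, the third method compared in the article, is distributed as an R package, so a like-for-like replication of all three approaches would have to span both languages.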


                Author and article information

                Journal
                Big Data & Society
                SAGE Publications
                ISSN: 2053-9517
                July 2021 (published online August 23 2021)
                Volume 8, Issue 2: 205395172110335
                Affiliations
                [1 ]Weizenbaum Institute for the Networked Society, Freie Universität Berlin, Germany
                [2 ]Digital Media Research Centre, Queensland University of Technology, Australia; gfs.bern, Bern, Switzerland (present)
                Article
                DOI: 10.1177/20539517211033566
                © 2021

                License: https://creativecommons.org/licenses/by-nc/4.0/

