Open Access

      Causally estimating the effect of YouTube’s recommender system using counterfactual bots

          Abstract

In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals—what a user would have viewed in the absence of algorithmic recommendations—and hence cannot disentangle the effects of the algorithm from a user’s intentions. Here we propose a method that we call “counterfactual bots” to causally estimate the role of algorithmic recommendations in the consumption of highly partisan content on YouTube. By comparing bots that replicate real users’ consumption patterns with “counterfactual” bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, where the effect is most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube’s sidebar recommender “forgets” their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role.
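
The comparison at the heart of this design can be pictured as a small simulation: one bot replays a real user's watch sequence, while a paired counterfactual bot starts from the same seed history and then always follows the recommender. The sketch below illustrates that comparison logic only; `get_recommendations` and `partisanship` are hypothetical toy stand-ins, not the authors' implementation or any YouTube API.

```python
# Minimal toy sketch of the "counterfactual bot" comparison. All names and
# scores here are hypothetical stand-ins, not the paper's code or data.
from statistics import mean

def get_recommendations(video_id):
    # Toy stand-in for scraping the sidebar; returns ranked video ids.
    return [f"{video_id}>r{k}" for k in range(1, 4)]

def partisanship(video_id):
    # Toy stand-in for a deterministic video-level slant score in [0, 1].
    return (sum(map(ord, video_id)) % 101) / 100

def recommender_bot(seed_history, n_steps):
    """Counterfactual bot: after replaying the shared seed history,
    always watch the top sidebar recommendation of the previous video."""
    path = list(seed_history)
    for _ in range(n_steps):
        path.append(get_recommendations(path[-1])[0])
    return path

def estimated_effect(watch_history, seed_len=10):
    """Mean partisanship of the user's actual continuation minus that of a
    recommender-driven counterfactual continuation of equal length."""
    seed, actual_rest = watch_history[:seed_len], watch_history[seed_len:]
    cf_rest = recommender_bot(seed, len(actual_rest))[seed_len:]
    return mean(map(partisanship, actual_rest)) - mean(map(partisanship, cf_rest))

# A positive value means the user's own choices were more partisan than what
# following the recommender alone would have produced from the same seed.
history = [f"v{k}" for k in range(30)]
print(estimated_effect(history))
```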

Most cited references (17)

          Deep Neural Networks for YouTube Recommendations

            Auditing radicalization pathways on YouTube

              Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube

Search engines are the primary gateways of information. Yet, they do not take into account the credibility of search results. There is a growing concern that YouTube, the second largest search engine and the most popular video-sharing platform, has been promoting and recommending misinformative content for certain search topics. In this study, we audit YouTube to verify those claims. Our audit experiments investigate whether personalization (based on age, gender, geolocation, or watch history) contributes to amplifying misinformation. After shortlisting five popular topics known to contain misinformative content and compiling associated search queries representing them, we conduct two sets of audits: Search and Watch misinformation audits. Our audits result in a dataset of more than 56K videos compiled to link stance (whether promoting misinformation or not) with the personalization attribute audited. The videos correspond to three major YouTube components: search results, Up-Next recommendations, and Top 5 recommendations. We find that demographics such as gender, age, and geolocation do not have a significant effect on the amount of misinformation in search results returned to users with brand-new accounts. On the other hand, once a user develops a watch history, these attributes do affect the extent of misinformation recommended to them. Further analyses reveal a filter bubble effect in both the Top 5 and Up-Next recommendations for all topics except vaccine controversies: for the affected topics, watching videos that promote misinformation leads to more misinformative video recommendations. In conclusion, YouTube still has a long way to go to mitigate misinformation on its platform.
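
The filter-bubble check described in this abstract reduces to a conditional tally: compare the stance of recommended videos given the stance of the video just watched. Below is a minimal sketch under assumed stance labels and log format; neither reflects the study's actual data schema.

```python
from collections import defaultdict

# Hypothetical sketch of a filter-bubble tally. The stance labels
# ("misinfo", "debunk", "neutral") and the log format are assumptions.
def filter_bubble_rates(watch_log):
    """For each stance of the watched video, return the fraction of
    misinformative videos among its recommendations."""
    misinfo = defaultdict(int)
    total = defaultdict(int)
    for watched_stance, rec_stances in watch_log:
        for stance in rec_stances:
            total[watched_stance] += 1
            misinfo[watched_stance] += (stance == "misinfo")
    return {k: misinfo[k] / total[k] for k in total}

# Toy log: a filter bubble shows up as a higher misinfo fraction after
# watching misinformative videos than after neutral ones.
log = [("misinfo", ["misinfo", "misinfo", "neutral"]),
       ("neutral", ["neutral", "debunk", "misinfo"])]
print(filter_bubble_rates(log))  # misinfo rate ~0.67 vs ~0.33 in this toy data
```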

                Author and article information

Journal
Proceedings of the National Academy of Sciences (Proc. Natl. Acad. Sci. U.S.A.)
ISSN: 0027-8424 (print), 1091-6490 (electronic)
Published: February 20, 2024 (first published February 13, 2024)
Volume 121, Issue 8
                Affiliations
[1] Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
[2] Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104
[3] Yale Institute for Network Science, Yale University, New Haven, CT 06511
[4] Heinz College of Information Systems and Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213
[5] School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, 1015 Ecublens, Switzerland
[6] Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA 19104
Article
DOI: 10.1073/pnas.2313377121
© 2024

License: CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/)
