      Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale


          Significance

We develop an AI chat assistant that makes real-time, evidence-based suggestions for messages in divisive online political conversations. In a randomized controlled trial, we show that when one participant in a conversation had access to this assistant, it increased their partner’s reported quality of conversation and both participants’ willingness to grant political opponents space to express and advocate their views in the public sphere. Participants had the ability to accept, modify, or ignore the AI chat assistant’s recommendations. Notably, participants’ policy positions were unchanged by the intervention. Though many are rightly concerned about the role of AI in sowing social division, our findings suggest it can do the opposite: improve political conversations without manipulating participants’ views.

          Abstract

          Political discourse is the soul of democracy, but misunderstanding and conflict can fester in divisive conversations. The widespread shift to online discourse exacerbates many of these problems and corrodes the capacity of diverse societies to cooperate in solving social problems. Scholars and civil society groups promote interventions that make conversations less divisive or more productive, but scaling these efforts to online discourse is challenging. We conduct a large-scale experiment that demonstrates how online conversations about divisive topics can be improved with AI tools. Specifically, we employ a large language model to make real-time, evidence-based recommendations intended to improve participants’ perception of feeling understood. These interventions improve reported conversation quality, promote democratic reciprocity, and improve the tone, without systematically changing the content of the conversation or moving people’s policy attitudes.
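          To make the design concrete, here is a minimal sketch, assuming a generic large language model behind a hypothetical call_llm function, of how an assistant could propose reworded replies in real time while leaving the decision to accept, edit, or ignore them with the participant. The specific rephrasing strategies named here are illustrative assumptions, not a description of the authors' exact prompts.

          # Illustrative sketch only, not the authors' implementation. A hypothetical
          # `call_llm` stands in for whatever language-model endpoint is used.

          REPHRASE_STRATEGIES = ["restatement", "validation", "politeness"]  # assumed examples

          def build_prompt(history: list[str], draft: str, strategy: str) -> str:
              """Ask the model to reword a drafted reply so the partner feels understood."""
              transcript = "\n".join(history)
              return (
                  "You are assisting one participant in a political conversation.\n"
                  f"Conversation so far:\n{transcript}\n\n"
                  f"The participant drafted this reply:\n{draft}\n\n"
                  f"Rewrite the draft using the '{strategy}' technique so their partner "
                  "feels understood, without changing the participant's policy position."
              )

          def suggest_rewordings(history: list[str], draft: str, call_llm) -> dict[str, str]:
              """Return one suggestion per strategy; the user may accept, edit, or ignore each."""
              return {s: call_llm(build_prompt(history, draft, s)) for s in REPHRASE_STRATEGIES}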

          Most cited references (90)

          Dissecting racial bias in an algorithm used to manage the health of populations

          Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
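            The mechanism described above can be seen in a small synthetic simulation (purely illustrative, not the study's data or code): if a risk score is trained to predict cost, and one group accrues less cost at the same level of illness because of unequal access, that group ends up sicker at any given score threshold.

            # Synthetic illustration of label-choice bias: score = expected cost, not illness.
            import numpy as np

            rng = np.random.default_rng(0)
            n = 100_000
            group = rng.integers(0, 2, n)                 # 0 = more access to care, 1 = less
            illness = rng.gamma(shape=2.0, scale=1.0, size=n)
            access = np.where(group == 0, 1.0, 0.6)       # assumption: less access -> less spending
            cost = illness * access                       # observed cost, the training label

            score = cost                                  # a cost-trained model ranks by expected cost
            cutoff = np.quantile(score, 0.97)             # flag the highest-scoring patients for extra help
            flagged = score >= cutoff

            for g in (0, 1):
                mean_illness = illness[flagged & (group == g)].mean()
                print(f"group {g}: mean illness among flagged patients = {mean_illness:.2f}")
            # The low-access group is markedly sicker at the same score cutoff.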

            Language Models are Few-Shot Learners

Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
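              For readers unfamiliar with the few-shot setting, the sketch below (an illustration under stated assumptions, not the paper's code) shows what "tasks and demonstrations specified purely via text interaction" means in practice: the demonstrations live in the prompt, the hypothetical complete function stands for any text-completion model, and no gradient updates occur.

              # Few-shot prompting sketch: "training" is just demonstrations placed in the prompt.

              def few_shot_prompt(instruction: str, demos: list[tuple[str, str]], query: str) -> str:
                  """Concatenate an instruction, worked examples, and the new query."""
                  parts = [instruction]
                  for x, y in demos:
                      parts.append(f"Input: {x}\nOutput: {y}")
                  parts.append(f"Input: {query}\nOutput:")
                  return "\n\n".join(parts)

              # Word unscrambling, one of the on-the-fly tasks mentioned above.
              demos = [("cheotolca", "chocolate"), ("ppuyp", "puppy")]
              prompt = few_shot_prompt("Unscramble the letters into an English word.", demos, "pleap")
              # answer = complete(prompt)   # hypothetical call to a text-completion model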

              Semantics derived automatically from language corpora contain human-like biases

              Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
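                The measurement behind these results can be sketched as follows, assuming the cosine-similarity (word-embedding association test) formulation; vec is a hypothetical mapping from words to embedding vectors, for example loaded from pretrained GloVe vectors.

                # WEAT-style association sketch (illustrative, under the assumptions stated above).
                import numpy as np

                def cosine(u, v):
                    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

                def association(w, A, B, vec):
                    """Mean similarity of word w to attribute set A minus to attribute set B."""
                    return (np.mean([cosine(vec[w], vec[a]) for a in A])
                            - np.mean([cosine(vec[w], vec[b]) for b in B]))

                def weat_effect_size(X, Y, A, B, vec):
                    """Positive values: X words lean toward attributes A, Y words toward B."""
                    x_assoc = [association(x, A, B, vec) for x in X]
                    y_assoc = [association(y, A, B, vec) for y in Y]
                    pooled_sd = np.std(x_assoc + y_assoc, ddof=1)
                    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_sd

                # Example targets/attributes: X = flower words, Y = insect words,
                # A = pleasant words, B = unpleasant words.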

                Author and article information

                Journal
                Proceedings of the National Academy of Sciences of the United States of America (Proc Natl Acad Sci U S A; PNAS)
                Publisher: National Academy of Sciences
                ISSN: 0027-8424 (print); 1091-6490 (electronic)
                Published online: 3 October 2023; issue date: 10 October 2023
                Volume 120, Issue 41, Article e2311627120
                Affiliations
                a Department of Political Science, Brigham Young University, Provo, UT 84602
                b Department of Sociology, Political Science, and Public Policy, Duke University, Durham, NC 27708
                c Department of Computer Science, Brigham Young University, Provo, UT 84602
                d Department of Computer Science, University of Washington, Seattle, WA 98195
                Author notes
                2 To whom correspondence may be addressed. Email: lpargyle@byu.edu or christopher.bail@duke.edu.

                Edited by Kathleen Jamieson, University of Pennsylvania, Philadelphia, PA; received July 9, 2023; accepted August 18, 2023

                1 L.P.A., C.B., E.C.B., J.R.G., T.H., C.R., T.S., and D.W. contributed equally to this work.

                Author information
                https://orcid.org/0000-0003-3109-2537
                https://orcid.org/0000-0002-8931-6348
                https://orcid.org/0000-0003-1635-8210
                https://orcid.org/0000-0002-7373-9741
                https://orcid.org/0000-0002-3251-3527
                https://orcid.org/0000-0003-1850-6926
                Article
                DOI: 10.1073/pnas.2311627120
                PMCID: PMC10576030
                PMID: 37788311
                Publisher article ID: 202311627
                Record ID: 22aadf37-af1b-4777-a9d2-d269be4913f0
                Copyright © 2023 the Author(s). Published by PNAS.

                This open access article is distributed under Creative Commons Attribution License 4.0 (CC BY).

                History
                Received: 9 July 2023
                Accepted: 18 August 2023
                Page count
                Pages: 8, Words: 4620
                Funding
                Funded by: National Science Foundation (NSF), FundRef 100000001;
                Award ID: 2141680
                Award Recipients: Lisa P. Argyle, Ethan C. Busby, Joshua R. Gubler, David Wingate
                Funded by: Brigham Young University (BYU), FundRef 100006756;
                Award ID: n/a
                Award Recipient : David Wingate
                Funded by: Duke University (DU), FundRef 100006510;
                Award ID: n/a
                Award Recipient : Christopher Andrew Bail
                Categories
                Research Article
                Social Sciences: Political Sciences
                Physical Sciences: Computer Sciences

                Keywords: democratic deliberation, computational social science, generative AI, political science
