
      Addressing bias in big data and AI for health care: A call for open science


          Summary

          Artificial intelligence (AI) has astonishing potential to assist clinical decision making and to revolutionize the field of health care. A major open challenge that AI will need to address before its integration into clinical routine is that of algorithmic bias. Most AI algorithms need big datasets to learn from, but several groups of the human population have a long history of being absent from, or misrepresented in, existing biomedical datasets. If the training data misrepresent the variability of the population, AI is prone to reinforcing bias, which can lead to fatal outcomes, misdiagnoses, and lack of generalization. Here, we describe the challenges in rendering AI algorithms fairer, and we propose concrete steps for addressing bias using tools from the field of open science.

          The bigger picture

          Bias in the medical field can be dissected along three directions: data-driven, algorithmic, and human. Bias in AI algorithms for health care can have catastrophic consequences by propagating deeply rooted societal biases. This can result in misdiagnosing certain patient groups, such as gender and ethnic minorities, which have a history of being underrepresented in existing datasets, further amplifying inequalities.

          Open science practices can assist in moving toward fairness in AI for health care. These include (1) participant-centered development of AI algorithms and participatory science; (2) responsible data sharing and inclusive data standards to support interoperability; and (3) code sharing, including sharing of AI algorithms that can synthesize underrepresented data to address bias. Future research needs to focus on developing standards for AI in health care that enable transparency and data sharing, while at the same time preserving patients’ privacy.
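          As a minimal illustration of point (3) and of inclusive data practices more broadly, the sketch below audits subgroup representation in a cohort and attaches inverse-frequency sample weights so that underrepresented groups are not drowned out during training. This is a hypothetical Python example; the column name, weighting scheme, and file name are assumptions rather than part of this article, and synthesizing new samples for underrepresented groups, as mentioned above, is a complementary strategy.

```python
import pandas as pd

def audit_and_reweight(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Report subgroup representation and attach inverse-frequency sample weights.

    This is only one possible mitigation; generating synthetic samples for
    underrepresented groups is another, as discussed in the article.
    """
    counts = df[group_col].value_counts()
    print("Subgroup representation:")
    print((counts / len(df)).round(3))

    # Inverse-frequency weights: rare subgroups receive proportionally larger
    # weights so they contribute comparably to the training loss.
    weights = len(df) / (len(counts) * counts)
    out = df.copy()
    out["sample_weight"] = out[group_col].map(weights)
    return out

# Hypothetical usage with an imbalanced demographic column named "ethnicity":
# cohort = audit_and_reweight(pd.read_csv("cohort.csv"), "ethnicity")
# model.fit(X, y, sample_weight=cohort["sample_weight"])
```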

          Abstract

          Artificial intelligence (AI) has astonishing potential to revolutionize health care. A major challenge is that of algorithmic bias. Most AI algorithms need big datasets to learn from, but several groups of the human population are absent from or misrepresented in existing datasets. AI is thus prone to reinforcing bias, which can lead to fatal outcomes and misdiagnoses. Here, we describe challenges in rendering AI algorithms fairer, and we propose concrete steps for addressing bias using open science tools.

          Most cited references (69)


          The weirdest people in the world?

          Behavioral scientists routinely publish broad claims about human psychology and behavior in the world's top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers - often implicitly - assume that either there is little variation across human populations, or that these "standard subjects" are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species - frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior - hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

            Dissecting racial bias in an algorithm used to manage the health of populations

            Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
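            The cost-as-proxy mechanism described in this reference can be reproduced with a toy simulation. The sketch below is a hypothetical Python example, not the study's data or model: two synthetic groups have identical illness burden but unequal spending, and ranking patients by predicted cost then requires members of the under-spent group to be considerably sicker before they are selected for extra help.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two synthetic groups with identical underlying illness burden.
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
illness = rng.gamma(2.0, 1.0, n)   # true health need

# Unequal access to care: group B accrues ~40% less cost for the same illness.
access = np.where(group == 1, 0.6, 1.0)
cost = illness * access + rng.normal(0.0, 0.1, n)

# A "risk score" trained to predict cost is, in this toy setting, cost itself.
risk = cost

# Flag the top 20% by risk score for additional help.
selected = risk >= np.quantile(risk, 0.8)

for g, name in ((0, "A"), (1, "B")):
    mask = group == g
    print(f"group {name}: share selected = {selected[mask].mean():.1%}, "
          f"mean illness among selected = {illness[mask & selected].mean():.2f}")
# At the same cost threshold, far fewer group B members are selected, and those
# who are selected are sicker on average, mirroring the bias described above.
```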

              Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization


                Author and article information

                Journal
                Patterns (N Y), Elsevier
                ISSN: 2666-3899
                Published: 08 October 2021
                Volume 2, Issue 10, Article 100347
                Affiliations
                [1] Institute of Computer Science, University of Bern, Neubrückstrasse 10, 3012 Bern, Switzerland
                [2] Population Health Sciences, Bristol Medical School, University of Bristol, Bristol BS8 1UD, UK
                [3] Institute of Digital Technologies for Personalized Healthcare (MeDiTech), Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, 6962 Lugano, Switzerland
                [4] Sleep Wake Epilepsy Center | NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland
                [5] Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA 94720, USA
                Author notes
                Corresponding author: athina.tz@gmail.com
                Article
                PII: S2666-3899(21)00202-6
                DOI: 10.1016/j.patter.2021.100347
                PMCID: PMC8515002
                PMID: 34693373
                © 2021 The Authors

                This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

                Categories
                Perspective

                artificial intelligence, deep learning, health care, bias, open science, participatory science, data standards
