
      Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare

      research-article


          Abstract

          Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, raising novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood of opioid addiction and misuse (PDMP algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems’ decision-making processes can be captured through the lens of Miranda Fricker’s account of hermeneutical injustice. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an automated hermeneutical appropriation on the part of the ML system. The latter occurs when the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among the stakeholders involved in medical decision-making. Furthermore, and crucially, an automated hermeneutical appropriation can be recognized when physicians are strongly limited in their ability to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper expands the analysis of ethical issues raised by ML systems that are epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate.

          Related collections

          Most cited references (38)


          High-performance medicine: the convergence of human and artificial intelligence

          Eric Topol (2019)
          The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient-doctor relationship or facilitate its erosion remains to be seen.

            A guide to deep learning in healthcare

            Here we present deep-learning techniques for healthcare, centering our discussion on deep learning in computer vision, natural language processing, reinforcement learning, and generalized methods. We describe how these computational techniques can impact a few key areas of medicine and explore how to build end-to-end systems. Our discussion of computer vision focuses largely on medical imaging, and we describe the application of natural language processing to domains such as electronic health record data. Similarly, reinforcement learning is discussed in the context of robotic-assisted surgery, and generalized deep-learning methods for genomics are reviewed.

              Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer

              Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.

                Author and article information

                Contributors
                G.Pozzi@tudelft.nl
                Journal
                Ethics Inf Technol
                Ethics and Information Technology
                Springer Netherlands (Dordrecht)
                1388-1957
                1572-8439
                23 January 2023
                2023
                Volume 25, Issue 1, Article 3
                Affiliations
                GRID grid.5292.c, ISNI 0000 0001 2097 4740, Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands
                Author information
                http://orcid.org/0000-0001-8928-5513
                Article
                9676
                10.1007/s10676-023-09676-z
                9869303
                0fb99f74-a999-4066-a1c8-82a84917e7cf
                © The Author(s) 2023

                Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History: 12 January 2023
                Funding
                Funded by: Horizon2020 (FundRef http://dx.doi.org/10.13039/501100007601)
                Award ID: 871042
                Categories
                OriginalPaper
                Custom metadata
                © Springer Nature B.V. 2023

                Keywords: epistemology and ethics of ML, PDMP, opioid risk score, medical ML, epistemic injustice, hermeneutical injustice, automated hermeneutical appropriation
