
      Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns

Research article


          Abstract

          Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals’ attitudes to potential changes of responsibility and decision-making authority when using ML-CDSS. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees’ reflections are presented under three themes the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and need of (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and its structural and epistemic preconditions to be able to fulfil clinicians’ responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSS.


Most cited references (38)


          Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups.

          Qualitative research explores complex phenomena encountered by clinicians, health care providers, policy makers and consumers. Although partial checklists are available, no consolidated reporting framework exists for any type of qualitative design. To develop a checklist for explicit and comprehensive reporting of qualitative studies (in depth interviews and focus groups). We performed a comprehensive search in Cochrane and Campbell Protocols, Medline, CINAHL, systematic reviews of qualitative studies, author or reviewer guidelines of major medical journals and reference lists of relevant publications for existing checklists used to assess qualitative studies. Seventy-six items from 22 checklists were compiled into a comprehensive list. All items were grouped into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting. Duplicate items and those that were ambiguous, too broadly defined and impractical to assess were removed. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting. The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations.

            Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

Background: Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

Methods: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

Results: Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

Conclusions: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.

              Adapting to Artificial Intelligence


                Author and article information

Journal
Journal of Medical Ethics (J Med Ethics)
Publisher: BMJ Publishing Group (BMA House, Tavistock Square, London, WC1H 9JR)
ISSN: 0306-6800 (print); 1473-4257 (online)
Published online: 22 May 2023; issue date: January 2024
Volume 50, Issue 1, pp. 6–11
Affiliations
[1] Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
[2] Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
[3] Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
[4] Institute of Medical Informatics, RWTH Aachen University, Aachen, Germany
[5] Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
[6] Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
                Author notes
[Correspondence to] Dr Florian Funer, Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Baden-Württemberg, Germany; florian.funer@uni-tuebingen.de
                Author information
                http://orcid.org/0000-0001-9242-0827
                http://orcid.org/0000-0002-1433-658X
                http://orcid.org/0000-0002-7131-5005
                http://orcid.org/0000-0002-2987-2684
Article
Publisher ID: jme-2022-108814
DOI: 10.1136/jme-2022-108814
PMCID: PMC10803986
PMID: 37217277
                © Author(s) (or their employer(s)) 2024. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

                This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

History
Received: 25 November 2022
Accepted: 11 March 2023
Funding
Funded by: Bundesministerium für Bildung und Forschung (FundRef: http://dx.doi.org/10.13039/501100002347)
Award IDs: 01GP1911A-D, 01ZX1912A
                Categories
                Clinical Ethics

Ethics
Keywords: ethics - medical, decision making, ethics, health personnel, information technology

