
      The Adoption of Artificial Intelligence in Health Care and Social Services in Australia: Findings From a Methodologically Innovative National Survey of Values and Attitudes (the AVA-AI Study)

      research-article
Sebastian Isbanner, BBE, MBA, PhD 1 ; Pauline O’Shaughnessy, BActSt, MActSt, PhD 2 ; David Steel, AStat, BSc, MSc, PhD 2 ; Scarlet Wilcock, BA, LLB, GDLP, PhD 3 ; Stacy Carter, BAppSci, MPH, PhD 4
      Journal of Medical Internet Research
      JMIR Publications
      artificial intelligence, surveys and questionnaires, consumer health informatics, social welfare, bioethics, social values


          Abstract

          Background

          Artificial intelligence (AI) for use in health care and social services is rapidly developing, but this has significant ethical, legal, and social implications. Theoretical and conceptual research in AI ethics needs to be complemented with empirical research to understand the values and judgments of members of the public, who will be the ultimate recipients of AI-enabled services.

          Objective

          The aim of the Australian Values and Attitudes on AI (AVA-AI) study was to assess and compare Australians’ general and particular judgments regarding the use of AI, compare Australians’ judgments regarding different health care and social service applications of AI, and determine the attributes of health care and social service AI systems that Australians consider most important.

          Methods

We conducted a survey of the Australian population using an innovative sampling and weighting methodology with 2 sample components: one from an omnibus survey whose sample was selected using scientific probability sampling methods and one from a nonprobability-sampled web-based panel. The web-based panel sample was calibrated to the omnibus survey sample using behavioral, lifestyle, and sociodemographic variables. Univariate and bivariate analyses were performed.
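The calibration step described above can be sketched as a raking (iterative proportional fitting) adjustment, in which panel weights are scaled until the weighted margins of each calibration variable match benchmarks from the probability-sampled reference survey. This is an illustrative sketch, not the AVA-AI study code: the variables, categories, and benchmark totals below are hypothetical, and the study's actual calibration used behavioral, lifestyle, and sociodemographic variables.

```python
# Illustrative raking sketch: adjust panel weights until the weighted
# margin of each calibration variable matches its benchmark total.

def rake(rows, margins, max_iter=50, tol=1e-6):
    """rows: list of dicts mapping variable -> category for each respondent.
    margins: {variable: {category: benchmark_total}}.
    Returns one calibration weight per respondent."""
    weights = [1.0] * len(rows)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, targets in margins.items():
            # Current weighted total in each category of this variable.
            current = {c: 0.0 for c in targets}
            for w, row in zip(weights, rows):
                current[row[var]] += w
            # Scale each respondent's weight so the margin hits the benchmark.
            factors = {c: targets[c] / current[c] for c in targets}
            for i, row in enumerate(rows):
                f = factors[row[var]]
                weights[i] *= f
                max_shift = max(max_shift, abs(f - 1.0))
        if max_shift < tol:  # all margins already match: converged
            break
    return weights

# Hypothetical example: four panel respondents, two calibration variables.
panel = [
    {"age": "18-39", "sex": "f"},
    {"age": "18-39", "sex": "m"},
    {"age": "40+", "sex": "f"},
    {"age": "40+", "sex": "m"},
]
benchmarks = {"age": {"18-39": 60.0, "40+": 40.0},
              "sex": {"f": 50.0, "m": 50.0}}
weights = rake(panel, benchmarks)
```

After raking, weighted tabulations of the panel reproduce the benchmark margins; production calibration software adds weight trimming and convergence diagnostics on top of the same idea.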

          Results

We included weighted responses from 1950 Australians in the web-based panel along with a further 2498 responses from the omnibus survey for a subset of questions. Both weighted samples were sociodemographically well spread. An estimated 60% of Australians support the development of AI in general but, in specific health care scenarios, this diminishes to between 27% and 43% and, for social service scenarios, between 31% and 39%. Although all the ethical and social dimensions of AI presented were rated as important, accuracy was consistently rated the most important, reducing costs the least important, and speed consistently among the lower-rated dimensions. In total, 4 in 5 Australians valued continued human contact and discretion in service provision more than any speed, accuracy, or convenience that AI systems might provide.
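The population estimates above are weighted proportions: each respondent's answer contributes in proportion to their calibration weight rather than counting equally. A minimal sketch, with hypothetical weights and responses:

```python
# Weighted proportion sketch: the estimate is the share of total weight
# held by respondents who answered "support", not a raw respondent count.

def weighted_share(responses):
    """responses: list of (weight, supports) pairs -> weighted proportion."""
    total = sum(w for w, _ in responses)
    supporting = sum(w for w, s in responses if s)
    return supporting / total

# Hypothetical respondents: (calibration weight, supports AI development?)
sample = [(2.0, True), (1.0, False), (1.0, True)]
share = weighted_share(sample)  # 3.0 of 4.0 total weight -> 0.75
```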

          Conclusions

          The ethical and social dimensions of AI systems matter to Australians. Most think AI systems should augment rather than replace humans in the provision of both health care and social services. Although expressing broad support for AI, people made finely tuned judgments about the acceptability of particular AI applications with different potential benefits and downsides. Further qualitative research is needed to understand the reasons underpinning these judgments. The participation of ethicists, social scientists, and the public can help guide AI development and implementation, particularly in sensitive and value-laden domains such as health care and social services.


                Author and article information

Journal
J Med Internet Res (JMIR): Journal of Medical Internet Research
JMIR Publications (Toronto, Canada)
ISSNs: 1439-4456, 1438-8871
22 August 2022; 24(8): e37611
Affiliations
[1] Social Marketing @ Griffith, Griffith Business School, Griffith University, Brisbane, Australia
[2] School of Mathematics and Applied Statistics, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, Australia
[3] Australian Research Council Centre of Excellence for Automated Decision-Making and Society, The University of Sydney Law School, The University of Sydney, Sydney, Australia
[4] Australian Centre for Health Engagement, Evidence and Values, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, Australia
Author notes
Corresponding Author: Stacy Carter, stacyc@uow.edu.au
Author information
https://orcid.org/0000-0001-5842-2407
https://orcid.org/0000-0002-4741-3326
https://orcid.org/0000-0002-3137-9952
https://orcid.org/0000-0002-0011-1363
https://orcid.org/0000-0003-2617-8694
Article
DOI: 10.2196/37611; PMCID: 9446139; PMID: 35994331
                ©Sebastian Isbanner, Pauline O’Shaughnessy, David Steel, Scarlet Wilcock, Stacy Carter. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.08.2022.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

History
28 February 2022; 8 April 2022; 25 May 2022; 19 July 2022
Categories
Original Paper

                Medicine
