Open Access

      The Empowerment of Artificial Intelligence in Post-Digital Organizations: Exploring Human Interactions with Supervisory AI

      Human Technology
      Centre of Sociological Research, NGO



          Abstract

Technology evolves together with humans. Across industrial revolutions, its role has shifted from that of a simple tool used by humans to that of an intelligent decision-maker and teammate. In the post-digital era, where ongoing advances in artificial intelligence are widely visible, the question arises as to what extent technology will be “upgraded” into roles previously filled by human supervisors, thereby replacing people in managerial positions. This text aims to delineate how the organizational role of technology has been transformed across decades and the forms it currently takes within companies, with an eye to the future. We draw on posthuman managerial literature and on known cases of organizations where some form of supervisory artificial intelligence is already in use. The text is conceptual and reflective in nature; it seeks to initiate a discussion on the many challenges that humanity will face in connection with the deployment of empowered posthuman agents in companies.


Most cited references (46)


          User Acceptance of Information Technology: Toward a Unified View


Some Explorations in Initial Interaction and Beyond: Toward a Developmental Theory of Interpersonal Communication


Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err

              Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

                Author and article information

Journal: Human Technology (HT)
Publisher: Centre of Sociological Research, NGO
ISSN: 1795-6889
Published: October 03 2022
Volume: 18, Issue: 2, Pages: 98-121
DOI: 10.14254/1795-6889.2022.18-2.2
Article ID: e291613a-6195-4f42-8434-1dfade864546
Copyright: © 2022
License: CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0)

