Open Access

Towards Intelligent Regulation of Artificial Intelligence

European Journal of Risk Regulation
Cambridge University Press (CUP)


Abstract

Artificial intelligence (AI) is becoming part of our daily lives at a fast pace, offering myriad benefits for society. At the same time, there is concern about the unpredictability and uncontrollability of AI. In response, legislators and scholars call for more transparency and explainability of AI. This article considers what it would mean to require transparency of AI. It advocates looking beyond the opaque concept of AI and focusing on the concrete risks and biases of its underlying technology: machine-learning algorithms. The article discusses the biases that algorithms may produce through the input data, the testing of the algorithm, and the decision model. Any transparency requirement for algorithms should result in explanations of these biases that are both understandable to the prospective recipients and technically feasible for producers. Before asking how much transparency the law should require from algorithms, we should therefore consider whether the explanation that programmers could offer is useful in specific legal contexts.
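The abstract's criterion, explanations that are both understandable to recipients and technically feasible for producers, can be made concrete with a toy example. The sketch below is our illustration, not the article's: the feature names and data are hypothetical. It trains a transparent logistic-regression model on synthetic data and decomposes one individual decision into per-feature contributions, the kind of explanation a producer of a simple model could feasibly report. For an opaque model such as a deep neural network, no comparably direct decomposition exists, which is where the feasibility question becomes acute.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "loan application" data: three numeric features, one of which
# (zip_code_proxy) stands in for a variable that may encode a protected
# attribute, an assumed source of input-data bias. All names are hypothetical.
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 1.2 * X[:, 2] > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, each feature's contribution to a single decision is
# coefficient * feature value (in log-odds), so a per-decision explanation
# can be generated mechanically.
applicant = X[0]
for name, value in zip(["income", "debt", "zip_code_proxy"],
                       model.coef_[0] * applicant):
    print(f"{name:>15}: {value:+.3f} log-odds")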


Most cited references (44)


Critical Questions for Big Data


How the machine ‘thinks’: Understanding opacity in machine learning algorithms


Semantics derived automatically from language corpora contain human-like biases

Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
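The measurement behind these results can be sketched compactly: bias is quantified as a difference in mean cosine similarity between a target word's vector and two sets of attribute words (a WEAT-style association score). The toy code below uses random stand-in vectors rather than real pre-trained embeddings, so its output is illustrative only; in real use one would load embeddings such as GloVe, as the authors did.

import numpy as np

rng = np.random.default_rng(42)
dim = 50
# Stand-in vectors; a real test would load pre-trained word embeddings.
vec = {w: rng.normal(size=dim)
       for w in ["daisy", "ant", "pleasant", "lovely", "awful", "nasty"]}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(target, attr_a, attr_b):
    # s(w, A, B): mean cosine similarity to set A minus mean to set B.
    a = np.mean([cos(vec[target], vec[w]) for w in attr_a])
    b = np.mean([cos(vec[target], vec[w]) for w in attr_b])
    return a - b

pleasant, unpleasant = ["pleasant", "lovely"], ["awful", "nasty"]
print("daisy:", association("daisy", pleasant, unpleasant))
print("ant:  ", association("ant", pleasant, unpleasant))

With real embeddings, flower words score measurably closer to the pleasant attribute set than insect words do, which is the morally neutral bias the abstract mentions.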

Author and article information

Journal
European Journal of Risk Regulation (Eur. J. Risk Regul.)
Cambridge University Press (CUP)
ISSN: 1867-299X (print); 2190-8249 (online)
Published online: 29 April 2019; issue date: March 2019
Volume 10, Issue 1, pp. 41-59
DOI: 10.1017/err.2019.8
© 2019
License: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
