
      Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks

Review article


          Abstract

          Neural networks are successfully used to imitate and model cognitive processes. However, to provide clues about the neurobiological mechanisms enabling human cognition, these models need to mimic the structure and function of real brains. Brain-constrained networks differ from classic neural networks by implementing brain similarities at different scales, ranging from the micro- and mesoscopic levels of neuronal function, local neuronal links and circuit interaction to large-scale anatomical structure and between-area connectivity. This review shows how brain-constrained neural networks can be applied to study in silico the formation of mechanisms for symbol and concept processing and to work towards neurobiological explanations of specifically human cognitive abilities. These include verbal working memory and learning of large vocabularies of symbols, semantic binding carried by specific areas of cortex, attention focusing and modulation driven by symbol type, and the acquisition of concrete and abstract concepts partly influenced by symbols. Neuronal assembly activity in the networks is analyzed to deliver putative mechanistic correlates of higher cognitive processes and to develop candidate explanations founded in established neurobiological principles.
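As a rough illustration of what "brain-constrained" means in practice, here is a minimal toy sketch, not the authors' published architecture: a chain of model "areas" with sparse between-area links, trained with a simple Hebbian (correlation) rule so that repeatedly co-activated units bind into a distributed cell assembly. All names and parameter values (n_areas, units, the activation threshold, the learning rate) are illustrative assumptions.

```python
# Hypothetical sketch of correlation (Hebbian) learning in a small
# multi-"area" network; NOT the model described in the review.
# Areas form a chain (0 -> 1 -> 2 -> 3), loosely mimicking between-area
# cortical connectivity; co-activated units strengthen their links.
import numpy as np

rng = np.random.default_rng(0)

n_areas, units = 4, 50                      # 4 model areas, 50 units each
# sparse random between-area weights (chain connectivity, ~10% density)
W = [rng.random((units, units)) * (rng.random((units, units)) < 0.1)
     for _ in range(n_areas - 1)]

def present(pattern, steps=5, lr=0.01):
    """Drive area 0 with a sensory pattern, propagate activity forward,
    and apply a Hebbian update to each between-area weight matrix."""
    act = [np.zeros(units) for _ in range(n_areas)]
    act[0] = pattern.astype(float)
    for _ in range(steps):
        for a in range(1, n_areas):
            # simple threshold-linear propagation from the previous area
            act[a] = np.clip(W[a - 1] @ act[a - 1] - 0.5, 0.0, 1.0)
        for a in range(n_areas - 1):
            # Hebbian rule: a link grows when pre- and post-units co-fire
            W[a] += lr * np.outer(act[a + 1], act[a])
            np.clip(W[a], 0.0, 1.0, out=W[a])
    return act

# repeatedly present the same "word-form" pattern: correlated activity
# carves out a distributed cell assembly spanning several areas
word_pattern = rng.random(units) < 0.2
for _ in range(30):
    activity = present(word_pattern)
print("active units per area:", [int((a > 0).sum()) for a in activity])
```

With these toy settings, repeated presentation of the same pattern strengthens exactly the links it drives, so activity reaches the deeper areas more reliably over training; the sketch is only meant to picture assembly formation, not to reproduce any result reported in the review.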

          Highlights

          • To yield clues about the brain mechanisms of cognition, neural networks need to be constrained by neurobiology.

          • Brain-constrained networks build discrete circuits for cognitive computations.

          • Semantic circuit formation driven by correlation learning and cortical connectivity explains the emergence of semantic areas and hubs.

          • Feature correlational properties explain neurocognitive differences between proper names and category terms.

          • Feature correlations explain why circuits of concrete/abstract concepts differ, and why the latter require language (see the toy sketch after these highlights).
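The last two highlights rest on one quantitative intuition: features that reliably co-occur across a concept's instances become strongly interlinked under correlation learning, whereas highly variable instances leave only weak direct feature-to-feature links. A toy, hypothetical illustration follows (not the authors' simulations; all names and numbers are assumptions):

```python
# Toy illustration: Hebbian learning on feature co-occurrence.
# "Concrete-like" concept: the same core features recur in every instance.
# "Abstract-like" concept: each instance activates a different feature subset.
import numpy as np

rng = np.random.default_rng(1)
n_features, n_instances, lr = 20, 200, 0.05

def learn(instance_sampler):
    W = np.zeros((n_features, n_features))
    for _ in range(n_instances):
        x = instance_sampler()
        W += lr * np.outer(x, x)        # Hebbian co-activation update
        np.fill_diagonal(W, 0.0)        # no self-connections
    return W

concrete = learn(lambda: (np.arange(n_features) < 6).astype(float))
abstract = learn(lambda: (rng.random(n_features) < 6 / n_features).astype(float))

def top_links(W, k=30):
    """Mean strength of the k strongest feature-to-feature links."""
    return np.sort(W.ravel())[-k:].mean()

print("strongest links, concrete-like concept:", round(top_links(concrete), 2))
print("strongest links, abstract-like concept:", round(top_links(abstract), 2))
```

Under these assumptions the concrete-like concept ends up with a small set of very strong feature links, while the abstract-like one does not; this is one hedged way to picture why an additional binding element, such as the word form, becomes more important for abstract material.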


          Most cited references (300)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
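The sentence about backpropagation is the technical core of that abstract. As a generic, self-contained illustration (not code from the cited paper), here is a tiny two-layer network whose internal parameters are adjusted by propagating error derivatives backwards through the layers:

```python
# Generic backpropagation sketch: a two-layer network trained on XOR.
# Each layer's representation is computed from the previous layer; the
# backward pass says how every weight and bias should change.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass: layer-by-layer representations
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: error derivatives, output layer first
    d_out = (out - y) * out * (1 - out)      # d(MSE)/d(output pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)       # d(MSE)/d(hidden pre-activation)
    # gradient steps on every internal parameter
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0] under these settings
```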

            Attention Is All You Need

            The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
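The mechanism the abstract refers to is scaled dot-product attention, attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy rendering (shapes and variable names follow the usual convention; this is an illustration, not code from the paper):

```python
# Scaled dot-product attention, the operation at the core of the Transformer.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))          # 5 token embeddings of width 16
# self-attention: queries, keys and values all come from the same sequence
out = scaled_dot_product_attention(x, x, x)
print(out.shape)                      # (5, 16)
```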

              Learning representations by back-propagating errors


                Author and article information

                Journal
                Progress in Neurobiology (Prog Neurobiol), Pergamon Press
                ISSN: 0301-0082 (print); 1873-5118 (electronic)
                Published: 1 November 2023 (November 2023 issue)
                Volume: 230
                Article number: 102511
                Affiliations
                [a] Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany
                [b] Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany
                [c] Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany
                [d] Cluster of Excellence ‘Matters of Activity’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
                Author notes
                [*] Corresponding author at: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany. friedemann.pulvermuller@fu-berlin.de
                Article
                PII: S0301-0082(23)00112-0
                DOI: 10.1016/j.pneurobio.2023.102511
                PMCID: PMC10518464
                PMID: 37482195
                © 2023 The Authors

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

                History
                Received: 3 December 2022; Revised: 2 May 2023; Accepted: 18 July 2023
                Categories
                Article; Neurosciences
                Keywords: neurocognition, neurocomputation, semantics, language learning, deep neural network, brain-constrained model
