
      Metasurface-enabled on-chip multiplexed diffractive neural networks in the visible


          Abstract

Replacing electrons with photons is a compelling route toward high-speed, massively parallel, and low-power artificial intelligence computing. Recently, diffractive networks composed of phase surfaces were trained to perform machine learning tasks through linear optical transformations. However, the existing architectures often comprise bulky components and, most critically, they cannot mimic the human brain for multitasking. Here, we demonstrate a multi-skilled diffractive neural network based on a metasurface device, which can perform on-chip multi-channel sensing and multitasking in the visible. The polarization multiplexing scheme of the subwavelength nanostructures is applied to construct a multi-channel classifier framework for simultaneous recognition of handwritten digits and fashion items. The areal density of the artificial neurons can reach up to 6.25 × 10⁶ mm⁻², multiplied by the number of channels. The metasurface is integrated with a mature complementary metal-oxide-semiconductor (CMOS) imaging sensor, providing a chip-scale architecture that processes information directly at the physical layer for energy-efficient and ultra-fast image processing in machine vision, autonomous driving, and precision medicine.
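          The sketch below is a minimal numerical illustration (not the authors' implementation) of the forward model such a polarization-multiplexed diffractive classifier implies: two independent phase profiles share one aperture, each addressed by an orthogonal polarization channel, and class scores are read out as intensities integrated over detector regions on a sensor plane. The wavelength, neuron pitch, propagation distance, detector layout, and the random (untrained) phase profiles are illustrative assumptions; in the actual device the phase profiles are learned by training.

```python
# Minimal sketch (not the authors' code) of a polarization-multiplexed
# diffractive classifier forward pass: two phase masks on one aperture,
# one per polarization channel, followed by free-space propagation and
# intensity readout over per-class detector regions.
import numpy as np

N = 200                  # grid size (pixels); illustrative
pitch = 0.5e-6           # assumed neuron pitch (m)
wavelength = 532e-9      # assumed visible wavelength (m)
z = 300e-6               # assumed mask-to-sensor distance (m)

def angular_spectrum(field, z, wavelength, pitch):
    """Free-space propagation by the angular spectrum method."""
    fx = np.fft.fftfreq(field.shape[0], d=pitch)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def classify(image, phase_mask, detector_masks):
    """Modulate by the phase mask, propagate, integrate intensity per class."""
    field = image.astype(complex) * np.exp(1j * phase_mask)
    intensity = np.abs(angular_spectrum(field, z, wavelength, pitch)) ** 2
    scores = np.array([intensity[m].sum() for m in detector_masks])
    return scores.argmax(), scores

# Two phase profiles multiplexed on one metasurface: channel 0 (e.g. digits)
# and channel 1 (e.g. fashion items), addressed by orthogonal polarizations.
rng = np.random.default_rng(0)
phase_digit = rng.uniform(0, 2 * np.pi, (N, N))     # would be learned by training
phase_fashion = rng.uniform(0, 2 * np.pi, (N, N))   # would be learned by training

# Ten square detector regions on the sensor plane, one per class (illustrative).
detectors = []
for k in range(10):
    m = np.zeros((N, N), dtype=bool)
    r, c = 40 + 30 * (k // 5), 30 + 30 * (k % 5)
    m[r:r + 12, c:c + 12] = True
    detectors.append(m)

test_image = np.zeros((N, N)); test_image[80:120, 90:110] = 1.0  # toy input
label_d, _ = classify(test_image, phase_digit, detectors)
label_f, _ = classify(test_image, phase_fashion, detectors)
print("digit channel prediction:", label_d, "| fashion channel prediction:", label_f)
```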

          Abstract

          A polarization-multiplexed metasurface-enabled diffractive neural network, which is integrated with a CMOS imaging sensor, demonstrates on-chip multi-channel sensing and multitasking in the visible.

          Most cited references (66)

          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
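            As a pointer to the mechanism this summary describes, here is a minimal NumPy sketch of backpropagation in a two-layer network on synthetic data; the layer sizes, learning rate, and toy labels are illustrative assumptions, not taken from the cited work.

```python
# Minimal sketch: backpropagation adjusts each layer's parameters from the
# gradient of a loss, so every layer's representation is computed from the
# previous layer's. Toy two-layer network on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                            # 64 samples, 8 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy binary labels

W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)
lr = 0.5                                                # illustrative learning rate

for step in range(200):
    # forward pass: each layer's representation is built from the previous one
    h = np.maximum(0, X @ W1 + b1)                      # hidden layer (ReLU)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))                # output layer (sigmoid)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # backward pass: the chain rule propagates the error signal layer by layer
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (h > 0)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # gradient step: how each layer should change its internal parameters
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final loss: {loss:.3f}")
```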
            Deep Residual Learning for Image Recognition

              Gradient-based learning applied to document recognition


                Author and article information

                Contributors
                huyq@hnu.edu.cn
                duanhg@hnu.edu.cn
                Journal
                Light, Science & Applications (Light Sci Appl)
                Nature Publishing Group UK (London)
                ISSN: 2095-5545 (print); 2047-7538 (electronic)
                Published: 27 May 2022
                Volume: 11
                Article number: 158
                Affiliations
                [1] National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
                [2] Institute of Precision Optical Engineering, School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
                [3] Advanced Manufacturing Laboratory of Micro-Nano Optical Devices, Shenzhen Research Institute, Hunan University, Shenzhen 518000, China
                [4] 2nd Physics Institute, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany
                [5] Max Planck Institute for Solid State Research, Heisenbergstrasse 1, 70569 Stuttgart, Germany
                [6] Greater Bay Area Institute for Innovation, Hunan University, Guangzhou 511300, China
                Author information
                http://orcid.org/0000-0002-4395-4299
                http://orcid.org/0000-0001-5831-3382
                http://orcid.org/0000-0002-3855-483X
                http://orcid.org/0000-0003-3335-3067
                http://orcid.org/0000-0001-9144-2864
                Article
                DOI: 10.1038/s41377-022-00844-2
                PMCID: PMC9142536
                PMID: 35624107
                © The Author(s) 2022

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 19 November 2021
                Revised: 6 May 2022
                Accepted: 10 May 2022
                Funding
                Funded by: National Natural Science Foundation of China (FundRef: https://doi.org/10.13039/501100001809)
                Award ID: 52005175
                Categories
                Article

                Keywords
                metamaterials, imaging and sensing
