
      ANIMAL-SPOT enables animal-independent signal detection and classification using deep learning

      Research article


          Abstract

          Bioacoustic research spans a wide range of biological questions and applications, relying on identification of target species or smaller acoustic units, such as distinct call types. However, manually identifying the signal of interest is time-intensive, error-prone, and becomes unfeasible with large data volumes. Therefore, machine-driven algorithms are increasingly applied to various bioacoustic signal identification challenges. Nevertheless, biologists still face major difficulties when transferring existing animal- and/or scenario-related machine learning approaches to their specific animal datasets and scientific questions. This study presents an animal-independent, open-source deep learning framework, along with a detailed user guide. Three signal identification tasks, commonly encountered in bioacoustics research, were investigated: (1) target signal vs. background noise detection, (2) species classification, and (3) call type categorization. ANIMAL-SPOT successfully segmented human-annotated target signals in data volumes representing 10 distinct animal species and 1 additional genus, resulting in a mean test accuracy of 97.9%, together with an average area under the ROC curve (AUC) of 95.9%, when predicting on unseen recordings. Moreover, an average segmentation accuracy and F1-score of 95.4% were achieved on the publicly available BirdVox-Full-Night data corpus. In addition, multi-class species and call type classification resulted in 96.6% and 92.7% accuracy on unseen test data, as well as 95.2% and 88.4% on excerpts produced by previous animal-specific machine-based detection. Furthermore, an Unweighted Average Recall (UAR) of 89.3% outperformed the multi-species classification baseline system of the ComParE 2021 Primate Sub-Challenge. Besides animal independence, ANIMAL-SPOT does not rely on expert knowledge or special computing resources, thereby making deep-learning-based bioacoustic signal identification accessible to a broad audience.
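
          The metrics quoted in the abstract are all standard classification measures. As a point of reference, the minimal Python sketch below (assuming scikit-learn is available; the label arrays are hypothetical placeholders, not data from the study) shows how each one is computed:

              # Hypothetical example: computing the metrics named in the abstract.
              from sklearn.metrics import (accuracy_score, f1_score,
                                           recall_score, roc_auc_score)

              # Binary detection task: target signal (1) vs. background noise (0).
              y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                  # human annotations
              y_pred  = [1, 0, 1, 0, 0, 0, 1, 0]                  # thresholded network output
              y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.2]  # raw network probabilities

              print("Accuracy:", accuracy_score(y_true, y_pred))
              print("F1-score:", f1_score(y_true, y_pred))
              print("AUC:     ", roc_auc_score(y_true, y_score))

              # Multi-class task (species or call type classification).
              # Unweighted Average Recall (UAR) is macro-averaged recall:
              # per-class recall, averaged with equal weight per class.
              y_true_mc = [0, 0, 1, 1, 2, 2]
              y_pred_mc = [0, 1, 1, 1, 2, 0]
              print("UAR:     ", recall_score(y_true_mc, y_pred_mc, average="macro"))

          UAR is the official metric of the ComParE challenge series because, unlike plain accuracy, it weights every class equally and is therefore robust to the class imbalance typical of bioacoustic datasets.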


                Author and article information

                Contributors
                christian.bergler@fau.de
                bklump@ab.mpg.de
                Journal
                Scientific Reports (Sci Rep), Nature Publishing Group UK, London
                ISSN (online): 2045-2322
                Published: 19 December 2022
                Volume: 12, Article number: 21966
                Affiliations
                [1] Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
                [2] Cognitive and Cultural Ecology Lab, Max Planck Institute of Animal Behavior, 78315 Radolfzell, Germany
                [3] Department of Human Behavior, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
                [4] Biology Department, University of Konstanz, 78464 Constance, Germany
                [5] Department of Natural Resources and Environmental Sciences, University of Illinois Urbana-Champaign, Champaign, IL, United States
                [6] Max Planck Institute for Biological Intelligence (in Foundation), Eberhard-Gwinner-Strasse, Seewiesen, 82319 Starnberg, Germany
                [7] Department of Anthropology, University of Victoria, Victoria, BC V8P 5C2, Canada
                [8] Leibniz Institute for Zoo and Wildlife Research, Alfred-Kowalke-Straße 17, 10315 Berlin, Germany
                [9] Department of Bioscience, Wildlife Ecology, Aarhus University, 8410 Rønde, Denmark
                [10] Department of Vertebrate Ecology and Zoology, Faculty of Biology, University of Gdańsk, 80-308 Gdańsk, Poland
                [11] Department of Bioscience, Marine Mammal Research, Aarhus University, 4000 Roskilde, Denmark
                [12] Department of Biology, University of Southern Denmark, 5230 Odense, Denmark
                Article: 26429
                DOI: 10.1038/s41598-022-26429-y
                PMCID: PMC9763499
                PMID: 36535999
                © The Author(s) 2022

                Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 12 September 2022
                Accepted: 14 December 2022
                Funding
                Funded by: Deutsche Forschungsgemeinschaft (FundRef: http://dx.doi.org/10.13039/501100001659); Award ID: MA-4898/18-1
                Funded by: Friedrich-Alexander-Universität Erlangen-Nürnberg (1041)
                Categories
                Article

                Keywords
                classification and taxonomy, machine learning, software
