
      Multi-animal pose estimation, identification and tracking with DeepLabCut

Research article


          Abstract

Estimating the pose of multiple animals is a challenging computer vision problem: frequent interactions cause occlusions and complicate the association of detected keypoints with the correct individuals; moreover, the animals often look highly similar and interact more closely than in typical multi-human scenarios. To take up this challenge, we build on DeepLabCut, an open-source pose estimation toolbox, and provide high-performance animal assembly and tracking, features required for multi-animal scenarios. Furthermore, we integrate the ability to predict an animal’s identity to assist tracking (in case of occlusions). We illustrate the power of this framework with four datasets varying in complexity, which we release to serve as a benchmark for future algorithm development.
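To make the described pipeline concrete, here is a minimal sketch of the multi-animal workflow using DeepLabCut's documented Python API (the function names are from DeepLabCut 2.2, but the project name, experimenter, and video paths are placeholders, and exact signatures may differ across versions):

```python
import deeplabcut

# Create a multi-animal project; multianimal=True enables the
# assembly/tracking pipeline the paper describes.
config_path = deeplabcut.create_new_project(
    "mice-social", "experimenter", ["videos/pair.mp4"], multianimal=True
)

# Label a subset of frames, build the training set, and train.
deeplabcut.extract_frames(config_path)
deeplabcut.label_frames(config_path)  # opens the labeling GUI
deeplabcut.create_multianimaltraining_dataset(config_path)
deeplabcut.train_network(config_path)

# Inference: detect keypoints, assemble them into individual animals,
# form short tracklets, then stitch tracklets into full trajectories.
deeplabcut.analyze_videos(config_path, ["videos/pair.mp4"])
deeplabcut.convert_detections2tracklets(config_path, ["videos/pair.mp4"])
deeplabcut.stitch_tracklets(config_path, ["videos/pair.mp4"])
```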

Summary

          DeepLabCut is extended to enable multi-animal pose estimation, animal identification and tracking, thereby enabling the analysis of social behaviors.
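The tracking step mentioned here amounts to associating assembled animals across frames. As a generic illustration (not DeepLabCut's actual implementation, which also exploits identity predictions and richer costs), frame-to-frame association can be phrased as bipartite matching on a distance cost and solved with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_animals(prev_centroids, curr_centroids, max_dist=50.0):
    """Associate animals across frames by minimizing total centroid distance.

    prev_centroids, curr_centroids: (N, 2) and (M, 2) arrays of xy positions.
    Returns (prev_idx, curr_idx) pairs for matched animals.
    """
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    # Reject matches that jump farther than plausible between frames.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Example: two animals whose detection order flips between frames.
prev = np.array([[10.0, 20.0], [100.0, 120.0]])
curr = np.array([[98.0, 118.0], [12.0, 23.0]])
print(match_animals(prev, curr))  # [(0, 1), (1, 0)]
```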


Most cited references (42)


          Deep Residual Learning for Image Recognition


            Microsoft COCO: Common Objects in Context


              DeepLabCut: markerless pose estimation of user-defined body parts with deep learning

              Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
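The transfer-learning recipe this abstract describes (a pretrained backbone fine-tuned on a small labeled set, predicting score maps for body parts) can be sketched generically. The PyTorch snippet below is an illustrative assumption-laden sketch of that idea, a pretrained ResNet-50 truncated and topped with a deconvolutional heatmap head; it is not DeepLabCut's actual architecture or training code, and all layer sizes are placeholders:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class PoseNet(nn.Module):
    """Pretrained ResNet-50 backbone + deconv head, one heatmap per bodypart."""

    def __init__(self, num_bodyparts=4):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to the last conv stage; drop avgpool and fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Upsample the stride-32 features and predict per-bodypart score maps.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(2048, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, num_bodyparts, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PoseNet(num_bodyparts=4)
x = torch.randn(1, 3, 256, 256)    # one RGB frame
heatmaps = model(x)                # (1, 4, 32, 32) score maps
# Fine-tuning on a few hundred labeled frames would minimize the error
# between predicted and target Gaussian heatmaps (dummy target here).
loss = nn.functional.mse_loss(heatmaps, torch.zeros_like(heatmaps))
print(heatmaps.shape, loss.item())
```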

                Author and article information

                Contributors
                mackenzie@post.harvard.edu
                alexander.mathis@epfl.ch
Journal
Nature Methods (Nat Methods)
Nature Publishing Group US (New York)
ISSN: 1548-7091 (print); 1548-7105 (electronic)
Published: 12 April 2022
Volume 19, Issue 4, pages 496–504
Affiliations
[1] Brain Mind Institute, School of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
[2] Rowland Institute at Harvard, Harvard University, Cambridge, MA, USA
[3] Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
[4] Department for Molecular Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
[5] Howard Hughes Medical Institute (HHMI), Chevy Chase, MD, USA
[6] Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA, USA
[7] Department of Zoology, Stockholm University, Stockholm, Sweden
                Author information
                http://orcid.org/0000-0002-2092-7159
                http://orcid.org/0000-0001-9099-7294
                http://orcid.org/0000-0002-8021-277X
                http://orcid.org/0000-0003-2443-4252
                http://orcid.org/0000-0001-7368-4456
                http://orcid.org/0000-0002-3777-2202
Article
DOI: 10.1038/s41592-022-01443-0
PMCID: PMC9007739
PMID: 35414125
                © The Author(s) 2022

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 23 April 2021
Accepted: 4 March 2022
Funding
Funded by: Harvard University | Rowland Institute at Harvard (Rowland Institute); FundRef https://doi.org/10.13039/100009835
Funded by: Fondation Bertarelli (Bertarelli Foundation); FundRef https://doi.org/10.13039/100009152
                Categories
                Article
                Custom metadata
                © The Author(s), under exclusive licence to Springer Nature America, Inc. 2022

Subject: Life sciences
Keywords: machine learning, computational neuroscience, zoology, behavioural methods
