
      The dosimetric impact of deep learning-based auto-segmentation of organs at risk on nasopharyngeal and rectal cancer

      research-article


          Abstract

          Purpose

          To investigate the dosimetric impact of deep learning-based auto-segmentation of organs at risk (OARs) on nasopharyngeal and rectal cancer.

          Methods and materials

Twenty patients, ten with nasopharyngeal carcinoma (NPC) and ten with rectal cancer, who received radiotherapy in our department were enrolled in this study. Two deep learning-based auto-segmentation systems, an in-house developed system (FD) and a commercial product (UIH), were used to generate two auto-segmented OAR sets (OAR_FD and OAR_UIH). For each patient, treatment plans based on the auto-segmented OARs and following our clinical requirements were generated on each OAR set (Plan_FD and Plan_UIH). Geometric metrics (Hausdorff distance (HD), mean distance to agreement (MDA), Dice similarity coefficient (DICE), and Jaccard index) were calculated for geometric evaluation. The dosimetric impact was evaluated by comparing Plan_FD and Plan_UIH to the original clinically approved plans (Plan_Manual) using dose-volume metrics and 3D gamma analysis. Spearman's correlation analysis was performed to investigate the correlation between dosimetric differences and geometric metrics.
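
To make the geometric evaluation concrete, the sketch below computes DICE, the Jaccard index, and the Hausdorff distance between a manually delineated and an auto-segmented structure. It is an illustrative Python sketch only, not the authors' implementation: the function names are invented here, and it assumes both contours are available as binary numpy arrays of the same shape.

# Illustrative sketch (not the study's code): geometric agreement between a
# manual and an auto-segmented structure, assuming both are binary numpy
# arrays of the same shape (1 inside the organ, 0 outside).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_metrics(manual, auto):
    """Return (DICE, Jaccard) volume-overlap scores for two binary masks."""
    manual, auto = manual.astype(bool), auto.astype(bool)
    intersection = np.logical_and(manual, auto).sum()
    dice = 2.0 * intersection / (manual.sum() + auto.sum())
    jaccard = intersection / np.logical_or(manual, auto).sum()
    return dice, jaccard

def hausdorff_distance(manual, auto, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance, computed on the full voxel point sets of
    the two masks for simplicity; voxel indices are scaled by the voxel
    spacing so the result is in the units of `spacing` (e.g. mm)."""
    pts_m = np.argwhere(manual) * np.asarray(spacing)
    pts_a = np.argwhere(auto) * np.asarray(spacing)
    return max(directed_hausdorff(pts_m, pts_a)[0],
               directed_hausdorff(pts_a, pts_m)[0])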

          Results

FD and UIH provided similar geometric performance for the parotids, temporal lobes, lenses, and eyes (DICE, p > 0.05). OAR_FD had better geometric performance for the optic nerves, oral cavity, larynx, and femoral heads (DICE, p < 0.05), while OAR_UIH had better geometric performance for the bladder (DICE, p < 0.05). In the dosimetric analysis, both Plan_FD and Plan_UIH showed nonsignificant dosimetric differences from Plan_Manual for most PTV and OAR dose-volume metrics. The only significant dosimetric difference was the maximum dose of the left temporal lobe for Plan_FD vs. Plan_Manual (p = 0.05). Only one significant correlation was found, between the mean dose of the femoral head and its HD index (R = 0.4, p = 0.01); no OAR showed a strong correlation between its dosimetric difference and all four geometric metrics.
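
As a rough illustration of how a correlation like the one reported above can be computed, the following hypothetical Python sketch runs a Spearman test between a geometric metric and a dosimetric difference. The arrays are randomly generated placeholders standing in for per-patient measurements, and the variable names are not taken from the study.

# Hypothetical sketch of the correlation test between a geometric metric and
# a dosimetric difference; the arrays below are random placeholders, not the
# study's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
hd_mm = rng.uniform(2.0, 10.0, size=10)             # per-patient Hausdorff distance (mm)
mean_dose_diff_gy = rng.uniform(0.0, 1.5, size=10)  # per-patient |mean-dose difference| (Gy)

rho, p_value = spearmanr(hd_mm, mean_dose_diff_gy)
print(f"Spearman R = {rho:.2f}, p = {p_value:.3f}")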

          Conclusions

Deep learning-based OAR auto-segmentation for NPC and rectal cancer has a nonsignificant impact on most PTV and OAR dose-volume metrics. Correlations between auto-segmentation geometric metrics and dosimetric differences were not observed for most OARs.

          Supplementary Information

          The online version contains supplementary material available at 10.1186/s13014-021-01837-y.


                Author and article information

                Contributors
                zhen_zhang@fudan.edu.cn
                jackhuwg@gmail.com
Journal
Radiat Oncol (Radiation Oncology, London, England)
Publisher: BioMed Central (London)
ISSN: 1748-717X
Published: 23 June 2021
Volume: 16
Article number: 113
                Affiliations
[1] Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, China (GRID grid.452404.3, ISNI 0000 0004 1808 0942)
[2] Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China (GRID grid.8547.e, ISNI 0000 0001 0125 2443)
[3] Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
Article
1837
DOI: 10.1186/s13014-021-01837-y
PMCID: 8220801
PMID: 34162410
                © The Author(s) 2021

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

History
Received: 14 March 2021
Accepted: 10 June 2021
Categories
Research

Subject: Oncology & Radiotherapy
Keywords: treatment planning, dosimetric, deep learning, auto-segmentation
