
      Artificial intelligence in radiation oncology



Most cited references (112)


          The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository.

The National Institutes of Health have placed significant emphasis on the sharing of research data to support secondary research, and investigators have been encouraged to publish their clinical and imaging data as part of fulfilling their grant obligations. Realizing it was not sufficient to merely ask investigators to publish their collections of imaging and clinical data, the National Cancer Institute (NCI) created the open-source National Biomedical Image Archive software package as a mechanism for centralized hosting of cancer-related imaging. NCI has contracted with Washington University in Saint Louis to create The Cancer Imaging Archive (TCIA), an open-source, open-access information resource to support research, development, and educational initiatives utilizing advanced medical imaging of cancer. In its first year of operation, TCIA accumulated 23 collections (3.3 million images). Operating and maintaining a high-availability image archive is a complex challenge involving varied archive-specific resources and driven by the needs of both image submitters and image consumers. Quality archives of any type (traditional library, PubMed, refereed journals) require management and customer service. This paper describes the management tasks and user support model for TCIA.
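Because TCIA is exposed as an open-access web service, a minimal sketch of how an image consumer might query it may be useful. This is illustrative only: the base URL and the getCollectionValues/getSeries endpoint names are assumptions based on TCIA's publicly documented REST query API, not anything specified in the cited paper, so verify them against the current API documentation.

    # Hedged sketch of querying TCIA's public REST API.
    # Assumption: the v4 query endpoints getCollectionValues and getSeries
    # exist at this base URL, per TCIA's public documentation; check the
    # current docs before relying on these names.
    import requests

    BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

    def list_collections():
        """Return the names of all public TCIA collections."""
        resp = requests.get(f"{BASE}/getCollectionValues", params={"format": "json"})
        resp.raise_for_status()
        return [row["Collection"] for row in resp.json()]

    def list_series(collection):
        """Return image-series metadata for one collection."""
        resp = requests.get(
            f"{BASE}/getSeries",
            params={"Collection": collection, "format": "json"},
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        collections = list_collections()
        print(f"{len(collections)} public collections, e.g. {collections[:3]}")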

            The ViewRay system: magnetic resonance-guided and controlled radiotherapy.

A description of the first commercially available magnetic resonance imaging (MRI)-guided radiation therapy (RT) system is provided. The system consists of a split 0.35-T MR scanner straddling three cobalt-60 (⁶⁰Co) heads mounted on a ring gantry, each head equipped with an independent, doubly focused multileaf collimator. The MR and RT systems share a common isocenter, enabling simultaneous and continuous MRI during RT delivery. An on-couch adaptive RT treatment-planning system and an integrated MRI-guided RT control system allow for rapid adaptive planning and beam-delivery control based on the visualization of soft tissues. Treatment of patients with this system commenced at Washington University in January 2014.

              Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis.

For the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), and that fusion of different modalities can further provide complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both feature representation and fusion of multimodal information from magnetic resonance imaging (MRI) and positron emission tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted features, such as cortical thickness and gray matter densities from MRI or voxel intensities from PET, and then combined these multimodal features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared feature representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as its building block, to find a latent hierarchical feature representation from a 3D patch, and then devise a systematic method for a joint feature representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with state-of-the-art methods. In three binary classification problems, AD vs. healthy normal controls (NC), MCI vs. NC, and MCI converters vs. MCI non-converters, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
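To make the two-stage construction concrete, the sketch below illustrates the idea, not the authors' implementation: a Bernoulli restricted Boltzmann machine (the DBM building block) learns a hidden code per modality, and a second joint RBM learns a shared code over the concatenated MRI and PET codes. The patch size, layer widths, CD-1 trainer, and random stand-in data are all illustrative assumptions.

    # Minimal sketch of per-modality RBM coding plus joint multimodal fusion.
    # Not the paper's code: real DBMs stack several layers per modality and
    # train on real binarized 3D patches; sizes here are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

        def __init__(self, n_visible, n_hidden, lr=0.05):
            self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)
            self.b_h = np.zeros(n_hidden)
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.b_h)

        def visible_probs(self, h):
            return sigmoid(h @ self.W.T + self.b_v)

        def cd1_step(self, v0):
            # Positive phase: hidden probabilities and a sampled hidden state.
            ph0 = self.hidden_probs(v0)
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            # Negative phase: one Gibbs step back to a reconstruction.
            v1 = self.visible_probs(h0)
            ph1 = self.hidden_probs(v1)
            # CD-1 gradient estimates.
            n = v0.shape[0]
            self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
            self.b_v += self.lr * (v0 - v1).mean(axis=0)
            self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

    # Toy stand-ins for binarized 3D patches (e.g. 5x5x5 = 125 voxels).
    mri_patches = (rng.random((256, 125)) > 0.5).astype(float)
    pet_patches = (rng.random((256, 125)) > 0.5).astype(float)

    # Stage 1: one RBM per modality learns a latent code for its patches.
    rbm_mri, rbm_pet = RBM(125, 64), RBM(125, 64)
    for _ in range(50):
        rbm_mri.cd1_step(mri_patches)
        rbm_pet.cd1_step(pet_patches)

    # Stage 2: a joint RBM over the concatenated per-modality codes yields
    # a single shared MRI+PET feature vector per paired patch.
    joint_in = np.hstack([rbm_mri.hidden_probs(mri_patches),
                          rbm_pet.hidden_probs(pet_patches)])
    rbm_joint = RBM(128, 32)
    for _ in range(50):
        rbm_joint.cd1_step((joint_in > 0.5).astype(float))

    shared_features = rbm_joint.hidden_probs(joint_in)  # (256, 32) fused codes
    print(shared_features.shape)

In the full method, each modality pathway is a multi-layer DBM rather than a single RBM, and the shared representation feeds a downstream classifier; the sketch collapses each pathway to one layer so the fusion step stays visible.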

                Author and article information

Journal: Nature Reviews Clinical Oncology (Nat Rev Clin Oncol)
Publisher: Springer Science and Business Media LLC
ISSN: 1759-4774 (print); 1759-4782 (online)
Published: August 25, 2020
DOI: 10.1038/s41571-020-0417-8
PMID: 32843739
© 2020

License: http://www.springer.com/tdm
