
      AML‐Net: Attention‐based multi‐scale lightweight model for brain tumour segmentation in internet of medical things


          Abstract

Brain tumour segmentation from MRI images is important for disease diagnosis, monitoring, and treatment planning. Many encoder-decoder architectures have been developed for this purpose, with U-Net the most extensively used. However, these architectures require a large number of trainable parameters and suffer from a semantic gap between encoder and decoder features. Earlier attempts at lightweight models relied on channel pruning, which shrinks the receptive field and compromises accuracy. To overcome these issues, the authors propose AML-Net, an attention-based multi-scale lightweight model for the Internet of Medical Things. The model consists of three small encoder-decoder architectures trained on input images at different scales, together with previously learned features, to reduce the loss. Moreover, the authors designed an attention module that replaces the traditional skip connection. Six experiments were conducted for this module, among which dilated convolution followed by spatial attention performed best. The module applies three dilated convolutions, giving a relatively large receptive field, followed by spatial attention that extracts global context from the encoder's low-level features; these refined features are then combined with the high-level features of the decoder's corresponding layer. The authors evaluate the model on a low-grade-glioma dataset provided by The Cancer Genome Atlas, in which every case includes at least the Fluid-Attenuated Inversion Recovery (FLAIR) modality. The proposed model has 1/43.4, 1/30.3, 1/28.5, 1/20.2 and 1/16.7 as many parameters as Z-Net, U-Net, Double U-Net, BCDU-Net and CU-Net, respectively. Moreover, it achieves IoU = 0.834, F1-score = 0.909 and sensitivity = 0.939, exceeding U-Net, CU-Net, RCA-IUnet and PMED-Net.
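
The attention module described above (three dilated convolutions to enlarge the receptive field, followed by spatial attention over the encoder's low-level features) can be sketched in PyTorch. This is a hypothetical illustration assuming CBAM-style spatial attention and dilation rates of 1, 2 and 4; the class name, channel count and rates are assumptions for illustration, not the authors' released code.

    import torch
    import torch.nn as nn

    class DilatedSpatialAttention(nn.Module):
        # Hypothetical skip-connection replacement: three dilated 3x3
        # convolutions widen the receptive field over encoder features,
        # then a spatial-attention map re-weights them before they are
        # merged with decoder features at the same resolution.
        def __init__(self, channels, dilations=(1, 2, 4)):
            super().__init__()
            self.dilated = nn.ModuleList(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d)
                for d in dilations
            )
            # Spatial attention: 7x7 conv over channel-wise avg/max maps.
            self.attn = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            for conv in self.dilated:
                x = torch.relu(conv(x))
            avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
            mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W)
            gate = torch.sigmoid(self.attn(torch.cat([avg, mx], dim=1)))
            return x * gate                        # re-weighted skip features

    # Usage at one decoder stage: refine the encoder skip, then concatenate
    # it with the upsampled decoder features of the same spatial size.
    # skip = DilatedSpatialAttention(64)(encoder_feats)
    # decoder_in = torch.cat([skip, upsampled_decoder_feats], dim=1)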


Most cited references (44)


          U-Net: Convolutional Networks for Biomedical Image Segmentation


            Squeeze-and-Excitation Networks


              SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low-resolution encoder feature maps to full input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well-known DeepLab-LargeFOV [3] and DeconvNet [4] architectures. This comparison reveals the memory-versus-accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene-understanding applications. Hence, it is designed to be efficient in terms of both memory and computational time during inference. It also has significantly fewer trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road-scene and SUN RGB-D indoor-scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and is the most memory-efficient at inference compared with the other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet.
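
The indices-based upsampling this abstract describes can be reproduced with standard PyTorch operations (MaxPool2d with return_indices=True paired with MaxUnpool2d); the snippet below is a minimal sketch of the idea, not the authors' original Caffe implementation.

    import torch
    import torch.nn as nn

    pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
    unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

    x = torch.randn(1, 64, 32, 32)    # encoder feature map
    pooled, indices = pool(x)         # indices record the argmax locations
    up = unpool(pooled, indices)      # sparse map: values placed back at the
                                      # remembered positions, zeros elsewhere
    assert up.shape == x.shape
    # SegNet then convolves the sparse `up` with trainable filters to
    # produce dense decoder feature maps, so upsampling itself is not learned.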

                Author and article information

                Contributors
                Journal
                CAAI Transactions on Intelligence Technology
                CAAI Trans on Intel Tech
                Institution of Engineering and Technology (IET)
ISSN: 2468-2322
January 17, 2024
                Affiliations
[1] Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
[2] School of Technology and Innovations, University of Vaasa, Vaasa, Finland
[3] Department of Physics, COMSATS University Islamabad (CUI), Islamabad, Pakistan
                Article
DOI: 10.1049/cit2.12278
© 2024

Licence: Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0/)
