      Is Open Access

      Detection of Diabetic Eye Disease from Retinal Images Using a Deep Learning Based CenterNet Model

      Sensors
      MDPI AG


          Abstract

          Diabetic retinopathy (DR) is an eye disease that alters the blood vessels of a person suffering from diabetes. Diabetic macular edema (DME) occurs when DR affects the macula, causing fluid accumulation there. Efficient screening systems require experts to manually analyze images to recognize diseases; however, because of the challenging nature of the screening method and the lack of trained human resources, devising effective screening-oriented treatment is an expensive task. Automated systems attempt to cope with these challenges, but existing methods do not generalize well to multiple diseases and real-world scenarios. To solve these issues, we propose a new method comprising two main steps. The first involves dataset preparation and feature extraction, and the second involves training a custom deep-learning-based CenterNet model for eye disease classification. Initially, we generate annotations for suspected samples to locate the precise region of interest; the second part of the proposed solution then trains the CenterNet model over the annotated images. Specifically, we use DenseNet-100 for feature extraction, on top of which the one-stage detector CenterNet is employed to localize and classify the disease lesions. We evaluated our method on the challenging APTOS-2019 and IDRiD datasets and attained average accuracies of 97.93% and 98.10%, respectively. We also performed cross-dataset validation with the benchmark EYEPACS and Diaretdb1 datasets. Both qualitative and quantitative results demonstrate that our proposed approach outperforms state-of-the-art methods owing to the more effective localization power of CenterNet, which can easily recognize small lesions and deal with over-fitted training data. Our proposed framework is proficient at correctly locating and classifying disease lesions. In comparison with existing DR and DME classification approaches, our method can extract representative key points from low-intensity and noisy images and accurately classify them. Hence, our approach can play an important role in the automated detection and recognition of DR and DME lesions.
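          The core CenterNet idea the abstract relies on is that lesions are detected as peaks in a per-class heatmap: a pixel counts as a detection if it is a local maximum whose score clears a threshold. The sketch below is an illustrative NumPy reconstruction of that decoding step, not the authors' implementation; the heatmap values, 3x3 window, threshold, and top-k limit are all assumptions.

```python
import numpy as np

def decode_centers(heatmap, k=3, threshold=0.3):
    """Extract up to k object centers from a single-class heatmap.

    CenterNet-style decoding: keep pixels that are local maxima over a
    3x3 window and whose score is at least `threshold`, then return the
    top-k by score as (row, col, score) tuples.
    """
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    # 3x3 max filter built from the nine shifted views (pure-NumPy "NMS")
    windows = np.stack([
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ])
    local_max = heatmap == windows.max(axis=0)
    candidates = np.argwhere(local_max & (heatmap >= threshold))
    scores = heatmap[candidates[:, 0], candidates[:, 1]]
    order = np.argsort(-scores)[:k]
    return [(int(y), int(x), float(heatmap[y, x]))
            for y, x in candidates[order]]

# Toy heatmap with two lesion-like peaks
hm = np.zeros((8, 8))
hm[2, 3] = 0.9   # strong peak
hm[6, 6] = 0.5   # weaker peak
hm[2, 4] = 0.4   # suppressed: not a local max next to (2, 3)
print(decode_centers(hm))  # [(2, 3, 0.9), (6, 6, 0.5)]
```

          Because detections are read directly off heatmap peaks, small lesions survive as long as they produce a local maximum, which is the localization advantage the abstract claims over anchor-based detectors.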


          Most cited references (33)


          Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

          State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features: using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
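            The "one box per (scale, ratio) pair at each position" scheme the RPN scores can be sketched in a few lines. The sketch below is a generic reconstruction, not the reference implementation; the stride of 16 matches a VGG-16 backbone, the scales and ratios are the paper's defaults, and the ratio convention (r = width/height) is an assumption.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate RPN-style anchors as (x1, y1, x2, y2) boxes.

    At every feature-map cell, one anchor per (scale, ratio) pair is
    centered on the corresponding input-image pixel; each anchor has
    area scale**2 and aspect ratio w/h = r.
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * np.sqrt(r)
                    h = s / np.sqrt(r)
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)

a = generate_anchors(2, 2)
print(a.shape)  # 4 positions x 9 anchors each -> (36, 4)
```

            The RPN then predicts an objectness score and four box-regression offsets for every anchor, which is why proposal generation costs almost nothing once the shared features are computed.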
            Is Open Access

            Global Prevalence and Major Risk Factors of Diabetic Retinopathy

            OBJECTIVE To examine the global prevalence and major risk factors for diabetic retinopathy (DR) and vision-threatening diabetic retinopathy (VTDR) among people with diabetes. RESEARCH DESIGN AND METHODS A pooled analysis using individual participant data from population-based studies around the world was performed. A systematic literature review was conducted to identify all population-based studies in general populations or individuals with diabetes who had ascertained DR from retinal photographs. Studies provided data for DR end points, including any DR, proliferative DR, diabetic macular edema, and VTDR, and also major systemic risk factors. Pooled prevalence estimates were directly age-standardized to the 2010 World Diabetes Population aged 20–79 years. RESULTS A total of 35 studies (1980–2008) provided data from 22,896 individuals with diabetes. The overall prevalence was 34.6% (95% CI 34.5–34.8) for any DR, 6.96% (6.87–7.04) for proliferative DR, 6.81% (6.74–6.89) for diabetic macular edema, and 10.2% (10.1–10.3) for VTDR. All DR prevalence end points increased with diabetes duration, hemoglobin A1c, and blood pressure levels and were higher in people with type 1 compared with type 2 diabetes. CONCLUSIONS There are approximately 93 million people with DR, 17 million with proliferative DR, 21 million with diabetic macular edema, and 28 million with VTDR worldwide. Longer diabetes duration and poorer glycemic and blood pressure control are strongly associated with DR. These data highlight the substantial worldwide public health burden of DR and the importance of modifiable risk factors in its occurrence. This study is limited by data pooled from studies at different time points, with different methodologies and population characteristics.
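              The "directly age-standardized" pooled prevalences reported above are weighted averages of age-group-specific prevalences, with weights taken from a standard population (here, the 2010 World Diabetes Population aged 20–79 years). A toy calculation makes the mechanics concrete; the age groups, prevalences, and weights below are hypothetical, not values from the study.

```python
def age_standardized_prevalence(age_specific, standard_weights):
    """Direct age standardization: weight each age group's crude
    prevalence by that group's share of the standard population,
    then sum the weighted terms."""
    assert abs(sum(standard_weights.values()) - 1.0) < 1e-9
    return sum(age_specific[g] * standard_weights[g] for g in age_specific)

# Hypothetical age-specific DR prevalences and standard-population weights
prev = {"20-39": 0.20, "40-59": 0.35, "60-79": 0.50}
weights = {"20-39": 0.40, "40-59": 0.35, "60-79": 0.25}
print(age_standardized_prevalence(prev, weights))
# ≈ 0.3275 = 0.20*0.40 + 0.35*0.35 + 0.50*0.25
```

              Standardizing this way removes differences in age structure between study populations, which is what makes prevalence estimates from 35 heterogeneous studies comparable enough to pool.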

              Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning

              Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.

                Author and article information

                Journal: Sensors (SENSC9), MDPI AG
                ISSN: 1424-8220
                Published: August 05 2021 (August 2021 issue)
                Volume 21, Issue 16, Article 5283
                DOI: 10.3390/s21165283
                PMID: 34450729
                ScienceOpen record: 4ceb4170-a4a7-49ec-aca4-b9fa73e0431c
                © 2021

                https://creativecommons.org/licenses/by/4.0/

