Details
Name
Tânia Fernandes Melo
Role
Affiliated Researcher
Since
1st July 2017
Nationality
Portugal
Contacts
+351 22 209 4106
tania.f.melo@inesctec.pt
2023
Authors
Melo, T; Carneiro, A; Campilho, A; Mendonca, AM;
Publication
JOURNAL OF MEDICAL IMAGING
Abstract
Purpose: The development of accurate methods for retinal layer and fluid segmentation in optical coherence tomography images can help ophthalmologists in the diagnosis and follow-up of retinal diseases. Recent works based on joint segmentation presented good results for the segmentation of most retinal layers, but the fluid segmentation results are still not satisfactory. We report a hierarchical framework that starts by distinguishing the retinal zone from the background, then separates the fluid-filled regions from the rest, and finally discriminates the individual retinal layers. Approach: Three fully convolutional networks were trained sequentially. The weighting scheme used for computing the loss function during training is derived from the outputs of the previously trained networks. To reinforce the relative position between retinal layers, the mutex Dice loss (included for optimizing the last network) was further modified so that errors between more distant layers are more heavily penalized. The method's performance was evaluated on a public dataset. Results: The proposed hierarchical approach outperforms previous works in the segmentation of the inner segment ellipsoid layer and fluid (Dice coefficient = 0.95 and 0.82, respectively). The results achieved for the remaining layers are at a state-of-the-art level. Conclusions: The proposed framework led to significant improvements in fluid segmentation without compromising the results in the retinal layers. Thus, its output can be used by ophthalmologists as a second opinion or as input for the automatic extraction of relevant quantitative biomarkers.
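The abstract does not give the exact form of the modified mutex Dice loss; as a rough, hypothetical illustration of the underlying idea (confusions between anatomically more distant layers cost more), the following PyTorch sketch weights a soft confusion matrix by the class-index distance, assuming class indices follow the anatomical ordering of the layers.

```python
# Hypothetical sketch (not the authors' loss): a penalty term in which
# confusion between predicted class i and true class j is weighted by
# the distance |i - j|, so mixing up distant layers is penalized more.
import torch
import torch.nn.functional as F

def distance_weighted_confusion_loss(logits, target, eps=1e-6):
    """logits: (B, C, H, W) raw scores; target: (B, H, W) integer labels."""
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)                               # (B, C, H, W)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()

    # Distance matrix D[i, j] = |i - j|; assumes class indices follow the
    # anatomical ordering of the retinal layers (an assumption made here).
    idx = torch.arange(num_classes, device=logits.device).float()
    dist = (idx[:, None] - idx[None, :]).abs()                     # (C, C)

    # Soft confusion matrix: probability mass assigned to class i on pixels
    # whose ground truth is class j.
    confusion = torch.einsum('bihw,bjhw->ij', probs, onehot)
    total = onehot.sum() + eps

    # Correct predictions (i == j) contribute zero; distant mistakes cost more.
    return (dist * confusion).sum() / total
```

In practice such a term would be combined with the base segmentation loss; the weighting by |i - j| is only one possible choice of distance penalty.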
2023
Authors
Melo, T; Cardoso, J; Carneiro, A; Campilho, A; Mendonça, AM;
Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS
Abstract
The development of accurate methods for OCT image analysis is highly dependent on the availability of large annotated datasets. As such datasets are usually expensive and hard to obtain, novel approaches based on deep generative models have been proposed for data augmentation. In this work, a flow-based network (SRFlow) and a generative adversarial network (ESRGAN) are used for synthesizing high-resolution OCT B-scans from low-resolution versions of real OCT images. The quality of the images generated by the two models is assessed using two standard fidelity-oriented metrics and a learned perceptual quality metric. The performance of two classification models trained on real and synthetic images is also evaluated. The obtained results show that the images generated by SRFlow preserve higher fidelity to the ground truth, while the outputs of ESRGAN present, on average, better perceptual quality. Independently of the architecture of the network chosen to classify the OCT B-scans, the model's performance always improves when images generated by SRFlow are included in the training set.
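The abstract does not name the metrics used; PSNR and SSIM are a common choice for the fidelity-oriented scores and LPIPS for the learned perceptual quality score. The sketch below assumes those three metrics and the scikit-image and lpips packages, purely as an illustration of how such an evaluation could be run on a super-resolved B-scan and its ground truth.

```python
# Illustrative sketch only: comparing a super-resolved B-scan against its
# ground-truth counterpart with PSNR, SSIM and LPIPS. The metric choice is
# an assumption, not taken from the paper.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_model = lpips.LPIPS(net='alex')  # learned perceptual similarity metric

def evaluate_pair(sr: np.ndarray, gt: np.ndarray) -> dict:
    """sr, gt: grayscale B-scans as float arrays in [0, 1] with the same shape."""
    psnr = peak_signal_noise_ratio(gt, sr, data_range=1.0)
    ssim = structural_similarity(gt, sr, data_range=1.0)

    # LPIPS expects 3-channel tensors in [-1, 1], shape (N, 3, H, W).
    def to_tensor(img):
        t = torch.from_numpy(img).float()[None, None] * 2 - 1
        return t.repeat(1, 3, 1, 1)

    with torch.no_grad():
        perceptual = lpips_model(to_tensor(sr), to_tensor(gt)).item()
    return {'psnr': psnr, 'ssim': ssim, 'lpips': perceptual}
```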
2020
Authors
Porwal, P; Pachade, S; Kokare, M; Deshmukh, G; Son, J; Bae, W; Liu, LH; Wang, J; Liu, XH; Gao, LX; Wu, TB; Xiao, J; Wang, FY; Yin, BC; Wang, YZ; Danala, G; He, LS; Choi, YH; Lee, YC; Jung, SH; Li, ZY; Sui, XD; Wu, JY; Li, XL; Zhou, T; Toth, J; Bara, A; Kori, A; Chennamsetty, SS; Safwan, M; Alex, V; Lyu, XZ; Cheng, L; Chu, QH; Li, PC; Ji, X; Zhang, SY; Shen, YX; Dai, L; Saha, O; Sathish, R; Melo, T; Araujo, T; Harangi, B; Sheng, B; Fang, RG; Sheet, D; Hajdu, A; Zheng, YJ; Mendonca, AM; Zhang, ST; Campilho, A; Zheng, B; Shen, D; Giancardo, L; Quellec, G; Meriaudeau, F;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such a large-scale screening effort. The recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI-2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal subchallenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge make it possible to test the generalizability of algorithms, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions effectively entered from 495 registrations. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
2020
Authors
Mendonça, AM; Melo, T; Araújo, T; Campilho, A;
Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II
Abstract
The optic disc (OD) and the fovea are relevant landmarks in fundus images. Their localization and segmentation can facilitate the detection of some retinal lesions and the assessment of their importance to the severity and progression of several eye disorders. Distinct methodologies have been developed for detecting these structures, mainly based on color and vascular information. The methodology herein described combines the entropy of the vessel directions with the image intensities for finding the OD center and uses a sliding band filter for segmenting the OD. The fovea center corresponds to the darkest point inside a region defined from the OD position and radius. Both the Messidor and the IDRiD datasets are used for evaluating the performance of the developed methods. On the former, success rates of 99.56% and 100.00% are achieved for OD and fovea localization, respectively. Regarding the OD segmentation, the mean Jaccard index and Dice's coefficient obtained are 0.87 and 0.94, respectively. The proposed methods are also amongst the top-3 performing solutions submitted to the IDRiD online challenge.
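For reference, the two overlap measures reported for OD segmentation can be computed from binary masks as in the minimal sketch below (not the paper's evaluation code); it also shows how the two scores relate.

```python
# Minimal sketch of the overlap metrics reported for OD segmentation:
# Jaccard index and Dice coefficient between a predicted and a
# ground-truth binary mask. Not the evaluation code used in the paper.
import numpy as np

def jaccard_and_dice(pred: np.ndarray, gt: np.ndarray, eps=1e-8):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = intersection / (union + eps)
    dice = 2 * intersection / (pred.sum() + gt.sum() + eps)
    # The two are related: dice = 2 * jaccard / (1 + jaccard).
    return jaccard, dice
```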
2020
Authors
Melo, T; Mendonca, AM; Campilho, A;
Publication
COMPUTERS IN BIOLOGY AND MEDICINE
Abstract
Diabetic retinopathy (DR) is a diabetes complication, which in extreme situations may lead to blindness. Since the first stages are often asymptomatic, regular eye examinations are required for an early diagnosis. As microaneurysms (MAs) are one of the first signs of DR, several automated methods have been proposed for their detection in order to reduce the ophthalmologists' workload. Although local convergence filters (LCFs) have already been applied for feature extraction, their potential as MA enhancement operators had not yet been explored. In this work, we propose a sliding band filter for MA enhancement aiming at obtaining a set of initial MA candidates. Then, a combination of the filter responses with color, contrast and shape information is used by an ensemble of classifiers for final candidate classification. Finally, for each eye fundus image, a score is computed from the confidence values assigned to the MAs detected in the image. The performance of the proposed methodology was evaluated on four datasets. At the lesion level, sensitivities of 64% and 81% were achieved for an average of 8 false positives per image (FPIs) in e-ophtha MA and SCREEN-DR, respectively. On the latter dataset, an AUC of 0.83 was also obtained for DR detection.
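The abstract does not specify how the image-level score is aggregated from the per-candidate confidences; a plausible sketch of that final step, using the maximum candidate confidence produced by some upstream ensemble classifier, is shown below. The aggregation rule is an assumption for illustration only.

```python
# Hypothetical sketch of the last step described in the abstract: turning
# per-candidate MA confidences into a single image-level DR score.
# The aggregation rule (maximum confidence) is assumed, not taken from the paper.
from typing import Sequence

def image_dr_score(candidate_confidences: Sequence[float]) -> float:
    """candidate_confidences: ensemble confidence of each detected MA candidate."""
    if not candidate_confidences:
        return 0.0  # no microaneurysm candidates detected in this image
    return max(candidate_confidences)

# Example: an image with three detected candidates.
print(image_dr_score([0.12, 0.87, 0.40]))  # -> 0.87
```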