2024
Authors
Fernandes, R; Salgado, M; Paçal, I; Cunha, A;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
This research addresses the significant challenge of automating the annotation of medical images, with a focus on capsule endoscopy videos. The study introduces a novel approach that synergistically combines Deep Learning and Content-Based Image Retrieval (CBIR) techniques to streamline the annotation process. Two pre-trained Convolutional Neural Networks (CNNs), MobileNet and VGG16, were employed to extract and compare visual features from medical images. The methodology underwent rigorous validation using various performance metrics such as accuracy, AUC, precision, and recall. The MobileNet model demonstrated exceptional performance with a test accuracy of 98.4%, an AUC of 99.9%, a precision of 98.2%, and a recall of 98.6%. On the other hand, the VGG16 model achieved a test accuracy of 95.4%, an AUC of 99.2%, a precision of 97.3%, and a recall of 93.5%. These results indicate the high efficacy of the proposed method in the automated annotation of medical images, establishing it as a promising tool for medical applications. The study also highlights potential avenues for future research, including expanding the image retrieval scope to encompass entire endoscopy video databases. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
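As a minimal, hedged sketch of the kind of CBIR pipeline the abstract describes (not the authors' implementation), a pre-trained MobileNet with its classification head removed can embed each frame into a feature vector, and cosine similarity can rank already-annotated database images against a query frame; the file paths below are illustrative assumptions.

```python
# Minimal CBIR sketch: MobileNet feature extraction + cosine-similarity ranking.
# Illustrative only; paths and parameters are assumptions, not the paper's exact setup.
import numpy as np
import tensorflow as tf
from sklearn.metrics.pairwise import cosine_similarity

# Pre-trained MobileNet without the classifier; global average pooling yields one feature vector per image.
extractor = tf.keras.applications.MobileNet(weights="imagenet", include_top=False, pooling="avg")

def embed(image_path):
    """Load one image and return its MobileNet feature vector."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet.preprocess_input(x)
    return extractor.predict(x, verbose=0)

# Hypothetical database of already-annotated capsule-endoscopy frames.
database_paths = ["frames/annotated_001.png", "frames/annotated_002.png"]
database_feats = np.vstack([embed(p) for p in database_paths])

# Rank annotated frames by visual similarity to a new, unannotated frame.
query_feat = embed("frames/new_frame.png")
scores = cosine_similarity(query_feat, database_feats)[0]
ranking = np.argsort(scores)[::-1]
print([(database_paths[i], float(scores[i])) for i in ranking])
```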
2024
Authors
Leite, D; Camara, J; Rodrigues, J; Cunha, A;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
Glaucoma is a condition that affects the optic nerve, with loss of retinal nerve fibers, increased excavation of the optic nerve, and a progressive decrease in the visual field. It is the leading cause of irreversible blindness in the world. Manual classification of glaucoma is a complex and time-consuming process that requires assessing a variety of ocular features by experienced clinicians. Automated detection can assist the specialist in early diagnosis and effective treatment of glaucoma and prevent vision loss. This study developed a deep learning model based on vision transformers, called ViT-BRSET, to detect patients with increased excavation of the optic nerve automatically. ViT-BRSET is a neural network architecture that is particularly effective for computer vision tasks. The results of this study were promising, with an accuracy of 0.94, an F1-score of 0.91, and a recall of 0.94. The model was trained on a new dataset called BRSET, which consists of 16,112 fundus images of patients with increased excavation of the optic nerve. The results of this study suggest that ViT-BRSET has the potential to improve early diagnosis through early detection of optic nerve excavation, one of the main signs of glaucomatous disease. ViT-BRSET can be used to mass-screen patients, identifying those who need further examination by a doctor. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
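A hedged sketch of the kind of pipeline described above, assuming a binary task (normal vs. increased optic-nerve excavation) and an ImageNet-pretrained ViT-B/16 from torchvision; ViT-BRSET itself is not reproduced here, and the dataset path and hyperparameters are placeholders.

```python
# Sketch: fine-tune a pretrained Vision Transformer for increased-excavation detection.
# Assumptions: binary labels in ImageFolder layout; this is not the ViT-BRSET code.
import torch
import torch.nn as nn
from torchvision import datasets
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # 2 classes: normal / increased excavation

# The weights object carries the matching preprocessing (resize, crop, normalisation).
train_data = datasets.ImageFolder("brset_fundus/train", transform=weights.transforms())  # hypothetical path
loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```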
2024
Authors
Pereira, S; Cunha, A; Pinto, J;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
Building rehabilitation is a reality, and all phases of rehabilitation work need to be sustainable and promote healthy places to live in. Current procedures for assessing construction conditions are time-consuming, laborious and expensive, and they pose threats to the health and safety of engineers, especially when inspecting locations that are difficult to access. In the initial step, a survey of the condition of the building is carried out, which subsequently leads to a report on existing pathologies, intervention solutions, and associated costs. This survey involves an inspection of the site (through photographs and videos). Biological growth can also threaten the health of the people inhabiting the buildings: the World Health Organization states that the most important effects are an increased prevalence of respiratory symptoms, allergies and asthma, as well as perturbation of the immunological system. This work aims to draw attention to this problem and to contribute to detecting and locating biological growth (BG) defects automatically in images of building façades. To make this possible, a dataset of images of building components with and without biological growth is needed; at this moment, such a dataset does not exist, so it must be constructed before deep learning models can be applied. This paper identifies the steps required to build that dataset and presents real cases of building façades with BG, together with solutions to repair those defects. Conclusions and future work are also presented. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
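As a hedged illustration of the dataset-construction step outlined above, façade images could be organised into class folders (with and without biological growth) so that standard deep learning loaders can consume them later; the folder names, split ratio, and image size below are assumptions rather than the paper's specification.

```python
# Sketch: organise façade images into a two-class dataset (BG / no_defect) and load it.
# Folder names, split ratio, and image size are illustrative assumptions.
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, random_split

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expected layout:
#   facade_dataset/biological_growth/*.jpg
#   facade_dataset/no_defect/*.jpg
dataset = datasets.ImageFolder("facade_dataset", transform=transform)
n_train = int(0.8 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
print(f"{len(train_set)} training / {len(val_set)} validation images, classes: {dataset.classes}")
```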
2024
Authors
Fonseca, F; Nunes, B; Salgado, M; Silva, A; Cunha, A;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
Wireless capsule endoscopy is a non-invasive imaging method that allows observation of the inner lumen of the small intestine, but at the cost of lengthy reviewing of the resulting videos. The scientific community has therefore developed several machine learning strategies to help reduce that reviewing time. Such strategies are typically trained and evaluated on small sets of images and ultimately do not prove efficient when applied to full videos. Labelling full capsule endoscopy videos requires significant effort, leading to a lack of data in this medical area. Active learning strategies allow intelligent selection of datasets from a vast set of unlabelled data, maximizing learning and reducing annotation costs. In this experiment, we explored active learning methods to reduce the annotation effort for capsule endoscopy videos by compiling smaller datasets capable of representing their content. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
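One common active learning query strategy is entropy-based uncertainty sampling; the sketch below illustrates it under the assumption of a frame-level classifier and a pool of unlabelled frames, and is not necessarily the paper's exact selection method.

```python
# Sketch: entropy-based uncertainty sampling over a pool of unlabelled frames.
# The classifier, frame pool, and annotation budget are illustrative placeholders.
import torch

def select_for_annotation(model, unlabelled_frames, budget=100):
    """Return indices of the `budget` frames the model is least certain about."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(unlabelled_frames), dim=1)       # (N, C) class probabilities
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)         # predictive entropy per frame
    return torch.topk(entropy, k=min(budget, len(entropy))).indices  # most uncertain frames first

# Usage (hypothetical): idx = select_for_annotation(cnn, frame_batch, budget=50)
# The selected frames are labelled by clinicians, added to the training set, and the model is retrained.
```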
2024
Authors
Ferreira, H; Marta, A; Couto, I; Câmara, J; Beirão, JM; Cunha, A;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
Inherited retinal diseases such as Retinitis Pigmentosa and Stargardt’s disease are genetic conditions that cause the photoreceptors in the retina to deteriorate over time. This can lead to visual symptoms such as tubular vision, loss of central vision, nyctalopia (difficulty seeing in low light) or photophobia (sensitivity to bright light). Timely healthcare intervention is critical, as most forms of these conditions are currently untreatable and management is usually focused on minimizing further vision loss. Machine learning (ML) algorithms can play a crucial role in the detection of retinal diseases, especially considering the recent advancements in retinal imaging devices and the limited availability of public datasets on these diseases. These algorithms have the potential to help researchers gain new insights into disease progression from previously classified eye scans and the genetic profiles of patients. In this work, multi-class identification of the retinal diseases Retinitis Pigmentosa, Stargardt Disease, and Cone-Rod Dystrophy was performed using three pretrained models, ResNet101, ResNet50, and VGG19, as baselines, after they were shown to be effective in our computer vision task. These models were trained and validated on two datasets of autofluorescent retinal images: the first containing raw data, and the second improved with cropping to obtain better results. The best results were achieved using the ResNet101 model on the improved dataset, with an Accuracy (Acc) of 0.903, an Area under the ROC Curve (AUC) of 0.976, an F1-Score of 0.897, a Recall (REC) of 0.903, and a Precision (PRE) of 0.910. To further assess the reliability of these models for future data, an Explainable AI (XAI) analysis was conducted using Grad-CAM. Overall, the study showed promising capabilities of Deep Learning for the diagnosis of retinal diseases using medical imaging. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
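A hedged sketch of the kind of setup described above: fine-tuning an ImageNet-pretrained ResNet101 head for the three disease classes and computing a hook-based Grad-CAM heatmap over its last convolutional block; class names, the chosen target layer, and preprocessing are assumptions rather than the paper's exact configuration.

```python
# Sketch: 3-class fine-tuning of ResNet101 and a hook-based Grad-CAM on its last block.
# Class names, paths, and the target layer are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101, ResNet101_Weights

CLASSES = ["retinitis_pigmentosa", "stargardt", "cone_rod_dystrophy"]

weights = ResNet101_Weights.IMAGENET1K_V2
model = resnet101(weights=weights)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

# Capture activations and gradients of the last convolutional block for Grad-CAM.
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(image_batch, class_idx):
    """Return a heatmap highlighting the regions supporting `class_idx`."""
    scores = model(image_batch)
    model.zero_grad()
    scores[0, class_idx].backward()
    channel_weights = grads["v"].mean(dim=(2, 3), keepdim=True)        # channel-wise gradient averages
    cam = F.relu((channel_weights * acts["v"]).sum(dim=1))[0]          # weighted activation map
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)           # normalise to [0, 1]
    return F.interpolate(cam[None, None], size=image_batch.shape[-2:], mode="bilinear")[0, 0]

# Usage (hypothetical): x = weights.transforms()(img)[None]; heatmap = grad_cam(x, class_idx=0)
```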
2024
Authors
Laroca, H; Rocio, V; Cunha, A;
Publication
Procedia Computer Science
Abstract
Fake news spreads rapidly, creating issues and making detection harder. The purpose of this study is to determine whether fake news carries sentiment polarity (positive or negative), to identify the polarity of the sentiment present in its textual content, and to determine whether sentiment polarity is a reliable indicator of fake news. For this, we use a deep learning model called BERT (Bidirectional Encoder Representations from Transformers), trained on a sentiment polarity dataset, to classify the polarity of sentiments in a dataset of true and fake news. The findings show that sentiment polarity is not a reliable single feature for correctly recognizing fake news and must be combined with other features to improve classification accuracy. © 2024 The Author(s). Published by Elsevier B.V.
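As a hedged sketch of the pipeline described above (not the authors' fine-tuned model), a BERT-family sentiment classifier from the Hugging Face transformers library can score the polarity of news texts, which would then serve as one feature among others; the checkpoint and example headlines are illustrative.

```python
# Sketch: score sentiment polarity of news texts with a BERT-family classifier.
# The checkpoint and example headlines are illustrative; the paper trained its own BERT model.
from transformers import pipeline

# A general-purpose sentiment model; swap in a checkpoint fine-tuned on the target polarity dataset.
sentiment = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")

news_items = [
    "Officials confirm the new vaccine passed all safety trials.",      # hypothetical true item
    "SHOCKING: miracle cure banned by doctors, share before deleted!",  # hypothetical fake item
]

for text, result in zip(news_items, sentiment(news_items)):
    # Polarity alone is a weak signal; in practice it would be combined with other features
    # (source credibility, writing style, propagation patterns, ...) in a downstream classifier.
    print(f"{result['label']:8s}  score={result['score']:.3f}  |  {text}")
```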