2024
Authors
Torto, IR; Gonçalves, T; Cardoso, JS; Teixeira, LF;
Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024
Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by clinical communities. In medical image analysis, gradient-based class activation maps are the most widely used explanation methods, and the field lacks a more in-depth investigation into inherently interpretable models that integrate knowledge ensuring the model learns the correct rules. B-cos networks, a new approach that increases the interpretability of deep neural networks by inducing weight-input alignment during training, have shown promising results on natural image classification. In this work, we study the suitability of B-cos networks for the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation across several explanation quality assessment metrics. We find that, just as in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when more explanation properties are considered. © 2024 IEEE.
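As background on the mechanism the abstract refers to, the B-cos transform replaces a standard linear map with an alignment-scaled one, out = |cos(x, w)|^(B-1) · (ŵ · x), so a unit's response is suppressed unless the input aligns with its weights. Below is a minimal PyTorch sketch following the published formulation of Böhle et al. (2022); the layer name and default B value are illustrative, not this paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Sketch of a B-cos linear layer:
    out = |cos(x, w)|^(B-1) * (w_hat . x), with unit-norm weight rows w_hat,
    so outputs are down-weighted unless the input aligns with the weights."""

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)             # unit-norm rows
        lin = F.linear(x, w_hat)                            # w_hat . x
        cos = lin / (x.norm(dim=-1, keepdim=True) + 1e-8)   # cosine of the angle
        return cos.abs().pow(self.b - 1) * lin              # alignment-scaled output

# Usage: y = BcosLinear(128, 10)(torch.randn(4, 128))
```

Stacking such layers in place of standard linear or convolutional ones is what lets the model's own weight-input alignments serve as the explanation maps evaluated in the paper.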
2024
Authors
Patrício, C; Neves, JC; Teixeira, LF;
Publication
ACM COMPUTING SURVEYS
Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for strategies that explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based, and concept-based explanation methods. Moreover, this work reviews existing medical imaging datasets and the metrics used to evaluate the quality of explanations. In addition, we include a performance comparison among a set of report-generation-based methods. Finally, the major challenges in applying XAI to medical imaging and future research directions on the topic are discussed.
2024
Authors
Patrício, C; Teixeira, LF; Neves, JC;
Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024
Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on concept-annotated datasets, which are scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require fewer concept-annotated samples to achieve performance comparable to approaches specifically devised for automatic concept generation. © 2024 IEEE.
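To illustrate the general recipe behind this line of work, zero-shot classification with CLIP can score an image against concept-based class descriptions used as the textual side of the image-text similarity. Below is a minimal sketch using the Hugging Face CLIP API; the class names, concept texts, and image path are hypothetical placeholders, not the paper's actual prompts or its embedding learning strategy.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical concept-based descriptions per class (illustrative only).
CLASS_CONCEPTS = {
    "melanoma": "a skin lesion with asymmetry, irregular borders and multiple colours",
    "nevus": "a symmetric skin lesion with uniform colour and regular borders",
}

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("lesion.jpg")  # placeholder path
inputs = processor(text=list(CLASS_CONCEPTS.values()),
                   images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)                          # image-text similarities
probs = outputs.logits_per_image.softmax(dim=-1)       # one score per class
print(list(CLASS_CONCEPTS)[probs.argmax().item()])     # predicted class
```

The paper's contribution goes beyond this zero-shot baseline by learning embeddings that adapt CLIP to the skin lesion task; the sketch only shows where concept descriptions enter the pipeline.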
2024
Authors
Oliveira, M; Cerqueira, R; Pinto, JR; Fonseca, J; Teixeira, LF;
Publication
IEEE Transactions on Intelligent Vehicles
Abstract
2024
Authors
Santos, T; Oliveira, H; Cunha, A;
Publication
COMPUTER SCIENCE REVIEW
Abstract
In recent years, the number of crimes involving weapons has grown on a large scale worldwide, mainly in locations where enforcement is lacking or where possessing weapons is legal. Combating this type of criminal activity requires identifying criminal behavior early, allowing police and law enforcement agencies to act immediately. Although the human visual system is highly evolved and able to process images quickly and accurately, an individual watching very similar footage for a long time is prone to slowness and lapses of attention. In addition, large surveillance systems with numerous devices require a surveillance team, which increases the cost of operation. Several computer-vision-based solutions for automatic weapon detection exist; however, they have limited performance in challenging contexts. A systematic review of the current literature on deep-learning-based weapon detection was conducted to identify the methods used, the main characteristics of existing datasets, and the main problems in the area of automatic weapon detection. The most used models were Faster R-CNN and the YOLO architecture. The use of realistic images and synthetic data showed improved performance. Several challenges were identified in weapon detection, such as poor lighting conditions and the difficulty of detecting small weapons, the latter being the most prominent. Finally, some future directions are outlined, with a special focus on small weapon detection.
2024
Authors
Victoriano, M; Oliveira, L; Oliveira, HP;
Publication
Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 2: VISAPP, Rome, Italy, February 27-29, 2024.
Abstract
Climate change is causing the emergence of new pest species and diseases, threatening economies, public health, and food security. In Europe, olive groves are crucial for producing olive oil and table olives; however, the presence of the olive fruit fly (Bactrocera oleae) poses a significant threat, causing crop losses and financial hardship. Early disease and pest detection methods are crucial for addressing this issue. This work presents a pioneering comparative performance study between two state-of-the-art object detection models, YOLOv5 and YOLOv8, for detecting the olive fruit fly in trap images, marking the first application of these models in this context. The dataset was obtained by merging two existing datasets: the DIRT dataset, collected in Greece, and the CIMO-IPB dataset, collected in Portugal. To increase its diversity and size, the dataset was augmented, and both models were then fine-tuned. A set of metrics was calculated to assess both models' performance. Early detection techniques like these can be incorporated into electronic traps to effectively safeguard crops from the adverse impacts of climate change, ultimately ensuring food security and sustainable agriculture. © 2024 by SCITEPRESS – Science and Technology Publications, Lda.
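For context, fine-tuning a detector such as YOLOv8 on trap images typically follows the Ultralytics training API. Below is a minimal sketch, assuming a hypothetical dataset config olive_fly.yaml that points at the merged DIRT + CIMO-IPB images in Ultralytics format; epoch count and image size are illustrative, not the paper's settings.

```python
from ultralytics import YOLO

# Start from COCO-pretrained weights and fine-tune on the trap-image dataset.
model = YOLO("yolov8n.pt")
model.train(data="olive_fly.yaml", epochs=100, imgsz=640)

# Evaluate on the validation split; returns mAP and related detection metrics.
metrics = model.val()
print(metrics.box.map50)  # mAP@0.5, one of the metrics such studies report
```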