Publications by CTM

2024

Anatomical Concept-based Pseudo-labels for Increased Generalizability in Breast Cancer Multi-center Data

Authors
Miranda, I; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
Breast cancer, the most prevalent cancer among women, poses a significant healthcare challenge, demanding effective early detection for optimal treatment outcomes. Mammography, the gold standard for breast cancer detection, employs low-dose X-rays to reveal tissue details, particularly cancerous masses and calcium deposits. This work focuses on evaluating the impact of incorporating anatomical knowledge to improve the performance and robustness of a breast cancer classification model. To achieve this, a methodology was devised to generate anatomical pseudo-labels, simulating plausible anatomical variations in cancer masses. These variations, encompassing changes in mass size and intensity, closely reflect concepts from the BI-RADS scale. Besides anatomical-based augmentation, we propose a novel loss term promoting the learning of cancer grading by our model. Experiments were conducted on publicly available datasets simulating both in-distribution and out-of-distribution scenarios to thoroughly assess the model's performance under various conditions.

2024

Parameter-Efficient Generation of Natural Language Explanations for Chest X-ray Classification

Authors
Torto, IR; Cardoso, JS; Teixeira, LF;

Publication
Medical Imaging with Deep Learning, 3-5 July 2024, Paris, France.

Abstract

2024

A Transition Towards Virtual Representations of Visual Scenes

Authors
Pereira, A; Carvalho, P; Côrte Real, L;

Publication
Advances in Internet of Things & Embedded Systems

Abstract
We propose a unified architecture for visual scene understanding, aimed at overcoming the limitations of traditional, fragmented approaches in computer vision. Our work focuses on creating a system that accurately and coherently interprets visual scenes, with the ultimate goal of providing a 3D virtual representation, which is particularly useful for applications in virtual and augmented reality. By integrating various visual and semantic processing tasks into a single, adaptable framework, our architecture simplifies the design process, ensuring a seamless and consistent scene interpretation. This is particularly important in complex systems that rely on 3D synthesis, as the need for precise and semantically coherent scene descriptions keeps growing. Our unified approach addresses these challenges, offering a flexible and efficient solution. We demonstrate the practical effectiveness of our architecture through a proof-of-concept system and explore its potential in various application domains, proving its value in advancing the field of computer vision.

2024

Systematic review on weapon detection in surveillance footage through deep learning

Authors
Santos, T; Oliveira, H; Cunha, A;

Publication
Computer Science Review

Abstract
In recent years, the number of crimes involving weapons has grown on a large scale worldwide, mainly in locations where enforcement is lacking or possessing weapons is legal. Combating this type of criminal activity requires identifying criminal behavior early and allowing police and law enforcement agencies to take immediate action. Although the human visual system is highly evolved and able to process images quickly and accurately, an individual who watches very similar footage for a long time may become slow and inattentive. In addition, large surveillance systems with numerous pieces of equipment require a surveillance team, which increases the cost of operation. Several computer vision-based solutions exist for automatic weapon detection; however, these have limited performance in challenging contexts. A systematic review of the current literature on deep learning-based weapon detection was conducted to identify the methods used, the main characteristics of the existing datasets, and the main problems in the area of automatic weapon detection. The most used models were Faster R-CNN and the YOLO architecture. The use of realistic images and synthetic data showed improved performance. Several challenges were identified in weapon detection, such as poor lighting conditions and the difficulty of detecting small weapons, the latter being the most prominent. Finally, some future directions are outlined, with a special focus on small weapon detection.

2024

Comparative Study Between Object Detection Models, for Olive Fruit Fly Identification

Authors
Victoriano, M; Oliveira, L; Oliveira, HP;

Publication
Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 2: VISAPP, Rome, Italy, February 27-29, 2024.

Abstract
Climate change is causing the emergence of new pest species and diseases, threatening economies, public health, and food security. In Europe, olive groves are crucial for producing olive oil and table olives; however, the presence of the olive fruit fly (Bactrocera oleae) poses a significant threat, causing crop losses and financial hardship. Early disease and pest detection methods are crucial for addressing this issue. This work presents a pioneering comparative performance study between two state-of-the-art object detection models, YOLOv5 and YOLOv8, for the detection of the olive fruit fly from trap images, marking the first application of these models in this context. The dataset was obtained by merging two existing datasets: the DIRT dataset, collected in Greece, and the CIMO-IPB dataset, collected in Portugal. To increase its diversity and size, the dataset was augmented, and then both models were fine-tuned. A set of metrics was calculated to assess both models' performance. Early detection techniques like these can be incorporated into electronic traps to effectively safeguard crops from the adverse impacts caused by climate change, ultimately ensuring food security and sustainable agriculture.

2024

Radiological Medical Imaging Annotation and Visualization Tool

Authors
Teiga, I; Sousa, JV; Silva, F; Pereira, T; Oliveira, HP;

Publication
Universal Access in Human-Computer Interaction, Pt III, UAHCI 2024

Abstract
Medical image visualization and annotation tools tailored for clinical users play a crucial role in disease diagnosis and treatment. Developing algorithms for annotation assistance, particularly machine learning (ML)-based ones, can be intricate, emphasizing the need for a user-friendly graphical interface for developers. Many software tools are available to meet these requirements, but there is still room for improvement, making research into new tools highly compelling. The envisioned tool focuses on navigating sequences of DICOM images from diverse modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, Ultrasound (US), and X-rays. Specific requirements involve implementing manual annotation features such as freehand drawing, copying, pasting, and modifying annotations. A scripting plugin interface is essential for running Artificial Intelligence (AI)-based models and adjusting results. Additionally, adaptable surveys complement graphical annotations with textual notes, enhancing information provision. The user evaluation results pinpointed areas for improvement, including incorporating some useful functionalities, as well as enhancements to the user interface for a more intuitive and convenient experience. Despite these suggestions, participants praised the application's simplicity and consistency, highlighting its suitability for the proposed tasks. The ability to revisit annotations ensures flexibility and ease of use in this context.
