2024
Authors
Miranda, I; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;
Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024
Abstract
Breast cancer, the most prevalent cancer among women, poses a significant healthcare challenge, demanding effective early detection for optimal treatment outcomes. Mammography, the gold standard for breast cancer detection, employs low-dose X-rays to reveal tissue details, particularly cancerous masses and calcium deposits. This work focuses on evaluating the impact of incorporating anatomical knowledge to improve the performance and robustness of a breast cancer classification model. In order to achieve this, a methodology was devised to generate anatomical pseudo-labels, simulating plausible anatomical variations in cancer masses. These variations, encompassing changes in mass size and intensity, closely reflect concepts from the BI-RADS scale. Besides anatomical-based augmentation, we propose a novel loss term promoting the learning of cancer grading by our model. Experiments were conducted on publicly available datasets simulating both in-distribution and out-of-distribution scenarios to thoroughly assess the model's performance under various conditions.
2024
Authors
Torto, IR; Cardoso, JS; Teixeira, LF;
Publication
Medical Imaging with Deep Learning, 3-5 July 2024, Paris, France.
Abstract
2024
Authors
Pereira, A; Carvalho, P; Côrte Real, L;
Publication
Advances in Internet of Things & Embedded Systems
Abstract
2024
Authors
Santos, T; Oliveira, H; Cunha, A;
Publication
COMPUTER SCIENCE REVIEW
Abstract
In recent years, the number of crimes involving weapons has grown on a large scale worldwide, mainly in locations where enforcement is lacking or possessing weapons is legal. It is necessary to combat this type of criminal activity by identifying criminal behavior early, allowing police and law enforcement agencies to take immediate action. Although the human visual system is highly evolved and able to process images quickly and accurately, an individual who watches very similar scenes for a long time may become slow and inattentive. In addition, large surveillance systems with numerous pieces of equipment require a surveillance team, which increases the cost of operation. Several solutions for automatic weapon detection based on computer vision exist; however, these have limited performance in challenging contexts. A systematic review of the current literature on deep learning-based weapon detection was conducted to identify the methods used, the main characteristics of the existing datasets, and the main problems in the area of automatic weapon detection. The most used models were the Faster R-CNN and the YOLO architecture. The use of realistic images and synthetic data showed improved performance. Several challenges were identified in weapon detection, such as poor lighting conditions and the difficulty of detecting small weapons, the latter being the most prominent. Finally, some future directions are outlined, with a special focus on small weapon detection.
2024
Authors
Victoriano, M; Oliveira, L; Oliveira, HP;
Publication
Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 2: VISAPP, Rome, Italy, February 27-29, 2024.
Abstract
Climate change is causing the emergence of new pest species and diseases, threatening economies, public health, and food security. In Europe, olive groves are crucial for producing olive oil and table olives; however, the presence of the olive fruit fly (Bactrocera oleae) poses a significant threat, causing crop losses and financial hardship. Early disease and pest detection methods are crucial for addressing this issue. This work presents a pioneering comparative performance study between two state-of-the-art object detection models, YOLOv5 and YOLOv8, for the detection of the olive fruit fly from trap images, marking the first application of these models in this context. The dataset was obtained by merging two existing datasets: the DIRT dataset, collected in Greece, and the CIMO-IPB dataset, collected in Portugal. To increase its diversity and size, the dataset was augmented, and both models were then fine-tuned. A set of metrics was calculated to assess both models' performance. Early detection techniques like these can be incorporated into electronic traps to effectively safeguard crops from the adverse impacts caused by climate change, ultimately ensuring food security and sustainable agriculture. © 2024 by SCITEPRESS – Science and Technology Publications, Lda.
2024
Authors
Teiga, I; Sousa, JV; Silva, F; Pereira, T; Oliveira, HP;
Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, PT III, UAHCI 2024
Abstract
Significant medical image visualization and annotation tools, tailored for clinical users, play a crucial role in disease diagnosis and treatment. Developing algorithms for annotation assistance, particularly machine learning (ML)-based ones, can be intricate, emphasizing the need for a user-friendly graphical interface for developers. Many software tools are available to meet these requirements, but there is still room for improvement, making the research for new tools highly compelling. The envisioned tool focuses on navigating sequences of DICOM images from diverse modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, Ultrasound (US), and X-rays. Specific requirements involve implementing manual annotation features such as freehand drawing, copying, pasting, and modifying annotations. A scripting plugin interface is essential for running Artificial Intelligence (AI)-based models and adjusting results. Additionally, adaptable surveys complement graphical annotations with textual notes, enhancing information provision. The user evaluation results pinpointed areas for improvement, including incorporating some useful functionalities, as well as enhancements to the user interface for a more intuitive and convenient experience. Despite these suggestions, participants praised the application's simplicity and consistency, highlighting its suitability for the proposed tasks. The ability to revisit annotations ensures flexibility and ease of use in this context.