2024
Authors
Patrício, C; Barbano, CA; Fiandrotti, A; Renzulli, R; Grangetto, M; Teixeira, LF; Neves, JC;
Publication
CoRR
Abstract
2024
Authors
Santos, T; Oliveira, H; Cunha, A;
Publication
COMPUTER SCIENCE REVIEW
Abstract
In recent years, the number of crimes involving weapons has grown on a large scale worldwide, mainly in locations where enforcement is lacking or possessing weapons is legal. To combat this type of criminal activity, it is necessary to identify criminal behavior early and allow police and law enforcement agencies to take immediate action. Although the human visual system is highly evolved and able to process images quickly and accurately, an individual who watches something very similar for a long time may become slow and inattentive. In addition, large surveillance systems with numerous pieces of equipment require a surveillance team, which increases the cost of operation. Several solutions for automatic weapon detection based on computer vision exist; however, they have limited performance in challenging contexts. A systematic review of the current literature on deep learning-based weapon detection was conducted to identify the methods used, the main characteristics of the existing datasets, and the main problems in the area of automatic weapon detection. The most used models were the Faster R-CNN and YOLO architectures. The use of realistic images and synthetic data showed improved performance. Several challenges were identified in weapon detection, such as poor lighting conditions and the difficulty of detecting small weapons, the latter being the most prominent. Finally, some future directions are outlined, with a special focus on small weapon detection.
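The review identifies Faster R-CNN and YOLO as the most used detectors. As a purely illustrative, hedged sketch (not the setup of any reviewed paper), the snippet below runs a COCO-pretrained Faster R-CNN from torchvision on a single frame; the image path and confidence threshold are placeholder assumptions, and an actual weapon detector would require fine-tuning on a dedicated weapon dataset.

```python
# Minimal sketch: running a pretrained Faster R-CNN detector with torchvision.
# The COCO-pretrained weights do not include weapon classes; a real system
# would be fine-tuned on a weapon dataset, which is not shown here.
import torch
import torchvision
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torchvision.io.read_image("frame.jpg")    # placeholder surveillance frame, uint8 CxHxW
image = convert_image_dtype(image, torch.float)   # scale to [0, 1]

with torch.no_grad():
    detections = model([image])[0]                # dict with boxes, labels, scores

keep = detections["scores"] > 0.7                 # arbitrary confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```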
2024
Authors
Victoriano, M; Oliveira, L; Oliveira, HP;
Publication
Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 2: VISAPP, Rome, Italy, February 27-29, 2024.
Abstract
Climate change is causing the emergence of new pest species and diseases, threatening economies, public health, and food security. In Europe, olive groves are crucial for producing olive oil and table olives; however, the presence of the olive fruit fly (Bactrocera oleae) poses a significant threat, causing crop losses and financial hardship. Early disease and pest detection methods are crucial for addressing this issue. This work presents a pioneering comparative performance study between two state-of-the-art object detection models, YOLOv5 and YOLOv8, for the detection of the olive fruit fly from trap images, marking the first application of these models in this context. The dataset was obtained by merging two existing datasets: the DIRT dataset, collected in Greece, and the CIMO-IPB dataset, collected in Portugal. To increase its diversity and size, the dataset was augmented, and then both models were fine-tuned. A set of metrics was calculated to assess both models' performance. Early detection techniques like these can be incorporated into electronic traps to effectively safeguard crops from the adverse impacts of climate change, ultimately ensuring food security and sustainable agriculture.
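As a hedged illustration of the fine-tuning workflow described above, the sketch below uses the ultralytics package to fine-tune small YOLOv5 and YOLOv8 variants and report a validation metric. The dataset configuration file name, the chosen weight files, and the hyperparameters are assumptions, not the authors' actual settings.

```python
# Minimal sketch of fine-tuning both YOLO families with the ultralytics package.
# "olive_fly.yaml" (a merged DIRT + CIMO-IPB dataset config) is a placeholder;
# weight file names and hyperparameters may vary by package version.
from ultralytics import YOLO

for weights in ("yolov5nu.pt", "yolov8n.pt"):    # small variants of each family
    model = YOLO(weights)                         # load pretrained detector
    model.train(data="olive_fly.yaml", epochs=100, imgsz=640)  # fine-tune on trap images
    metrics = model.val()                         # COCO-style detection metrics
    print(weights, metrics.box.map50)             # e.g. mAP@0.5 per model
```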
2024
Authors
Teiga, I; Sousa, JV; Silva, F; Pereira, T; Oliveira, HP;
Publication
UNIVERSAL ACCESS IN HUMAN-COMPUTER INTERACTION, PT III, UAHCI 2024
Abstract
Medical image visualization and annotation tools tailored for clinical users play a crucial role in disease diagnosis and treatment. Developing algorithms for annotation assistance, particularly machine learning (ML)-based ones, can be intricate, emphasizing the need for a user-friendly graphical interface for developers. Many software tools are available to meet these requirements, but there is still room for improvement, making research into new tools highly compelling. The envisioned tool focuses on navigating sequences of DICOM images from diverse modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, Ultrasound (US), and X-rays. Specific requirements involve implementing manual annotation features such as freehand drawing, copying, pasting, and modifying annotations. A scripting plugin interface is essential for running Artificial Intelligence (AI)-based models and adjusting their results. Additionally, adaptable surveys complement graphical annotations with textual notes, enhancing the information provided. The user evaluation results pinpointed areas for improvement, including the incorporation of additional functionalities and enhancements to the user interface for a more intuitive and convenient experience. Despite these suggestions, participants praised the application's simplicity and consistency, highlighting its suitability for the proposed tasks. The ability to revisit annotations ensures flexibility and ease of use in this context.
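A hedged sketch of the kind of DICOM series loading such a tool relies on is shown below, using pydicom; the folder path is a placeholder, and the display step is only a stand-in for the tool's navigation and annotation interface.

```python
# Minimal sketch: load and order a DICOM series for display.
# The series directory is a placeholder; pydicom and matplotlib are assumed available.
from pathlib import Path
import pydicom
import matplotlib.pyplot as plt

series_dir = Path("study/series_001")                 # placeholder path to one series
slices = [pydicom.dcmread(p) for p in series_dir.glob("*.dcm")]
slices.sort(key=lambda ds: int(ds.InstanceNumber))    # order slices within the sequence

# Show the middle slice; a full viewer would add scrolling and annotation overlays.
plt.imshow(slices[len(slices) // 2].pixel_array, cmap="gray")
plt.axis("off")
plt.show()
```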
2024
Authors
Vale, P; Boer, J; Oliveira, HP; Pereira, T;
Publication
2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024
Abstract
The early and accurate detection and grading characterization of brain cancer will have a positive impact on the treatment plan for these patients. AI-based models can help analyze Magnetic Resonance Imaging (MRI) to make an initial assessment of the tumor grading. The objective of this work was to develop an AI-based model to classify tumor grading using MRI. Two regions of interest were explored, with several levels of complexity for the neural network architecture, and two strategies to deal with unbalanced data. The best results were obtained for the most complex architecture (ResNet50) with a combination of a weighted random sampler and data augmentation, achieving a balanced accuracy of 62.26%. This work confirmed that complex problems require a denser neural network and strategies to deal with unbalanced data.
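The imbalance-handling strategy reported above (a weighted random sampler combined with data augmentation feeding a ResNet50) can be sketched in PyTorch as follows; the dataset folder, transforms, and hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: WeightedRandomSampler plus augmentation feeding a ResNet50 classifier.
# Dataset path, class counts, and hyperparameters are illustrative only.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, models, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("mri_slices/train", transform=augment)  # placeholder folder

# Inverse-frequency weights so minority grades are sampled more often.
counts = torch.bincount(torch.tensor(train_set.targets))
sample_weights = (1.0 / counts.float())[train_set.targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(train_set), replacement=True)
loader = DataLoader(train_set, batch_size=32, sampler=sampler)

model = models.resnet50(weights="IMAGENET1K_V2")   # pretrained backbone (torchvision >= 0.13)
model.fc = torch.nn.Linear(model.fc.in_features, len(train_set.classes))  # tumor grades
```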
2024
Authors
Teixeira, M; Silva, F; Ferreira, RM; Pereira, T; Figueiredo, C; Oliveira, HP;
Publication
NPJ PRECISION ONCOLOGY
Abstract
Recent studies have shown that the microbiome can impact cancer development, progression, and response to therapies, suggesting microbiome-based approaches for cancer characterization. As cancer-related signatures are complex and implicate many taxa, their discovery often requires Machine Learning approaches. This review discusses Machine Learning methods for cancer characterization from microbiome data. It focuses on the implications of choices made during sample collection, feature selection, and pre-processing. It also discusses ML model selection, offering guidance on how to choose an ML model, and model validation. Finally, it enumerates current limitations and how they may be surpassed. Proposed methods, often based on Random Forests, show promising results, which are nevertheless insufficient for widespread clinical usage. Studies often report conflicting results, mainly due to ML models with poor generalizability. We expect that evaluating models with expanded, hold-out datasets, removing technical artifacts, exploring representations of the microbiome other than taxonomic profiles, leveraging advances in deep learning, and developing ML models better adapted to the characteristics of microbiome data will improve the performance and generalizability of models and enable their usage in the clinic.
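As a hedged baseline illustration of the Random Forest approach the review discusses, the sketch below trains a classifier on a synthetic taxa-abundance matrix with a stratified hold-out split; all data here are randomly generated placeholders, not results from the review.

```python
# Minimal sketch: Random Forest on taxonomic abundance profiles with a held-out test split.
# X (samples x taxa) and y (cancer vs. control) are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((200, 500))            # placeholder: 200 samples, 500 taxa abundances
y = rng.integers(0, 2, 200)           # placeholder: binary phenotype labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("hold-out AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```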