2024
Authors
Neves, R;
Publication
CoRR
Abstract
2024
Authors
Alves, A; Pereira, J; Khanal, S; Morais, AJ; Filipe, V;
Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT II, OL2A 2023
Abstract
Modern agriculture faces important challenges in feeding a fast-growing global population in a sustainable way. One of the most important challenges is the increasing damage caused by pests to important crops. It is very important to control and manage pests in order to reduce the losses they cause. However, pest detection and monitoring are very resource-consuming tasks. The recent development of computer vision-based technology has made it possible to automate pest detection efficiently. In Mediterranean olive groves, the olive fly (Bactrocera oleae Rossi) is considered the key pest of the crop. This paper presents olive fly detection using lightweight YOLO-based models from versions 7 and 8, respectively YOLOv7-tiny and YOLOv8n. The proposed object detection models were trained, validated, and tested using two different image datasets collected in various locations in Portugal and Greece. The images consist of yellow sticky trap photos and McPhail trap photos containing olive fly exemplars. The performance of the models was evaluated using precision, recall, mAP.50, and mAP.95. The YOLOv7-tiny model's best performance is 88.3% precision, 85% recall, 90% mAP.50, and 53% mAP.95. The YOLOv8n model's best performance is 85% precision, 85% recall, 90% mAP.50, and 55% mAP.95. The YOLOv8n model achieved worse results than YOLOv7-tiny on a dataset without negative images (images without olive fly exemplars). Aiming at installing an experimental prototype in the olive grove, the YOLOv8n model was implemented on a Raspberry Pi 3 microcomputer running Ubuntu Server 23.04.
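For illustration only, the sketch below shows how such a YOLOv8n detector could be validated with the ultralytics Python package to obtain the precision, recall, mAP.50, and mAP.95 figures of the kind reported above; the weight file and dataset configuration names are placeholders, not artifacts from the paper.

# Hypothetical sketch: validate a trained olive fly YOLOv8n detector and report
# the metrics used in the abstract. File names below are placeholders.
from ultralytics import YOLO

model = YOLO("olive_fly_yolov8n.pt")         # trained detector weights (placeholder)
metrics = model.val(data="olive_fly.yaml")   # dataset config listing validation images (placeholder)

print(f"precision  : {metrics.box.mp:.3f}")  # mean precision over classes
print(f"recall     : {metrics.box.mr:.3f}")  # mean recall over classes
print(f"mAP.50     : {metrics.box.map50:.3f}")
print(f"mAP.50-.95 : {metrics.box.map:.3f}")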
2024
Authors
Fernandes, R; Pessoa, A; Salgado, M; de Paiva, A; Pacal, I; Cunha, A;
Publication
IEEE ACCESS
Abstract
Effective image and video annotation is a fundamental pillar of computer vision and artificial intelligence, crucial for the development of accurate machine learning models. Object tracking and image retrieval techniques are essential in this process, significantly improving the efficiency and accuracy of automatic annotation. This paper systematically investigates object tracking and image retrieval techniques and explores how these technologies can collectively enhance the efficiency and accuracy of the annotation process for image and video datasets. Object tracking is examined for its role in automating annotations by tracking objects across video sequences, while image retrieval is evaluated for its ability to suggest annotations for new images based on existing data. The review encompasses diverse methodologies, including advanced neural networks and machine learning techniques, highlighting their effectiveness in contexts such as medical analysis and urban monitoring. Despite notable advancements, challenges such as algorithm robustness and effective human-AI collaboration are identified. This review provides valuable insights into the current state and future potential of these technologies for improving image annotation processes, covering existing applications of the techniques and their full potential when combined.
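As a minimal illustration of the retrieval-based annotation suggestion described above (not code from any of the reviewed works), the sketch below compares a new image's feature vector against already-annotated images and proposes the label of the most similar one; the features here are random stand-ins for real backbone embeddings.

import numpy as np

def suggest_annotation(new_feat, annotated_feats, annotated_labels, min_sim=0.8):
    # Suggest a label for a new image from its nearest annotated neighbour
    # by cosine similarity; return no suggestion below the threshold.
    a = annotated_feats / np.linalg.norm(annotated_feats, axis=1, keepdims=True)
    q = new_feat / np.linalg.norm(new_feat)
    sims = a @ q                                   # cosine similarity to every annotated image
    best = int(np.argmax(sims))
    if sims[best] < min_sim:
        return None, float(sims[best])
    return annotated_labels[best], float(sims[best])

# Toy usage with random stand-in features (a real system would use CNN/ViT embeddings).
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 128))
labels = ["car", "car", "person", "bicycle", "person"]
print(suggest_annotation(feats[2] + 0.05 * rng.normal(size=128), feats, labels))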
2024
Authors
Soares, RP; Goncalves, R; Briga-Sa, A; Martins, J; Branco, F;
Publication
GOOD PRACTICES AND NEW PERSPECTIVES IN INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 3, WORLDCIST 2024
Abstract
Education is vital in fostering economic growth and societal development, particularly in developing countries like Timor-Leste. As technology has revolutionised education in the digital transformation era, the concept of a smart university, driven by advanced technologies and data analytics, has gained prominence globally. Timor-Leste, amid its progress in institutional structures and public infrastructure, is also exploring the integration of smart technologies in higher education. This underscores the commitment of the East Timor National Education Strategic Plan (NESP) 2011-2030 to meet national and international standards, positioning the country at the forefront of educational innovation. This study aims to assess the feasibility of implementing a Smart University in Timor-Leste by evaluating the country's readiness to embrace digital technologies and integrate them into higher education practices. The research employs a Design Science Research methodology in which qualitative and quantitative data are gathered through interviews, surveys, and document analysis. Design artefacts, including a system architecture and an evaluation framework, are developed to provide a comprehensive understanding of the technological and informatics aspects of implementing a Smart University in Timor-Leste. The findings will contribute to decision-making and inform the implementation plan, offering valuable insights into stakeholders' perspectives and perceptions, and will support the advancement of the educational landscape in Timor-Leste by integrating smart technologies and innovative practices in higher education.
2024
Authors
Dani, M; Rio Torto, I; Alaniz, S; Akata, Z;
Publication
PATTERN RECOGNITION, DAGM GCPR 2023
Abstract
Post-hoc explanation methods have often been criticised for abstracting away the decision-making process of deep neural networks. In this work, we would like to provide natural language descriptions for what different layers of a vision backbone have learned. Our DeViL method generates textual descriptions of visual features at different layers of the network as well as highlights the attribution locations of learned concepts. We train a transformer network to translate individual image features of any vision layer into a prompt that a separate off-the-shelf language model decodes into natural language. By employing dropout both per-layer and per-spatial-location, our model can generalize training on image-text pairs to generate localized explanations. As it uses a pre-trained language model, our approach is fast to train and can be applied to any vision backbone. Moreover, DeViL can create open-vocabulary attribution maps corresponding to words or phrases even outside the training scope of the vision model. We demonstrate that DeViL generates textual descriptions relevant to the image content on CC3M, surpassing previous lightweight captioning models and attribution maps, uncovering the learned concepts of the vision backbone. Further, we analyze fine-grained descriptions of layers as well as specific spatial locations and show that DeViL outperforms the current state-of-the-art on the neuron-wise descriptions of the MILANNOTATIONS dataset.
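The following is a schematic sketch of the general feature-to-prompt mechanism described above, assuming a simple linear translator and a frozen GPT-2; it is not the authors' DeViL implementation and omits the per-layer and per-spatial-location dropout as well as the training on image-text pairs.

import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class FeatureToPrompt(nn.Module):
    # Maps one vision-layer feature vector to a short sequence of soft prompt
    # embeddings in the language model's embedding space (simplified stand-in
    # for the transformer translator used in the paper).
    def __init__(self, feat_dim=2048, lm_dim=768, prompt_len=8):
        super().__init__()
        self.prompt_len = prompt_len
        self.proj = nn.Sequential(nn.Linear(feat_dim, lm_dim * prompt_len), nn.Tanh())

    def forward(self, feat):                                   # feat: (batch, feat_dim)
        return self.proj(feat).view(feat.size(0), self.prompt_len, -1)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()            # frozen off-the-shelf LM
translator = FeatureToPrompt()                                 # would be trained on image-text pairs

feature = torch.randn(1, 2048)                                 # stand-in for a backbone feature vector
embeds = translator(feature)

# Greedy decoding from the soft prompt; with an untrained translator the output is
# meaningless, but the mechanics match the feature -> prompt -> text pipeline.
generated = []
with torch.no_grad():
    for _ in range(15):
        next_id = lm(inputs_embeds=embeds).logits[:, -1, :].argmax(-1)
        generated.append(next_id.item())
        embeds = torch.cat([embeds, lm.transformer.wte(next_id).unsqueeze(1)], dim=1)
print(tokenizer.decode(generated))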
2024
Authors
Padua, L; Castro, JP; Castro, J; Sousa, JJ; Castro, M;
Publication
DRONES
Abstract
Climate change has intensified the need for robust fire prevention strategies. Sustainable forest fuel management is crucial in mitigating the occurrence and rapid spread of forest fires. This study assessed the impact of vegetation clearing and/or grazing over a three-year period in the herbaceous and shrub layers of a Mediterranean oak forest. Using high-resolution multispectral data from an unmanned aerial vehicle (UAV), four flight surveys were conducted from 2019 (pre- and post-clearing) to 2021. These data were used to evaluate different scenarios: combined vegetation clearing and grazing, the individual application of each method, and a control scenario that was neither cleared nor purposely grazed. The UAV data allowed for the detailed monitoring of vegetation dynamics, enabling classification into arboreal, shrub, herbaceous, and soil categories. Grazing pressure was estimated through GPS collars on the sheep flock. Additionally, a good correlation (r = 0.91) was observed between UAV-derived vegetation volume estimates and field measurements. These practices proved to be efficient in fuel management, with cleared and grazed areas showing the lowest vegetation regrowth, followed by areas only subjected to vegetation clearing. On the other hand, areas not subjected to any of these treatments presented rapid vegetation growth.
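The reported agreement is a correlation coefficient between UAV-derived vegetation volume estimates and field measurements; the short sketch below shows how such a coefficient can be computed with SciPy, using placeholder values rather than the study's data.

import numpy as np
from scipy.stats import pearsonr

# Placeholder values only; the study's actual plot-level measurements are not reproduced here.
uav_volume_m3   = np.array([12.4, 8.1, 15.3, 5.9, 10.7, 7.2])   # UAV-derived vegetation volume
field_volume_m3 = np.array([11.9, 8.8, 14.6, 6.4, 11.3, 6.8])   # field-measured reference volume

r, p_value = pearsonr(uav_volume_m3, field_volume_m3)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")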