2023
Authors
Serrano e Silva, P; Cruz, R; Shihavuddin, ASM; Gonçalves, T;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
2023
Authors
Cardoso, JS; Cruz, R; Albuquerque, T;
Publication
CoRR
Abstract
2022
Authors
Geros, AF; Cruz, R; de Chaumont, F; Cardoso, JS; Aguiar, P;
Publication
Abstract
2023
Authors
Barbero-Gómez, J; Cruz, R; Cardoso, JS; Gutiérrez, PA; Hervás-Martínez, C;
Publication
ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2023, PT II
Abstract
This paper introduces an evaluation procedure to validate the efficacy of explanation methods for Convolutional Neural Network (CNN) models in ordinal regression tasks. Two ordinal methods are contrasted against a cross-entropy baseline across four datasets. A statistical analysis demonstrates that attribution methods such as Grad-CAM and IBA perform significantly better with ordinal regression CNN models than with the baseline approach on most ordinal and nominal metrics. The study suggests that incorporating ordinal information into the construction of the attribution maps may improve the explanations further.
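As a point of reference for the attribution methods mentioned above, below is a minimal Grad-CAM sketch for a generic CNN classifier in PyTorch. The network, layer choice and tensor shapes are illustrative assumptions; it does not reproduce the paper's ordinal models, its evaluation procedure, or the IBA method.

```python
# Minimal Grad-CAM sketch (illustrative model; not the paper's ordinal CNNs).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy classifier that also exposes its last convolutional feature maps."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        feats = self.features(x)
        return self.classifier(self.pool(feats).flatten(1)), feats

def grad_cam(model, image, target_class):
    """Return a Grad-CAM attribution map for one image of shape (C, H, W)."""
    model.eval()
    logits, feats = model(image.unsqueeze(0))
    feats.retain_grad()                                  # keep feature-map gradients
    logits[0, target_class].backward()
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)  # GAP of the gradients
    cam = torch.relu((weights * feats).sum(dim=1))       # weighted sum + ReLU
    return (cam / (cam.max() + 1e-8)).squeeze(0).detach()

cam = grad_cam(TinyCNN(), torch.rand(3, 64, 64), target_class=2)
print(cam.shape)  # attribution map at the feature-map resolution, here 64x64
```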
2024
Authors
Cruz, RPM; Shihavuddin, ASM; Maruf, MH; Cardoso, JS;
Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I
Abstract
After the learning process, certain types of images may not be modeled correctly because they were not well represented in the training set. These failures can then be compensated for by collecting more images from the real world and incorporating them into the learning process, an expensive procedure known as active learning. The proposed twist, called active supervision, uses the model itself to change existing images in the direction where the decision boundary is less well defined and asks the user how the new image should be labeled. Experiments in the context of class imbalance show that the technique is able to increase model performance on rare classes. Active human supervision provides the model with crucial information during training that the training set lacks.
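The core loop of the idea described above can be sketched as follows: starting from a training image, the input is nudged in the direction that maximizes the model's predictive uncertainty, and the resulting image is then shown to a human for labeling. The step size, number of steps, and entropy-based uncertainty measure are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of the active-supervision idea: perturb an existing image
# towards the region where the model is least certain, then ask the user to
# label the perturbed image.  `model` is any classifier returning logits.
import torch
import torch.nn.functional as F

def synthesise_query(model, image, steps=10, step_size=0.01):
    """Gradient-ascend the prediction entropy starting from `image` (C, H, W)."""
    x = image.clone().unsqueeze(0).requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(model(x), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum()  # uncertainty
        entropy.backward()
        with torch.no_grad():
            x += step_size * x.grad.sign()   # move towards the fuzzy boundary
            x.clamp_(0, 1)                   # stay a valid image
            x.grad.zero_()
    return x.squeeze(0).detach()             # shown to the user for labelling
```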
2024
Authors
Campos, F; Cerqueira, FG; Cruz, RPM; Cardoso, JS;
Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I
Abstract
Autonomous driving can reduce the number of road accidents due to human error and result in safer roads. One important part of the system is the perception unit, which provides information about the environment surrounding the car. Currently, most manufacturers are using not only RGB cameras, which are passive sensors that capture light already present in the environment, but also Lidar, a sensor that actively emits laser pulses towards a surface or object and measures the reflection and time-of-flight. Previous work, YOLOP, already proposed a model for object detection and semantic segmentation, but using only RGB. This work extends it to Lidar and evaluates performance on KITTI, a public autonomous driving dataset. The implementation shows improved precision across objects of all sizes and is made fully available at https://github.com/filipepcampos/yolomm.
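One simple way to combine the two modalities, sketched below under stated assumptions, is early fusion: project the Lidar point cloud into the image plane as a sparse depth channel and widen the detector's first convolution to accept it. The calibration matrix, point cloud, and resolutions are made up for illustration; this is not necessarily the fusion scheme used in the linked repository.

```python
# Illustrative RGB + Lidar early fusion (hypothetical calibration and shapes;
# not the architecture in https://github.com/filipepcampos/yolomm).
import torch
import torch.nn as nn

def lidar_to_depth_image(points, calib, height, width):
    """Project Nx3 points (camera frame) with a 3x4 projection matrix
    into a (1, H, W) sparse depth map."""
    hom = torch.cat([points, torch.ones(len(points), 1)], dim=1)  # N x 4
    proj = hom @ calib.T                                          # N x 3
    u = (proj[:, 0] / proj[:, 2]).long()
    v = (proj[:, 1] / proj[:, 2]).long()
    depth = proj[:, 2]
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height) & (depth > 0)
    img = torch.zeros(1, height, width)
    img[0, v[keep], u[keep]] = depth[keep]
    return img

calib = torch.tensor([[700.,   0., 640., 0.],     # made-up pinhole projection
                      [  0., 700., 190., 0.],
                      [  0.,   0.,   1., 0.]])
points = (torch.rand(2000, 3) * torch.tensor([20., 4., 60.])
          - torch.tensor([10., 2., 0.]))          # random points for illustration
rgb = torch.rand(1, 3, 384, 1280)                 # KITTI-like resolution
depth = lidar_to_depth_image(points, calib, 384, 1280).unsqueeze(0)

# Widen the detector's stem from 3 to 4 input channels (RGB + depth).
stem = nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1)
features = stem(torch.cat([rgb, depth], dim=1))
```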