2023
Authors
Bellas, F; Sousa, A;
Publication
Frontiers Robotics AI
Abstract
2023
Authors
Leão, G; Almeida, F; Trigo, E; Ferreira, H; Sousa, A; Reis, LP;
Publication
IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2023, Tomar, Portugal, April 26-27, 2023
Abstract
2023
Authors
Costa, CM; Veiga, G; Sousa, A; Thomas, U; Rocha, L;
Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
The estimation of a 3D sensor constellation that maximizes the observable surface area percentage of a given set of target objects is a challenging, combinatorially explosive problem with a wide range of applications for perception tasks that may require gathering sensor information from multiple views due to environment occlusions. To tackle this problem, the Gazebo simulator was configured to accurately model 8 types of depth cameras with different hardware characteristics, such as image resolution, field of view, measurement range and acquisition rate. Several populations of depth sensors were then deployed within 4 different testing environments targeting object recognition and bin-picking applications with increasing levels of occlusion and geometric complexity. The sensor populations were inserted, either uniformly or randomly, into a set of regions of interest in which useful sensor data could be retrieved and in which the real sensors could be installed or moved by a robotic arm. The proposed approach fuses 3D point clouds from multiple sensors using color segmentation and voxel grid merging for fast surface area coverage computation, coupled with a random sample consensus algorithm for best-view estimation. It quickly estimated useful sensor constellations that maximize the observable surface area of a set of target objects, making it suitable for deciding the type and spatial disposition of sensors and for guiding movable 3D cameras to avoid environment occlusions.
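As a rough illustration of the coverage idea described in this abstract, the sketch below merges point clouds observed from candidate sensor poses into a common voxel grid and samples random subsets of sensors, RANSAC-style, keeping the constellation with the best coverage of the target surface. This is not the authors' code: the voxel size, function names and sampling parameters are assumptions for illustration only.

```python
import numpy as np

VOXEL_SIZE = 0.01  # 1 cm voxels (assumed value)

def voxelize(points: np.ndarray) -> set:
    """Map an (N, 3) array of 3D points to a set of integer voxel indices."""
    return set(map(tuple, np.floor(points / VOXEL_SIZE).astype(int)))

def coverage(observed_clouds: list, target_cloud: np.ndarray) -> float:
    """Fraction of the target-surface voxels seen by the merged observations."""
    target_voxels = voxelize(target_cloud)
    merged = set()
    for cloud in observed_clouds:
        merged |= voxelize(cloud)
    return len(target_voxels & merged) / max(len(target_voxels), 1)

def sample_constellation(candidate_clouds: list, target_cloud: np.ndarray,
                         n_sensors: int, n_trials: int = 200,
                         rng=np.random.default_rng(0)):
    """Randomly sample subsets of candidate views and keep the best coverage."""
    best_idx, best_cov = None, -1.0
    for _ in range(n_trials):
        idx = rng.choice(len(candidate_clouds), size=n_sensors, replace=False)
        cov = coverage([candidate_clouds[i] for i in idx], target_cloud)
        if cov > best_cov:
            best_idx, best_cov = idx, cov
    return best_idx, best_cov
```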
2023
Authors
Leao, G; Almeida, F; Trigo, E; Ferreira, H; Sousa, A; Reis, LP;
Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
Reinforcement Learning (RL) is a well-suited paradigm to train robots since it does not require any previous information or database to train an agent. This paper explores using Deep Reinforcement Learning (DRL) to train a robot to navigate maps that contain different sorts of obstacles and emulate hallways. Training and testing were performed using the Flatland 2D simulator and a Deep Q-Network (DQN) with the OpenAI Gym interface. Different sets of maps were used for training and testing. The experiments illustrate how well the robot is able to navigate in maps distinct from the ones used for training by learning new behaviours (namely following walls) and highlight the key challenges when solving this task using DRL, including the appropriate definition of the state space and reward function, as well as the stopping criteria used during training.
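As a hedged illustration of the training setup described in this abstract, the sketch below trains a DQN agent on a Gym-compatible environment using stable-baselines3. The paper uses the Flatland 2D simulator with its own state space and reward function; here a standard Gym task stands in for that environment, and all hyperparameters are illustrative assumptions rather than the values used in the paper.

```python
import gym
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# Stand-in for a Flatland-based hallway-navigation environment (assumption).
env = gym.make("CartPole-v1")

# The abstract highlights the state space, reward function and stopping
# criteria as the key design choices; the hyperparameters below are arbitrary.
model = DQN("MlpPolicy", env, learning_rate=1e-3, buffer_size=50_000, verbose=0)
model.learn(total_timesteps=50_000)

# Evaluate the learned policy; in the paper, the evaluation maps differ from
# the training maps to test generalisation to unseen layouts.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.1f} +/- {std_reward:.1f}")
```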
2023
Authors
Monteiro, F; Sousa, A;
Publication
JOURNAL OF APPLIED RESEARCH IN HIGHER EDUCATION
Abstract
Purpose: The purpose of the article is to develop an innovative pedagogic tool: an escape room board game to be played in class as an introduction to an ethics course for engineering students. The design is student-centred and aims to increase students' appreciation, commitment and motivation towards learning ethics, a challenging endeavour for many technology students.
Design/methodology/approach: The methodology included the design, development and in-class application of the mentioned game. After application, perception data were collected from students with pre- and post-action questionnaires, using a quasi-experimental method.
Findings: The results allow the conclusion that the developed game persuaded students to take part in class in an active way. The game mobilizes body and mind in the learning process, with many associated advantages for fostering students' motivation, curiosity, interest, commitment and the need for individual reflection after information search.
Research limitations/implications: The main limitation of the game is its applicability to large classes (it has been successfully tested with a maximum of 65 students playing simultaneously in the same room).
Originality/value: The originalities and contributions include the presented game, which helped to captivate students towards the ethics area, a serious problem felt by educators and researchers in this field. This study will be useful to educators of ethics in engineering and will motivate the design of tools for a similar pedagogical approach, even more so in areas where students are not especially motivated. The developed tool is available from the authors at no expense.
2023
Authors
da Silva, DQ; Rodrigues, TF; Sousa, AJ; dos Santos, FN; Filipe, V;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT II
Abstract
Selective thinning is a crucial operation to reduce forest ignitable material, control the eucalyptus species and maximise its profitability. The selection and removal of less vigorous stems allows the remaining stems to grow healthier and without competition for water, sunlight and nutrients. This operation is traditionally performed by a human operator and is time-intensive. This work simplifies selective thinning by removing the stem selection part from the human operator's side using a computer vision algorithm. For this, two distinct datasets of eucalyptus stems (with and without foliage) were built and manually annotated, and three Deep Learning object detectors (YOLOv5, YOLOv7 and YOLOv8) were tested on real context images to perform instance segmentation. YOLOv8 was the best at this task, achieving an Average Precision of 74% and 66% on non-leafy and leafy test datasets, respectively. A computer vision algorithm for automatic stem selection was developed based on the YOLOv8 segmentation output. The algorithm achieved a Precision above 97% and an 81% Recall. The findings of this work can have a positive impact on future developments for automating selective thinning in forested contexts.
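As a hedged illustration of the pipeline described in this abstract, the sketch below runs a YOLOv8 segmentation model through the ultralytics package and ranks the detected stems with a simple size-based heuristic. The weights file, image path and the use of mask area as a proxy for stem vigour are assumptions for illustration, not the authors' trained model or selection rule.

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")          # generic segmentation weights (assumption)
results = model("eucalyptus_plot.jpg")  # hypothetical field image (replace with a real capture)

dets = results[0]
if dets.masks is not None:
    # Use segmented pixel area as a crude proxy for stem vigour (assumption):
    # the smallest stems are proposed for removal by selective thinning.
    areas = [float(mask.sum()) for mask in dets.masks.data]
    ranked = sorted(range(len(areas)), key=lambda i: areas[i])
    to_remove = ranked[: len(ranked) // 2]   # e.g. propose thinning the weaker half
    print("Stem indices proposed for thinning:", to_remove)
```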