
Publications by Hugo Miguel Silva

2007

Autonomous surface vehicle docking manoeuvre with visual information

Authors
Martins, A; Almeida, JM; Ferreira, H; Silva, H; Dias, N; Dias, A; Almeida, C; Silva, EP;

Publication
PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-10

Abstract
This work presents a hybrid coordinated manoeuvre for docking an autonomous surface vehicle with an autonomous underwater vehicle. The control manoeuvre uses visual information to estimate the AUV's relative position and attitude with respect to the ASV and steers the ASV in order to dock with the AUV. The AUV is assumed to be at the surface with only a small fraction of its volume visible. The system, implemented in the autonomous surface vehicle ROAZ, developed by LSA-ISEP to perform missions in river environments, test autonomous AUV docking capabilities and carry out multiple AUV/ASV coordinated missions, is presented. Information from a low-cost embedded robotics vision system (LSAVision), along with inertial navigation sensors, is fused in an extended Kalman filter and used to determine the AUV's relative position and orientation with respect to the surface vehicle. The real-time vision processing system is described and results are presented in an operational scenario.
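
As a minimal illustrative sketch only (not the paper's implementation), the fusion idea can be pictured as a predict/correct filter cycle in which a vision-based relative-position fix of the AUV corrects a motion-model prediction. The paper uses an extended Kalman filter; the linear example below, with made-up matrices and noise levels, only shows the structure of that cycle.

import numpy as np

dt = 0.1                                          # hypothetical sample period [s]
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])     # constant-velocity motion model
H = np.hstack([np.eye(2), np.zeros((2, 2))])      # vision observes relative position only
Q = 1e-3 * np.eye(4)                              # process noise (assumed)
R = 1e-2 * np.eye(2)                              # vision measurement noise (assumed)

x = np.zeros(4)                                   # state: relative position and velocity
P = np.eye(4)

def fuse(x, P, z):
    """One predict/correct cycle given a vision fix z = (x, y) of the AUV."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict with the motion model
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (z - H @ x)                       # correct with the vision measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = fuse(x, P, np.array([5.2, -1.3]))          # hypothetical relative fix [m]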

2011

Real-Time 3D Ball Trajectory Estimation for RoboCup Middle Size League Using a Single Camera

Authors
Silva, H; Dias, A; Almeida, JM; Martins, A; da Silva, EP;

Publication
RoboCup 2011: Robot Soccer World Cup XV [papers from the 15th Annual RoboCup International Symposium, Istanbul, Turkey, July 2011]

Abstract
This paper proposes a novel architecture for real-time 3D ball trajectory estimation with a monocular camera in the Middle Size League scenario. Our proposed system consists of detecting multiple possible ball candidates in the image, which are then filtered in a multi-target data association layer. Validated ball candidates have their 3D trajectory estimated by the Maximum Likelihood method (MLM), followed by a recursive refinement obtained with an Extended Kalman Filter (EKF). Our approach was validated in a real RoboCup scenario and evaluated against ground-truth information obtained by alternative methods, allowing overall performance and quality assessment. © 2012 Springer-Verlag Berlin Heidelberg.
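
As a hedged sketch (not the authors' code): under an assumed Gaussian noise model, the maximum-likelihood fit of a ballistic ball trajectory to 3D observations reduces to a linear least-squares estimate of the initial position and velocity, which an EKF could then refine recursively. All observation values below are made up.

import numpy as np

g = np.array([0.0, 0.0, -9.81])                   # gravity [m/s^2]
t = np.linspace(0.0, 0.5, 6)                      # hypothetical observation times [s]
obs = np.array([[0.0, 0.0, 0.5], [0.3, 0.1, 0.9], [0.6, 0.2, 1.2],
                [0.9, 0.3, 1.3], [1.2, 0.4, 1.3], [1.5, 0.5, 1.2]])  # 3D ball positions [m]

# The ballistic model p(t) = p0 + v0*t + 0.5*g*t^2 is linear in (p0, v0),
# so the Gaussian ML estimate is an ordinary least-squares fit per axis.
A = np.hstack([np.ones((len(t), 1)), t[:, None]])
rhs = obs - 0.5 * g * t[:, None] ** 2             # subtract the known gravity term
params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
p0, v0 = params                                   # rows: initial position, initial velocity
print("p0 =", p0, " v0 =", v0)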

2007

Forest fire detection with a small fixed wing autonomous aerial vehicle

Authors
Martins, A; Almeida, J; Almeida, C; Figueiredo, A; Santos, F; Bento, D; Silva, H; Silva, E;

Publication
IFAC Proceedings Volumes (IFAC-PapersOnline)

Abstract
In this work, a forest fire detection solution using small autonomous aerial vehicles is proposed. The FALCOS unmanned aerial vehicle, developed for remote-monitoring purposes, is described. This is a small-size UAV with onboard vision processing and autonomous flight capabilities. A set of custom navigation sensors was developed for the vehicle. Fire detection is performed using low-cost digital cameras and near-infrared sensors. Test results for navigation and ignition detection in a real scenario are presented.
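
Purely as an illustrative sketch (not the FALCOS implementation), a naive hot-spot detector over a normalised near-infrared intensity image could look as follows; the threshold value and the frame are hypothetical placeholders.

import numpy as np

def detect_ignitions(nir_frame, threshold=0.85):
    """Return pixel coordinates whose normalised NIR intensity exceeds the threshold."""
    return np.argwhere(nir_frame > threshold)

frame = np.random.rand(120, 160)                  # stand-in for a normalised NIR frame
candidates = detect_ignitions(frame)
print(len(candidates), "candidate hot pixels")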

2008

A real time vision system for autonomous systems: Characterization during a middle size match

Authors
Silva, H; Almeida, JM; Lima, L; Martins, A; Silva, EP;

Publication
ROBOCUP 2007: ROBOT SOCCER WORLD CUP XI

Abstract
This paper proposes a real-time vision framework for mobile robotics and describes its current implementation. The pipeline structure further reduces latency and allows a parallel hardware implementation. A dedicated hardware vision sensor was developed in order to take advantage of the proposed architecture. The real-time characteristics and partial hardware implementation, coupled with low energy consumption, address typical autonomous systems applications. A characterization of the implemented system in the RoboCup scenario, during competition matches, is presented.
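
The pipelined structure can be illustrated with a minimal software sketch (an assumed analogue, not the described hardware system): acquisition, segmentation and object-extraction stages run concurrently and pass frames through queues, so per-frame latency is bounded by the slowest stage rather than the sum of all stages. The stage functions below are placeholders.

import queue, threading

def stage(fn, inbox, outbox):
    """Run fn on every item from inbox and forward results; None shuts the stage down."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        outbox.put(fn(item))

acquire_q, segment_q, result_q = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda f: f * 2, acquire_q, segment_q), daemon=True).start()
threading.Thread(target=stage, args=(lambda f: f + 1, segment_q, result_q), daemon=True).start()

for frame in range(5):                            # stand-in "frames"
    acquire_q.put(frame)
acquire_q.put(None)                               # signal end of stream

while (out := result_q.get()) is not None:
    print("processed frame value:", out)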

2023

The MONET dataset: Multimodal drone thermal dataset recorded in rural scenarios

Authors
Riz, L; Caraffa, A; Bortolon, M; Mekhalfi, ML; Boscaini, D; Moura, A; Antunes, J; Dias, A; Silva, H; Leonidou, A; Constantinides, C; Keleshis, C; Abate, D; Poiesi, F;

Publication
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Abstract
We present MONET, a new multimodal dataset captured using a thermal camera mounted on a drone that flew over rural areas, and recorded human and vehicle activities. We captured MONET to study the problem of object localisation and behaviour understanding of targets undergoing large-scale variations and being recorded from different and moving viewpoints. Target activities occur in two different land sites, each with unique scene structures and cluttered backgrounds. MONET consists of approximately 53K images featuring 162K manually annotated bounding boxes. Each image is timestamp-aligned with drone metadata that includes information about attitudes, speed, altitude, and GPS coordinates. MONET is different from previous thermal drone datasets because it features multimodal data, including rural scenes captured with thermal cameras containing both person and vehicle targets, along with trajectory information and metadata. We assessed the difficulty of the dataset in terms of transfer learning between the two sites and evaluated nine object detection algorithms to identify the open challenges associated with this type of data. Project page: https://github.com/fabiopoiesi/monet-dataset.
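
As a hedged sketch of the timestamp alignment described above (not the MONET tooling; see the project page for the actual format and utilities), each image timestamp can be associated with the nearest drone-metadata record. All timestamps and metadata values below are made up.

import bisect

meta_times = [0.00, 0.10, 0.20, 0.30, 0.40]       # metadata timestamps [s] (hypothetical)
meta = [{"altitude": 50 + i, "speed": 4.0 + 0.1 * i} for i in range(len(meta_times))]

def nearest_metadata(t):
    """Return the metadata record whose timestamp is closest to t."""
    i = bisect.bisect_left(meta_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(meta_times)]
    best = min(candidates, key=lambda j: abs(meta_times[j] - t))
    return meta[best]

for image_time in (0.07, 0.26):                   # hypothetical image timestamps
    print(image_time, nearest_metadata(image_time))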
