2024
Authors
Campos, F; Cerqueira, FG; Cruz, RPM; Cardoso, JS;
Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I
Abstract
Autonomous driving can reduce the number of road accidents caused by human error and result in safer roads. One important part of the system is the perception unit, which provides information about the environment surrounding the car. Currently, most manufacturers use not only RGB cameras, which are passive sensors that capture light already present in the environment, but also Lidar, an active sensor that emits laser pulses toward a surface or object and measures the reflection and time-of-flight. Previous work, YOLOP, proposed a model for object detection and semantic segmentation, but using only RGB. This work extends it to Lidar and evaluates performance on KITTI, a public autonomous driving dataset. The implementation shows improved precision across objects of all sizes and is made entirely available at https://github.com/filipepcampos/yolomm.
2024
Authors
Montenegro, H; Cardoso, JS;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Case-based explanations are an intuitive method to gain insight into the decision-making process of deep learning models in clinical contexts. However, medical images cannot be shared as explanations due to privacy concerns. To address this problem, we propose a novel method for disentangling identity and medical characteristics of images and apply it to anonymize medical images. The disentanglement mechanism replaces some feature vectors in an image while ensuring that the remaining features are preserved, obtaining independent feature vectors that encode the images' identity and medical characteristics. We also propose a model to manufacture synthetic privacy-preserving identities to replace the original image's identity and achieve anonymization. The models are applied to medical and biometric datasets, demonstrating their capacity to generate realistic-looking anonymized images that preserve their original medical content. Additionally, the experiments show the network's inherent capacity to generate counterfactual images through the replacement of medical features.
2024
Authors
Tame, ID; Tolosana, R; Melzi, P; Rodríguez, RV; Kim, M; Rathgeb, C; Liu, X; Morales, A; Fiérrez, J; Garcia, JO; Zhong, Z; Huang, Y; Mi, Y; Ding, S; Zhou, S; He, S; Fu, L; Cong, H; Zhang, R; Xiao, Z; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, C; Zuo, Q; He, Z; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, S; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, Z; Li, J; Zhao, W; Lei, Z; Zhu, X; Zhang, XY; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
CoRR
Abstract
2024
Authors
Dumont, M; Correia, CM; Sauvage, JF; Schwartz, N; Gray, M; Cardoso, J;
Publication
JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION
Abstract
Capturing high-resolution imagery of the Earth's surface often calls for a telescope of considerable size, even from low Earth orbits (LEOs). A large aperture often requires large and expensive platforms. For instance, achieving a resolution of 1 m at visible wavelengths from LEO typically requires an aperture diameter of at least 30 cm. Additionally, ensuring high revisit times often prompts the use of multiple satellites. In light of these challenges, a small, segmented, deployable CubeSat telescope was recently proposed, creating the additional need to phase the telescope's mirrors. Phasing methods on compact platforms are constrained by the limited volume and power available, excluding solutions that rely on dedicated hardware or demand substantial computational resources. Neural networks (NNs) are known for their computationally efficient inference and reduced onboard requirements. We therefore developed an NN-based method to measure the co-phasing errors inherent to a deployable telescope. The proposed technique demonstrates its ability to detect phasing errors at the targeted performance level [typically a wavefront error (WFE) below 15 nm RMS for a visible imager operating at the diffraction limit] using a point source. The robustness of the NN method is verified in the presence of high-order aberrations or noise, and the results are compared against existing state-of-the-art techniques. The developed NN model demonstrates the feasibility of the approach and provides a realistic pathway towards achieving diffraction-limited images.
2024
Authors
Ribeiro, FSF; Garcia, PJV; Silva, M; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Point source detection algorithms play a pivotal role across diverse applications, influencing fields such as astronomy, biomedical imaging, environmental monitoring, and beyond. This article reviews the algorithms used for space imaging applications from ground and space telescopes. The main difficulties in detection arise from the incomplete knowledge of the impulse function of the imaging system, which depends on the aperture, atmospheric turbulence (for ground-based telescopes), and other factors, some of which are time-dependent. Incomplete knowledge of the impulse function decreases the effectiveness of the algorithms. In recent years, deep learning techniques have been employed to mitigate this problem and have the potential to outperform more traditional approaches. The success of deep learning techniques in object detection has been observed in many fields, and recent developments can further improve the accuracy. However, deep learning methods are still in the early stages of adoption and are used less frequently than traditional approaches. In this review, we discuss the main challenges of point source detection, as well as the latest developments, covering both traditional and current deep learning methods. In addition, we present a comparison between the two approaches to better demonstrate the advantages of each methodology.
2024
Authors
Freitas, N; Montenegro, H; Cardoso, MJ; Cardoso, JS;
Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024
Abstract
Breast cancer locoregional treatment causes alterations to the physical aspect of the breast, often negatively impacting the self-esteem of patients who are unaware of the possible aesthetic outcomes of those treatments. To improve patients' self-esteem and enable a more informed choice of treatment when multiple options are available, the ability to predict how the patient might look after surgery would be invaluable. However, no work has yet been proposed to predict the aesthetic outcomes of breast cancer treatment. As a first step, we compare traditional computer vision and deep learning approaches to reproduce the asymmetries of post-operative patients on pre-operative breast images. The results suggest that the traditional approach is better at altering the contour of the breast, whereas the deep learning approach succeeds in realistically altering the position and direction of the nipple.