Publications

Publications by CTM

2023

METIS SCAO – implementing AO for ELT

Authors
Bertram T.; Bizenberger P.; van Boekel R.; Brandner W.; Briegel F.; Vázquez M.C.C.; Coppejans H.; Correia C.; Feldt M.; Henning T.; Huber A.; Kulas M.; Laun W.; Mohr L.; Naranjo V.; Neureuther P.; Obereder A.; Rohloff R.R.; Scheithauer S.; Steuer H.; Absil O.; Orban de Xivry G.; Brandl B.; Glauser A.M.;

Publication
7th Adaptive Optics for Extremely Large Telescopes Conference, AO4ELT7 2023

Abstract
METIS, the Mid-infrared ELT Imager and Spectrograph, is among the first-generation instruments for ESO's 39 m Extremely Large Telescope (ELT). It will provide diffraction-limited spectroscopy and imaging, including coronagraphic capabilities, in the thermal/mid-infrared wavelength domain (3 µm – 13.3 µm). Its Single Conjugate Adaptive Optics (SCAO) system will be used for all observing modes, with High Contrast Imaging imposing the most demanding requirements on its performance. The final design review of METIS took place in the fall of 2022; the development of the instrument, including its SCAO system, has since entered the Manufacturing, Assembly, Integration and Testing (MAIT) phase. Numerous challenging aspects of an ELT AO system are addressed in the mature designs for the SCAO control system and the SCAO hardware module: the complex interaction with the telescope entities that participate in the AO control, wavefront reconstruction with a fragmented and moving pupil, and secondary control tasks to deal with differential image motion, non-common path aberrations and mis-registration. A K-band pyramid wavefront sensor and a GPU-based RTC, tailored to the needs of METIS at the ELT, are core components. The implementation of the METIS SCAO system includes thorough testing at several levels before installation at the telescope. These tests require elaborate setups to mimic the conditions at the telescope. This paper provides an overview of the design of METIS SCAO as it will be implemented, the main results of the extensive analyses performed to support the final design, and the next steps on the path towards commissioning.
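For readers less familiar with SCAO terminology, the sketch below illustrates the basic real-time operation such a system performs: reconstructing modal wavefront coefficients from wavefront-sensor slopes and driving the deformable mirror through a leaky integrator. All dimensions, gains and the random interaction matrix are illustrative placeholders, not METIS SCAO parameters or its GPU-based RTC.

```python
import numpy as np

# Generic single-conjugate AO loop: matrix-vector modal reconstruction
# followed by a leaky integrator. Sizes and gains are placeholders.
rng = np.random.default_rng(0)
n_slopes, n_modes = 3200, 300                 # hypothetical WFS slopes / controlled modes

D = rng.standard_normal((n_slopes, n_modes))  # interaction matrix (calibrated on a real system)
R = np.linalg.pinv(D)                         # least-squares reconstructor

gain, leak = 0.4, 0.99                        # integrator gain and leak factor
cmd = np.zeros(n_modes)                       # modal DM commands

for _ in range(10):                           # a few loop iterations on fake telemetry
    slopes = rng.standard_normal(n_slopes)    # stand-in for pyramid WFS measurements
    residual_modes = R @ slopes               # modal estimate of the residual wavefront
    cmd = leak * cmd - gain * residual_modes  # leaky integrator, negative feedback
```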

2023

Phase A study of the GNAO bench

Authors
Jouve, P; Fusco, T; Correia, C; Neichel, B; Heritier, T; Sauvage, J; Lawrence, J; Rakich, A; Zheng, J; Chin, T; Vedrene, N; Charton, J; Bruno, P;

Publication
7th Adaptive Optics for Extremely Large Telescopes Conference, AO4ELT7 2023

Abstract
AOB-1 is an Adaptive Optics (AO) facility currently being designed to feed the Gemini Infrared Multi-Object Spectrograph (GIRMOS) on the Gemini North 8 m class telescope located in Hawaii. This AO system will offer two AO modes. A laser tomography AO (LTAO) mode will use 4 LGS (laser guide stars) and 1-3 NGS (natural guide stars) for high performance over a narrow field of view (a few arcsec). The LTAO reconstruction will benefit from the most recent developments in the field, such as the super-resolution concept for the multi-LGS tomographic system, the calibration and optimization of the system on the sky, etc. The system will also operate in a Ground Layer Adaptive Optics (GLAO) mode, providing a robust solution for homogeneous partial AO correction over a wide 2' FOV. This last mode will also be used as a first step towards a MOAO (multi-object adaptive optics) mode integrated into the GIRMOS instrument. Both GLAO and LTAO modes are optimized to provide the best possible sky coverage, up to 60% at the North Galactic Pole. Finally, the project has been designed from day one as a fast-track, cost-effective project, aiming to provide first science light on the telescope by 2027 at the latest, with a good balance of innovative and creative concepts combined with standard and well-controlled components and solutions. In this paper, we present the innovative Phase A concepts, design and performance analysis of the two AO modes (LTAO and GLAO) of the AOB-1 project. © 2023 7th Adaptive Optics for Extremely Large Telescopes Conference, AO4ELT7 2023. All rights reserved.
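To illustrate the GLAO principle mentioned above, the sketch below averages the slope vectors of several LGS wavefront sensors before applying a single ground-conjugated reconstructor; high-altitude turbulence, uncorrelated between guide-star directions, averages out while the common ground layer is retained. The number of guide stars, matrix sizes and the random reconstructor are assumptions for illustration only, not AOB-1 design values.

```python
import numpy as np

# GLAO-style reconstruction sketch: average slopes over LGS directions,
# then apply one reconstructor for the ground-conjugated DM.
rng = np.random.default_rng(1)
n_lgs, n_slopes, n_act = 4, 2048, 600                     # assumed sizes

slopes_per_lgs = rng.standard_normal((n_lgs, n_slopes))   # one slope vector per LGS WFS
R_ground = rng.standard_normal((n_act, n_slopes))         # stand-in ground-layer reconstructor

avg_slopes = slopes_per_lgs.mean(axis=0)                  # average over guide-star directions
dm_commands = R_ground @ avg_slopes                       # single-DM GLAO correction
```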

2023

The Adaptive Optics System for the Gemini Infrared Multi-Object Spectrograph: Performance Modeling

Authors
Conod, U; Jackson, K; Turri, P; Chapman, S; Lardière, O; Lamb, M; Correia, C; Sivo, G; Sivanandam, S; Véran, JP;

Publication
PUBLICATIONS OF THE ASTRONOMICAL SOCIETY OF THE PACIFIC

Abstract
The Gemini Infrared Multi-Object Spectrograph (GIRMOS) will be a near-infrared, multi-object, medium spectral resolution, integral field spectrograph (IFS) for the Gemini North Telescope, designed to operate behind the future Gemini North Adaptive Optics system (GNAO). In addition to a first ground-layer Adaptive Optics (AO) correction carried out in closed loop by GNAO, each of the four GIRMOS IFSs will independently perform additional multi-object AO correction in open loop, resulting in an improved image quality that is critical to achieving the top-level science requirements. We present the baseline parameters and simulated performance of GIRMOS obtained by modeling both the GNAO and GIRMOS AO systems. The image quality requirement for GIRMOS is that 57% of the energy of an unresolved point-spread function be ensquared within a 0.1 x 0.1 arcsecond box at 2.0 μm. After examining the trade-offs between performance, risk and cost, it was established that GIRMOS will be an order 16 x 16 AO system. The ensquared energy requirement will be met in median atmospheric conditions at Maunakea at 30 degrees from zenith.
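The ensquared-energy metric quoted above can be made concrete with a short calculation: normalise a PSF and integrate it over a 0.1″ × 0.1″ box. The Gaussian PSF, its width and the pixel scale below are placeholders chosen only to illustrate the metric, not GIRMOS simulation products.

```python
import numpy as np

# Toy ensquared-energy calculation: fraction of total PSF flux inside a
# 0.1" x 0.1" box. PSF shape and pixel scale are illustrative only.
pix_scale = 0.004                               # arcsec per pixel (assumed)
n = 513
x = (np.arange(n) - n // 2) * pix_scale
xx, yy = np.meshgrid(x, x)

fwhm = 0.06                                     # arcsec, stand-in for a partially corrected core
sigma = fwhm / 2.355
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()                                # normalise total flux to 1

box = 0.1                                       # box width in arcsec
inside = (np.abs(xx) <= box / 2) & (np.abs(yy) <= box / 2)
print(f"Ensquared energy: {psf[inside].sum():.2f}")   # GIRMOS requirement: >= 0.57 at 2.0 um
```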

2023

Integrated turbulence parameters' estimation from NAOMI adaptive optics telemetry data

Authors
Morujao, N; Correia, C; Andrade, P; Woillez, J; Garcia, P;

Publication
ASTRONOMY & ASTROPHYSICS

Abstract
Context. Monitoring turbulence parameters is crucial in high-angular-resolution astronomy for various purposes, such as optimising adaptive optics systems or fringe trackers. The former systems are present at most modern observatories and will remain significant in the future. This makes them a valuable complementary tool for the estimation of turbulence parameters. Aims. The feasibility of estimating turbulence parameters from low-resolution sensors remains untested. We performed seeing estimates for both simulated and on-sky telemetry data sourced from the new adaptive optics module installed on the four Auxiliary Telescopes of the Very Large Telescope Interferometer. Methods. The seeing estimates were obtained from a modified and optimised algorithm that employs a chi-squared modal fitting approach to the theoretical von Kármán model variances. The algorithm was built to retrieve turbulence parameters while simultaneously estimating and accounting for the remaining and measurement errors. A Monte Carlo method was proposed for the estimation of the statistical uncertainty of the algorithm. Results. The algorithm is shown to achieve percent-level accuracy in the estimation of the seeing with a temporal horizon of 20 s on simulated data. A median seeing of 0.76″ ± 1.2% (stat) ± 1.2% (sys) was estimated from on-sky data collected from 2018 to 2020. The spatial distribution of the Auxiliary Telescopes across the Paranal Observatory was found not to play a role in the value of the seeing.
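The modal-fitting idea behind this estimator can be sketched in a few lines: for Kolmogorov turbulence the variance of each Zernike coefficient scales as c_j (D/r0)^(5/3), so a least-squares fit of the (noise-corrected) measured modal variances to that model yields r0 and hence the seeing. The paper uses the full von Kármán model and additionally estimates the remaining and measurement errors; the coefficient and variance values below are illustrative only.

```python
import numpy as np

# Sketch of modal fitting of turbulence strength from AO telemetry:
# fit k = (D/r0)^(5/3) to measured Zernike-coefficient variances.
D_tel = 1.8                       # VLTI Auxiliary Telescope diameter, metres
lam = 500e-9                      # wavelength at which seeing is quoted, metres

# Approximate Kolmogorov (Noll) per-mode coefficients for tip, tilt and the
# three second-order modes, and hypothetical noise-corrected variances (rad^2).
c = np.array([0.449, 0.449, 0.0232, 0.0232, 0.0232])
var_meas = np.array([10.0, 9.4, 0.52, 0.50, 0.47])

k = np.sum(c * var_meas) / np.sum(c * c)        # linear least squares: var ≈ k * c
r0 = D_tel / k ** (3.0 / 5.0)                   # Fried parameter

seeing_arcsec = np.degrees(0.98 * lam / r0) * 3600
print(f"r0 ≈ {r0:.2f} m, seeing ≈ {seeing_arcsec:.2f} arcsec")
```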

2023

Brain activation by a VR-based motor imagery and observation task: An fMRI study

Authors
Nunes, D; Vourvopoulos, A; Blanco Mora, DA; Jorge, C; Fernandes, J; Bermudez I Badia, S; Figueiredo, P;

Publication
PLoS ONE

Abstract
Training motor imagery (MI) and motor observation (MO) tasks is being intensively exploited to promote brain plasticity in the context of post-stroke rehabilitation strategies. This may benefit from the use of closed-loop neurofeedback, embedded in brain-computer interfaces (BCIs), to provide an alternative non-muscular channel, which may be further augmented through embodied feedback delivered through virtual reality (VR). Here, we used functional magnetic resonance imaging (fMRI) in a group of healthy adults to map brain activation elicited by an ecologically valid task based on a VR-BCI paradigm called NeuRow, whereby participants perform MI of rowing with the left or right arm (i.e., MI), while observing the corresponding movement of the virtual arm of an avatar (i.e., MO), on the same side, in a first-person perspective. We found that this MI-MO task elicited stronger brain activation when compared with a conventional MI-only task based on the Graz BCI paradigm, as well as with an overt motor execution task. It recruited large portions of the parietal and occipital cortices in addition to the somatomotor and premotor cortices, including the mirror neuron system (MNS), associated with action observation, as well as visual areas related to visual attention and motion processing. Overall, our findings suggest that the virtual representation of the arms in an ecologically valid MI-MO task engages the brain beyond conventional MI tasks, which we propose could be explored for more effective neurorehabilitation protocols. Copyright: © 2023 Nunes et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2023

Zero-shot face recognition: Improving the discriminability of visual face features using a Semantic-Guided Attention Model

Authors
Patricio, C; Neves, JC;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
Zero-shot learning enables the recognition of classes not seen during training through the use of semantic information comprising a visual description of the class, either in textual or attribute form. Despite the advances in the performance of zero-shot learning methods, most works do not explicitly exploit the correlation between the visual attributes of the image and their corresponding semantic attributes for learning discriminative visual features. In this paper, we introduce an attention-based strategy for deriving features from the image regions corresponding to the most prominent attributes of the image class. In particular, we train a Convolutional Neural Network (CNN) for image attribute prediction and use a gradient-weighted method for deriving the attention activation maps of the most salient image attributes. These maps are then incorporated into the feature extraction process of Zero-Shot Learning (ZSL) approaches to improve the discriminability of the produced features through the implicit inclusion of semantic information. For experimental validation, the performance of state-of-the-art ZSL methods was determined using features with and without the proposed attention model. Surprisingly, we discover that the proposed strategy degrades the performance of ZSL methods on classical ZSL datasets (AWA2), but it can significantly improve performance when using face datasets. Our experiments show that these results are a consequence of the interpretability of the dataset attributes, suggesting that the attributes of existing ZSL datasets are, in most cases, difficult to identify in the image. Source code is available at https://github.com/CristianoPatricio/SGAM.
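The gradient-weighted attention step described above can be sketched roughly as follows (a Grad-CAM-style map for one predicted attribute). This is not the authors' SGAM implementation, which is available at the repository linked above; the ResNet-18 backbone, the hooked layer and the attribute index are arbitrary stand-ins.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Grad-CAM-style attention map for one attribute of a CNN attribute predictor.
# Backbone, layer and attribute index are arbitrary stand-ins, not SGAM code.
model = models.resnet18(weights=None)            # stand-in attribute predictor
model.eval()

feats, grads = {}, {}
layer = model.layer4                             # last convolutional block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)                  # placeholder input image
model(x)[0, 3].backward()                        # gradient of one salient attribute (index assumed)

w = grads["a"].mean(dim=(2, 3), keepdim=True)    # channel weights = spatially averaged gradients
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = cam / (cam.max() + 1e-8)                   # normalised attention map used to re-weight features
```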
