About

Luís Paulo Peixoto dos Santos is currently an Assistant Professor at the Departamento de Informática of Universidade do Minho and a researcher at CSIG, INESC-TEC. His research area is Global Illumination, with particular emphasis on algorithm performance and on the use of Heterogeneous Parallel Computing (CPU + GPU + Knights Landing) to reduce the time needed to converge to perceptually correct solutions. He has published several dozen papers in the most prestigious international venues (conferences and journals) in this field, and is also the author of a book on Bayesian Monte Carlo Rendering. He serves on the Programme Committees of several international conferences, has chaired some of these committees, and has organized 6 conferences in Portugal.

He served as Deputy Director of the Departamento de Informática, Deputy Director of the Licenciatura (bachelor's programme) in Informatics Engineering and of the Mestrado (master's programme) in Informatics Engineering, and Director of the Doctoral Programme in Informatics Engineering. He was a member of the committee appointed at the Rector's initiative to coordinate the installation of the United Nations University Operating Unit on Electronic Governance in Portugal, specifically at the Campus de Couros of Universidade do Minho, in Guimarães, and is currently part of the governing body of the EGOV-UM unit, which manages the interface between the two institutions.

He is an Associate Editor of the journal Computers & Graphics and President of the Board of the Grupo Português de Computação Gráfica (the Portuguese chapter of Eurographics) for the 2017-2018 biennium.

Topics of interest
Details

  • Name

    Luís Paulo Santos
  • Position

    Senior Researcher
  • Since

    01 January 2017
Publications

2025

Reducing measurement costs by recycling the Hessian in adaptive variational quantum algorithms

Authors
Ramôa, M; Santos, LP; Mayhall, NJ; Barnes, E; Economou, SE;

Publication
QUANTUM SCIENCE AND TECHNOLOGY

Abstract
Adaptive protocols enable the construction of more efficient state preparation circuits in variational quantum algorithms (VQAs) by utilizing data obtained from the quantum processor during the execution of the algorithm. This idea originated with Adaptive Derivative-Assembled Problem-Tailored variational quantum eigensolver (ADAPT-VQE), an algorithm that iteratively grows the state preparation circuit operator by operator, with each new operator accompanied by a new variational parameter, and where all parameters acquired thus far are optimized in each iteration. In ADAPT-VQE and other adaptive VQAs that followed it, it has been shown that initializing parameters to their optimal values from the previous iteration speeds up convergence and avoids shallow local traps in the parameter landscape. However, no other data from the optimization performed at one iteration is carried over to the next. In this work, we propose an improved quasi-Newton optimization protocol specifically tailored to adaptive VQAs. The distinctive feature in our proposal is that approximate second derivatives of the cost function are recycled across iterations in addition to optimal parameter values. We implement a quasi-Newton optimizer where an approximation to the inverse Hessian matrix is continuously built and grown across the iterations of an adaptive VQA. The resulting algorithm has the flavor of a continuous optimization where the dimension of the search space is augmented when the gradient norm falls below a given threshold. We show that this inter-optimization exchange of second-order information leads the approximate Hessian in the state of the optimizer to be consistently closer to the exact Hessian. As a result, our method achieves a superlinear convergence rate even in situations where the typical implementation of a quasi-Newton optimizer converges only linearly. Our protocol decreases the measurement costs in implementing adaptive VQAs on quantum hardware as well as the runtime of their classical simulation.
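The recycling idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: a standard BFGS inverse-Hessian update, plus a hypothetical `grow_inverse_hessian` helper that carries the d × d approximation into the next adaptive iteration by embedding it into a (d+1) × (d+1) matrix when a new operator (and its variational parameter) is appended.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of an inverse-Hessian approximation H,
    given a parameter step s and the corresponding gradient change y."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def grow_inverse_hessian(H):
    """Recycle H into the next adaptive iteration: embed the d x d
    approximation into a (d+1) x (d+1) matrix, initializing the new
    row/column as in the identity (no prior coupling assumed for the
    newly added variational parameter)."""
    d = H.shape[0]
    H_grown = np.eye(d + 1)
    H_grown[:d, :d] = H
    return H_grown
```

The updated matrix satisfies the secant condition `H_new @ y == s`, which is the invariant any quasi-Newton update must preserve; the growth step is what distinguishes this inter-iteration recycling from restarting the optimizer from scratch.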

2024

On Quantum Natural Policy Gradients

Authors
Sequeira, A; Santos, LP; Barbosa, LS;

Publication
IEEE TRANSACTIONS ON QUANTUM ENGINEERING

Abstract
This article delves into the role of the quantum Fisher information matrix (FIM) in enhancing the performance of parameterized quantum circuit (PQC)-based reinforcement learning agents. While previous studies have highlighted the effectiveness of PQC-based policies preconditioned with the quantum FIM in contextual bandits, its impact in broader reinforcement learning contexts, such as Markov decision processes, is less clear. Through a detailed analysis of Löwner inequalities between quantum and classical FIMs, this study uncovers the nuanced distinctions and implications of using each type of FIM. Our results indicate that a PQC-based agent using the quantum FIM without additional insights typically incurs a larger approximation error and does not guarantee improved performance compared to the classical FIM. Empirical evaluations in classic control benchmarks suggest that even though quantum FIM preconditioning outperforms standard gradient ascent, in general it is not superior to classical FIM preconditioning.
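The preconditioning being compared can be illustrated generically: a natural-gradient step multiplies the vanilla policy gradient by the (regularized) inverse of a Fisher information matrix. The sketch below assumes a classical FIM estimated from per-sample score vectors; the function names are illustrative, not taken from the paper.

```python
import numpy as np

def empirical_fim(scores):
    """Classical FIM estimate: mean outer product of per-sample score
    vectors (gradients of the log-policy)."""
    return np.mean([np.outer(g, g) for g in scores], axis=0)

def natural_gradient_step(theta, grad, fim, lr=0.1, eps=1e-6):
    """One natural-gradient ascent step: solve (F + eps*I) d = grad
    for the preconditioned direction d, then move along it."""
    d = np.linalg.solve(fim + eps * np.eye(len(theta)), grad)
    return theta + lr * d
```

With an identity FIM this reduces to standard gradient ascent, which is why the choice between quantum and classical FIM shows up only through how strongly (and in which directions) the matrix reshapes the update.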

2024

VQC-based reinforcement learning with data re-uploading: performance and trainability

Authors
Coelho, R; Sequeira, A; Santos, LP;

Publication
QUANTUM MACHINE INTELLIGENCE

Abstract
Reinforcement learning (RL) consists of designing agents that make intelligent decisions without human supervision. When used alongside function approximators such as Neural Networks (NNs), RL is capable of solving extremely complex problems. Deep Q-Learning, an RL algorithm that uses Deep NNs, has been shown to achieve super-human performance in game-related tasks. Nonetheless, it is also possible to use Variational Quantum Circuits (VQCs) as function approximators in RL algorithms. This work empirically studies the performance and trainability of such VQC-based Deep Q-Learning models in classic control benchmark environments. More specifically, we research how data re-uploading affects both these metrics. We show that the magnitude and the variance of the model's gradients remain substantial throughout training even as the number of qubits increases. In fact, both increase considerably in the training's early stages, when the agent needs to learn the most. They decrease later in the training, when the agent should have done most of the learning and started converging to a policy. Thus, even if the probability of being initialized in a Barren Plateau increases exponentially with system size for Hardware-Efficient ansatzes, these results indicate that the VQC-based Deep Q-Learning models may still be able to find large gradients throughout training, allowing for learning.
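Data re-uploading, the mechanism studied in this paper, interleaves trainable rotations with repeated encodings of the same input in every layer. A minimal single-qubit sketch using plain NumPy state vectors (not the paper's models) shows the structure:

```python
import numpy as np

def ry(angle):
    """Single-qubit RY rotation as a 2 x 2 real matrix."""
    c, s = np.cos(angle / 2.0), np.sin(angle / 2.0)
    return np.array([[c, -s], [s, c]])

def reuploading_circuit(x, thetas):
    """Each layer re-encodes the input x with RY(x), then applies a
    trainable RY(theta); this per-layer repetition of the encoding
    is the 'data re-uploading'."""
    state = np.array([1.0, 0.0])  # start in |0>
    for theta in thetas:
        state = ry(theta) @ ry(x) @ state
    return state

def expectation_z(state):
    """Expectation value of Pauli-Z for a single-qubit state."""
    return abs(state[0]) ** 2 - abs(state[1]) ** 2
```

Adding layers (more entries in `thetas`) increases both the expressivity of the model and the number of times the input re-enters the circuit, which is exactly the knob whose effect on gradients the paper measures.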

2024

Trainability issues in quantum policy gradients

Authors
Sequeira, A; Paulo Santos, L; Soares Barbosa, L;

Publication
Machine Learning: Science and Technology

Abstract
This research explores the trainability of Parameterized Quantum Circuit-based policies in Reinforcement Learning, an area that has recently seen a surge in empirical exploration. While some studies suggest improved sample complexity using quantum gradient estimation, the efficient trainability of these policies remains an open question. Our findings reveal significant challenges, including standard Barren Plateaus with exponentially small gradients and gradient explosion. These phenomena depend on the type of basis-state partitioning and the mapping of these partitions onto actions. For a polynomial number of actions, a trainable window can be ensured with a polynomial number of measurements if a contiguous-like partitioning of basis-states is employed. These results are empirically validated in a multi-armed bandit environment.
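The contiguous-like partitioning the abstract refers to can be sketched simply: consecutive blocks of basis states are assigned to the same action, so an action's probability is the sum of the measurement probabilities within its block. A minimal illustration (the function name is ours, not the paper's):

```python
import numpy as np

def contiguous_action_probs(basis_probs, n_actions):
    """Map basis-state measurement probabilities onto a small number of
    actions via a contiguous partitioning: the basis states are split
    into consecutive blocks, one block per action, and each action's
    probability is the total probability mass of its block."""
    blocks = np.array_split(np.asarray(basis_probs), n_actions)
    return np.array([block.sum() for block in blocks])
```

Because each action aggregates many basis states, its probability (and hence its gradient signal) does not vanish as fast as that of an individual basis state, which is the intuition behind the trainable window.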


Supervised theses

2023

Quantum Reinforcement Learning: Foundations, algorithms, applications

Author
André Manuel Resende Sequeira

Institution
UM

2023

Quantum amplitude estimation algorithms and applications

Author
Alexandra Francisco Ramôa da Costa Alves

Institution
UM

2023

Classification and Clustering using Swap Test as distance metric

Author
Tomás Rodrigues Alves de Sousa

Institution
UM

2023

Quantum optimization algorithms

Author
Mafalda Francisco Ramôa da Costa Alves

Institution
UM

2022

Quantum optimization algorithms

Author
Mafalda Francisco Ramôa da Costa Alves

Institution
UM