About

I am a fourth-year MAPi PhD student and a researcher at HASLab/INESC TEC, currently working on the SafeCloud and NanoSTIMA projects. I hold a Master's degree in Informatics Engineering from Universidade do Minho.

My research interests are mainly cryptography and information security. More specifically, my work focuses on the development of secure computation protocols based on trusted hardware. The goal of my PhD project is to advance the state of the art in highly trustworthy secure protocols, narrowing the gap between theoretical security models and the most efficient practical implementations. My most relevant contributions in this context include the first approach to formalizing the security guarantees offered by isolated execution environments, and the first generic implementation of secure computation using isolated execution environments.

Topics of interest
Details

  • Name

    Bernardo Luís Portela
  • Position

    Senior Researcher
  • Since

    01 January 2014
Publications

2024

Flow Correlation Attacks on Tor Onion Service Sessions with Sliding Subset Sum

Authors
Lopes, D; Dong, JD; Medeiros, P; Castro, D; Barradas, D; Portela, B; Vinagre, J; Ferreira, B; Christin, N; Santos, N;

Publication
31st Annual Network and Distributed System Security Symposium, NDSS 2024, San Diego, California, USA, February 26 - March 1, 2024

Abstract

2024

Extending C2 Traffic Detection Methodologies: From TLS 1.2 to TLS 1.3-enabled Malware

Authors
Barradas, D; Novo, C; Portela, B; Romeiro, S; Santos, N;

Publication
PROCEEDINGS OF 27TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2024

Abstract
As the Internet evolves from TLS 1.2 to TLS 1.3, it offers enhanced security against network eavesdropping for online communications. However, this advancement also enables malicious command and control (C2) traffic to more effectively evade malware detectors and intrusion detection systems. Among other capabilities, TLS 1.3 introduces encryption for most handshake messages and conceals the actual TLS record content type, complicating the task for state-of-the-art C2 traffic classifiers that were initially developed for TLS 1.2 traffic. Given the pressing need to accurately detect malicious C2 communications, this paper examines to what extent existing C2 classifiers for TLS 1.2 are less effective when applied to TLS 1.3 traffic, posing a central research question: is it possible to adapt TLS 1.2 detection methodologies for C2 traffic to work with TLS 1.3 flows? We answer this question affirmatively by introducing new methods for inferring certificate size and filtering handshake/protocol-related records in TLS 1.3 flows. These techniques enable the extraction of key features for enhancing traffic detection and can be utilized to pre-process data flows before applying C2 classifiers. We demonstrate that this approach facilitates the use of existing TLS 1.2 C2 classifiers with high efficacy, allowing for the passive classification of encrypted network traffic. In our tests, we inferred certificate sizes with an average error of 1.0%, and achieved detection rates of 100% when classifying traffic based on certificate size, and over 93% when classifying TLS 1.3 traffic behavior after training solely on TLS 1.2 traffic. To our knowledge, these are the first findings to showcase specialized TLS 1.3 C2 traffic classification.
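As a rough illustration of the pre-processing idea this abstract describes (not the paper's actual code), one could strip the encrypted handshake records from a TLS 1.3 flow and estimate the certificate size before handing the flow to a TLS 1.2-era classifier. The record sizes, the `handshake_budget` heuristic, and the function names below are all assumptions for illustration:

```python
# Illustrative sketch: pre-processing a TLS 1.3 flow before applying a
# TLS 1.2-era C2 classifier. A flow is a list of (direction, record_size)
# tuples; thresholds here are hypothetical, not the paper's parameters.

def filter_handshake_records(records, handshake_budget=3):
    """Drop the first few server-to-client records, which in TLS 1.3
    typically carry the encrypted handshake messages (EncryptedExtensions,
    Certificate, CertificateVerify, Finished)."""
    filtered, skipped = [], 0
    for direction, size in records:
        if direction == "s2c" and skipped < handshake_budget:
            skipped += 1
            continue
        filtered.append((direction, size))
    return filtered

def estimate_certificate_size(records):
    """Crude estimate: the largest early server-to-client record is
    usually dominated by the certificate chain."""
    early = [size for direction, size in records[:6] if direction == "s2c"]
    return max(early) if early else 0

flow = [("c2s", 517), ("s2c", 1460), ("s2c", 3210), ("s2c", 280),
        ("c2s", 74), ("s2c", 120)]
print(estimate_certificate_size(flow))
print(filter_handshake_records(flow))
```

The filtered record list can then feed the same size/timing features a TLS 1.2 classifier expects, which is the adaptation strategy the abstract argues for.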

2023

General-Purpose Secure Conflict-free Replicated Data Types

Authors
Portela, B; Pacheco, H; Jorge, P; Pontes, R;

Publication
2023 IEEE 36TH COMPUTER SECURITY FOUNDATIONS SYMPOSIUM, CSF

Abstract
Conflict-free Replicated Data Types (CRDTs) are a very popular class of distributed data structures that strike a compromise between strong and eventual consistency. Ensuring the protection of data stored within a CRDT, however, cannot be done trivially using standard encryption techniques, as secure CRDT protocols would require replica-side computation. This paper proposes an approach to lift general-purpose implementations of CRDTs to secure variants using secure multiparty computation (MPC). Each replica within the system is realized by a group of MPC parties that compute its functionality. Our results include: i) an extension of current formal models used for reasoning over the security of CRDT solutions to the MPC setting; ii) an MPC language and type system to enable the construction of secure versions of CRDTs; and iii) a proof of security that relates the security of CRDT constructions designed under said semantics to the underlying MPC library. We provide an open-source system implementation with an extensive evaluation, which compares different designs with their baseline throughput and latency.
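As background for the lifting described above, the kind of plaintext CRDT the paper starts from can be sketched as a minimal grow-only counter; the secure variant would compute `increment` and `merge` under MPC instead of in the clear. The class and method names are illustrative, not the paper's API:

```python
# Minimal plaintext G-Counter CRDT: each replica increments only its own
# slot, and merge takes the element-wise maximum, so concurrent updates
# commute and all replicas eventually converge to the same value.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.replica_id = replica_id
        self.counts = [0] * n_replicas

    def increment(self, amount=1):
        self.counts[self.replica_id] += amount

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        # Element-wise max is idempotent, commutative, and associative.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)
print(a.value(), b.value())
```

In the paper's setting, each replica's state would instead be secret-shared among a group of MPC parties, so no single party sees the counter values.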

2023

Soteria: Preserving Privacy in Distributed Machine Learning

Authors
Brito, C; Ferreira, P; Portela, B; Oliveira, R; Paulo, J;

Publication
38TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2023

Abstract
We propose Soteria, a system for distributed privacy-preserving Machine Learning (ML) that leverages Trusted Execution Environments (e.g. Intel SGX) to run code in isolated containers (enclaves). Unlike previous work, where all ML-related computation is performed at trusted enclaves, we introduce a hybrid scheme, combining computation done inside and outside these enclaves. The conducted experimental evaluation validates that our approach reduces the runtime of ML algorithms by up to 41%, when compared to previous related work. Our protocol is accompanied by a security proof, as well as a discussion regarding resilience against a wide spectrum of ML attacks.
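The hybrid in/out-of-enclave split described in this abstract could be pictured as a dispatcher that routes sensitive operations into the trusted enclave and lets the rest run untrusted. The operation names, the sensitivity set, and the `run_in_enclave` stand-in below are assumptions for illustration only, not Soteria's actual interface:

```python
# Sketch of hybrid partitioning: sensitive ML operations run inside a
# (simulated) enclave, non-sensitive ones run outside. In a real SGX
# deployment the enclave call would cross a trust boundary via an ecall;
# here it is just an ordinary function.

SENSITIVE_OPS = {"gradient_update", "model_inference"}

def run_in_enclave(op, data):
    # Stand-in for code executing inside an isolated container (enclave).
    return f"enclave:{op}({data})"

def run_outside(op, data):
    # Non-sensitive computation runs on untrusted infrastructure.
    return f"outside:{op}({data})"

def dispatch(op, data):
    return run_in_enclave(op, data) if op in SENSITIVE_OPS else run_outside(op, data)

print(dispatch("gradient_update", "batch0"))
print(dispatch("count_rows", "batch0"))
```

The performance gain the abstract reports comes precisely from keeping operations like the second call outside the enclave, avoiding enclave transition and memory overheads.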

2023

Privacy-Preserving Machine Learning on Apache Spark

Authors
Brito, CV; Ferreira, PG; Portela, BL; Oliveira, RC; Paulo, JT;

Publication
IEEE ACCESS

Abstract
The adoption of third-party machine learning (ML) cloud services is highly dependent on the security guarantees and the performance penalty they incur on workloads for model training and inference. This paper explores security/performance trade-offs for the distributed Apache Spark framework and its ML library. Concretely, we build upon a key insight: in specific deployment settings, one can reveal carefully chosen non-sensitive operations (e.g. statistical calculations). This allows us to considerably improve the performance of privacy-preserving solutions without exposing the protocol to pervasive ML attacks. In more detail, we propose Soteria, a system for distributed privacy-preserving ML that leverages Trusted Execution Environments (e.g. Intel SGX) to run computations over sensitive information in isolated containers (enclaves). Unlike previous work, where all ML-related computation is performed at trusted enclaves, we introduce a hybrid scheme, combining computation done inside and outside these enclaves. The experimental evaluation validates that our approach reduces the runtime of ML algorithms by up to 41% when compared to previous related work. Our protocol is accompanied by a security proof and a discussion regarding resilience against a wide spectrum of ML attacks.

Supervised theses

2023

Privacy in Telecom Fraud Detection

Author
Eduardo Carvalho Santos

Institution
UP-FCUP

2023

Speculative Execution Resilient Cryptography

Author
Rui Pedro Gomes Fernandes

Institution
UP-FCUP

2023

Detection of Encrypted Malware Command and Control Traffic

Author
Carlos António de Sousa Costa Novo

Institution
UP-FCUP

2022

Trustworthy and Robust Intra-Vehicle Communication

Author
Patrícia Adelaide Lopes Machado

Institution
UP-FCUP

2022

Security in Data Aggregation for Eventually Consistent Systems

Author
Pedro Miguel de Jesus Jorge

Institution
UP-FCUP