2023
Authors
Coelho, F; Alonso, AN; Ferreira, L; Pereira, J; Oliveira, R;
Publication
PROCEEDINGS OF 12TH LATIN-AMERICAN SYMPOSIUM ON DEPENDABLE AND SECURE COMPUTING, LADC 2023
Abstract
Cloud native database systems provide highly available and scalable services as part of cloud platforms by transparently replicating and partitioning data across automatically managed resources. Some systems, such as Google Spanner, are designed and implemented from scratch. Others, such as Amazon Aurora, derive from traditional database systems for better compatibility but disaggregate storage to cloud services. Unfortunately, because they follow an open-box approach and fork the original code base, they are difficult to implement and maintain. We address this problem with Loom, a replicated and partitioned database system built on top of PostgreSQL that delegates durable storage to a distributed log native to the cloud. Unlike previous disaggregation proposals, Loom is a closed-box approach that uses the original server through existing interfaces to simplify implementation and improve robustness and maintainability. In our experimental evaluation, Loom achieves 6x higher throughput and 5x lower response time than standard replication and is competitive with the state of the art on cloud and HPC hardware.
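The closed-box idea described in the abstract can be pictured as a thin shim that consumes change records from the unmodified PostgreSQL server through its existing replication interfaces and appends them to a cloud-native distributed log, from which replicas and partitions replay. The sketch below is only an illustration under that assumption, not Loom's implementation: the WalRecord, DistributedLog, and Replica names are hypothetical stand-ins, and a real deployment would consume the server's WAL or logical-replication stream and a managed log service.

```python
"""Illustrative sketch (not Loom's code): a primary ships change records,
captured through the database's existing replication interface, to a
distributed log that provides durability; replicas replay from the log.
All names (WalRecord, DistributedLog, Replica) are hypothetical."""

from dataclasses import dataclass
from typing import List


@dataclass
class WalRecord:
    lsn: int          # log sequence number assigned by the primary
    payload: bytes    # opaque change record (e.g. a decoded WAL entry)


class DistributedLog:
    """Stand-in for a cloud-native replicated log service."""

    def __init__(self) -> None:
        self._entries: List[WalRecord] = []

    def append(self, record: WalRecord) -> int:
        # A real log would replicate the entry before acknowledging it;
        # the acknowledged position is the primary's durability point.
        self._entries.append(record)
        return len(self._entries) - 1

    def read_from(self, position: int) -> List[WalRecord]:
        return self._entries[position:]


class Replica:
    """Replays records from the log instead of from local durable storage."""

    def __init__(self, log: DistributedLog) -> None:
        self._log = log
        self._applied_up_to = 0

    def catch_up(self) -> None:
        for record in self._log.read_from(self._applied_up_to):
            # Apply the change to local state; here we only track progress.
            self._applied_up_to += 1


if __name__ == "__main__":
    log = DistributedLog()
    # Primary side: durability is delegated to the log, not to local disk.
    for lsn, change in enumerate([b"INSERT ...", b"UPDATE ...", b"COMMIT"]):
        log.append(WalRecord(lsn=lsn, payload=change))
    # Replica side: each replica replays independently from the log.
    replica = Replica(log)
    replica.catch_up()
    print("replica applied", replica._applied_up_to, "records")
```

The point of the sketch is the interface boundary: the primary's internals are never patched; records it already exports are simply redirected to the log, which is what the abstract means by using the original server through existing interfaces.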
2023
Authors
Esteves, T; Macedo, R; Oliveira, R; Paulo, J;
Publication
53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2023 - Workshops, Porto, Portugal, June 27-30, 2023
Abstract
2023
Authors
Brito, CV; Ferreira, PG; Portela, BL; Oliveira, RC; Paulo, JT;
Publication
IEEE ACCESS
Abstract
The adoption of third-party machine learning (ML) cloud services is highly dependent on the security guarantees and the performance penalty they incur on workloads for model training and inference. This paper explores security/performance trade-offs for the distributed Apache Spark framework and its ML library. Concretely, we build upon a key insight: in specific deployment settings, one can reveal carefully chosen non-sensitive operations (e.g. statistical calculations). This allows us to considerably improve the performance of privacy-preserving solutions without exposing the protocol to pervasive ML attacks. In more detail, we propose Soteria, a system for distributed privacy-preserving ML that leverages Trusted Execution Environments (e.g. Intel SGX) to run computations over sensitive information in isolated containers (enclaves). Unlike previous work, where all ML-related computation is performed inside trusted enclaves, we introduce a hybrid scheme, combining computation done inside and outside these enclaves. Our experimental evaluation shows that our approach reduces the runtime of ML algorithms by up to 41% when compared to previous related work. Our protocol is accompanied by a security proof and a discussion regarding resilience against a wide spectrum of ML attacks.
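The hybrid scheme can be illustrated by a scheduler that keeps operations over sensitive data inside an enclave-backed executor while letting carefully chosen non-sensitive operations (e.g. statistical calculations) run on plain workers. The sketch below is a hypothetical illustration, not Soteria's API: Operation, EnclaveExecutor, PlainExecutor, and dispatch are made-up names, and the actual SGX enclave transitions and encrypted I/O are elided.

```python
"""Illustrative sketch (not Soteria's code): route each pipeline operation
to an enclave-backed executor when it touches sensitive data, and to a
plain executor otherwise. All class and function names are hypothetical."""

from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List


@dataclass
class Operation:
    name: str
    sensitive: bool                      # does it reveal protected inputs?
    fn: Callable[[List[float]], float]   # the computation itself


class EnclaveExecutor:
    """Stand-in for code running inside a TEE (e.g. an Intel SGX enclave)."""

    def run(self, op: Operation, data: List[float]) -> float:
        # In a real system the data would be decrypted only inside the enclave.
        return op.fn(data)


class PlainExecutor:
    """Untrusted worker used for operations deemed safe to reveal."""

    def run(self, op: Operation, data: List[float]) -> float:
        return op.fn(data)


def dispatch(ops: List[Operation], data: List[float]) -> Dict[str, float]:
    """Hybrid scheduling: sensitive ops stay in the enclave, the rest do not."""
    enclave, plain = EnclaveExecutor(), PlainExecutor()
    results: Dict[str, float] = {}
    for op in ops:
        executor = enclave if op.sensitive else plain
        results[op.name] = executor.run(op, data)
    return results


if __name__ == "__main__":
    samples = [1.0, 2.0, 3.0, 4.0]
    pipeline = [
        # A dataset-wide mean is treated here as a non-sensitive statistic.
        Operation("feature_mean", sensitive=False, fn=mean),
        # A transformation over raw values stays inside the enclave.
        Operation("normalized_sum", sensitive=True,
                  fn=lambda xs: sum(x / max(xs) for x in xs)),
    ]
    print(dispatch(pipeline, samples))
```

The performance gain reported in the abstract comes precisely from this split: less computation inside enclaves, at the cost of an argument (the accompanying security proof) that the revealed operations do not help the considered ML attacks.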