
About

Ricardo Macedo is currently a Researcher at INESC TEC. He obtained his PhD degree in 2023 under the MAP-i Doctoral Programme in Computer Science from the Universities of Minho, Aveiro, and Porto with the thesis “User-level Software-Defined Storage Data Planes”.

His research is mainly focused on storage and operating systems, with an emphasis on designing new building blocks fitted for the performance, reliability, and energy consumption requirements of modern, large-scale I/O infrastructures, including key-value stores, kernel-bypass storage stacks, and disaggregated I/O resources. For more information, see his personal web page at https://rgmacedo.github.io/.


Details

  • Name

    Ricardo Gonçalves Macedo
  • Role

    Assistant Researcher
  • Since

    1st December 2016
Publications

2023

Diagnosing applications' I/O behavior through system call observability

Authors
Esteves, T.; Macedo, R.; Oliveira, R.; Paulo, J.

Publication
CoRR

2023

Taming Metadata-intensive HPC Jobs Through Dynamic, Application-agnostic QoS Control

Authors
Macedo, R.; Miranda, M.; Tanimura, Y.; Haga, J.; Ruhela, A.; Harrell, S. L.; Evans, R. T.; Pereira, J.; Paulo, J.

Publication
2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)

Abstract
Modern I/O applications that run on HPC infrastructures are increasingly becoming read and metadata intensive. However, having multiple applications submitting large amounts of metadata operations can easily saturate the shared parallel file system's metadata resources, leading to overall performance degradation and I/O unfairness. We present PADLL, an application and file system agnostic storage middleware that enables QoS control of data and metadata workflows in HPC storage systems. It adopts ideas from Software-Defined Storage, building data plane stages that mediate and rate limit POSIX requests submitted to the shared file system, and a control plane that holistically coordinates how all I/O workflows are handled. We demonstrate its performance and feasibility under multiple QoS policies using synthetic benchmarks, real-world applications, and traces collected from a production file system. Results show that PADLL can enforce complex storage QoS policies over concurrent metadata-aggressive jobs, ensuring fairness and prioritization.
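
To make the rate-limiting mechanism concrete, below is a minimal sketch of one way to mediate POSIX metadata requests from user space: an LD_PRELOAD interposer that applies a token bucket before forwarding each stat() call to libc. This illustrates the general technique only, not PADLL's code; the RATE_OPS_PER_SEC budget is hypothetical, and the unsynchronized bookkeeping is a deliberate simplification (a real data plane stage would be thread-safe and coordinated by the control plane).

/* throttle.c -- illustrative token-bucket interposer for POSIX metadata
 * calls, in the spirit of PADLL's data plane stages (not its actual code).
 * Build: gcc -shared -fPIC -o libthrottle.so throttle.c -ldl
 * Run:   LD_PRELOAD=./libthrottle.so <application>
 * Note: glibc >= 2.33 exports stat() directly; older versions route it
 * through __xstat, which would need interposing instead. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>

#define RATE_OPS_PER_SEC 1000.0   /* hypothetical per-job metadata budget */

static double tokens = RATE_OPS_PER_SEC;
static struct timespec last;

/* Refill the bucket from elapsed time, then consume one token, sleeping
 * while the budget is exhausted. Not thread-safe: a simplification. */
static void throttle(void) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    if (last.tv_sec == 0 && last.tv_nsec == 0)
        last = now;
    double dt = (now.tv_sec - last.tv_sec) + (now.tv_nsec - last.tv_nsec) / 1e9;
    last = now;
    tokens += dt * RATE_OPS_PER_SEC;
    if (tokens > RATE_OPS_PER_SEC)
        tokens = RATE_OPS_PER_SEC;   /* cap bursts at one second's budget */
    while (tokens < 1.0) {
        usleep(1000);                /* wait for the bucket to refill */
        clock_gettime(CLOCK_MONOTONIC, &now);
        tokens += ((now.tv_sec - last.tv_sec) +
                   (now.tv_nsec - last.tv_nsec) / 1e9) * RATE_OPS_PER_SEC;
        last = now;
    }
    tokens -= 1.0;
}

/* Intercept stat(): throttle first, then forward to the real libc symbol. */
int stat(const char *path, struct stat *buf) {
    static int (*real_stat)(const char *, struct stat *);
    if (!real_stat)
        real_stat = (int (*)(const char *, struct stat *))dlsym(RTLD_NEXT, "stat");
    throttle();
    return real_stat(path, buf);
}

The same pattern generalizes to any POSIX call; in the paper's design the budget is set and adjusted holistically by the control plane across concurrent jobs, whereas here it is a compile-time constant.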

2023

Diagnosing applications' I/O behavior through system call observability

Authors
Esteves, T.; Macedo, R.; Oliveira, R.; Paulo, J.

Publication
2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)

Abstract
We present DIO, a generic tool for observing inefficient and erroneous I/O interactions between applications and in-kernel storage systems that lead to performance, dependability, and correctness issues. DIO facilitates the analysis and enables near real-time visualization of complex I/O patterns for data-intensive applications generating millions of storage requests. This is achieved by non-intrusively intercepting system calls, enriching collected data with relevant context, and providing timely analysis and visualization for traced events. We demonstrate its usefulness by analyzing two production-level applications. Results show that DIO enables diagnosing resource contention in multi-threaded I/O that leads to high tail latency and erroneous file accesses that cause data loss.

2023

Toward a Practical and Timely Diagnosis of Application's I/O Behavior

Authors
Esteves, T.; Macedo, R.; Oliveira, R.; Paulo, J.

Publication
IEEE Access

Abstract
We present DIO, a generic tool for observing inefficient and erroneous I/O interactions between applications and in-kernel storage backends that lead to performance, dependability, and correctness issues. DIO eases the analysis and enables near real-time visualization of complex I/O patterns for data-intensive applications generating millions of storage requests. This is achieved by non-intrusively intercepting system calls, enriching collected data with relevant context, and providing timely analysis and visualization for traced events. We demonstrate its usefulness by analyzing four production-level applications. Results show that DIO enables diagnosing inefficient I/O patterns that lead to poor application performance, unexpected and redundant I/O calls caused by high-level libraries, resource contention in multithreaded I/O that leads to high tail latency, and erroneous file accesses that cause data loss. Moreover, through a detailed evaluation, we show that, when comparing DIO's inline diagnosis pipeline with a similar state-of-the-art solution, our system captures up to 28x more events while keeping tracing performance overhead between 14% and 51%.
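
As a rough illustration of the per-call context such a tracer collects, the sketch below interposes write() and emits one event (thread, file descriptor, bytes, latency) per call. DIO itself intercepts system calls non-intrusively at the kernel level rather than via library preloading, so treat this only as a toy stand-in; the event format is made up.

/* trace_write.c -- toy system-call event logger via LD_PRELOAD, only an
 * illustration of the per-call context that tools like DIO collect;
 * DIO's real interception does not work this way.
 * Build: gcc -shared -fPIC -o libtrace.so trace_write.c -ldl
 * Run:   LD_PRELOAD=./libtrace.so <application> 2>events.log */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

static ssize_t (*real_write)(int, const void *, size_t);
static __thread int in_hook;   /* guards against re-entry through fprintf */

ssize_t write(int fd, const void *buf, size_t count) {
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))
                     dlsym(RTLD_NEXT, "write");
    if (in_hook)   /* fprintf below may itself call write(); pass through */
        return real_write(fd, buf, count);
    in_hook = 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ssize_t ret = real_write(fd, buf, count);   /* the actual call */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long lat_us = (t1.tv_sec - t0.tv_sec) * 1000000L +
                  (t1.tv_nsec - t0.tv_nsec) / 1000;
    /* One event per call; a real pipeline would buffer events and ship
     * them to an analysis backend instead of writing to stderr. */
    fprintf(stderr, "write tid=%ld fd=%d bytes=%zd lat_us=%ld\n",
            (long)syscall(SYS_gettid), fd, ret, lat_us);

    in_hook = 0;
    return ret;
}

Aggregating such events per thread is already enough to surface the tail-latency contention described above; the paper's pipeline adds context enrichment, timely analysis, and near real-time visualization on top.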

2022

Accelerating Deep Learning Training Through Transparent Storage Tiering

Authors
Dantas, M.; Leitao, D.; Cui, P.; Macedo, R.; Liu, X. L.; Xu, W. J.; Paulo, J.

Publication
2022 22nd IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2022)

Abstract
We present MONARCH, a framework-agnostic storage middleware that transparently employs storage tiering to accelerate Deep Learning (DL) training. It leverages existing storage tiers of modern supercomputers (i.e., compute node's local storage and shared parallel file system (PFS)), while considering the I/O patterns of DL frameworks to improve data placement across tiers. MONARCH aims at accelerating DL training and decreasing the I/O pressure imposed on the PFS. We apply MONARCH to TensorFlow and PyTorch, while validating its performance and applicability under different models and dataset sizes. Results show that, even when the training dataset can only be partially stored on local storage, MONARCH reduces TensorFlow's and PyTorch's training time by up to 28% and 37% for I/O-intensive models, respectively. Furthermore, MONARCH decreases the number of I/O operations submitted to the PFS by up to 56%.
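
The core tiering idea is simple to state: serve a training sample from fast node-local storage when it is already there, otherwise read it from the PFS and keep a local copy so later epochs avoid the shared file system. The sketch below captures just that; the paths, the open_sample helper, and the promote-everything policy are hypothetical simplifications, and MONARCH additionally accounts for DL frameworks' access patterns and local-tier capacity.

/* tiering.c -- minimal sketch of transparent read-path storage tiering,
 * in the spirit of MONARCH (not its actual code; names are made up). */
#include <stdio.h>

#define LOCAL_TIER "/local/cache"   /* hypothetical node-local tier */
#define PFS_TIER   "/pfs/dataset"   /* hypothetical shared parallel FS */

/* Open `name`, preferring the local tier. On a miss, read it from the
 * PFS and promote a copy locally so future epochs skip the PFS. */
FILE *open_sample(const char *name) {
    char local[512], shared[512];
    snprintf(local, sizeof local, "%s/%s", LOCAL_TIER, name);
    snprintf(shared, sizeof shared, "%s/%s", PFS_TIER, name);

    FILE *f = fopen(local, "rb");
    if (f)
        return f;                   /* tier hit: no PFS traffic at all */

    f = fopen(shared, "rb");
    if (!f)
        return NULL;                /* sample missing from both tiers */

    /* Promote: stream the file into the local tier. A real system would
     * also track capacity and evict cold entries instead of copying
     * unconditionally. */
    FILE *out = fopen(local, "wb");
    if (out) {
        char buf[1 << 16];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            fwrite(buf, 1, n, out);
        fclose(out);
        rewind(f);                  /* caller rereads from the start */
    }
    return f;
}

The "transparent" part is what the middleware contributes: training scripts keep issuing ordinary reads while placement decisions like this one happen underneath, without framework changes.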

Supervised Theses

2023

Comprehensive Study of the Energy Impact of Key-Value Stores

Author
José Pedro Fernandes

Institution
UM

2023

Energy Control System for Disaggregated Storage Resources

Author
Mariana Amorim

Institution
UM

2023

Distributed Storage Optimizations for Deep Learning

Author
Maria Beatriz Moreira

Institution
UM

2023

Energy Control System for Large-scale Infrastructures

Author
Sara Pereira

Institution
UM

2023

Reproducible Fault Injection in Local Storage Systems

Author
Maria Ramos

Institution
UM