Publications

2024

Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS Companion 2024, Cagliari, Italy, June 24-28, 2024

Authors
Nebeling, M; Spano, LD; Campos, JC;

Publication
EICS (Companion)

Abstract

2024

Using Source-to-Source to Target RISC-V Custom Extensions: UVE Case-Study

Authors
Henriques, M; Bispo, J; Paulino, N;

Publication
PROCEEDINGS OF THE RAPIDO 2024 WORKSHOP, HIPEAC 2024

Abstract
Hardware specialization is seen as a promising avenue for improving computing efficiency, with reconfigurable devices as excellent deployment platforms for application-specific architectures. One approach to hardware specialization is via the popular RISC-V, where Instruction Set Architecture (ISA) extensions for domains such as Edge Artificial Intelligence (AI) are already appearing. However, to use the custom instructions while maintaining a high (e.g., C/C++) abstraction level, the assembler and compiler must be modified. Alternatively, inline assembly can be introduced manually by a software developer with expert knowledge of the hardware modifications in the RISC-V core. In this paper, we consider a RISC-V core with a vectorization and streaming engine that supports the Unlimited Vector Extension (UVE), and propose an approach to automatically transform annotated C loops into UVE-compatible code via automatic insertion of inline assembly. We rely on a source-to-source transformation tool, Clava, to perform sophisticated code analysis and transformations via scripts. We use pragmas to identify code sections amenable to vectorization and/or streaming, and use Clava to automatically insert inline UVE instructions, avoiding extensive modifications of existing compiler projects. We produce UVE binaries that are functionally correct when compared to handwritten versions with inline assembly, and achieve an equal, and sometimes lower, number of executed instructions for a set of six benchmarks from the Polybench suite. These initial results are evidence that this kind of translation is feasible, and we consider it possible in future work to target more complex transformations or other ISA extensions, accelerating the adoption of hardware/software co-design flows for generic application cases.
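The pragma-driven rewrite the abstract describes can be illustrated with a minimal sketch. The pragma name, the asm stub, and the string-level matching below are invented for illustration only; the actual work uses Clava, which operates on a full C AST rather than raw text.

```python
def transform_pragma_loops(source: str) -> str:
    """Toy source-to-source pass: replace a C loop annotated with a
    hypothetical '#pragma uve stream' directive by a stub standing in
    for generated inline assembly. Real tools such as Clava analyse a
    full AST; this line-level sketch only conveys the idea."""
    lines = source.splitlines()
    out = []
    i = 0
    while i < len(lines):
        if lines[i].strip() == "#pragma uve stream":
            # Emit a placeholder for the inline UVE instructions and
            # skip the annotated loop (pragma + header + one-line body).
            out.append('    asm volatile("/* UVE stream setup */");')
            i += 3
        else:
            out.append(lines[i])
            i += 1
    return "\n".join(out)

src = """void saxpy(int n, float a, float *x, float *y) {
    #pragma uve stream
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}"""
print(transform_pragma_loops(src))
```

The annotated loop is replaced wholesale, which mirrors how the original C construct disappears once the streaming engine takes over the iteration.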

2024

Optimisation for operational decision-making in a watershed system with interconnected dams

Authors
Vaz T.G.; Oliveira B.B.; Brandão L.;

Publication
Applied Energy

Abstract
In the energy production sector, increasing the quantity and efficiency of renewable energy sources, such as hydropower plants, is crucial to mitigating climate change. This paper proposes a new and flexible model for optimising operational decisions in watershed systems with interconnected dams. We propose a systematic representation of watersheds as a network of different connection points, which is the basis for an efficient Mixed-Integer Linear Programming model. The model is designed to be adaptable to different connections between dams in both main and tributary rivers. It supports decisions on power generation, pumping and water discharge, maximising profit while considering realistic constraints on water use and factors such as future energy prices and weather conditions. A relax-and-fix heuristic is proposed to solve the model, along with two heuristic variants to accommodate different watershed structures and sizes. Methodological tests with simulated instances validate their performance, with both variants achieving results within 1% of the optimal solution faster than the full model for the tested instances. To evaluate the performance of the approaches in a real-world scenario, we analyse the case study of the Cávado watershed (Portugal), providing relevant insights for managing dam operations. The model generally follows the actual decisions made in typical situations and flood scenarios. However, in the case of droughts, it tends to be more conservative, saving water unless releasing it is necessary or profitable. The model can be used in a decision-support system to provide decision-makers with an integrated view of the entire watershed and optimised solutions to the operational problem at hand.
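The network-of-connection-points representation the abstract builds on can be sketched as a plain downstream graph. The node names and topology below are invented for illustration; the actual paper layers a Mixed-Integer Linear Programming model on top of such a structure.

```python
# Hypothetical watershed: two dams feeding a junction on the main
# river, a third dam downstream, then the outlet.
downstream = {
    "dam_A": "junction_1",   # main river
    "dam_B": "junction_1",   # tributary joining at junction_1
    "junction_1": "dam_C",
    "dam_C": "outlet",
}

def path_to_outlet(node: str) -> list:
    """Follow downstream links from a connection point to the outlet,
    i.e. the sequence of points that discharged water passes through."""
    path = [node]
    while path[-1] in downstream:
        path.append(downstream[path[-1]])
    return path

print(path_to_outlet("dam_B"))
```

Keeping the topology as explicit point-to-point links is what lets the same optimisation model adapt to different dam configurations on main and tributary rivers.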

2024

Open Design Communities: A bibliometric analysis of community-based management

Authors
Castro, H; Madureira, F; Vrabic, R; Avila, P; Simonnetto, E;

Publication
Procedia Computer Science

Abstract
The significant growth of online collaboration in the development of open-source hardware and software has led to a surge of research interest. However, no comprehensive bibliometric review has investigated the management of digital communities in these ecosystems. In this study, academic contributions to the field of online community management in open-source hardware and software were mapped, highlighting influential research streams and trends. A bibliometric review was conducted based on a keyword search analysis of research databases (IEEExplore, Scopus, ScienceDirect, Web of Science), with a sample comprising 399 papers overall. The study identifies the most impactful articles in the field, maps the diverse streams of research on online collaboration and community management, visualizes focus areas and trends, and pinpoints areas for further investigation. These findings will support future research within this rapidly evolving domain.
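The keyword-search step of such a review can be sketched in a few lines. The record titles and query terms below are made up; in practice the search runs against exports from the databases the abstract lists (IEEExplore, Scopus, ScienceDirect, Web of Science).

```python
# Hypothetical record sample standing in for a database export.
records = [
    "Community management in open-source hardware projects",
    "Deep learning for image segmentation",
    "Governance of online open-source software communities",
]

def matches(title: str, terms: list) -> bool:
    """A record matches when every query term appears in its title
    (case-insensitive substring match)."""
    t = title.lower()
    return all(term in t for term in terms)

# 'communit' deliberately covers both 'community' and 'communities'.
query = ["open-source", "communit"]
hits = [r for r in records if matches(r, query)]
print(len(hits))
```

Substring stems are a common way to keep a single query term matching the singular and plural forms a title might use.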

2024

A C Subset for Ergonomic Source-to-Source Analyses and Transformations

Authors
Matos, JN; Bispo, J; Sousa, LM;

Publication
PROCEEDINGS OF THE RAPIDO 2024 WORKSHOP, HIPEAC 2024

Abstract
Modern compiled software, written in languages such as C, relies on complex compiler infrastructure. However, developing new transformations and improving existing ones can be challenging for researchers and engineers. Often, transformations must be implemented by modifying the compiler itself, which may not be feasible for technical or legal reasons. Source-to-source compilers make it possible to directly analyse and transform the original source, making transformations portable across different compilers and allowing rapid research and prototyping of code transformations. However, this approach has the drawback of exposing the researcher to the full breadth of the source language, which is often more extensive and complex than the IRs used in traditional compilers. In this work, we propose a solution to tame the complexity of the source language and make source-to-source compilers an ergonomic platform for program analysis and transformation. We define a simpler subset of the C language that can implement the same programs with fewer constructs, and implement a set of source-to-source transformations that automatically normalise the input source code into equivalent programs expressed in the proposed subset. Finally, we implement a function inlining transformation that targets the subset as a case study. We show that, for this case study, the assumptions afforded by using a simpler language subset greatly increase the number of cases to which the transformation can be applied, raising the average success rate from 37%, before normalisation, to 97%, after normalisation. We also evaluate the performance of several benchmarks after applying a naive inlining algorithm, obtaining a 12% performance improvement in certain applications when compiling with the -O2 flag, in both Clang and GCC, suggesting there is room for exploring source-level transformations as a complement to traditional compilers.
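The normalisation idea can be conveyed with a single toy rule. The rule below (rewriting C post-increment statements into plain assignments) is an invented example of reducing the construct count; the paper's actual transformations operate on a full C AST, not on text.

```python
import re

def normalize(stmt: str) -> str:
    """Toy normalisation pass in the spirit of the paper: rewrite a
    post-increment statement 'x++;' into the equivalent but simpler
    'x = x + 1;', shrinking the set of constructs that later passes
    (such as an inliner) must handle."""
    return re.sub(r"\b(\w+)\+\+;", r"\1 = \1 + 1;", stmt)

print(normalize("i++;"))      # i = i + 1;
print(normalize("count++;"))  # count = count + 1;
```

Each such rule removes one construct from the language a downstream transformation has to reason about, which is how normalisation raises the inliner's applicability.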

2024

Exact vs Approximated ML Estimation for the Box-Cox Transformation

Authors
Gonçalves, R;

Publication
INTERNATIONAL CONFERENCE ON NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2022, ICNAAM-2022

Abstract
The Box-Cox (BC) transformation is widely used in data analysis to achieve approximate normality on the transformed scale. The transformation is only possible for non-negative data. This positivity requirement implies a truncation of the distribution on the transformed scale, and the distribution on the transformed scale is therefore truncated normal. This fact has consequences for the estimation of the parameters, especially if the truncated probability is high. In their seminal paper, Box and Cox proposed estimating the parameters using the normal distribution, which in practice means ignoring any consequences of the truncation on the estimation process. In this work, we present the framework for exact likelihood estimation on the PN distribution, which we call method m(1), and show how to calculate the parameter estimates using consistent estimators. We also present a pseudo-likelihood function for the same model that does not take truncation into account and allows the parameters mu and sigma to be replaced by their estimates; we call this estimation method m(2). We conclude that, for cases where the truncated probability is low, both methods give good estimation results. However, for larger values of the truncated probability, the m(2) method does not present the same efficiency.
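The transformation itself is simple to state. The sketch below covers only the Box-Cox transform (the piece both estimation methods share), not the truncated-normal likelihoods of m(1) and m(2).

```python
import math

def box_cox(x: float, lam: float) -> float:
    """Box-Cox transform of a positive observation x:
    (x**lam - 1) / lam for lam != 0, and log(x) in the limit lam -> 0.
    The positivity requirement is exactly what induces the truncation
    on the transformed scale discussed in the abstract."""
    if x <= 0:
        raise ValueError("Box-Cox requires positive data")
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# lam = 0 gives the log transform; lam = 1 is just a unit shift.
print(box_cox(math.e, 0.0))  # 1.0
print(box_cox(3.0, 1.0))     # 2.0
```

For lambda = 1 the transformed value x - 1 is unbounded below only down to -1, which is the truncation the exact method m(1) accounts for and the pseudo-likelihood m(2) ignores.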
