Publications

2024

Using Source-to-Source to Target RISC-V Custom Extensions: UVE Case-Study

Authors
Henriques, M; Bispo, J; Paulino, N;

Publication
PROCEEDINGS OF THE RAPIDO 2024 WORKSHOP, HIPEAC 2024

Abstract
Hardware specialization is seen as a promising avenue for improving computing efficiency, with reconfigurable devices as excellent deployment platforms for application-specific architectures. One approach to hardware specialization is via the popular RISC-V, where Instruction Set Architecture (ISA) extensions for domains such as Edge Artificial Intelligence (AI) are already appearing. However, to use the custom instructions while maintaining a high (e.g., C/C++) abstraction level, the assembler and compiler must be modified. Alternatively, inline assembly can be introduced manually by a software developer with expert knowledge of the hardware modifications in the RISC-V core. In this paper, we consider a RISC-V core with a vectorization and streaming engine that supports the Unlimited Vector Extension (UVE), and propose an approach to automatically transform annotated C loops into UVE-compatible code via automatic insertion of inline assembly. We rely on a source-to-source transformation tool, Clava, to perform sophisticated code analysis and transformations via scripts. We use pragmas to identify code sections amenable to vectorization and/or streaming, and use Clava to automatically insert inline UVE instructions, avoiding extensive modifications to existing compiler projects. We produce UVE binaries that are functionally correct when compared to handwritten versions with inline assembly, and achieve an equal and sometimes lower number of executed instructions for a set of six benchmarks from the Polybench suite. These initial results are evidence that this kind of translation is feasible, and we consider it possible in future work to target more complex transformations or other ISA extensions, accelerating the adoption of hardware/software co-design flows for generic application cases.
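
As a purely illustrative sketch of the input side of this flow (the pragma name and clauses below are hypothetical, not the syntax defined in the paper), the approach takes an annotated C loop such as the following and lets the Clava scripts rewrite its body into functionally equivalent inline UVE assembly:

#include <stddef.h>

/* Hypothetical annotation marking the loop as amenable to streaming and
   vectorization; a source-to-source pass would replace the loop body with
   inline UVE assembly while leaving the surrounding C code untouched. */
void saxpy(size_t n, float a, const float *x, float *y) {
    #pragma uve_candidate stream vectorize   /* hypothetical pragma */
    for (size_t i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];
    }
}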

2024

Optimisation for operational decision-making in a watershed system with interconnected dams

Authors
Vaz, TG; Oliveira, BB; Brandão, L;

Publication
Applied Energy

Abstract
In the energy production sector, increasing the quantity and efficiency of renewable energy sources, such as hydropower plants, is crucial to mitigating climate change. This paper proposes a new and flexible model for optimising operational decisions in watershed systems with interconnected dams. We propose a systematic representation of watersheds as a network of different connection points, which is the basis for an efficient Mixed-Integer Linear Programming model. The model is designed to be adaptable to different connections between dams in both main and tributary rivers. It supports decisions on power generation, pumping and water discharge, maximising profit while considering realistic constraints on water use and factors such as future energy prices and weather conditions. A relax-and-fix heuristic is proposed to solve the model, along with two heuristic variants to accommodate different watershed structures and sizes. Methodological tests with simulated instances validate their performance, with both variants reaching results within 1% of the optimal solution, and doing so faster than the full model, for the tested instances. To evaluate the performance of the approaches in a real-world scenario, we analyse the case study of the Cávado watershed (Portugal), providing relevant insights for managing dam operations. The model generally follows the actual decisions made in typical situations and flood scenarios. However, in the case of droughts, it tends to be more conservative, saving water unless necessary or profitable. The model can be used in a decision-support system to provide decision-makers with an integrated view of the entire watershed and optimised solutions to the operational problem at hand. © 2024 The Author(s)

2024

A C Subset for Ergonomic Source-to-Source Analyses and Transformations

Authors
Matos, JN; Bispo, J; Sousa, LM;

Publication
PROCEEDINGS OF THE RAPIDO 2024 WORKSHOP, HIPEAC 2024

Abstract
Modern compiled software, written in languages such as C, relies on complex compiler infrastructure. However, developing new transformations and improving existing ones can be challenging for researchers and engineers. Often, transformations must be implemented by modifying the compiler itself, which may not be feasible for technical or legal reasons. Source-to-source compilers make it possible to directly analyse and transform the original source, making transformations portable across different compilers and allowing rapid research and prototyping of code transformations. However, this approach has the drawback of exposing the researcher to the full breadth of the source language, which is often more extensive and complex than the IRs used in traditional compilers. In this work, we propose a solution to tame the complexity of the source language and make source-to-source compilers an ergonomic platform for program analysis and transformation. We define a simpler subset of the C language that can implement the same programs with fewer constructs, and implement a set of source-to-source transformations that automatically normalise the input source code into equivalent programs expressed in the proposed subset. Finally, we implement a function inlining transformation that targets the subset as a case study. We show that, for this case study, the assumptions afforded by using a simpler language subset greatly improve the number of cases in which the transformation can be applied, increasing the average success rate from 37%, before normalisation, to 97%, after normalisation. We also evaluate the performance of several benchmarks after applying a naive inlining algorithm, and obtain a 12% performance improvement in certain applications after compiling with the -O2 flag, in both Clang and GCC, suggesting there is room for exploring source-level transformations as a complement to traditional compilers.
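
For illustration only (the concrete subset and normalisation rules are those defined in the paper and may differ from this sketch), the normalisation described above turns dense C constructs into equivalent code built from fewer, simpler ones:

/* Before normalisation: loop control, a compound assignment and a
   post-increment packed into a single for statement. */
int sum_before(const int *v, int n) {
    int s = 0;
    for (int i = 0; i < n; s += v[i++])
        ;
    return s;
}

/* After normalisation (sketch): the same computation expressed with
   explicit, single-effect statements from a smaller C subset. */
int sum_after(const int *v, int n) {
    int s = 0;
    int i = 0;
    while (i < n) {
        s = s + v[i];
        i = i + 1;
    }
    return s;
}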

2024

Exact vs Approximated ML Estimation for the Box-Cox Transformation

Authors
Gonçalves, R;

Publication
AIP Conference Proceedings

Abstract
The Box-Cox (BC) transformation is widely used in data analysis to achieve approximate normality on the transformed scale. The transformation is only possible for positive data. This positiveness requirement implies a truncation of the distribution on the transformed scale, which is therefore truncated normal. This fact has consequences for the estimation of the parameters, especially if the truncated probability is high. In their seminal paper, Box and Cox proposed to estimate the parameters using the normal distribution, which in practice means ignoring any consequences of the truncation on the estimation process. In this work we present the framework for exact likelihood estimation on the PN distribution, which we call method m1, and show how to calculate the parameter estimates using consistent estimators. We also present a pseudo-likelihood function for the same model that does not take truncation into account and allows the parameters µ and σ to be replaced by their estimates. We call this estimation method m2. We conclude that, for cases where the truncated probability is low, both methods give good estimation results. However, for larger values of the truncated probability, method m2 does not present the same efficiency. © 2024 American Institute of Physics Inc. All rights reserved.
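
For reference, a brief sketch of the standard definitions the abstract relies on (the notation here is assumed, not taken from the paper): the Box-Cox transformation of a positive observation y is

    y^{(\lambda)} =
      \begin{cases}
        \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\
        \log y, & \lambda = 0,
      \end{cases}
      \qquad y > 0.

For \lambda > 0 the transformed values can only fall in (-1/\lambda, \infty), so a normal model N(\mu, \sigma^2) on the transformed scale is in fact truncated at -1/\lambda; an exact likelihood in the spirit of method m1 divides each normal density term by 1 - \Phi\bigl((-1/\lambda - \mu)/\sigma\bigr), whereas an approximation in the spirit of m2 omits this normalising factor.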

2024

On Quantum Natural Policy Gradients

Authors
Sequeira, A; Santos, LP; Barbosa, LS;

Publication
CoRR

Abstract

2024

Unveiling Health Literacy through Web Search Behavior: A Classification-Based Analysis of User Interactions

Authors
Lopes, CT; Henriques, M;

Publication
Proceedings of the 2024 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR 2024, Sheffield, United Kingdom, March 10-14, 2024

Abstract
More and more people are relying on the Web to find health information. Challenges faced by individuals with low health literacy in the real world likely persist in the virtual realm. To assist these users, our first step is to identify them. This study aims to uncover disparities in the information-seeking behavior of users with varying levels of health literacy. We utilized data gathered from a prior user experiment. Our approach involves a classification scheme encompassing events during web search sessions, spanning the browser, search engine, and web pages. Employing this scheme, we logged interactions from video recordings in the user study and subjected the event logs to descriptive and inferential analyses. Our data analysis unveils distinctive patterns within the low health literacy group. They exhibit a higher frequency of query reformulations with entirely new terms, perform more left clicks, use the browser's back functionality more frequently, and invest more time in interactions, including increased scrolling on results pages. Conversely, the high health literacy group demonstrates a greater propensity to click on universal results, extract text from URLs more often, and make more clicks with the middle mouse button. These findings offer valuable insights for inferring users' health literacy in a non-intrusive manner. The automatic inference of health literacy can pave the way for personalized services, enhancing accessibility to information and education for individuals with low health literacy, among other benefits. © 2024 Owner/Author.
