2025
Authors
Cunha, J; Madeira, A; Barbosa, LS;
Publication
SCIENCE OF COMPUTER PROGRAMMING
Abstract
The need for more flexible and robust models to reason about systems in the presence of conflicting information is becoming increasingly relevant in different contexts. This has prompted the introduction of paraconsistent transition systems, where transitions are characterized by two pairs of weights: one representing the evidence that the transition effectively occurs and the other its absence. Such weights can express scenarios of vagueness and inconsistency. This paper establishes a foundation for a compositional and structured specification approach to paraconsistent transition systems, framed as a paraconsistent institution. The proposed methodology follows the stepwise implementation process outlined by Sannella and Tarlecki.
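As a rough illustration of the kind of structure the abstract describes, the sketch below models, in Python, a transition annotated with weights for and against its occurrence and classifies it as vague or inconsistent. The names, the simplification to a single pair of weights in [0, 1], and the thresholds used for the classification are illustrative assumptions, not the paper's formal definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PTransition:
    """A transition of a paraconsistent transition system (illustrative sketch).

    Each transition carries weights in [0, 1] (an assumption made here):
      - evidence_for: evidence that the transition effectively occurs
      - evidence_against: evidence that it does not occur
    """
    source: str
    action: str
    target: str
    evidence_for: float
    evidence_against: float

    def is_vague(self) -> bool:
        # Information gap: the two weights together do not cover the unit interval.
        return self.evidence_for + self.evidence_against < 1.0

    def is_inconsistent(self) -> bool:
        # Overlapping evidence: the two weights together exceed the unit interval.
        return self.evidence_for + self.evidence_against > 1.0

# Example: conflicting evidence about a "send" transition, an inconsistent scenario.
t = PTransition("s0", "send", "s1", evidence_for=0.7, evidence_against=0.6)
print(t.is_inconsistent())  # True
```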
2024
Authors
Busquim e Silva, RA; Arai, NN; Burgareli, LA; Parente de Oliveira, JM; Sousa Pinto, J;
Publication
Computer Science Foundations and Applied Logic
Abstract
[No abstract available]
2024
Authors
Rufino, J; Ramírez, JM; Aguilar, J; Baquero, C; Champati, J; Frey, D; Lillo, RE; Fernández Anta, A;
Publication
HELIYON
Abstract
In this paper, we evaluate the performance and analyze the explainability of machine learning models boosted by feature selection in predicting COVID-19-positive cases from self-reported information. In essence, this work describes a methodology to identify COVID-19 infections that considers the large amount of information collected by the University of Maryland Global COVID-19 Trends and Impact Survey (UMD-CTIS). More precisely, this methodology performs a feature selection stage based on the recursive feature elimination (RFE) method to reduce the number of input variables without compromising detection accuracy. A tree-based supervised machine learning model is then optimized with the selected features to detect COVID-19-active cases. In contrast to previous approaches that use a limited set of selected symptoms, the proposed approach builds the detection engine considering a broad range of features including self-reported symptoms, local community information, vaccination acceptance, and isolation measures, among others. To implement the methodology, three different supervised classifiers were used: random forests (RF), light gradient boosting (LGB), and extreme gradient boosting (XGB). Based on data collected from the UMD-CTIS, we evaluated the detection performance of the methodology for four countries (Brazil, Canada, Japan, and South Africa) and two periods (2020 and 2021). The proposed approach was assessed in terms of various quality metrics: F1-score, sensitivity, specificity, precision, receiver operating characteristic (ROC), and area under the ROC curve (AUC). This work also shows the normalized daily incidence curves obtained by the proposed approach for the four countries. Finally, we perform an explainability analysis using Shapley values and feature importance to determine the relevance of each feature and the corresponding contribution for each country and each country/year.
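A minimal sketch of the methodology described above, assuming a tabular dataset of self-reported survey features and a binary label for COVID-19-active cases: recursive feature elimination (RFE) with a tree-based classifier from scikit-learn, followed by some of the quality metrics listed in the abstract. The data here is synthetic and the feature count, parameters, and model choice are placeholders; LightGBM or XGBoost classifiers could be plugged in the same way.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((1000, 40))          # placeholder for self-reported survey features
y = rng.integers(0, 2, size=1000)   # placeholder for positive/negative labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Recursive feature elimination: keep only the most informative input variables.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=15, step=2)
selector.fit(X_tr, y_tr)

# Train the detection model on the selected features only.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)

pred = clf.predict(selector.transform(X_te))
proba = clf.predict_proba(selector.transform(X_te))[:, 1]
print("F1:", f1_score(y_te, pred))
print("Sensitivity:", recall_score(y_te, pred))               # true positive rate
print("Specificity:", recall_score(y_te, pred, pos_label=0))  # true negative rate
print("AUC:", roc_auc_score(y_te, proba))
```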
2024
Authors
Hill, RK; Baquero, C;
Publication
Commun. ACM
Abstract
[No abstract available]
2024
Authors
Rua, R; Saraiva, J;
Publication
EMPIRICAL SOFTWARE ENGINEERING
Abstract
Software performance concerns have been attracting research interest at an increasing rate, especially regarding energy performance in non-wired computing devices. In the context of mobile devices, several research works have been devoted to assessing the performance of software and its underlying code. One important contribution of such research efforts is sets of programming guidelines that identify efficient and inefficient programming practices and consequently steer software developers towards writing performance-friendly code. Despite recent efforts in this direction, it is still almost unfeasible to obtain universal and up-to-date knowledge regarding software and respective source-code performance, namely regarding energy performance, where there has been growing interest in optimizing software energy consumption due to the power restrictions of such devices. There are still many difficulties reported by the community in measuring performance, namely in large-scale validation and replication. The Android ecosystem is a particular example: the great fragmentation of the platform, the constant evolution of the hardware, the software platform, and the development libraries themselves, and the fact that most of the platform tools are integrated into the IDE's GUI make it extremely difficult to perform performance studies based on large sets of data/applications. In this paper, we analyze the execution of a diversified corpus of applications of significant magnitude: the source-code performance of 1322 versions of 215 different Android applications, dynamically executed in more than 27,900 tested scenarios, using state-of-the-art black-box testing frameworks with different combinations of GUI inputs. Our empirical analysis shows that semantic program changes, such as adding functionality and repairing bugs, are the changes most associated with a relevant impact on energy performance. Furthermore, we also demonstrate that several coding practices previously identified as energy-greedy do not replicate such behavior in our execution context and can have distinct impacts across several performance indicators: runtime, memory, and energy consumption. Some of these practices include performance issues reported by the Android Lint and Android SDK APIs. We also provide evidence that the evaluated performance indicators have little to no correlation with the priority of the performance issues detected by Android Lint. Finally, our results show that there are significant differences in performance between the most used libraries for common programming tasks, such as HTTP communication, JSON manipulation, and image loading/rendering, among others, and we provide a set of recommendations for selecting the most efficient library for each performance indicator. Based on the conclusions drawn and as an extension of the developed work, we also synthesized a set of guidelines that practitioners can use to replicate energy studies and build more efficient mobile software.
2024
Authors
Macedo, JN; Rodrigues, E; Viera, M; Saraiva, J;
Publication
JOURNAL OF SYSTEMS AND SOFTWARE
Abstract
Strategic term rewriting and attribute grammars are two powerful programming techniques widely used in language engineering. The former relies on strategies to apply term rewrite rules when defining large-scale language transformations, while the latter is suitable for expressing context-dependent language processing algorithms. These two techniques can be expressed and combined via a powerful navigation abstraction: generic zippers. This results in a concise zipper-based embedding offering the expressiveness of both techniques. In addition, we extend the functionality of strategic programming, enabling the definition of outward traversals, i.e., traversals that move outside the starting position. Such an elegant embedding has a severe limitation, however: it recomputes attribute values. This paper presents a proper and efficient embedding of both techniques. First, attribute values are memoized in the zipper data structure, thus avoiding their re-computation. Moreover, strategic zipper-based functions are adapted to access such memoized values. We have hosted our memoized zipper-based embedding of strategic attribute grammars in both the Haskell and Python programming languages. Moreover, we benchmarked the libraries supporting both embeddings against the state-of-the-art Haskell-based Strafunski and Scala-based Kiama libraries. The first results show that our Haskell Ztrategic library is very competitive with these two well-established libraries.
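The sketch below illustrates, in Python, the core idea of memoizing attribute values while navigating a tree with a zipper, so that repeated visits to a node reuse cached results instead of recomputing them. The names (Tree, Zipper, memo_attr) and the caching scheme are illustrative assumptions and do not reflect the actual API of the Ztrategic library or its Python counterpart.

```python
from dataclasses import dataclass, field

@dataclass
class Tree:
    value: int
    children: list = field(default_factory=list)
    memo: dict = field(default_factory=dict)   # attribute name -> cached value

@dataclass
class Zipper:
    focus: Tree
    path: list = field(default_factory=list)   # stack of (parent, child index)

    def down(self, i=0):
        # Move the focus to the i-th child, remembering how to come back up.
        return Zipper(self.focus.children[i], self.path + [(self.focus, i)])

    def up(self):
        # Move the focus back to the parent node.
        parent, _ = self.path[-1]
        return Zipper(parent, self.path[:-1])

def memo_attr(name, compute):
    """Build an attribute function that caches its value at the focused node."""
    def attr(z):
        if name not in z.focus.memo:
            z.focus.memo[name] = compute(z)
        return z.focus.memo[name]
    return attr

# Example attribute: the sum of all values in the subtree rooted at the focus.
subtree_sum = memo_attr("sum", lambda z: z.focus.value +
                        sum(subtree_sum(z.down(i))
                            for i in range(len(z.focus.children))))

t = Tree(1, [Tree(2), Tree(3, [Tree(4)])])
print(subtree_sum(Zipper(t)))  # 10; later calls reuse the memoized values
```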