2024
Authors
Santos, M; de Carvalho, ACPLF; Soares, C;
Publication
Proceedings of the 2nd Workshop on Fairness and Bias in AI co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20th, 2024.
Abstract
We have never produced as much data as today, and tomorrow we will probably produce even more. The increase is due not only to the larger number of data sources, but also to the fact that each source can continuously produce new data. The discovery of temporal patterns in continuously generated data is the main goal of many forecasting tasks, such as predicting the average value of a currency or the average temperature in a city on the next day. In these tasks, it is assumed that the time difference between two consecutive values produced by the same source is constant, so that the sequence of values forms a time series. The importance, and the very large number, of time series forecasting tasks make them one of the most popular data analysis applications, addressed by a large number of different methods. Despite this popularity, there is a dearth of research aimed at understanding the conditions under which these methods achieve high or poor forecasting performance. Empirical studies, although common, are challenged by the limited availability of time series datasets, restricting the extraction of reliable insights. To address this limitation, we present tsMorph, a tool for generating semi-synthetic time series through dataset morphing. tsMorph works by creating a sequence of datasets from two original datasets. The characteristics of the generated datasets progressively depart from those of one of the datasets and converge toward the attributes of the other. This method provides a valuable alternative for obtaining substantial datasets. In this paper, we show the benefits of tsMorph by assessing the predictive performance of the Long Short-Term Memory Network and DeepAR forecasting algorithms. The time series used in the experiments come from the NN5 Competition. The experimental results provide important insights. Notably, the performance of the two algorithms improves proportionally with the frequency of the time series. These experiments confirm that tsMorph can be an effective tool for better understanding the behaviour of forecasting algorithms, delivering a pathway to overcoming the limitations posed by empirical studies and enabling more extensive and reliable experiments. Furthermore, tsMorph can promote Responsible Artificial Intelligence by emphasising characteristics of time series where forecasting algorithms may not perform well, thereby highlighting potential limitations. © 2024 Copyright for this paper by its authors.
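One simple way to picture the morphing idea described in this abstract is as a convex combination of two aligned series, with a parameter stepping from 0 to 1 so that the generated series gradually shift from the characteristics of one series to those of the other. The sketch below illustrates this idea; `morph_series` is a hypothetical name, and linear interpolation is only one possible instantiation, not necessarily the transformation tsMorph applies.

```python
import numpy as np

def morph_series(source: np.ndarray, target: np.ndarray, n_steps: int = 10) -> list[np.ndarray]:
    """Generate semi-synthetic series whose characteristics move
    gradually from `source` to `target` (assumed equal length)."""
    if source.shape != target.shape:
        raise ValueError("source and target must have the same shape")
    alphas = np.linspace(0.0, 1.0, n_steps)
    # Each step is a convex combination: alpha=0 reproduces the source,
    # alpha=1 reproduces the target, intermediate values interpolate.
    return [(1.0 - alpha) * source + alpha * target for alpha in alphas]

# Example: morph a smooth sine into a noisier, trending series.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
a = np.sin(t)
b = 0.05 * t + rng.normal(scale=0.3, size=t.size)
sequence = morph_series(a, b, n_steps=5)
```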
2024
Authors
Azevedo, C; Roxo, MT; Brandão, A;
Publication
Smart Innovation, Systems and Technologies
Abstract
This study develops a set of sustainable tourism advertising effects and consumer environmental awareness-raising measures and examines them by advertising certification and advertising format in a field experiment. The tourism advertising effects are analyzed through five dependent variables: trust and credibility, environmentalism, ad relevance, realism, and flow. Several ANOVA and multiple comparison tests were performed to understand whether these variables varied between groups. The experimental findings indicate that flow and video format affect tourism advertising and consumer environmental awareness-raising. This study demonstrates the importance of understanding the concepts of sustainable tourism and awareness-raising. It also points to the need to identify the best communication strategies to promote a sustainable destination, as different communication methods may lead to different results. In addition, it provides valuable information for marketers to consider when implementing their communication strategies. © 2024, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
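As a minimal sketch of the kind of group comparison reported in this abstract: a one-way ANOVA followed by a Tukey HSD multiple-comparison test. The data frame, group labels, and scores below are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: one row per participant, with the experimental
# group (ad format) and one measured dependent variable (e.g. flow).
df = pd.DataFrame({
    "group": ["image"] * 4 + ["video"] * 4 + ["text"] * 4,
    "flow":  [3.1, 3.4, 2.9, 3.2, 4.2, 4.5, 4.1, 4.4, 2.8, 3.0, 2.7, 2.9],
})

# One-way ANOVA: does mean `flow` differ between the groups?
samples = [g["flow"].values for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD multiple-comparison test to locate which pairs differ.
print(pairwise_tukeyhsd(df["flow"], df["group"]))
```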
2024
Authors
Teixeira, J; Ribeiro, A; Jorge, AS; Silva, A;
Publication
Proceedings of SPIE - The International Society for Optical Engineering
Abstract
Recent advances in optical trapping have opened new opportunities for manipulating micro and nanoparticles, establishing optical tweezers (OT) as a powerful tool for single-cell analysis. Furthermore, intelligent systems have been developed to characterize these particles, as information about their size and composition can be extracted from the scattered radiation signal. In this manuscript, we aim to explore the potential of optical tweezers for the characterization of sub-micron size variations in microparticles. We devised a case study, aiming to assess the limits of the size discrimination ability of an optical tweezer system, using transparent 4.8 µm PMMA particles, functionalized with streptavidin. We focused on the heavily studied streptavidin-biotin system, with streptavidin-functionalized PMMA particles targeting biotinylated bovine serum albumin. This binding process results in an added molecular layer to the particle’s surface, increasing its radius by approximately 7 nm. An automatic OT system was used to trap the particles and acquire their forward-scattered signals. Then, the signals’ frequency components were analyzed using the power spectral density method followed by a dimensionality reduction via the Uniform Manifold Approximation and Projection algorithm. Finally, a Random Forest Classifier achieved a mean accuracy of 94% for the distinction of particles with or without the added molecular layer. Our findings demonstrate the ability of our technique to discriminate between particles that are or are not bound to the biotin protein, by detecting nanoscale changes in the size of the microparticles. This indicates the possibility of coupling shape-changing bioaffinity tools (such as aptamers, molecularly imprinted polymers, or antibodies) with optical trapping systems to enable optical tweezers with analytical capability. © 2024 SPIE.
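The analysis pipeline described in this abstract can be sketched roughly as follows: estimate each signal's power spectral density (Welch's method is a common estimator and is assumed here), embed the spectra with UMAP, and classify with a Random Forest. The array shapes, sampling rate, and random data below are placeholders, not the acquired signals.

```python
import numpy as np
from scipy.signal import welch
import umap  # pip install umap-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def psd_features(signals: np.ndarray, fs: float) -> np.ndarray:
    """Welch power spectral density of each scattered-light signal (one per row)."""
    _, psd = welch(signals, fs=fs, nperseg=1024, axis=1)
    return np.log10(psd + 1e-12)  # log scale stabilizes magnitudes

# Hypothetical data: forward-scattered signals from trapped particles,
# one row per particle, plus a 0/1 label (bare vs. added molecular layer).
rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 10_000))
labels = rng.integers(0, 2, size=200)

features = psd_features(signals, fs=50_000.0)
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features)

X_train, X_test, y_train, y_test = train_test_split(
    embedding, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```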
2024
Authors
Baghcheband, H; Soares, C; Reis, LP;
Publication
IEEE INTERNET COMPUTING
Abstract
Today, autonomous agents, the Internet of Things, and smart devices produce more and more distributed data and use them to learn models for different purposes. One challenge is that learning from local data only may lead to suboptimal models. Thus, better models are expected if agents can exchange data, leading to approaches such as federated learning. However, these approaches assume that data have no value and, thus, are exchanged for free. A machine learning data market (MLDM), a framework based on multiagent systems with a market-based perspective on data exchange, was recently proposed. In an MLDM, each agent trains its model based on both local data and data bought from other agents. Although the empirical results are interesting, several challenges are still open, including data acquisition and data valuation. The MLDM is an illustrative example of how the value of data can and should be integrated into the design of distributed ML systems.
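A toy sketch of the market-based exchange idea: an agent values a candidate batch of data by the validation-error reduction it would bring and buys it only if that value exceeds the asking price. The class, method names, and pricing rule are hypothetical illustrations, not the MLDM protocol itself.

```python
from dataclasses import dataclass, field
import numpy as np
from sklearn.linear_model import SGDRegressor

@dataclass
class MarketAgent:
    """Toy agent that trains on local data and may buy batches from peers."""
    X: np.ndarray  # local features, shape (n_samples, n_features)
    y: np.ndarray  # local targets
    budget: float = 10.0
    model: SGDRegressor = field(default_factory=lambda: SGDRegressor(random_state=0))

    def fit(self) -> None:
        """Train the local model; call once before valuing any batch."""
        self.model.fit(self.X, self.y)

    def value_of(self, X_new, y_new, X_val, y_val) -> float:
        """Value a candidate batch as the validation-error reduction it brings."""
        before = ((self.model.predict(X_val) - y_val) ** 2).mean()
        candidate = SGDRegressor(random_state=0).fit(
            np.vstack([self.X, X_new]), np.concatenate([self.y, y_new]))
        after = ((candidate.predict(X_val) - y_val) ** 2).mean()
        return max(before - after, 0.0)

    def maybe_buy(self, X_new, y_new, price, X_val, y_val) -> bool:
        """Buy the batch only if its estimated value exceeds its price."""
        if price <= self.budget and self.value_of(X_new, y_new, X_val, y_val) > price:
            self.X = np.vstack([self.X, X_new])
            self.y = np.concatenate([self.y, y_new])
            self.budget -= price
            self.fit()
            return True
        return False
```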
2024
Authors
Cunha, S; Silva, L; Saraiva, J; Fernandes, JP;
Publication
PROCEEDINGS OF THE 17TH ACM SIGPLAN INTERNATIONAL CONFERENCE ON SOFTWARE LANGUAGE ENGINEERING, SLE 2024
Abstract
Energy efficiency of software is crucial in minimizing environmental impact and reducing operational costs of ICT systems. Energy efficiency is therefore a key area of contemporary software language engineering research. A recurrent discussion that excites our community is whether runtime performance is always a proxy for energy efficiency. While a generalized intuition seems to suggest this is the case, this intuition does not align with the fact that energy is the accumulation of power over time; hence, time is only one of the factors in this accumulation. We focus on the other factor, power, and the impact that capping it has on the energy efficiency of running software. We conduct an extensive investigation comparing regular and power-capped executions of 9 benchmark programs obtained from The Computer Language Benchmarks Game, across 20 distinct programming languages. Our results show that power caps can be used to trade running time, which is degraded, for energy efficiency, which is improved, in all the programming languages and in all benchmarks considered. We observe overall energy savings of almost 14% across the 20 programming languages, with notable savings of 27% in Haskell. This saving, however, comes at the cost of an overall increase in the programs' execution time of 91% on average. We are also able to draw similar observations using language-specific benchmarks for programming languages of different paradigms and with different execution models. This is achieved by analyzing a wide range of benchmark programs from the nofib Benchmark Suite of Haskell Programs, the DaCapo Benchmark Suite for Java, and the Python Performance Benchmark Suite. We observe energy savings of approximately 8% to 21% across the test suites, with execution time increases ranging from 21% to 46%. Notably, the DaCapo suite exhibits the most significant values, with 20.84% energy savings and a 45.58% increase in execution time. Our results have the potential to drive significant energy savings in the context of computational tasks for which runtime is not critical, including Batch Processing Systems, Background Data Processing and Automated Backups.
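On Linux, one common way to impose and observe such power caps is the intel_rapl powercap sysfs interface, sketched below. The paths, units, and availability are platform-dependent assumptions (an Intel RAPL-capable CPU and, typically, root privileges), and this is not necessarily the exact mechanism used in the paper.

```python
from pathlib import Path

# Powercap/RAPL sysfs entry for CPU package 0; requires root and an
# Intel RAPL-capable system. Paths are platform-dependent assumptions.
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def read_power_limit_watts() -> float:
    """Current long-term package power limit (sysfs stores microwatts)."""
    return int((RAPL / "constraint_0_power_limit_uw").read_text()) / 1e6

def set_power_limit_watts(watts: float) -> None:
    """Cap the package power limit, e.g. before launching a benchmark."""
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

def read_energy_joules() -> float:
    """Cumulative package energy counter (sysfs stores microjoules)."""
    return int((RAPL / "energy_uj").read_text()) / 1e6

# Typical experiment shape: read the energy counter, run the benchmark
# under a cap, read it again, and compare against an uncapped run.
```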
2024
Authors
Silva, IOe; Jesus, SM; Ferreira, HM; Saleiro, P; Sousa, I; Bizarro, P; Soares, C;
Publication
CoRR
Abstract