Overview

Centre for Advanced Computing Systems

CRACS's mission is to pursue scientific excellence in the areas of programming languages, parallel and distributed computing, security and privacy, information mining, and web systems, grounded in the development of scalable software systems for challenging, multidisciplinary applications.

Our research environment is enriched by young, talented researchers who, together with senior researchers, form the necessary critical mass and provide the institution with the scientific competences to fulfil its mission.

Latest News

INESC TEC has 5 FCT exploratory projects approved in 4 R&D areas

Telecommunications and multimedia, applied photonics, reliable software, and advanced computing systems: these are the four areas in which INESC TEC researchers will work under the five projects approved through the Exploratory Projects Call of the Fundação para a Ciência e a Tecnologia (FCT).

02 October 2024

Computer Science and Engineering

Security and privacy discussed at international event held in Portugal for the first time

Cryptography, malicious software, data privacy, web and mobile security, and secure access control and authentication: these were some of the topics discussed at the 14th edition of the ACM Conference on Data and Application Security and Privacy. Organised by INESC TEC and the Faculdade de Ciências da Universidade do Porto (FCUP), this was the first time the conference took place outside the United States of America.

27 June 2024

Privacy in 6G networks may be a challenge: INESC TEC joins European project focused on “protection”

Future 6G networks must make data privacy one of their priorities. INESC TEC is part of PRIVATEER, a European project that aims to deliver robust, decentralised, Artificial Intelligence-based security analytics for 6G networks. “Privacy” is the keyword.

13 June 2023

INESC TEC researchers awarded for research work aimed at protecting privacy on mobile phones

A group of INESC TEC researchers was distinguished for research work on permission management in mobile devices. The team developed a set of techniques to automate the response to permission requests from smartphone applications with 90% reliability. This work received the best paper award at the ACM CODASPY conference, held in the United States of America.

08 July 2022

INESC TEC joins project that will make autonomous vehicles safer

Within the scope of the THEIA - Automated Perception Driving project, a partnership between the Universidade do Porto and Bosch that aims to make autonomous vehicles safer through better perception of the surrounding environment, INESC TEC will contribute to the development of perception algorithms, computing, and architectures based on artificial intelligence.

07 June 2022

Team
Publications

CRACS Publications

Read all publications

2024

Topic Extraction: BERTopic's Insight into the 117th Congress's Twitterverse

Authors
Mendonça, M; Figueira, A;

Publication
INFORMATICS-BASEL

Abstract
As social media (SM) becomes increasingly prevalent, its impact on society is expected to grow accordingly. While SM has brought positive transformations, it has also amplified pre-existing issues such as misinformation, echo chambers, manipulation, and propaganda. A thorough comprehension of this impact, aided by state-of-the-art analytical tools and by an awareness of societal biases and complexities, enables us to anticipate and mitigate the potential negative effects. One such tool is BERTopic, a novel deep-learning algorithm developed for Topic Mining, which has been shown to offer significant advantages over traditional methods like Latent Dirichlet Allocation (LDA), particularly in terms of its high modularity, which allows for extensive personalization at each stage of the topic modeling process. In this study, we hypothesize that BERTopic, when optimized for Twitter data, can provide a more coherent and stable topic modeling. We began by conducting a review of the literature on topic-mining approaches for short-text data. Using this knowledge, we explored the potential for optimizing BERTopic and analyzed its effectiveness. Our focus was on Twitter data spanning the two years of the 117th US Congress. We evaluated BERTopic's performance using coherence, perplexity, diversity, and stability scores, finding significant improvements over traditional methods and the default parameters for this tool. We discovered that improvements are possible in BERTopic's coherence and stability. We also identified the major topics of this Congress, which include abortion, student debt, and Judge Ketanji Brown Jackson. Additionally, we describe a simple application we developed for a better visualization of Congress topics.
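
As a rough illustration of the workflow described above, the sketch below fits BERTopic on a public collection of short texts and lists the most frequent topics. It assumes the bertopic and umap-learn packages are installed; the 20 Newsgroups corpus stands in for the Congress tweets (which are not included here), and the parameter values are illustrative, not the tuned configuration from the paper.

```python
# Minimal BERTopic sketch: fit on short documents and inspect the main topics.
# Parameters are illustrative, not the optimized configuration from the study.
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from umap import UMAP

# Stand-in corpus; the paper uses tweets from the 117th US Congress.
docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

# BERTopic is modular: the dimensionality reduction (UMAP), clustering and
# vectorization stages can each be swapped or tuned independently.
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0,
                  metric="cosine", random_state=42)
topic_model = BERTopic(umap_model=umap_model, min_topic_size=30)

topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head(10))  # most frequent topics and keywords
```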

2024

Comparing Semantic Graph Representations of Source Code: The Case of Automatic Feedback on Programming Assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
COMPUTER SCIENCE AND INFORMATION SYSTEMS

Abstract
Static source code analysis techniques are gaining relevance in automated assessment of programming assignments as they can provide less rigorous evaluation and more comprehensive and formative feedback. These techniques focus on source code aspects rather than requiring effective code execution. To this end, syntactic and semantic information encoded in textual data is typically represented internally as graphs, after parsing and other preprocessing stages. Static automated assessment techniques, therefore, draw inferences from intermediate representations to determine the correctness of a solution and derive feedback. Consequently, achieving the most effective semantic graph representation of source code for the specific task is critical, impacting both techniques' accuracy, outcome, and execution time. This paper aims to provide a thorough comparison of the most widespread semantic graph representations for the automated assessment of programming assignments, including usage examples, facets, and costs for each of these representations. A benchmark has been conducted to assess their cost using the Abstract Syntax Tree (AST) as a baseline. The results demonstrate that the Code Property Graph (CPG) is the most feature-rich representation, but also the largest and most space-consuming (about 33% more than AST).
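
For a concrete sense of the baseline representation used in the comparison above, the sketch below builds the Abstract Syntax Tree of a small function with Python's standard ast module and counts its nodes and parent-child edges as a crude size measure. Richer representations such as the Code Property Graph generally require dedicated tooling and are not shown here.

```python
# Sketch: build an AST (the baseline representation in the comparison) and
# measure its size as node and parent-child edge counts.
import ast

source = """
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""

tree = ast.parse(source)
nodes = list(ast.walk(tree))
edges = sum(len(list(ast.iter_child_nodes(node))) for node in nodes)

print(f"AST nodes: {len(nodes)}, parent-child edges: {edges}")
print(ast.dump(tree, indent=2)[:400])  # a glimpse of the tree structure
```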

2024

GANs in the Panorama of Synthetic Data Generation Methods

Authors
Vaz, B; Figueira, Á;

Publication
ACM Transactions on Multimedia Computing, Communications, and Applications

Abstract
This paper focuses on the creation and evaluation of synthetic data to address the challenges of imbalanced datasets in machine learning applications (ML), using fake news detection as a case study. We conducted a thorough literature review on generative adversarial networks (GANs) for tabular data, synthetic data generation methods, and synthetic data quality assessment. By augmenting a public news dataset with synthetic data generated by different GAN architectures, we demonstrate the potential of synthetic data to improve ML models’ performance in fake news detection. Our results show a significant improvement in classification performance, especially in the underrepresented class. We also modify and extend a data usage approach to evaluate the quality of synthetic data and investigate the relationship between synthetic data quality and data augmentation performance in classification tasks. We found a positive correlation between synthetic data quality and performance in the underrepresented class, highlighting the importance of high-quality synthetic data for effective data augmentation.
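
The augmentation step described above can be sketched with the open-source ctgan package as a stand-in for the GAN architectures compared in the paper; the dataset path, column names, and epoch count below are hypothetical placeholders, not the study's actual pipeline.

```python
# Hedged sketch: oversample the minority class of an imbalanced tabular dataset
# with a tabular GAN. File name and column names are hypothetical placeholders.
import pandas as pd
from ctgan import CTGAN

data = pd.read_csv("news_features.csv")        # hypothetical labelled news dataset
minority = data[data["label"] == "fake"]       # underrepresented class

gan = CTGAN(epochs=300)
gan.fit(minority, discrete_columns=["label"])  # train the GAN on minority rows only

n_needed = (data["label"] == "real").sum() - len(minority)
synthetic = gan.sample(n_needed)               # synthetic minority-class rows

augmented = pd.concat([data, synthetic], ignore_index=True)
print(augmented["label"].value_counts())       # classes are now balanced
```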

2024

Clustering source code from automated assessment of programming assignments

Authors
Paiva, JC; Leal, JP; Figueira, A;

Publication
INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS

Abstract
Clustering of source code is a technique that can help improve feedback in automated program assessment. Grouping code submissions that contain similar mistakes can, for instance, facilitate the identification of students' difficulties to provide targeted feedback. Moreover, solutions with similar functionality but possibly different coding styles or progress levels can allow personalized feedback to students stuck at some point based on a more developed source code or even detect potential cases of plagiarism. However, existing clustering approaches for source code are mostly inadequate for automated feedback generation or assessment systems in programming education. They either give too much emphasis to syntactical program features, rely on expensive computations over pairs of programs, or require previously collected data. This paper introduces an online approach and implemented tool, AsanasCluster, to cluster source code submissions to programming assignments. The proposed approach relies on program attributes extracted from semantic graph representations of source code, including control and data flow features. The obtained feature vector values are fed into an incremental k-means model. Such a model aims to determine the closest cluster of solutions, as they enter the system, timely, considering clustering is an intermediate step for feedback generation in automated assessment. We have conducted a twofold evaluation of the tool to assess (1) its runtime performance and (2) its precision in separating different algorithmic strategies. To this end, we have applied our clustering approach on a public dataset of real submissions from undergraduate students to programming assignments, measuring the runtimes for the distinct tasks involved: building a model, identifying the closest cluster to a new observation, and recalculating partitions. As for the precision, we partition two groups of programs collected from GitHub. One group contains implementations of two searching algorithms, while the other has implementations of several sorting algorithms. AsanasCluster matches and, in some cases, improves the state-of-the-art clustering tools in terms of runtime performance and precision in identifying different algorithmic strategies. It does so without requiring the execution of the code. Moreover, it is able to start the clustering process from a dataset with only two submissions and continuously partition the observations as they enter the system.
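
The incremental clustering step can be illustrated with scikit-learn's MiniBatchKMeans, which supports partial_fit for updating a k-means model as new observations arrive. The random feature vectors below stand in for the graph-derived attributes (control- and data-flow features) mentioned above; this is a hedged sketch, not the AsanasCluster implementation itself.

```python
# Sketch of incremental k-means: bootstrap on an initial batch, then assign each
# new submission to its closest cluster and fold it into the model. Random
# vectors stand in for graph-derived program features.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
model = MiniBatchKMeans(n_clusters=3, random_state=0)

initial_batch = rng.random((10, 8))   # e.g. 10 early submissions, 8 features each
model.partial_fit(initial_batch)

for i in range(5):                    # new submissions entering the system
    features = rng.random((1, 8))
    cluster = model.predict(features)[0]
    print(f"submission {i}: closest cluster = {cluster}")
    model.partial_fit(features)       # update the model incrementally
```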

2024

Multilayer quantile graph for multivariate time series analysis and dimensionality reduction

Authors
Silva, VF; Silva, ME; Ribeiro, P; Silva, F;

Publication
INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS

Abstract
In recent years, there has been a surge in the prevalence of high- and multidimensional temporal data across various scientific disciplines. These datasets are characterized by their vast size and challenging potential for analysis. Such data typically exhibit serial and cross-dependency and possess high dimensionality, thereby introducing additional complexities to conventional time series analysis methods. To address these challenges, a recent and complementary approach has emerged, known as network-based analysis methods for multivariate time series. In univariate settings, quantile graphs have been employed to capture temporal transition properties and reduce data dimensionality by mapping observations to a smaller set of sample quantiles. To confront the increasingly prominent issue of high dimensionality, we propose an extension of quantile graphs into a multivariate variant, which we term Multilayer Quantile Graphs. In this innovative mapping, each time series is transformed into a quantile graph, and inter-layer connections are established to link contemporaneous quantiles of pairwise series. This enables the analysis of dynamic transitions across multiple dimensions. In this study, we demonstrate the effectiveness of this new mapping using synthetic and benchmark multivariate time series datasets. We delve into the resulting network's topological structures, extract network features, and employ these features for original dataset analysis. Furthermore, we compare our results with a recent method from the literature. The resulting multilayer network offers a significant reduction in the dimensionality of the original data while capturing serial and cross-dimensional transitions. This approach facilitates the characterization and analysis of large multivariate time series datasets through network analysis techniques.
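
The univariate building block of this mapping can be sketched directly with NumPy: observations are binned into sample quantiles, and transitions between consecutive observations define a weighted directed graph, represented below as a transition-count matrix. The inter-layer links between contemporaneous quantiles of different series are omitted; this is a hedged illustration, not the authors' implementation.

```python
# Sketch of a univariate quantile graph: bin a series into q sample quantiles and
# count transitions between consecutive observations (weighted directed edges).
# The multilayer extension across several series is not shown here.
import numpy as np

def quantile_graph(series: np.ndarray, q: int = 4) -> np.ndarray:
    """Return the q x q transition-count matrix of the series' quantile graph."""
    edges = np.quantile(series, np.linspace(0, 1, q + 1)[1:-1])  # interior quantiles
    labels = np.digitize(series, edges)                          # quantile index 0..q-1
    adjacency = np.zeros((q, q), dtype=int)
    for src, dst in zip(labels[:-1], labels[1:]):
        adjacency[src, dst] += 1
    return adjacency

rng = np.random.default_rng(1)
series = rng.standard_normal(500).cumsum()   # synthetic random-walk series
print(quantile_graph(series, q=4))
```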

Facts & Figures

16 Senior Researchers (2016)

7 Papers in indexed conferences (2020)

1 Book chapter (2020)