2024
Authors
Guerino, LR; Kuroishi, PH; Paiva, ACR; Vincenzi, AMR;
Publication
23RD BRAZILIAN SYMPOSIUM ON SOFTWARE QUALITY, SBQS 2024
Abstract
Context: Mutation testing is a rigorous approach for assessing the quality of test suites by injecting faults (i.e., mutants) into the software under test. CosmicRay and MutPy are examples of mutation testing tools for Python programs. Problem: Although several Python mutation testing tools exist, comparative analyses evaluating their effectiveness in different usage scenarios are lacking. Furthermore, the evolution of these tools makes continuous evaluation of their functionalities and characteristics necessary. Method: In this work, we evaluate, statically and dynamically, four Python mutation testing tools: CosmicRay, MutPy, MutMut, and Mutatest. In the static evaluation, we introduce a comparison framework, adapted from one previously applied to Java tools, and collect information from tool documentation and developer surveys. For the dynamic evaluation, we use tests built on those produced by Pynguin and improved through the application of Large Language Models (LLMs) and manual analysis. The adequate test suites are then cross-tested among the tools to evaluate how effectively each tool's test suite kills the mutants generated by the others. Results: Our findings reveal that CosmicRay offers superior functionalities and customization options for mutant generation compared to its counterparts. Although CosmicRay's performance was slightly lower than MutPy's in the dynamic tests, its recent updates and active community support highlight its potential for future enhancements. Cross-examination of the test suites further shows that mutation scores varied narrowly among tools, with a slight edge for MutPy as the most effective mutant fault model.
2024
Authors
Tramontana, P; Marín, B; Paiva, ACR; Mendes, A; Vos, TEJ; Amalfitano, D; Cammaerts, F; Snoeck, M; Fasolino, AR;
Publication
Abstract
2024
Authors
Moas, PM; Lopes, CT;
Publication
ACM COMPUTING SURVEYS
Abstract
Wikipedia is the world's largest online encyclopedia, but maintaining article quality through collaboration is challenging. Wikipedia designed a quality scale, but with such a manual assessment process, many articles remain unassessed. We review existing methods for automatically measuring the quality of Wikipedia articles, identifying and comparing machine learning algorithms, article features, quality metrics, and the datasets used, examining 149 distinct studies and exploring commonalities and gaps among them. The literature is extensive, and the approaches follow past technological trends. However, machine learning is still not widely used by Wikipedia, and we hope that our analysis helps future researchers change that reality.
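Many of the surveyed models start from simple structural features extracted from an article's wikitext. A minimal sketch of that feature-extraction step (the feature names and patterns here are illustrative assumptions, not drawn from any specific study in the survey):

```python
import re

def article_features(wikitext: str) -> dict:
    """Compute a few structural quality signals from raw wikitext."""
    return {
        # rough word count of the article body
        "n_words": len(wikitext.split()),
        # number of <ref> citation tags
        "n_refs": len(re.findall(r"<ref[ >]", wikitext)),
        # number of section headings like '== History =='
        "n_sections": len(re.findall(r"^==+[^=].*?==+\s*$", wikitext, re.MULTILINE)),
    }

sample = "Intro text.<ref>src</ref>\n== History ==\nMore text.<ref>src2</ref>\n"
feats = article_features(sample)
assert feats["n_refs"] == 2
assert feats["n_sections"] == 1
```

Feature vectors like this one would then be fed to a classifier trained against the labels of Wikipedia's quality scale.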
2024
Authors
Pereira, SC; Mendonca, AM; Campilho, A; Sousa, P; Lopes, CT;
Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE
Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image datasets. In this article, we summarize the literature on this topic from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those combining or comparing two or more of these approaches. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
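The first of the four study types, symbolic (rule-based) NLP, can be illustrated with a minimal sketch: regular-expression patterns detect finding mentions in a report and a simple negation rule decides whether each finding is present, negated, or unmentioned. The label names, patterns, and encoding below are assumptions for the example, not taken from any system covered in the survey.

```python
import re

# Illustrative negation cues and finding patterns (assumed, not exhaustive)
NEGATION = re.compile(r"\bno\b|\bwithout\b|\bnegative for\b", re.IGNORECASE)
FINDINGS = {
    "pneumonia": re.compile(r"\bpneumonia\b", re.IGNORECASE),
    "effusion": re.compile(r"\bpleural effusion\b", re.IGNORECASE),
}

def extract_labels(report: str) -> dict:
    """Map each finding to 1 (present), 0 (negated), or -1 (not mentioned)."""
    labels = {}
    sentences = re.split(r"(?<=[.!?])\s+", report)
    for name, pattern in FINDINGS.items():
        labels[name] = -1
        for sentence in sentences:
            if pattern.search(sentence):
                labels[name] = 0 if NEGATION.search(sentence) else 1
    return labels

report = "Findings consistent with pneumonia. No pleural effusion."
labels = extract_labels(report)
assert labels == {"pneumonia": 1, "effusion": 0}
```

Real systems refine this idea with medical lexicons, negation-scope algorithms, and uncertainty handling, while statistical and neural approaches learn the mapping from annotated reports instead of hand-written rules.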
2024
Authors
Lopes, CT; Henriques, M;
Publication
PROCEEDINGS OF THE 2024 CONFERENCE ON HUMAN INFORMATION INTERACTION AND RETRIEVAL, CHIIR 2024
Abstract
More and more people are relying on the Web to find health information. Challenges faced by individuals with low health literacy in the real world likely persist in the virtual realm. To assist these users, our first step is to identify them. This study aims to uncover disparities in the information-seeking behavior of users with varying levels of health literacy. We utilized data gathered from a prior user experiment. Our approach involves a classification scheme encompassing events during web search sessions, spanning the browser, search engine, and web pages. Employing this scheme, we logged interactions from video recordings in the user study and subjected the event logs to descriptive and inferential analyses. Our data analysis unveils distinctive patterns within the low health literacy group. They exhibit a higher frequency of query reformulations with entirely new terms, engage in more left clicks, use the browser's back functionality more frequently, and invest more time in interactions, including increased scrolling on results pages. Conversely, the high health literacy group demonstrates a greater propensity to click on universal results, extract text from URLs more often, and make more clicks with the middle mouse button. These findings offer valuable insights for inferring users' health literacy in a non-intrusive manner. The automatic inference of health literacy can pave the way for personalized services, enhancing accessibility to information and education for individuals with low health literacy, among other benefits.
2024
Authors
Koch, I; Ribero, C; Poveda-Villalon, M; Rico, M; Lopes, CT;
Publication
LINKING THEORY AND PRACTICE OF DIGITAL LIBRARIES, PT I, TPDL 2024
Abstract
Various sectors within the heritage domain have developed linked data models to describe their cultural artefacts comprehensively. Within the archival domain, ArchOnto, a data model rooted in CIDOC CRM, uses linked data to open archival information to new uses. This paper investigates the potential to use information in archival records in a larger context. It aims to leverage classes and properties sourced from repositories such as Wikidata and DBpedia, which are deemed informal due to their crowd-sourced nature and the possibility of inconsistencies or imprecision in the data, yet are rich in content. The anticipated outcome is a more comprehensive and expressive archival description, fostering enhanced understanding and assimilation of archival information among domain specialists and lay users. To achieve this, we first analyse existing archival records currently described under the ISAD(G) standard to discern the typologies of entities involved. Subsequently, we map these entities within the ArchOnto ontology and establish correspondences with alternative models. We observed that entities associated with people, places, and events benefited the most from integrating properties sourced from Wikidata and DBpedia. This integration enhanced their comprehensibility and enriched them at the semantic level.