2021
Authors
Paiva, ACR; Cavalli, AR; Martins, PV; Castillo, RP;
Publication
QUATIC
Abstract
2021
Authors
Paiva, ACR; Cavalli, AR; Martins, PV; Pérez Castillo, R;
Publication
Communications in Computer and Information Science
Abstract
2022
Authors
Marín, B; Vos, TEJ; Paiva, ACR; Fasolino, AR; Snoeck, M;
Publication
Joint Proceedings of RCIS 2022 Workshops and Research Projects Track co-located with the 16th International Conference on Research Challenges in Information Science (RCIS 2022), Barcelona, Spain, May 17-20, 2022.
Abstract
Testing software is very important, but it is often not done well, resulting in problematic and erroneous software applications. The cause lies in a skills mismatch between what is needed in industry, the learning needs of students, and the way testing is currently taught at higher and vocational education institutes. The goal of this project is to identify and design seamless teaching materials for testing that are aligned with industry and learning needs. To represent the entire socio-economic environment that will benefit from the results, the project consortium is composed of a diverse set of partners ranging from universities to small enterprises. The project starts with research on sensemaking and cognitive models when doing and learning testing. Moreover, a study will be conducted to identify the needs of industry for training and knowledge transfer processes for testing. Based on the outcomes of this research and the study, we will design and develop capsules for teaching software testing, including instructional materials that take into account the cognitive models of students and the needs of industry. Finally, we will validate the teaching capsules developed during the project. © 2021 The Authors.
2022
Authors
Ferreira, AMS; da Silva, AR; Paiva, ACR;
Publication
ENASE: PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON EVALUATION OF NOVEL APPROACHES TO SOFTWARE ENGINEERING
Abstract
Nowadays, more and more organizations adopt agile methodologies to guarantee short and frequent delivery times. A plethora of novel approaches and concepts regarding requirements engineering in this context is emerging. User stories are usually informally described as general explanations of software features, written from the end-user's perspective, while acceptance criteria are high-level conditions that enable their verification. This paper focuses on the art of writing user stories and acceptance criteria, but also on their relationships with other related concepts, such as quality requirements. To derive guidelines and linguistic patterns that facilitate the writing of requirements specifications, a systematic literature review was conducted to provide a cohesive and comprehensive analysis of these concepts. Despite considerable research on the subject, there is no formalized model or systematic approach to assist this writing. We provide a coherent analysis of these concepts and related linguistic patterns, supported by a running example of specifications built on top of ITLingo RSL, a publicly available tool to enforce the rigorous writing of specification artefacts. We consider that adopting the guidelines and patterns from the present discussion contributes to writing better and more consistent requirements.
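For illustration only (a generic example, not taken from the paper and not written in ITLingo RSL syntax), a user story and its acceptance criteria are commonly phrased along these lines:

User story: As a registered customer, I want to reset my password by e-mail, so that I can regain access to my account.
Acceptance criteria:
- Given a registered e-mail address, when the customer requests a password reset, then a single-use reset link is sent to that address.
- Given an unregistered e-mail address, when a password reset is requested, then no link is sent and no account information is disclosed.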
2022
Authors
Perez Castillo, R; Paiva, ACR; Cavalli, AR;
Publication
SOFTWARE QUALITY JOURNAL
Abstract
2022
Authors
Amalfitano, D; Paiva, ACR; Inquel, A; Pinto, L; Fasolino, AR; Just, R;
Publication
COMMUNICATIONS OF THE ACM
Abstract
Over a decade ago, Jeff Offutt noted, "The field of mutation analysis has been growing, both in the number of published papers and the number of active researchers." (33) This trend has since continued, as confirmed by a survey of recent literature. (36) Mutation analysis is "the use of well-defined rules defined on syntactic descriptions to make systematic changes to the syntax or to objects developed from the syntax." (33) It has been successfully used in research for assessing test efficacy and as a building block for testing and debugging approaches. It systematically generates syntactic variations, called mutants, of an original program based on a set of mutation operators, which are well-defined program transformation rules. The most common use case of mutation analysis is to assess test efficacy. In this use case, mutants represent faulty versions of the original program, and the ratio of detected mutants quantifies a test suite's efficacy. Empirical evidence supports the use of systematically
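The mechanics described above can be made concrete with a small sketch. The following Python example is illustrative only: the program under test, the mutation operators, and the test assertions are all hypothetical and are not the tooling discussed in the article.

# Minimal sketch of mutation analysis (illustrative; all names are hypothetical).

# Original program under test, kept as source text so it can be mutated.
ORIGINAL = """
def max_of_two(a, b):
    if a > b:
        return a
    return b
"""

# Mutation operators: well-defined, purely syntactic transformation rules.
MUTATION_OPERATORS = [
    ("relational: > to >=", lambda src: src.replace("a > b", "a >= b")),
    ("relational: > to <",  lambda src: src.replace("a > b", "a < b")),
    ("statement: swap first return", lambda src: src.replace("return a", "return b", 1)),
]

def run_tests(src):
    """Compile the given source and return True if all test assertions pass."""
    namespace = {}
    exec(src, namespace)          # load the (possibly mutated) program
    f = namespace["max_of_two"]
    try:
        assert f(2, 1) == 2
        assert f(1, 2) == 2
        return True               # all assertions passed: the mutant survives
    except AssertionError:
        return False              # a test failed: the mutant is killed (detected)

killed = 0
for name, operator in MUTATION_OPERATORS:
    mutant = operator(ORIGINAL)
    if not run_tests(mutant):
        killed += 1
        print(f"killed   : {name}")
    else:
        print(f"survived : {name}")

# Mutation score: the ratio of detected (killed) mutants quantifies test efficacy.
print(f"mutation score = {killed}/{len(MUTATION_OPERATORS)}")

Running this sketch kills two of the three mutants; the surviving one hints that the two assertions do not exercise the boundary case where both arguments are equal.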