2019
Authors
Raza, M; Faria, JP;
Publication
The 31st International Conference on Software Engineering and Knowledge Engineering, SEKE 2019, Hotel Tivoli, Lisbon, Portugal, July 10-12, 2019.
Abstract
ProcessPAIR is a novel method and tool for automating performance analysis in software development. Based on performance models structured by process experts and calibrated from the performance data of many developers, it automatically identifies and ranks potential performance problems and root causes of individual developers. However, the current calibration method is not fully automatic: in the case of performance indicators that affect other indicators in conflicting ways, the process expert has to manually calibrate the optimal value so that it balances those impacts. In this paper we propose a novel method to automate this step, taking advantage of training data sets. We demonstrate the feasibility of the method with an example related to the Code Review Rate indicator, which has conflicting impacts on Productivity and Quality.
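To make the calibration idea above concrete, the following is a minimal, illustrative sketch (not the authors' actual algorithm nor ProcessPAIR code) of how an optimal value for a conflicting indicator such as Code Review Rate could be derived from a training data set: the rate range is split into bins, and the bin whose samples best balance normalized Productivity and Quality (equal weights assumed here) is selected. All names and parameters are hypothetical.

```python
# Illustrative sketch only: pick the value of a conflicting indicator
# (e.g., Code Review Rate) that best balances its opposing effects on two
# outcome indicators (e.g., Productivity and Quality) over a training set.
from statistics import mean

def normalize(values):
    """Scale values to [0, 1] (0 = worst observed, 1 = best observed)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

def calibrate_optimal_value(samples, n_bins=10):
    """samples: (review_rate, productivity, quality) tuples from many developers.
    Returns the centre of the review-rate bin whose samples maximize the mean of
    the normalized productivity and quality scores (equal weights assumed)."""
    rates = [s[0] for s in samples]
    prod = normalize([s[1] for s in samples])
    qual = normalize([s[2] for s in samples])
    lo, hi = min(rates), max(rates)
    width = (hi - lo) / n_bins
    best_rate, best_score = None, float("-inf")
    for b in range(n_bins):
        lower, upper = lo + b * width, lo + (b + 1) * width
        idx = [i for i, r in enumerate(rates)
               if lower <= r and (r < upper or b == n_bins - 1)]
        if not idx:
            continue
        score = mean((prod[i] + qual[i]) / 2 for i in idx)
        if score > best_score:
            best_rate, best_score = (lower + upper) / 2, score
    return best_rate
```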
2019
Authors
Hierons, R; Núñez, M; Pretschner, A; Gargantini, A; Faria, JP; Wang, S;
Publication
Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2019
Abstract
2019
Authors
Lima, B; Faria, JP; Hierons, RM;
Publication
Quality of Information and Communications Technology - 12th International Conference, QUATIC 2019, Ciudad Real, Spain, September 11-13, 2019, Proceedings
Abstract
To ensure interoperability and the correct end-to-end behavior of heterogeneous distributed systems, it is important to conduct integration tests that verify the interactions with the environment and between the system components in key scenarios. The automation of such integration tests requires that test components are also distributed, with local testers deployed close to the system components, coordinated by a central tester. In such a test architecture, it is important to maximize the autonomy of the local testers in order to minimize the communication overhead and maximize the fault detection capability. A test scenario is called locally observable and locally controllable if, respectively, conformance errors can be detected locally and test inputs can be decided locally by the local testers, without the need to exchange coordination messages between the test components during test execution (i.e., without any communication overhead). For test scenarios specified by means of UML sequence diagrams that do not exhibit those properties, we present in this paper an approach with tool support to automatically find coordination messages that, added to the given scenario, make it locally controllable and locally observable. © Springer Nature Switzerland AG 2019.
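As a rough illustration of the local-controllability notion described above (a deliberately simplified model, not the analysis implemented by the authors' tool), the sketch below treats a scenario as a linear sequence of messages between components and flags the points where a hypothetical coordination message would be needed because the sender of the next message did not take part in the previous one.

```python
# Much simplified illustration of local controllability for a linear test scenario.
# Intuition: the local tester attached to the sender of each message can only decide
# on its own when to trigger it if that component already took part in the previous
# message; otherwise a coordination message (prev_receiver -> sender) would be needed.

def coordination_messages_needed(messages):
    """messages: ordered list of (sender, receiver) pairs exchanged between components.
    Returns hypothetical coordination messages (from, to) to add between consecutive steps."""
    needed = []
    for prev, curr in zip(messages, messages[1:]):
        prev_sender, prev_receiver = prev
        curr_sender, _ = curr
        if curr_sender not in (prev_sender, prev_receiver):
            # the local tester at curr_sender has no local evidence that prev happened,
            # so it cannot decide locally when to trigger curr
            needed.append((prev_receiver, curr_sender))
    return needed

scenario = [("A", "B"), ("B", "A"), ("C", "A")]
print(coordination_messages_needed(scenario))
# [('A', 'C')]: the tester at C must be told that the B -> A message was observed
```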
2019
Authors
Raza, M; Faria, JP; Salazar, R;
Publication
SOFTWARE QUALITY JOURNAL
Abstract
Collecting product and process measures in software development projects, particularly in education and training environments, is important as a basis for assessing current performance and opportunities for improvement. However, analyzing the collected data manually is challenging because of the expertise required, the lack of benchmarks for comparison, the amount of data to analyze, and the time required to do the analysis. ProcessPAIR is a novel tool for automated performance analysis and improvement recommendation; based on a performance model calibrated from the performance data of many developers, it automatically identifies and ranks potential performance problems and root causes of individual developers. In education and training environments, it increases students' autonomy and reduces instructors' effort in grading and feedback. In this article, we present the results of a controlled experiment involving 61 software engineering master students, half of whom used ProcessPAIR in a Personal Software Process (PSP) performance analysis assignment, while the other half used a traditional PSP support tool (Process Dashboard) to perform the same assignment. The results show significant benefits in terms of students' satisfaction (average score of 4.78 on a 1-5 scale for ProcessPAIR users, against 3.81 for Process Dashboard users), quality of the analysis outcomes (average grade of 88.1 on a 0-100 scale for ProcessPAIR users, against 82.5 for Process Dashboard users), and time required to do the analysis (average of 252 min for ProcessPAIR users, against 262 min for Process Dashboard users, but with much room for improvement).
2020
Authors
Dias, JP; Lima, B; Faria, JP; Restivo, A; Ferreira, HS;
Publication
Computational Science - ICCS 2020 - 20th International Conference, Amsterdam, The Netherlands, June 3-5, 2020, Proceedings, Part V
Abstract
Internet-of-Things systems are built on highly heterogeneous architectures, where different protocols, application stacks, integration services, and orchestration engines co-exist. As they permeate our everyday lives, more of them become safety-critical, increasing the need for making them testable and fault-tolerant, with minimal human intervention. In this paper, we present a set of self-healing extensions for Node-RED, a popular visual programming solution for IoT systems. These extensions add runtime verification mechanisms and self-healing capabilities via new reusable nodes, some of them leveraging meta-programming techniques. With them, we were able to implement self-modification of flows, empowering the system with self-monitoring and self-testing capabilities that search for malfunctions and take subsequent actions towards health maintenance and recovery. We tested these mechanisms on a set of scenarios using a live physical setup that we called SmartLab. Our results indicate that this approach can improve a system's reliability and dependability, both by detecting failing conditions and by reacting to them through self-modifying flows or by triggering countermeasures. © Springer Nature Switzerland AG 2020.
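The self-monitoring and self-healing behaviour described above can be illustrated, in a language-agnostic way, by a simple watchdog loop. This is only a sketch of the general pattern, with hypothetical probe and recover callbacks; it is not the Node-RED (JavaScript) nodes presented in the paper.

```python
# Generic self-monitoring / self-healing watchdog pattern (illustrative only).
import time

def self_healing_loop(probe, recover, period_s=5.0, max_failures=3):
    """Run `probe()` periodically (should return True while the monitored flow or
    device is healthy); after `max_failures` consecutive failures, call `recover()`
    and reset the failure counter."""
    failures = 0
    while True:
        try:
            healthy = probe()
        except Exception:
            healthy = False
        failures = 0 if healthy else failures + 1
        if failures >= max_failures:
            recover()  # e.g. redeploy a flow, restart a node, or switch to a backup device
            failures = 0
        time.sleep(period_s)
```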
2020
Authors
Lima, B; Faria, JP; Hierons, R;
Publication
IEEE ACCESS
Abstract
More and more end-to-end digital services depend on the proper interoperation of multiple products, forming a distributed system, often subject to timing requirements. To ensure interoperability and the timely behavior of such systems, it is important to conduct integration tests that verify the interactions with the environment and between the system components in key scenarios. The automation of such integration tests requires that test components are also distributed, with local testers deployed close to the system components, coordinated by a central tester. Test coordination in such a test architecture is a major challenge. To address it, in this article we propose an approach based on the pre-processing of the test scenarios. We first analyze each test scenario to check whether conformance errors can be detected locally (local observability) and test inputs can be decided locally (local controllability) by the local testers, without the need to exchange coordination messages between the test components during test execution. If such properties do not hold, we next try to determine a minimum set of coordination messages or time constraints to be attached to the given test scenario to enforce those properties and effectively solve the test coordination problem with minimal overhead. The analysis and enforcement procedures were implemented in the DCO Analyzer tool for test scenarios described by means of UML sequence diagrams. Since many local observability and controllability problems may be caused by design flaws or incomplete specifications, and multiple ways may exist to enforce local observability and controllability, the tool was designed as a static analysis assistant to be used before test execution. DCO Analyzer was able to correctly identify local observability and controllability problems in real-world scenarios and help the users fix the detected problems.
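Complementing the controllability sketch given for the QUATIC 2019 entry above, the following simplified illustration targets the local-observability side (again, not the exact analysis implemented in DCO Analyzer): an ordering error between two consecutive messages can only be checked locally if some component, and hence its local tester, takes part in both messages; otherwise a coordination message or a time constraint would be needed.

```python
# Simplified illustration of local observability for a linear test scenario.
def locally_observable_pairs(messages):
    """messages: ordered list of (sender, receiver) pairs.
    Yields (index, True/False) telling whether the relative order of message i
    and message i+1 can be checked by a single local tester."""
    for i, (prev, curr) in enumerate(zip(messages, messages[1:])):
        yield i, bool(set(prev) & set(curr))

scenario = [("A", "B"), ("C", "D"), ("D", "A")]
for i, ok in locally_observable_pairs(scenario):
    print(f"order of messages {i} and {i + 1} locally checkable: {ok}")
# order of messages 0 and 1 locally checkable: False -> needs coordination or a time constraint
# order of messages 1 and 2 locally checkable: True  -> D's local tester observes both
```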