
Publications by HumanISE

2017

A Survey on Testing Distributed and Heterogeneous Systems: The State of the Practice

Authors
Lima, B; Faria, JP;

Publication
SOFTWARE TECHNOLOGIES

Abstract
Distributed and heterogeneous systems (DHS), running over interconnected mobile and cloud-based platforms, are used in a growing number of domains for provisioning end-to-end services to users. Testing DHS is particularly important and challenging, with little support being provided by current tools. In order to assess the current state of the practice regarding the testing of DHS and identify opportunities and priorities for research and innovation initiatives, we conducted an exploratory survey answered by 147 software testing professionals who attended industry-oriented software testing conferences. The survey allowed us to assess the relevance of DHS in software testing practice, the most important features to be tested in DHS, the current status of test automation and tool sourcing for testing DHS, and the most desired features in test automation solutions for DHS. Some follow-up interviews allowed us to further investigate drivers and barriers for DHS test automation. We expect that the results presented in the paper are of interest to researchers, tool vendors and service providers in this field.

2017

Towards Decentralized Conformance Checking in Model-Based Testing of Distributed Systems

Authors
Lima, BMC; Faria, JCP;

Publication
Proceedings - 10th IEEE International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2017

Abstract
In a growing number of domains, the provisioning of end-to-end services to the users depends on the proper interoperation of multiple products, forming a new distributed system. To ensure interoperability and the integrity of this new distributed system, it is important to conduct integration tests that verify not only the interactions with the environment but also the interactions between the system components. Integration test scenarios for that purpose may be conveniently specified by means of UML sequence diagrams, possibly allowing multiple execution paths. The automation of such integration tests requires that test components are also distributed, with a local tester deployed close to each system component, and a central tester coordinating the local testers. In such a test architecture, it is important to minimize the communication overhead during test execution. Hence, in this paper we investigate conditions upon which conformance errors can be detected locally (local observability) and test inputs can be decided locally (local controllability) by the local testers, without the need for exchanging coordination messages between the test components during test execution. The conditions are specified in a formal specification language that allows executing and validating the specification. Examples of test scenarios are also presented, illustrating local observability and controllability problems associated with optional messages without corresponding acknowledgment messages, races and non-local choices. © 2017 IEEE.
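The local checks described in this abstract can be illustrated with a minimal sketch (hypothetical event representation and component names, not the paper's formal specification or algorithm): each local tester compares only the events observed at its own component against the projection of the test scenario onto that component, so locally observable conformance errors require no coordination messages during test execution.

    # Minimal sketch (hypothetical API, not the authors' implementation):
    # a local tester checks only the events observed at its own component
    # against the expected projection of the test scenario, so conformance
    # errors that are locally observable need no coordination messages.

    def project(scenario, component):
        """Keep only the send/receive events that involve this component."""
        return [ev for ev in scenario if component in (ev["from"], ev["to"])]

    def local_verdict(observed, scenario, component):
        """Compare the locally observed trace with the expected projection."""
        expected = project(scenario, component)
        for seen, exp in zip(observed, expected):
            if seen != exp:
                return "fail"          # locally observable conformance error
        if len(observed) < len(expected):
            return "inconclusive"      # remaining events were not (yet) observed
        return "pass"

    scenario = [
        {"msg": "request", "from": "client", "to": "serviceA"},
        {"msg": "reply",   "from": "serviceA", "to": "client"},
    ]
    print(local_verdict(
        [{"msg": "request", "from": "client", "to": "serviceA"}],
        scenario, "serviceA"))          # -> "inconclusive"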

2017

Helping Software Engineering Students Analyzing their Performance Data: Tool Support in an Educational Environment

Authors
Raza, M; Faria, JP; Salazar, R;

Publication
PROCEEDINGS OF THE 2017 IEEE/ACM 39TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING COMPANION (ICSE-C 2017)

Abstract
ProcessPAIR is a novel tool for automating the performance analysis of software developers. Based on a performance model calibrated from the performance data of many developers, it automatically identifies and ranks potential performance problems and root causes of individual developers. We present the results of a controlled experiment involving 61 software engineering master students, half of whom used ProcessPAIR in a performance analysis assignment. The results show significant benefits in terms of students' satisfaction (average score of 4.78 out of 5 for ProcessPAIR users, against 3.81 for other users), quality of the analysis outcomes (average grades achieved of 88.1 out of 100 for ProcessPAIR users, against 82.5 for other users), and time required to do the analysis (average of 252 min for ProcessPAIR users, against 262 min for other users, but with much room for improvement).
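The kind of automated analysis this abstract describes can be sketched as follows, with hypothetical indicator names and calibration values (not ProcessPAIR's actual performance model): a developer's performance indicators are compared against thresholds calibrated from many developers' data, and potential problems are ranked by severity.

    # Minimal sketch (hypothetical indicators and thresholds, not ProcessPAIR's
    # actual model): rank a developer's performance indicators by how far they
    # fall below percentile thresholds calibrated from many developers' data.

    calibrated = {                      # assumed calibration: (median, 1st-quartile)
        "productivity_loc_per_hour": (25.0, 15.0),
        "defect_density_per_kloc":   (20.0, 35.0),   # lower is better, so reversed
    }

    def severity(value, median, quartile):
        """0 = at or better than the median, 1 = at or worse than the quartile."""
        if median == quartile:
            return 0.0
        s = (median - value) / (median - quartile)
        return max(0.0, min(1.0, s))

    def rank_problems(developer):
        scored = [(name, severity(developer[name], *calibrated[name]))
                  for name in calibrated if name in developer]
        return sorted(scored, key=lambda item: item[1], reverse=True)

    print(rank_problems({"productivity_loc_per_hour": 12.0,
                         "defect_density_per_kloc": 28.0}))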

2017

WebProcessPAIR: recommendation system for software process improvement

Authors
Raza, M; Faria, JP; Amaro, L; Henriques, PC;

Publication
Proceedings of the 2017 International Conference on Software and System Process, Paris, France, ICSSP 2017, July 5-7, 2017

Abstract
ProcessPAIR is a novel tool for helping software developers analyze their personal performance. Based on a performance model calibrated from the anonymized performance data of many developers and the performance data submitted by an individual developer, it automatically identifies and ranks potential performance problems and their root causes for that developer. In this work we present WebProcessPAIR, which extends ProcessPAIR with the ability to recommend improvement actions to address the identified root causes, based on a crowdsourcing approach. A case study illustrates the usage of WebProcessPAIR. © 2017 Association for Computing Machinery.

2017

Conformance Checking in Integration Testing of Time-constrained Distributed Systems based on UML Sequence Diagrams

Authors
Lima, B; Faria, JP;

Publication
Proceedings of the 12th International Conference on Software Technologies, ICSOFT 2017, Madrid, Spain, July 24-26, 2017.

Abstract
The provisioning of a growing number of services depends on the proper interoperation of multiple products, forming a new distributed system, often subject to timing requirements. To ensure the interoperability and timely behavior of this new distributed system, it is important to conduct integration tests that verify the interactions with the environment and between the system components. Integration test scenarios for that purpose may be conveniently specified by means of UML sequence diagrams (SDs) enriched with time constraints. The automation of such integration tests requires that test components are also distributed, with a local tester deployed close to each system component, coordinated by a central tester. The distributed observation of execution events, combined with the impossibility of ensuring clock synchronization in a distributed system, poses special challenges for checking the conformance of the observed execution traces against the specification, possibly yielding inconclusive verdicts. Hence, in this paper we investigate decision procedures and criteria to check the conformance of observed execution traces against a specification set by a UML SD enriched with time constraints. The procedures and criteria are specified in a formal language that allows executing and validating the specification. Examples are presented to illustrate the approach.
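The inconclusive verdicts mentioned in this abstract can be illustrated with a minimal sketch (assumed timestamps and skew bound, not the paper's decision procedures): when the two timestamps of a time constraint come from different, unsynchronized local clocks, the verdict may depend on the unknown clock offset.

    # Minimal sketch (assumed names and bound): checking a time constraint between
    # two events observed at different components, whose clocks may differ by at
    # most MAX_SKEW, can yield an inconclusive verdict instead of pass/fail.

    MAX_SKEW = 0.05     # assumed bound (seconds) on clock offset between components

    def check_latency(t_send, t_receive, max_latency):
        """t_send and t_receive come from different, unsynchronized local clocks."""
        measured = t_receive - t_send
        if measured + MAX_SKEW <= max_latency:
            return "pass"            # holds even in the worst skew scenario
        if measured - MAX_SKEW > max_latency:
            return "fail"            # violated even in the best skew scenario
        return "inconclusive"        # verdict depends on the unknown clock offset

    print(check_latency(10.000, 10.120, max_latency=0.10))   # -> "inconclusive"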

2017

Learning Frameworks in a Social-Intensive Knowledge Environment - An Empirical Study

Authors
Flores, N; Aguiar, A;

Publication
INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING

Abstract
Application frameworks are a powerful technique for large-scale reuse, but require considerable effort to understand. Good documentation is costly, as it needs to address different audiences with disparate learning needs. When code and documentation prove insufficient, developers turn to their network of experts. Nevertheless, this proves difficult, mainly due to the lack of expertise awareness (who to ask), wasteful interruptions of the wrong people and unavailability (either due to intrusion or time constraints). The DRIVER platform is a collaborative learning environment where framework users can, in a non-intrusive way, store and share their learning knowledge while following the best practices of framework understanding (patterns). Developed by the authors, it provides a framework documentation repository, mounted on a wiki, where the learning paths of the community of learners can be captured, shared, rated, and recommended. Combining these social activities, the DRIVER platform promotes collaborative learning, mitigating intrusiveness, unavailability of experts and loss of tacit knowledge. This paper presents the assessment of DRIVER using a controlled academic experiment that measured the performance, effectiveness and framework knowledge intake of MSc students. The study concluded that, especially for novice learners, the platform allows for a faster and more effective learning process.
