2013
Authors
Moreira, RMLM; Paiva, ACR; Memon, A;
Publication
2013 IEEE 24TH INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING (ISSRE)
Abstract
User Interface (UI) patterns are used extensively in the design of today's software. UI patterns embody commonly recurring solutions to common GUI design problems, such as "login," "file-open," and "search." Yet, testing of GUIs for functional correctness has largely ignored UI patterns. This paper formalizes the notion of a Pattern-Based Graphical User Interface (GUI) Testing method (PBGT) for systematizing and automating the GUI testing process. The space of all possible interactions with a GUI is typically very large. PBGT presents a new methodology to sample the input space using "UI Test Patterns," which embody commonly recurring solutions for testing GUIs. Our empirical studies show that the PBGT methodology is effective in revealing faults in fielded GUIs.
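As a purely illustrative sketch (not taken from the paper), a UI Test Pattern can be pictured as a generic test strategy, such as "login," that the tester instantiates with concrete input configurations and expected outcomes; all class and field names below are assumptions, not the PBGT tool's actual API.

```python
# Hypothetical sketch of configuring a "login" UI Test Pattern.
# Names and structure are illustrative assumptions, not the real PBGT API.
from dataclasses import dataclass, field

@dataclass
class LoginTestPattern:
    """Generic test strategy for the recurring 'login' UI pattern."""
    username_locator: str
    password_locator: str
    submit_locator: str
    configurations: list = field(default_factory=list)

    def add_configuration(self, username, password, expected):
        # Each configuration pairs concrete input data with an expected outcome:
        # "valid" -> authentication succeeds, "invalid" -> an error is shown.
        self.configurations.append(
            {"username": username, "password": password, "expected": expected}
        )

# One valid and one invalid configuration for the same pattern instance.
login = LoginTestPattern("#user", "#pass", "#submit")
login.add_configuration("alice", "correct-secret", "valid")
login.add_configuration("alice", "wrong-secret", "invalid")
```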
2014
Authors
Vilela, L; Paiva, ACR;
Publication
PROCEEDINGS OF THE 2014 9TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI 2014)
Abstract
Currently, software tends to assume increasingly critical roles in our society, so assuring its quality becomes ever more crucial. There are several software testing tools and processes that help increase quality in virtually any type of software. One example is the so-called Model-Based Testing (MBT) tools, which generate test cases from models. However, most of these tools have a configuration phase, in which test input data is provided manually by the tester, and this data influences the quality of the generated test suite. By adding coverage analysis to MBT tools, it is possible to give feedback and help the tester define the configuration data needed to achieve the most valuable test suite possible. This paper presents a tool, PARADIGM-COV, that produces coverage information both over the PARADIGM model elements (to assess whether the input data is adequate to cover the test goals and whether the preconditions are achievable) and during test case execution (to identify the parts of the model/code that were actually exercised).
2017
Authors
Paiva, ACR; Vilela, L;
Publication
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS
Abstract
Currently, software tends to assume increasingly critical roles in our society, so assuring its quality becomes ever more crucial. There are several software testing tools and processes that help increase quality in virtually any type of software. One example is the so-called model-based testing (MBT) tools, which generate test cases from models. Pattern Based Graphical User Interface Testing (PBGT) is a new MBT methodology that aims at systematizing and automating the Graphical User Interface (GUI) testing process. It is supported by a tool (the PBGT Tool) which provides an integrated modeling and testing environment for crafting test models based on User Interface Test Patterns (UITP), using a GUI modeling Domain Specific Language (DSL) called PARADIGM. Most MBT tools have a configuration phase, in which test input data is provided manually by the tester, and this data influences the quality of the generated test suite. By adding coverage analysis to MBT tools, it is possible to give feedback and help the tester define the configuration data needed to achieve the most valuable test suite possible and, ultimately, contribute to increasing the quality of the software. This paper presents a multidimensional test coverage analysis approach and tool (PARADIGM-COV), developed in the context of the PBGT project, that produces coverage information both over the PARADIGM model elements and during test case execution (to identify the parts of the model that were actually exercised). It also presents a case study illustrating the benefits of multidimensional analysis and assessing the overall test coverage approach.
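As a rough illustration of the coverage dimension described above, the sketch below computes a simple model-element coverage ratio from executed test traces; the element names and trace format are assumptions made for the example, not PARADIGM-COV's actual data model.

```python
# Minimal sketch of a model-element coverage ratio of the kind a tool such as
# PARADIGM-COV reports; element names and the trace format are assumed here.

def model_coverage(model_elements, executed_traces):
    """Return the fraction of model elements exercised by the executed tests."""
    exercised = {element for trace in executed_traces for element in trace}
    covered = exercised & set(model_elements)
    return len(covered) / len(model_elements) if model_elements else 0.0

# Example: a model with four UI Test Pattern instances and two executed tests.
elements = ["Login", "Search", "MasterDetail", "Sort"]
traces = [["Login", "Search"], ["Login", "Sort"]]
print(f"Model element coverage: {model_coverage(elements, traces):.0%}")  # 75%
```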
2014
Authors
Garcia, A; Paiva, ACR;
Publication
ICEIS 2014 - Proceedings of the 16th International Conference on Enterprise Information Systems, Volume 2, Lisbon, Portugal, 27-30 April, 2014
Abstract
Incorrect requirements elicitation, together with requirements changes and evolution during the project lifetime, is the main cause pointed out for the failure of software projects. Requirements in the context of Software as a Service (SaaS) are in constant change and evolution, which makes the attention given to Requirements Engineering (RE) even more critical. The dynamic evolution of the context due to new stakeholder needs brings additional challenges to RE, such as the need to review the prioritization of requirements and to manage changes relative to their baseline. It is important to apply methodologies and techniques for requirements change management that allow a flexible development of SaaS and ensure its timely adaptation to change. However, the existing techniques and solutions can take so long to implement that they become ineffective. In this work, a new methodology to manage functional requirements is proposed. This methodology is based on collecting and analyzing information about the usage of the service to extract the pages visited, execution traces, and the most used functionalities. The analysis performed allows reviewing the existing requirements, proposing recommendations based on quality concerns, and improving service usability, with the ultimate goal of increasing the software's lifetime. Copyright © 2014 SCITEPRESS - Science and Technology Publications.
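As a hedged sketch of the general idea (the paper's actual technique is not reproduced here), collected usage information can be aggregated to rank the most used functionalities, which then feeds the review of requirement priorities; the log format below is an assumption made for the example.

```python
# Illustrative sketch: ranking functionalities by observed usage so that
# requirement priorities can be reviewed. The log format is an assumption.
from collections import Counter

def rank_functionalities(usage_log):
    """Count how often each functionality appears in the collected usage log."""
    return Counter(entry["functionality"] for entry in usage_log).most_common()

# Example: three recorded interactions with a SaaS application.
log = [
    {"user": "u1", "functionality": "search"},
    {"user": "u2", "functionality": "search"},
    {"user": "u1", "functionality": "export"},
]
for functionality, hits in rank_functionalities(log):
    print(functionality, hits)  # most used functionalities come first
```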
2014
Authors
Costa, P; Nabuco, M; Paiva, ACR;
Publication
2014 9TH INTERNATIONAL CONFERENCE ON THE QUALITY OF INFORMATION AND COMMUNICATIONS TECHNOLOGY (QUATIC)
Abstract
This paper presents a study aiming to assess the feasibility of using the Pattern Based GUI Testing approach, PBGT, to test mobile applications. PBGT is a new model-based testing approach that aims to increase systematization and reusability and to diminish the effort in modelling and testing. It is based on the concept of User Interface Test Patterns (UITP), which contain generic test strategies for testing common recurrent behaviour, the so-called UI Patterns, on GUIs through their possible different implementations after a configuration step. Although PBGT was developed with web applications in mind, it is possible to develop drivers for other platforms in order to test a wider set of applications. However, web and mobile applications are different, and developing a new driver to execute test cases over mobile applications may not, by itself, be enough. This paper describes a study aiming to identify the adaptations and updates PBGT should undergo in order to test mobile applications.
2014
Authors
Nabuco, M; Paiva, ACR;
Publication
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS, PART VI - ICCSA 2014
Abstract
This paper presents a tool to filter/configure the test cases generated within the model-based testing project PBGT. The models are written in a Domain Specific Language called PARADIGM and are composed of User Interface Test Patterns (UITP) describing the testing goals. To generate test cases, the tester has to provide test input data for each UITP in the model. After that, it is possible to generate test cases. However, without filtering/configuring the test case generation algorithm, the number of test cases can become so huge that running them is unfeasible. This paper therefore presents an approach to define parameters for test case generation in order to generate a feasible number of test cases. The approach is evaluated by comparing the different test strategies and by measuring the performance of the modeling tool against a capture-replay tool used for web testing.
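The combinatorial blow-up and the effect of a generation parameter can be illustrated with a small, hypothetical example (this is not the paper's algorithm): the unfiltered suite is the Cartesian product of the per-UITP configurations, while a simple cap keeps the generated number feasible.

```python
# Illustrative sketch (not the PBGT generation algorithm): the unfiltered test
# suite is the Cartesian product of per-UITP configurations, which grows
# multiplicatively; a generation parameter such as a cap keeps it feasible.
from itertools import islice, product

# Assumed example: three UITPs with 4, 5 and 6 input configurations each.
configurations_per_uitp = [4, 5, 6]
unfiltered = 1
for n in configurations_per_uitp:
    unfiltered *= n
print(f"Unfiltered test cases: {unfiltered}")  # 4 * 5 * 6 = 120 combinations

# One possible generation parameter: cap the number of generated combinations.
def generate_capped(config_sets, max_cases):
    return list(islice(product(*config_sets), max_cases))

config_sets = [range(n) for n in configurations_per_uitp]
suite = generate_capped(config_sets, max_cases=20)
print(f"Generated test cases after filtering: {len(suite)}")  # 20
```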