
Publications by HumanISE

2023

Applying Machine Learning to Estimate the Effort and Duration of Individual Tasks in Software Projects

Authors
Sousa, AO; Veloso, DT; Goncalves, HM; Faria, JP; Mendes Moreira, J; Graca, R; Gomes, D; Castro, RN; Henriques, PC;

Publication
IEEE ACCESS

Abstract
Software estimation is a vital yet challenging project management activity. Various methods, from empirical to algorithmic, have been developed to fit different development contexts, from plan-driven to agile. Recently, machine learning techniques have shown potential in this realm but are still underexplored, especially for individual task estimation. We investigate the use of machine learning techniques in predicting task effort and duration in software projects to assess their applicability and effectiveness in production environments, identify the best-performing algorithms, and pinpoint key input variables (features) for predictions. We conducted experiments with datasets of various sizes and structures exported from three project management tools used by partner companies. For each dataset, we trained regression models for predicting the effort and duration of individual tasks using eight machine learning algorithms. The models were validated using k-fold cross-validation and evaluated with several metrics. Ensemble algorithms like Random Forest, Extra Trees Regressor, and XGBoost consistently outperformed non-ensemble ones across the three datasets. However, the estimation accuracy and feature importance varied significantly across datasets, with a Mean Magnitude of Relative Error (MMRE) ranging from 0.11 to 9.45 across the datasets and target variables. Nevertheless, even in the worst-performing dataset, effort estimates aggregated to the project level showed good accuracy, with MMRE = 0.23. Machine learning algorithms, especially ensemble ones, seem to be a viable option for estimating the effort and duration of individual tasks in software projects. However, the quality of the estimates and the relevant features may depend largely on the characteristics of the available datasets and underlying projects. Nevertheless, even when the accuracy of individual estimates is poor, the aggregated estimates at the project level may present a good accuracy due to error compensation.
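The evaluation pipeline the abstract describes is standard enough to sketch. The following is a minimal, hypothetical illustration in Python with scikit-learn, not the authors' code: two of the ensemble algorithms named above are validated with 5-fold cross-validation and scored with the Mean Magnitude of Relative Error, MMRE = mean(|actual - predicted| / actual). A synthetic dataset stands in for the proprietary project-management exports, which are not public.

```python
# Illustrative sketch (not the paper's code): k-fold cross-validation of
# ensemble regressors for task-effort estimation, scored with MMRE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import KFold

def mmre(actual, predicted, eps=1e-9):
    """Mean Magnitude of Relative Error: mean(|actual - predicted| / actual)."""
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(actual - predicted) / np.maximum(actual, eps)))

# Stand-in for a dataset exported from a project management tool; real
# features might encode task type, assignee, priority, description length, etc.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
y = np.abs(y) + 1.0  # effort in hours must be positive for MMRE to be defined

models = {
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "ExtraTrees": ExtraTreesRegressor(n_estimators=200, random_state=0),
}

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = []
    for train_idx, test_idx in kf.split(X):
        model.fit(X[train_idx], y[train_idx])
        scores.append(mmre(y[test_idx], model.predict(X[test_idx])))
    print(f"{name}: MMRE = {np.mean(scores):.2f}")
```

As the abstract notes, MMRE varied widely across the real datasets (0.11 to 9.45), so results on synthetic data say nothing about real exports; the sketch only fixes the mechanics of the validation loop.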

2023

Case Studies of Development of Verified Programs with Dafny for Accessibility Assessment

Authors
Faria, JP; Abreu, R;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Formal verification techniques aim to formally prove the correctness of a computer program with respect to a formal specification, but the expertise and effort they require, together with scalability issues, have limited their practical application. In recent years, tremendous progress with SAT and SMT solvers has enabled a new generation of tools that promise to make formal verification more accessible to software engineers by automating most, if not all, of the verification process. The Dafny system is a prominent example of that trend. However, little evidence exists yet about its accessibility. To help fill this gap, we conducted a set of 10 case studies of developing verified implementations in Dafny of some real-world algorithms and data structures, to determine its accessibility for software engineers. We found that, on average, the amount of code written for specification and verification purposes is of the same order of magnitude as the traditional code written for implementation and testing purposes (ratio of 1.14), an "overhead" that certainly pays off for high-integrity software. The performance of the Dafny verifier was impressive, with 2.4 proof obligations generated per line of code written and 24 ms spent per proof obligation generated and verified, on average. However, we also found that the manual work needed in writing auxiliary verification code may be significant and difficult to predict and master. Hence, further automation and systematization of verification tasks are possible directions for future advances in the field. © 2023, IFIP International Federation for Information Processing.
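To make the reported specification-to-implementation split concrete: in an auto-active verifier, the specification and the proof hints live next to the implementation, and the tool discharges the proof obligations it generates. Since this listing contains no Dafny sources, the toy below uses Lean 4 as a stand-in (an assumption on my part, not code from the paper) to show that division of labour.

```lean
-- Toy stand-in (Lean 4, not Dafny) for the workflow described above:
-- implementation, specification, and auxiliary verification code side by side.

-- Implementation code: integer absolute value.
def absVal (x : Int) : Int :=
  if x >= 0 then x else -x

-- Specification + verification code: each theorem states a proof obligation;
-- the tactic script is the "auxiliary verification code" the abstract says
-- can be hard to predict (here it is trivial).
theorem absVal_nonneg (x : Int) : absVal x >= 0 := by
  unfold absVal
  split <;> omega

theorem absVal_of_nonneg (x : Int) (h : x >= 0) : absVal x = x := by
  unfold absVal
  split <;> omega
```

Even in this toy, the specification and proof lines are of the same order of magnitude as the implementation lines, loosely mirroring the 1.14 ratio the case studies report for Dafny.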

2023

Towards Computer Assisted Compliance Assessment in the Development of Software as a Medical Device

Authors
Farshid, S; Lima, B; Faria, JP;

Publication
Proceedings of the 18th International Conference on Software Technologies, ICSOFT 2023, Rome, Italy, July 10-12, 2023.

Abstract

2023

What about the usability in low-code platforms? A systematic literature review

Authors
Pinho, D; Aguiar, A; Amaral, V;

Publication
JOURNAL OF COMPUTER LANGUAGES

Abstract
Context: Low-code development is a concept whose presence has grown in both academia and the software industry, and it is discussed alongside others such as model-driven engineering and domain-specific languages. Usability is an important concept in low-code contexts, since users of these tools often lack a background in programming. Grey-literature articles have also claimed that low-code tools have high usability.
Objective: This paper examines the current literature on low-code and no-code to learn more about them and their relationship with usability: its quality, which factors are the most relevant, and how users view these tools. This focus on usability aims to provide a point of view that differs from other works on low-code.
Method: We performed a systematic literature review based on a formal protocol. The search protocol returned a total of 207 peer-reviewed articles across five databases, supplemented by a snowballing process. These were filtered using inclusion and exclusion criteria, resulting in 38 relevant articles that were analysed, synthesised and reported.
Conclusion: Despite growing interest in academia and a strong enterprise presence, we did not find a formal definition of low-code, although common characteristics have been specified. We found that users have a heightened awareness of usability regarding low-code tools, with some authors performing feasibility studies on their implementations or listing factors that influence the user experience in a given tool. Researchers are considering usability factors implicitly, and the low-code field would benefit from increased research on usability. This paper also suggests a definition for low-code development.

2023

Beyond Tradition: Evaluating Agile feasibility in DO-178C for Aerospace Software Development

Authors
Ferreira Ribeiro, JE; Silva, JG; Aguiar, A;

Publication
CoRR

Abstract

2023

EU3DIGITAL - ENSURING THE SUCCESS AND SUSTAINABILITY OF THIRD SECTOR ORGANISATIONS AND SOCIAL ENTERPRISES BY BOOSTING DIGITAL SKILLS AND COMPETENCES USING TRAINING RESOURCES

Authors
Aguiar, A; Soeiro, A; Jacklin-Jarvis, C; Foster, T;

Publication
EDULEARN Proceedings - EDULEARN23 Proceedings

Abstract
