2026
Authors
Sadhu, S; Mallick, D; Namtirtha, A; Malta, MC; Dutta, A;
Publication
IEEE Transactions on Emerging Topics in Computational Intelligence
Abstract
2025
Authors
Matos, T; Mendes, D; Jacob, J; de Sousa, AA; Rodrigues, R;
Publication
2025 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW
Abstract
Virtual Reality allows users to experience realistic environments in an immersive and controlled manner, which is particularly beneficial in contexts where the real scenario is not easily or safely accessible. The choice between 360° content and 3D models impacts outcomes such as perceived quality and computational cost, but can also affect user attention. This study explores how attention manifests in VR when using a 3D model or a 360° image rendered from that model during visuospatial tasks. User tests revealed no significant difference in workload or cybersickness between these types of content, while sense of presence was reportedly higher in the 3D environment.
2025
Authors
Rogers, TB; Meneveaux, D; Ammi, M; Ziat, M; Jänicke, S; Purchase, HC; Radeva, P; Furnari, A; Bouatouch, K; de Sousa, AA;
Publication
VISIGRAPP (3): VISAPP
Abstract
2025
Authors
Rogers, TB; Meneveaux, D; Ammi, M; Ziat, M; Jänicke, S; Purchase, HC; Radeva, P; Furnari, A; Bouatouch, K; de Sousa, AA;
Publication
VISIGRAPP (2): VISAPP
Abstract
2025
Authors
Rogers, TB; Meneveaux, D; Ammi, M; Ziat, M; Jänicke, S; Purchase, HC; Radeva, P; Furnari, A; Bouatouch, K; de Sousa, AA;
Publication
VISIGRAPP (1): GRAPP, HUCAPP, IVAPP
Abstract
2025
Authors
Rincon, AM; Vincenzi, AMR; Faria, JP;
Publication
2025 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW
Abstract
This study explores prompt engineering for automated white-box integration testing of RESTful APIs using Large Language Models (LLMs). Four versions of prompts were designed and tested across three OpenAI models (GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o) to assess their impact on code coverage, token consumption, execution time, and financial cost. The results indicate that different prompt versions, especially with more advanced models, achieved up to 90% coverage, although at higher cost. Additionally, combining test sets from different models increased coverage, reaching 96% in some cases. We also compared the results with EvoMaster, a specialized tool for generating tests for REST APIs, and found that LLM-generated tests achieved comparable or higher coverage on the benchmark projects. Despite higher execution costs, LLMs demonstrated superior adaptability and flexibility in test generation.
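The abstract does not reproduce the prompts or pipeline code; as a rough illustration of the kind of workflow it describes (sending endpoint source code to an OpenAI model, collecting a generated integration test, and later measuring its coverage), the following minimal Python sketch uses the OpenAI chat-completions client. The prompt wording, endpoint file, model settings, and test framework are assumptions for illustration, not the paper's actual prompt versions.

# Illustrative sketch only: asks an OpenAI chat model to draft a white-box
# integration test for a RESTful endpoint. File names and prompt text are
# hypothetical and do not reproduce the study's four prompt versions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# White-box input: source code of the endpoint under test (hypothetical path).
with open("app/routes/orders.py") as f:
    endpoint_source = f.read()

prompt = (
    "You are a test engineer. Given the following REST endpoint source code, "
    "write a pytest integration test that exercises its success and error "
    "paths through HTTP calls to a test client.\n\n"
    + endpoint_source
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the models compared in the study
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # deterministic output for repeatable runs
)

# The generated test would then be saved, executed, and its code coverage
# measured (e.g., with coverage.py) to compare prompt versions, models,
# token consumption, and cost, as the study does.
print(response.choices[0].message.content)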