
Publications by HASLab

2024

Formally Verifying Kyber Episode V: Machine-Checked IND-CCA Security and Correctness of ML-KEM in EasyCrypt

Authors
Almeida, JB; Olmos, SA; Barbosa, M; Barthe, G; Dupressoir, F; Grégoire, B; Laporte, V; Lechenet, JC; Low, C; Oliveira, T; Pacheco, H; Quaresma, M; Schwabe, P; Strub, PY;

Publication
ADVANCES IN CRYPTOLOGY - CRYPTO 2024, PT II

Abstract
We present a formally verified proof of the correctness and IND-CCA security of ML-KEM, the Kyber-based Key Encapsulation Mechanism (KEM) undergoing standardization by NIST. The proof is machine-checked in EasyCrypt and includes: 1) A formalization of the correctness (decryption failure probability) and IND-CPA security of the Kyber base public-key encryption scheme, following Bos et al. at EuroS&P 2018; 2) A formalization of the relevant variant of the Fujisaki-Okamoto transform in the Random Oracle Model (ROM), which follows closely (but not exactly) Hofheinz, Hövelmanns and Kiltz at TCC 2017; 3) A proof that the IND-CCA security of the ML-KEM specification and its correctness as a KEM follow from the previous results; 4) Two formally verified implementations of ML-KEM written in Jasmin that are provably constant-time, functionally equivalent to the ML-KEM specification and, for this reason, inherit the provable security guarantees established in the previous points. The top-level theorems give self-contained concrete bounds for the correctness and security of ML-KEM down to (a variant of) Module-LWE. We discuss how they are built modularly by leveraging various EasyCrypt features.
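For readers unfamiliar with the Fujisaki-Okamoto transform mentioned in point 2), the following is a minimal illustrative sketch of its general shape: encapsulation hashes a random message to derive both the shared key and the encryption coins, and decapsulation re-encrypts and compares before accepting. The "PKE" here is a deliberately insecure toy (with the secret key equal to the public key) invented for this example; it is not ML-KEM's lattice-based scheme, and the exact hash domain separation differs from the standard.

```python
import hashlib, os

def G(m: bytes) -> tuple[bytes, bytes]:
    """Random oracle G: derive (shared key K, encryption coins r) from m."""
    d = hashlib.sha3_512(m).digest()
    return d[:32], d[32:]

def enc(pk: bytes, m: bytes, r: bytes) -> bytes:
    """Toy deterministic 'encryption' (NOT secure): XOR pad from pk and coins,
    with the coins appended so the toy decryption can recompute the pad."""
    pad = hashlib.sha3_256(pk + r).digest()
    return bytes(a ^ b for a, b in zip(m, pad)) + r

def dec(sk: bytes, c: bytes) -> bytes:
    """Toy decryption; in this toy scheme sk == pk."""
    body, r = c[:32], c[32:]
    pad = hashlib.sha3_256(sk + r).digest()
    return bytes(a ^ b for a, b in zip(body, pad))

def encaps(pk: bytes) -> tuple[bytes, bytes]:
    """FO-style encapsulation: K and the coins both come from hashing m."""
    m = os.urandom(32)
    K, r = G(m)
    return enc(pk, m, r), K

def decaps(sk: bytes, pk: bytes, c: bytes) -> bytes:
    """FO-style decapsulation with re-encryption check and implicit rejection."""
    m = dec(sk, c)
    K, r = G(m)
    if enc(pk, m, r) != c:  # re-encryption check catches tampered ciphertexts
        return hashlib.sha3_256(b"reject" + c).digest()  # implicit rejection
    return K
```

The re-encryption check is the mechanism that lifts IND-CPA security of the base scheme to IND-CCA security of the KEM: any ciphertext not produced honestly from some message via G fails the check and yields an unrelated pseudorandom key.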

2024

Assessing the impact of hints in learning formal specification

Authors
Cunha, A; Macedo, N; Campos, JC; Margolis, I; Sousa, E;

Publication
2024 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: SOFTWARE ENGINEERING EDUCATION AND TRAINING, ICSE-SEET 2024

Abstract
Background: Many programming environments include automated feedback in the form of hints to help novices learn autonomously. Some experimental studies have investigated the impact of automated hints on immediate performance and learning retention in that context. Automated feedback is also becoming a popular research topic in the context of formal specification languages, but so far no experimental studies have been conducted to assess its impact while learning such languages. Objective: We aim to investigate the impact of different types of automated hints while learning a formal specification language, not only in terms of immediate performance and learning retention, but also in terms of the emotional response of the students. Method: We conducted a simple one-factor randomised experiment in 2 sessions involving 85 BSc students majoring in CSE. In the 1st session students were divided into 1 control group and 3 experimental groups, each receiving a different type of hint while learning to specify simple requirements with the Alloy formal specification language. To assess the impact of hints on learning retention, in the 2nd session, 1 week later, students had no hints while formalising requirements. Before and after each session the students answered a standard self-reporting emotional survey to assess their emotional response to the experiment. Results: Of the 3 types of hints considered, only those pointing to the precise location of an error had a positive impact on immediate performance, and none had a significant impact on learning retention. Hint availability also had a significant impact on the emotional response, but no significant emotional impact exists once hints are no longer available (i.e. no deprivation effects were detected).
Conclusion: Although none of the evaluated hints had an impact on learning retention, learning a formal specification language with an environment that provides hints with precise error locations seems to contribute to a better overall experience without apparent drawbacks. Further studies are needed to investigate whether other kinds of feedback, namely hints combined with some sort of self-explanation prompts, can have a positive impact on learning retention.

2024

50 years of Research in Engineering Interactive Computing Systems: the CCL 1974 to EICS 2024 journey

Authors
Campos, JC; Luyten, K; Nigay, L; Palanque, P; Paternò, F; Spano, LD; Vanderdonckt, J;

Publication
COMPANION OF THE 2024 ACM SIGCHI SYMPOSIUM ON ENGINEERING INTERACTIVE COMPUTING SYSTEMS, EICS 2024

Abstract
This panel commemorates the 50th anniversary of the IFIP TC2 Working Conference on Command Languages (CCL) and the 30th anniversary of the workshop series on Design Specification and Verification of Interactive Systems (DSV-IS), and uses that opportunity to position EICS within the HCI community. The discussion traces the origins of the EICS conference, from the union of seminal conferences to its current status and looks forward into its (possible) future. Reflecting on its contributions to the evolution of HCI methodologies, tools, and practices, the panel highlights the conference's role and impact on shaping the engineering of interactive systems.

2024

Explaining Temporal Logic Model Checking Counterexamples Through the Use of Structured Natural Language

Authors
Moreira, EJVF; Campos, JC;

Publication
ENGINEERING INTERACTIVE COMPUTER SYSTEMS, EICS 2023 INTERNATIONAL WORKSHOPS AND DOCTORAL CONSORTIUM

Abstract
The use of model checking tools allows for the formal verification of properties over models of systems, improving their robustness. However, these tools are challenging to use, and their results require considerable interpretation before they can be communicated to stakeholders. To address this issue, the IVY Workbench offers a range of options to make the process of creating and understanding the models, properties and results of the verification process more accessible, with a particular focus on interactive computing systems. Even so, significant expertise is still required to use the tool. To address this, an approach that provides structured natural language explanations for the results of model checking-based tools is being developed, to be later incorporated into the IVY Workbench. This paper presents the current state of the approach's development, stating its objective and the results that can already be achieved.

2024

Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS Companion 2024, Cagliari, Italy, June 24-28, 2024

Authors
Nebeling, M; Spano, LD; Campos, JC;

Publication
EICS (Companion)

2024

A Language for Explaining Counterexamples

Authors
Ferreira Moreira, EJV; Campos, JC;

Publication
13th Symposium on Languages, Applications and Technologies, SLATE 2024, July 4-5, 2024, Águeda, Portugal

Abstract
Model checkers can automatically verify a system's behavior against temporal logic properties. However, analyzing the counterexamples produced in case of failure is still a manual process that requires both technical and domain knowledge, even though this step is crucial to understanding the flaws of the system being verified. This paper presents a language created to support the generation of natural language explanations of counterexamples produced by a model checker. The language supports querying the properties and counterexamples to generate the explanations. The paper describes the language's components and how they can be used to produce explanations.
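To illustrate the general idea of rendering a counterexample as structured natural language (this sketch is not the paper's language; the trace format, state names, and property below are invented for the example), consider a lasso-shaped trace violating a liveness property such as G(request -> F grant):

```python
def explain_trace(prop_desc, trace, loop_start):
    """Render a lasso-shaped counterexample (finite prefix plus a loop)
    as a short structured natural-language explanation."""
    lines = [f"The property '{prop_desc}' fails on the following run:"]
    for i, state in enumerate(trace):
        held = ", ".join(k for k, v in state.items() if v) or "nothing"
        marker = " (loop starts here)" if i == loop_start else ""
        lines.append(f"  Step {i}: {held} holds{marker}.")
    lines.append(f"  From step {len(trace) - 1} the run repeats back to "
                 f"step {loop_start}, so the awaited condition never occurs.")
    return "\n".join(lines)

# Invented example: a request is raised but the loop never grants it.
trace = [
    {"request": False, "grant": False},
    {"request": True,  "grant": False},
    {"request": False, "grant": False},
]
print(explain_trace("every request is eventually granted", trace, loop_start=2))
```

The point of such a query-and-render layer is that the explanation is derived mechanically from the trace structure (prefix, loop, atomic propositions), so stakeholders can read why the property fails without inspecting the raw model-checker output.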
