2021
Authors
Ferreira, JF; Mendes, A; Menghi, C;
Publication
Lecture Notes in Computer Science
Abstract
2021
Authors
Tavares, C; Oliveira, S; Fernandes, V; Postnikov, A; Vasilevskiy, MI;
Publication
SOFT COMPUTING
Abstract
As quantum computing approaches its first commercial implementations, quantum simulation emerges as a potentially ground-breaking technology for several domains, including biology and chemistry. However, taking advantage of quantum algorithms in quantum chemistry raises a number of theoretical and practical challenges at different levels, from conception to actual execution. We work through these challenges in a case study of a quantum simulation of the hydrogen (H2) and lithium hydride (LiH) molecules on an actual commercially available quantum computer, the IBM Q. The former molecule has long been a playground for testing approximate calculation methods in quantum chemistry, while the latter is slightly more complex, lacking the mirror symmetry of the former. Using the variational quantum eigensolver method, we study each molecule's ground-state energy versus interatomic distance under the action of stationary electric fields (Stark effect). Additionally, we review the calculations of the matrix elements of the second-quantization Hamiltonian needed to encompass the extra terms arising from the action of electric fields, using STO-LG-type atomic orbitals to build the minimal basis sets.
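As a hedged illustration of the variational quantum eigensolver (VQE) loop the abstract describes, the sketch below minimizes the expectation value of a two-qubit H2 Hamiltonian with a classical optimizer. The Pauli coefficients are illustrative literature values for one fixed interatomic distance, not taken from this paper; the one-parameter ansatz is a common simplification, and everything runs as a plain NumPy statevector simulation rather than on IBM Q hardware.

```python
# Minimal VQE sketch for a two-qubit H2 Hamiltonian (classical statevector
# simulation). Coefficients are illustrative literature values for one fixed
# bond length; they are NOT taken from the paper summarized above.
import numpy as np
from scipy.optimize import minimize

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Pauli decomposition H = sum_k c_k * P_k on two qubits.
H = (-1.05237 * np.kron(I, I)
     + 0.39794 * np.kron(I, Z)
     - 0.39794 * np.kron(Z, I)
     - 0.01128 * np.kron(Z, Z)
     + 0.18093 * np.kron(X, X))

def ansatz(theta):
    # One-parameter trial state spanning the |01>, |10> subspace, a common
    # simplified ansatz for H2 in a two-qubit encoding.
    state = np.zeros(4, dtype=complex)
    state[1] = np.cos(theta)  # |01>
    state[2] = np.sin(theta)  # |10>
    return state

def energy(params):
    psi = ansatz(params[0])
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=[0.0], method="COBYLA")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE energy:   {result.fun:.5f} Ha")
print(f"Exact ground: {exact:.5f} Ha")
```

On hardware, the energy evaluation inside the optimization loop would be replaced by repeated circuit executions and measurement averaging; the classical outer loop stays the same.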
2021
Authors
Dahlqvist, F; Neves, R;
Publication
CoRR
Abstract
2021
Authors
Barbosa, M; Ferreira, B; Marques, J; Portela, B; Preguica, N;
Publication
PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING AND NETWORKING (ICDCN '21)
Abstract
Conflict-free Replicated Data Types (CRDTs) are abstract data types that support developers when designing and reasoning about distributed systems with eventual consistency guarantees. At their core, they solve the problem of how to deal with concurrent operations in a way that is transparent to developers. However, in the real world, distributed systems also suffer from other relevant problems, including security and privacy issues, especially when participants can be untrusted. In this paper we present new privacy-preserving CRDT protocols that can be used to help secure distributed cloud-backed applications, including NoSQL geo-replicated databases. Our proposals are based on standard CRDTs, such as sets and counters, augmented with cryptographic mechanisms that allow their operations to be performed on encrypted data. We accompany our proposals with formal security proofs, and we implement and integrate them in AntidoteDB, a geo-replicated NoSQL database that leverages CRDTs for its operations. Experimental evaluations based on the Danish Shared Medication Record dataset (FMK) exhibit the tradeoffs that our different proposals make and show that they are ready to be used in practical applications.
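For readers unfamiliar with the base constructions the paper builds on, the sketch below shows a standard state-based grow-only counter (G-Counter), one of the plain CRDTs that the paper's protocols augment with encryption. This is the textbook formulation, not the paper's privacy-preserving variant, and no cryptography is shown.

```python
# Standard state-based G-Counter CRDT: each replica increments only its own
# slot, and merge takes the pointwise maximum, making merges commutative,
# associative, and idempotent, which yields eventual convergence.
class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas increment concurrently, then exchange and merge state.
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5  # replicas converge
```

The paper's counters keep this merge structure but store per-replica values in encrypted form, so the untrusted store never sees plaintext counts.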
2021
Authors
Pereira, A; Proenca, A;
Publication
Advances in Parallel & Distributed Processing, and Applications - Transactions on Computational Science and Computational Intelligence
Abstract
2021
Authors
Pereira, A; Proenca, A;
Publication
COMPUTER PHYSICS COMMUNICATIONS
Abstract
Software to analyse very large sets of experimental data often relies on a pipeline of irregular computational tasks with decisions to remove irrelevant data from further processing. A user-centred framework, HEP-Frame, was designed and deployed to aid domain experts in developing applications for scientific data analyses and in monitoring and controlling their efficient execution. The key feature of HEP-Frame is the performance portability of the code across different heterogeneous platforms, due to a novel adaptive multi-layer scheduler seamlessly integrated into the tool, an approach not available in competing frameworks. The multi-layer scheduler transparently allocates parallel data/tasks across the available heterogeneous resources, dynamically balances threads among data input and computational tasks, adaptively reorders at run-time the parallel execution of the pipeline stages for each data stream, respecting data dependencies, and efficiently manages the execution of library functions in accelerators. Each layer implements a specific scheduling strategy: one balances the execution of the computational stages of the pipeline, distributing the execution of the stages of the same or different dataset elements among the available computing threads; another controls the order of pipeline stage execution, so that most data is filtered out early and the later stages execute the computationally heavy tasks; yet another adaptively balances the automatically created threads among data input and the computational tasks, taking into account the requirements of each application. Simulated data analyses from sensors in the ATLAS Experiment at CERN evaluated the scheduler efficiency, on dual multicore Xeon servers with and without accelerators and on servers with the many-core Intel KNL. Experimental results show significantly improved performance of these data analyses due to HEP-Frame features, and the code scaled well on multiple servers. Results also show improved HEP-Frame scheduler performance over the key competitor, the HEFT list scheduler. The best overall performance improvement over a real, fine-tuned sequential data analysis was impressive in homogeneous, heterogeneous, and many-core servers: 81x faster in the homogeneous 24+24 core Skylake server, 86x faster in the heterogeneous 12+12 core Ivy Bridge server with the Kepler GPU, and 252x faster in the 64-core KNL server.
Program summary
Program Title: HEP-Frame
CPC Library link to program files: https://doi.org/10.17632/m2jwxshtfz.1
Licensing provisions: GPLv3
Programming language: C++
Supplementary material: The current HEP-Frame public release is available at https://bitbucket.org/ampereira/hep-frame/wiki/Home.
Nature of problem: Scientific data analysis applications are often developed to process large amounts of data obtained through experimental measurements or Monte Carlo simulations, aiming to identify patterns in the data or to test and/or validate theories. These large inputs are usually processed by a pipeline of computational tasks that may filter out irrelevant data (a task and its filter are addressed as a proposition in this communication), preventing it from being processed by subsequent tasks in the pipeline. This data filtering, coupled with the fact that propositions may have different computational intensities, contributes to the irregularity of the pipeline execution. This can lead to I/O-, memory-, or compute-bound performance limitations in scientific data analyses, depending on the implemented algorithms and input data. To allow scientists to process more data with more accurate results, their code and data structures should be optimized for the computing resources they can access. Since the main goal of most scientists is to obtain results relevant to their scientific fields, often within strict deadlines, optimizing the performance of their applications is very time consuming and usually overlooked. Scientists require a software framework to aid the design and development of efficient applications and to control their parallel execution on distinct computing platforms.
Solution method: This work proposes HEP-Frame, a framework to aid the development and efficient execution of pipelined scientific analysis applications on homogeneous and heterogeneous servers. HEP-Frame is a user-centred framework that helps scientists develop applications to analyse data from a large number of dataset elements, with a flexible pipeline of propositions. It not only stresses the interface to domain experts, so that code is more robust and developed faster, but also aims at performance portability across different types of parallel computing platforms, together with desirable sustainability features. The framework provides efficient parallel code execution without requiring user expertise in parallel computing. Frameworks to aid the design and deployment of scientific code usually fall into two categories: (i) resource-centred, closer to the computing platforms, where execution efficiency and performance portability are the main goals but developers are forced to adapt their code to strict framework constraints; (ii) user-centred, which stress the interface to domain experts to improve code development speed and robustness, aiming to provide desirable sustainability features but disregarding execution performance. There is also a set of frameworks that merge these two categories (Liu et al., 2015 [1]; Deelman et al., 2015 [2]) for scientific computing. While they do not have steep learning curves, concessions have to be made to their ease of use to allow for their broader scope of targeted applications. HEP-Frame attempts to bridge this gap, placing itself between fully user-centred and fully resource-centred frameworks, so that users develop code quickly and do not have to worry about its computational efficiency. It addresses (i) by ensuring efficient execution of applications, according to their computational requirements and the available server resources, through a multi-layer scheduler, while (ii) is addressed by automatically generating code skeletons, transparently managing the data structure, and automating repetitive tasks. A simplified illustration of the proposition-reordering idea is sketched after this summary.
Additional comments: An early-stage proof of concept was published in conference proceedings (Pereira et al., 2015). However, the HEP-Frame version presented in this communication shares only a very small portion of that code, related to skeleton generation (less than 5% of the overall code); the rest of the user interface, the multi-layer scheduler, and the parallelization strategies were completely redesigned and re-implemented.
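To make the scheduling idea concrete, here is a hedged sketch of one layer the abstract describes: a pipeline of propositions (task plus filter) whose execution order is periodically reordered so that cheap, highly selective propositions run first. The proposition names, the cost/selectivity bookkeeping, and the reordering heuristic are invented for this sketch; HEP-Frame's actual scheduler is a C++ multi-layer design with further responsibilities (thread balancing, accelerator offload) not shown here.

```python
# Illustrative sketch of adaptive proposition reordering: each proposition
# filters dataset elements; measured cost and pass rate are used to move
# cheap, selective propositions to the front of the pipeline.
import time

class Proposition:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
        self.calls = self.passed = 0
        self.total_time = 0.0

    def __call__(self, event) -> bool:
        t0 = time.perf_counter()
        keep = self.fn(event)
        self.total_time += time.perf_counter() - t0
        self.calls += 1
        self.passed += keep
        return keep

    def rank(self) -> float:
        # Invented heuristic: cheap filters that reject most events should
        # run earliest (lower rank = earlier in the pipeline).
        if self.calls == 0:
            return 0.0
        return (self.total_time / self.calls) * (1.0 + self.passed / self.calls)

def run_pipeline(events, propositions, reorder_every=1000):
    kept = []
    for i, ev in enumerate(events):
        if i and i % reorder_every == 0:
            propositions.sort(key=Proposition.rank)  # adaptive reorder
        # all() short-circuits: later stages never see filtered-out events.
        if all(p(ev) for p in propositions):
            kept.append(ev)
    return kept

pipeline = [
    Proposition("heavy_cut", lambda e: sum(j * j for j in range(50)) >= 0 and e % 2 == 0),
    Proposition("cheap_cut", lambda e: e % 100 == 0),  # rejects 99% cheaply
]
survivors = run_pipeline(range(100_000), pipeline)
print(len(survivors), [p.name for p in pipeline])  # cheap_cut moves to the front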