2013
Authors
Nunes, A; Pereira, J;
Publication
Proceedings of the ACM Symposium on Applied Computing
Abstract
Although optimistic concurrency control protocols have increasingly been used in distributed database management systems, they imply a trade-off between the number of transactions that can be executed concurrently, and hence the peak throughput, and the number of transactions aborted due to conflicts. We propose a novel optimistic concurrency control mechanism that controls the transaction abort rate by minimizing the time during which transactions are vulnerable to abort, without compromising throughput. Briefly, we throttle transaction execution with an adaptive mechanism based on the state of the transaction queues, while allowing out-of-order execution based on expected transaction latency. Preliminary evaluation shows that this provides a substantial improvement in committed transaction throughput. Copyright 2013 ACM.
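As a rough illustration of the throttling idea described in this abstract (a minimal sketch under assumed interfaces, not the mechanism evaluated in the paper), a scheduler might admit new transactions for execution only while the certification queue is short, and start longer-running transactions first so that their execution overlaps the wait:

    import heapq
    from itertools import count

    class ThrottlingScheduler:
        # Illustrative sketch only, not the paper's algorithm: start new
        # transactions only while few executed transactions are queued for
        # certification, and start longer-running transactions first so that
        # execution overlaps the wait (out-of-order execution).
        def __init__(self, max_queued=4):
            self.max_queued = max_queued        # hypothetical admission threshold
            self._pending = []                  # max-heap by expected latency
            self._seq = count()                 # tie-breaker for equal latencies
            self.certification_queue = []       # executed txns awaiting certification

        def submit(self, txn, expected_latency):
            heapq.heappush(self._pending, (-expected_latency, next(self._seq), txn))

        def admit(self):
            # Throttle: admitting only while the certification queue is short keeps
            # the window in which executed transactions can be aborted small.
            admitted = []
            while self._pending and len(self.certification_queue) + len(admitted) < self.max_queued:
                _, _, txn = heapq.heappop(self._pending)
                admitted.append(txn)            # caller executes these, possibly out of order
            return admitted

        def executed(self, txn):
            self.certification_queue.append(txn)

        def certified(self, txn):
            self.certification_queue.remove(txn)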
2013
Authors
Nunes, A; Oliveira, R; Pereira, J;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Distributed transaction processing has benefited greatly from optimistic concurrency control protocols, which avoid costly fine-grained synchronization. However, the performance of these protocols degrades significantly as the workload increases, namely by leading to a substantial number of transactions aborted due to concurrency conflicts. Our approach stems from the observation that the abort rate increases with the load, as already executed transactions queue for longer periods of time waiting for their turn to be certified and committed. We thus propose an adaptive algorithm for judiciously scheduling transactions to minimize the time during which they are vulnerable to being aborted by concurrent transactions, thereby reducing the overall abort rate. We do so by throttling transaction execution using an adaptive mechanism, based on the locally known state of globally executing transactions, that includes out-of-order execution. Our evaluation, using traces from the industry-standard TPC-E workload, shows that the number of aborted transactions can be kept bounded as system load increases, while fully utilizing system resources and thus scaling transaction processing throughput. © 2013 IFIP International Federation for Information Processing.
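As a back-of-the-envelope model (an assumption for illustration, not taken from the paper): if conflicting updates arrive according to a Poisson process with rate λ while an executed transaction waits to be certified, the probability that it is aborted grows with the length w of its vulnerability window roughly as P(abort) = 1 - exp(-λ·w), so scheduling execution to finish closer to the certification turn, which shortens w, directly bounds the abort rate.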
2017
Authors
Alonso, A; Couto, R; Pacheco, H; Bessa, R; Gouveia, C; Seca, L; Moreira, J; Nunes, P; Matos, PG; Oliveira, A;
Publication
CIRED - Open Access Proceedings Journal
Abstract
In the framework of the Horizon 2020 project UPGRID, the Portuguese demo is focused on promoting the exchange of smart metering data between the DSO and different stakeholders, guaranteeing neutrality, efficiency and transparency. The platform described in this study, named the Market Hub Platform, has two main objectives: (i) to guarantee neutral data access to all market agents and (ii) to operate as a market hub for the flexibility of home energy management systems, in terms of consumption shifting under dynamic retail tariffs and contracted power limitation requests in response to technical problems. The validation results are presented and discussed in terms of scalability, availability and reliability.
2019
Authors
Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;
Publication
CLOSER: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND SERVICES SCIENCE
Abstract
In the context of the CloudDBAppliance (CDBA) project, fault tolerance and high availability are provided in layers: within each appliance, within a data centre and between data centres. This paper presents the proposed replication architecture for providing fault tolerance and high availability within a data centre. This layered configuration, along with specific deployment constraints, requires a custom replication architecture. In particular, replication must be implemented at the middleware level, to avoid constraining the backing operational database. This paper focuses on the design of the CDBA Replication Manager, along with an evaluation, using micro-benchmarking, of components for the replication middleware. Results show the impact, on both throughput and latency, of the replication mechanisms in place.
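As a minimal sketch of middleware-level replication (assumed interfaces and a hypothetical wire format, not the actual CDBA Replication Manager), a middleware layer could intercept writes, apply them to the local operational database and ship them to standby replicas without requiring changes to the database itself:

    import json
    import socket

    class ReplicationMiddleware:
        # Illustrative sketch only, not the CDBA Replication Manager API: writes
        # are applied to the local operational database and then shipped to
        # standby replicas, so replication does not constrain the backing database.
        def __init__(self, primary_conn, standby_addrs):
            self.primary = primary_conn      # connection to the local operational database
            self.standbys = standby_addrs    # [(host, port), ...] of standby middleware nodes

        def execute_write(self, statement):
            result = self.primary.execute(statement)          # apply locally first
            payload = json.dumps({"stmt": statement}).encode()
            for host, port in self.standbys:                  # then ship to each standby
                with socket.create_connection((host, port), timeout=1.0) as conn:
                    conn.sendall(payload)                     # fire-and-forget in this sketch
            return result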
2019
Authors
Abreu, H; Ferreira, L; Coelho, F; Alonso, AN; Pereira, J;
Publication
PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON DATA SCIENCE, TECHNOLOGY AND APPLICATIONS (DATA)
Abstract
In the context of the CloudDBAppliance (CDBA) project, fault tolerance and high availability are provided in layers: within each appliance, within a data centre and between data centres. This paper presents the recovery mechanisms in place to provide high availability within a data centre. The recovery mechanism takes advantage of CDBA's in-middleware replication mechanism to bring failed replicas up to date. Along with the description of different variants of the recovery mechanism, this paper provides their comparative evaluation, focusing on the time it takes to recover a failed replica and on how the recovery process impacts throughput.
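A minimal sketch of the recovery idea (hypothetical accessors, not CDBA's actual protocol): a recovering replica asks a healthy peer for the updates it missed since its last applied log position and replays them before rejoining the replication group:

    # Illustrative sketch, not CDBA's actual recovery protocol; all accessors are
    # hypothetical. A recovering replica fetches the updates it missed from a
    # healthy peer and replays them before resuming in-middleware replication.
    def recover(replica, peer):
        last_applied = replica.last_log_position()       # last update applied locally
        for update in peer.updates_since(last_applied):  # state transfer from a live replica
            replica.apply(update)                        # replay in log order
        replica.rejoin_group()                           # rejoin the replication group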
2020
Authors
Silva, F; Alonso, AN; Pereira, J; Oliveira, R;
Publication
Distributed Applications and Interoperable Systems - 20th IFIP WG 6.1 International Conference, DAIS 2020, Held as Part of the 15th International Federated Conference on Distributed Computing Techniques, DisCoTec 2020, Valletta, Malta, June 15-19, 2020, Proceedings
Abstract
The performance and scalability of Byzantine fault-tolerant (BFT) protocols for state machine replication (SMR) have recently come under scrutiny due to their use in the consensus mechanisms of blockchain implementations. This has led to a proliferation of proposals offering different trade-offs that are not easily compared: even though all are based on message passing, they differ in multiple design and implementation factors besides the message exchange pattern. In this paper we focus on the impact of different combinations of cryptographic primitives and of the message exchange pattern used to collect and disseminate votes, a key aspect for performance and scalability. By measuring this aspect in isolation and in a common framework, we characterise the design space and point out research directions for adaptive protocols that provide the best trade-off for each combination of environment and workload. © IFIP International Federation for Information Processing 2020.
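As an illustration of one point in the design space measured in the paper (a sketch with hypothetical names, not any specific protocol's implementation), a leader-based vote collection step gathers signed votes until it holds a Byzantine quorum of 2f + 1, with the cost of verification depending on the chosen cryptographic primitive:

    # Illustrative sketch of one design point in the space the paper measures:
    # a leader collecting signed votes until it holds a Byzantine quorum of
    # 2f + 1. verify() stands in for whichever cryptographic primitive is used
    # (e.g. per-vote signatures vs. an aggregate); all names are hypothetical.
    def collect_votes(votes, f, verify):
        quorum = 2 * f + 1
        valid_senders = set()
        for sender, value, signature in votes:
            if verify(sender, value, signature):     # verification cost depends on the primitive
                valid_senders.add(sender)
            if len(valid_senders) >= quorum:         # enough votes to decide and disseminate
                return True
        return False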