
Publications by Ricardo Rocha

2008

Thread-Based Competitive Or-Parallelism

Authors
Moura, P; Rocha, R; Madeira, SC;

Publication
LOGIC PROGRAMMING, PROCEEDINGS

Abstract
This paper presents the logic programming concept of thread-based competitive or-parallelism, which combines the original idea of competitive or-parallelism with committed-choice nondeterminism and speculative threading. In thread-based competitive or-parallelism, an explicit disjunction of subgoals is interpreted as a set of concurrent alternatives, each running in its own thread. The subgoals compete to provide an answer and the first successful subgoal leads to the termination of the remaining ones. We discuss the implementation of competitive or-parallelism in the context of Logtalk, an object-oriented logic programming language, and present results.
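
The core idea lends itself to a small illustration. The Python sketch below is not the Logtalk implementation described in the paper; it only mimics the "first answer wins" behaviour with hypothetical names (competitive_or, alternative_a/b/c): the alternatives for the same query run in their own threads, the first result produced is kept, and the remaining computations are abandoned. Cancellation here is best-effort, since Python cannot kill a running thread the way a thread-aware logic programming runtime can terminate its workers.

    import concurrent.futures as cf
    import time

    # Three alternative ways of answering the same query; in the paper's setting
    # these would be the disjuncts of an explicit disjunction, each in its own thread.
    def alternative_a(n):
        time.sleep(0.5)                 # simulate a slow search strategy
        return ("a", n * n)

    def alternative_b(n):
        time.sleep(0.1)                 # simulate a faster strategy
        return ("b", n + n)

    def alternative_c(n):
        time.sleep(1.0)                 # simulate the slowest strategy
        return ("c", n)

    def competitive_or(alternatives, arg):
        """Run all alternatives concurrently and keep the first answer produced."""
        pool = cf.ThreadPoolExecutor(max_workers=len(alternatives))
        futures = [pool.submit(alt, arg) for alt in alternatives]
        done, not_done = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()                  # best effort: running threads cannot be killed
        pool.shutdown(wait=False)
        return next(iter(done)).result()

    if __name__ == "__main__":
        print(competitive_or([alternative_a, alternative_b, alternative_c], 21))
        # ('b', 42) -- the fastest alternative wins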

2002

Achieving Scalability in Parallel Tabled Logic Programs

Authors
Rocha, R; Silva, FMA; Costa, VS;

Publication
16th International Parallel and Distributed Processing Symposium (IPDPS 2002), 15-19 April 2002, Fort Lauderdale, FL, USA, CD-ROM/Abstracts Proceedings

Abstract
Tabling or memoing is a technique where one stores intermediate answers to a problem so that they can be reused in further calls. Tabling is of interest to logic programming because it addresses some of the most significant weaknesses of Prolog. Namely, it can guarantee termination for programs with the bounded term-size property. Tabled programs exhibit a more complex execution mechanism than traditional Prolog's left-to-right search with backtracking. The reason is that Prolog programs are highly recursive and generate multiple answers. This rather involved execution mechanism requires a more complex implementation than traditional Prolog. The declarative nature of tabled logic programming suggests that it might be amenable to parallel execution. On the other hand, the complexity of the tabling mechanism, and the existence of a shared resource, the table, argues that parallelism might be limited, and that performance for real applications might never scale. In this work we prove that parallel tabling is indeed scalable for real applications by experimenting with the OPTYap parallel tabling system on a scalable shared-memory machine. © 2002 IEEE.
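
As a rough illustration of why tabling helps with termination, the Python sketch below (hypothetical names, not the OPTYap implementation) computes reachability over a graph with a cycle: a naive recursive search would loop on the cycle, whereas keeping a table of calls already made, and reusing their answers, makes the computation terminate. A real tabling engine computes a proper fixpoint over the table; this one-pass version only conveys the idea.

    # Toy "tabled" reachability: the call table guarantees termination on cycles.
    EDGES = {"a": ["b"], "b": ["c", "a"], "c": []}       # cycle a -> b -> a

    def reachable(graph):
        table = {}                       # call table: node -> answers found so far
        def solve(node):
            if node in table:            # reuse answers of a call already made
                return table[node]
            table[node] = set()          # register the call before exploring it
            for succ in graph.get(node, []):
                table[node].add(succ)
                table[node] |= solve(succ)
            return table[node]
        return solve

    if __name__ == "__main__":
        solve = reachable(EDGES)
        print(sorted(solve("a")))        # ['a', 'b', 'c'], no infinite loop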

2001

On a Tabling Engine That Can Exploit Or-Parallelism

Authors
Rocha, R; Silva, FMA; Costa, VS;

Publication
Logic Programming, 17th International Conference, ICLP 2001, Paphos, Cyprus, November 26 - December 1, 2001, Proceedings

Abstract
Tabling is an implementation technique that improves the declarativeness and expressiveness of Prolog by reusing solutions to goals. Quite a few interesting applications of tabling have been developed in the last few years, and several are by nature non-deterministic. This raises the question of whether parallel search techniques can be used to improve the performance of tabled applications. In this work we demonstrate that the mechanisms proposed to parallelize search in the context of SLD resolution naturally generalize to parallel tabled computations, and that resulting systems can achieve good performance on multi-processors. To do so, we present the OPTYap parallel engine. In our system individual SLG engines communicate data through stack copying. Completion is detected through a novel parallel completion algorithm that builds upon the data structures proposed for or-parallelism. Scheduling is simplified by building on previous research on or-parallelism. We show initial performance results for our implementation. Our best result is for an actual application, model checking, where we obtain linear speedups. © Springer-Verlag Berlin Heidelberg 2001.

1999

YapOr: an Or-Parallel Prolog System Based on Environment Copying

Authors
Rocha, R; Silva, FMA; Costa, VS;

Publication
Progress in Artificial Intelligence, 9th Portuguese Conference on Artificial Intelligence, EPIA '99, Évora, Portugal, September 21-24, 1999, Proceedings

Abstract
YapOr is an or-parallel system that extends the Yap Prolog system to exploit implicit or-parallelism in Prolog programs. It is based on the environment copying model, as first implemented in Muse. The development of YapOr required solutions for some important issues, such as designing the data structures to support parallel processing, implementing the incremental copying technique, developing a memory organization able to respond efficiently to parallel processing and to incremental copying in particular, implementing the scheduler strategies, designing an interface between the scheduler and the engine, implementing the work-sharing process, and implementing support for the cut builtin. An initial evaluation of YapOr performance showed that it achieves very good performance on a large set of benchmark programs. Indeed, YapOr compares favorably with a mature parallel Prolog system such as Muse, both in terms of base speed and in terms of speedups. © Springer-Verlag Berlin Heidelberg 1999.
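
The incremental copying issue mentioned above can be pictured with a small sketch. The Python below uses hypothetical names and a deliberately simplified data layout (it is not the YapOr engine code): an idle worker updates its stack from a busy worker by copying only the suffix the two stacks do not already share from a previous sharing operation, instead of copying the whole stack.

    def incremental_copy(src_stack, dst_stack):
        """Copy only the part of src_stack that dst_stack does not already share."""
        common = 0
        for a, b in zip(src_stack, dst_stack):
            if a != b:
                break
            common += 1
        # Keep the shared prefix, replace the rest with the busy worker's segment.
        return dst_stack[:common] + src_stack[common:]

    busy_worker = ["frame0", "frame1", "frame2", "frame3"]   # currently executing
    idle_worker = ["frame0", "frame1", "old_frame"]          # stale since last copy

    idle_worker = incremental_copy(busy_worker, idle_worker)
    print(idle_worker)    # ['frame0', 'frame1', 'frame2', 'frame3']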

2012

Or-parallel Prolog execution on multicores based on stack splitting

Authors
Vieira, R; Rocha, R; Silva, FMA;

Publication
Proceedings of the POPL 2012 Workshop on Declarative Aspects of Multicore Programming, DAMP 2012, Philadelphia, PA, USA, Saturday, January 28, 2012

Abstract
Many or-parallel Prolog computational models exploiting implicit parallelism have been proposed in the past. The Muse and YapOr systems are arguably two of the most efficient systems exploiting or-parallelism on shared memory architectures, both based on the environment copying model. Stack splitting emerged as an alternative model specially geared to distributed memory architectures as it basically splits the computation in such a way that no further, or just minimal, synchronization is required. With the new multicore architectures, it just makes sense to recover the body of knowledge there is in this area and reengineer prior computational models to evaluate their performance on newer architectures. In this paper, we focus on the design and implementation of stack splitting in the YapOr system. Our aim is to take advantage of its robustness to efficiently implement stack splitting support using shared memory, and then be able to directly compare the original YapOr with the YapOr using stack splitting. We consider two splitting models, vertical splitting and half splitting, and have adapted data structures, scheduling and incremental copy procedures in YapOr to cope with the new models. Experimental results, on a multicore machine with 24 cores, show that YapOr using stack splitting is, in general, comparable to the original YapOr, obtaining in some cases better performance than with only environment copying. Copyright © 2012 ACM.
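
To make the splitting strategies concrete, here is a hedged Python sketch over a made-up representation in which each choice point is simply the list of its unexplored alternatives. The reading below, half splitting as handing one half of the choice point stack to the idle worker and vertical splitting as alternating whole choice points between the two workers, is an assumption made for illustration rather than a transcription of the paper's exact definitions; the point is only that, after splitting, the two workers own disjoint alternatives and need no further synchronization.

    def half_splitting(choice_points):
        """Keep one half of the choice point stack, give the other half away."""
        mid = len(choice_points) // 2
        return choice_points[:mid], choice_points[mid:]       # (keep, give away)

    def vertical_splitting(choice_points):
        """Alternate whole choice points between the two workers."""
        return choice_points[0::2], choice_points[1::2]       # (keep, give away)

    # Four choice points, each holding its unexplored alternatives.
    cps = [["a1", "a2"], ["b1"], ["c1", "c2", "c3"], ["d1"]]
    print(half_splitting(cps))        # bottom half kept, top half handed over
    print(vertical_splitting(cps))    # every other choice point handed over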

2009

Improving the efficiency of inductive logic programming systems

Authors
Fonseca, NA; Costa, VS; Rocha, R; Camacho, R; Silva, F;

Publication
SOFTWARE-PRACTICE & EXPERIENCE

Abstract
Inductive logic programming (ILP) is a sub-field of machine learning that provides an excellent framework for multi-relational data mining applications. The advantages of ILP have been successfully demonstrated in complex and relevant industrial and scientific problems. However, to produce valuable models, ILP systems often require long running times and large amounts of memory. In this paper we address fundamental issues that have direct impact on the efficiency of ILP systems. Namely, we discuss how improvements in the indexing mechanisms of an underlying logic programming system benefit ILP performance. Furthermore, we propose novel data structures to reduce memory requirements and we suggest a new lazy evaluation technique to search the hypothesis space more efficiently. These proposals have been implemented in the April ILP system and evaluated using several well-known data sets. The results observed show significant improvements in running time without compromising the accuracy of the models generated. Indeed, the combined techniques achieve speedups of several orders of magnitude in some data sets. Moreover, memory requirements are reduced in nearly half of the data sets. Copyright (C) 2008 John Wiley & Sons, Ltd.
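
The lazy evaluation idea can be sketched as early stopping during coverage testing. The Python below is a hypothetical illustration, not the April system's code: when scoring a candidate hypothesis against the negative examples, evaluation stops as soon as the noise threshold is exceeded, so the remaining examples are never tested.

    def lazily_prune(hypothesis_covers, negatives, max_negatives_allowed):
        """Return False (prune) as soon as too many negatives are covered,
        without testing the remaining examples."""
        covered = 0
        for ex in negatives:
            if hypothesis_covers(ex):
                covered += 1
                if covered > max_negatives_allowed:
                    return False         # hypothesis rejected early
        return True                      # hypothesis survived the noise threshold

    # Toy usage: the hypothesis "value is even" covers too many negatives.
    negatives = [1, 3, 4, 5, 6, 8, 10, 12]
    print(lazily_prune(lambda x: x % 2 == 0, negatives, max_negatives_allowed=2))
    # False -- rejected after covering the third negative example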
