2020
Authors
Silva, JB; Santos, A; Leal, JP;
Publication
9th Symposium on Languages, Applications and Technologies, SLATE 2020, July 13-14, 2020, School of Technology, Polytechnic Institute of Cávado and Ave, Portugal (Virtual Conference).
Abstract
The goal of the Semantic Web is to allow software agents and AIs to extract information from the Internet as easily as humans do. This semantic web is a network of connected graphs, in which relations between concepts and entities form a structure that machines can navigate with ease. At the moment, only a few tools enable humans to navigate this new layer of the Internet, and most of those that exist are highly specialized and require substantial prior knowledge of the underlying technologies. In this article we report on the development of DAOLOT, a search engine that allows users with no previous knowledge of the semantic web to take full advantage of its information network. This paper presents its design, the algorithm behind it, and the results of the validation testing conducted with users. These results show that DAOLOT is useful and intuitive, even for users without any previous knowledge of the field, and that it instantly provides curated information from multiple sources on any topic.
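To make the "network of connected graphs" idea concrete, the sketch below shows how a machine can navigate RDF relations simply by iterating over triples. It illustrates linked-data traversal in general, not DAOLOT's algorithm; the tiny graph, the EX namespace and the resource names are invented for the example, and the rdflib library is assumed to be available.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace, for illustration only

g = Graph()
g.add((EX.Lisbon, RDF.type, EX.City))
g.add((EX.Lisbon, EX.capitalOf, EX.Portugal))
g.add((EX.Portugal, RDFS.label, Literal("Portugal")))

# "Navigating" the graph is just following labelled edges out of a node.
for predicate, obj in g.predicate_objects(EX.Lisbon):
    print(f"Lisbon --{predicate}--> {obj}")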
2021
Authors
dos Santos, AF; Leal, JP;
Publication
10th Symposium on Languages, Applications and Technologies, SLATE 2021, July 1-2, 2021, Vila do Conde/Póvoa de Varzim, Portugal.
Abstract
Consuming Semantic Web data presents several challenges, from the number of datasets it is composed of, to the (very) large size of some of those datasets, and the uncertain availability of querying endpoints. According to its core principles, linked data can be accessed simply by dereferencing the IRIs of RDF resources. This is a lightweight alternative, both for clients and servers, when compared to dataset dumps or SPARQL endpoints. The linked data interface does not support complex querying, but using it recursively may suffice to gather information about RDF resources, or to extract a relevant sub-graph which can then be processed and queried using other methods. We present Derzis, an open-source semantic web crawler capable of traversing the linked data cloud starting from a set of seed resources. Derzis maintains information about the paths followed while crawling, which makes it possible to define property-path-based restrictions on the crawling frontier.
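The following sketch illustrates the recursive dereferencing strategy described in the abstract: resources are fetched by content negotiation, parsed, and newly discovered object IRIs join the crawling frontier until a depth limit is reached. It is not Derzis's implementation (in particular, it does not record the property paths followed); rdflib is assumed, and the seed IRI in the usage comment is only an example.

from collections import deque
from rdflib import Graph, URIRef

def crawl(seeds, max_depth=2):
    merged = Graph()
    seen = set(seeds)
    frontier = deque((iri, 0) for iri in seeds)
    while frontier:
        iri, depth = frontier.popleft()
        g = Graph()
        try:
            g.parse(iri)  # rdflib dereferences the IRI with content negotiation
        except Exception:
            continue      # unavailable server or non-RDF answer: skip this resource
        merged += g
        if depth >= max_depth:
            continue
        for _, _, obj in g:
            if isinstance(obj, URIRef) and str(obj) not in seen:
                seen.add(str(obj))
                frontier.append((str(obj), depth + 1))
    return merged

# merged = crawl(["http://dbpedia.org/resource/Porto"], max_depth=1)  # example seed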
2013
Authors
Santos, A; Nogueira, R; Lourenço, A;
Publication
ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal
Abstract
2023
Authors
dos Santos, AF; Leal, JP;
Publication
GRAPH-BASED REPRESENTATION AND REASONING, ICCS 2023
Abstract
The size of massive knowledge graphs (KGs) and the lack of prior information regarding the schemas, ontologies and vocabularies they use frequently make them hard to understand and visualize. Graph summarization techniques can help by abstracting details of the original graph to produce a reduced summary that can be explored more easily. Identifiers often carry latent information which can be used to classify the entities they represent. In particular, IRI namespaces can be used to classify RDF resources. Namespaces, used in some RDF serialization formats as a shortening mechanism for resource IRIs, have no role in the semantics of RDF. Nevertheless, there is often a hidden meaning behind the decision to group resources under a common prefix and assign an alias to it. We improved on previous work on a namespace-based approach to KG summarization that classifies resources using their namespaces. Producing the summary graph is fast, light on computing resources, and requires no previous domain knowledge. The summary graph can be used to analyze the namespace interdependencies of the original graph. We also present chilon, a tool for calculating namespace-based KG summaries. Namespaces are gathered from explicit declarations in the graph serialization, community contributions, or resource IRI prefix analysis. We applied chilon to publicly available KGs, used it to generate interactive visualizations of the summaries, and discuss the results obtained.
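As a rough illustration of namespace-based summarization (not chilon itself), the sketch below classifies each resource by the prefix of its IRI up to the last '#' or '/' and counts how many triples link one namespace to another; chilon additionally uses prefixes declared in the serialization and community prefix registries, which this sketch ignores. rdflib is assumed, and the file name in the usage comment is only an example.

from collections import Counter
from rdflib import Graph, URIRef

def namespace_of(iri: str) -> str:
    # Naive prefix extraction: everything up to (and including) the last '#' or '/'.
    for sep in ("#", "/"):
        if sep in iri:
            return iri.rsplit(sep, 1)[0] + sep
    return iri

def summarize(graph: Graph) -> Counter:
    # Count triples whose subject and object are IRIs, grouped by namespace pair.
    summary = Counter()
    for s, _, o in graph:
        if isinstance(s, URIRef) and isinstance(o, URIRef):
            summary[(namespace_of(str(s)), namespace_of(str(o)))] += 1
    return summary

# g = Graph(); g.parse("dataset.ttl")
# for (src_ns, dst_ns), n in summarize(g).most_common(10):
#     print(n, src_ns, "->", dst_ns)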
2010
Authors
Almeida, JJ; Santos, A; Simoes, A;
Publication
LREC 2010 - SEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
Abstract
Languages are born, evolve and, eventually, die. During this evolution their spelling rules (and sometimes the syntactic and semantic ones) change, putting old documents out of use. In Portugal, a pair of political agreements with Brazil forced significant changes to the way the Portuguese language is written. In this article we detail these two Orthographic Agreements (one from the thirties and the other more recent, from the nineties), and the challenges involved in automatically migrating the spelling of old documents to the current one. We present Bigorna, a toolkit for classifying language variants, comparing them, and converting texts between different language versions. These tools are explained together with examples of migration issues. As Bigorna relies on a set of conversion rules, we also discuss how to infer conversion rules from a set of documents (texts of different ages). The document concludes with a brief evaluation of the conversion and classification tools' results and of their relevance in the current Portuguese language scenario.
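Below is a minimal sketch of the kind of rule-based conversion and variant classification described above; it is not the Bigorna toolkit. The three word-level rules are hand-picked illustrations of changes introduced by the 1990 Orthographic Agreement, whereas the real rule set is much larger and partly inferred from dated documents.

import re

RULES = {              # pre-1990 spelling -> current spelling (illustrative only)
    "acção": "ação",
    "óptimo": "ótimo",
    "baptismo": "batismo",
}

def convert(text: str) -> str:
    # Rewrite old-orthography words to the current spelling.
    for old, new in RULES.items():
        text = re.sub(rf"\b{old}\b", new, text)
    return text

def classify(text: str) -> str:
    # Guess the variant by counting which side of the rules the text matches.
    old_hits = sum(len(re.findall(rf"\b{o}\b", text)) for o in RULES)
    new_hits = sum(len(re.findall(rf"\b{n}\b", text)) for n in RULES.values())
    return "pre-1990" if old_hits > new_hits else "post-1990 (or unknown)"

print(convert("A acção foi um óptimo baptismo."))  # -> "A ação foi um ótimo batismo."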
2012
Authors
Santos, A; Almeida, JJ; Carvalho, N;
Publication
LREC 2012 - EIGHTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
Abstract
Text alignment is one of the main processes for obtaining parallel corpora. When aligning two versions of a book, results are often affected by unpaired sections, i.e. sections which only exist in one of the versions of the book. We developed Text::Perfide::BookSync, a Perl module which performs book synchronization (structural alignment based on section delimitation), provided the books have been previously annotated by Text::Perfide::BookCleaner. We discuss the need for such a tool and several implementation decisions. The main functions are described, and examples of input and output are presented. Text::Perfide::PartialAlign, an extension of the partialAlign.py tool bundled with hunalign, proposes an alternative method for splitting bitexts.
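The sketch below illustrates section-level synchronization in the spirit of Text::Perfide::BookSync (it is not the Perl module): both versions are assumed to already carry section identifiers from a prior cleaning step, matching identifiers are paired, and sections found in only one version are reported as unpaired rather than force-aligned. The section ids and texts in the example are invented.

def synchronize(sections_a: dict, sections_b: dict):
    # Both dicts map a section identifier to that section's text.
    paired = [(sid, sections_a[sid], sections_b[sid])
              for sid in sections_a if sid in sections_b]
    unpaired_a = [sid for sid in sections_a if sid not in sections_b]
    unpaired_b = [sid for sid in sections_b if sid not in sections_a]
    return paired, unpaired_a, unpaired_b

# Invented example: "ch2" exists only in version A, "ch3" only in version B.
a = {"ch1": "Era uma vez...", "ch2": "Texto apenas na versão A."}
b = {"ch1": "Once upon a time...", "ch3": "Text only in version B."}
pairs, only_a, only_b = synchronize(a, b)
print(only_a, only_b)  # ['ch2'] ['ch3']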