
Publications by José Paulo Leal

2014

A Survey of E-learning Content Aggregation Standards

Authors
Queiros, R; Leal, JP;

Publication
NEW HORIZONS IN WEB BASED LEARNING, ICWL 2014

Abstract
As e-learning gradually evolved, many specialized and disparate systems appeared to fulfil the needs of teachers and students, such as repositories of learning objects, authoring tools, intelligent tutors and automatic evaluators. This heterogeneity raises interoperability issues, giving the standardization of content an important role in e-learning. This article presents a survey of current e-learning content aggregation standards, focusing on their internal organization and packaging. This study is part of an effort to choose the most suitable specifications and standards for an e-learning framework called Ensemble, defined as a conceptual tool to organize a network of e-learning systems and services for domains with complex evaluation.

2014

Multiscale Parameter Tuning of a Semantic Relatedness Algorithm

Authors
Leal, JP; Costa, T;

Publication
3rd Symposium on Languages, Applications and Technologies, SLATE 2014, June 19-20, 2014 - Bragança, Portugal

Abstract
The research presented in this paper builds on previous work that led to the definition of a family of semantic relatedness algorithms that compute a proximity given a pair of concept labels as input. The algorithms depend on a semantic graph, provided as RDF data, and on a particular set of weights assigned to the properties of RDF statements (the types of arcs in the RDF graph). The current research objective is to automatically tune the weights for a given graph in order to increase the proximity quality. The quality of a semantic relatedness method is usually measured against a benchmark data set: the results produced by the method are compared with those of the benchmark using Spearman's rank coefficient. This methodology works the other way round and uses this coefficient to tune the proximity weights. The tuning process is controlled by a genetic algorithm using Spearman's rank coefficient as the fitness function. The genetic algorithm has its own set of parameters, which also need to be tuned. Bootstrapping, a statistical method for generating samples, is used in this methodology to enable a large number of repetitions of the genetic algorithm, exploring the results of alternative parameter settings. This approach raises several technical challenges due to its computational complexity, and this paper provides details on the techniques used to speed up the process. The proposed approach was validated with WordNet 2.0 and the WordSim-353 data set. Several ranges of parameter values were tested, and the results obtained are better than those of state-of-the-art methods for computing semantic relatedness with WordNet 2.0, with the advantage of not requiring any domain knowledge of the ontological graph. © José Paulo Leal and Teresa Costa.
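
The weight-tuning loop described in this abstract can be sketched in a few lines. The following Python sketch is a simplified, hypothetical rendition, not the paper's implementation: `proximity` is a placeholder standing in for the path-based relatedness algorithm, the property names and genetic operators (truncation selection plus mutation, no crossover) are invented for illustration, and the bootstrapping of GA parameters is omitted; only the use of Spearman's rank coefficient as the fitness function comes directly from the abstract.

```python
# Sketch: tuning RDF property weights with a genetic algorithm whose
# fitness is Spearman's rank coefficient against a human-rated benchmark.
import random
from scipy.stats import spearmanr

PROPERTIES = ["hypernym", "hyponym", "meronym"]  # invented arc types


def proximity(pair, weights):
    # Placeholder for the paper's path-based proximity: any function that
    # scores a word pair given per-property weights would slot in here.
    base = (hash(pair) % 100) / 100.0
    return base * sum(weights.values())


def fitness(weights, benchmark):
    # benchmark: list of ((word1, word2), human_rating) tuples
    scores = [proximity(pair, weights) for pair, _ in benchmark]
    ratings = [rating for _, rating in benchmark]
    return spearmanr(scores, ratings).correlation


def tune_weights(benchmark, pop_size=20, generations=50, mut_rate=0.2):
    population = [{p: random.random() for p in PROPERTIES}
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, benchmark), reverse=True)
        survivors = population[: pop_size // 2]   # truncation selection
        children = []
        for parent in survivors:
            child = dict(parent)
            for p in PROPERTIES:
                if random.random() < mut_rate:    # mutate one weight
                    child[p] = random.random()
            children.append(child)
        population = survivors + children
    return max(population, key=lambda w: fitness(w, benchmark))
```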

2013

A Survey on eLearning Content Standardization

Authors
Queiros, R; Leal, JP;

Publication
Communications in Computer and Information Science

Abstract
eLearning has evolved in a gradual and consistent way. Along with this evolution, several specialized and disparate systems appeared to fulfill the needs of teachers and students, such as repositories of learning objects, intelligent tutors, or automatic evaluators. This heterogeneity poses interoperability issues that need to be addressed, giving the standardization of content a leading role in the eLearning realm. This article presents a survey of current eLearning content standards. It gathers information on the most emergent standards and categorizes them according to three distinct facets: metadata, content packaging and educational design. © Springer-Verlag Berlin Heidelberg 2013.

2013

Using proximity to compute semantic relatedness in RDF graphs

Authors
Leal, JP;

Publication
COMPUTER SCIENCE AND INFORMATION SYSTEMS

Abstract
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms in RDF graphs based on the notion of proximity. It proposes a formal definition of proximity in terms of the set of paths connecting two concept nodes, and an algorithm for finding this set and computing proximity within a given error margin. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia, a community effort to extract structured data from Wikipedia. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site devoted to alternative music, and the results of that experiment are also reported.
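
As a rough illustration of proximity over connecting paths (an invented toy, not the paper's exact definition or Shakti's code), the sketch below enumerates the paths of bounded length between two nodes of a small labeled graph and aggregates per-property arc weights with a length discount. The graph, the weights and the discount factor are all made up for the example.

```python
# Toy path-based proximity on a small labeled (RDF-like) graph.
GRAPH = {  # node -> list of (neighbor, property) arcs; invented data
    "rock":  [("music", "genreOf"), ("indie", "related")],
    "indie": [("music", "genreOf"), ("rock", "related")],
    "music": [("rock", "hasGenre"), ("indie", "hasGenre")],
}
WEIGHTS = {"genreOf": 0.8, "hasGenre": 0.8, "related": 0.5}


def paths(src, dst, max_len=3, labels=()):
    """Yield the label sequences of all paths from src to dst of at most
    max_len arcs. Nodes may repeat; the length bound guarantees termination."""
    if src == dst and labels:
        yield labels
        return
    if len(labels) >= max_len:
        return
    for neighbor, prop in GRAPH.get(src, []):
        yield from paths(neighbor, dst, max_len, labels + (prop,))


def proximity(a, b, decay=0.5, max_len=3):
    """Sum, over all connecting paths, the product of arc weights,
    discounted geometrically by path length."""
    total = 0.0
    for labels in paths(a, b, max_len):
        weight = 1.0
        for prop in labels:
            weight *= WEIGHTS[prop]
        total += weight * decay ** (len(labels) - 1)
    return total


print(proximity("rock", "indie"))  # higher value = more closely related
```

In this toy, capping the path length is what makes a bounded error margin possible: with a geometric discount, the longer paths left out contribute at most a computable tail.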

2016

Comparing and Benchmarking Semantic Measures Using SMComp

Authors
Costa, T; Leal, JP;

Publication
5th Symposium on Languages, Applications and Technologies, SLATE 2016, June 20-21, 2016, Maribor, Slovenia

Abstract
The goal of semantic measures is to compare pairs of concepts, words, sentences or named entities. Their categorization depends on what they measure: a measure that considers only taxonomy relationships is a similarity measure; one that considers all types of relationships is a relatedness measure. The evaluation of these measures usually relies on semantic gold standards, datasets of word pairs with human-assigned ratings, used to assess how well a semantic measure performs. A few frameworks provide tools to compute and analyze several well-known measures. This paper presents a novel tool, SMComp, a testbed designed for path-based semantic measures. In its current state, it is a domain-specific tool using three different versions of WordNet. SMComp has two views: one to compute semantic measures of a pair of words and another to assess a semantic measure using a dataset. The first view offers several measures described in the literature, as well as the possibility of creating a new measure by introducing Java code snippets in the GUI. The other view offers a large set of semantic benchmarks to use in the assessment process, and also the possibility of uploading a custom dataset. © Teresa Costa and José Paulo Leal; licensed under Creative Commons License CC-BY.
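
SMComp itself is a GUI testbed (with Java snippets for user-defined measures), but the assessment workflow it embodies, scoring benchmark pairs with a path-based measure and correlating the scores with human ratings, can be approximated in a few lines. In the sketch below the mini benchmark is invented, and the path-based measure comes from NLTK's WordNet interface rather than from SMComp.

```python
# Approximation of an SMComp-style assessment: score word pairs with a
# path-based WordNet measure, then compute Spearman's rank correlation
# against human ratings. Requires nltk and scipy, plus
# nltk.download("wordnet") on first use.
from nltk.corpus import wordnet as wn
from scipy.stats import spearmanr

BENCHMARK = [  # invented mini gold standard: (word1, word2, human rating)
    ("car", "automobile", 9.5),
    ("car", "bicycle", 5.0),
    ("car", "banana", 1.0),
]


def path_score(w1, w2):
    """Best path similarity over all synset pairs (0.0 when no path)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)


measured = [path_score(w1, w2) for w1, w2, _ in BENCHMARK]
ratings = [r for _, _, r in BENCHMARK]
print("Spearman:", spearmanr(measured, ratings).correlation)
```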

2015

Reducing Large Semantic Graphs to Improve Semantic Relatedness

Authors
Costa, T; Leal, JP;

Publication
LANGUAGES, APPLICATIONS AND TECHNOLOGIES, SLATE 2015

Abstract
In previous research, the authors developed a family of semantic measures adaptable to any semantic graph, automatically tuned with a set of parameters. The research presented in this paper extends this approach by also tuning the graph. This graph reduction procedure starts with a disconnected graph and incrementally adds edge types until the quality of the semantic measure cannot be further improved. The validation used the three most recent versions of WordNet and, in most cases, this approach improves the quality of the semantic measure.
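
The reduction procedure reads like a greedy forward selection over edge types. Below is a minimal sketch under that reading, where `measure_quality` is a hypothetical callback that tunes and benchmarks the semantic measure on the graph restricted to the selected edge types; nothing here is the paper's code.

```python
# Greedy sketch of the graph-reduction idea: start from a disconnected
# graph (no edge types) and repeatedly add the edge type that most
# improves the semantic measure, stopping when no addition helps.
def reduce_graph(all_edge_types, measure_quality):
    # all_edge_types: set of edge-type names
    # measure_quality: set of edge types -> quality score (e.g. Spearman)
    selected = set()                     # disconnected graph: no edge types
    best = measure_quality(selected)
    while selected != all_edge_types:
        quality, edge_type = max(
            (measure_quality(selected | {t}), t)
            for t in all_edge_types - selected
        )
        if quality <= best:              # no remaining edge type helps: stop
            break
        selected.add(edge_type)
        best = quality
    return selected, best
```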
