
Publications by Ricardo Campos

2019

Document in Context of its Time (DICT): Providing Temporal Context to Support Analysis of Past Documents

Authors
Jatowt, A; Campos, R; Bhowmick, SS; Doucet, A;

Publication
PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19)

Abstract
Old documents tend to be difficult to analyze and understand, not only for average users but often for professionals as well. This is due to context shift, vocabulary evolution and, in general, the lack of precise knowledge about past writing styles. We propose the concept of positioning a document in the context of its time, and develop an interactive system to support this objective. Our system helps users to know whether the vocabulary used by an author in the past was frequent at the time of text creation, whether the author used anachronisms or neologisms, and so on. It also enables detecting terms in the text that underwent considerable semantic change and provides more information on the nature of such change. Overall, the proposed tool offers additional knowledge on the writing style and vocabulary choice in documents by drawing on data collected at the time of their creation or at another user-specified time.
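The core idea the abstract describes — checking whether a document's vocabulary was frequent at a chosen reference time — can be sketched as a lookup against a period-specific frequency table. This is a minimal illustration, not the DICT system itself; the function name, threshold, and the 1850s frequency table below are all invented toy data.

```python
# Toy sketch of DICT's core idea: flag words in a document that were rare
# (candidate neologisms/anachronisms) at a chosen reference time.
# The frequency table is invented illustrative data, not a real corpus.

def flag_anachronistic_terms(document_words, era_frequencies, threshold=1e-6):
    """Return words whose relative frequency at the reference time falls
    below `threshold`, i.e. candidates for anachronism/neologism review."""
    flagged = []
    for word in document_words:
        freq = era_frequencies.get(word.lower(), 0.0)
        if freq < threshold:
            flagged.append(word)
    return flagged

# Hypothetical 1850s frequency table (fraction of corpus tokens).
freqs_1850 = {"carriage": 5e-5, "telegraph": 2e-5, "whence": 8e-5}

doc = ["carriage", "telegraph", "television", "whence"]
print(flag_anachronistic_terms(doc, freqs_1850))  # ['television']
```

A real system would additionally need diachronic word embeddings to detect the semantic change the abstract mentions, since raw frequency alone cannot distinguish a rare word from one whose meaning drifted.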

2020

YAKE! Keyword extraction from single documents using multiple local features

Authors
Campos, R; Mangaravite, V; Pasquali, A; Jorge, A; Nunes, C; Jatowt, A;

Publication
INFORMATION SCIENCES

Abstract
As the amount of generated information grows, reading and summarizing texts of large collections turns into a challenging task. Many documents do not come with descriptive terms, thus requiring humans to generate keywords on-the-fly. The need to automate this kind of task demands the development of keyword extraction systems with the ability to automatically identify keywords within the text. One approach is to resort to machine-learning algorithms. These, however, depend on large annotated text corpora, which are not always available. An alternative solution is to consider an unsupervised approach. In this article, we describe YAKE!, a lightweight unsupervised automatic keyword extraction method which rests on statistical text features extracted from single documents to select the most relevant keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of YAKE!, we compare it against ten state-of-the-art unsupervised approaches and one supervised method. Experiments carried out on twenty datasets show that YAKE! significantly outperforms other unsupervised methods on texts of different sizes, languages, and domains.
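As a rough illustration of the abstract's premise — scoring candidate keywords from a single document using only local statistical features, with no training data or external corpora — here is a toy scorer. This is NOT the actual YAKE! algorithm (which combines several features such as casing, position, frequency, and term dispersion); the feature weighting below is invented for illustration only.

```python
from collections import Counter

def toy_keyword_scores(text, top=5):
    """Score single-word candidates using only local statistics of one
    document: term frequency and earliest position (earlier = better).
    Simplified illustration, NOT the real YAKE! scoring function."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    words = [w for w in words if len(w) > 3]  # crude short-word filter
    counts = Counter(words)
    first_pos = {}
    for i, w in enumerate(words):
        first_pos.setdefault(w, i)
    # Lower score = more relevant (as in YAKE!): penalize rarity and late position.
    scores = {w: (1 + first_pos[w]) / counts[w] for w in counts}
    return sorted(scores, key=scores.get)[:top]

text = ("Keyword extraction identifies keyword candidates in a document. "
        "Unsupervised keyword extraction needs no training corpus.")
print(toy_keyword_scores(text, top=3))  # 'keyword' scores best (frequent, early)
```

The real method is available as the `yake` package on PyPI for readers who want the actual implementation.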

2020

The 3rd International Workshop on Narrative Extraction from Texts: Text2Story 2020

Authors
Campos, R; Jorge, A; Jatowt, A; Bhatia, S;

Publication
Advances in Information Retrieval - 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14-17, 2020, Proceedings, Part II

Abstract
The Third International Workshop on Narrative Extraction from Texts (Text2Story'20) [text2story20.inesctec.pt], held in conjunction with the 42nd European Conference on Information Retrieval (ECIR 2020), gives researchers in IR, NLP and other fields the opportunity to share their recent advances in the extraction and formal representation of narratives. The workshop also provides a forum to consolidate multi-disciplinary efforts and foster discussion around the narrative extraction task, a hot topic in recent years. © Springer Nature Switzerland AG 2020.

2019

Proceedings of Text2Story - 2nd Workshop on Narrative Extraction From Texts, co-located with the 41st European Conference on Information Retrieval, Text2Story@ECIR 2019, Cologne, Germany, April 14th, 2019

Authors
Jorge, AM; Campos, R; Jatowt, A; Bhatia, S;

Publication
Text2Story@ECIR

Abstract

2020

Proceedings of Text2Story - Third Workshop on Narrative Extraction From Texts co-located with 42nd European Conference on Information Retrieval, Text2Story@ECIR 2020, Lisbon, Portugal, April 14th, 2020 [online only]

Authors
Campos, R; Jorge, AM; Jatowt, A; Bhatia, S;

Publication
Text2Story@ECIR

Abstract

2020

Event-Related Query Classification with Deep Neural Networks

Authors
Gandhi, S; Mansouri, B; Campos, R; Jatowt, A;

Publication
WWW'20: COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2020

Abstract
Users tend to search over the Internet to get the most updated news when an event occurs. Search engines should then be capable of effectively retrieving relevant documents for event-related queries. As previous studies have shown, different retrieval models are needed for different types of events. Therefore, the first step for improving effectiveness is identifying event-related queries and determining their types. In this paper, we propose a novel model based on deep neural networks to classify event-related queries into four categories: periodic, aperiodic, one-time-only, and non-event. The proposed model combines recurrent neural networks (by feeding two LSTM layers with query frequencies) and visual recognition models (by transforming time-series data from a 1D signal to a 2D image, later passed to a CNN model) for effective query type estimation. Worth noting is that our method uses only the time-series data of query frequencies, without the need to resort to any external sources such as contextual data, which makes it language- and domain-independent with regard to the query issued. For evaluation, we build upon previous datasets on event-related queries to create a new dataset that fits the purpose of our experiments. The obtained results show that our proposed model can achieve an F1-score of 0.87.
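The 1D-signal-to-2D-image step the abstract mentions can be sketched with a recurrence-plot-style transform: a matrix of pairwise distances between time steps, which a CNN can then consume like an image. The abstract does not specify which transform the paper uses, so the choice below is purely illustrative, and the weekly query counts are hypothetical.

```python
def recurrence_image(series):
    """Turn a 1D query-frequency series into a 2D matrix of pairwise
    absolute differences (a recurrence-plot-style image a CNN could
    consume). Illustrative only; the paper's exact transform is not
    specified in the abstract."""
    return [[abs(a - b) for b in series] for a in series]

weekly_freqs = [3, 8, 2, 9, 3]   # hypothetical query counts per week
img = recurrence_image(weekly_freqs)
for row in img:
    print(row)
```

A periodic query would produce a visible diagonal banding pattern in such a matrix, while a one-time-only spike yields a single bright cross — which is the kind of spatial structure a CNN is well suited to pick up.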
