
Publications by LIAAD

2022

Geovisualisation Tools for Reporting and Monitoring Transthyretin-Associated Familial Amyloid Polyneuropathy Disease

Authors
Lôpo, RX; Jorge, AM; Pedroto, M;

Publication
Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part I

Abstract

2022

Proceedings of the 5th Workshop on Online Recommender Systems and User Modeling co-located with the 16th ACM Conference on Recommender Systems, ORSUM@RecSys 2022, Seattle, WA, USA, September 23rd, 2022

Authors
Vinagre, J; Ghossein, MA; Jorge, AM; Bifet, A; Peska, L;

Publication
ORSUM@RecSys

Abstract

2022

Report on the 5th International Workshop on Narrative Extraction from Texts (Text2Story 2022) at ECIR 2022

Authors
Campos, R; Jorge, AM; Jatowt, A; Bhatia, S; Litvak, M; Cordeiro, JP; Rocha, C; Sousa, H; Mansouri, B;

Publication
SIGIR Forum

Abstract

2022

Probing Commonsense Knowledge in Pre-trained Language Models with Sense-level Precision and Expanded Vocabulary

Authors
Loureiro, D; Jorge, AM;

Publication
CoRR

Abstract

2022

ORSUM 2022 - 5th Workshop on Online Recommender Systems and User Modeling

Authors
Vinagre, J; Jorge, AM; Ghossein, MA; Bifet, A;

Publication
RecSys '22: Sixteenth ACM Conference on Recommender Systems, Seattle, WA, USA, September 18 - 23, 2022

Abstract
Modern online systems for user modeling and recommendation need to continuously deal with complex data streams generated by users at very fast rates. This can be overwhelming for systems and algorithms designed to train recommendation models in batches, given the continuous and potentially fast change of content, context and user preferences or intents. Therefore, it is important to investigate methods able to transparently and continuously adapt to the inherent dynamics of user interactions, preferably for long periods of time. Online models that continuously learn from such flows of data are gaining attention in the recommender systems community, given their natural ability to deal with data generated in dynamic, complex environments. User modeling and personalization can particularly benefit from algorithms capable of maintaining models incrementally and online. The objective of this workshop is to foster contributions and bring together a growing community of researchers and practitioners interested in online, adaptive approaches to user modeling, recommendation and personalization, and their implications regarding multiple dimensions, such as evaluation, reproducibility, privacy, fairness and transparency.
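To make the incremental setting the abstract describes concrete, here is a minimal sketch of an online matrix-factorization recommender that performs one SGD step per observed interaction from a stream. It illustrates the general idea of maintaining a model incrementally, not any specific algorithm from the workshop proceedings; all class and parameter names are illustrative.

```python
import numpy as np

class IncrementalMF:
    """Toy online matrix factorization: one SGD step per observed
    (user, item) interaction, in the spirit of stream-based recommenders."""
    def __init__(self, n_users, n_items, dim=16, lr=0.05, reg=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.P = rng.normal(scale=0.1, size=(n_users, dim))  # user factors
        self.Q = rng.normal(scale=0.1, size=(n_items, dim))  # item factors
        self.lr, self.reg = lr, reg

    def update(self, u, i, r=1.0):
        # Single online step toward the observed feedback r (implicit: 1.0),
        # with L2 regularization; no batch retraining is needed.
        err = r - self.P[u] @ self.Q[i]
        pu, qi = self.P[u].copy(), self.Q[i]
        self.P[u] += self.lr * (err * qi - self.reg * pu)
        self.Q[i] += self.lr * (err * pu - self.reg * qi)

    def recommend(self, u, k=5):
        # Rank all items by the current (continuously updated) model.
        return np.argsort(-(self.Q @ self.P[u]))[:k]

model = IncrementalMF(n_users=100, n_items=500)
for u, i in [(3, 42), (3, 17), (7, 42)]:  # a toy interaction stream
    model.update(u, i)
print(model.recommend(3))
```

Because each interaction triggers only a constant-time factor update, the model adapts as preferences drift, which is the property batch-trained recommenders lack.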

2022

LMMS reloaded: Transformer-based sense embeddings for disambiguation and beyond

Authors
Loureiro, D; Mário Jorge, A; Camacho Collados, J;

Publication
ARTIFICIAL INTELLIGENCE

Abstract
Distributional semantics based on neural approaches is a cornerstone of Natural Language Processing, with surprising connections to human meaning representation as well. Recent Transformer-based Language Models have proven capable of producing contextual word representations that reliably convey sense-specific information, simply as a product of self-supervision. Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task. Still, there remains much to understand about how to use these Neural Language Models (NLMs) to produce sense embeddings that better harness each NLM's meaning representation abilities. In this work, we introduce a more principled approach to leveraging information from all layers of NLMs, informed by a probing analysis of 14 NLM variants. We also emphasize the versatility of these sense embeddings in contrast to task-specific models, applying them to several sense-related tasks besides WSD, while demonstrating improved performance with our proposed approach over prior work focused on sense embeddings. Finally, we discuss unexpected findings regarding layer and model performance variations, and potential applications for downstream tasks.
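The distance-based WSD solution mentioned in the abstract reduces, at inference time, to a nearest-neighbor lookup: the contextual vector of a target word is compared against precomputed sense embeddings, and the closest sense wins. The sketch below shows that step only, with random placeholder vectors; the sense keys, matrix contents, and layer-pooling strategy used by LMMS come from the paper and are not reproduced here.

```python
import numpy as np

def nearest_sense(context_vec, sense_matrix, sense_keys):
    # Normalize the candidate sense embeddings and the query vector,
    # then pick the sense with the highest cosine similarity.
    sm = sense_matrix / np.linalg.norm(sense_matrix, axis=1, keepdims=True)
    cv = context_vec / np.linalg.norm(context_vec)
    scores = sm @ cv
    return sense_keys[int(np.argmax(scores))]

# Illustrative placeholders: in LMMS, sense_matrix would hold sense
# embeddings built from an NLM, and context_vec a layer-pooled
# contextual representation of the target word.
rng = np.random.default_rng(0)
sense_keys = ["bank%1:14:00::", "bank%1:17:01::"]  # hypothetical WordNet-style keys
sense_matrix = rng.normal(size=(2, 768))
context_vec = rng.normal(size=768)
print(nearest_sense(context_vec, sense_matrix, sense_keys))
```

Because disambiguation is just a similarity lookup, the same sense embeddings can be reused for other sense-related tasks, which is the versatility the abstract contrasts with task-specific models.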
