2022
Authors
Lôpo, RX; Jorge, AM; Pedroto, M;
Publication
Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part I
Abstract
2022
Authors
Vinagre, J; Ghossein, MA; Jorge, AM; Bifet, A; Peska, L;
Publication
ORSUM@RecSys
Abstract
2022
Authors
Campos, R; Jorge, AM; Jatowt, A; Bhatia, S; Litvak, M; Cordeiro, JP; Rocha, C; Sousa, H; Mansouri, B;
Publication
SIGIR Forum
Abstract
2022
Authors
Loureiro, D; Jorge, AM;
Publication
CoRR
Abstract
2022
Authors
Vinagre, J; Jorge, AM; Ghossein, MA; Bifet, A;
Publication
RecSys '22: Sixteenth ACM Conference on Recommender Systems, Seattle, WA, USA, September 18 - 23, 2022
Abstract
Modern online systems for user modeling and recommendation need to continuously deal with complex data streams generated by users at very fast rates. This can be overwhelming for systems and algorithms designed to train recommendation models in batches, given the continuous and potentially fast change of content, context and user preferences or intents. Therefore, it is important to investigate methods able to transparently and continuously adapt to the inherent dynamics of user interactions, preferably for long periods of time. Online models that continuously learn from such flows of data are gaining attention in the recommender systems community, given their natural ability to deal with data generated in dynamic, complex environments. User modeling and personalization can particularly benefit from algorithms capable of maintaining models incrementally and online. The objective of this workshop is to foster contributions and bring together a growing community of researchers and practitioners interested in online, adaptive approaches to user modeling, recommendation and personalization, and their implications regarding multiple dimensions, such as evaluation, reproducibility, privacy, fairness and transparency.
2022
Authors
Loureiro, D; Mário Jorge, A; Camacho Collados, J;
Publication
ARTIFICIAL INTELLIGENCE
Abstract
Distributional semantics based on neural approaches is a cornerstone of Natural Language Processing, with surprising connections to human meaning representation as well. Recent Transformer-based Language Models have proven capable of producing contextual word representations that reliably convey sense-specific information, simply as a product of self-supervision. Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task. Still, there remains much to understand about how to use these Neural Language Models (NLMs) to produce sense embeddings that can better harness each NLM's meaning representation abilities. In this work we introduce a more principled approach to leverage information from all layers of NLMs, informed by a probing analysis on 14 NLM variants. We also emphasize the versatility of these sense embeddings in contrast to task-specific models, applying them on several sense-related tasks, besides WSD, while demonstrating improved performance using our proposed approach over prior work focused on sense embeddings. Finally, we discuss unexpected findings regarding layer and model performance variations, and potential applications for downstream tasks.
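The distance-based WSD approach the abstract refers to can be sketched as nearest-neighbor search over a precomputed sense-embedding inventory: a word's contextual embedding is compared against candidate sense vectors and the closest sense wins. The sense labels, dimensionality, and random vectors below are hypothetical placeholders, not the paper's actual inventory or model:

```python
import numpy as np

# Hypothetical sense inventory: sense label -> precomputed sense embedding.
# In the approach described, these would be derived from contextual
# representations produced by a Neural Language Model.
rng = np.random.default_rng(0)
sense_embeddings = {
    "bank%finance": rng.normal(size=8),
    "bank%river": rng.normal(size=8),
}

def disambiguate(context_embedding, sense_embeddings):
    """Return the sense whose embedding is nearest by cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sense_embeddings,
               key=lambda s: cos(context_embedding, sense_embeddings[s]))

# A contextual embedding lying close to the "finance" sense vector.
query = sense_embeddings["bank%finance"] + 0.01 * rng.normal(size=8)
print(disambiguate(query, sense_embeddings))  # -> bank%finance
```

The choice of cosine similarity is conventional for embedding spaces; the paper's contribution concerns how the sense vectors themselves are built from the NLM's layers, which this sketch does not attempt to reproduce.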