Publications

Publications by LIAAD

2022

Exploiting BIM Objects for Synthetic Data Generation toward Indoor Point Cloud Classification Using Deep Learning

Authors
Frias, E; Pinto, J; Sousa, R; Lorenzo, H; Diaz Vilarino, L;

Publication
JOURNAL OF COMPUTING IN CIVIL ENGINEERING

Abstract
Advances in technology are leading to more and more devices integrating sensors capable of acquiring data quickly and with high accuracy, and point clouds are no exception. Therefore, there is growing research interest in exploiting the large amount of available light detection and ranging (LiDAR) data through point cloud classification using artificial intelligence. Nevertheless, point cloud labeling is a time-consuming task, so the amount of labeled data is still scarce. Data synthesis is gaining attention as an alternative to increase the volume of classified data. At the same time, the number of Building Information Models (BIMs) provided by manufacturers in website databases is increasing. In line with these recent trends, this paper presents a deep-learning framework for classifying point cloud objects based on synthetic data sets created from BIM objects. The method starts by transforming BIM objects into point clouds, deriving a data set of 21 object classes characterized by various perturbation patterns. Then, the data set is split into four subsets to evaluate the synthetic data on a flexible two-dimensional (2D) deep neural framework. In the latter, binary or greyscale images can be generated from point clouds by either orthographic or perspective projection to feed the network. Moreover, the surface variation feature was computed to add more geometric information to the images and to evaluate how it influences object classification. The overall accuracy is over 85% in all tests when orthographic images are used. The use of greyscale images representing surface variation also improves performance in almost all tests, although the computation of this feature may not be robust in point clouds with complex geometry or perturbations. (C) 2022 American Society of Civil Engineers.
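To make the two image-generation steps mentioned in the abstract more concrete, the following is a minimal sketch (not the authors' implementation) of how a per-point surface variation feature can be computed from the eigenvalues of the local covariance matrix and how a point cloud can be rendered as a greyscale image by orthographic projection. It assumes NumPy and SciPy; the neighbourhood size k, image resolution, and the random stand-in cloud are illustrative choices, not values from the paper.

import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Per-point surface variation: smallest eigenvalue of the local
    covariance divided by the sum of all three eigenvalues."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    variation = np.empty(len(points))
    for i, neighbours in enumerate(idx):
        local = points[neighbours] - points[neighbours].mean(axis=0)
        eigvals = np.linalg.eigvalsh(local.T @ local)  # ascending order
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return variation

def orthographic_image(points: np.ndarray, values: np.ndarray, size: int = 64) -> np.ndarray:
    """Drop the z coordinate and accumulate per-pixel mean intensities."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    cols = ((xy - mins) / np.maximum(maxs - mins, 1e-12) * (size - 1)).astype(int)
    image = np.zeros((size, size))
    counts = np.zeros((size, size))
    np.add.at(image, (cols[:, 1], cols[:, 0]), values)
    np.add.at(counts, (cols[:, 1], cols[:, 0]), 1)
    return image / np.maximum(counts, 1)

# Usage: a random cloud stands in for points sampled from a BIM object.
cloud = np.random.rand(2000, 3)
img = orthographic_image(cloud, surface_variation(cloud))

A perspective projection would instead divide x and y by the depth before rasterizing; the orthographic variant above simply discards it.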

2022

The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification

Authors
Oliveira, J; Renna, F; Costa, PD; Nogueira, M; Oliveira, C; Ferreira, C; Jorge, A; Mattos, S; Hatem, T; Tavares, T; Elola, A; Rad, AB; Sameni, R; Clifford, GD; Coimbra, MT;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal, since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground-truth variable (normal vs. abnormal) is provided. This is mainly due to the lack of large publicly available datasets in which a more detailed description of such abnormal waves (e.g., cardiac murmurs) exists. To pave the way for more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is heard most intensely. Such a detailed description of a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world applications in the detection and analysis of murmur waves for diagnostic purposes.

2022

The 5th International Workshop on Narrative Extraction from Texts: Text2Story 2022

Authors
Campos, R; Jorge, A; Jatowt, A; Bhatia, S; Litvak, M;

Publication
ADVANCES IN INFORMATION RETRIEVAL, PT II

Abstract
Narrative extraction, understanding, verification, and visualization are currently popular topics for users interested in achieving a deeper understanding of text, researchers who want to develop accurate methods for text mining, and commercial companies that strive to provide efficient tools for these tasks. Information Retrieval (IR), Natural Language Processing (NLP), Machine Learning (ML), and Computational Linguistics (CL) already offer many instruments that aid the exploration of narrative elements in text and within unstructured data. Despite evident advances in the last couple of years, the problem of automatically representing narratives in a structured form and interpreting them, beyond the conventional identification of common events, entities, and their relationships, is yet to be solved. This workshop, held virtually on April 10th, 2022, in conjunction with the 44th European Conference on Information Retrieval (ECIR '22), aims at presenting and discussing current and future directions for IR, NLP, ML, and other computational linguistics-related fields capable of improving the automatic understanding of narratives. It includes sessions devoted to research, demo, position, work-in-progress, project description, nectar, and negative results papers, keynote talks, and space for an informal discussion of the methods, the challenges, and the future of this research area.

2022

Proceedings of Text2Story - Fifth Workshop on Narrative Extraction From Texts held in conjunction with the 44th European Conference on Information Retrieval (ECIR 2022), Stavanger, Norway, April 10, 2022

Authors
Campos, R; Jorge, AM; Jatowt, A; Bhatia, S; Litvak, M;

Publication
Text2Story@ECIR

Abstract

2022

Text2Icons: linking icons to narrative participants (position paper)

Authors
Valente, J; Jorge, A; Nunes, S;

Publication
Proceedings of Text2Story - Fifth Workshop on Narrative Extraction From Texts held in conjunction with the 44th European Conference on Information Retrieval (ECIR 2022), Stavanger, Norway, April 10, 2022.

Abstract
Narratives are used to convey information and are an important way of understanding the world through information sharing. With the increasing development of Natural Language Processing and Artificial Intelligence, it becomes relevant to explore new techniques to extract, process, and visualize narratives. Narrative visualization tools give a news story reader a different perspective from the traditional format, allowing the story to be presented in a schematic way, using representative symbols to summarize it. We propose a new narrative visualization approach that uses icons to represent important narrative elements. The proposed visualization is integrated into Brat2Viz, a narrative annotation visualization tool that implements a pipeline transforming text annotations into formal representations and then into narrative visualizations. To build the icon visualization, we present a narrative element extraction process that uses automatic sentence extraction, automatic translation methods, and an algorithm that determines the actors' most suitable descriptions. Then, we introduce a method to create an icon dictionary with the ability to automatically search for icons. Furthermore, we present a critical analysis and a user-based evaluation of the results, drawing on responses collected in two separate surveys.
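As an illustration only (not the Text2Icons implementation), the icon dictionary described in the abstract could, in its simplest form, map keywords found in an actor's description to icon files; every name and file below is hypothetical.

# Hypothetical keyword-to-icon dictionary; the actual system builds its
# dictionary automatically and searches icon collections.
ICON_DICTIONARY = {
    "police": "police-officer.svg",
    "minister": "politician.svg",
    "company": "office-building.svg",
}

def icon_for_participant(description: str, fallback: str = "person.svg") -> str:
    """Return the first icon whose keyword appears in the actor description."""
    lowered = description.lower()
    for keyword, icon in ICON_DICTIONARY.items():
        if keyword in lowered:
            return icon
    return fallback

print(icon_for_participant("the Portuguese minister of finance"))  # politician.svg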

2022

Tweet2Story: A Web App to Extract Narratives from Twitter

Authors
Campos, V; Campos, R; Mota, P; Jorge, A;

Publication
ADVANCES IN INFORMATION RETRIEVAL, PT II

Abstract
Social media platforms are used to discuss current events, whose very complex narratives can become difficult to understand. In this work, we introduce Tweet2Story, a web app that automatically extracts narratives from small texts such as tweets and describes them through annotations. By doing this, we aim to mitigate the difficulties of creating narratives and take a step towards a deeper understanding of the actors found in a text and the relations between them. We built the web app to be modular and easy to use, which allows it to incorporate new techniques as they are developed.
