2021
Authors
Ostic, D; Qalati, SA; Barbosa, B; Shah, SMM; Vela, EG; Herzallah, AM; Liu, F;
Publication
FRONTIERS IN PSYCHOLOGY
Abstract
The growth in social media use has given rise to concerns about the impacts it may have on users' psychological well-being. This paper's main objective is to shed light on the effect of social media use on psychological well-being. Building on contributions from various fields in the literature, it provides a more comprehensive study of the phenomenon by considering a set of mediators, including social capital types (i.e., bonding social capital and bridging social capital), social isolation, and smartphone addiction. The paper includes a quantitative study of 940 social media users from Mexico, using structural equation modeling (SEM) to test the proposed hypotheses. The findings point to an overall positive indirect impact of social media usage on psychological well-being, mainly due to the positive effect of bonding and bridging social capital. The empirical model's explanatory power is 45.1%. This paper provides empirical evidence and robust statistical analysis demonstrating that positive and negative effects coexist, helping to reconcile the inconsistencies found so far in the literature.
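The mediation logic summarized above can be illustrated with a minimal sketch of an indirect-effect estimate. This is not the authors' actual SEM specification; the data file and column names (smu, bonding, wellbeing) and the single-mediator setup are assumptions for illustration only.

```python
# Minimal single-mediator indirect effect (a * b), sketched with OLS;
# column names and the data file are hypothetical, not the authors' model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey responses

# Path a: social media use -> bonding social capital (mediator)
a = smf.ols("bonding ~ smu", data=df).fit().params["smu"]
# Path b: mediator -> psychological well-being, controlling for the predictor
b = smf.ols("wellbeing ~ bonding + smu", data=df).fit().params["bonding"]

indirect_effect = a * b  # a positive value would mirror the reported mediation
print(f"Indirect effect via bonding social capital: {indirect_effect:.3f}")
```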
2021
Authors
Carvalho, CL; Barbosa, B;
Publication
Digital Services in Crisis, Disaster, and Emergency Situations - Advances in Human Services and Public Health
Abstract
2020
Authors
Saadallah, A; Moreira Matias, L; Sousa, R; Khiari, J; Jenelius, E; Gama, J;
Publication
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Abstract
The massive data broadcast by GPS-equipped vehicles provide unprecedented opportunities. One of the main tasks in optimizing our transportation networks is to build data-driven, real-time decision support systems. However, the dynamic environments in which these networks operate disallow the traditional assumptions required to put many off-the-shelf supervised learning algorithms into practice, such as finite training sets or stationary distributions. In this paper, we propose BRIGHT: a drift-aware supervised learning framework to predict demand quantities. BRIGHT aims to provide accurate predictions for short-term horizons through a creative ensemble of time series analysis methods that handles distinct types of concept drift. By selecting neighborhoods dynamically, BRIGHT reduces the likelihood of overfitting. By ensuring diversity among the base learners, BRIGHT achieves a large reduction in variance while keeping bias stable. Experiments were conducted on three large-scale, heterogeneous real-world transportation networks in Porto (Portugal), Shanghai (China), and Stockholm (Sweden), as well as on synthetic data in which multiple distinct drifts were artificially induced. The results illustrate the advantages of BRIGHT over state-of-the-art methods for this task.
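As a rough illustration of the drift-aware ensembling idea described in the abstract, the sketch below weights heterogeneous base learners by their recent error so that the combination adapts when concept drift degrades some of them. It is a generic scheme under stated assumptions, not the BRIGHT implementation; the class, window size, and base learners are hypothetical.

```python
# Generic drift-aware weighted ensemble for short-term demand forecasting.
# Inspired by the abstract; NOT the actual BRIGHT algorithm.
import numpy as np

class DriftAwareEnsemble:
    def __init__(self, base_learners, window=24):
        self.base_learners = base_learners              # heterogeneous forecasters
        self.window = window                            # horizon used to score learners
        self.errors = [[] for _ in base_learners]       # recent absolute errors per learner

    def predict(self, history):
        preds = np.array([m(history) for m in self.base_learners])
        # Weight each learner by the inverse of its recent error; under drift,
        # degraded learners lose weight and the ensemble adapts.
        recent = np.array([np.mean(e[-self.window:]) if e else 1.0 for e in self.errors])
        weights = 1.0 / (recent + 1e-9)
        weights /= weights.sum()
        return float(np.dot(weights, preds)), preds

    def update(self, preds, actual):
        for err_list, p in zip(self.errors, preds):
            err_list.append(abs(p - actual))

# Example with two naive base learners: last observed value and mean of last 3.
ensemble = DriftAwareEnsemble([lambda h: h[-1], lambda h: float(np.mean(h[-3:]))])
history = [10.0, 12.0, 11.0, 13.0]
forecast, preds = ensemble.predict(history)
ensemble.update(preds, actual=12.5)
```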
2020
Authors
Balado, J; Sousa, R; Diaz Vilarino, L; Arias, P;
Publication
AUTOMATION IN CONSTRUCTION
Abstract
The application of Deep Learning techniques to point clouds for urban object classification is limited by the large number of samples needed. Acquiring and tagging point clouds is more expensive and tedious than the equivalent process for images, and online point cloud datasets contain few samples for Deep Learning or do not always include the desired classes. This work focuses on minimizing the use of point cloud samples for neural network training in urban object classification. The proposed method is based on the conversion of point clouds to images (pc-images) because it enables the use of Convolutional Neural Networks, the generation of several samples (images) per object (point cloud) by means of multi-view projection, and the combination of pc-images with images from online datasets (ImageNet and Google Images). The study is conducted with ten classes of objects extracted from two street point clouds from two different cities. The network selected for the job is InceptionV3. The training set consists of 5000 online images with a variable percentage (0% to 10%) of pc-images, while the validation and testing sets are composed exclusively of pc-images. Whereas the network trained only with online images reached 47% accuracy, including a small percentage of pc-images in the training set improves classification, reaching 99.5% accuracy with 6% pc-images. The network was also applied to the IQmulus & TerraMobilita Contest dataset, where it allowed the correct classification of elements with few samples.
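The core pc-image step can be sketched as a multi-view projection of each point cloud into 2D occupancy images, which can then be mixed with online images to fine-tune a pretrained CNN. This is an illustrative approximation of the conversion described in the abstract, not the authors' exact rendering pipeline; the number of views, projection plane, and image size are assumptions.

```python
# Sketch of the pc-image idea: render a point cloud from several viewpoints
# into 2D images suitable for a CNN such as InceptionV3. Illustrative only.
import numpy as np

def pc_to_images(points, n_views=8, size=299):
    """Render `points` (N x 3 array) into `n_views` binary occupancy images."""
    images = []
    for k in range(n_views):
        theta = 2 * np.pi * k / n_views
        # Rotate the cloud around the vertical (z) axis, then project onto the x-z plane.
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
        p = points @ rot.T
        img, _, _ = np.histogram2d(p[:, 0], p[:, 2], bins=size)
        images.append((img > 0).astype(np.float32))  # occupied cells -> 1.0
    return images

# views = pc_to_images(np.loadtxt("object.xyz"))  # hypothetical input file
# The resulting pc-images can be mixed with ImageNet/Google Images samples
# to fine-tune a pretrained InceptionV3 classifier.
```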
2020
Authors
Azevedo, AC; Delgado, JMPQ; Guimarães, AS; Ribeiro, I; Sousa, R;
Publication
Hygrothermal Behaviour and Building Pathologies - Building Pathology and Rehabilitation
Abstract
2020
Authors
Campos, R; Mangaravite, V; Pasquali, A; Jorge, A; Nunes, C; Jatowt, A;
Publication
INFORMATION SCIENCES
Abstract
As the amount of generated information grows, reading and summarizing texts of large collections turns into a challenging task. Many documents do not come with descriptive terms, thus requiring humans to generate keywords on-the-fly. The need to automate this kind of task demands the development of keyword extraction systems with the ability to automatically identify keywords within the text. One approach is to resort to machine-learning algorithms. These, however, depend on large annotated text corpora, which are not always available. An alternative solution is to consider an unsupervised approach. In this article, we describe YAKE!, a light-weight unsupervised automatic keyword extraction method which rests on statistical text features extracted from single documents to select the most relevant keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of YAKE!, we compare it against ten state-of-the-art unsupervised approaches and one supervised method. Experimental results carried out on top of twenty datasets show that YAKE! significantly outperforms other unsupervised methods on texts of different sizes, languages, and domains.
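The method described above is distributed by the authors as the open-source yake package on PyPI (pip install yake). Assuming its commonly documented interface, a minimal usage sketch looks as follows; parameter names may differ across versions.

```python
# Minimal usage sketch of the `yake` package; interface assumed from its docs.
import yake

text = "YAKE! is a lightweight unsupervised automatic keyword extraction method."
extractor = yake.KeywordExtractor(lan="en", n=3, top=10)  # language, max n-gram size, top-k
for keyword, score in extractor.extract_keywords(text):
    print(f"{score:.4f}\t{keyword}")  # lower score = more relevant keyword
```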