2020
Authors
Costa, MRC; Valente, JMS; Schaller, JE;
Publication
FLEXIBLE SERVICES AND MANUFACTURING JOURNAL
Abstract
This paper addresses a permutation flowshop scheduling problem, with the objective of minimizing total weighted squared tardiness. The focus is on providing efficient procedures that can quickly solve medium or even large instances. Within this context, we first present multiple dispatching heuristics. These include general rules suited to various due date-related environments, heuristics developed for the problem with a linear objective function, and procedures that are suitably adapted to take the squared objective into account. Then, we describe several improvement procedures, which use one or more of three techniques. These procedures are used to improve the solution obtained by the best dispatching rule. Computational results show that the quadratic rules greatly outperform their linear counterparts, and that one of the quadratic rules is the overall best-performing dispatching heuristic. The computational tests also show that all procedures significantly improve upon the initial solution. The non-dominated procedures, when considering both solution quality and runtime, are identified. The best dispatching rule, and two of the non-dominated improvement procedures, are quite efficient, and can be applied even to very large problems. The remaining non-dominated improvement method can provide somewhat higher-quality solutions, but it may need excessive time for extremely large instances.
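As an illustration of the objective above (not the paper's heuristics), the following minimal Python sketch computes the total weighted squared tardiness of a job permutation using the standard permutation-flowshop completion-time recursion; the job data, due dates and weights are hypothetical.

# Minimal sketch: total weighted squared tardiness of a job permutation
# in a permutation flowshop (standard completion-time recursion).
# All data is hypothetical; the paper's dispatching rules are not reproduced.

def weighted_squared_tardiness(perm, proc, due, weight):
    """proc[j][m]: processing time of job j on machine m."""
    n_machines = len(proc[0])
    completion = [0.0] * n_machines  # completion time on each machine so far
    total = 0.0
    for j in perm:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0  # this job on machine m-1
            completion[m] = max(completion[m], prev) + proc[j][m]
        tardiness = max(0.0, completion[-1] - due[j])
        total += weight[j] * tardiness ** 2  # squared tardiness objective
    return total

# Example: 3 jobs, 2 machines
proc = [[2, 3], [4, 1], [3, 2]]
due = [5, 7, 6]
weight = [1.0, 2.0, 1.5]
print(weighted_squared_tardiness([0, 2, 1], proc, due, weight))  # 19.5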
2020
Authors
Veloso, BM; Leal, F; Malheiro, B; Burguillo, JC;
Publication
ELECTRONIC COMMERCE RESEARCH AND APPLICATIONS
Abstract
Tourism crowdsourcing platforms accumulate and use large volumes of feedback data on tourism-related services to provide personalized recommendations with high impact on future tourist behavior. Typically, these recommendation engines build individual tourist profiles and suggest hotels, restaurants, attractions or routes based on the shared ratings, reviews, photos, videos or likes. Due to the dynamic nature of this scenario, where the crowd produces a continuous stream of events, we have been exploring stream-based recommendation methods, using stochastic gradient descent (SGD), to incrementally update the prediction models and post-filters to reduce the search space and improve the recommendation accuracy. In this context, we offer an update and comment on our previous article (Veloso et al., 2019a) by providing a recent literature review and identifying the challenges lying ahead concerning the online recommendation of tourism resources supported by crowdsourced data.
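For context, a minimal sketch of the kind of stream-based SGD update mentioned above: each incoming (user, item, rating) event immediately adjusts the latent factors of a matrix-factorization model. This is not the authors' exact method; the hyperparameters and names are illustrative assumptions.

# Minimal sketch of incremental matrix factorization with SGD:
# each event updates the latent factors as it arrives.
import numpy as np

K, LR, REG = 10, 0.01, 0.02           # factors, learning rate, regularization
users, items = {}, {}                  # latent vectors, grown as events arrive
rng = np.random.default_rng(0)

def update(user, item, rating):
    p = users.setdefault(user, rng.normal(0, 0.1, K))
    q = items.setdefault(item, rng.normal(0, 0.1, K))
    err = rating - p @ q               # prediction error on this event
    p_old = p.copy()
    p += LR * (err * q - REG * p)      # standard regularized SGD updates
    q += LR * (err * p_old - REG * q)

for event in [("u1", "hotel_42", 4.0), ("u2", "hotel_42", 5.0)]:
    update(*event)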
2020
Authors
Leal, F; Veloso, B; Malheiro, B; González Vélez, H; Burguillo, JC;
Publication
ELECTRONIC COMMERCE RESEARCH AND APPLICATIONS
Abstract
Wiki-based crowdsourced data sources generally lack reliability, as their provenance is not intrinsically marshalled. By using recommendation, one may arguably assess the reliability of wiki-based repositories in order to identify the most interesting articles for a given domain. In this commentary, we explore current trends in scalable modelling and recommendation methods based on side information such as the quality and popularity of wiki articles. The systematic parallelization of such profiling and recommendation algorithms allows the concurrent processing of distributed crowdsourced Wikidata repositories. These algorithms, which perform incremental updating, need further research to improve performance and generate up-to-date, high-quality recommendations. This article builds upon our previous work (Leal et al., 2019) by extending the literature review and identifying important trends and challenges pertaining to crowdsourcing platforms, particularly those of Wikidata provenance.
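As one hedged illustration of using side information such as article quality and popularity (the abstract does not specify the exact mechanism), the sketch below re-ranks candidate wiki articles by blending a base relevance score with those signals; the weights and field names are assumptions.

# Minimal sketch: re-rank candidate wiki articles by blending a base
# relevance score with quality and popularity side information.
# Weights and field names are illustrative assumptions.

def rerank(candidates, w_rel=0.6, w_quality=0.25, w_pop=0.15):
    """candidates: dicts with 'relevance', 'quality', 'popularity' in [0, 1]."""
    score = lambda c: (w_rel * c["relevance"]
                       + w_quality * c["quality"]
                       + w_pop * c["popularity"])
    return sorted(candidates, key=score, reverse=True)

articles = [
    {"title": "Porto", "relevance": 0.9, "quality": 0.4, "popularity": 0.8},
    {"title": "Douro", "relevance": 0.7, "quality": 0.9, "popularity": 0.5},
]
print([a["title"] for a in rerank(articles)])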
2020
Authors
Leal, F; Veloso, B; Malheiro, B; González Vélez, H;
Publication
TRENDS AND INNOVATIONS IN INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 1
Abstract
Recommendation systems are usually evaluated through accuracy and classification metrics. However, when these systems are supported by crowdsourced data, such metrics are unable to estimate data authenticity, leading to potential unreliability. Consequently, it is essential to ensure data authenticity and processing transparency in large crowdsourced recommendation systems. In this work, processing transparency is achieved by explaining recommendations and data authenticity is ensured via blockchain smart contracts. The proposed method models the pairwise trust and system-wide reputation of crowd contributors; stores the contributor models as smart contracts in a private Ethereum network; and implements a recommendation and explanation engine based on the stored contributor trust and reputation smart contracts. In terms of contributions, this paper explores trust and reputation smart contracts for explainable recommendations. The experiments, which were performed with a crowdsourced data set from Expedia, showed that the proposed method provides cost-free processing transparency and data authenticity at the cost of latency.
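For illustration only, the following Python sketch mimics the contributor model described above: pairwise trust derived from rated interactions, and system-wide reputation as an aggregate of trust scores. The formulas are plausible assumptions rather than the paper's definitions, and the on-chain (Ethereum smart contract) storage is not reproduced here.

# Minimal sketch of a pairwise-trust and system-wide-reputation model.
# Formulas are illustrative assumptions, not the paper's definitions.
from collections import defaultdict

interactions = defaultdict(lambda: [0, 0])  # (rater, contributor) -> [pos, total]

def record(rater, contributor, positive):
    counts = interactions[(rater, contributor)]
    counts[0] += int(positive)
    counts[1] += 1

def trust(rater, contributor):
    pos, total = interactions[(rater, contributor)]
    return pos / total if total else 0.5     # neutral prior when unseen

def reputation(contributor):
    scores = [trust(r, c) for (r, c) in interactions if c == contributor]
    return sum(scores) / len(scores) if scores else 0.5

record("alice", "bob", True)
record("carol", "bob", False)
print(trust("alice", "bob"), reputation("bob"))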
2020
Authors
Veloso, B; Gama, J; Martins, C; Espanha, R; Azevedo, R;
Publication
ACM SIGAPP Applied Computing Review
Abstract
2020
Authors
Pech, G; Delgado, C;
Publication
SCIENTOMETRICS
Abstract
Recent studies have shown that the coverage of the Scopus and Web of Science (WoS) databases differs substantially. Consequently, the citation counts of a paper differ depending on the database used, making it difficult to use both together. To address this problem, this paper examines whether a percentile- and stochastic-based approach is effective for converting citation counts between the two databases while guaranteeing time normalization. For this analysis, we collected a dataset of 326,345 papers, published in 1987-2017 in the top 10% source titles of the following fields: Industrial and Manufacturing Engineering, Aquatic Science, Social Psychology and Archaeology. First, we applied a linear regression model to the citation percentiles of papers indexed in both databases. Secondly, we used the predicted results of this linear dependence, combined with Monte Carlo simulations, to obtain the probability density function of the percentile of a paper in the database from which it is missing. The results indicate that, with the method proposed in this paper, it is possible to convert the citation counts of articles between Scopus and WoS, and to predict the citation impact of a paper missing from one database based on its citation impact in the other. Tests on subsamples, using Lin's concordance coefficient, suggest substantial agreement between estimated and real citation values. This allows the combined use of the citation counts of the two databases, improving the coverage and accuracy of both bibliometric studies and bibliometric indicators.
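The two-step approach can be sketched as follows (synthetic data; not the paper's fitted model): first, a linear regression between the citation percentiles of papers indexed in both databases; then, Monte Carlo sampling of the residual noise to estimate the distribution of a missing paper's percentile.

# Minimal sketch of the two-step approach on synthetic data:
# (1) linear regression between percentiles of papers indexed in both DBs,
# (2) Monte Carlo sampling to estimate a missing paper's percentile PDF.
import numpy as np

rng = np.random.default_rng(1)
scopus_pct = rng.uniform(0, 100, 500)                 # papers in both databases
wos_pct = np.clip(0.95 * scopus_pct + 2 + rng.normal(0, 5, 500), 0, 100)

slope, intercept = np.polyfit(scopus_pct, wos_pct, 1)  # step 1: regression
residual_sd = np.std(wos_pct - (slope * scopus_pct + intercept))

def simulate_missing(scopus_value, n_draws=10_000):
    """Step 2: Monte Carlo distribution of the WoS percentile of a paper
    missing from WoS but with a known Scopus percentile."""
    draws = slope * scopus_value + intercept + rng.normal(0, residual_sd, n_draws)
    return np.clip(draws, 0, 100)

draws = simulate_missing(80.0)
print(draws.mean(), np.percentile(draws, [2.5, 97.5]))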