
Publications by BIO

2019

Adversarial learning for a robust iris presentation attack detection method against unseen attack presentations

Authors
Ferreira, PM; Sequeira, AF; Pernes, D; Rebelo, A; Cardoso, JS;

Publication
2019 INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG 2019)

Abstract
Despite the high performance of current presentation attack detection (PAD) methods, robustness to unseen attacks is still an under-addressed challenge. This work approaches the problem by enforcing the learning of the bona fide presentations while making the model less dependent on the presentation attack instrument species (PAIS). The proposed model comprises an encoder, mapping from input features to latent representations, and two classifiers operating on these underlying representations: (i) the task classifier, for predicting the class labels (bona fide or attack); and (ii) the species classifier, for predicting the PAIS. In the learning stage, the encoder is trained to help the task classifier while trying to fool the species classifier. In addition, a training objective enforcing the similarity of the latent distributions of different species is added, leading to a 'PAI species'-independent model. The experimental results demonstrated that the proposed regularisation strategies equipped the neural network with increased PAD robustness. The adversarial model obtained better loss and accuracy, as well as improved error rates in the detection of attack and bona fide presentations. © 2019 Gesellschaft für Informatik (GI). All rights reserved.
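A minimal PyTorch sketch of the adversarial set-up described in the abstract: an encoder feeds both a task classifier and a species classifier, and a gradient-reversal layer makes the encoder try to fool the latter. Layer sizes, the number of PAI species, and the loss weighting are illustrative assumptions, and the additional latent-distribution similarity term is omitted for brevity.

```python
# Sketch, not the authors' implementation: encoder helps the task classifier
# (bona fide vs. attack) while fooling the species (PAIS) classifier via
# gradient reversal. Sizes and the weight `lam` are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradients flowing back into the encoder.
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
task_clf = nn.Linear(64, 2)      # bona fide vs. attack
species_clf = nn.Linear(64, 5)   # number of PAI species is dataset-dependent

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_clf.parameters()) + list(species_clf.parameters()),
    lr=1e-3,
)
ce = nn.CrossEntropyLoss()

def train_step(x, y_task, y_species, lam=1.0):
    z = encoder(x)
    loss_task = ce(task_clf(z), y_task)
    # The species classifier receives reversed gradients, pushing the encoder
    # towards PAI-species-independent representations.
    loss_species = ce(species_clf(GradReverse.apply(z, lam)), y_species)
    loss = loss_task + loss_species
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```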

2019

Comparison of Conventional and Deep Learning Based Methods for Pulmonary Nodule Segmentation in CT Images

Authors
Rocha, J; Cunha, A; Mendonça, AM;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2019, PT I

Abstract
Lung cancer is among the deadliest diseases in the world. The detection and characterization of pulmonary nodules are crucial for an accurate diagnosis, which is of vital importance to increase patients’ survival rates. The segmentation process contributes to this characterization, but faces several challenges due to the diversity in nodular shape, size, and texture, as well as the presence of adjacent structures. This paper proposes two methods for pulmonary nodule segmentation in Computed Tomography (CT) scans. The first is a conventional approach that applies the Sliding Band Filter (SBF) to estimate the center of the nodule and, consequently, the filter’s support points, matching the initial border coordinates. This preliminary segmentation is then refined to include mainly the nodular area and no other regions (e.g. vessels and pleural wall). The second approach is based on Deep Learning, using the U-Net to achieve the same goal. This work compares the performance of both approaches to identify which one is the more promising tool to promote early lung cancer screening and improve nodule characterization. Both methodologies used 2653 nodules from the LIDC database: the SBF-based approach achieved a Dice score of 0.663, while the U-Net achieved 0.830, yielding results more similar to the ground-truth reference annotated by specialists and thus being the more reliable approach. © Springer Nature Switzerland AG 2019.
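Both approaches are compared through the Dice score against expert annotations. The snippet below is a generic illustration of how that overlap measure is computed for binary masks; it is not the authors' evaluation code.

```python
# Generic Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example with two partly overlapping 64x64 masks.
a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), dtype=bool); b[22:42, 22:42] = True
print(round(dice_score(a, b), 3))
```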

2019

Lesions Multiclass Classification in Endoscopic Capsule Frames

Authors
Valerio, MT; Gomes, S; Salgado, M; Oliveira, HP; Cunha, A;

Publication
CENTERIS 2019 - INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS / PROJMAN 2019 - INTERNATIONAL CONFERENCE ON PROJECT MANAGEMENT / HCIST 2019 - INTERNATIONAL CONFERENCE ON HEALTH AND SOCIAL CARE INFORMATION SYSTEMS AND TECHNOLOGIES

Abstract
Wireless capsule endoscopy is a relatively novel technique used for imaging of the gastrointestinal tract. Unlike traditional approaches, it allows painless visualisation of the whole gastrointestinal tract, including the small bowel, a region of difficult access. Endoscopic capsules record for about 8 h, producing around 60,000 images. These are analysed by an expert who identifies abnormalities present in the frames, a process that is very tedious and prone to errors. Thus, there is a clear need to develop systems that automatically analyse this data and detect lesions. In this work, lesion detection achieved a precision of 0.94 and a recall of 0.93 by fine-tuning the pre-trained DenseNet-161 model. (C) 2019 The Authors. Published by Elsevier B.V.
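A hedged sketch of the fine-tuning strategy mentioned in the abstract, using the pre-trained DenseNet-161 available in torchvision. The number of lesion classes and the choice to leave all layers trainable are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: adapt ImageNet-pretrained DenseNet-161 to multiclass lesion
# classification by replacing its classification head.
import torch.nn as nn
from torchvision import models

num_classes = 4  # hypothetical number of lesion classes in the dataset

model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
# Replace the ImageNet head with a new linear layer for the lesion classes.
model.classifier = nn.Linear(model.classifier.in_features, num_classes)
# All layers are left trainable so the whole network is fine-tuned on the
# capsule endoscopy frames, typically with a cross-entropy loss.
```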

2019

EyeWeS: Weakly Supervised Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Detection

Authors
Costa, P; Araujo, T; Aresta, G; Galdran, A; Mendonca, AM; Smailagic, A; Campilho, A;

Publication
PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)

Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly-supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Curve (AUC) to 95.8% AUC while requiring only approximately 5% of Inception V3's parameters. The same model is able to achieve 97.1% AUC in a cross-dataset experiment.
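One common way to convert a pre-trained CNN into a weakly-supervised detector, in the spirit of EyeWeS, is to keep only the convolutional trunk, score each spatial location with a 1x1 convolution, and pool the region scores into an image-level prediction. The sketch below follows that recipe; the paper uses Inception V3, but a ResNet-18 trunk is used here purely to keep the example short, and the 1x1 head and max-pooling are assumptions rather than the authors' exact design.

```python
# Sketch: weakly-supervised DR detection trained with image labels only,
# while the per-region scores double as a lesion localisation map.
import torch
import torch.nn as nn
from torchvision import models

class WeaklySupervisedDR(nn.Module):
    def __init__(self):
        super().__init__()
        trunk = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Keep only the convolutional trunk (drop global pooling and FC head).
        self.features = nn.Sequential(*list(trunk.children())[:-2])
        # 1x1 convolution scores every spatial region for "contains a lesion".
        self.score = nn.Conv2d(512, 1, kernel_size=1)

    def forward(self, x):
        region_scores = self.score(self.features(x))               # [B, 1, H', W']
        image_score = region_scores.flatten(2).max(dim=2).values   # max over regions
        return torch.sigmoid(image_score), region_scores           # image label + map

model = WeaklySupervisedDR()
probs, lesion_map = model(torch.randn(1, 3, 512, 512))
```

Training only needs a binary cross-entropy loss on the image-level probability; the pooled maximum ties the image label to the single most suspicious region, which is what lets the region map pinpoint lesions without pixel-level supervision.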

2019

Deep learning approaches for plethysmography signal quality assessment in the presence of atrial fibrillation

Authors
Pereira, T; Ding, C; Gadhoumi, K; Tran, N; Colorado, RA; Meisel, K; Hu, X;

Publication
PHYSIOLOGICAL MEASUREMENT

Abstract

2019

Brain computer interface for neuro-rehabilitation with deep learning classification and virtual reality feedback

Authors
Karácsony, T; Hansen, JP; Iversen, HK; Puthusserypady, S;

Publication
ACM International Conference Proceeding Series

Abstract
Though Motor Imagery (MI) stroke rehabilitation effectively promotes neural reorganization, current therapeutic methods are hard to measure and their repetitiveness can be demotivating. In this work, a real-time electroencephalogram (EEG) based MI-BCI (Brain Computer Interface) system with a virtual reality (VR) game as motivational feedback has been developed for stroke rehabilitation. If the subject successfully hits one of the targets, it explodes, thus providing feedback on a successfully imagined and virtually executed movement of hands or feet. Novel deep learning (DL) classification algorithms, based on convolutional neural network (CNN) architectures with a unique trial-onset detection technique, were used. Our classifiers performed better than previous architectures on datasets from the PhysioNet offline database, and provided fine classification in the real-time game setting using a 0.5-second, 16-channel input for the CNN architectures. Ten participants reported the training to be interesting, fun and immersive. "It is a bit weird, because it feels like it would be my hands", was one of the comments from a test person. The VR system induced a slight discomfort, and a moderate effort for MI activations was reported. We conclude that MI-BCI-VR systems with classifiers based on DL for real-time game applications should be considered for motivating MI stroke rehabilitation. © 2019 Association for Computing Machinery.
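A hedged sketch of a compact CNN operating on the 0.5-second, 16-channel EEG window mentioned above. The 160 Hz sampling rate (as in the PhysioNet recordings), the filter sizes, and the class count are illustrative assumptions, not the architecture used in the paper.

```python
# Sketch: temporal then spatial convolutions over a short multichannel EEG window.
import torch
import torch.nn as nn

n_channels, n_samples, n_classes = 16, 80, 3  # 0.5 s at 160 Hz; e.g. hands/feet/rest

model = nn.Sequential(
    # Temporal convolution applied to every EEG channel independently.
    nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)), nn.BatchNorm2d(8), nn.ELU(),
    # Spatial convolution mixing the 16 electrodes.
    nn.Conv2d(8, 16, kernel_size=(n_channels, 1)), nn.BatchNorm2d(16), nn.ELU(),
    nn.AvgPool2d(kernel_size=(1, 4)),
    nn.Flatten(),
    nn.Linear(16 * (n_samples // 4), n_classes),
)

logits = model(torch.randn(4, 1, n_channels, n_samples))  # batch of 4 windows
print(logits.shape)  # torch.Size([4, 3])
```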
