Publications

Publications by BIO

2019

VitalResponder®: Wearable wireless platform for vitals and body-area environment monitoring of first response teams

Authors
Cunha, JPS; Rodrigues, S; Dias, D; Brandão, P; Aguiar, A; Oliveira, I; Fernandes, JM; Maia, C; Tedim, AR; Barros, A; Azuaje, O; Soares, E; De La Torre, F;

Publication
Wearable Technologies and Wireless Body Sensor Networks for Healthcare

Abstract
Under the VitalResponder® (VR) line of research, mostly funded by the Carnegie Mellon University (CMU)-Portugal program, we have been developing, in partnership with colleagues from CMU, novel wearable monitoring solutions for hazardous professionals such as first responders (FR). We are exploring the synergy between innovative wearable technologies, scattered sensor networks and precise localization to provide secure, reliable and effective first-response information services in emergency scenarios. This enables thorough team management, namely of FR exposure to different hazardous elements, effort levels and critical situations that contribute to team members' stress and fatigue levels. © The Institution of Engineering and Technology 2017.

2019

Lightweight Deep Learning Pipeline for Detection, Segmentation and Classification of Breast Cancer Anomalies

Authors
Oliveira, HS; Teixeira, JF; Oliveira, HP;

Publication
IMAGE ANALYSIS AND PROCESSING - ICIAP 2019, PT II

Abstract
The small number of publicly available medical images hinders the use of deep learning techniques for automatic mammogram diagnosis. Deep learning methods require large annotated training sets to be effective; however, medical datasets are costly to obtain and suffer from large variability. In this work, a lightweight deep learning pipeline to detect, segment and classify anomalies in mammogram images is presented. First, data augmentation using the ground-truth annotations is performed and used by cascaded segmentation and classification methods. Results are obtained using the INbreast public database in the context of lesion detection and BI-RADS classification. Moreover, a pre-trained Convolutional Neural Network using ResNet50 is modified to generate the lesion region proposals, followed by false positive reduction and contour refinement stages, while a pre-trained VGG16 network is fine-tuned to classify mammograms. The detection and segmentation stage results show that the cascade configuration achieves a DICE of 0.83 without massive training, while the multi-class classification exhibits an MAE of 0.58 with data augmentation.
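
As an illustrative aside (not part of the published pipeline), the DICE score reported above is an overlap measure between a predicted and a ground-truth binary mask; a minimal sketch of how such a coefficient is commonly computed, with a hypothetical dice_coefficient helper, could look as follows:

# Illustrative sketch, not the authors' code: Dice = 2*|A∩B| / (|A| + |B|)
# for two binary segmentation masks of the same shape.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: partially overlapping square masks give a score between 0 and 1.
a = np.zeros((64, 64), dtype=np.uint8); a[10:30, 10:30] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:35, 15:35] = 1
print(f"Dice: {dice_coefficient(a, b):.3f}")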

2019

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
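
As a hedged illustration of how frame-level tool-presence predictions might be scored against expert annotations (the challenge's own evaluation protocol is described in the paper; this sketch is not it), a per-tool ROC AUC over a multi-label prediction matrix can be computed as below; all array names are hypothetical:

# Illustrative sketch: per-tool ROC AUC for frame-level tool-presence predictions.
# y_true, y_score have shape (n_frames, n_tools); one AUC is computed per tool.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_tool_auc(y_true: np.ndarray, y_score: np.ndarray) -> np.ndarray:
    return np.array([roc_auc_score(y_true[:, t], y_score[:, t])
                     for t in range(y_true.shape[1])])

# Example with random predictions over 21 tools: mean AUC is ~0.5, as expected.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 21))
y_score = rng.random((1000, 21))
print(per_tool_auc(y_true, y_score).mean())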

2019

Deep Learning for Segmentation Using an Open Large-Scale Dataset in 2D Echocardiography

Authors
Leclerc, S; Smistad, E; Pedrosa, J; Ostvik, A; Cervenansky, F; Espinosa, F; Espeland, T; Berg, EAR; Jodoin, PM; Grenier, T; Lartizien, C; Dhooge, J; Lovstakken, L; Bernard, O;

Publication
IEEE transactions on medical imaging

Abstract
Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e., segmenting cardiac structures and estimating clinical indices, on a dataset specifically designed for this purpose. We therefore introduce the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, the largest publicly-available and fully-annotated dataset for the purpose of echocardiographic assessment. The dataset contains two- and four-chamber acquisitions from 500 patients, with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder-based architectures outperform state-of-the-art non-deep learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and an absolute mean error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.80 and an absolute mean error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer's ones. Based on this observation, areas for improvement are defined, which open the door for accurate and fully-automatic analysis of 2D echocardiographic images.
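
For context (illustrative only, not the paper's evaluation code), the left-ventricular ejection fraction discussed above derives directly from the end-diastolic and end-systolic volumes; a minimal sketch:

# Minimal sketch: ejection fraction (%) = 100 * (EDV - ESV) / EDV,
# computed from end-diastolic (EDV) and end-systolic (ESV) volumes in ml.
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    if edv_ml <= 0:
        raise ValueError("EDV must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(ejection_fraction(120.0, 50.0))  # ~58.3 %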

2019

Unsupervised Neural Network for Homography Estimation in Capsule Endoscopy Frames

Authors
Gomes, S; Valerio, MT; Salgado, M; Oliveira, HP; Cunha, A;

Publication
CENTERIS2019--INTERNATIONAL CONFERENCE ON ENTERPRISE INFORMATION SYSTEMS/PROJMAN2019--INTERNATIONAL CONFERENCE ON PROJECT MANAGEMENT/HCIST2019--INTERNATIONAL CONFERENCE ON HEALTH AND SOCIAL CARE INFORMATION SYSTEMS AND TECHNOLOGIES

Abstract
Capsule endoscopy is becoming the major medical technique for the examination of the gastrointestinal tract and the detection of small bowel lesions. With the growing use of endoscopic capsules and the lack of an appropriate tracking system to allow the localisation of lesions, the need to develop software-based techniques to localise the capsule at any given frame is also increasing. With this in mind, and knowing the lack of availability of labelled endoscopic datasets, this work aims to develop an unsupervised method for homography estimation in video capsule endoscopy frames, to later be applied in capsule localisation systems. The pipeline is based on an unsupervised convolutional neural network, with a VGG Net architecture, that estimates the homography between two images. The overall error, using a synthetic dataset, was evaluated through the mean average corner error, which was 34 pixels, showing great promise for the real-life application of this technique, although there is still room to improve its performance. (C) 2019 The Authors. Published by Elsevier B.V.
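
As an illustrative sketch (not the authors' implementation), the mean average corner error used above can be computed by warping the four patch corners with the estimated and ground-truth homographies and averaging their displacement; the matrix and image-size names are assumptions:

# Illustrative sketch: mean average corner error between an estimated and a
# ground-truth 3x3 homography, measured on the four corners of a w x h patch.
import numpy as np

def mean_corner_error(H_est: np.ndarray, H_gt: np.ndarray, w: int, h: int) -> float:
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
    def warp(H, pts):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
        out = (H @ pts_h.T).T
        return out[:, :2] / out[:, 2:3]                    # de-homogenise
    return float(np.linalg.norm(warp(H_est, corners) - warp(H_gt, corners), axis=1).mean())

# Example: a pure translation of (3, -2) pixels gives an error of sqrt(13) ≈ 3.6 px.
H_gt = np.eye(3)
H_est = np.array([[1.0, 0.0, 3.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
print(mean_corner_error(H_est, H_gt, 128, 128))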

2019

Radiogenomics: Lung Cancer-Related Genes Mutation Status Prediction

Authors
Dias, C; Pinheiro, G; Cunha, A; Oliveira, HP;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2019, PT II

Abstract
Advances in genomics have led to the recognition that tumours are populated by different minor subclones of malignant cells that control the way the tumour progresses. However, the spatial and temporal genomic heterogeneity of tumours has been a hurdle in clinical oncology. This is mainly because the standard methodology for genomic analysis is the biopsy, which, besides being an invasive technique, does not capture the entire tumour spatial state in a single exam. Radiographic medical imaging opens new opportunities for genomic analysis by providing full-state visualisation of a tumour at a macroscopic level, in a non-invasive way. Having in mind that mutational testing of EGFR and KRAS is routine in lung cancer treatment, it was studied whether clinical and imaging data are valuable for predicting EGFR and KRAS mutations in a cohort of NSCLC patients. A reliable predictive model was found for EGFR (AUC = 0.96), using both a Multi-layer Perceptron model and a Random Forest model, but not for KRAS (AUC = 0.56). A feature importance analysis using Random Forest reported that the presence of emphysema and lung parenchymal features have the highest correlation with EGFR mutation status. This study opens new opportunities for radiogenomics in predicting molecular properties in a more readily available and non-invasive way. © 2019, Springer Nature Switzerland AG.
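
As a hedged sketch of the kind of analysis described above (using synthetic data, not the study's cohort or features), a Random Forest can be fitted to tabular clinical/imaging features, scored with ROC AUC and inspected for feature importances:

# Illustrative sketch, not the study's pipeline: Random Forest on tabular
# features to predict a binary mutation status, with ROC AUC and importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((200, 10))                                   # placeholder feature matrix
y = (X[:, 0] + 0.3 * rng.random(200) > 0.6).astype(int)     # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("Feature importances:", clf.feature_importances_.round(3))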
