
Publications by CTM

2020

Fusion of Clinical, Self-Reported, and Multisensor Data for Predicting Falls

Authors
Silva, J; Sousa, I; Cardoso, JS;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Falls are among the most frequent causes of loss of mobility and independence in the elderly population. Given global population aging, new strategies for predicting falls are required to reduce their occurrence. In this study, a multifactorial screening protocol was applied to 281 community-dwelling adults aged over 65, and their 12-month prospective falls were annotated. Clinical and self-reported data, along with data from instrumented functional tests involving inertial sensors and a pressure platform, were fused using early, late, and slow fusion approaches. For early and late fusion, a classification pipeline was designed employing stratified sampling for the generation of the training and test sets; grid search with cross-validation was used to optimize a set of feature selectors and classifiers. In the slow fusion approach, each data source was mixed in the middle layers of a multilayer perceptron. The three fusion approaches yielded similar results for the majority of the metrics. However, if recall is considered more important than specificity, the late fusion approach, with a recall of 78.6%, outperforms the other two.
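The distinction between early and late fusion can be sketched in a few lines: early fusion concatenates the features of all sources before a single classifier, while late fusion scores each source separately and combines the decisions. The sketch below is purely illustrative (toy features and a stand-in mean scorer), not the paper's pipeline.

```python
# Illustrative sketch of early vs. late data fusion (not the paper's code).
# Each "source" yields a feature vector per subject; the scorer is a toy
# stand-in for a trained classifier's decision score.

def early_fusion(sources, scorer):
    """Concatenate features from all sources, then score once."""
    fused = [f for feats in sources for f in feats]
    return scorer(fused)

def late_fusion(sources, scorers):
    """Score each source separately, then average the decision scores."""
    scores = [s(f) for s, f in zip(scorers, sources)]
    return sum(scores) / len(scores)

# Toy scorer: fall-risk score as the mean of (normalised) features.
mean_score = lambda feats: sum(feats) / len(feats)

clinical = [0.2, 0.4]        # e.g. clinical/self-reported features (made up)
inertial = [0.8, 0.6, 0.7]   # e.g. inertial-sensor features (made up)

risk_early = early_fusion([clinical, inertial], mean_score)
risk_late = late_fusion([clinical, inertial], [mean_score, mean_score])
```

Note the two strategies need not agree: early fusion implicitly weights sources by their number of features, whereas late fusion weights each source's decision equally.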

2020

3D digital breast cancer models with multimodal fusion algorithms

Authors
Bessa, S; Gouveia, PF; Carvalho, PH; Rodrigues, C; Silva, NL; Cardoso, F; Cardoso, JS; Oliveira, HP; Cardoso, MJ;

Publication
BREAST

Abstract
Breast cancer image fusion consists of registering and visualizing synchronized torso and radiological image sets of a patient in a 3D model. Breast spatial interpretation and visualization by the treating physician can be augmented with a patient-specific digital breast model that integrates radiological images. However, the absence of a ground truth for a good correlation between surface and radiological information has impaired the development of potential clinical applications. A new image acquisition protocol was designed to acquire breast Magnetic Resonance Imaging (MRI) and 3D surface scan data with surface markers on the patient's breasts and torso. A patient-specific digital breast model integrating the real breast torso and the tumor location was created and validated with an MRI/3D surface scan fusion algorithm in 16 breast cancer patients. This protocol was used to quantify breast shape differences between modalities and to measure the target registration error of several variants of the MRI/3D scan fusion algorithm. The fusion of single breasts without the biomechanical model of pose transformation yielded acceptable registration errors and accurate tumor locations. The performance of the fusion algorithm was not affected by breast volume. Further research and virtual clinical interfaces could lead to fast integration of this fusion technology into clinical practice. © 2020 The Authors. Published by Elsevier Ltd.
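The target registration error mentioned above is typically the mean distance between corresponding landmarks after registration. A minimal sketch, with toy 3D marker coordinates rather than the study's data:

```python
# Illustrative sketch of target registration error (TRE): the mean Euclidean
# distance between landmarks mapped by the registration and their
# ground-truth targets. Coordinates below are made up for illustration.

def target_registration_error(registered, ground_truth):
    """Mean Euclidean distance between corresponding 3D landmarks."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return sum(dist(p, q) for p, q in zip(registered, ground_truth)) / len(registered)

# Two surface markers, each landing 1 mm off along the z axis:
tre = target_registration_error([(0, 0, 0), (10, 0, 0)],
                                [(0, 0, 1), (10, 0, 1)])  # 1.0 (mm)
```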

2020

Automated Development of Custom Fall Detectors: Position, Model and Rate Impact in Performance

Authors
Silva, J; Gomes, D; Sousa, I; Cardoso, JS;

Publication
IEEE SENSORS JOURNAL

Abstract
Recent years have witnessed a boost in fall detection research, disclosing an extensive number of methodologies built upon similar principles but addressing particular use cases. These use cases frequently motivate algorithm fine-tuning, making the modelling stage a time- and effort-consuming process. This work contributes towards understanding the impact of several of the most frequent requirements of wearable-based fall detection solutions (usage positions, learning model, sampling rate) on their performance. We introduce a new machine learning pipeline, trained on a proprietary dataset, with a customisable modelling stage that enabled the assessment of performance over each combination of custom parameters. Finally, we benchmark a model deployed by our framework on the UMAFall dataset, achieving state-of-the-art results with an F1-score of 84.6% for the classification of the entire dataset, which included an unseen usage position (ankle), considering a sampling rate of 10 Hz and a Random Forest classifier.
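For reference, the F1-score reported here is the harmonic mean of precision and recall computed from confusion-matrix counts. The counts in the sketch below are toy values chosen only so that the result lands near the reported 84.6%; they are not the paper's confusion matrix.

```python
# Illustrative F1-score computation from confusion-matrix counts.
# tp/fp/fn values below are invented for the example.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(tp=11, fp=2, fn=2)  # ~0.846
```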

2020

A novel approach to keypoint detection for the aesthetic evaluation of breast cancer surgery outcomes

Authors
Goncalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;

Publication
HEALTH AND TECHNOLOGY

Abstract
The implementation of routine breast cancer screening and better treatment strategies has made it possible to offer the majority of women breast conservation instead of a mastectomy. The most important aim of breast cancer conservative treatment (BCCT) is to optimize the aesthetic outcome and, implicitly, quality of life (QoL) without jeopardizing local cancer control and overall survival. Given the impact the aesthetic outcome has on QoL, there has been an effort to define an optimal tool capable of performing this type of evaluation. Moving from the classical subjective aesthetic evaluation of BCCT (either by the patient herself or by a group of clinicians through questionnaires) to an objective aesthetic evaluation (where machine learning and computer vision methods are employed) leads to less variability and increased reproducibility of results. Currently, there are offline software applications available, such as BAT© and BCCT.core, which perform the assessment based on asymmetry measurements computed from semi-automatically annotated keypoints. In the literature, one can find algorithms that attempt fully automatic keypoint annotation with reasonable success; however, these algorithms are very time-consuming. As research moves towards the development of web software applications, such time-consuming tasks are undesirable. In this work, we propose a novel approach to the keypoint detection task, treating the problem in part as image segmentation. This novel approach can improve both execution time and results.
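A common way to turn a segmentation-style prediction into a keypoint is to take the location of the strongest activation in a per-keypoint heatmap. A minimal sketch of that idea (the heatmap values are made up; this is not the paper's network output):

```python
# Minimal sketch of recovering a keypoint from a segmentation-style heatmap:
# the predicted keypoint is the pixel with the highest activation.

def keypoint_from_heatmap(heatmap):
    """Return (row, col) of the highest-valued pixel."""
    return max(
        ((r, c) for r, row in enumerate(heatmap) for c in range(len(row))),
        key=lambda rc: heatmap[rc[0]][rc[1]],
    )

heatmap = [
    [0.0, 0.1, 0.0],
    [0.2, 0.9, 0.3],   # peak at row 1, col 1
    [0.0, 0.1, 0.0],
]
kp = keypoint_from_heatmap(heatmap)  # (1, 1)
```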

2020

Secure Triplet Loss for End-to-End Deep Biometrics

Authors
Pinto, JR; Cardoso, JS; Correia, MV;

Publication
2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020)

Abstract
Although deep learning is being widely adopted across pattern recognition, its use for secure and cancelable biometrics is currently reserved for feature extraction and biometric data preprocessing, limiting achievable performance. In this paper, we propose a novel formulation of the triplet loss methodology, designated secure triplet loss, that enables biometric template cancelability with end-to-end convolutional neural networks, using easily changeable keys. Trained and evaluated for electrocardiogram-based biometrics, the network proved easy to optimize with the modified triplet loss and achieved superior performance compared with the state of the art (10.63% equal error rate with data from 918 subjects of the UofTDB database). Additionally, it ensured biometric template security and effective template cancelability. Although further efforts are needed to avoid template linkability, the proposed secure triplet loss shows promise in template cancelability and non-invertibility for biometric recognition while taking advantage of the full power of convolutional neural networks.
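The classic triplet loss pulls an anchor towards a genuine ("positive") template and away from an impostor ("negative") one. A plausible reading of the cancelability requirement, sketched below as an assumption of ours rather than the paper's exact formulation, is to additionally treat the same identity enrolled under a different key as a negative, so that changing the key invalidates the old template.

```python
# Simplified sketch of a triplet loss with a cancelable-key term.
# The "same_id_other_key" penalty is our illustrative assumption about the
# secure variant, not the paper's exact loss.

def dist(u, v):
    """Euclidean distance between two template vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Classic triplet loss: pull the positive in, push the negative away."""
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

def secure_triplet_loss(anchor, positive, negative, same_id_other_key, margin=1.0):
    """Triplet loss plus a penalty if templates under different keys stay close."""
    return (triplet_loss(anchor, positive, negative, margin)
            + triplet_loss(anchor, positive, same_id_other_key, margin))

# Toy 2D templates: genuine pair close; impostor and other-key templates far,
# so both margin terms are satisfied and the loss is zero.
loss = secure_triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0], [3.0, 0.0])  # 0.0
```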

2020

Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices

Authors
Martins, J; Cardoso, JS; Soares, F;

Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE

Abstract
Background and Objective: Glaucoma, an eye condition that leads to permanent blindness, is typically asymptomatic and therefore difficult to diagnose in time. However, if diagnosed early, Glaucoma can be effectively slowed down with adequate treatment; hence, an early diagnosis is of the utmost importance. Nonetheless, the conventional approaches to diagnosing Glaucoma rely on expensive and bulky equipment operated by qualified experts, making it difficult, costly and time-consuming to screen large numbers of people. Consequently, new alternatives for diagnosing Glaucoma that overcome these issues should be explored. Methods: This work proposes an interpretable computer-aided diagnosis (CAD) pipeline that is capable of diagnosing Glaucoma using fundus images and of running offline on mobile devices. Several public datasets of fundus images were merged and used to build Convolutional Neural Networks (CNNs) that perform segmentation and classification tasks. These networks are then used to build a pipeline for Glaucoma assessment that outputs a Glaucoma confidence level and also provides several morphological features and segmentations of relevant structures, resulting in an interpretable Glaucoma diagnosis. To assess the performance of this method in a restricted environment, the pipeline was integrated into a mobile application, and its time and space complexities were assessed. Results: On the test set, the developed pipeline achieved an Intersection over Union (IoU) of 0.91 and 0.75 for optic disc and optic cup segmentation, respectively. Regarding classification, an accuracy of 0.87, a sensitivity of 0.85 and an AUC of 0.93 were attained. Moreover, the pipeline runs on an average Android smartphone in under two seconds. Conclusions: The results demonstrate the potential of this method to contribute to an early Glaucoma diagnosis. The proposed approach achieved similar or slightly better metrics than current CAD systems for Glaucoma assessment while running on more restricted devices. This pipeline can, therefore, be used to construct accurate and affordable CAD systems that could enable large Glaucoma screenings, contributing to an earlier diagnosis of this condition. © 2020
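The Intersection over Union reported for the segmentations is the ratio of overlapping to combined mask area. A minimal sketch over toy binary masks (flattened to 1-D for brevity; not the pipeline's actual outputs):

```python
# Illustrative Intersection over Union (IoU) between two binary masks,
# the segmentation metric used above. Masks are toy 0/1 sequences.

def iou(pred, target):
    """|A ∩ B| / |A ∪ B| for binary masks given as 0/1 sequences."""
    inter = sum(1 for a, b in zip(pred, target) if a and b)
    union = sum(1 for a, b in zip(pred, target) if a or b)
    return inter / union if union else 0.0

score = iou([1, 1, 1, 0], [0, 1, 1, 1])  # 2/4 = 0.5
```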
