
Publications by Jaime Cardoso

2019

Editorial

Authors
Carneiro, G; Tavares, JMRS; Bradley, AP; Papa, JP; Nascimento, JC; Cardoso, JS; Lu, Z; Belagiannis, V;

Publication
Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization

Abstract

2017

μSmartScope: 3D-printed Smartphone Microscope with Motorized Automated Stage

Authors
Rosado, L; Oliveira, J; Vasconcelos, MJM; da Costa, JMC; Elias, D; Cardoso, JS;

Publication
Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies (BIODEVICES), Vol. 1

Abstract
Microscopic examination is currently the gold standard test for the diagnosis of several neglected tropical diseases. However, reliable identification of parasitic infections requires in-depth training and access to proper equipment for subsequent microscopic analysis. These requirements are closely related to the increasing interest in the development of computer-aided diagnosis systems, and Mobile Health is starting to play an important role when it comes to health in Africa, allowing for distributed solutions that provide access to complex diagnosis even in rural areas. In this paper, we present a 3D-printed microscope that can easily be attached to a wide range of mobile device models. To the best of our knowledge, this is the first proposed smartphone-based alternative to conventional microscopy that allows autonomous acquisition of a pre-defined number of images at 1000x magnification with suitable resolution, by using a motorized automated stage fully powered and controlled by a smartphone, without the need for manual focusing of the smear slide. Reference smear slides with different parasites were used to test the device. The acquired images showed that it was possible to visually detect those agents, which clearly illustrates the potential of this device, especially in developing countries with limited access to healthcare services.
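
The abstract does not detail how the motorized stage finds focus; a minimal sketch of one common approach (sweeping the stage and keeping the sharpest position, scored by variance of the Laplacian) is shown below. The `stage` and `camera` objects are hypothetical stand-ins for the smartphone-controlled hardware, not the device's actual API.

```python
import cv2
import numpy as np

def sharpness(image_bgr):
    """Focus score: variance of the Laplacian (higher = sharper)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus_and_capture(stage, camera, n_images=10, z_steps=40, z_step_um=5):
    """Sweep the stage along Z, return to the sharpest focal position,
    then acquire a pre-defined number of images there.

    `stage` and `camera` are hypothetical driver objects exposing
    move_z(microns) and grab() -> BGR ndarray; the real device is
    driven from the smartphone, so these only stand in for that interface.
    """
    best_score, best_index = -np.inf, 0
    for step in range(z_steps):
        stage.move_z(z_step_um)                 # advance one focus step
        score = sharpness(camera.grab())
        if score > best_score:
            best_score, best_index = score, step
    # move back to the sharpest position found during the sweep
    stage.move_z(-(z_steps - 1 - best_index) * z_step_um)
    return [camera.grab() for _ in range(n_images)]
```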

2019

Deep Neural Networks for Biometric Identification Based on Non-Intrusive ECG Acquisitions

Authors
Pinto, JR; Cardoso, JS; Lourenço, A;

Publication
The Biometric Computing

Abstract

2020

Learning Signer-Invariant Representations with Adversarial Training

Authors
Ferreira, PM; Pernes, D; Rebelo, A; Cardoso, JS;

Publication
Twelfth International Conference on Machine Vision (ICMV 2019)

Abstract
Sign Language Recognition (SLR) has become an appealing topic in modern societies because such technology can ideally be used to bridge the gap between deaf and hearing people. Although important steps have been made towards the development of real-world SLR systems, signer-independent SLR is still one of the bottleneck problems of this research field. In this regard, we propose a deep neural network along with an adversarial training objective, specifically designed to address the signer-independent problem. Concretely, the proposed model consists of an encoder, mapping from input images to latent representations, and two classifiers operating on these underlying representations: (i) the sign-classifier, for predicting the class/sign labels, and (ii) the signer-classifier, for predicting their signer identities. During the learning stage, the encoder is simultaneously trained to help the sign-classifier as much as possible while trying to fool the signer-classifier. This adversarial training procedure allows learning signer-invariant latent representations that are in fact highly discriminative for sign recognition. Experimental results demonstrate the effectiveness of the proposed model and its capability of dealing with large inter-signer variations.
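
The abstract describes the adversarial objective but not its implementation. One standard way to realise "help the sign-classifier while fooling the signer-classifier" is a gradient reversal layer in the style of domain-adversarial training; the sketch below assumes that formulation in PyTorch, with a placeholder encoder, and may differ from the authors' actual training scheme.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backwards."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class SignerInvariantNet(nn.Module):
    def __init__(self, feat_dim, n_signs, n_signers, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        # placeholder encoder; the paper maps input images to latent representations
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.sign_clf = nn.Linear(feat_dim, n_signs)      # predicts sign labels
        self.signer_clf = nn.Linear(feat_dim, n_signers)  # predicts signer identities

    def forward(self, x):
        z = self.encoder(x)
        sign_logits = self.sign_clf(z)
        # gradients from the signer head are reversed before reaching the encoder,
        # so the encoder learns features that fool the signer-classifier
        signer_logits = self.signer_clf(GradReverse.apply(z, self.lamb))
        return sign_logits, signer_logits

def train_step(model, optimiser, x, sign_y, signer_y):
    # both heads minimise cross-entropy; the reversal makes the encoder
    # maximise the signer loss while minimising the sign loss
    criterion = nn.CrossEntropyLoss()
    sign_logits, signer_logits = model(x)
    loss = criterion(sign_logits, sign_y) + criterion(signer_logits, signer_y)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```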

2020

Automatic detection of perforators for microsurgical reconstruction

Authors
Mavioso, C; Araujo, RJ; Oliveira, HP; Anacleto, JC; Vasconcelos, MA; Pinto, D; Gouveia, PF; Alves, C; Cardoso, F; Cardoso, JS; Cardoso, MJ;

Publication
The Breast

Abstract
The deep inferior epigastric perforator (DIEP) is the most commonly used free flap in mastectomy reconstruction. Preoperative imaging techniques are routinely used to detect the location, diameter and course of perforators, with direct intervention from the imaging team, who subsequently draw a chart that helps surgeons choose the best vascular support for the reconstruction. In this work, the feasibility of using computer software to support the preoperative planning of 40 patients proposed for breast reconstruction with a DIEP flap is evaluated for the first time. Blood vessel centreline extraction and local characterization algorithms are applied to identify perforators and compared with the manual mapping, aiming to reduce the time spent by the imaging team, as well as the subjectivity inherent to the task. Compared with the measures taken during surgery, the software calibre estimates were worse for vessels smaller than 1.5 mm (P = 6e-4) but better for the remaining ones (P = 2e-3). Regarding vessel location, the vertical component of the software output was significantly different from the manual measure (P = 0.02); nevertheless, this was irrelevant during surgery, as errors on the order of 2-3 mm have no impact on the dissection step. Our trials indicate that a reduction of the time spent is achievable with the automatic tool (about 2 h/case). The introduction of artificial intelligence in clinical practice aims to simplify the work of health professionals and to provide better outcomes for patients. This pilot study paves the way for a success story. © 2020 The Authors. Published by Elsevier Ltd.
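
The abstract names centreline extraction and local vessel characterization without giving the algorithm. A simplified 2D sketch of that general idea (skeletonize a binary vessel segmentation and estimate local calibre from a distance transform) is shown below; the authors' software works on preoperative imaging and may use a different, more elaborate pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def centrelines_and_calibre(vessel_mask, pixel_size_mm):
    """Given a binary vessel segmentation, return its centreline map and a
    per-centreline-pixel calibre estimate in mm.

    The calibre at each skeleton pixel is approximated as twice the distance
    to the nearest background pixel (the local inscribed diameter).
    """
    skeleton = skeletonize(vessel_mask.astype(bool))          # 1-pixel-wide centrelines
    dist_to_background = distance_transform_edt(vessel_mask)  # radius estimate per pixel
    calibre_mm = 2.0 * dist_to_background * pixel_size_mm
    return skeleton, np.where(skeleton, calibre_mm, 0.0)
```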
