
Publications by João Paulo Cunha

2006

MPEG-7 visual descriptors - Contributions for automated feature extraction in capsule endoscopy

Authors
Coimbra, MT; Cunha, JPS;

Publication
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract
Recent advances in miniaturization led to the development of what is now called the endoscopic capsule. This small device is swallowed by a patient and films the whole gastrointestinal tract, allowing the detection of abnormalities. Currently, a doctor typically needs up to two hours to analyze a full exam, so automation is desirable. This paper presents a methodology for measuring the potential of selected MPEG-7 visual descriptors for the task of detecting specific medical events such as blood and ulcers. Experiments show that the best results are obtained by the Scalable Color and Homogeneous Texture descriptors, especially if only relevant coefficients are used.
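The descriptor-matching idea behind this abstract can be sketched as follows. This is an illustrative analog only, not the MPEG-7 reference implementation: a coarse colour histogram stands in for the Scalable Color descriptor, and the "relevant coefficients" selection is modelled by restricting the distance to a subset of bins. All pixel values and names here are invented.

```python
# Hedged sketch: colour-histogram descriptor plus coefficient-restricted
# distance, as a stand-in for MPEG-7 Scalable Color feature matching.

def color_histogram(pixels, bins=4):
    """Quantise (r, g, b) pixels in [0, 255] into a normalised bins^3 histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def descriptor_distance(a, b, relevant=None):
    """Euclidean distance, optionally restricted to selected coefficients."""
    idxs = relevant if relevant is not None else range(len(a))
    return sum((a[i] - b[i]) ** 2 for i in idxs) ** 0.5

# Toy frames: a predominantly red (blood-like) patch vs. pink tissue.
reddish = [(220, 30, 30)] * 10
pinkish = [(200, 150, 150)] * 10
d = descriptor_distance(color_histogram(reddish), color_histogram(pinkish))
```

In a real system the descriptors would be extracted per frame and fed to a trained classifier; the point of the sketch is only the feature-plus-distance structure.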

2005

Extracting clinical information from endoscopic capsule exams using MPEG-7 visual descriptors

Authors
Coimbra, M; Campos, P; Cunha, JPS;

Publication
IET Seminar Digest

Abstract
The endoscopic capsule is a recent technological breakthrough with high clinical importance. Exam analysis duration is its main setback, requiring an average of two hours from a trained specialist. Automation is required, and this paper presents a topographic segmentation tool using low-level features that can reduce annotation times by up to 15 minutes per exam. This is accomplished using Bayesian classifiers and MPEG-7 visual descriptors.
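The kind of Bayesian classification this abstract refers to can be sketched with a Gaussian naive Bayes over low-level image features, assigning each frame to a topographic zone. The zone names, feature values, and two-feature setup below are made up for illustration and are not taken from the paper.

```python
import math

# Hedged sketch: Gaussian naive Bayes for topographic zone classification.
# Features and zones are hypothetical stand-ins for the paper's descriptors.

def fit_gaussians(samples):
    """Per-zone (mean, variance) for each feature from labelled vectors."""
    model = {}
    for zone, vectors in samples.items():
        params = []
        for dim in zip(*vectors):
            mu = sum(dim) / len(dim)
            var = sum((x - mu) ** 2 for x in dim) / len(dim) + 1e-6
            params.append((mu, var))
        model[zone] = params
    return model

def classify(model, features):
    """Pick the zone maximising the log Gaussian likelihood."""
    def log_lik(params):
        return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
                   for x, (mu, var) in zip(features, params))
    return max(model, key=lambda z: log_lik(model[z]))

# Hypothetical training data: one colour feature and one texture feature.
training = {
    "stomach":   [(0.80, 0.20), (0.70, 0.25), (0.75, 0.30)],
    "intestine": [(0.30, 0.60), (0.35, 0.70), (0.25, 0.65)],
}
model = fit_gaussians(training)
```

A frame whose features sit near one zone's training cluster is assigned to that zone, e.g. `classify(model, (0.78, 0.22))` yields `"stomach"` under these toy numbers.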

2012

Vital responder - Wearable sensing challenges in uncontrolled critical environments

Authors
Coimbra, M; Silva Cunha, JP;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering

Abstract
The goal of the Vital Responder research project is to explore the synergies between innovative wearable technologies, scattered sensor networks, intelligent building technology and precise localization services to provide secure, reliable and effective first-response systems in critical emergency scenarios. Critical events, such as natural disasters or other large-scale emergencies, induce fatigue and stress in first responders, such as firefighters, police officers and paramedics. Distinct fatigue and stress factors (and even pathologies) have been identified among these professionals. Nevertheless, previous work has uncovered a lack of real-time monitoring and decision technologies that can lead to an in-depth understanding of the physiological stress processes and to the development of adequate response mechanisms. Our "silver bullet" to address these challenges is a suite of non-intrusive wearable technologies, as inconspicuous as a t-shirt, capable of gathering relevant information about the individual and disseminating this information through a wireless sensor network. In this paper we describe the objectives, activities and results of the first two years of the Vital Responder project, showing how it is possible to address wearable sensing challenges even in very uncontrolled environments. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.

2009

ECCA - Endoscopic Capsule Capview cAtaloguer

Authors
Lima, S; Silva Cunha, JPS; Coimbra, M; Soares, JM;

Publication
WORLD CONGRESS ON MEDICAL PHYSICS AND BIOMEDICAL ENGINEERING, VOL 25, PT 5

Abstract
Statistical pattern recognition research, namely in applied computer vision, typically needs highly accurate, massive datasets to train and test its classifiers. This paper presents extensive work for creating a large clinically annotated dataset of high-confidence events for gastroenterology. More specifically, we address images and videos obtained using endoscopic capsule imaging technology that contain some kind of lesion. The purpose of such a dataset is to boost scientific research in computer-aided diagnostic systems for a technology that would clearly benefit from them.

2006

Combining color with spatial and temporal position of the endoscopic capsule for improved topographic classification and segmentation

Authors
Coimbra, M; Kustra, J; Campos, P; Silva Cunha, JP;

Publication
CEUR Workshop Proceedings

Abstract
Capsule endoscopy is a recent technology with a clear need for automatic tools that reduce the long annotation times of exams. We have previously developed a topographic segmentation method, which is now improved by using spatial and temporal position information. Two approaches are studied: using this information as a confidence measure for our previous segmentation method, and directly integrating this data into the image classification process. These allow us not only to know automatically when we have obtained results with error magnitudes close to human error, but also to reduce these automatic errors to much lower values. All the developed methods have been integrated into the CapView annotation software, currently used in clinical practice in hospitals responsible for over 250 capsule exams per year, where we estimate that the two-hour annotation times are reduced by around 15 minutes.
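The second approach the abstract mentions, folding the capsule's temporal position directly into classification, can be sketched as a posterior proportional to an image likelihood times a position-dependent prior. The prior shape, zone names, and likelihood values below are invented for illustration; they are not the paper's actual model.

```python
# Hedged sketch: combining per-frame image evidence with a prior over
# topographic zones derived from normalised exam time t in [0, 1].

def position_prior(t):
    """Crude hypothetical prior: early frames favour stomach, late favour intestine."""
    return {"stomach": max(0.0, 1.0 - 2.0 * t), "intestine": min(1.0, 2.0 * t)}

def combine(likelihoods, t):
    """Posterior proportional to image likelihood x position prior, renormalised."""
    prior = position_prior(t)
    post = {z: likelihoods[z] * prior[z] for z in likelihoods}
    total = sum(post.values()) or 1.0
    return {z: p / total for z, p in post.items()}

# An ambiguous frame (equal image evidence) early in the exam:
posterior = combine({"stomach": 0.5, "intestine": 0.5}, t=0.1)
```

The point of the design is that temporal position resolves frames the image classifier alone cannot: the same ambiguous likelihoods yield different zones at t=0.1 and t=0.9.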

2009

A TOOL FOR ENDOSCOPIC CAPSULE DATASET PREPARATION FOR CLINICAL VIDEO EVENT DETECTOR ALGORITHMS

Authors
Lima, S; Cunha, JP; Coimbra, M; Soares, JM;

Publication
HEALTHINF 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS

Abstract
All R&D projects include at least one phase of model verification and accuracy assessment, and when working with visual information (such as pictures and video) this phase deserves particular emphasis. When working with medical information and clinical trials, the accuracy of automatic results is critical. This work addresses the need for a large, well-annotated dataset of images retrieved from the endoscopic capsule. Such datasets can be used to train computer vision algorithms focused on endoscopic capsule video processing and event detection.
