2019
Authors
Shakibapour, E; Cunha, A; Aresta, G; Mendonca, AM; Campilho, A;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
This paper proposes a new methodology to automatically segment and measure the volume of pulmonary nodules in lung computed tomography (CT) scans. The need to estimate the malignancy likelihood of a pulmonary nodule from lesion characteristics motivated the development of an unsupervised segmentation and volume measurement method as a preliminary stage for nodule characterization. The idea is to optimally cluster a set of feature vectors, composed of intensity and shape-related features extracted from a pre-detected nodule, in a given feature space. For that purpose, a metaheuristic search based on evolutionary computation is used to cluster the corresponding feature vectors. The proposed method is simple, unsupervised, and able to segment nodules of different locations and textures without any manual annotation. We validate the proposed segmentation and volume measurement on two subsets of the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. The first is a group of 705 solid and sub-solid (assessed as part-solid and non-solid) nodules located in different regions of the lungs; the second, more challenging, is a group of 59 sub-solid nodules. Average Dice scores of 82.35% and 71.05% on the two subsets show the good performance of the segmentation proposal. Comparisons with previous state-of-the-art techniques also show acceptable and comparable segmentation results. The volumes of the segmented nodules are measured via ellipsoid approximation. The correlation and statistical significance between the measured volumes of the segmented nodules and the ground truth are assessed with the Pearson correlation coefficient, yielding an R-value >= 92.16% at a 5% significance level.
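As a minimal illustrative sketch (not the authors' implementation), the two evaluation quantities mentioned above can be written as follows: the Dice score between a predicted and a reference nodule mask, and an ellipsoid approximation of the nodule volume from three semi-axis lengths; all names and values are hypothetical.

import numpy as np
from scipy.stats import pearsonr

def dice_score(pred, gt):
    # Dice similarity between two binary masks (boolean/0-1 numpy arrays)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def ellipsoid_volume(a, b, c):
    # Volume of an ellipsoid with semi-axes a, b, c (e.g., in mm)
    return 4.0 / 3.0 * np.pi * a * b * c

# Agreement between measured and reference volumes (illustrative arrays):
# r, p = pearsonr(measured_volumes, reference_volumes)  # significant if p < 0.05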
2018
Authors
Ferreira, CA; Cunha, A; Mendonça, AM; Campilho, A;
Publication
Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications - 23rd Iberoamerican Congress, CIARP 2018, Madrid, Spain, November 19-22, 2018, Proceedings
Abstract
Lung cancer is one of the most common causes of death in the world. The early detection of lung nodules allows an appropriate follow-up, timely treatment, and can potentially avoid greater damage to the patient's health. Texture is one of the nodule characteristics correlated with malignancy. We developed convolutional neural network architectures to automatically classify the texture of nodules into the non-solid, part-solid and solid classes. Different architectures were tested to determine whether the context, the number of slices considered as input, and the relation between slices influence texture classification performance. The best-performing architecture took into account different scales, different rotations and the context of the nodule, obtaining an accuracy of 0.833 ± 0.041. © Springer Nature Switzerland AG 2019.
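As a rough illustration only, the following PyTorch sketch shows a small CNN that stacks neighbouring CT slices as input channels and predicts the three texture classes; the layer sizes and the way slice context is encoded are assumptions, not the architecture evaluated in the paper.

import torch
import torch.nn as nn

class NoduleTextureCNN(nn.Module):
    # Hypothetical classifier: adjacent axial slices stacked as input channels
    def __init__(self, n_slices=3, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_slices, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),  # non-solid, part-solid, solid
        )

    def forward(self, x):  # x: (batch, n_slices, height, width) nodule patch
        return self.classifier(self.features(x))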
2019
Authors
Al Hajj, H; Lamard, M; Conze, PH; Roychowdhury, S; Hu, XW; Marsalkaite, G; Zisimopoulos, O; Dedmari, MA; Zhao, FQ; Prellberg, J; Sahu, M; Galdran, A; Araujo, T; Vo, DM; Panda, C; Dahiya, N; Kondo, S; Bian, ZB; Vandat, A; Bialopetravicius, J; Flouty, E; Qiu, CH; Dill, S; Mukhopadhyay, A; Costa, P; Aresta, G; Ramamurthys, S; Lee, SW; Campilho, A; Zachow, S; Xia, SR; Conjeti, S; Stoyanov, D; Armaitis, J; Heng, PA; Macready, WG; Cochener, B; Quellec, G;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
2018
Authors
Wanderley, DS; Carvalho, CB; Domingues, A; Peixoto, C; Pignatelli, D; Beires, J; Silva, J; Campilho, A;
Publication
Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications - 23rd Iberoamerican Congress, CIARP 2018, Madrid, Spain, November 19-22, 2018, Proceedings
Abstract
The segmentation and characterization of ovarian structures are important tasks in gynecological and reproductive medicine. Ultrasound imaging is typically used for medical diagnosis in this field, but the images can be difficult to interpret due to their characteristics. Furthermore, the complexity of ultrasound data may require heavy image processing, which makes it difficult to apply classical computer vision methods. This work presents the first supervised fully convolutional neural network (fCNN) for the automatic segmentation of ovarian structures in B-mode ultrasound images. Due to the small dataset available, only 57 images were used for training. In order to overcome this limitation, several regularization techniques were used and are discussed in this paper. The experiments show the ability of the fCNN to learn features that distinguish ovarian structures, achieving a Dice similarity coefficient (DSC) of 0.855 for the segmentation of the stroma and a DSC of 0.955 for the follicles. When compared with a semi-automatic commercial application for follicle segmentation, the proposed fCNN achieved an average improvement of 19%. © Springer Nature Switzerland AG 2019.
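As an illustration of one common way to regularize a network trained on very few images (the abstract does not detail which techniques the authors used), the sketch below applies the same random geometric transforms to an ultrasound image and its segmentation mask; all names are hypothetical.

import numpy as np

def augment_pair(image, mask, rng=None):
    # Paired augmentation: geometric transforms applied identically to image and mask
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                               # random horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))                          # random multiple of 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    image = image + rng.normal(0.0, 0.01, image.shape)   # mild intensity noise (image only)
    return image, mask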
2016
Authors
Canedo, VB; Remeseiro, B; Betanzos, AA; Campilho, A;
Publication
24th European Symposium on Artificial Neural Networks, ESANN 2016, Bruges, Belgium, April 27-29, 2016
Abstract
2018
Authors
Machado, M; Aresta, G; Leitao, P; Carvalho, AS; Rodrigues, M; Ramos, I; Cunha, A; Campilho, A;
Publication
2018 1ST INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2018)
Abstract
Lung cancer diagnosis is made by radiologists through nodule search in chest Computed Tomography (CT) scans. This task is known to be difficult and prone to errors that can lead to late diagnosis. Although Computer-Aided Diagnostic (CAD) systems are promising tools for clinical practice, experienced radiologists continue to achieve better diagnostic performance than CAD systems. This paper proposes a methodology for characterizing the radiologist's gaze during nodule search in chest CT scans. The main goals are to identify regions that attract the radiologists' attention, which can then be used to improve a lung CAD system, and to create a tool to assist radiologists during the search task. For that purpose, the methodology records the radiologists' gaze and mouse coordinates during the nodule search. These data are then processed to obtain a 3D gaze path from which relevant attention studies can be derived. To better convey the resulting information, a reference model of the lung that eases the communication of the location of relevant anatomical/pathological findings is also proposed. The methodology is tested on a set of 24 real-practice gaze recordings, captured with an eye tracker, from 3 radiologists.
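As a minimal sketch of how 2D gaze samples might be lifted to a 3D path, assuming the eye tracker provides in-slice coordinates per sample and the viewer log records which axial slice was displayed at that instant; this data layout is an assumption, not the authors' pipeline.

import numpy as np

def gaze_path_3d(gaze_xy, slice_index, slice_thickness_mm=1.0):
    # gaze_xy: (N, 2) in-slice coordinates; slice_index: (N,) axial slice shown per sample
    z = np.asarray(slice_index, dtype=float) * slice_thickness_mm
    return np.column_stack([np.asarray(gaze_xy, dtype=float), z])  # (N, 3) gaze path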