2020
Authors
Andrade, C; Teixeira, LF; Vasconcelos, MJM; Rosado, L;
Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II
Abstract
With the ever-increasing incidence of skin cancer, timely and accurate skin cancer detection has become clinically more imperative. A clinical mobile-based deep learning approach is a possible solution to this challenge. Nevertheless, there is a major impediment to the development of such a model: the scarce availability of labelled data acquired with mobile devices, namely macroscopic images. In this work, we present two experiments to assemble a robust deep learning model for macroscopic skin lesion segmentation and to capitalize on the sizable dermoscopic databases. In the first experiment, two groups of deep learning models, U-Net based and DeepLab based, were created and tested exclusively on the available macroscopic images. In the second experiment, the possibility of transferring knowledge between the domains was tested. To accomplish this, the selected model was retrained on the dermoscopic images and subsequently fine-tuned with the macroscopic images. The best model from the first experiment was a DeepLab-based model with a MobileNetV2 feature extractor with a width multiplier of 0.35, optimized with the soft Dice loss. This model comprised 0.4 million parameters and obtained a thresholded Jaccard coefficient of 72.97% and 78.51% on the Dermofit and SMARTSKINS databases, respectively. In the second experiment, with the use of transfer learning, the performance of this model improved significantly to 75.46% on the first database and decreased slightly to 78.04% on the second. © 2020, The Author(s).
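The loss and evaluation metric named in the abstract can be sketched compactly. Below is a minimal NumPy illustration of the soft Dice loss and the thresholded Jaccard coefficient; the 0.65 cut-off in `thresholded_jaccard` follows the common ISIC-challenge convention and is an assumption, not a detail taken from the paper.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩G| / (|P|+|G|), computed on soft probabilities."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def thresholded_jaccard(pred_mask, gt_mask, thresh=0.65):
    """Jaccard index if it reaches the threshold, otherwise 0
    (scoring rule popularised by the ISIC segmentation challenges)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    j = inter / union if union else 1.0
    return j if j >= thresh else 0.0

# Perfect overlap gives a loss near 0; disjoint masks give a loss near 1.
mask = np.array([[1.0, 1.0], [0.0, 0.0]])
print(round(soft_dice_loss(mask, mask), 4))        # 0.0
print(round(soft_dice_loss(mask, 1.0 - mask), 4))  # 1.0
```

The soft variant operates directly on predicted probabilities, so it stays differentiable for training, while the thresholded Jaccard is only used at evaluation time on binarised masks.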
2020
Authors
Pereira, A; Carvalho, P; Coelho, G; Corte Real, L;
Publication
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Abstract
Color and color differences are critical aspects in many image processing and computer vision applications. A paradigmatic example is object segmentation, where color distances can greatly influence the performance of the algorithms. Metrics for color difference have been proposed in the literature, including standards such as CIEDE2000, which quantifies the perceived difference between two given colors. This standard has been recommended for industrial computer vision applications, but the benefits of its adoption have been impaired by the complexity of the formula. This paper proposes a new strategy that improves the usability of the CIEDE2000 metric when a maximum acceptable distance can be imposed. We argue that, for applications where a maximum value can be established above which colors are considered different, it is possible to reduce the number of evaluations of the metric by preemptively analyzing the color features. This methodology retains the benefits of the metric while overcoming its computational limitations, thus broadening the range of computer vision applications in which CIEDE2000 can be adopted within practical computational resource budgets.
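The pre-screening idea — deciding "same or different" against a maximum acceptable distance without always evaluating the full formula — can be illustrated with a simplified stand-in. Here `de76` (plain Euclidean distance in Lab, i.e. CIE76) replaces the far costlier CIEDE2000 formula, and the per-channel gate is a valid lower bound only for that stand-in; the paper derives the actual pre-screening conditions for CIEDE2000 itself.

```python
import math

def de76(lab1, lab2):
    """Stand-in for the expensive metric: Euclidean distance in Lab (CIE76).
    The paper targets the much more complex CIEDE2000 formula."""
    return math.dist(lab1, lab2)

def colours_differ(lab1, lab2, max_de):
    """Return True when the two colours are farther apart than max_de.
    Cheap pre-check: each per-channel absolute difference is a lower bound
    on the Euclidean distance, so exceeding max_de on any single channel
    already decides the outcome and the full metric call is skipped."""
    if any(abs(a - b) > max_de for a, b in zip(lab1, lab2)):
        return True  # decided without evaluating the full metric
    return de76(lab1, lab2) > max_de
```

The speed-up comes from the fact that, in many industrial inspection settings, most colour pairs are either clearly alike or clearly different, so the cheap gate resolves the bulk of comparisons.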
2020
Authors
Martins, I; Carvalho, P; Corte Real, L; Luis Alba Castro, JL;
Publication
COMPUTER VISION AND IMAGE UNDERSTANDING
Abstract
One of the most difficult scenarios for unsupervised segmentation of moving objects is found in nighttime videos, where the main challenges are poor illumination conditions resulting in low visibility of objects, very strong lights, surface-reflected light, large variance of light intensity, sudden illumination changes, hard shadows, camouflaged objects, and noise. This paper proposes a novel method, coined COLBMOG (COLlinearity Boosted MOG), devised specifically for foreground segmentation in nighttime videos, that overcomes some of the limitations of state-of-the-art methods while still performing well in daytime scenarios. It is a texture-based classification method, using local texture modeling, complemented by a color-based classification method. The local texture at the pixel neighborhood is modeled as an n-dimensional vector. For a given pixel, the classification is based on the collinearity between this feature in the input frame and in the reference background frame. For this purpose, a multimodal temporal model of the collinearity between texture vectors of background pixels is maintained. COLBMOG was objectively evaluated on the Night Videos category of the ChangeDetection.net (CDnet) 2014 benchmark, where it ranks first among all unsupervised methods. A detailed analysis of the results revealed the superior performance of the proposed method compared to the best-performing state-of-the-art methods in this category, particularly evident in the most complex situations, where all algorithms tend to fail.
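The core collinearity measure between texture vectors can be sketched as a cosine similarity; this is an illustrative reading of the abstract, not the authors' exact multimodal temporal model. The appeal for nighttime video is that a global illumination change rescales a local texture vector without changing its direction.

```python
import numpy as np

def collinearity(u, v, eps=1e-12):
    """Cosine of the angle between two texture vectors; 1 means perfectly
    collinear (same local texture up to an illumination scaling)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

# A texture vector under a global illumination drop stays collinear with the
# background model, so the pixel can still be classified as background.
bg = np.array([10.0, 20.0, 30.0, 40.0])
darker = 0.3 * bg
print(collinearity(bg, darker))  # ~1.0
```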
2020
Authors
Pinheiro, G; Pereira, T; Dias, C; Freitas, C; Hespanhol, V; Costa, JL; Cunha, A; Oliveira, HP;
Publication
SCIENTIFIC REPORTS
Abstract
EGFR and KRAS are the most frequently mutated genes in lung cancer, being active research topics in targeted therapy. The biopsy is the traditional method to genetically characterise a tumour. However, it is a risky procedure, painful for the patient, and, occasionally, the tumour might be inaccessible. This work aims to study and debate the nature of the relationships between imaging phenotypes and lung cancer-related mutation status. Until now, the literature has failed to point to new research directions, mainly consisting of results-oriented works in a field where there is still not enough available data to train clinically viable models. We intend to open a discussion about critical points and to present new possibilities for future radiogenomics studies. We conducted high-dimensional data visualisation and developed classifiers, which allowed us to analyse the results for EGFR and KRAS biological markers according to different combinations of input features. We show that EGFR mutation status might be correlated to CT scans imaging phenotypes; however, the same does not seem to hold for KRAS mutation status. Also, the experiments suggest that the best way to approach this problem is by combining nodule-related features with features from other lung structures.
2020
Authors
Carvalho, PH; Bessa, S; Silva, ARM; Peixoto, PS; Segundo, MA; Oliveira, HP;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, PT I
Abstract
The overuse of antibiotics is polluting the environment with antibiotic residues, a major threat to global health since it drives bacteria to develop resistance. To monitor this threat, multiple antibiotic detection methods have been developed; however, they are usually complex and costly. In this work, an affordable, easy-to-use alternative based on digital colourimetry is proposed. Photographs of samples next to a colour reference target were acquired to build a dataset. The proposed algorithm detects the reference target using binarisation algorithms in order to standardise the collected images with a colour correction matrix converting from RGB to XYZ, providing the colour constancy needed between photographs from different devices. Afterwards, the sample is extracted through edge detection and Hough transform algorithms. Finally, the sulfonamide concentration is estimated using an experimentally designed calibration curve, which correlates concentration and colour information. The best performance was obtained using the Hue channel, achieving a relative standard deviation below 3.5%. © 2019, Springer Nature Switzerland AG.
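The colour-standardisation step — fitting a correction matrix from the reference target's patches — can be sketched as a linear least-squares fit. The purely linear 3x3 model and the function name below are assumptions for illustration; the paper's exact correction procedure may differ.

```python
import numpy as np

def fit_colour_correction(measured_rgb, reference_xyz):
    """Least-squares 3x3 matrix M mapping device RGB rows to the reference
    XYZ rows of the colour target's patches; apply with measured_rgb @ M."""
    M, *_ = np.linalg.lstsq(measured_rgb, reference_xyz, rcond=None)
    return M

# Synthetic check: patches rendered through a known matrix are recovered.
rng = np.random.default_rng(0)
M_true = np.array([[0.5, 0.1, 0.0],
                   [0.2, 0.8, 0.1],
                   [0.0, 0.1, 0.9]])
patches_rgb = rng.random((24, 3))      # 24 hypothetical target patches
patches_xyz = patches_rgb @ M_true
M_fit = fit_colour_correction(patches_rgb, patches_xyz)
```

Once fitted per photograph, the same matrix is applied to every pixel, which is what makes measurements comparable across different devices and lighting.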
2020
Authors
Teixeira, JF; Carreiro, AM; Santos, RM; Oliveira, HP;
Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II
Abstract
Breast ultrasound has long been used to support diagnostic and exploratory procedures concerning breast cancer, with an interesting success rate, especially when complemented with other radiology information. Its usefulness can be further enhanced in pre-treatment clinical analysis by coupling B-mode images to 3D space, as found in Magnetic Resonance Imaging (MRI), for instance. In fact, lesions in B-mode are visible and present high detail compared with other 3D sequences. This coupling, however, would largely benefit from the ability to match the various structures present in B-mode, beyond the broadly studied lesion. In this work, we focus on structures such as skin, subcutaneous fat, mammary gland and thoracic region. We provide a preliminary insight into several structure segmentation approaches in the hope of obtaining a functional and dependable pipeline for delineating these potential reference regions, which will assist in multi-modal radiological data alignment. For this, we experiment with pre-processing stages that include Anisotropic Diffusion guided by Log-Gabor filters (ADLG) and main segmentation steps using K-Means, Mean Shift and Watershed. Among the pipeline configurations tested, the best results were obtained using the ADLG filter run for 50 iterations with H-Maxima suppression of 20%, and the K-Means method with K = 6. The results include several cases that closely approach the ground truth, despite larger average errors overall. This encourages the experimentation of other approaches that could withstand the innate data variability that makes this task very challenging. © Springer Nature Switzerland AG 2020.
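The K-Means stage with K = 6 can be sketched on intensity features alone. This is a simplification: the paper's best pipeline first applies ADLG filtering and H-Maxima suppression, both omitted here, and the quantile initialisation is an assumption chosen for determinism.

```python
import numpy as np

def kmeans_segment(image, k=6, iters=20):
    """Plain K-Means on pixel intensities (quantile initialisation),
    returning a per-pixel cluster-label map of the same shape."""
    pixels = image.reshape(-1, 1).astype(float)
    # Initialise centroids at intensity quantiles for reproducible results.
    centroids = np.quantile(pixels, np.linspace(0.0, 1.0, k))[:, None]
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centroids[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)

# Two flat intensity regions should map to two distinct labels.
img = np.zeros((10, 10))
img[:5, :] = 100.0
seg = kmeans_segment(img, k=2)
```

In the ultrasound setting, each cluster would ideally correspond to one of the target layers (skin, subcutaneous fat, mammary gland, thoracic region), which is why K is set above the number of structures of interest.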