Publications

Publications by BIO

2018

Single Particle Differentiation through 2D Optical Fiber Trapping and Back-Scattered Signal Statistical Analysis: An Exploratory Approach

Authors
Paiva, JS; Ribeiro, RSR; Cunha, JPS; Rosa, CC; Jorge, PAS;

Publication
SENSORS

Abstract
Recent trends in microbiology point to the urgent need to develop optical micro-tools with multiple functionalities, such as simultaneous manipulation and sensing. Considering that miniaturization has been recognized as one of the most important paradigms of emerging sensing biotechnologies, optical fiber tools, including Optical Fiber Tweezers (OFTs), are suitable candidates for developing multifunctional small sensors for Medicine and Biology. OFTs are flexible and versatile optotools based on fibers with one extremity patterned to form a micro-lens. These are able to focus laser beams and exert forces onto microparticles strong enough (piconewtons) to trap and manipulate them. In this paper, through an exploratory analysis of a set of 45 features, including time- and frequency-domain parameters of the back-scattered signal of particles trapped by a polymeric lens, we created a novel single feature able to differentiate synthetic particles (PMMA and Polystyrene) from living yeast cells. This single statistical feature can be useful for the development of label-free hybrid optical fiber sensors with applications in infectious disease detection or cell sorting. It can also contribute, by revealing the most significant information that can be extracted from the scattered signal, to the development of a simpler method for particle characterization (in terms of composition and degree of heterogeneity) than existing technologies.
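
As a rough illustration of the kind of time- and frequency-domain descriptors mentioned in the abstract, the sketch below computes a handful of generic statistics from a back-scattered intensity trace. The function name, the sampling rate and the Welch-based spectral descriptors are assumptions chosen for illustration; they are not the authors' 45-feature set or their derived single feature.

```python
import numpy as np
from scipy import signal, stats

def extract_features(trace, fs=1000.0):
    """Generic time- and frequency-domain descriptors of a 1D
    back-scattered intensity trace sampled at fs Hz (illustrative only)."""
    feats = {}
    # Time-domain statistics of the raw trace
    feats["mean"] = float(np.mean(trace))
    feats["std"] = float(np.std(trace))
    feats["skewness"] = float(stats.skew(trace))
    feats["kurtosis"] = float(stats.kurtosis(trace))
    # Frequency-domain descriptors from the Welch power spectral density
    freqs, psd = signal.welch(trace, fs=fs)
    psd_norm = psd / psd.sum()
    feats["spectral_centroid"] = float(np.sum(freqs * psd_norm))
    feats["spectral_entropy"] = float(-np.sum(psd_norm * np.log2(psd_norm + 1e-12)))
    return feats
```

In an exploratory setting such as the one described in the abstract, a feature pool of this kind would then be reduced, through statistical analysis, to the descriptors that best separate synthetic particles from living cells.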

2018

Cross-eyed 2017: Cross-spectral iris/periocular recognition competition

Authors
Sequeira A.F.; Chen L.; Ferryman J.; Wild P.; Alonso-Fernandez F.; Bigun J.; Raja K.B.; Raghavendra R.; Busch C.; De Freitas Pereira T.; Marcel S.; Behera S.S.; Gour M.; Kanhangad V.;

Publication
IEEE International Joint Conference on Biometrics, IJCB 2017

Abstract
This work presents the 2nd Cross-Spectrum Iris/Periocular Recognition Competition (Cross-Eyed2017). The main goal of the competition is to promote and evaluate advances in cross-spectrum iris and periocular recognition. This second edition registered an increase in participation, with entrants ranging from academia to industry: five teams submitted twelve methods for the periocular task and five for the iris task. The benchmark dataset is an enlarged version of the dual-spectrum database containing both iris and periocular images synchronously captured from a distance and within a realistic indoor environment. The evaluation was performed on an undisclosed test set. Methodology, tested algorithms, and obtained results are reported in this paper, identifying the remaining challenges on the path forward.

2018

Creation of Retinal Mosaics for Diabetic Retinopathy Screening: A Comparative Study

Authors
Melo, T; Mendonça, AM; Campilho, A;

Publication
Image Analysis and Recognition - 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, June 27-29, 2018, Proceedings

Abstract
The creation of retinal mosaics from sets of fundus photographs can significantly reduce the time spent on diabetic retinopathy (DR) screening, because through mosaic analysis ophthalmologists can examine several portions of the eye at a single glance and, consequently, detect and grade DR more easily. Like most of the methods described in the literature, this methodology includes two main steps: image registration and image blending. In the registration step, relevant keypoints are detected on all images, the transformation matrices are estimated based on the correspondences between those keypoints, and the images are reprojected into the same coordinate system. However, the main contributions of this work are in the blending step. In order to combine the overlapping images, a color compensation is applied to those images and a distance-based map of weights is computed for each one. The methodology is applied to two different datasets, and the mosaics obtained for one of them are visually compared with the results of two state-of-the-art methods. The mosaics obtained with our method present good quality and can be used for DR grading. © 2018, Springer International Publishing AG, part of Springer Nature.
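
The sketch below illustrates the two-step pipeline described above with a generic OpenCV implementation: keypoint-based pairwise registration followed by a distance-based weight map for feathered blending. The ORB detector, the RANSAC homography and the distance-transform weighting are illustrative substitutes, not the exact keypoint detector, transformation model or color compensation used by the authors.

```python
import cv2
import numpy as np

def register_pair(ref_gray, mov_gray):
    """Estimate a homography mapping mov_gray onto ref_gray from ORB
    keypoint matches (8-bit grayscale images). Illustrative only."""
    orb = cv2.ORB_create(4000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_mov, des_mov = orb.detectAndCompute(mov_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_mov, des_ref), key=lambda m: m.distance)[:200]
    src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def distance_weight_map(valid_mask):
    """Weight map that decays towards the border of a reprojected image,
    used to feather overlapping regions when blending the mosaic."""
    dist = cv2.distanceTransform(valid_mask.astype(np.uint8), cv2.DIST_L2, 5)
    return dist / (dist.max() + 1e-6)
```

A mosaic would then be obtained by warping each image into the reference frame with its estimated homography and combining overlapping pixels as a weighted average using these maps.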

2018

Three-dimensional planning tool for breast conserving surgery: A technological review

Authors
Oliveira, SP; Morgado, P; Gouveia, PF; Teixeira, JF; Bessa, S; Monteiro, JP; Zolfagharnasab, H; Reis, M; Silva, NL; Veiga, D; Cardoso, MJ; Oliveira, HP; Ferreira, MJ;

Publication
Critical Reviews in Biomedical Engineering

Abstract
Breast cancer is one of the most common malignancies affecting women worldwide. However, although its incidence has increased, the mortality rate has significantly decreased. The primary concern in any cancer treatment is the oncological outcome, but, in the case of breast cancer, the aesthetic result of surgery has become an important quality indicator for breast cancer patients. In this sense, an adequate surgical planning and prediction tool would empower the patient in the treatment decision process, enabling better communication between the surgeon and the patient and a better understanding of the impact of each surgical option. To develop such a tool, it is necessary to create a complete 3D model of the breast, integrating both inner and outer breast data. In this review, we thoroughly explore and review the major existing works that address, directly or indirectly, the technical challenges involved in the development of a 3D software planning tool in the field of breast conserving surgery. © 2018 by Begell House, Inc.

2018

Convolutional Neural Network Architectures for Texture Classification of Pulmonary Nodules

Authors
Ferreira, CA; Cunha, A; Mendonça, AM; Campilho, A;

Publication
Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications - 23rd Iberoamerican Congress, CIARP 2018, Madrid, Spain, November 19-22, 2018, Proceedings

Abstract
Lung cancer is one of the most common causes of death in the world. The early detection of lung nodules allows an appropriate follow-up and timely treatment, and can potentially avoid greater damage to the patient's health. Texture is one of the nodule characteristics that is correlated with malignancy. We developed convolutional neural network architectures to automatically classify the texture of nodules into the non-solid, part-solid and solid classes. The different architectures were tested to determine whether the context, the number of slices considered as input, and the relation between slices influence the texture classification performance. The architecture that obtained the best performance took into account different scales, different rotations and the context of the nodule, obtaining an accuracy of 0.833 ± 0.041. © Springer Nature Switzerland AG 2019.
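
The sketch below shows a minimal classifier of the kind the abstract describes, mapping a small stack of nodule slices to the non-solid, part-solid and solid classes in PyTorch. The layer sizes and the three-slice input are assumptions; the multi-scale, rotation and context variants evaluated in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class NoduleTextureCNN(nn.Module):
    """Illustrative CNN for nodule texture classification into
    non-solid, part-solid and solid (placeholder architecture)."""
    def __init__(self, in_slices=3, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_slices, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        # x: (batch, in_slices, H, W) -- adjacent axial slices as channels
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 patches, each with 3 slices of 64x64 pixels
logits = NoduleTextureCNN()(torch.randn(4, 3, 64, 64))  # shape (4, 3)
```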

2018

Deep Convolutional Artery/Vein Classification of Retinal Vessels

Authors
Meyer, MI; Galdran, A; Costa, P; Mendonça, AM; Campilho, A;

Publication
Image Analysis and Recognition - 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, June 27-29, 2018, Proceedings

Abstract
The classification of retinal vessels into arteries and veins in eye fundus images is a relevant task for the automatic assessment of vascular changes. This paper presents a new approach to solve this problem by means of a Fully-Connected Convolutional Neural Network that is specifically adapted for artery/vein classification. For this, a loss function that focuses only on pixels belonging to the retinal vessel tree is built. The relevance of providing the model with different chromatic components of the source images is also analyzed. The performance of the proposed method is evaluated on the RITE dataset of retinal images, achieving promising results, with an accuracy of 96 % on large caliber vessels, and an overall accuracy of 84 %. © 2018, Springer International Publishing AG, part of Springer Nature.
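
A minimal sketch of the masking idea behind the loss described above: cross-entropy is accumulated only on pixels that belong to the vessel tree, so background pixels do not influence training. Tensor shapes and names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def vessel_masked_loss(logits, labels, vessel_mask):
    """Cross-entropy restricted to vessel pixels.
    Assumed shapes: logits (B, 2, H, W) with classes 0=artery, 1=vein,
    labels (B, H, W) holding the class index per pixel,
    vessel_mask (B, H, W) with 1 on vessel pixels and 0 elsewhere."""
    per_pixel = F.cross_entropy(logits, labels, reduction="none")  # (B, H, W)
    mask = vessel_mask.float()
    # Average the loss over vessel pixels only
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)
```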
