About

Carlos Ferreira has been passionate about health, technology, and entrepreneurship from an early age. He enrolled in Bioengineering at FEUP in 2012, completing the degree in 2017. During his master's, he worked with research groups at INESC TEC and I3S. In parallel, he founded the EMBS student branch chapter at U. Porto in 2015, chairing it for two years, and served as vice-president of NEB FEUP/ICBAS during 2016/2017. In 2017, he worked at U. Porto Inovação as a technology analyst before joining INESC TEC full-time as a researcher in medical image analysis, focusing on the classification of pulmonary nodules in computed tomography scans. In 2019, he received FCT funding to pursue a PhD and became Business Development Manager of TEC4Health at INESC TEC. Finally, Carlos has served as an IEEE treasurer, first for the EMBS Portugal chapter (2018-2021) and, since 2022, for the IEEE Portugal Section.

Topics of interest
Details

  • Name

    Carlos Alexandre Ferreira
  • Position

    Business Developer
  • Since

    06 September 2017
  • Nationality

    Portugal
  • Contacts

    +351 222 094 000
    carlos.a.ferreira@inesctec.pt
Publications

2024

LNDb v4: pulmonary nodule annotation from medical reports

Authors
Ferreira, CA; Sousa, C; Marques, ID; Sousa, P; Ramos, I; Coimbra, M; Campilho, A;

Publication
SCIENTIFIC DATA

Abstract
Given the high prevalence of lung cancer, an accurate diagnosis is crucial. In the diagnosis process, radiologists play an important role by examining numerous radiology exams to identify different types of nodules. To aid the clinicians' analytical efforts, computer-aided diagnosis can streamline the process of identifying pulmonary nodules. For this purpose, medical reports can serve as valuable sources for automatically retrieving image annotations. Our study focused on converting medical reports into nodule annotations, matching textual information with manually annotated data from the Lung Nodule Database (LNDb) - a comprehensive repository of lung scans and nodule annotations. As a result of this study, we have released a tabular data file containing information from 292 medical reports in the LNDb, along with files detailing nodule characteristics and corresponding matches to the manually annotated data. The objective is to enable further research studies in lung cancer by bridging the gap between existing reports and additional manual annotations that may be collected, thereby fostering discussion of the advantages and disadvantages of these two data types.
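The matching step described above - pairing nodules extracted from report text with manually annotated nodules - could plausibly be done by nearest-centroid assignment. The sketch below is purely illustrative; the function name, greedy strategy, and distance threshold are assumptions, not taken from the LNDb release.

```python
# Hypothetical sketch: greedily pair each report-derived nodule with the
# closest unmatched manual annotation within a distance threshold (mm).
from math import dist

def match_nodules(report_nodules, manual_nodules, max_mm=10.0):
    """Each nodule is a 3D centroid (x, y, z) in millimetres.
    Returns a list of (report_index, manual_index) pairs."""
    matches, used = [], set()
    for i, r in enumerate(report_nodules):
        best, best_d = None, max_mm
        for j, m in enumerate(manual_nodules):
            if j in used:
                continue
            d = dist(r, m)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches
```

Report nodules with no manual annotation within the threshold simply remain unmatched, mirroring the fact that reports and manual annotations do not cover identical nodule sets.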

2024

Towards automatic forecasting of lung nodule diameter with tabular data and CT imaging

Authors
Ferreira, ICA; Venkadesh, KV; Jacobs, C; Coimbra, M; Campilho, A;

Publication
BIOMEDICAL SIGNAL PROCESSING AND CONTROL

Abstract
Objective: This study aims to forecast the progression of lung cancer by estimating the future diameter of lung nodules. Methods: The approach takes as input tabular data, axial images from computed tomography scans, or both, employing a ResNet50 model for image feature extraction and direct analysis of patient information for tabular data. The data are processed through a neural network before prediction. In the training phase, class weights are assigned based on the rarity of different types of nodules within the dataset, in alignment with nodule management guidelines. Results: Tabular data alone yielded the most accurate results, with a mean absolute deviation of 0.99 mm. For malignant nodules, the best performance, marked by a deviation of 2.82 mm, was achieved using tabular data with Lung-RADS class weights applied during training. The tabular data results highlight the influence of using the initial nodule size as an input feature. These results surpass the literature reference of a 348-day volume doubling time for malignant nodules. Conclusion: The developed predictive model is optimized for integration into a clinical workflow after detecting, segmenting, and classifying nodules. It provides accurate growth forecasts, establishing a more objective basis for determining follow-up intervals. Significance: Given lung cancer's low survival rates, the capacity for precise nodule growth prediction represents a significant breakthrough. This methodology promises to improve patient care and management, enhancing the chances of early risk assessment and effective intervention.
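The "class weights assigned based on the rarity of different types of nodules" mentioned in the abstract could be implemented as inverse-frequency weighting over the Lung-RADS categories. This is a minimal sketch of one plausible scheme; the paper's exact weighting may differ.

```python
# Hypothetical sketch: weight each class by total / (n_classes * count),
# so rare Lung-RADS categories contribute more to the training loss.
from collections import Counter

def rarity_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

For a dataset dominated by small benign nodules, this assigns weights above 1 to the rarer, more suspicious categories and below 1 to the common ones, the same convention used by scikit-learn's "balanced" class-weight mode.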

2024

A Comparative Study of Feature-Based and End-to-End Approaches for Lung Nodule Classification in CT Volumes to Lung-RADS Follow-up Recommendation

Authors
Ferreira, CA; Ramos, I; Coimbra, M; Campilho, A;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
Lung cancer represents a significant health concern necessitating diligent monitoring of individuals at risk. While the detection of pulmonary nodules warrants clinical attention, not all cases require immediate surgical intervention, often calling for a strategic approach to follow-up decisions. The Lung-RADS guideline serves as a cornerstone in clinical practice, furnishing structured recommendations based on various nodule characteristics, including size, calcification, and texture, outlined within established reference tables. However, the reliance on labor-intensive manual measurements underscores the potential advantages of integrating decision support systems into this process. Herein, we propose a feature-based methodology aimed at enhancing clinical decision-making by automating the assessment of nodules in computed tomography scans. Leveraging algorithms tailored for nodule calcification, texture analysis, and segmentation, our approach facilitates the automated classification of follow-up recommendations aligned with Lung-RADS criteria. Comparison with a previously reported end-to-end image-based classification method revealed competitive performance, with the feature-based approach achieving an accuracy of 0.701 +/- 0.026, while the end-to-end method attained 0.727 +/- 0.020. The inherent explainability of the feature-based approach offers distinct advantages, allowing clinicians to scrutinize and modify individual features to address disagreements or rectify inaccuracies, thereby tailoring follow-up recommendations to patient profiles.
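The final step of the feature-based pipeline above maps measured nodule characteristics onto the Lung-RADS reference tables. The sketch below shows only the size-based lookup for solid nodules at baseline, simplified from the Lung-RADS tables; the paper's method also uses calcification and texture features, which are omitted here.

```python
# Illustrative sketch: simplified Lung-RADS baseline category for a
# solid nodule, keyed on mean diameter in millimetres.
def lung_rads_solid_baseline(diameter_mm):
    if diameter_mm < 6:
        return "2"   # benign appearance
    if diameter_mm < 8:
        return "3"   # probably benign
    if diameter_mm < 15:
        return "4A"  # suspicious
    return "4B"      # very suspicious
```

Because each rule corresponds to a row of the guideline table, a clinician can inspect the measured diameter and override it directly, which is the explainability advantage the abstract highlights over the end-to-end image-based classifier.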

2024

MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision

Authors
Li, J; Zhou, Z; Yang, J; Pepe, A; Gsaxner, C; Luijten, G; Qu, C; Zhang, T; Chen, X; Li, W; Wodzinski, M; Friedrich, P; Xie, K; Jin, Y; Ambigapathy, N; Nasca, E; Solak, N; Melito, GM; Vu, VD; Memon, R; Schlachta, C; De Ribaupierre, S; Patel, R; Eagleson, R; Chen, X; Mächler, H; Kirschke, JS; La Rosa, E; Christ, PF; Li, HB; Ellis, G; Aizenberg, R; Gatidis, S; Küstner, T; Shusharina, N; Heller, N; Andrearczyk, V; Depeursinge, A; Hatt, M; Sekuboyina, A; Löffler, T; Liebl, H; Dorent, R; Vercauteren, T; Shapey, J; Kujawa, A; Cornelissen, S; Langenhuizen, P; Ben Hamadou, A; Rekik, A; Pujades, S; Boyer, E; Bolelli, F; Grana, C; Lumetti, L; Salehi, H; Ma, J; Zhang, Y; Gharleghi, R; Beier, S; Sowmya, A; Garza Villarreal, A; Balducci, T; Angeles Valdez, D; Souza, R; Rittner, L; Frayne, R; Ji, Y; Ferrari, V; Chatterjee, S; Dubost, F; Schreiber, S; Mattern, H; Speck, O; Haehn, D; John, C; Nürnberger, A; Pedrosa, J; Ferreira, C; Aresta, G; Cunha, A; Campilho, A; Suter, Y; Garcia, J; Lalande, A; Vandenbossche, V; Van Oevelen, A; Duquesne, K; Mekhzoum, H; Vandemeulebroucke, J; Audenaert, E; Krebs, C; Van Leeuwen, T; Vereecke, E; Heidemeyer, H; Röhrig, R; Hölzle, F; Badeli, V; Krieger, K; Gunzer, M; Chen, J; Van Meegdenburg, T; Dada, A; Balzer, M; Fragemann, J; Jonske, F; Rempe, M; Malorodov, S; Bahnsen, H; Seibold, C; Jaus, A; Marinov, Z; Jaeger, F; Stiefelhagen, R; Santos, AS; Lindo, M; Ferreira, A; Alves, V; Kamp, M; Abourayya, A; Nensa, F; Hörst, F; Brehmer, A; Heine, L; Hanusrichter, Y; Weßling, M; Dudda, M; Podleska, E; Fink, A; Keyl, J; Tserpes, K; Kim, M; Elhabian, S; Lamecker, H; Zukic, De; Paniagua, B; Wachinger, C; Urschler, M; Duong, L; Wasserthal, J; Hoyer, F; Basu, O; Maal, T; Witjes, JH; Schiele, G; Chang, T; Ahmadi, S; Luo, P; Menze, B; Reyes, M; Deserno, M; Davatzikos, C; Puladi, B; Fua, P; Yuille, L; Kleesiek, J; Egger, J;

Publication
Biomedizinische Technik

Abstract
Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging predominantly diverge from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This can be seen from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. By now, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. © 2024 Walter de Gruyter GmbH, Berlin/Boston.

2024

Automated Visceral and Subcutaneous Fat Segmentation in Computed Tomography

Authors
Castro, R; Sousa, I; Nunes, F; Mancio, J; Fontes Carvalho, R; Ferreira, C; Pedrosa, J;

Publication
Proceedings - International Symposium on Biomedical Imaging

Abstract
Cardiovascular diseases are the leading causes of death worldwide. While there are a number of cardiovascular risk indicators, recent studies have found a connection between cardiovascular risk and the accumulation and characteristics of visceral adipose tissue in the ventral cavity. The quantification of visceral adipose tissue can be easily performed in computed tomography scans, but the manual delineation of these structures is a time-consuming process subject to variability. This has motivated the development of automatic tools to achieve a faster and more precise solution. This paper explores the use of a U-Net architecture to perform ventral cavity segmentation, followed by threshold-based approaches for visceral and subcutaneous adipose tissue segmentation. Experiments with different learning rates, input image sizes, and types of loss functions were carried out to determine the hyperparameters most suited to this problem. In an external test set, the best-performing ventral cavity segmentation model achieved a 0.967 Dice Score Coefficient, while the visceral and subcutaneous adipose tissue segmentations achieved Dice Score Coefficients of 0.986 and 0.995. Not only are these results competitive with the state of the art, but the interobserver variability measured in this external dataset was similar, confirming the robustness and reliability of the proposed segmentation. © 2024 IEEE.
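The threshold-based step that follows the U-Net could look like the sketch below: voxels within a typical adipose Hounsfield-unit window are split into visceral fat (inside the predicted ventral-cavity mask) and subcutaneous fat (outside it). The HU window of -190 to -30 is a common choice in the literature, not necessarily the paper's exact values.

```python
# Minimal NumPy sketch: HU-threshold fat segmentation split by a
# ventral-cavity mask (e.g., predicted by a U-Net).
import numpy as np

def segment_fat(ct_hu, cavity_mask, lo=-190, hi=-30):
    """ct_hu: array of Hounsfield units; cavity_mask: boolean array
    of the same shape. Returns (visceral, subcutaneous) boolean masks."""
    fat = (ct_hu >= lo) & (ct_hu <= hi)
    visceral = fat & cavity_mask
    subcutaneous = fat & ~cavity_mask
    return visceral, subcutaneous
```

Since the thresholding itself is trivial, the accuracy of the fat masks hinges almost entirely on the quality of the cavity segmentation, which is why the paper's effort concentrates on the U-Net stage.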

Supervised theses

2024

Automatic Visceral/Abdominal Fat Segmentation in Computed Tomography

Author
Rui Castro

Institution
UP-FEUP