
Publications by CTM

2022

Colon Nuclei Instance Segmentation using a Probabilistic Two-Stage Detector

Authors
Costa, P; Fu, Y; Nunes, J; Campilho, A; Cardoso, JS;

Publication
CoRR

Abstract

2022

Explainable Biometrics in the Age of Deep Learning

Authors
Neto, PC; Gonçalves, T; Pinto, JR; Silva, W; Sequeira, AF; Ross, A; Cardoso, JS;

Publication
CoRR

Abstract

2022

OCFR 2022: Competition on Occluded Face Recognition From Synthetically Generated Structure-Aware Occlusions

Authors
Neto, PC; Boutros, F; Pinto, JR; Damer, N; Sequeira, AF; Cardoso, JS; Bengherabi, M; Bousnat, A; Boucheta, S; Hebbadj, N; Erakin, ME; Demir, U; Ekenel, HK; Queiroz Vidal, PBd; Menotti, D;

Publication
CoRR

Abstract

2022

Evaluation of Vectra® XT 3D Surface Imaging Technology in Measuring Breast Symmetry and Breast Volume

Authors
Pham, M; Alzul, R; Elder, E; French, J; Cardoso, J; Kaviani, A; Meybodi, F;

Publication
AESTHETIC PLASTIC SURGERY

Abstract
Background Breast symmetry is an essential component of breast cosmesis. The Harvard Cosmesis scale is the most widely adopted method of breast symmetry assessment. However, this scale lacks reproducibility and reliability, limiting its application in clinical practice. The VECTRA® XT 3D (VECTRA®) is a novel breast surface imaging system that, when combined with breast contour measuring software (Mirror®), aims to produce a more accurate and reproducible measurement of breast contour to aid operative planning in breast surgery. Objectives This study aims to compare the reliability and reproducibility of subjective (Harvard Cosmesis scale) and objective (VECTRA®) symmetry assessment on the same cohort of patients. Methods Patients at a tertiary institution had 2D and 3D photographs taken of their breasts. Seven assessors scored the 2D photographs using the Harvard Cosmesis scale. Two independent assessors used Mirror® software to objectively calculate breast symmetry by analysing 3D images of the breasts. Results Intra-observer agreement ranged from none to moderate (κ = −0.005 to 0.7) amongst the assessors using the Harvard Cosmesis scale. Inter-observer agreement was weak (κ = 0.078–0.454) amongst Harvard scores compared to VECTRA® measurements. Kappa values ranged from 0.537 to 0.674 for intra-observer agreement (p < 0.001) with Root Mean Square (RMS) scores. RMS had a moderate correlation with the Harvard Cosmesis scale (r_s = 0.613). Furthermore, absolute volume difference between breasts had poor correlation with RMS (R² = 0.133). Conclusion VECTRA® and Mirror® software have potential in clinical practice for objectifying breast symmetry, but in their current form they do not constitute an ideal test.
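The intra- and inter-observer agreement figures in the abstract above are Cohen's kappa values, which discount the agreement two raters would reach by chance. A minimal sketch of that statistic (the ratings shown are invented for illustration, not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two assessors scoring eight photographs on a 4-point cosmesis scale
a = [1, 2, 2, 3, 4, 2, 1, 3]
b = [1, 2, 3, 3, 4, 2, 2, 3]
print(round(cohens_kappa(a, b), 3))  # → 0.652 (moderate agreement)
```

Values near 0 indicate no agreement beyond chance (the "none" end of the range reported above), while values approaching 1 indicate strong agreement.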

2022

Deep learning-based system for real-time behavior recognition and closed-loop control of behavioral mazes using depth sensing

Authors
Geros, AF; Cruz, R; de Chaumont, F; Cardoso, JS; Aguiar, P;

Publication

Abstract
Robust quantification of animal behavior is fundamental in experimental neuroscience research. Systems providing automated behavioral assessment are an important alternative to manual measurements, avoiding problems such as human bias, low reproducibility and high cost. Integrating these tools with closed-loop control systems creates conditions to correlate environment and behavioral expressions effectively, and ultimately explain the neural foundations of behavior. We present an integrated solution for automated behavioral analysis of rodents using deep learning networks on video streams acquired from a depth-sensing camera. The use of depth sensors has notable advantages: tracking/classification performance is improved and independent of animals' coat color, and videos can be recorded in dark conditions without affecting animals' natural behavior. Convolutional and recurrent layers were combined in deep network architectures, and both spatial and temporal representations were successfully learned for a 4-class behavior classification task (standstill, walking, rearing and grooming). Integration with Arduino microcontrollers creates an easy-to-use control platform providing low-latency feedback signals based on the deep learning automatic classification of animal behavior. The complete system, combining depth-sensor camera, computer, and Arduino microcontroller, allows simple mapping of input-output control signals using the animal's current behavior and position. For example, a feeder can be controlled not by pressing a lever but by the animal behavior itself. An integrated graphical user interface completes a user-friendly and cost-effective solution for animal tracking and behavior classification. This open-software/open-hardware platform can boost the development of customized protocols for automated behavioral research, and support ever more sophisticated, reliable and reproducible behavioral neuroscience experiments.
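The closed-loop idea in the abstract above, mapping the classifier's current behavior label and the animal's position to a feedback signal, can be sketched in a few lines. This is a hypothetical illustration only: the feeder-zone coordinates, command strings and the decision rule are invented, not the paper's protocol.

```python
# 4-class behavior labels from the abstract's classification task
BEHAVIORS = ("standstill", "walking", "rearing", "grooming")

def control_signal(behavior, position, feeder_zone=((0.0, 0.0), (0.3, 0.3))):
    """Map (behavior, position) to a command for the microcontroller:
    trigger the feeder when the animal rears inside the feeder zone."""
    assert behavior in BEHAVIORS
    (x0, y0), (x1, y1) = feeder_zone
    x, y = position
    in_zone = x0 <= x <= x1 and y0 <= y <= y1
    return "FEED" if behavior == "rearing" and in_zone else "IDLE"

print(control_signal("rearing", (0.1, 0.2)))  # → FEED
print(control_signal("walking", (0.1, 0.2)))  # → IDLE
```

In a real setup the returned command would be written to the Arduino over a serial link; the point of the sketch is that the mapping itself is a simple, user-definable function of classifier output and tracked position.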

2022

Hybrid Quality Inspection for the Automotive Industry: Replacing the Paper-Based Conformity List through Semi-Supervised Object Detection and Simulated Data

Authors
Rio-Torto, I; Campanico, AT; Pinho, P; Filipe, V; Teixeira, LF;

Publication
APPLIED SCIENCES-BASEL

Abstract
The still prevalent use of paper conformity lists in the automotive industry has a serious negative impact on the performance of quality control inspectors. We propose instead a hybrid quality inspection system, where we combine automated detection with human feedback, to increase worker performance by reducing mental and physical fatigue, and to increase the adaptability and responsiveness of the assembly line to change. The system integrates the hierarchical automatic detection of the non-conforming vehicle parts and information visualization on a wearable device to present the results to the factory worker and obtain human confirmation. Besides designing a novel 3D vehicle generator to create a digital representation of the non-conformity list and to collect automatically annotated training data, we apply and aggregate in a novel way state-of-the-art domain adaptation and pseudo-labeling methods to our real application scenario, in order to bridge the gap between the labeled data generated by the vehicle generator and the real unlabeled data collected on the factory floor. This methodology allows us to obtain, without any manual annotation of the real dataset, an example-based F1 score of 0.565 in an unconstrained scenario and 0.601 in a fixed camera setup (improvements of 11 and 14.6 percentage points, respectively, over a baseline trained with purely simulated data). Feedback obtained from factory workers highlighted the usefulness of the proposed solution, and showed that a truly hybrid assembly line, where machine and human work in symbiosis, increases both efficiency and accuracy in automotive quality control.
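The example-based F1 score reported in the abstract above is the standard multi-label variant: F1 is computed per example over its set of labels, then averaged over examples. A minimal sketch (the part names are invented for illustration):

```python
def example_based_f1(preds, targets):
    """Example-based F1 for multi-label predictions: compute F1 over
    each example's predicted vs. true label set, then average."""
    scores = []
    for p, t in zip(preds, targets):
        p, t = set(p), set(t)
        if not p and not t:           # nothing predicted, nothing expected
            scores.append(1.0)
            continue
        tp = len(p & t)               # correctly predicted labels
        prec = tp / len(p) if p else 0.0
        rec = tp / len(t) if t else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Two vehicles, each with a set of detected non-conforming parts
preds   = [{"mirror", "badge"}, {"antenna"}]
targets = [{"mirror"},          {"antenna", "badge"}]
print(round(example_based_f1(preds, targets), 3))  # → 0.667
```

Unlike micro- or macro-averaged F1, this averaging rewards getting each individual vehicle's conformity list right, which matches the per-vehicle inspection task described above.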
