2017
Authors
Pedrosa, J; Barbosa, D; Heyde, B; Schnell, F; Rosner, A; Claus, P; D'hooge, J;
Publication
IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL
Abstract
Cardiac volume/function assessment remains a critical step in daily cardiology, and 3-D ultrasound plays an increasingly important role. Though the development of automatic endocardial segmentation methods has received much attention, the same cannot be said about epicardial segmentation, in spite of the importance of full myocardial segmentation. In this paper, different ways of coupling the endocardial and epicardial segmentations are contrasted and compared with uncoupled segmentation. For this purpose, the B-spline explicit active surfaces (BEAS) framework was used, and 27 3-D echocardiographic images served to validate the different coupling strategies against manual contouring of the endocardial and epicardial borders performed by an expert. It is shown that an independent segmentation of the endocardium followed by an epicardial segmentation coupled to the endocardium is the most advantageous. In this way, a framework for fully automatic 3-D myocardial segmentation is proposed using a novel coupling strategy.
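As a rough illustration of the winning strategy above (not the paper's implementation), the sketch below evolves epicardial radii defined on an angular grid while a one-sided penalty keeps myocardial thickness relative to the already-segmented, fixed endocardium within plausible bounds; the image term, function names, and all parameter values are assumptions for illustration.

import numpy as np

def evolve_epicardium(r_endo, image_force, n_iter=200, step=0.1,
                      t_min=2.0, t_max=12.0, weight=1.0):
    # r_endo: endocardial radii on a fixed angular grid (already segmented)
    # image_force: callable, per-node force pulling the surface to image edges
    r_epi = r_endo + 0.5 * (t_min + t_max)  # initialize outside the endocardium
    for _ in range(n_iter):
        thickness = r_epi - r_endo
        # One-sided quadratic penalties: active only when the thickness
        # leaves the [t_min, t_max] range, coupling the two surfaces.
        coupling = (np.minimum(thickness - t_min, 0.0)
                    + np.maximum(thickness - t_max, 0.0))
        r_epi -= step * (image_force(r_epi) + weight * coupling)
    return r_epi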
2017
Authors
Pinheiro, AP; Dias, M; Pedrosa, J; Soares, AP;
Publication
BEHAVIOR RESEARCH METHODS
Abstract
During social communication, words and sentences play a critical role in the expression of emotional meaning. The Minho Affective Sentences (MAS) were developed to respond to the lack of a standardized sentence battery with normative affective ratings: 192 neutral, positive, and negative declarative sentences were strictly controlled for psycholinguistic variables such as number of words, number of letters, and per-million word frequency. The sentences were designed to represent examples of each of the five basic emotions (anger, sadness, disgust, fear, and happiness) and of neutral situations. These sentences were presented to 536 participants who rated the stimuli using both dimensional and categorical measures of emotions. Sex differences were also explored. Additionally, in a subset of 40 participants, we probed how personality, empathy, and mood modulated the affective ratings. Our results confirmed that the MAS affective norms are valid measures to guide the selection of stimuli for experimental studies of emotion. The combination of dimensional and categorical ratings provided a more fine-grained characterization of the affective properties of the sentences. Moreover, the affective ratings of positive and negative sentences were modulated not only by participants' sex but also by individual differences in empathy and mood state. Together, our results indicate that, in their quest to reveal the neurofunctional underpinnings of verbal emotional processing, researchers should consider the role not only of sex but also of interindividual differences in empathy and mood states in responses to the emotional meaning of sentences.
2016
Authors
Pinheiro, AP; Barros, C; Pedrosa, J;
Publication
SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE
Abstract
In a dynamically changing social environment, humans face the challenge of prioritizing stimuli that compete for attention. In the context of social communication, the voice is the most important sound category. However, existing studies do not directly address whether and how the salience of an unexpected vocal change in an auditory sequence influences the orientation of attention. In this study, frequent tones were interspersed with task-relevant infrequent tones and task-irrelevant infrequent vocal sounds (neutral, happy, and angry vocalizations). Eighteen healthy college students were asked to count the infrequent tones. A combined event-related potential (ERP) and EEG time-frequency approach was used, focusing on the P3 component and on the early auditory evoked gamma band response, respectively. A spatial-temporal principal component analysis was used to disentangle potentially overlapping ERP components. Although no condition differences were observed in the 210-310 ms window, larger positive responses were observed for emotional than for neutral vocalizations in the 310-410 ms window. Furthermore, the phase synchronization of the early auditory evoked gamma oscillation was enhanced for happy vocalizations. These findings support the idea that the brain prioritizes the processing of emotional stimuli by devoting more attentional resources to salient social signals, even when they are not task-relevant.
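One common way to quantify the phase synchronization mentioned above is inter-trial phase coherence (ITC) of the band-passed analytic signal; the generic sketch below illustrates that computation, not the authors' pipeline, and the gamma band limits and filter order are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_itc(trials, fs, band=(30.0, 60.0), order=4):
    # trials: (n_trials, n_samples) single-trial EEG at one channel, fs in Hz
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)    # zero-phase band-pass
    phase = np.angle(hilbert(filtered, axis=1))  # instantaneous phase per trial
    # ITC over time: 1 = perfectly phase-locked across trials, ~0 = random phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))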
2023
Authors
Ferraz, S; Coimbra, M; Pedrosa, J;
Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG
Abstract
Two-dimensional echocardiography is the most widely used non-invasive imaging modality due to its fast acquisition time, low cost, and high temporal resolution. Accurate segmentation of the left ventricle in echocardiography is vital for ensuring the accuracy of subsequent diagnosis. Numerous efforts have been made to automate this task, and various public datasets have been released in recent decades to further this research. However, medical datasets acquired at different institutions have inherent biases caused by various confounding factors, such as operational policies, machine protocols, and treatment preferences. As a result, models trained on one dataset, regardless of its size, cannot be confidently applied to others. In this study, we investigated model robustness to dataset bias using two publicly available echocardiographic datasets. This work validates the efficacy of a supervised deep learning model for left ventricle segmentation and ejection fraction prediction outside the dataset on which it was trained. When exposed to unseen but related samples without additional training, the model maintained good performance. However, a performance decrease from the original results was observed, and image quality also had a noteworthy impact, with lower-quality data leading to decreased performance.
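The core of such an evaluation protocol, i.e., scoring a trained model on an external dataset without any fine-tuning, can be sketched generically as below; the model.predict interface, the threshold, and the choice of the Dice metric are assumptions for illustration, not the paper's code.

import numpy as np

def dice(pred, gt, eps=1e-7):
    # Dice overlap between two binary masks
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def evaluate_external(model, samples, threshold=0.5):
    # samples: iterable of (image, gt_mask) pairs from the *external* dataset;
    # the model is used as-is, with no additional training on the new domain
    scores = [dice(model.predict(img) > threshold, gt > 0) for img, gt in samples]
    return float(np.mean(scores))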
2023
Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonça, AM; Campilho, A;
Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS
Abstract
Chest radiography is one of the most ubiquitous medical imaging modalities. Nevertheless, the interpretation of chest radiographs (CXRs) is time-consuming, complex, and subject to observer variability. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists. The advent of deep learning has fostered the development of solutions for abnormality detection with promising results. However, these tools suffer from poor explainability, as the reasons that led to a decision cannot be easily understood, representing a major hurdle for their adoption in clinical practice. In order to overcome this issue, a method for chest radiography abnormality detection is presented which relies on an object detection framework to detect individual findings and thus separate normal and abnormal CXRs. It is shown that this framework is capable of excellent performance in abnormality detection (AUC: 0.993), outperforming other state-of-the-art classification methodologies (AUC: 0.976 using the same classes). Furthermore, validation on external datasets shows that the proposed framework has a smaller drop in performance when applied to previously unseen data (21.9% vs. 23.4% on average). Several approaches for object detection are compared, and it is shown that merging pathology classes to minimize radiologist variability improves the localization of abnormal regions (0.529 vs. 0.491 APF when using all pathology classes), resulting in a network that is more explainable and thus more suitable for integration in clinical practice.
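A minimal sketch of how a per-finding object detector can be turned into an image-level abnormality classifier, assuming the common convention of taking the maximum finding confidence as the image score; the detector output format and variable names are hypothetical, not the paper's code.

from sklearn.metrics import roc_auc_score

def image_abnormality_score(detections):
    # detections: list of (label, confidence, box) tuples for one radiograph;
    # an image with no confident finding scores near 0, i.e., normal
    return max((conf for _, conf, _ in detections), default=0.0)

# Usage over a labeled test set (y_true: 0 = normal, 1 = abnormal):
# scores = [image_abnormality_score(d) for d in detections_per_image]
# print(roc_auc_score(y_true, scores))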
2024
Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;
Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE
Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g., lettering, often generates a harmful bias in the classifiers and an increase in false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves results similar to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67%, a 2.55% improvement over a standard baseline classifier, versus the 0.98% improvement achieved by the YOLO-based counterpart. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
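A minimal PyTorch sketch of a spatial transformer module constrained to translation plus non-isotropic scaling, the transformation family described above; the localization network layout and initialization are illustrative assumptions, and only the constrained affine matrix follows the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedSTN(nn.Module):
    # Spatial transformer restricted to non-isotropic scaling plus translation:
    # theta = [[sx, 0, tx], [0, sy, ty]] (no rotation or shear terms).
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(                 # tiny localization network
            nn.Conv2d(1, 8, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, 4),                 # predicts (sx, sy, tx, ty)
        )
        self.loc[-1].weight.data.zero_()          # start from the identity
        self.loc[-1].bias.data.copy_(torch.tensor([1.0, 1.0, 0.0, 0.0]))

    def forward(self, x):                         # x: (N, 1, H, W) radiographs
        sx, sy, tx, ty = self.loc(x).unbind(dim=1)
        zero = torch.zeros_like(sx)
        theta = torch.stack([
            torch.stack([sx, zero, tx], dim=1),
            torch.stack([zero, sy, ty], dim=1),
        ], dim=1)                                 # (N, 2, 3) affine matrices
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)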