Publications

Publications by CTM

2020

Automotive Interior Sensing - Towards a Synergetic Approach between Anomaly Detection and Action Recognition Strategies

Authors
Augusto, P; Cardoso, JS; Fonseca, J;

Publication
4th IEEE International Conference on Image Processing, Applications and Systems, IPAS 2020, Virtual Event, Italy, December 9-11, 2020

Abstract
With the appearance of Shared Autonomous Vehicles, there will no longer be a driver responsible for maintaining the car interior and the well-being of passengers. To counter this, it is imperative to have a system able to detect abnormal behaviors, more specifically violence between passengers. Traditional action recognition algorithms build models around known interactions, but activities can be so diverse that having a dataset which incorporates most use cases is unattainable. While action recognition models are normally trained on all the defined activities and directly output a score that classifies the likelihood of violence, video anomaly detection algorithms present themselves as an alternative approach for building a good discriminative model, since usually only non-violent examples are needed. This work focuses on anomaly detection and action recognition algorithms trained, validated, and tested on a subset of human behavior video sequences from Bosch's internal datasets. The anomaly detection network architecture defines how to properly reconstruct normal frame sequences so that, during testing, each sequence can be classified as normal or abnormal based on its reconstruction error. From these errors, regularity scores are inferred, showing the predicted regularity of each frame. The resulting framework is a viable addition to traditional action recognition algorithms, since it can work as a tool for detecting unknown actions and strange/violent behaviors, and can aid in understanding the meaning of such human interactions.
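The regularity-score idea in this abstract follows a common pattern in reconstruction-based video anomaly detection: normalise the per-frame reconstruction error within a sequence and invert it, so that well-reconstructed ("normal") frames score close to 1. A minimal NumPy sketch of that computation, assuming a simple sum-of-squares error and min-max normalisation (the paper's exact metric and normalisation are not stated here):

```python
import numpy as np

def regularity_scores(frames, reconstructions):
    """Per-frame regularity scores from reconstruction errors.

    frames, reconstructions: arrays of shape (T, H, W[, C]).
    Returns an array of shape (T,) in [0, 1]; low values flag
    potentially abnormal (e.g. violent) frames.
    """
    # Per-frame reconstruction error: sum of squared pixel differences.
    t = frames.shape[0]
    errors = ((frames - reconstructions) ** 2).reshape(t, -1).sum(axis=1)

    # Normalise within the sequence and invert, so that frames the
    # network reconstructs well score close to 1.
    e_min, e_max = errors.min(), errors.max()
    return 1.0 - (errors - e_min) / (e_max - e_min + 1e-8)

# Hypothetical usage: flag frames whose regularity drops below a threshold.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((16, 64, 64))   # stand-in video clip
    recon = frames + 0.01 * rng.random((16, 64, 64))
    recon[10] += 0.5                    # simulate one badly reconstructed frame
    scores = regularity_scores(frames, recon)
    print((scores < 0.5).nonzero()[0])  # -> [10]
```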

2020

Interpretable and Annotation-Efficient Learning for Medical Image Computing - Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4-8, 2020, Proceedings

Authors
Cardoso, JS; Nguyen, HV; Heller, N; Abreu, PH; Isgum, I; Silva, W; Cruz, R; Amorim, JP; Patel, V; Roysam, B; Zhou, SK; Jiang, SB; Le, N; Luu, K; Sznitman, R; Cheplygina, V; Mateus, D; Trucco, E; Sureshjani, SA;

Publication
iMIMIC/MIL3ID/LABELS@MICCAI

Abstract

2020

Tackling unsupervised multi-source domain adaptation with optimism and consistency

Authors
Pernes, D; Cardoso, JS;

Publication
CoRR

Abstract

2020

Towards a Joint Approach to Produce Decisions and Explanations Using CNNs

Authors
Rio-Torto, I; Fernandes, K; Teixeira, LF;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, PT I

Abstract
Convolutional Neural Networks, as well as other deep learning methods, have shown remarkable performance on tasks like classification and detection. However, these models largely remain black boxes. With the widespread use of such networks in real-world scenarios and with the growing demand for the right to explanation, especially in highly regulated areas like medicine and criminal justice, generating accurate predictions is no longer enough. Machine learning models have to be explainable, i.e., understandable to humans, which entails being able to present the reasons behind their decisions. While most of the literature focuses on post-model methods, we propose an in-model CNN architecture, composed of an explainer and a classifier. The model is trained end-to-end, with the classifier taking as input not only images from the dataset but also the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of that explanation. We also developed a synthetic dataset generation framework that allows for automatic annotation and the creation of easy-to-understand images that do not require expert knowledge to be explained. Promising results were obtained, especially when using L1 regularisation, validating the potential of the proposed architecture and further encouraging research to improve its explainability and performance. © 2019, Springer Nature Switzerland AG.
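As a rough illustration of the explainer/classifier coupling described above, here is a minimal PyTorch sketch. It assumes the explanation enters the classifier as a multiplicative spatial mask; the layer sizes, the masking mechanism, and the 1e-3 weight on the L1 term are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ExplainerClassifier(nn.Module):
    """Joint explainer + classifier, a minimal sketch of the in-model idea.

    The explainer produces a spatial explanation map; the classifier sees
    the input image modulated by that map, steering it towards the regions
    the explainer deems relevant.
    """

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Explainer: small conv net ending in a 1-channel map in [0, 1].
        self.explainer = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Classifier: operates on the explanation-weighted image.
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        explanation = self.explainer(x)            # (B, 1, H, W)
        logits = self.classifier(x * explanation)  # focus on relevant areas
        return logits, explanation

# End-to-end loss: classification term plus L1 sparsity on explanations,
# encouraging maps that highlight only a few, interpretable regions.
model = ExplainerClassifier()
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
logits, expl = model(x)
loss = nn.functional.cross_entropy(logits, y) + 1e-3 * expl.abs().mean()
loss.backward()
```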

2020

Understanding the Impact of Artificial Intelligence on Services

Authors
Ferreira, P; Teixeira, JG; Teixeira, LF;

Publication
EXPLORING SERVICE SCIENCE (IESS 2020)

Abstract
Services are the backbone of modern economies and are increasingly supported by technology. Meanwhile, there is an accelerated growth of new technologies that are able to learn by themselves and provide increasingly relevant results, namely Artificial Intelligence (AI). While there have been significant advances in the capabilities of AI, the impacts of this technology on service provision are still unknown. Conceptual research either presents AI as a way to augment human capabilities or positions it as a threat to human jobs. The objective of this study is to better understand the impact of AI on service, namely by identifying current trends in AI and how they impact, and will impact, service provision. To achieve this, a qualitative study following the Grounded Theory methodology was performed with ten Artificial Intelligence experts selected from industry and academia.

2020

Deep Learning for Interictal Epileptiform Discharge Detection from Scalp EEG Recordings

Authors
Lourenco, C; Tjepkema-Cloostermans, MC; Teixeira, LF; van Putten, MJAM;

Publication
XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019

Abstract
Interictal Epileptiform Discharge (IED) detection in EEG signals is widely used in the diagnosis of epilepsy. Visual analysis of EEGs by experts remains the gold standard, outperforming current computer algorithms. Deep learning methods can be an automated way to perform this task. We trained a VGG network using 2-s EEG epochs from patients with focal and generalized epilepsy (39 and 40 patients, respectively, 1977 epochs total) and 53 normal controls (110770 epochs). Five-fold cross-validation was performed on the training set. Model performance was assessed on an independent set (734 IEDs from 20 patients with focal and generalized epilepsy and 23040 normal epochs from 14 controls). Network visualization techniques (filter visualization and occlusion) were applied. The VGG yielded an Area Under the ROC Curve (AUC) of 0.96 (95% Confidence Interval (CI) = 0.95 - 0.97). At 99% specificity, the sensitivity was 79% and only one sample was misclassified per two minutes of analyzed EEG. Filter visualization showed that filters from higher level layers display patches of activity indicative of IED detection. Occlusion showed that the model correctly identified IED shapes. We show that deep neural networks can reliably identify IEDs, which may lead to a fundamental shift in clinical EEG analysis.
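Occlusion, as used above, is a generic model-inspection technique: mask part of the input and measure how much the predicted probability drops. A minimal sketch for a multichannel EEG epoch, assuming a window/stride in samples and zero-filling as the mask (none of these settings, nor the scoring function, come from the abstract):

```python
import numpy as np

def occlusion_map(predict_fn, epoch, window=32, stride=16, fill=0.0):
    """Occlusion sensitivity for a 1-D EEG epoch, a generic sketch.

    predict_fn: callable mapping an epoch of shape (channels, samples)
                to an IED probability (the trained network in the paper;
                here, any scorer). `window`/`stride` are in samples and
                are illustrative choices, not the paper's settings.
    Returns one importance value per occlusion position: the drop in
    predicted probability when that time segment is masked out.
    """
    baseline = predict_fn(epoch)
    n = epoch.shape[-1]
    drops = []
    for start in range(0, n - window + 1, stride):
        occluded = epoch.copy()
        occluded[..., start:start + window] = fill  # mask this segment
        drops.append(baseline - predict_fn(occluded))
    # Large drops mark the segments the model relies on; for a correct
    # IED detector these should align with the discharge waveform.
    return np.array(drops)
```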
