2016
Authors
Castro, H; Monteiro, J; Pereira, A; Silva, D; Coelho, G; Carvalho, P;
Publication
MULTIMEDIA TOOLS AND APPLICATIONS
Abstract
Over the last decade, noticeable progress has occurred in the automated computer interpretation of visual information. Computers running artificial intelligence algorithms are increasingly capable of extracting perceptual and semantic information from images and registering it as metadata. There is also a growing body of manually produced image annotation data. All of this data is of great importance for scientific purposes as well as for commercial applications. Optimizing the usefulness of this manually or automatically produced information requires its precise and adequate expression at its different logical levels, making it easily accessible, manipulable and shareable; it also requires the development of associated manipulation tools. However, the expression and manipulation of computer vision results has received less attention than the actual extraction of such results, and has consequently advanced less. Existing metadata tools are poorly structured in logical terms, as they intermix the declaration of visual detections with that of the observed entities, events and surrounding context. This poor structuring renders such tools rigid, limited and cumbersome to use. Moreover, they are unprepared for more advanced situations, such as the coherent expression of the information extracted from, or annotated onto, multi-view video resources. The work presented here comprises the specification of an advanced XML-based syntax for the expression and processing of computer-vision-relevant metadata. This proposal takes inspiration from the natural cognition process for the adequate expression of the information, with a particular focus on scenarios with varying numbers of sensory devices, notably multi-view video.
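The abstract does not give the concrete schema, but the key structural idea it argues for is keeping low-level visual detections separate from the declaration of the entities they observe, linked by reference. A minimal sketch of such a layout (all element and attribute names here are hypothetical, not taken from the paper) can be parsed with the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata document: entities and detections are declared
# in separate sections and linked by id, rather than intermixed.
doc = """
<sceneMetadata>
  <entities>
    <entity id="person_1" type="Person"/>
  </entities>
  <detections>
    <detection frame="120" view="cam_0" entityRef="person_1">
      <boundingBox x="34" y="50" w="80" h="160"/>
    </detection>
    <detection frame="120" view="cam_1" entityRef="person_1">
      <boundingBox x="210" y="48" w="78" h="158"/>
    </detection>
  </detections>
</sceneMetadata>
"""

root = ET.fromstring(doc)
# Gather every detection of one entity across all camera views,
# which is the multi-view query this separation makes straightforward.
views = [d.get("view") for d in root.iter("detection")
         if d.get("entityRef") == "person_1"]
print(views)  # ['cam_0', 'cam_1']
```

Because detections only point at entities by id, adding or removing a camera view changes the detections section without touching the entity declarations.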
2014
Authors
Monteiro, JP; Oliveira, HP; Aguiar, P; Cardoso, JS;
Publication
2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)
Abstract
Animal behavior assessment plays an important role in basic and clinical neuroscience. Although assessing the higher functional levels of the nervous system is already possible, behavioral tests are extremely complex to design and analyze. Animals' responses are often evaluated manually, making the assessment subjective, extremely time consuming, poorly reproducible and potentially fallible. The main goal of the present work is to evaluate the use of consumer depth cameras, such as Microsoft's Kinect, for the detection of behavioral patterns of mice. The hypothesis is that depth information should enable a more feasible and robust method for automatic behavior recognition. Thus, we introduce our depth-map-based approach, comprising mouse segmentation, body-like per-frame feature extraction and per-frame classification with temporal context, to demonstrate the usability of this methodology.
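The abstract names per-frame classification given temporal context as the last pipeline stage but does not specify the mechanism. As an illustrative stand-in (not the paper's actual classifier), a majority vote over a sliding window of per-frame labels shows how temporal context can suppress spurious single-frame predictions:

```python
from collections import Counter

def smooth_labels(per_frame_labels, window=5):
    """Majority-vote smoothing of per-frame behavior labels over a
    sliding temporal window. A generic sketch of using temporal
    context; the paper's actual method is not given in the abstract."""
    half = window // 2
    smoothed = []
    for i in range(len(per_frame_labels)):
        lo = max(0, i - half)
        hi = min(len(per_frame_labels), i + half + 1)
        votes = Counter(per_frame_labels[lo:hi])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed

# A single spurious 'rear' prediction inside a 'walk' run is corrected.
raw = ["walk", "walk", "rear", "walk", "walk"]
print(smooth_labels(raw))  # ['walk', 'walk', 'walk', 'walk', 'walk']
```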
2014
Authors
Costa, P; Zolfagharnasab, H; Monteiro, JP; Cardoso, JS; Oliveira, HP;
Publication
Proceedings of the 5th International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 21-22 October 2014
Abstract
2017
Authors
Zolfagharnasab, H; Monteiro, JP; Teixeira, JF; Borlinhas, F; Oliveira, HP;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2017)
Abstract
Automatic segmentation of the breast is an important step in providing a planning tool for breast cancer conservative treatment, where it is important to segment the complete breast region in an objective way; current methodologies, however, either require user interaction or detect the breast contour only partially. In this paper, we propose a methodology to detect the complete breast contour, including the pectoral muscle, using multi-modality data. The exterior contour is obtained from 3D data reconstructed from low-cost RGB-D sensors, and the interior contour (pectoral muscle) is obtained from Magnetic Resonance Imaging (MRI) data. Quantitative evaluation indicates that the proposed methodology achieves an acceptable detection of the breast contour, which is also confirmed by visual evaluation.
2014
Authors
Sequeira, AF; Oliveira, HP; Monteiro, JC; Monteiro, JP; Cardoso, JS;
Publication
2014 IEEE/IAPR INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2014)
Abstract
Biometric systems based on the iris are vulnerable to several attacks, particularly direct attacks consisting of the presentation of a fake iris to the sensor. The development of iris liveness detection techniques is crucial for the deployment of iris biometric applications in daily life, especially in the mobile biometric field. The 1st Mobile Iris Liveness Detection Competition (MobILive) was organized in the context of IJCB2014 in order to record recent advances in iris liveness detection and to contribute to the state of the art on this particular subject. The competition covered the most common and simple spoofing attack, in which printed images from an authorized user are presented to the sensor by a non-authorized user in order to obtain access. The benchmark dataset was the MobBIOfake database, composed of a set of 800 iris images and their corresponding fake copies (obtained from printed images of the originals, captured with the same handheld device and under similar conditions). In this paper we present a brief description of the methods and the results achieved by the six participants in the competition. © 2014 IEEE.
2016
Authors
Eiben, B; Lacher, R; Vavourakis, V; Hipwell, JH; Stoyanov, D; Williams, NR; Sabczynski, J; Buelow, T; Kutra, D; Meetz, K; Young, S; Barschdorf, H; Oliveira, HP; Cardoso, JS; Monteiro, JP; Zolfagharnasab, H; Sinkus, R; Gouveia, P; Liefers, GJ; Molenkamp, B; van de Velde, CJH; Hawkes, DJ; Cardoso, MJ; Keshtgar, M;
Publication
BREAST IMAGING, IWDM 2016
Abstract
Patient-specific surgical predictions of Breast Conserving Therapy, obtained through mechano-biological simulations, could inform the shared decision-making process between clinicians and patients by enabling the impact of different surgical options to be visualised. We present an overview of our processing workflow, which integrates MR images and three-dimensional optical surface scans into a personalised model. Using an interactively generated surgical plan, a multi-scale open-source finite element solver is employed to simulate breast deformity based on interrelated physiological and biomechanical processes that occur post-surgery. Our outcome predictions, based on the pre-surgical imaging, were validated by comparing the simulated outcome with follow-up surface scans of four patients acquired 6 to 12 months post-surgery. A mean absolute surface distance of 3.3 mm between the follow-up scan and the simulation was obtained.
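The validation metric named above, mean absolute surface distance, is commonly computed as the average nearest-point distance from one surface to the other. A simplified one-directional sketch over point clouds (the paper's exact formulation, e.g. whether it is symmetrised, is not stated in the abstract):

```python
import math

def mean_absolute_surface_distance(pred, ref):
    """For each predicted surface point, take the Euclidean distance
    to its nearest reference point, then average. A one-directional
    simplification of the metric; real pipelines typically operate on
    dense meshes and may symmetrise the measure."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return sum(nearest(p, ref) for p in pred) / len(pred)

# Two tiny point clouds offset by 3 units along the z axis.
simulated = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
followup  = [(0.0, 0.0, 3.0), (1.0, 0.0, 3.0)]
print(mean_absolute_surface_distance(simulated, followup))  # 3.0
```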