2019
Authors
Renna, F; Oliveira, J; Coimbra, MT;
Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
Abstract
This paper studies the use of deep convolutional neural networks to segment heart sounds into their main components. The proposed methods are based on the adoption of a deep convolutional neural network architecture, which is inspired by similar approaches used for image segmentation. Different temporal modeling schemes are applied to the output of the proposed neural network, which induce the output state sequence to be consistent with the natural sequence of states within a heart sound signal (S1, systole, S2, diastole). In particular, convolutional neural networks are used in conjunction with underlying hidden Markov models and hidden semi-Markov models to infer emission distributions. The proposed approaches are tested on heart sound signals from the publicly available PhysioNet dataset, and they are shown to outperform current state-of-the-art segmentation methods by achieving an average sensitivity of 93.9% and an average positive predictive value of 94% in detecting S1 and S2 sounds.
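A minimal sketch of the kind of temporal post-processing described above, assuming per-frame state posteriors produced by a CNN: a cyclic four-state HMM decoded with Viterbi enforces the S1 → systole → S2 → diastole ordering. The transition probabilities, frame count, and random posteriors below are illustrative placeholders, not the paper's trained models.

```python
# Sketch: enforce the cyclic S1 -> systole -> S2 -> diastole ordering on
# per-frame CNN posteriors with Viterbi decoding (illustrative, not the
# authors' exact HMM/HSMM formulation).
import numpy as np

STATES = ["S1", "systole", "S2", "diastole"]

def cyclic_transition_matrix(p_stay=0.9):
    """Each state either stays put or advances to the next state in the cycle."""
    K = len(STATES)
    A = np.zeros((K, K))
    for k in range(K):
        A[k, k] = p_stay
        A[k, (k + 1) % K] = 1.0 - p_stay
    return A

def viterbi(posteriors, A, prior=None):
    """posteriors: (T, K) per-frame state probabilities from the CNN."""
    T, K = posteriors.shape
    prior = np.full(K, 1.0 / K) if prior is None else prior
    logA = np.log(A + 1e-12)
    logB = np.log(posteriors + 1e-12)
    delta = np.log(prior + 1e-12) + logB[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA          # (K, K): from -> to
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy usage with random "CNN" posteriors standing in for the real network output.
rng = np.random.default_rng(0)
fake_posteriors = rng.dirichlet(np.ones(4), size=200)
states = viterbi(fake_posteriors, cyclic_transition_matrix())
print([STATES[s] for s in states[:10]])
```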
2019
Authors
Renna, F; Coimbra, MT;
Publication
46th Computing in Cardiology, CinC 2019, Singapore, September 8-11, 2019
Abstract
In this work, we present a method to separate aortic (A2) and pulmonary (P2) components from second heart sounds (S2). The proposed approach captures the different dynamical behavior of A2 and P2 components via a joint Gaussian mixture model, which is then used to perform separation via a closed-form conditional mean estimator. The proposed approach is tested on synthetic heart sounds and is shown to guarantee a reduction of approximately 25% in the normalized root mean-squared error incurred in signal separation, with respect to a previously presented approach in the literature.
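The closed-form conditional mean estimator mentioned above can be sketched as follows for a joint Gaussian mixture over the stacked vector x = [a2; p2] observed through the sum s2 = a2 + p2; the function signature and parameter shapes are assumptions for illustration, not the authors' code.

```python
# Sketch of a GMM-based conditional-mean separator: given a joint GMM over
# x = [a2; p2] and an observation s2 = a2 + p2, estimate E[a2 | s2].
import numpy as np
from scipy.stats import multivariate_normal

def conditional_mean_separation(s2, weights, means, covs, d):
    """
    s2      : observed mixture, shape (d,)
    weights : (K,) mixture weights of the joint GMM over x = [a2; p2] (dim 2d)
    means   : (K, 2d) component means
    covs    : (K, 2d, 2d) component covariances
    Returns the MMSE estimate of a2 (the first d coordinates of x).
    """
    H = np.hstack([np.eye(d), np.eye(d)])           # s2 = H x
    K = len(weights)
    post = np.zeros(K)
    cond_means = np.zeros((K, d))
    for k in range(K):
        mu_s = H @ means[k]
        Sig_s = H @ covs[k] @ H.T
        Sig_xs = covs[k] @ H.T                      # Cov(x, s2)
        post[k] = weights[k] * multivariate_normal.pdf(s2, mu_s, Sig_s)
        x_hat = means[k] + Sig_xs @ np.linalg.solve(Sig_s, s2 - mu_s)
        cond_means[k] = x_hat[:d]                   # keep the a2 block
    post /= post.sum()                              # component responsibilities
    return post @ cond_means
```

In the single-component case (K = 1) this reduces to the familiar jointly Gaussian conditional mean mu_a + Sigma_as Sigma_ss^{-1} (s2 - mu_s).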
2019
Authors
Renna, F; Illanes, A; Oliveira, J; Esmaeili, N; Friebe, M; Coimbra, MT;
Publication
2019 41ST ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC)
Abstract
This paper studies the use of non-invasive acoustic emission recordings for clinical device tracking. In particular, audio signals recorded at the proximal end of a needle are used to detect perforation events that occur when the needle tip crosses internal tissue layers. A comparative study is performed to assess the capacity of different features and envelopes in detecting perforation events. The results obtained from the considered experimental setup show a statistically significant correlation between the extracted envelopes and the perforation events, thus leading the way for future development of perforation detection algorithms.
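As an illustration of the envelope-based detection idea, the sketch below computes a band-passed Hilbert envelope of the proximal needle audio and flags threshold crossings as candidate perforation events; the filter band, threshold rule, and synthetic test signal are assumptions, not the features and envelopes compared in the paper.

```python
# Sketch: Hilbert (analytic-signal) envelope of the needle audio, followed by
# a simple threshold-crossing event detector (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def hilbert_envelope(audio, fs, band=(500.0, 5000.0)):
    """Band-pass the proximal needle audio and return its Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, audio)
    return np.abs(hilbert(filtered))

def detect_events(envelope, fs, k=4.0):
    """Flag samples where the envelope exceeds mean + k * std (toy detector)."""
    thr = envelope.mean() + k * envelope.std()
    above = envelope > thr
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return onsets / fs   # event times in seconds

# Toy usage on synthetic data: a short burst buried in noise.
fs = 44_100
t = np.arange(fs) / fs
audio = 0.01 * np.random.randn(fs)
audio[22_050:22_200] += 0.5 * np.sin(2 * np.pi * 2_000 * t[:150])
print(detect_events(hilbert_envelope(audio, fs), fs))
```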
2020
Authors
Antun, V; Renna, F; Poon, C; Adcock, B; Hansen, AC;
Publication
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
Abstract
Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with potential to change the field. In this paper, we demonstrate a crucial phenomenon: Deep learning typically yields unstable methods for image reconstruction. The instabilities usually occur in several forms: 1) Certain tiny, almost undetectable perturbations, both in the image and sampling domain, may result in severe artefacts in the reconstruction; 2) a small structural change, for example, a tumor, may not be captured in the reconstructed image; and 3) (a counterintuitive type of instability) more samples may yield poorer performance. Our stability test with algorithms and easy-to-use software detects the instability phenomena. The test is aimed at researchers, to test their networks for instabilities, and for government agencies, such as the Food and Drug Administration (FDA), to secure safe use of deep learning methods.
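A rough sketch of a worst-case perturbation search in the spirit of such stability tests (not the paper's released test suite): gradient ascent looks for a small measurement perturbation that maximally changes the output of a reconstruction network. The toy sampling operator, network, perturbation budget, and step sizes are all placeholder assumptions.

```python
# Sketch: search for a small measurement perturbation r that maximizes the
# change in the reconstruction produced by a (toy) network.
import torch

torch.manual_seed(0)
m, n = 64, 128
A = torch.randn(m, n) / m**0.5                  # toy sampling operator
net = torch.nn.Sequential(                      # stand-in reconstruction network
    torch.nn.Linear(m, 256), torch.nn.ReLU(), torch.nn.Linear(256, n)
)

x = torch.randn(n)                              # toy ground-truth image
y = A @ x                                       # clean measurements
r = torch.zeros(m, requires_grad=True)          # perturbation to be optimized
opt = torch.optim.Adam([r], lr=1e-2)
eps = 0.05 * y.norm().item()                    # perturbation budget

for _ in range(200):
    opt.zero_grad()
    loss = -(net(y + r) - net(y)).norm()        # maximize reconstruction change
    loss.backward()
    opt.step()
    with torch.no_grad():                       # project back onto the eps-ball
        if r.norm() > eps:
            r.mul_(eps / r.norm())

print(f"|r|/|y| = {r.norm().item() / y.norm().item():.3f}, "
      f"reconstruction change = {(net(y + r) - net(y)).norm().item():.3f}")
```

A large reconstruction change produced by a perturbation that is tiny relative to the measurements is the signature of the first type of instability described above.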
2020
Authors
Sabetsarvestani, Z; Renna, F; Kiraly, F; Rodrigues, M;
Publication
IEEE TRANSACTIONS ON SIGNAL PROCESSING
Abstract
In this paper, we propose an algorithm for source separation with side information where one observes the linear superposition of two source signals plus two additional signals that are correlated with the mixed ones. Our algorithm is based on two ingredients: first, we learn a Gaussian mixture model (GMM) for the joint distribution of a source signal and the corresponding correlated side information signal; second, we separate the signals using standard computationally efficient conditional mean estimators. The paper also puts forth new recovery guarantees for this source separation algorithm. In particular, under the assumption that the signals can be perfectly described by a GMM, we characterize necessary and sufficient conditions for reliable source separation in the low-noise asymptotic regime as a function of the geometry of the underlying signals and their interaction. It is shown that, provided a sufficient number of linear measurements of the mixture is observed, the sources can be reliably separated if the subspaces spanned by the innovation components of the source signals with respect to the side information signals have zero intersection, and cannot be reliably separated otherwise. Our proposed framework, which provides a new way to incorporate side information into source separation problems where the decoder has access to linear projections of superimposed sources and side information, is also employed in a real-world art investigation application involving the separation of mixtures of X-ray images. The simulation results showcase the superiority of our algorithm over other state-of-the-art algorithms.
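For the single-Gaussian case, the conditional mean estimator used here admits the following sketch, where a mixture of the two sources is observed through a linear operator together with the two side information signals; the operator Phi, the noise floor, and all dimensions are illustrative assumptions rather than the paper's setup.

```python
# Sketch (single-Gaussian case for brevity): MMSE separation of x1 and x2 from
# linear measurements of their mixture plus side information y1, y2.  The full
# method mixes several such Gaussian components.
import numpy as np

def separate(w, y1, y2, Phi, mu, Sigma, d):
    """
    w      : measurements of the mixture, w = Phi @ (x1 + x2) + noise, shape (m,)
    y1, y2 : side information signals, each shape (d,)
    mu, Sigma : mean (4d,) and covariance (4d, 4d) of u = [x1; x2; y1; y2]
    Returns MMSE estimates (x1_hat, x2_hat).
    """
    m = Phi.shape[0]
    I, Z = np.eye(d), np.zeros((d, d))
    # Stacked observation v = H u with v = [w; y1; y2].
    H = np.block([[Phi, Phi, np.zeros((m, 2 * d))],
                  [Z,   Z,   I, Z],
                  [Z,   Z,   Z, I]])
    v = np.concatenate([w, y1, y2])
    Sig_v = H @ Sigma @ H.T + 1e-6 * np.eye(m + 2 * d)   # small noise floor
    Sig_uv = Sigma @ H.T                                  # Cov(u, v)
    u_hat = mu + Sig_uv @ np.linalg.solve(Sig_v, v - H @ mu)
    return u_hat[:d], u_hat[d:2 * d]
```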
2020
Authors
Zamani, M; Sokolic, J; Jiang, D; Renna, F; Rodrigues, MRD; Demosthenous, A;
Publication
IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS
Abstract
This paper presents an adaptable dictionary-based feature extraction approach for spike sorting offering high accuracy and low computational complexity for implantable applications. It extracts and learns identifiable features from evolving subspaces through matched unsupervised subspace filtering. To provide compatibility with the strict constraints in implantable devices such as the chip area and power budget, the dictionary contains arrays of {-1, 0, 1} and the algorithm need only process addition and subtraction operations. Three types of such dictionary were considered. To quantify and compare the performance of the resulting three feature extractors with existing systems, a neural signal simulator based on several different libraries was developed. For noise levels σ_N between 0.05 and 0.3 and groups of 3 to 6 clusters, all three feature extractors provide robust high performance with average classification errors of less than 8% over five iterations, each consisting of 100 generated data segments. To our knowledge, the proposed adaptive feature extractors are the first able to classify reliably 6 clusters for implantable applications. An ASIC implementation of the best performing dictionary-based feature extractor was synthesized in a 65-nm CMOS process. It occupies an area of 0.09 mm² and dissipates up to about 10.48 µW from a 1 V supply voltage, when operating with 8-bit resolution at 30 kHz operating frequency.
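The hardware motivation for ternary dictionaries can be illustrated with a short sketch: projecting a detected spike onto atoms with entries in {-1, 0, +1} requires only additions and subtractions. The random dictionary and spike length below are assumptions, not the learned dictionaries evaluated in the paper.

```python
# Sketch: feature extraction with a ternary {-1, 0, +1} dictionary, using only
# additions and subtractions (multiplication-free, hardware-friendly).
import numpy as np

def ternary_features(spike, dictionary):
    """
    spike      : (L,) samples of one detected spike
    dictionary : (K, L) atoms with entries in {-1, 0, +1}
    Each feature is sum(spike[atom == +1]) - sum(spike[atom == -1]).
    """
    feats = np.empty(dictionary.shape[0])
    for k, atom in enumerate(dictionary):
        feats[k] = spike[atom == 1].sum() - spike[atom == -1].sum()
    return feats

# Toy usage: 4 random ternary atoms over 32-sample spikes.
rng = np.random.default_rng(0)
D = rng.choice([-1, 0, 1], size=(4, 32))
spike = rng.standard_normal(32)
print(ternary_features(spike, D))
assert np.allclose(ternary_features(spike, D), D @ spike)  # equals a matmul
```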