
Publications by Manuel Eduardo Correia

2020

Illegitimate HIS Access by Healthcare Professionals Detection System Applying an Audit Trail-based Model

Authors
Sa Correia, L; Correia, ME; Cruz Correia, R;

Publication
PROCEEDINGS OF THE 13TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES, VOL 5: HEALTHINF

Abstract
Complex data management in healthcare institutions makes it very hard to identify illegitimate accesses, which is a serious issue. We propose to develop a system that detects accesses with suspicious behavior for further investigation. We modeled use cases (UC) and sequence diagrams (SD) showing the data flow between users and systems. The algorithms, represented by activity diagrams, apply rules based on professionals' routines, use data from an audit trail (AT), and classify accesses as suspicious or normal. The algorithms were evaluated between 23 and 31 July 2019. The results were analyzed using absolute and relative frequencies and dispersion measures. Access classification was in accordance with the rules applied. The "Check time of activity" UC yielded 64.78% suspicious classifications, with 55% of activity periods shorter and 9.78% longer than expected; "Check days of activity" presented 2.27% suspicious accesses; and "EHR read access" 79%, the highest percentage of suspicious accesses. The results show a first picture of HIS accesses. Deeper analysis to evaluate the algorithms' sensitivity and specificity should be done. A lack of more detailed information about professionals' routines and systems, and the low quality of system logs, are some limitations. Nevertheless, we believe this is an important step in this field.
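The rule-based classification described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the expected working window, the weekday set, and the rule names are hypothetical assumptions standing in for rules derived from real professionals' routines.

```python
from datetime import datetime

# Illustrative expected working window for a professional's routine
# (hypothetical values; the paper derives its rules from actual routines).
EXPECTED_START_HOUR = 8
EXPECTED_END_HOUR = 20
EXPECTED_WEEKDAYS = {0, 1, 2, 3, 4}  # Monday-Friday

def classify_access(timestamp: datetime) -> str:
    """Classify a single audit-trail access as 'suspicious' or 'normal'
    by checking day and time of activity against the expected routine."""
    if timestamp.weekday() not in EXPECTED_WEEKDAYS:
        return "suspicious"  # "Check days of activity" rule
    if not (EXPECTED_START_HOUR <= timestamp.hour < EXPECTED_END_HOUR):
        return "suspicious"  # "Check time of activity" rule
    return "normal"
```

In the paper, each suspicious classification is flagged for further manual investigation rather than blocked outright.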

2020

Providing Secured Access Delegation in Identity Management Systems

Authors
Shehu, AS; Pinto, A; Correia, ME;

Publication
PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON E-BUSINESS AND TELECOMMUNICATIONS (SECRYPT), VOL 1

Abstract
The evolutionary growth of information technology has provided us with platforms that ease access to a wide range of electronic services. Typically, access to these services requires users to authenticate their identity, which involves the release, dissemination and processing of personal data by third parties such as service and identity providers. The involvement of these and other entities in managing and processing personally identifiable data has continued to raise concerns about the privacy of personal information. Identity management systems (IdMs) emerged as a promising solution to address major access control and privacy issues; however, most research works are focused on securing service providers (SPs) and the services provided, with little emphasis on users' privacy. In order to optimise users' privacy and ensure that personal information is used only for intended purposes, there is a need for authorisation systems that control who may access what and under what conditions. However, for adoption, the data owner's perspective must not be neglected. To address these issues, this paper introduces an IdM and access control framework that operates with RESTful services. The proposal provides a new level of abstraction and logic in access management, while giving the data owner decisive control over access to personal data using a smartphone. The framework utilises the Attribute-Based Access Control (ABAC) method to authenticate and authorise users, the OpenID Connect (OIDC) protocol for data owner authorisation, and public-key cryptography to achieve perfect forward secrecy in communication. The solution enables the data owner to take responsibility for granting or denying access to their data, over a secured communication with an identity provider, using a digitally signed token.
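The ABAC decision at the heart of such a framework can be sketched in a few lines. This is a hedged illustration under assumed attribute names (`role`, `allowed_actions`, `owner_consent`); the paper's actual policy model, token handling, and OIDC flow are considerably richer.

```python
# Minimal Attribute-Based Access Control (ABAC) decision sketch.
# Attribute names and the policy itself are hypothetical examples.

def abac_decide(subject: dict, resource: dict, action: str, context: dict) -> bool:
    """Grant access only when subject, resource, action, and context
    attributes jointly satisfy the policy, including the data owner's
    explicit consent (the decisive control described above)."""
    return (
        subject.get("role") == "service_provider"
        and action in resource.get("allowed_actions", set())
        and context.get("owner_consent") is True  # owner grants/denies via smartphone
    )
```

The key point the paper makes is that the owner-consent attribute is asserted by the data owner themselves, conveyed as a digitally signed token over a secured channel with the identity provider.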

2020

Container Hardening Through Automated Seccomp Profiling

Authors
Lopes, N; Martins, R; Correia, ME; Serrano, S; Nunes, F;

Publication
PROCEEDINGS OF THE 2020 6TH INTERNATIONAL WORKSHOP ON CONTAINER TECHNOLOGIES AND CONTAINER CLOUDS (WOC '20)

Abstract
Nowadays the use of container technologies is ubiquitous, and thus the need to make them secure arises. Container technologies such as Docker provide several options to improve container security; one of these is the use of a Seccomp profile. A major problem with these profiles is that they are hard to maintain, for two reasons: they need to be updated quite often, and determining exactly what to update is a complex and time-consuming task; therefore, not many people use them. The research goal of this paper is to make Seccomp profiles a viable technique in a production environment by proposing a reliable method to generate custom Seccomp profiles for an arbitrary containerized application. This research focused on developing a solution with few requirements, allowing for easy integration into any environment with no human intervention. Results show that using a custom Seccomp profile can mitigate several attacks and even some zero-day vulnerabilities on containerized applications. This represents a big step forward in using Seccomp in a production environment, which would benefit users worldwide.
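The general shape of profile generation is: observe which syscalls the containerized application actually uses, then emit a profile that allows only those and denies everything else. A minimal sketch of the emission step, assuming a hypothetical observed syscall trace (the paper's contribution is automating the observation itself):

```python
import json

def build_seccomp_profile(allowed_syscalls):
    """Build a Docker-compatible seccomp profile that denies every
    syscall by default and allows only the observed ones."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",   # deny by default
        "architectures": ["SCMP_ARCH_X86_64"],
        "syscalls": [
            {"names": sorted(set(allowed_syscalls)), "action": "SCMP_ACT_ALLOW"}
        ],
    }

# Syscalls observed while exercising the containerized application
# (hypothetical trace; the paper derives this set automatically).
observed = ["read", "write", "openat", "close", "exit_group"]
profile = build_seccomp_profile(observed)
print(json.dumps(profile, indent=2))
```

The resulting JSON can be passed to Docker via `--security-opt seccomp=profile.json`; the deny-by-default `defaultAction` is what provides the mitigation of unexpected (including zero-day) syscall use.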

2021

Exposing Manipulated Photos and Videos in Digital Forensics Analysis

Authors
Ferreira, S; Antunes, M; Correia, ME;

Publication
JOURNAL OF IMAGING

Abstract
Tampered multimedia content is being increasingly used in a broad range of cybercrime activities. The spread of fake news, misinformation, digital kidnapping, and ransomware-related crimes are amongst the most recurrent crimes in which manipulated digital photos and videos are the perpetrating and disseminating medium. Criminal investigation has been challenged to apply machine learning techniques to automatically distinguish between fake and genuine seized photos and videos. Despite the pertinent need for manual validation, easy-to-use platforms for digital forensics are essential to automate and facilitate the detection of tampered content and to help criminal investigators with their work. This paper presents a machine learning method, based on Support Vector Machines (SVM), to distinguish between genuine and fake multimedia files, namely digital photos and videos, which may indicate the presence of deepfake content. The method was implemented in Python and integrated as new modules in the widely used digital forensics application Autopsy. The implemented approach extracts a set of simple features resulting from the application of a Discrete Fourier Transform (DFT) to digital photos and video frames. The model was evaluated with a large dataset of classified multimedia files containing both legitimate and fake photos and frames extracted from videos. Regarding deepfake detection in videos, the Celeb-DFv1 dataset was used, featuring 590 original videos collected from YouTube and covering different subjects. The results obtained with 5-fold cross-validation outperformed the SVM-based methods documented in the literature, achieving average F1-scores of 99.53%, 79.55%, and 89.10%, respectively, for photos, videos, and a mixture of both types of content. A benchmark against state-of-the-art methods was also performed, comparing the proposed SVM method with deep learning approaches, namely Convolutional Neural Networks (CNN). Although CNN outperformed the proposed DFT-SVM compound method, the competitiveness of the results attained by DFT-SVM and its substantially reduced processing time make it appropriate for implementation and embedding into Autopsy modules, predicting the level of fakeness for each analyzed multimedia file.
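The DFT feature-extraction step can be sketched as below. This follows the commonly used DFT radial-profile approach (magnitude spectrum averaged over rings of equal frequency radius); the exact feature set and vector length used in the paper are assumptions here, and the resulting vector would then be fed to an SVM classifier.

```python
import numpy as np

def dft_features(image: np.ndarray, n_features: int = 50) -> np.ndarray:
    """Extract a compact 1D feature vector from a grayscale image:
    2D DFT -> magnitude spectrum -> azimuthal (radial) average."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Average spectrum magnitude over rings of equal radius.
    radial = np.bincount(r.ravel(), weights=spectrum.ravel()) / np.bincount(r.ravel())
    # Resample the radial profile to a fixed-length feature vector.
    idx = np.linspace(0, len(radial) - 1, n_features).astype(int)
    return radial[idx]

features = dft_features(np.random.rand(64, 64))
# `features` can then be used to train/evaluate an SVM (e.g. sklearn.svm.SVC).
```

Upsampling artifacts left by typical deepfake generation pipelines tend to show up in the high-frequency end of this radial profile, which is what makes such simple features discriminative.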

2021

A Dataset of Photos and Videos for Digital Forensics Analysis Using Machine Learning Processing

Authors
Ferreira, S; Antunes, M; Correia, ME;

Publication
DATA

Abstract
Deepfake and manipulated digital photos and videos are being increasingly used in a myriad of cybercrimes. Ransomware, the dissemination of fake news, and digital kidnapping-related crimes are the most recurrent, in which tampered multimedia content has been the primordial disseminating vehicle. Digital forensic analysis tools are widely used in criminal investigations to automate the identification of digital evidence in seized electronic equipment. The number of files to be processed and the complexity of the crimes under analysis have highlighted the need to employ efficient digital forensics techniques grounded on state-of-the-art technologies. Machine Learning (ML) researchers have been challenged to apply techniques and methods to improve the automatic detection of manipulated multimedia content. However, such methods have not yet been widely incorporated into digital forensic tools, mostly due to the lack of realistic and well-structured datasets of photos and videos. The diversity and richness of the datasets are crucial to benchmark the ML models and to evaluate their suitability for real-world digital forensics applications. An example is the development of third-party modules for the widely used Autopsy digital forensic application. This paper presents a dataset obtained by extracting a set of simple features from genuine and manipulated photos and videos, which are part of existing state-of-the-art datasets. The resulting dataset is balanced, and each entry comprises a label and a vector of numeric values corresponding to the features extracted through a Discrete Fourier Transform (DFT). The dataset is available in a GitHub repository, and the total amount of photos and video frames is 40,588 and 12,400, respectively.
The dataset was validated and benchmarked with deep learning Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) methods; however, a plethora of other existing methods can be applied. Generically, the results show a better F1-score for CNN when compared with SVM, for both photo and video processing. CNN achieved an F1-score of 0.9968 and 0.8415 for photos and videos, respectively. Regarding SVM, the results obtained with 5-fold cross-validation are 0.9953 and 0.7955, respectively, for photo and video processing. A set of methods written in Python is available to researchers, namely to preprocess and extract the features from the original photo and video files and to build the training and testing sets. Additional methods are also available to convert the original PKL files into CSV and TXT, which gives ML researchers more flexibility to use the dataset on existing ML frameworks and tools.
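A PKL-to-CSV conversion of the kind the abstract mentions can be sketched with the standard library alone. The `(label, feature-vector)` entry layout assumed below is hypothetical; the actual structure of the published PKL files may differ.

```python
import csv
import pickle

def pkl_to_csv(pkl_path: str, csv_path: str) -> None:
    """Convert a pickled list of (label, feature-vector) entries
    into a CSV file with the label in the first column, followed
    by the DFT feature values."""
    with open(pkl_path, "rb") as f:
        entries = pickle.load(f)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        for label, features in entries:
            writer.writerow([label, *features])
```

A flat CSV of this shape can be loaded directly by most ML frameworks (e.g. pandas, scikit-learn, or Weka), which is the flexibility argument the paper makes for providing the converters.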

2021

Forensic Analysis of Tampered Digital Photos

Authors
Ferreira, S; Antunes, M; Correia, ME;

Publication
Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications - 25th Iberoamerican Congress, CIARP 2021, Porto, Portugal, May 10-13, 2021, Revised Selected Papers

Abstract
Deepfake multimedia content is being increasingly used in a plethora of cybercrimes, namely those related to digital kidnapping and ransomware. Criminal investigators have been challenged to detect manipulated multimedia material by applying machine learning techniques to distinguish between fake and genuine photos and videos. This paper presents a Support Vector Machine (SVM) based method to detect tampered photos. The method was implemented in Python and integrated as a new module in the widely used digital forensics application Autopsy. The method processes a set of features resulting from the application of a Discrete Fourier Transform (DFT) to each photo. The experiments were performed on a new, large dataset of classified photos containing both legitimate and manipulated photos, composed of objects and faces. The results obtained were promising and reveal the appropriateness of using this method, embedded in Autopsy, to help in criminal investigation activities and digital forensics.
