2015
Authors
Pereira, N; Tennina, S; Loureiro, J; Severino, R; Saraiva, B; Santos, M; Pacheco, F; Tovar, E;
Publication
INTERNATIONAL JOURNAL OF SENSOR NETWORKS
Abstract
Data centres are large energy consumers. A large portion of this power consumption is due to the control of physical parameters of the data centre (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, and even more so in upcoming data centres, where the location of workloads can vary substantially, for example because workloads are moved within the cloud infrastructure hosted in the data centre. Therefore, managing the physical and compute infrastructure of a large data centre is an embodiment of a cyber-physical system (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data centre at very high temporal and spatial resolution. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data.
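The paper itself defines the structure of the messaging system; purely as an illustrative sketch, a single sensor reading flowing through such a collection layer could be modelled as below. All field names here are hypothetical, not the schema from the paper.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """One physical measurement from a data-centre sensor (illustrative fields only)."""
    sensor_id: str     # e.g. a rack/position identifier, for spatial resolution
    kind: str          # physical parameter, e.g. "temperature" or "humidity"
    value: float       # measured value
    timestamp_ms: int  # epoch milliseconds, for high temporal resolution

def encode(reading: SensorReading) -> bytes:
    """Serialise a reading for transport over the messaging system (JSON here for readability)."""
    return json.dumps(asdict(reading)).encode("utf-8")

def decode(payload: bytes) -> SensorReading:
    """Rebuild a reading on the consumer side of the messaging system."""
    return SensorReading(**json.loads(payload.decode("utf-8")))

reading = SensorReading("rack-07/top", "temperature", 27.4, 1420070400000)
assert decode(encode(reading)) == reading  # lossless encode/decode round trip
```

A real deployment would likely use a compact binary encoding rather than JSON, but the encode/decode round trip illustrates the collect-and-distribute contract the abstract describes.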
2015
Authors
Malta, MC; Baptista, AA; Parente, C;
Publication
Proceedings of the International Conference on Dublin Core and Metadata Applications
Abstract
This article presents a work-in-progress version of a Dublin Core Application Profile (DCAP) developed to serve the Social and Solidarity Economy (SSE). Studies revealed that this community is interested in implementing both internal interoperability between their Web platforms, to build a global SSE e-marketplace, and external interoperability between their Web platforms and external ones. The Dublin Core Application Profile for Social and Solidarity Economy (DCAP-SSE) serves this purpose. SSE organisations operate within the market economy, but they have specificities that this economy does not take into account. The DCAP-SSE integrates terms from well-known metadata schemas, Resource Description Framework (RDF) vocabularies, and ontologies, in order to enhance interoperability and take advantage of the benefits of the Linked Open Data ecosystem. It also integrates terms from the new essglobal RDF vocabulary, which was created to respond to SSE-specific needs, as well as five new Vocabulary Encoding Schemes to be used with DCAP-SSE properties. The DCAP development was based on a method for the development of application profiles (Me4MAP). We believe this article also has educational value, since it argues for basing DCAP developments on a method, and it shows the main results of applying such a method.
2015
Authors
Malta, MC; Vidotti, SABG;
Publication
Proceedings of the International Conference on Dublin Core and Metadata Applications
Abstract
2015
Authors
Pereira, D; Oliveira, P; Rodrigues, F;
Publication
PROCEEDINGS OF THE 2015 10TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI 2015)
Abstract
Due to their historical nature, data warehouses must store large volumes of data in their repositories. Some organizations are beginning to have problems managing and analysing these huge volumes of data. This is due, in large part, to relational databases, the primary method of data storage in a data warehouse, starting to underperform under the weight of the data stored. In contrast to these systems, NoSQL databases have arisen, associated with the storage of the very large volumes of data inherent to the Big Data paradigm. Thus, this article focuses on the study of the feasibility and the implications of adopting a NoSQL database within the data warehousing context. MongoDB was selected to represent NoSQL systems in this investigation. The paper explains the processes required to design the structure of a data warehouse, and of typical dimensional queries, in the MongoDB system. The research undertaken culminates in a performance analysis of queries executed in a traditional data warehouse, based on the SQL Server system, and in an equivalent data warehouse based on the MongoDB system.
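The paper details the actual schema and query translations; purely as an illustration of the kind of mapping involved, a dimensional SQL aggregation might correspond to a MongoDB aggregation pipeline as sketched below. The collection and field names are hypothetical, and a tiny in-memory evaluation shows the grouping semantics without needing a live server.

```python
# Dimensional query on a star schema, in SQL:
#   SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
#
# In MongoDB the fact and dimension data are typically denormalised into one
# document per fact row, and the same query becomes an aggregation pipeline:
pipeline = [
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
    {"$sort": {"_id": 1}},
]
# On a live server this would run as: db.sales.aggregate(pipeline)

# Tiny in-memory evaluation of the same grouping, to show the semantics:
sales = [
    {"region": "north", "amount": 10.0},
    {"region": "south", "amount": 5.0},
    {"region": "north", "amount": 2.5},
]
totals = {}
for doc in sales:  # mirrors what the $group stage computes per distinct region
    totals[doc["region"]] = totals.get(doc["region"], 0.0) + doc["amount"]
assert totals == {"north": 12.5, "south": 5.0}
```

The denormalisation step is the crux of the trade-off the abstract describes: it removes joins at query time at the cost of redundant dimension data in every fact document.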
2014
Authors
da Costa, FP; Cunha, A; David, G;
Publication
EURODYN 2014: IX INTERNATIONAL CONFERENCE ON STRUCTURAL DYNAMICS
Abstract
This project has been motivated by the need to standardize, preserve, and share the data sets of the Laboratory of Vibrations and Structural Monitoring (ViBest, www.fe.up.pt/vibest) of FEUP, produced by several individually managed long-term projects. The solution presented is meant to support the process of Structural Health Monitoring, offering features to catalogue the projects, their goals and components, to store and visualize their acquired and processed data through time, and to preserve the data in a form standardized for the whole research unit and extensible to future applications. The result is a digital archive with automatic ingestion of new data files and a Web interface with access control and tools for information management. There is a batch export functionality to deal with large data transfers. It is being used on monitoring data from different kinds of structural health monitoring applications. The standardization and preservation of all data sets acquired in multiple applications will certainly be a solid basis for further research, either locally or in the context of international joint cooperation.
2014
Authors
Castro, JA; da Silva, JR; Ribeiro, C;
Publication
2014 IEEE/ACM JOINT CONFERENCE ON DIGITAL LIBRARIES (JCDL)
Abstract
The description of data is a central task in research data management. Describing datasets requires deep knowledge of both the data and the data creation process to ensure adequate capture of their meaning and context. Metadata schemas are usually followed in resource description to enforce comprehensiveness and interoperability, but they can be hard to understand and adopt by researchers. We propose to address data description using ontologies, which can evolve easily, express semantics at different granularity levels and be directly used in system development. Considering that existing ontologies are often hard to use in a cross domain research data management environment, we present an approach for creating lightweight ontologies to describe research data. We illustrate our process with two ontologies, and then use them as configuration parameters for Dendro, a software platform for research data management currently being developed at the University of Porto.