Publications

Publications by CRACS

2011

Integration of ePortfolios in Learning Management Systems

Authors
Queiros, R; Oliveira, L; Leal, JP; Moreira, F;

Publication
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2011, PT V

Abstract
The LMS plays a decisive role in most eLearning environments. Although LMSs integrate many useful tools for managing eLearning activities, they must also be effectively integrated with other specialized systems typically found in an educational environment, such as Repositories of Learning Objects or ePortfolio Systems. Both types of systems evolved separately, but in recent years the trend has been to combine them, allowing the LMS to benefit from the ePortfolio's assessment features. This paper details the most common strategies for integrating an ePortfolio system into an LMS: the data, API, and tool integration strategies. It presents a comparative study of these strategies based on required technical skills, degree of coupling, security features, batch integration, development effort, status, and standardization. The study is validated through the integration of two of the most representative systems in each category: Mahara and Moodle, respectively.

2011

A comparative study on LMS interoperability

Authors
Leal, JP; Queiros, R;

Publication
Higher Education Institutions and Learning Management Systems: Adoption and Standardization

Abstract
A Learning Management System (LMS) plays an important role in any eLearning environment. Still, the LMS cannot afford to be isolated from other systems in an educational institution. Thus, the potential for interoperability is an important, although frequently overlooked, aspect of an LMS. In this chapter we make a comparative study of the interoperability features of the most relevant LMSs in use nowadays. We start by defining a comparison framework, with systems that are representative of the LMS universe and interoperability facets that are representative of the types of integration with other broad classes of eLearning systems. For each interoperability facet we categorize and identify the most representative remote systems, present a comprehensive survey of existing standards, and illustrate with concrete integration scenarios. Finally, we draw some conclusions on the status of interoperability in LMSs based on our study. © 2012, IGI Global.

2011

Modelling Text File Evaluation Processes

Authors
Leal, JP; Queiros, R;

Publication
NEW HORIZONS IN WEB-BASED LEARNING: ICWL 2010 WORKSHOPS

Abstract
Text file evaluation is an emergent topic in e-learning that responds to the shortcomings of assessment based on questions with predefined answers. Questions with predefined answers are formalized in languages such as the IMS Question & Test Interoperability specification (QTI) and supported by many e-learning systems. Complex evaluation domains justify the development of specialized evaluators that participate in several business processes. The goal of this paper is to formalize the concept of text file evaluation in the scope of the E-Framework, a service-oriented framework for the development of e-learning systems maintained by a community of practice. The contribution includes an abstract service type and a service usage model. The former describes the generic capabilities of a text file evaluation service. The latter is a business process involving a set of services, such as repositories of learning objects and learning management systems.

2011

Using the Common Cartridge Profile to Enhance Learning Content Interoperability

Authors
Queiros, R; Leal, JP;

Publication
PROCEEDINGS OF THE 10TH EUROPEAN CONFERENCE ON E-LEARNING, VOLS 1 AND 2

Abstract
The concept of Learning Object (LO) is crucial for standardization in eLearning. The latest LO standard from the IMS Global Learning Consortium is the IMS Common Cartridge (IMS CC), which organizes and distributes digital learning content. By analyzing this new specification we considered two interoperability levels: content and communication. A common content format is the backbone of interoperability and is the basis for content exchange among eLearning systems. Communication is more than just exchanging content; it also includes accessing specialized systems and services and reporting on content usage. This is particularly important when LOs are used for evaluation. In this paper we analyze the Common Cartridge profile based on the two interoperability levels we propose. We detail its data model, which comprises a set of derived schemata referenced in the CC schema, and we explore the use of IMS Learning Tools Interoperability (LTI) to allow remote tools and content to be integrated into a Learning Management System (LMS). In order to test the applicability of IMS CC to automatic evaluation, we define a representation of programming exercises using this standard. This representation is intended to be the cornerstone of a network of eLearning systems where students can solve computer programming exercises and obtain feedback automatically. The CC learning object is automatically generated based on an XML dialect called PExIL, which aims to consolidate all the data needed to describe resources within the programming exercise life-cycle. Finally, we test the generated cartridge with the IMS CC online validator to verify its conformance with the IMS CC specification.
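The packaging idea behind a Common Cartridge (a zip archive whose root contains a manifest listing resources) can be illustrated with a much-simplified sketch. This is not the paper's PExIL generator, and a real IMS CC package must validate against the IMS CC schemas and namespaces, which are omitted here; all function names and the generic manifest element names below are illustrative assumptions following the broad IMS content-packaging layout.

```python
import zipfile
import xml.etree.ElementTree as ET

# Hypothetical, schema-free sketch: build a minimal manifest and zip it
# together with the listed resource files, cartridge-style.

def build_manifest(resource_files):
    """Return a minimal manifest XML string listing each resource file."""
    manifest = ET.Element("manifest", identifier="man1")
    ET.SubElement(manifest, "metadata")       # real CC requires schema metadata
    ET.SubElement(manifest, "organizations")
    resources = ET.SubElement(manifest, "resources")
    for i, href in enumerate(resource_files):
        res = ET.SubElement(resources, "resource",
                            identifier=f"res{i}", type="webcontent", href=href)
        ET.SubElement(res, "file", href=href)
    return ET.tostring(manifest, encoding="unicode")

def package_cartridge(path, resource_contents):
    """Write a zip with the manifest at its root plus each resource body."""
    with zipfile.ZipFile(path, "w") as zf:
        zf.writestr("imsmanifest.xml", build_manifest(list(resource_contents)))
        for href, body in resource_contents.items():
            zf.writestr(href, body)

package_cartridge("exercise.zip", {"exercise.html": "<p>Sum two integers.</p>"})
print(zipfile.ZipFile("exercise.zip").namelist())
```

A cartridge produced by a real generator would additionally carry the CC namespace declarations and, for remote tools, LTI link descriptors, before being checked against a validator as the abstract describes.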

2011

Runtime programming through model-preserving, scalable runtime patches

Authors
Kirsch, CM; Lopes, L; Marques, ERB; Sokolova, A;

Publication
Proceedings - International Conference on Application of Concurrency to System Design, ACSD

Abstract
We consider a methodology for flexible software design, runtime programming, defined by recurrent, incremental software modifications to a program at runtime, called runtime patches. The principles we consider for runtime programming are model preservation and scalability. Model preservation means that a runtime patch preserves the programming model in place for programs - in terms of syntax, semantics, and correctness properties - as opposed to an "ad-hoc", disruptive operation, or one that requires an extra level of abstraction. Scalability means that, for practicality and performance, the effort in program compilation required by a runtime patch should ideally scale in proportion to the change induced by it. We formulate runtime programming over an abstract model for component-based concurrent programs, defined by a modular relation between the syntax and semantics of programs, plus built-in notions of initialization and quiescence. The notion of a runtime patch is defined over these assumptions, as a model-preserving transition between two programs and respective states. Additionally, we propose an incremental compilation framework for scalability in patch compilation. The formulation is put in perspective through a case-study instantiation over a language for distributed hard real-time systems, the Hierarchical Timing Language (HTL). © 2011 IEEE.
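The core idea of a model-preserving patch applied at quiescence can be sketched in a few lines. This is an illustrative toy, not the paper's HTL formalism or incremental compiler; the `Component`, `apply_patch`, and `migrate` names are assumptions made for the example.

```python
# Sketch: a component may only be patched between complete reaction
# steps (quiescence), and the patch carries the old state across the
# behaviour change via an explicit migration function, so the
# component model is preserved rather than disrupted.

class Component:
    def __init__(self, step_fn, state=0):
        self.step_fn = step_fn
        self.state = state
        self.quiescent = True  # true between complete reaction steps

    def step(self, inp):
        self.quiescent = False
        self.state = self.step_fn(self.state, inp)
        self.quiescent = True
        return self.state

def apply_patch(component, new_step_fn, migrate=lambda s: s):
    """Swap behaviour only at quiescence, migrating the old state."""
    if not component.quiescent:
        raise RuntimeError("patch deferred: component not quiescent")
    component.state = migrate(component.state)
    component.step_fn = new_step_fn

counter = Component(lambda s, x: s + x)
counter.step(3)
counter.step(4)                             # state is now 7
apply_patch(counter, lambda s, x: s + 2 * x)  # new semantics, same state
print(counter.step(1))                      # 7 + 2*1 = 9
```

The scalability principle in the abstract corresponds, in this toy, to the patch touching only the one component it changes rather than rebuilding the whole program.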

2011

Clustering distributed sensor data streams using local processing and reduced communication

Authors
Gama, J; Rodrigues, PP; Lopes, L;

Publication
INTELLIGENT DATA ANALYSIS

Abstract
Nowadays, applications produce infinite streams of data distributed across wide sensor networks. In this work we study the problem of continuously maintaining a cluster structure over the data points generated by the entire network. Usual techniques operate by forwarding and concentrating the entire data in a central server, processing it as a multivariate stream. In this paper, we propose DGClust, a new distributed algorithm which reduces both the dimensionality and the communication burdens by allowing each local sensor to keep an online discretization of its data stream, which operates with constant update time and (almost) fixed space. Each new data point triggers a cell in this univariate grid, reflecting the current state of the data stream at the local site. Whenever a local site changes its state, it notifies the central server of the new state it is in. This way, at each point in time, the central site has the global multivariate state of the entire network. To avoid monitoring all possible states, whose number is exponential in the number of sensors, the central site keeps a small list of counters for the most frequent global states. Finally, a simple adaptive partitional clustering algorithm is applied to the central points of the frequent states in order to provide an anytime definition of the cluster centers. The approach is evaluated in the context of distributed sensor networks, focusing on three outcomes: loss with respect to the real centroids, communication savings, and processing reduction. The experimental work on synthetic data supports our proposal, showing robustness to a high number of sensors, and the application to real data from physiological sensors exposes the aforementioned advantages of the system.
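The local-discretization and state-notification scheme described in the abstract can be sketched as follows. This is a minimal illustration of the idea, not the authors' DGClust implementation; the class names, the fixed-width grid, and all parameters are assumptions made for the example, and the final adaptive clustering step over frequent states is omitted.

```python
import random
from collections import Counter

class LocalSensor:
    """Keeps an online univariate discretization of its own stream."""
    def __init__(self, cell_width):
        self.cell_width = cell_width
        self.state = None  # current grid cell

    def update(self, value):
        """Map the reading to a grid cell; return the cell only when it
        changed, so communication happens only on state transitions."""
        cell = int(value // self.cell_width)
        if cell != self.state:
            self.state = cell
            return cell      # message sent to the central site
        return None          # no communication needed

class CentralSite:
    """Tracks the global multivariate state and counts frequent ones."""
    def __init__(self, n_sensors, top_k=10):
        self.global_state = [0] * n_sensors
        self.counts = Counter()
        self.top_k = top_k

    def notify(self, sensor_id, cell):
        self.global_state[sensor_id] = cell
        self.counts[tuple(self.global_state)] += 1

    def frequent_states(self):
        return [s for s, _ in self.counts.most_common(self.top_k)]

# Simulate two sensors streaming noisy readings around stable regimes.
random.seed(0)
sensors = [LocalSensor(cell_width=1.0) for _ in range(2)]
centre = CentralSite(n_sensors=2)
messages = 0
for t in range(1000):
    for i, s in enumerate(sensors):
        reading = (5.5 if i == 0 else 20.5) + random.gauss(0, 0.3)
        cell = s.update(reading)
        if cell is not None:
            centre.notify(i, cell)
            messages += 1
print(messages, "messages for 2000 readings")
print("frequent global states:", centre.frequent_states())
```

Because each sensor reports only on state changes, the message count stays far below the number of raw readings, which is the communication reduction the abstract claims; a partitional clusterer would then run over the central points of `centre.frequent_states()`.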
