
Publications by João Barroso

2019

A review of assistive spatial orientation and navigation technologies for the visually impaired

Authors
Fernandes, H; Costa, P; Filipe, V; Paredes, H; Barroso, J;

Publication
UNIVERSAL ACCESS IN THE INFORMATION SOCIETY

Abstract
The overall objective of this work is to review the assistive technologies that have been proposed by researchers in recent years to address the limitations in user mobility posed by visual impairment. This work presents an umbrella review. Visually impaired people often want more than just information about their location and often need to relate their current location to the features of the surrounding environment. Extensive research has been dedicated to building assistive systems. Assistive systems for human navigation, in general, aim to allow their users to safely and efficiently navigate unfamiliar environments by dynamically planning the path based on the user's location, respecting the constraints posed by their special needs. Modern mobile assistive technologies are becoming more discreet and include a wide range of mobile computerized devices, including ubiquitous technologies such as mobile phones. Technology can be used to determine the user's location and their relation to the surroundings (context), to generate navigation instructions, and to deliver all this information to the blind user.
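
As a minimal illustration of the location, context, and instruction pipeline the review describes, the following Python sketch turns a user position and nearby features into a deliverable instruction; all names and data structures are hypothetical, not drawn from any reviewed system.

```python
# Hypothetical sketch of the pipeline the review describes: locate the
# user, relate the location to nearby features (context), and turn that
# into an instruction for delivery (e.g. via text-to-speech on a phone).
# All names and data structures are illustrative, not a reviewed system.
from dataclasses import dataclass

@dataclass
class Location:
    lat: float
    lon: float

@dataclass
class Feature:
    name: str
    bearing_deg: float   # direction of the feature relative to the user
    distance_m: float

def build_instruction(user: Location, features: list[Feature]) -> str:
    # In a real system the features would be queried from a map around
    # `user`; here they are passed in directly.
    if not features:
        return "No known landmarks nearby."
    nearest = min(features, key=lambda f: f.distance_m)
    return (f"{nearest.name} is about {nearest.distance_m:.0f} metres away, "
            f"bearing {nearest.bearing_deg:.0f} degrees.")

print(build_instruction(Location(41.28, -7.74),
                        [Feature("pedestrian crossing", 45, 12),
                         Feature("bus stop", 180, 60)]))
```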

2018

Sculpture maps and assistive navigation as a way to promote universal access

Authors
Fernandes, H; Rocha, T; Reis, A; Paredes, H; Barroso, J;

Publication
PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT AND TECHNOLOGIES FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION (DSAI 2018)

Abstract
Assistive systems that incorporate different technologies to provide simple, quick, yet informative content have recently been proposed to alleviate the mobility and accessibility constraints of users with visual impairment. Currently, technology has matured to a point that allows the development of systems based on video capture, image recognition and geo-location referencing, which are key for providing features of artificial vision, assisted navigation and spatial perception. The miniaturization of electronics can be used to create devices, such as electronic canes equipped with sensors, that can provide contextual information to a blind user. In this paper, we describe the current work on assistive systems for the blind and propose a new perspective on using the base information of those systems to provide new services to the general public. By bridging the gap between the two groups, we expect to further advance the development of the current systems and contribute to their economic sustainability.
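
As a rough sketch of the kind of contextual information a sensor-equipped electronic cane could deliver, here is a minimal Python example; the field names and the 1 m threshold are assumptions, not the authors' design.

```python
# Illustrative sketch (not the authors' device) of how a sensor-equipped
# electronic cane might package a reading as contextual information for
# the user. Field names and the 1 m threshold are assumptions.
from dataclasses import dataclass

@dataclass
class CaneReading:
    obstacle_distance_m: float   # e.g. from an ultrasonic range sensor
    lat: float                   # geo-location reference of the reading
    lon: float

def to_context_message(reading: CaneReading) -> str:
    if reading.obstacle_distance_m < 1.0:
        return f"Obstacle {reading.obstacle_distance_m:.1f} m ahead."
    return "Path clear."

print(to_context_message(CaneReading(0.8, 41.28, -7.74)))  # Obstacle 0.8 m ahead.
```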

2019

Classification of Physical Exercise Intensity Based on Facial Expression Using Deep Neural Network

Authors
Khanal, SR; Sampaio, J; Barroso, J; Filipe, V;

Publication
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments - 13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part II

Abstract
If done properly, physical exercise can help maintain fitness and health. The benefits of physical exercise can be increased with real-time monitoring that measures physical exercise intensity, which refers to how hard it is for a person to perform a specific task. This parameter can be estimated using various sensors, including contactless technology. Physical exercise intensity usually tracks heart rate; therefore, if we measure heart rate, we can infer a particular level of physical exercise. In this paper, we propose a Convolutional Neural Network (CNN) to classify physical exercise intensity based on the analysis of facial images extracted from a video collected during sub-maximal exercises on a stationary bicycle, according to a standard protocol. The time slots of the video used to extract the frames were determined by heart rate. We tested different CNN models using the individual color components and grayscale images as input parameters. The experiments were carried out separately with various numbers of classes, with the ground truth level for each class defined by the heart rate. The dataset was prepared to classify the physical exercise intensity into two, three, and four classes. For each color model a CNN was trained and tested, and model performance is reported with a confusion matrix for each case. The most significant color channel in terms of accuracy was green. The average model accuracy was 100%, 99% and 96% for two-, three- and four-class classification, respectively. © 2019, Springer Nature Switzerland AG.
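
To make the setup concrete, the following PyTorch sketch shows a small CNN of the kind described, taking single-channel (e.g. green) face crops and producing two-, three-, or four-class intensity predictions; the layer sizes and the 64x64 input are assumptions, not the authors' architecture.

```python
# Hypothetical sketch of the classification setup: a small CNN that maps
# single-channel (e.g. green) face crops to N intensity classes.
# Layer sizes and the 64x64 input are assumptions, not the paper's model.
import torch
import torch.nn as nn

class IntensityCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 channel: green only
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = IntensityCNN(num_classes=3)        # two-, three-, or four-class variants
logits = model(torch.randn(8, 1, 64, 64))  # batch of 8 single-channel frames
print(logits.shape)                        # torch.Size([8, 3])
```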

2018

Telepresence Robots in the Classroom: The State-of-the-Art and a Proposal for a Telepresence Service for Higher Education

Authors
Reis, A; Martins, MG; Martins, P; Sousa, J; Barroso, J;

Publication
Technology and Innovation in Learning, Teaching and Education - First International Conference, TECH-EDU 2018, Thessaloniki, Greece, June 20-22, 2018, Revised Selected Papers

Abstract
In this work we reviewed the current state of the art regarding the use of robots, in particular telepresence robots, in education-related activities. We also surveyed the current consumer- and corporate-grade telepresence robotic equipment and tested three of these devices. Lastly, we reviewed the problem of disabled students, including students with special education needs, who fail to access and remain in higher education. One of the reasons for this problem is the impossibility of physically attending all classes due to temporary or permanent limitations. As a conclusion of this work, and considering the ongoing positive cases with robotics and the current availability of equipment, we propose the creation of telepresence services at higher education institutions as a solution for those students who cannot attend classes. © Springer Nature Switzerland AG 2019.

2019

Creating Weather Narratives

Authors
Reis, A; Liberato, M; Paredes, H; Martins, P; Barroso, J;

Publication
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments - 13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part II

Abstract
Information can be conveyed to the user by means of a narrative, modeled according to the user's context. A case in point is the weather, which can be perceived differently and with distinct levels of importance according to the user's context. For example, for a blind person, the weather is an important element in planning and moving between locations. In fact, weather can make it very difficult or even impossible for a blind person to successfully negotiate a path and navigate from one place to another. To provide proper information, narrated and delivered according to the person's context, this paper proposes a project for the creation of weather narratives, targeted at specific types of users and contexts. The proposal's main objective is to add value to the data acquired through the observation of weather systems by interpreting that data in order to identify relevant information and automatically create narratives, in a conversational way or with machine metadata language. These narratives should communicate specific aspects of the evolution of weather systems in an efficient way, providing knowledge and insight in specific contexts and for specific purposes. Currently, there are several natural language generation systems that automatically create weather forecast reports based on previously processed and synthesized information. This paper proposes a wider and more comprehensive approach to weather system phenomena, covering the full process from raw data to a contextualized narration, and thus providing a methodology and a tool that can be used for various contexts and weather systems. © 2019, Springer Nature Switzerland AG.
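
As a minimal illustration of context-dependent narration, the following Python sketch renders the same observation differently for a blind pedestrian than for a generic reader; all names, thresholds, and wording are hypothetical, not the proposed system.

```python
# Minimal sketch (not the paper's system) of a context-sensitive weather
# narrative: one observation, two renderings depending on user context.
from dataclasses import dataclass

@dataclass
class WeatherObservation:
    condition: str      # e.g. "heavy rain"
    wind_kmh: float
    temperature_c: float

def narrate(obs: WeatherObservation, user_context: str) -> str:
    if user_context == "blind_pedestrian":
        # Emphasize mobility-relevant cues: rain masks echo cues, strong
        # wind distorts the traffic sounds used for orientation.
        warnings = []
        if "rain" in obs.condition:
            warnings.append("rain may mask the sounds you rely on for orientation")
        if obs.wind_kmh > 30:
            warnings.append("strong wind can distort traffic noise")
        advice = "; ".join(warnings) or "conditions should not affect navigation"
        return f"Expect {obs.condition} at {obs.temperature_c:.0f} °C. Note: {advice}."
    return f"{obs.condition.capitalize()}, {obs.temperature_c:.0f} °C, wind {obs.wind_kmh:.0f} km/h."

print(narrate(WeatherObservation("heavy rain", 40, 12), "blind_pedestrian"))
print(narrate(WeatherObservation("heavy rain", 40, 12), "generic"))
```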

2019

Usage of artificial vision cloud services as building blocks for blind people assistive systems

Authors
Paulino, D; Reis, A; Paredes, H; Fernandes, H; Barroso, J;

Publication
International Journal of Recent Technology and Engineering

Abstract
This study has the objective of selecting the best cloud-based image processing and recognition service for use in systems that aid and improve the daily lives of blind people. To accomplish this purpose, a set of candidate services was built, including Microsoft Cognitive Services and Google Cloud Vision. A test mobile app was developed to automatically take pictures, which are sent to the online cloud services for processing. The results and the functionalities were evaluated with the aim of measuring their accuracy and relevance. The following variables were registered: relative accuracy, represented by the ratio of accurate results to results shown; confidence degree, representing the service accuracy (when provided by the service); and relevance, identifying situations that can be useful in the daily lives of blind people. The results have shown that these two services, Microsoft Cognitive Services and Google Cloud Vision, provided good accuracy and relevance in supporting systems to help blind people in their daily tasks. Functionalities were chosen from the two cloud service APIs, such as face identification, image description, object recognition, and text recognition. © BEIESP.
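
The relative accuracy metric described above is a simple ratio; the following Python sketch computes it per service, with illustrative counts rather than the paper's data.

```python
# Sketch of the evaluation metric from the abstract: relative accuracy as
# the ratio of accurate results to results shown, computed per service.
# The sample counts below are illustrative, not the paper's data.
def relative_accuracy(accurate: int, shown: int) -> float:
    return accurate / shown if shown else 0.0

results = {
    "Microsoft Cognitive Services": (17, 20),  # (accurate, shown) - illustrative
    "Google Cloud Vision": (16, 20),
}
for service, (accurate, shown) in results.items():
    print(f"{service}: relative accuracy = {relative_accuracy(accurate, shown):.2f}")
```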
