
Publications by Tatiana Martins Pinho

2021

Prototyping IoT-Based Virtual Environments: An Approach toward the Sustainable Remote Management of Distributed Mulsemedia Setups

Authors
Adao, T; Pinho, T; Padua, L; Magalhaes, LG; Sousa, JJ; Peres, E;

Publication
APPLIED SCIENCES-BASEL

Abstract
Business models built upon multimedia/multisensory setups delivering user experiences within disparate contexts (entertainment, tourism, cultural heritage, etc.) usually comprise the installation and in-situ management of both equipment and digital contents. Considering each setup as unique in its purpose, location, layout, equipment and digital contents, monitoring and control operations may add up to a hefty cost over time. Software and hardware agnosticity may be of value to lessen complexity and provide more sustainable management processes and tools. Distributed computing under the Internet of Things (IoT) paradigm may enable management processes capable of providing both remote control and monitoring of multimedia/multisensory experiences made available in different venues. This paper presents prototyping software to perform IoT multimedia/multisensory simulations. It is fully based on virtual environments that enable the remote design, layout, and configuration of each experience in a transparent way, regardless of the underlying software and hardware. Furthermore, content delivery pipelines may be defined, managed, and updated in a context-aware environment. The software was tested in the laboratory and proved to be a sustainable approach to managing multimedia/multisensory projects. It is currently being field-tested by an international multimedia company for further validation.
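
One way to picture the hardware-agnostic setup descriptions discussed in this abstract is a small, declarative model of a venue and its content pipeline that a remote manager could update without touching device-specific code. The sketch below is purely illustrative: the class and field names are assumptions, not the paper's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Device:
    """Abstract output device; concrete drivers (projector, scent diffuser, ...) are resolved elsewhere."""
    device_id: str
    kind: str            # e.g. "display", "audio", "olfactory"

@dataclass
class PipelineStep:
    """One stage of a content delivery pipeline (names are illustrative)."""
    content_uri: str
    target_device: str
    trigger: str         # e.g. "on_enter", "after:10s"

@dataclass
class MulsemediaSetup:
    venue: str
    devices: List[Device] = field(default_factory=list)
    pipeline: List[PipelineStep] = field(default_factory=list)

    def update_pipeline(self, steps: List[PipelineStep]) -> None:
        # A remote manager would push this change; devices only see abstract steps.
        self.pipeline = list(steps)

# Usage: describe a venue and push a new content pipeline remotely.
setup = MulsemediaSetup(
    venue="museum-hall-A",
    devices=[Device("proj-1", "display"), Device("scent-1", "olfactory")],
)
setup.update_pipeline([
    PipelineStep("https://cdn.example/intro.mp4", "proj-1", "on_enter"),
    PipelineStep("scent://lavender", "scent-1", "after:10s"),
])
print(f"{setup.venue}: {len(setup.pipeline)} pipeline steps configured")
```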

2021

Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models

Authors
Aguiar, AS; Magalhaes, SA; dos Santos, FN; Castro, L; Pinho, T; Valente, J; Martins, R; Boaventura Cunha, J;

Publication
AGRONOMY-BASEL

Abstract
The agricultural sector plays a fundamental role in our society, where it is increasingly important to automate processes, which can generate beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental in the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards considering different growth stages: the early stage, just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two different parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean Average Precision of up to 66.96%. Since this approach uses low resources (a low-cost and low-power hardware device that requires simplified models with 8-bit quantization), the obtained performance was satisfactory. Experiments also demonstrated that the models performed better in identifying grape bunches at the medium growth stage than those present in the vineyard just after the bloom, since the latter class represents smaller grape bunches, with a colour and texture more similar to the surrounding foliage, which complicates their detection.
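
The benchmark described above sweeps two post-processing parameters: the detector's confidence score and the intersection-over-union (IoU) threshold used to match predictions to ground truth. A minimal, framework-agnostic sketch of that matching step is shown below; the box format and threshold values are illustrative assumptions, not the paper's exact evaluation code.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(preds: List[Tuple[Box, float]], gts: List[Box],
                     conf_thr: float = 0.5, iou_thr: float = 0.5):
    """Count true/false positives for one image at a given (conf_thr, iou_thr) pair."""
    kept = [b for b, score in preds if score >= conf_thr]
    matched = set()
    tp = 0
    for box in kept:
        best_j, best_iou = -1, 0.0
        for j, gt in enumerate(gts):
            if j in matched:
                continue
            v = iou(box, gt)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou >= iou_thr:
            tp += 1
            matched.add(best_j)
    fp = len(kept) - tp
    fn = len(gts) - tp
    return tp, fp, fn

# Example: one predicted grape-bunch box with confidence 0.8 against one ground-truth box.
print(match_detections([((10, 10, 50, 60), 0.8)], [(12, 12, 48, 58)]))
```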

2022

Benchmark of Deep Learning and a Proposed HSV Colour Space Models for the Detection and Classification of Greenhouse Tomato

Authors
Moreira, G; Magalhaes, SA; Pinho, T; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The harvesting operation is a recurring task in the production of any crop, thus making it an excellent candidate for automation. In protected horticulture, one of the crops with high added value is the tomato. However, its robotic harvesting is still far from maturity. That said, the development of an accurate fruit detection system is a crucial step towards achieving fully automated robotic harvesting. Deep Learning (DL) and detection frameworks like the Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO) are more robust and accurate alternatives with a better response to highly complex scenarios. DL can easily be used to detect tomatoes, but when their classification is intended, the task becomes harder, demanding a huge amount of data. Therefore, this paper proposes the use of DL models (SSD MobileNet v2 and YOLOv4) to efficiently detect tomatoes, and compares those systems with a proposed histogram-based HSV colour space model that classifies each tomato and determines its ripening stage, using two acquired image datasets. Regarding detection, both models obtained promising results, with the YOLOv4 model standing out with an F1-Score of 85.81%. For the classification task, YOLOv4 was again the best model, with a Macro F1-Score of 74.16%. The HSV colour space model outperformed the SSD MobileNet v2 model, obtaining results similar to the YOLOv4 model, with a Balanced Accuracy of 68.10%.
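
To make the HSV colour space idea concrete, the sketch below classifies a cropped tomato region by the fraction of its pixels falling in red versus green hue bands. It is a rough hedged illustration only: the hue thresholds, saturation cut-off, and class labels are assumptions, not the paper's calibrated histogram model.

```python
import cv2
import numpy as np

def ripeness_from_hsv(bgr_image: np.ndarray) -> str:
    """Very rough hue-band ripeness guess (all thresholds are illustrative only)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].ravel()          # OpenCV hue range: 0..179
    sat = hsv[:, :, 1].ravel()
    hue = hue[sat > 60]                 # ignore washed-out pixels
    if hue.size == 0:
        return "unknown"
    red_frac = np.mean((hue < 10) | (hue > 170))    # red wraps around 0/180
    green_frac = np.mean((hue > 35) & (hue < 85))
    if red_frac > 0.5:
        return "ripe"
    if green_frac > 0.5:
        return "unripe"
    return "turning"

# Usage on a cropped tomato region returned by a detector (path is hypothetical):
# crop = cv2.imread("tomato_crop.png")
# print(ripeness_from_hsv(crop))
```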

2021

Hydroponics Monitoring through UV-Vis Spectroscopy and Artificial Intelligence: Quantification of Nitrogen, Phosphorous and Potassium

Authors
Silva, AF; Löfkvist, K; Gilbertsson, M; Os, EV; Franken, G; Balendonck, J; Pinho, TM; Boaventura-Cunha, J; Coelho, L; Jorge, P; Martins, RC;

Publication
Chemistry Proceedings

Abstract
In hydroponic cultivation, monitoring and quantification of nutrients is of paramount importance. Precision agriculture has an urgent need for measuring fertilization and plant nutrient uptake. Reliable, robust and accurate sensors for measuring nitrogen (N), phosphorus (P) and potassium (K) are regarded as critical in this process. It is vital to understand nutrients' interference; thus, a Hoagland fertilizer solution-based orthogonal experimental design was deployed. Concentration ranges were varied in a target analyte-independent manner, as follows: [N] = [103.17–554.85] ppm; [P] = [15.06–515.35] ppm; [K] = [113.78–516.45] ppm, by dilution from individual stock solutions. Quantitative results for N and K, and qualitative results for P, were obtained.
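
The abstract does not name the specific model used to map UV-Vis spectra to nutrient concentrations, so the sketch below uses partial least squares regression, a common chemometrics baseline, on synthetic spectra as a stand-in. Everything here (the simulated spectra, the wavelength grid, the number of components) is an assumption for illustration, not the paper's method or data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for UV-Vis spectra: 120 samples x 200 wavelengths,
# with absorbance loosely driven by a hidden nitrogen concentration.
n_true = rng.uniform(103.17, 554.85, size=120)            # ppm, range from the abstract
wavelengths = np.linspace(200, 400, 200)
spectra = (n_true[:, None] / 500.0) * np.exp(-((wavelengths - 300) / 40) ** 2)
spectra += rng.normal(0, 0.02, spectra.shape)             # measurement noise

X_train, X_test, y_train, y_test = train_test_split(
    spectra, n_true, test_size=0.25, random_state=0)

model = PLSRegression(n_components=5)                     # components chosen arbitrarily
model.fit(X_train, y_train)
pred = model.predict(X_test).ravel()
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"RMSE on synthetic N predictions: {rmse:.1f} ppm")
```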

2021

Routing and schedule simulation of a biomass energy supply chain through SimPy simulation package

Authors
Pinho T.M.; Coelho J.P.; Oliveira P.M.; Oliveira B.; Marques A.; Rasinmäki J.; Moreira A.P.; Veiga G.; Boaventura-Cunha J.;

Publication
Applied Computing and Informatics

Abstract
The optimisation of the forest fuel supply chain involves several entities, actors, and particularities. To successfully manage these supply chains, efficient tools must be devised with the ability to deal with stakeholders' dynamic interactions and to optimise the supply chain performance as a whole, while being stable and robust even in the presence of uncertainties. This work proposes a framework to coordinate different planning levels and event-based models to manage the forest-based supply chain. In particular, with the new methodology, the resilience and flexibility of the biomass supply chain are increased through a closed-loop system based on the system forecasts provided by a discrete-event model. The developed event-based predictive model is described in detail, explaining its link with the remaining elements. The implemented models and their links within the proposed framework are presented in a case study in Finland, and results are shown to illustrate the advantage of the proposed architecture.
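
Since the paper centres on a discrete-event model built with the SimPy package, a minimal, self-contained SimPy sketch of the routing and scheduling idea is given below: trucks cycle between roadside storage and an energy plant, sharing a single loader. The process names, durations, and capacities are illustrative assumptions rather than the paper's calibrated model.

```python
import simpy

LOAD_TIME = 2      # hours at the roadside storage (illustrative)
TRAVEL_TIME = 3    # hours from forest to the energy plant (illustrative)
UNLOAD_TIME = 1    # hours at the plant (illustrative)

def truck(env: simpy.Environment, name: str, loader: simpy.Resource, deliveries: list):
    """One transport truck cycling between roadside storage and the plant."""
    while True:
        with loader.request() as req:     # only one truck can be loaded at a time
            yield req
            yield env.timeout(LOAD_TIME)
        yield env.timeout(TRAVEL_TIME)    # drive to plant
        yield env.timeout(UNLOAD_TIME)    # unload biomass
        deliveries.append((name, env.now))
        yield env.timeout(TRAVEL_TIME)    # return empty

env = simpy.Environment()
loader = simpy.Resource(env, capacity=1)
deliveries: list = []
for i in range(3):
    env.process(truck(env, f"truck-{i}", loader, deliveries))

env.run(until=24)  # simulate one day
print(f"Deliveries in 24 h: {len(deliveries)}")
for name, t in deliveries:
    print(f"  {name} delivered at t={t} h")
```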

2023

Nano Aerial Vehicles for Tree Pollination

Authors
Pinheiro, I; Aguiar, A; Figueiredo, A; Pinho, T; Valente, A; Santos, F;

Publication
APPLIED SCIENCES-BASEL

Abstract
Currently, Unmanned Aerial Vehicles (UAVs) are considered in the development of various applications in agriculture, which has led to the expansion of the agricultural UAV market. However, Nano Aerial Vehicles (NAVs) are still underutilised in agriculture. NAVs are characterised by a maximum wing length of 15 centimetres and a weight of less than 50 g. Due to their physical characteristics, NAVs have the advantage of being able to approach and perform tasks with more precision than conventional UAVs, making them suitable for precision agriculture. This work aims to contribute to an open-source solution known as the Nano Aerial Bee (NAB) to enable further research and development on the use of NAVs in an agricultural context. The purpose of the NAB is to mimic and assist bees in the context of pollination. We designed this open-source solution by taking into account the existing state of the art and the requirements of pollination activities. This paper presents the relevant background and work carried out in this area by analysing papers on the topic of NAVs. The development of this prototype is rather complex given the interactions between the different hardware components and the need to achieve autonomous flight capable of pollination. We describe and discuss these challenges in this work. Besides the open-source NAB solution, we train three different versions of YOLO (YOLOv5, YOLOv7, and YOLOR) on an original dataset (Flower Detection Dataset) containing 206 images of a group of eight flowers and on a public dataset (TensorFlow Flower Dataset), which had to be annotated for detection (the TensorFlow Flower Detection Dataset). The results of the models trained on the Flower Detection Dataset are satisfactory, with YOLOv7 and YOLOR achieving the best performance, with 98% precision, 99% recall, and a 98% F1 score. The performance of these models is evaluated on the TensorFlow Flower Detection Dataset to test their robustness. The three YOLO models are also trained on the TensorFlow Flower Detection Dataset to better understand the results. In this case, YOLOR obtains the most promising results, with 84% precision, 80% recall, and an 82% F1 score. The results obtained using the Flower Detection Dataset are used for NAB guidance through the detection of the relative position of a flower in the image, which defines the command the NAB executes.
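
As a rough illustration of the guidance step mentioned at the end of the abstract, the sketch below converts a detected bounding box into a normalised offset from the image centre and maps it to a coarse movement command. The command names and dead-zone threshold are hypothetical and do not reflect the NAB's actual control interface.

```python
from typing import Tuple

def guidance_command(box: Tuple[int, int, int, int],
                     image_size: Tuple[int, int],
                     dead_zone: float = 0.1) -> str:
    """Map a flower bounding box (x_min, y_min, x_max, y_max) to a coarse movement command.

    Offsets are normalised to [-1, 1] relative to the image centre; the command
    vocabulary and dead-zone value are illustrative assumptions.
    """
    width, height = image_size
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    dx = (cx - width / 2.0) / (width / 2.0)    # +1 = far right of the frame
    dy = (cy - height / 2.0) / (height / 2.0)  # +1 = bottom of the frame
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "hold_and_approach"             # flower roughly centred: move forward
    if abs(dx) >= abs(dy):
        return "yaw_right" if dx > 0 else "yaw_left"
    return "descend" if dy > 0 else "ascend"

# Example: a 640x480 frame with a flower detected to the left of centre.
print(guidance_command((100, 200, 180, 280), (640, 480)))
```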
