Engenharia Elétrica
Programa de Pós-Graduação em Engenharia Elétrica
Center: CT
Phone: (27) 4009 2663
Program URL: https://engenhariaeletrica.ufes.br/pt-br/pos-graduacao/PPGEE
Browsing Engenharia Elétrica by author "Almonfrey, Douglas"
Now showing 1 - 3 of 3
- Item: Estudo comparativo de detecção e rastreamento de elementos no trânsito utilizando imagens omnidirecionais (Universidade Federal do Espírito Santo, 2024-11-22). Scarparo, Heisthen Mazzei; Vassallo, Raquel Frizera; Almonfrey, Douglas; Cavalieri, Daniel Cruz.

  Traffic detection and tracking play an important role in the context of smart cities. These technologies have the potential to alleviate congestion, optimize the use of resources, and improve the quality of life of the population. However, one aspect of this field that has not yet been widely explored is the use of omnidirectional videos, which provide a 360° field of view. Omnidirectional images offer a large field of view of the road environment, allowing for a more complete analysis of traffic and moving objects. This panoramic view makes it possible to detect vehicles, pedestrians, cyclists, and other elements in all directions, including angles that are difficult to capture with conventional cameras. Using this type of imagery for traffic light control makes it easier to obtain information on the trajectory of vehicles in real time and, therefore, to configure traffic lights in a more intelligent and efficient way. In addition, omnidirectional images can be used to monitor areas of high traffic density, identify congestion points, and analyze road user behavior patterns. This information is valuable for urban planning, the development of mobility policies, and the implementation of strategies aimed at improving traffic flow and street safety.
  Although the use of 360° panoramic images for traffic detection and tracking is still an underexplored field, it represents a promising tool for the implementation of smart cities through its integration with traffic light control and traffic management systems. In this context, this work presents a database containing 25 panoramic videos with their respective annotations, available for use by the academic community. It also presents a comparative study of the YOLOv5, YOLOv7, and YOLO-NAS networks, combined with the DeepSORT algorithm, for the detection and tracking of traffic objects present in the database. To compare the networks, the metrics Precision, Recall, F1-Score, mAP@.5, and mAP@.5:.95 were used. In this study, the best result was obtained with the YOLOv7 network after training. This result shows the feasibility of using omnidirectional images as a tool for traffic monitoring and for supporting urban mobility.
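For context on the metrics named above: mAP@.5 averages per-class average precision at an IoU threshold of 0.5, while mAP@.5:.95 averages it over thresholds from 0.5 to 0.95. The abstract does not include the evaluation code; as an illustration only, Precision, Recall, and F1-Score follow from the true/false positive and false negative counts of a detector like this:

```python
def detection_metrics(tp, fp, fn):
    """Precision, Recall and F1-Score from detection counts.

    tp: detections matched to a ground-truth box (e.g. IoU >= 0.5)
    fp: detections with no matching ground-truth box
    fn: ground-truth boxes with no matching detection
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts: 80 correct detections, 20 spurious, 10 missed
p, r, f1 = detection_metrics(80, 20, 10)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.889 0.842
```

The counts shown are invented for the example; in a benchmark they come from matching each predicted box to ground truth at the chosen IoU threshold.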
- Item: Serviço Flexível de Detecção de Seres Humanos para Espaços Inteligentes Baseados em Redes de Câmeras (Universidade Federal do Espírito Santo, 2018-07-26). Almonfrey, Douglas; Salles, Evandro Ottoni Teatini; Vassallo, Raquel Frizera; Santos-Victor, José Alberto; Salomão, João Marques; Rauber, Thomas Walter; Ciarelli, Patrick Marques.

  The topic of intelligent spaces has received increasing attention in the last decade. As an instance of the ubiquitous computing paradigm, the general idea is to extract information from the environment and use it to interact with and provide services to the actors present in it. Sensory analysis is mandatory in this area, and humans are usually the principal actors involved. In this sense, we propose a human detector to be used in an intelligent space based on a multi-camera network. Our human detector is implemented using concepts of cloud computing and service-oriented architecture (SOA). As the main contribution of the present work, the human detector is designed to be a service that is scalable, reliable, and parallelizable. The service is also designed to be flexible, decoupled from specific processing nodes of the infrastructure, and as loosely structured as possible, serving different intelligent space applications and services. Since multi-camera systems can easily be found already installed in many different environments, one is used to overcome some difficulties traditionally faced by existing human detection methods based on a single camera. To validate our approach, we implement three different applications as proofs of concept (PoC) of many day-to-day real tasks. Two of these applications are related to robot navigation and demand knowledge of the three-dimensional localization of the humans present in the environment.
  With respect to time and detection performance requirements, our human detection service has proved suitable for interacting with the other services of our Intelligent Space in order to successfully complete the tasks of each application. As an additional contribution, a feature extraction procedure based on independent component analysis (ICA) theory is proposed as part of a detector and evaluated on public datasets. The pedestrian detection area is used as a playground to develop the human detector, since it is the most mature research area in the human detection literature. The resulting detector is also used in the pipeline of the proposed human detection service and is thus also applied in real-time applications in the intelligent space used as our testbed.
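The abstract does not detail the proposed ICA-based feature extraction, so the following is an illustration only, not the author's procedure: the standard one-unit FastICA fixed-point iteration (tanh nonlinearity) that underlies most ICA feature-learning pipelines, shown unmixing a toy two-source signal.

```python
import numpy as np

def whiten(X):
    """Centre and whiten rows of X (features x samples) via eigen-decomposition."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

def fast_ica_component(Z, iters=200, seed=0):
    """One independent component by the FastICA fixed-point rule.

    Z: whitened data, shape (n_features, n_samples).
    Returns a unit projection vector w; w @ Z is the extracted component.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wz = w @ Z
        g, g_prime = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w  # fixed-point update
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-8  # direction stable (up to sign)
        w = w_new
        if converged:
            break
    return w

# Toy demo: two non-Gaussian sources mixed by an unknown matrix
rng = np.random.default_rng(1)
t = np.linspace(0, 8, 4000)
S = np.vstack([np.sign(np.sin(3 * t)),          # square wave
               rng.uniform(-1, 1, t.size)])     # uniform noise
X = np.array([[1.0, 0.5], [0.4, 1.2]]) @ S      # mixed observations
Z = whiten(X)
w = fast_ica_component(Z)
y = w @ Z                                        # recovered component
```

The recovered component `y` correlates strongly (up to sign and scale) with one of the original sources; in an image-based detector, the same iteration would be run on whitened image patches to learn projection filters.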
- Item: Uso da constância de cor na robótica móvel (Universidade Federal do Espírito Santo, 2011-07-21). Almonfrey, Douglas; Schneebeli, Hans-Jörg Andreas; Vassallo, Raquel Frizera; Salles, Evandro Ottoni Teatini; Stemmer, Marcelo Ricardo.

  The color captured by a camera is a function of the scene illumination, the reflective characteristics of the surfaces in the scene, and the photosensors of the vision system; in human vision it also depends mainly on the processing performed by the brain. Due to this processing, humans show the color constancy phenomenon: the color of a surface is perceived as the same regardless of the environment illumination conditions. However, a variation in the scene illumination implies a change in the color value of a surface registered by an artificial vision system. In the literature, defining surface descriptors that are independent of the illumination is known as the color constancy problem. One solution to this problem is to recover the reflective characteristics of the surfaces separately from the information about the scene illumination. Another approach is to convert the colors of the surfaces in the image so that the surfaces appear to be always under the influence of the same standard illumination. Independently of the chosen approach, this is a hard problem to solve, and most existing theories are applied only to synthesized images, while others present limited performance when applied to real images of environments under uncontrolled illumination. Due to the absence of the color constancy phenomenon in artificial vision systems, many automatic systems avoid using color information obtained from images captured by these systems. Besides that, a solution to the color constancy problem is also desired by the consumer photography industry. In this context, this work addresses the color constancy problem using an algorithm based on the color correction method presented in (KONZEN; SCHNEEBELI, 2007a).
  This algorithm corrects the colors of a scene captured under unknown illumination so that the scene appears to have been captured under a standard illumination. If the scene illumination is always the same, the colors of the images show color constancy. This conversion between illuminations is performed by knowing the colors of some points in the scene under the standard illumination. Finally, we analyze the performance of the color constancy algorithm by applying it to a sequence of images of scenes subjected to abrupt illumination changes. A color-based tracker is also employed to show the importance of the color constancy algorithm in these scenes. In addition, a color-based visual servo controller working together with the color constancy algorithm is used to guide a robot in an outdoor navigation task through an environment subjected to the varying illumination of the sun. The color constancy algorithm is also applied to images of an outdoor environment that present illumination changes, and its use in place recognition, a fundamental task in robot localization, is discussed.
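The abstract does not give the exact formulation of the method in (KONZEN; SCHNEEBELI, 2007a), so the following is only a generic sketch of the idea it describes: using a few scene points whose colors under the standard illumination are known to fit a linear map that corrects the whole image.

```python
import numpy as np

def fit_color_correction(observed, reference):
    """Least-squares 3x3 matrix M such that observed @ M ~= reference.

    observed, reference: (n_points, 3) RGB values of the same scene
    points under the unknown and the standard illumination. This is a
    generic linear correction, not necessarily the cited method.
    """
    M, *_ = np.linalg.lstsq(observed, reference, rcond=None)
    return M

def correct_image(img, M):
    """Apply the fitted correction to an (H, W, 3) float image in [0, 1]."""
    return np.clip(img.reshape(-1, 3) @ M, 0.0, 1.0).reshape(img.shape)

# Toy check: recover a simulated diagonal (von Kries-style) illumination shift
rng = np.random.default_rng(0)
ref = rng.uniform(0.1, 0.9, size=(12, 3))   # known colors under standard light
L = np.diag([1.3, 1.0, 0.7])                # simulated illumination change
obs = ref @ L                               # same points under unknown light
M = fit_color_correction(obs, ref)
img = rng.uniform(0.1, 0.9, size=(4, 4, 3))
restored = correct_image(img @ L, M)        # image back under standard light
```

Because the simulated illumination change is exactly linear, `restored` matches the original `img`; real scenes deviate from this model, which is part of what makes the problem hard.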