Electrical Engineering
Permanent URI for this community
Graduate Program in Electrical Engineering
Center: CT
Phone: (27) 4009-2663
Program URL: https://engenhariaeletrica.ufes.br/pt-br/pos-graduacao/PPGEE
Browsing Electrical Engineering by Author "Almonfrey, Douglas"
Now showing 1 - 6 of 6
- Item: Detecção de Armas de Fogo em Imagens Baseada em Redes Neurais Convolucionais (Universidade Federal do Espírito Santo, 2021-04-30). Cardoso, Guilherme Vinícius Simões; Ciarelli, Patrick Marques; https://orcid.org/0000-0003-3177-4028; http://lattes.cnpq.br/1267950518719423; Samatelo, Jorge Leonid Aching; Almonfrey, Douglas; Vassallo, Raquel Frizera.
The demand for weapons has grown along with crime rates, a contemporary problem that haunts several countries. In Brazil, possible changes to relax the ownership and possession of firearms are under discussion, dividing opinions and generating a heated debate on the subject. This has motivated scientists to devise solutions that can assist public security in general. In an attempt to find ways to minimize this problem, a survey was carried out of the main works related to the classification and detection of firearms, aiming to identify the main techniques used. Thus, this work proposes a methodology for the detection of firearms in images using convolutional neural networks. Recent works have used object detectors based on these networks and presented relevant results. Therefore, this work proposes a methodology for detecting weapons using an object detector, called YOLO (You Only Look Once), with an architecture based on convolutional neural networks. Two approaches were taken to evaluate the proposed methodology, taking into account three threshold values for IoU. The first approach, compared with the results found in the literature, points to an improvement in the results: an accuracy of 93.67% and an F1 of 93.05% were achieved, representing a gain of more than 10% in accuracy and a slight improvement of almost 2% in the F1 metric. The second approach follows the same methodology but applies a different initial step, in which the object detector is modified and used to annotate a database and compose a new labeled one. This approach had a positive impact on the results, with an increase in accuracy and an improvement of almost 4% in the F1 metric. Among the three IoU values evaluated, the best configuration reaches an accuracy of 89.91% and an F1 of 94.54% at a confidence of 58%. These results show that the proposed methodology is promising for firearm detection in images.
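As a side note for readers unfamiliar with the IoU criterion used in the evaluation above, here is a minimal sketch of how the Intersection over Union of two axis-aligned bounding boxes is computed. This is illustrative only, not code from the thesis; the `(x1, y1, x2, y2)` corner format is an assumption.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as correct when IoU exceeds the chosen threshold.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143 for this partial overlap
```

Evaluating at several thresholds, as the work does, shows how sensitive the reported accuracy and F1 are to how strictly a match is defined.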
- Item: Estudo comparativo de detecção e rastreamento de elementos no trânsito utilizando imagens omnidirecionais (Universidade Federal do Espírito Santo, 2024-11-22). Scarparo, Heisthen Mazzei; Vassallo, Raquel Frizera; https://orcid.org/0000-0002-4762-3219; http://lattes.cnpq.br/9572903915280374; https://orcid.org/0009-0009-8243-3320; http://lattes.cnpq.br/4545146039699801; Almonfrey, Douglas; https://orcid.org/0000-0002-0547-3494; http://lattes.cnpq.br/1291322166628469; Cavalieri, Daniel Cruz; https://orcid.org/0000-0002-4916-1863; http://lattes.cnpq.br/9583314331960942.
Traffic detection and tracking play an important role in the context of smart cities. These technologies have the potential to alleviate congestion, optimize the use of resources, and improve the quality of life of the population. However, one aspect of this field that has not yet been fully explored is the use of omnidirectional videos, which provide a 360° field of view. Omnidirectional images offer a large field of view of the road environment, allowing a more complete analysis of traffic and moving objects. This panoramic view makes it possible to detect vehicles, pedestrians, cyclists, and other elements in all directions, including angles that are difficult to capture with conventional cameras. Using this type of imagery for traffic light control makes it easier to obtain information on the trajectory of vehicles in real time and, therefore, to configure traffic lights in a more intelligent and efficient way. In addition, omnidirectional images can be used to monitor areas of high traffic density, identify congestion points, and analyze road user behavior patterns. This information is valuable for urban planning, the development of mobility policies, and the implementation of strategies aimed at improving traffic flow and street safety.
Although the use of 360° panoramic images in the context of traffic detection and tracking is still an underexplored field, it represents a good tool for the implementation of smart cities through its integration with traffic light control and traffic management systems. In this context, this work presents a database containing 25 panoramic videos with their respective annotations. This database is available for use by the academic community. It also presents a comparative study of the YOLOv5, YOLOv7, and YOLO-NAS networks, together with the DeepSORT algorithm, for detection and tracking of the traffic objects present in the database. To compare the networks, the metrics Precision, Recall, F1-Score, mAP@.5, and mAP@.5:.95 were used. In this study, the best result was obtained using the YOLOv7 network with training. This result shows the feasibility of using omnidirectional images as a tool for traffic monitoring and for supporting urban mobility.
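For reference, the Precision, Recall, and F1-Score metrics mentioned above reduce to simple ratios over true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch, not code from the work itself:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts.
    TP: correct detections; FP: spurious detections; FN: missed objects."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(detection_metrics(8, 2, 2))  # precision = recall = 0.8 in this toy case
```

The mAP variants additionally average precision over recall levels (and, for mAP@.5:.95, over a range of IoU thresholds).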
- Item: Reidentificação baseada em filtro de correlação discriminativo para rastreamento de múltiplos objetos em câmeras de videomonitoramento (Universidade Federal do Espírito Santo, 2025-04-01). Abling, Augusto; Vassallo, Raquel Frizera; https://orcid.org/0000-0002-4762-3219; http://lattes.cnpq.br/9572903915280374; https://orcid.org/0009-0002-7245-5760; http://lattes.cnpq.br/6477900225667920; Nascimento, Thais Pedruzzi do; https://orcid.org/0000-0002-3962-8941; http://lattes.cnpq.br/8698168347146036; Silva, Bruno Légora Souza da; https://orcid.org/0000-0003-1732-977X; http://lattes.cnpq.br/8885770833300316; Almonfrey, Douglas; https://orcid.org/0000-0002-0547-3494; http://lattes.cnpq.br/1291322166628469; Pereira, Flávio Garcia; https://orcid.org/0000-0002-5557-0241; http://lattes.cnpq.br/3794041743196202.
This study aims to develop, test, and analyze the use of a discriminative correlation filter as a module for object re-identification, integrated with multiple object tracking for use in surveillance cameras, with a focus on real-time processing. The study is set in the context of smart cities and Intelligent Transportation Systems (ITS), where object re-identification and tracking are fundamental for the creation of advanced technologies. The adopted methodology includes the implementation of a modified discriminative correlation filter for the re-identification task, followed by tests to evaluate the algorithm's performance in challenging scenarios present in datasets widely used in computer vision challenges. The results showed that the proposed correlation filter approaches the accuracy of neural network-based approaches without the need for prior training for specific contexts.
Therefore, we may conclude that the integration of this re-identification module with multi-object tracking offers a balanced solution that improves tracking accuracy at a lower computational cost than neural networks, contributing to the advancement of technologies for smart cities and ITS.
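As an illustrative aside on why correlation filters are cheap compared to neural networks: a basic discriminative correlation filter can be trained in closed form and evaluated with a single element-wise product in the Fourier domain. The sketch below is a simplified single-sample MOSSE-style filter, an assumption for illustration, not the modified filter developed in the thesis:

```python
import numpy as np

def train_dcf(template, desired_response, lam=1e-4):
    """Closed-form single-sample correlation filter (MOSSE-style):
    H* = (G · conj(F)) / (F · conj(F) + lam), element-wise in the Fourier domain.
    `lam` is a small regularizer that avoids division by near-zero spectra."""
    F = np.fft.fft2(template)
    G = np.fft.fft2(desired_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(h_conj, patch):
    """Correlation response map; a sharp peak marks a re-identified target."""
    return np.real(np.fft.ifft2(h_conj * np.fft.fft2(patch)))

# Toy usage: train on a random patch with a Gaussian target response centered
# in the patch, then correlate with the same patch -- the response peak lands
# on the Gaussian's center.
rng = np.random.default_rng(0)
size = 32
template = rng.standard_normal((size, size))
ys, xs = np.mgrid[0:size, 0:size]
desired = np.exp(-((ys - size // 2) ** 2 + (xs - size // 2) ** 2) / (2 * 2.0 ** 2))
h = train_dcf(template, desired)
peak = np.unravel_index(np.argmax(respond(h, template)), (size, size))
```

Because training and evaluation are a handful of FFTs, no GPU or offline training phase is required, which matches the real-time motivation stated above.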
- Item: Serviço Flexível de Detecção de Seres Humanos para Espaços Inteligentes Baseados em Redes de Câmeras (Universidade Federal do Espírito Santo, 2018-07-26). Almonfrey, Douglas; Salles, Evandro Ottoni Teatini; Vassallo, Raquel Frizera; Santos-Victor, José Alberto; Salomão, João Marques; Rauber, Thomas Walter; Ciarelli, Patrick Marques.
The topic of intelligent spaces has received increasing attention in the last decade. As an instance of the ubiquitous computing paradigm, the general idea is to extract information from the environment and use it to interact with and provide services to the actors present in it. Sensory analysis is mandatory in this area, and humans are usually the principal actors involved. In this sense, we propose a human detector to be used in an intelligent space based on a multi-camera network. Our human detector is implemented using concepts of cloud computing and service-oriented architecture (SOA). As the main contribution of the present work, the human detector is designed to be a service that is scalable, reliable, and parallelizable. It is also a concern that our service be flexible, decoupled from specific processing nodes of the infrastructure, and as unstructured as possible, serving different intelligent space applications and services. Since it can easily be found already installed in many different environments, a multi-camera system is used to overcome some difficulties traditionally faced by existing human detection methods that are based on only one camera. To validate our approach, we implement three different applications that are proofs of concept (PoC) of many day-to-day real tasks. Two of these applications are related to robot navigation and demand knowledge of the three-dimensional localization of the humans present in the environment.
With respect to time and detection performance requirements, our human detection service has proved suitable for interacting with the other services of our intelligent space in order to successfully complete the tasks of each application. As an additional contribution, a feature extraction procedure based on independent component analysis (ICA) theory is proposed as part of a detector and evaluated on public datasets. The pedestrian detection area is used as a playground to develop the human detector, since it is the most mature research area in the human detection literature. The resulting detector is also used in the pipeline of the proposed human detection service, thus also being applied in real-time applications in the intelligent space used as our testbed.
- Item: Sistema de Reconhecimento de Gestos e Ações em Tempo Real Baseado em Visão Computacional (Universidade Federal do Espírito Santo, 2020-12-17). Santos, Clebeson Canuto dos; Vassallo, Raquel Frizera; https://orcid.org/0000-0002-4762-3219; http://lattes.cnpq.br/9572903915280374; https://orcid.org/0000-0002-7314-1934; Ciarelli, Patrick Marques; https://orcid.org/0000-0003-3177-4028; http://lattes.cnpq.br/1267950518719423; Almonfrey, Douglas; Filho, Jugurta Rosa Montalvao; Bernardino, Alexandre José Malheiro.
This thesis aims to investigate and propose mechanisms for recognizing and anticipating dynamic gestures and actions based only on computer vision. Three proposals are focused on gesture recognition: Star RGB, a representation that condenses the motion
- Item: Uso da constância de cor na robótica móvel (Universidade Federal do Espírito Santo, 2011-07-21). Almonfrey, Douglas; Schneebeli, Hans-Jörg Andreas; Vassallo, Raquel Frizera; Salles, Evandro Ottoni Teatini; Stemmer, Marcelo Ricardo.
The color captured by a camera is a function of the scene illumination, the reflective characteristics of the surfaces in the scene, the photosensors of the vision system and, mainly, the processing performed by the brain. Due to this processing, humans exhibit the color constancy phenomenon: the color of a surface is perceived as the same regardless of the environment illumination conditions. However, a variation in the scene illumination implies a change in the color value of a surface registered by an artificial vision system. In the literature, defining surface descriptors that are independent of the illumination is known as the color constancy problem. One solution to this problem is to obtain the reflective characteristics of the surfaces separately from the information about the scene illumination. Another approach is to convert the colors of the surfaces in the image so that the surfaces appear to be always under the influence of the same standard illumination. Independently of the chosen approach, this is a hard problem to solve, and most existing theories apply only to synthesized images, while others present limited performance when applied to real images of environments under uncontrolled illumination. Due to the absence of the color constancy phenomenon in artificial vision systems, many automatic systems avoid using the color information obtained from the images captured by these systems. Besides that, a solution to the color constancy problem is also desired by the consumer photography industry. In this context, this work addresses the color constancy problem using an algorithm based on the color correction method presented in (KONZEN; SCHNEEBELI, 2007a).
This algorithm corrects the colors of a scene captured under unknown illumination so that the scene appears to have been captured under a standard illumination. If the apparent scene illumination is always the same, the colors in the images exhibit color constancy. This conversion between illuminations is performed by knowing the colors of some points in the scene under the influence of the standard illumination. Finally, we analyze the performance of the color constancy algorithm by applying it to a sequence of images of scenes subjected to abrupt illumination changes. A color-based tracker is also employed to show the importance of the color constancy algorithm in these scenes. In addition, a color-based visual servo control working together with the color constancy algorithm is employed to guide a robot in an outdoor navigation task through an environment subjected to the variable illumination of the sun. The color constancy algorithm is also applied to images of an external environment that present illumination changes, and its use in place recognition, a fundamental task in robot localization, is discussed.
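To give a concrete flavor of converting colors between illuminants, here is a generic von Kries-style diagonal correction sketch. It is a simplifying assumption for illustration, not the actual Konzen and Schneebeli method used in the thesis: each channel is rescaled so that a reference color observed under the unknown illuminant maps to its known value under the standard illuminant.

```python
import numpy as np

def diagonal_correction(image, ref_unknown, ref_standard):
    """Von Kries-style diagonal (per-channel gain) correction.
    `image` is an HxWx3 float array; `ref_unknown` is a reference color
    measured under the unknown illuminant, `ref_standard` the same surface's
    known color under the standard illuminant."""
    gains = np.asarray(ref_standard, dtype=float) / np.asarray(ref_unknown, dtype=float)
    # Apply one gain per channel and keep values in the valid 8-bit range.
    return np.clip(image * gains, 0.0, 255.0)

# A pixel that matches the reference color maps exactly to its standard-light value.
img = np.full((2, 2, 3), [120.0, 80.0, 60.0])
corrected = diagonal_correction(img, ref_unknown=[120, 80, 60], ref_standard=[100, 100, 100])
```

A single diagonal gain is only exact for the reference surface; methods like the one the thesis builds on use several known scene points to fit a better-conditioned correction.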