Doutorado em Ciência da Computação
Level: Doctorate
Browsing Doutorado em Ciência da Computação by date of publication
Now showing 1 - 20 of 53
- A commitment-based reference ontology for service: harmonizing service perspectives (Universidade Federal do Espírito Santo, 2014-12-10)
  Nardi, Julio Cesar; Almeida, João Paulo Andrade; Falbo, Ricardo de Almeida; Pires, Luiz Ferreira; Amorim, Fernanda Araújo Baião; Guizzardi, Renata Silva Souza; Barcellos, Monalessa Perini
  Nowadays, the notion of service has been widely adopted in the practice of economic sectors (e.g., the Service, Manufacturing, and Extractive sectors), as well as in the research focus of various disciplines (e.g., Marketing, Business, and Computer Science). A number of research initiatives (e.g., service ontologies, conceptual models, and theories) have thus tried to understand and characterize the complex notion of service. However, due to the particular views of these disciplines and economic sectors, a number of different characterizations of service (e.g., "service as interaction", "service as value co-creation", and "service as capability / manifestation of competence", among others) have been proposed. The existence of these various non-harmonized characterizations, and the focus on a terminological debate about the "service" concept rather than on the service phenomena from a broad perspective, make the establishment of a unified body of knowledge for service difficult. This limitation impacts, e.g., the establishment of a unified conceptualization for supporting the smooth alignment between business and IT views in service-oriented enterprise architecture (SoEA), and the design and usage of service modeling languages. In this thesis we define a theoretical foundation for service based on the notion of service commitments and claims as basic elements in the characterization of service relations along the service life cycle phases (service offer, service negotiation, and service delivery). As discussed in this work, this theoretical foundation is capable of harmonizing a number of service perspectives found in the literature. The foundation is specified in a well-founded core reference ontology, named UFO-S, which was designed by adopting a sound ontological engineering apparatus (mainly a well-founded ontology representation language, OntoUML, and approaches for model verification and model validation). As a kind of "theory", UFO-S was applied in the analysis of SoEA structuring principles in order to define a "commitment-based SoEA view", which highlights social aspects inherent in service relations that are usually underexplored in widely adopted service-oriented approaches (such as OASIS's SOA-RM, ITIL, and ArchiMate). Based on this, UFO-S was also applied in an ontological analysis of service modeling at ArchiMate's Business layer. This ontological analysis revealed some limitations concerning semantic ambiguity and lack of expressiveness for representing service offerings (and types thereof) and service agreements in SoEA. In order to address these limitations, three service modeling patterns (the service offering type pattern, the service offering pattern, and the service agreement pattern) were proposed on the basis of UFO-S. The usefulness of these patterns in addressing these limitations was evidenced by means of an empirical evaluation. Finally, we can say that, beyond offering a broad and well-founded theoretical foundation for service able to harmonize service perspectives, UFO-S proved beneficial as a reference model in the analysis of SoEA structuring principles and in the (re)design of service modeling languages.
- Sistema de rastreamento visual de objetos baseado em movimentos oculares sacádicos (Universidade Federal do Espírito Santo, 2015-04-09)
  Andrade, Mariella Berger; Santos, Thiago Oliveira dos; Souza, Alberto Ferreira de; Gonçalves, Claudine Santos Badue; Aguiar, Edilson de; Salles, Evandro; França, Felipe Maia Galvão
  Visual search is the mechanism that scans the visual field in order to find an object of interest. The brain region responsible for performing visual search, carried out through saccadic eye movements, is the Superior Colliculus. A biologically inspired computer system for visual search needs to model the saccadic eye movement, the transformation undergone by the images captured by the eyes on the way from the retina to the Superior Colliculus, and the response of the neurons of the Superior Colliculus to patterns of interest in the visual scene. In this work, we present a biologically inspired long-term object tracking system based on Virtual Generalizing Random Access Memory (VG-RAM) Weightless Neural Networks (WNN). VG-RAM WNN is an effective machine learning technique that offers simple implementation and fast training. Our system models the biological saccadic eye movement, the transformation undergone by the images captured by the eyes from the retina to the Superior Colliculus (SC), and the response of SC neurons to previously seen patterns. We evaluated the performance of our system using a well-known visual tracking database. Our experimental results show that our approach is capable of reliably and efficiently tracking an object of interest in a video, with accuracy equivalent or superior to related work.
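  VG-RAM WNN training and recall amount to memorizing binary input-output pairs and answering with the output of the nearest stored pattern in Hamming distance, which is why training is fast. The toy sketch below illustrates a single neuron under that reading; it is a simplification for illustration, not the thesis implementation:

      # Minimal sketch of one VG-RAM WNN neuron (hypothetical toy example):
      # training stores binary input-output pairs; testing returns the output
      # of the stored pair nearest in Hamming distance.
      import numpy as np

      class VGRAMNeuron:
          def __init__(self):
              self.memory = []  # list of (binary input vector, label) pairs

          def train(self, x, label):
              # Training is just memorization: store the pair.
              self.memory.append((np.asarray(x, dtype=np.uint8), label))

          def predict(self, x):
              # Answer with the label of the stored input closest in Hamming distance.
              x = np.asarray(x, dtype=np.uint8)
              dists = [np.count_nonzero(xs != x) for xs, _ in self.memory]
              return self.memory[int(np.argmin(dists))][1]

      neuron = VGRAMNeuron()
      neuron.train([0, 1, 1, 0, 1], "object")
      neuron.train([1, 0, 0, 1, 0], "background")
      print(neuron.predict([0, 1, 1, 0, 0]))  # -> "object" (Hamming distance 1)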
- Foundations for multi-level ontology-based conceptual modeling (Universidade Federal do Espírito Santo, 2016-12-16)
  Carvalho, Victorio Albani de; Guizzardi, Giancarlo; Almeida, João Paulo Andrade; Falbo, Ricardo de Almeida; Souza, Vitor Estêvão Silva; Atkinson, Colin; Parreiras, Fernando Silva
  Considering that conceptual models are produced with the aim of representing certain aspects of the physical and social world according to a specific conceptualization, and that ontologies aim at describing conceptualizations, there has been growing interest in the use of ontologies to provide a sound theoretical basis for the discipline of conceptual modeling. This has given rise to a research area called ontology-based conceptual modeling, with significant advances to conceptual modeling in the last decades. Despite these advances, ontology-based conceptual modeling still lacks proper support for subject domains that require not only the representation of categories of individuals but also the representation of categories of categories (or types of types). The representation of entities of multiple (related) classification "levels" has been the focus of a separate research area under the banner of multi-level modeling, which aims to address the limitations of the conventional two-level modeling paradigm. Despite the relevant contributions of multi-level modeling and ontology-based conceptual modeling, their combination has not yet received due attention. This work explores this gap by proposing the use of formal theories for multi-level modeling in combination with foundational ontologies to support what we call multi-level ontology-based conceptual modeling. To provide a well-founded approach to multi-level conceptual modeling, we develop a theory called MLT that formally characterizes the nature of classification levels and precisely defines the relations that may occur between elements of different classification levels. In order to extend the benefits of using a foundational ontology to domains dealing with multiple classification levels, we combine the proposed multi-level modeling theory with a foundational ontology. This combination results in a hierarchical modeling approach that supports the construction of multi-level conceptual models across a spectrum of levels of specificity, from foundational ontologies to domain models. To demonstrate the applicability of our multi-level ontology-based conceptual modeling approach, we employ it to develop a core ontology for organizational structure, a domain that spans multiple classification levels. Further, we show how MLT can be used as a reference theory to clarify the semantics and enhance the expressiveness of UML with respect to the representation of multi-level models. The resulting UML profile enables the practical application of MLT.
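  The "types of types" contrast above has a familiar programming-language analogue: in Python, classes are themselves instances of metaclasses, giving three classification levels in a few lines. The sketch below only illustrates the idea of classification levels; it does not capture MLT's formal relations, and all names are hypothetical:

      # Illustrative only: Python metaclasses as an analogue of multiple
      # classification levels (not a rendering of the MLT theory itself).
      class TreeSpecies(type):
          """A second-order type: its instances are first-order types (species)."""
          pass

      class Cedar(metaclass=TreeSpecies):
          """A first-order type: its instances are individual trees."""
          pass

      my_tree = Cedar()                      # an individual (level 0)
      print(isinstance(my_tree, Cedar))      # True: level-0 instance of a level-1 type
      print(isinstance(Cedar, TreeSpecies))  # True: level-1 instance of a level-2 type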
- Development of an entropy-based swarm algorithm for continuous dynamic constrained optimization (Universidade Federal do Espírito Santo, 2017-05-08)
  Campos, Mauro Cesar Martins; Krohling, Renato Antonio; Gonçalves, Claudine Santos Badue; Zambon, Eduardo; Barbosa, Hélio José Corrêa; Tinós, Renato
  Dynamic constrained optimization problems form a class of problems where the objective function or the constraints can change over time. In static optimization, finding a global optimum is considered the main goal. In dynamic optimization, the goal is not only to find an optimal solution but also to track its trajectory as closely as possible over time. Changes in the environment must be taken into account during the optimization process, so these problems must be solved online. Many real-world problems can be formulated within this framework. This thesis proposes an entropy-based bare bones particle swarm for solving dynamic constrained optimization problems. Shannon's entropy is established as a phenotypic diversity index, and the proposed algorithm uses Shannon's index of diversity to aggregate the global-best and local-best bare bones particle swarm variants. The proposed approach applies the idea of mixing search directions by using the index of diversity as a factor to balance the influence of the global-best and local-best search directions. High diversity promotes the search guided by the global-best solution, with a normal distribution for exploitation. Low diversity promotes the search guided by the local-best solution, with a heavy-tailed distribution for exploration. A constraint-handling strategy is also proposed, which uses a ranking method with selection based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to obtain the best solution within a specific population of candidate solutions. Mechanisms to detect changes in the environment and to update the particles' memories are also implemented in the proposed algorithm. All these strategies do not act independently; they operate in relation to each other to tackle problems such as diversity loss due to convergence and outdated memories due to changes in the environment. The combined effect of these strategies provides an algorithm with the ability to maintain a proper balance between exploration and exploitation at any stage of the search process, without losing the ability to track an optimal solution that changes over time. An empirical study was carried out to evaluate the performance of the proposed approach. Experimental results show the suitability of the algorithm in terms of effectiveness in finding good solutions for the benchmark problems investigated. Finally, an application is developed in which the proposed algorithm is applied to solve the dynamic economic dispatch problem in power systems.
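  The aggregation mechanism described above can be sketched compactly: a normalized Shannon index computed over some phenotypic partition of the swarm decides, per particle, whether to sample around the global best with a normal distribution or around the local best with a heavy-tailed one. The sketch below uses a Cauchy as the heavy-tailed choice and bare-bones-style means and spreads; these are assumptions for illustration, not the thesis' exact operators:

      import numpy as np

      rng = np.random.default_rng(0)

      def shannon_diversity(labels):
          # Normalized Shannon entropy of a phenotypic partition of the swarm.
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          h = -(p * np.log(p)).sum()
          return h / np.log(len(counts)) if len(counts) > 1 else 0.0

      def sample_position(pbest_i, gbest, lbest_i, diversity):
          # High diversity: bare-bones Gaussian around personal/global best (exploit).
          # Low diversity: heavy-tailed (Cauchy) around personal/local best (explore).
          if rng.random() < diversity:
              mu, sigma = (pbest_i + gbest) / 2.0, np.abs(pbest_i - gbest)
              return rng.normal(mu, sigma)
          mu, gamma = (pbest_i + lbest_i) / 2.0, np.abs(pbest_i - lbest_i)
          return mu + gamma * rng.standard_cauchy(size=mu.shape)

      pbest = np.array([0.2, 1.0]); gbest = np.array([0.0, 0.8]); lbest = np.array([0.5, 1.1])
      print(sample_position(pbest, gbest, lbest, diversity=0.9))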
- Classifier ensemble feature selection for automatic fault diagnosis (Universidade Federal do Espírito Santo, 2017-07-14)
  Boldt, Francisco de Assis; Varejão, Flávio Miguel; Rauber, Thomas Walter; Salles, Evandro Ottoni Teatini; Carvalho, André Carlos Ponce de Leon Ferreira de; Santos, Thiago Oliveira dos; Conci, Aura
  An efficient ensemble feature selection scheme applied to fault diagnosis is proposed, based on three hypotheses: (a) a fault diagnosis system does not need to be restricted to a single feature extraction model; on the contrary, it should use as many feature models as possible, since the extracted features are potentially discriminative and the pooled feature set is subsequently reduced by feature selection; (b) the feature selection process can be accelerated, without loss of classification performance, by combining feature selection methods so that faster and weaker methods discard the potentially non-discriminative features, sending a filtered, smaller feature set to slower and stronger methods; (c) the optimal feature set for a multi-class problem may be different for each pair of classes, so feature selection should be done using a one-versus-one scheme, even when multi-class classifiers are used. However, since the number of classifiers grows exponentially with the number of classes, expensive techniques like Error-Correcting Output Codes (ECOC) might have a prohibitive computational cost for large datasets; thus, a fast one-versus-one approach must be used to alleviate this computational demand. These three hypotheses are corroborated by experiments (a sketch of hypotheses (b) and (c) follows below). The main hypothesis of this work is that using these three approaches together makes it possible to significantly improve the classification performance of a classifier identifying conditions in industrial processes. Experiments have shown such an improvement for the 1-NN classifier in the industrial processes used as case studies.
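  A minimal sketch of hypotheses (b) and (c), as promised above: a fast, weak filter prunes the pooled features, a slower, stronger selector (a simple stand-in here) refines the survivors, and the whole pipeline runs once per pair of classes. The function names and the Fisher-style score are illustrative assumptions, not the thesis' methods:

      import numpy as np
      from itertools import combinations

      def fast_filter(X, y, keep):
          # Weak-but-fast stage: rank features by a Fisher-style score between
          # the two classes (labeled 0/1) and keep the `keep` best candidates.
          c0, c1 = X[y == 0], X[y == 1]
          score = (c0.mean(0) - c1.mean(0)) ** 2 / (c0.var(0) + c1.var(0) + 1e-12)
          return np.argsort(score)[::-1][:keep]

      def strong_selector(X, y, candidates):
          # Stand-in for the slower, stronger stage (e.g., a wrapper search);
          # here it just keeps the top half of the pre-filtered candidates.
          return candidates[: max(1, len(candidates) // 2)]

      def one_vs_one_selection(X, y, keep=50):
          # Hypothesis (c): select a (possibly different) feature set per class pair.
          selected = {}
          for a, b in combinations(np.unique(y), 2):
              mask = np.isin(y, [a, b])
              Xp, yp = X[mask], (y[mask] == b).astype(int)
              cand = fast_filter(Xp, yp, keep)       # hypothesis (b): prune fast
              selected[(a, b)] = strong_selector(Xp, yp, cand)
          return selected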
- An ontology-based process for domain-specific visual language design (Universidade Federal do Espírito Santo, 2017-08-17)
  Teixeira, Maria das Graças da Silva; Falbo, Ricardo de Almeida; Gailly, Frederik; Guizzardi, Giancarlo; Almeida, João Paulo Andrade; Campos, Maria Luiza Machado; Poels, Geert; Looy, Amy Van
  In the domain of conceptual modeling, increasing attention has been paid to visual domain-specific modeling languages and to how these languages can support the representation of a given domain for different stakeholders. Consequently, there is a real need for guidelines to follow when developing such domain-specific modeling languages. Existing research provides a number of guidelines, but they mostly focus on the abstract syntax of these languages and not on their visual aspects (the concrete syntax). Yet guidelines specifically for the development of the concrete syntax are sorely needed, since the concrete syntax has a significant impact on the efficiency of communication and on the problem-solving properties of the conceptual models developed with these languages. The most widely used theory for evaluating the concrete syntax of a visual modeling language is the Physics of Notations (PoN). PoN defines a set of principles that can be used for the analysis and design of a cognitively effective visual notation for a modeling language. PoN, however, has some shortcomings: (i) it does not include a method indicating how the principles should be applied, and (ii) it does not help in designing symbols that match the domain. In this PhD project, the Physics of Notations Systematized (PoN-S) is developed and proposed as a solution to the first shortcoming of PoN. PoN-S provides a sequential set of activities and indicates, for each activity, which principle should be applied. Moreover, it groups the principles in a way intended to help the user apply them. The second shortcoming is addressed in this PhD project by making use of foundational ontologies. Foundational ontologies are used to improve the quality of the abstract syntax of a modeling language as well as to directly improve the conceptual model. In this doctorate, the research of Guizzardi (2013), more specifically the research on UFO-based ontological guidelines, is combined with the previously developed improvement of PoN. This results in the Physics of Notations Ontologized and Systematized (PoNTO-S), a systematic development process for the concrete syntax of visual modeling languages that also takes into account the ontological meaning of the abstract syntax. The research carried out in this PhD project corresponds to a Design Science project with several iterations resulting in several Design Science artifacts, which were also evaluated. After the development of PoN-S and PoNTO-S, one laboratory experiment was conducted, and the artifacts were also partly evaluated through two case studies. These studies show that PoN-S and PoNTO-S are useful during the development of visual domain-specific modeling languages.
- Aplicando crowdsourcing na sincronização de vídeos gerados por usuários (Universidade Federal do Espírito Santo, 2017-10-30)
  Costa Segundo, Ricardo Mendes; Santos, Celso Alberto Saibel; Pereira Filho, José Gonçalves; Guimarães, Rodrigo Laiola; Souza Filho, Guido Lemos de; Willrich, Roberto
  Crowdsourcing is a problem-solving strategy based on collecting contributions with partial results from individuals and aggregating them into a solution to a larger problem. Based on this strategy, this thesis shows how the crowd can synchronize a set of user-generated videos correlated to an event. Each user captures the event from their own viewpoint and subject to their own limitations. In this scenario, it is not possible to ensure that all generated content has homogeneous characteristics (starting time and duration, resolution, quality, etc.), hindering the use of a purely automatic synchronization process. Additionally, user-generated videos are distributed across several independent content servers. The assumption of this thesis is that the adaptability of human intelligence can be used to synchronize a group of videos produced in an uncoordinated and distributed manner. To test this hypothesis, the following steps were executed: (i) the development of a synchronization method for multiple videos from independent sources; (ii) the execution of a systematic mapping study on the use of crowdsourcing for video processing; (iii) the development of techniques for using the crowd to synchronize videos; (iv) the development of a functional model for building synchronization applications using crowdsourcing, which can be extended to general video applications; and (v) the execution of experiments to show the ability of the crowd to perform the synchronization. The results show that the crowd can participate in the synchronization process and that several factors can influence the accuracy of the results obtained.
- Appearance-based global localization with a hybrid weightless-weighted neural network approach (Universidade Federal do Espírito Santo, 2018-02-02)
  Silva, Avelino Forechi; Santos, Thiago Oliveira dos; Souza, Alberto Ferreira de; Oliveira, Elias Silva de; Gonçalves, Claudine Santos Badue; Aguiar, Edilson de; Ciarelli, Patrick Marques
  Currently, self-driving cars rely heavily on the Global Positioning System (GPS) infrastructure, albeit there is an increasing demand for alternative global localization methods in GPS-denied environments. One of them is known as appearance-based global localization, which associates images of places with their corresponding positions. This is very appealing given the great number of publicly available geotagged photos and today's ubiquitous devices fitted with ultra-high-resolution cameras, motion sensors, and multicore processors. Appearance-based global localization can be devised as a topological or a metric solution, depending on whether it is modeled as a classification or a regression problem, respectively. Common topological approaches to the global localization problem often involve solutions in the spatial dimension, and less frequently in the temporal dimension, but not both simultaneously. We proposed an integrated spatio-temporal solution based on an ensemble of kNN classifiers, where each classifier uses Dynamic Time Warping (DTW) and the Hamming distance to compare binary features extracted from sequences of images. Each base learner is fed with its own binary set of features extracted from the images. The solution was designed to solve the global localization problem in two phases: mapping and localization. During mapping, it is trained with a sequence of images and associated locations that represent episodes experienced by a robot. During localization, it receives subsequences of images of the same environment and compares them to previously experienced episodes, trying to recollect the most similar "experience" in time and space at once. The system then outputs the positions where it "believes" these images were captured. Although the method is fast to train, it scales linearly with the number of training samples when computing the Hamming distance against the test samples. Often, while building a map, one collects highly correlated and redundant data around the environment of interest, whether because of high-frequency sensors or repeated trajectories. This extra data would impose an undesired burden on memory and runtime performance during testing if not treated appropriately during the mapping phase. To tackle this problem, a clustering algorithm is employed to compress the network's memory after mapping. For large-scale environments, the clustering algorithm is combined with a multi-hashing data structure, seeking the best compromise between classification accuracy, runtime performance, and memory usage. So far, this encompasses solely the topological part of the solution to the global localization problem, which is not precise enough for autonomous car operation. Instead of just recognizing places and outputting an associated pose, a global localization system should regress a pose given a current image of a place. But inferring poses for city-scale scenes is unfeasible, at least at decimetric precision.
  The proposed approach tackles this problem as follows: first, take a live image from the camera and use the localization system described above to return the image-pose pair most similar to a topological database built beforehand in the mapping phase; then, given the live and mapped images, a visual localization system outputs the relative pose between those images. To solve the relative camera pose estimation problem, a Convolutional Neural Network (CNN) is trained to take as input two images separated in time and space and to output a 6 Degrees of Freedom (DoF) pose vector representing the relative position and orientation between the input images. Together, both systems solve the global localization problem using topological and metric information to approximate the actual robot pose. The proposed hybrid weightless-weighted neural network approach naturally combines the two systems, in that the output of one is the input to the other, producing competitive results for the global localization task. The full approach is compared against a Real-Time Kinematic GPS system and a Visual Simultaneous Localization and Mapping (SLAM) system. Experimental results show that the proposed combined approach is able to correctly localize an autonomous vehicle globally 90% of the time, with a mean error of 1.20 m, compared to 1.12 m for the Visual SLAM system and 0.37 m for the GPS, 89% of the time.
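  As a rough illustration of the two-stage idea described above (topological retrieval followed by metric refinement), the sketch below retrieves the nearest mapped image by Hamming distance and composes its stored pose with a relative pose. It simplifies to planar poses instead of the 6-DoF vectors the thesis regresses, and the function names are hypothetical:

      import numpy as np

      def hamming_retrieve(query_bits, map_bits):
          # Stage 1 (topological): index of the mapped image whose binary
          # feature vector is nearest to the live image in Hamming distance.
          return int(np.argmin((map_bits != query_bits).sum(axis=1)))

      def compose_pose(map_pose, rel_pose):
          # Stage 2 (metric): apply the estimated relative pose (dx, dy, dyaw),
          # expressed in the mapped image's frame, to the mapped pose (x, y, yaw).
          x, y, yaw = map_pose
          dx, dy, dyaw = rel_pose
          c, s = np.cos(yaw), np.sin(yaw)
          return (x + c * dx - s * dy, y + s * dx + c * dy, yaw + dyaw)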
- UFO-L: uma ontologia núcleo de aspectos jurídicos construída sob a perspectiva das relações jurídicas (Universidade Federal do Espírito Santo, 2018-02-06)
  Beccalli, Cristine Leonor Pereira Griffo; Almeida, João Paulo Andrade; Guizzardi, Giancarlo; Guizzardi, Renata Silva Souza; Lima, João Alberto de Oliveira; Brasil Júnior, Samuel Meira; Rover, Aires José; Freitas, Frederico Luiz Gonçalves de
  In the past decades, Law has turned to Computing in search of solutions for representing the legal domain, storing large volumes of information, and retrieving this information to generate knowledge to support decision-making. Among the several solutions proposed for representing the legal domain, we highlight legal ontologies, which propose the representation of a shared conceptualization of legal concepts and their relations. Legal ontologies that represent generic legal concepts that can be used and reused in the construction of other ontologies or in legal modeling languages are called Legal Core Ontologies (LCOs). Most LCOs focus on legal norms. In this research, however, we opted for a different approach, namely basing the construction of our Legal Core Ontology on legal relations. Although both approaches bring benefits, the advantage of the latter is the possibility of making explicit concepts and relations that are not evidenced in the former, in particular the perspective of legal relations as relations between agents who play legal roles and stand in legal positions. In this context, the problem to be addressed lies in the gap between Computing and Law, i.e., in the problem of conceptual modeling applied to carving up legal reality and how it is represented. The theoretical basis of this thesis is composed of two theories: Robert Alexy's Theory of Constitutional Rights and the Theory of Ontological Foundations for Structural Conceptual Models proposed by Giancarlo Guizzardi. The result of this investigation is an artifact called UFO-L and its catalog of modeling patterns, applied in ontological analyses, in the modeling of legal domains, and in the construction of visual modeling languages for the legal domain.
- Nonlinear multiscale viscosity methods and time integration schemes for solving compressible Euler equations (Universidade Federal do Espírito Santo, 2018-06-29)
  Bento, Sérgio Souza; Santos, Isaac Pinheiro dos; Catabriga, Lucia; Almeida, Regina Célia Cerqueira de; Malta, Sandra Mara Cardoso; Boeres, Maria Claudia Silva; Valli, Andrea Maria Pedrosa
  In this work we present nonlinear multiscale finite element methods for solving compressible Euler equations. The formulations are based on the strategy of scale separation, the core of the variational multiscale (finite element) methodology. The subgrid scale space is defined using bubble functions that vanish on the boundary of the elements, allowing the use of a local Schur complement to define the resolved-scale problem. The resulting numerical procedure allows the fine scales to depend on time. The formulations proposed in this work are residual-based and consider different ways for the artificial viscosity to act on the scales of the discretization: in the first formulation, a nonlinear operator is added on all scales, whereas in the second, different nonlinear operators are included on the macro and micro scales. We evaluate the efficiency of the formulations through numerical studies, comparing them with the SUPG methodology combined with the YZβ shock-capturing operator and with the CAU methodology. Another contribution of this work concerns the time integration procedure. Density-based schemes suffer from undesirable effects at low-speed flows, including a low convergence rate and loss of accuracy. Because of this phenomenon, local preconditioning is applied to the set of equations in the continuous case. Another alternative to remedy this deficiency is to use time integration methods with the stiff decay property. For this purpose, we propose a predictor-corrector method based on Backward Differentiation Formulas (BDF) that is not defined in the traditional sense found in the literature, i.e., using a predictor based on extrapolation.
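  For reference, the classical second-order member of the BDF family mentioned above is shown below. This is textbook background for y' = f(t, y) with step size Δt, not the thesis' modified predictor-corrector, which departs from the traditional extrapolation-based predictor:

      % Standard BDF2 corrector (textbook form), for reference only.
      \[
        y_{n+2} - \tfrac{4}{3}\, y_{n+1} + \tfrac{1}{3}\, y_{n}
        = \tfrac{2}{3}\, \Delta t \, f\!\left(t_{n+2},\, y_{n+2}\right)
      \]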
- An alternative approach of parallel preconditioning for 2D finite element problems (Universidade Federal do Espírito Santo, 2018-06-29)
  Lima, Leonardo Muniz de; Catabriga, Lucia; Almeida, Regina Célia Cerqueira de; Santos, Isaac Pinheiro dos; Souza, Alberto Ferreira de; Elias, Renato Nascimento
  We propose an alternative approach to parallel preconditioning for 2D finite element problems. The technique consists of a proper domain decomposition with reordering that produces narrow-band linear systems from the finite element discretization, allowing traditional preconditioners such as Incomplete LU factorization (ILU), or even sophisticated parallel preconditioners such as SPIKE, to be applied without significant effort. Another feature of this approach is the ease of recalculating the finite element matrices, whether for nonlinear corrections or for time integration schemes. This means the finite element application is performed in parallel throughout, not just when solving the linear system. We also employ preconditioners based on element-by-element storage with minimal adjustments. The robustness and scalability of these parallel preconditioning strategies are demonstrated on a set of benchmark experiments. We consider a group of two-dimensional fluid flow problems modeled by the transport and Euler equations to evaluate ILU, SPIKE, and some element-by-element preconditioners. Moreover, our approach provides load balancing and improved MPI communications. We study the load balancing and MPI communications with analysis tools such as TAU (Tuning and Analysis Utilities).
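  The bandwidth-reduction step can be illustrated with a standard reordering. The sketch below applies Reverse Cuthill-McKee (via SciPy) to a toy symmetric sparsity pattern standing in for a 2D finite element matrix; the thesis couples its reordering with its own domain decomposition, so this is only a generic analogue:

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      # Tiny symmetric sparsity pattern standing in for a stiffness matrix.
      A = csr_matrix(np.array([[4, 0, 0, 1, 0],
                               [0, 4, 1, 0, 1],
                               [0, 1, 4, 0, 0],
                               [1, 0, 0, 4, 1],
                               [0, 1, 0, 1, 4]], dtype=float))

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      B = A[perm, :][:, perm]   # permuted system: narrower band, ILU/SPIKE-friendly

      def bandwidth(M):
          i, j = M.nonzero()
          return int(np.abs(i - j).max())

      print(bandwidth(A), "->", bandwidth(B))  # bandwidth before and after reordering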
- Indexação multidimensional para problemas da mochila multiobjetivo com paretos de alta cardinalidade (Universidade Federal do Espírito Santo, 2018-07-31)
  Baroni, Marcos Daniel Valadão; Varejão, Flávio Miguel; Rodrigues, Alexandre Loureiros; Martins, Simone de Lima; Rauber, Thomas Walter; Boeres, Maria Claudia Silva
  Several real problems involve the simultaneous optimization of multiple criteria that generally conflict with each other. These problems are called multiobjective and do not have a single solution, but rather a set of solutions of interest, called efficient or non-dominated solutions. One of the great challenges in solving this type of problem is the size of the solution set, which tends to grow rapidly with the instance size, degrading algorithm performance. Among the most studied multiobjective problems is the multiobjective knapsack problem, with which several real problems can be modeled. This work proposes accelerating the resolution of the multiobjective knapsack problem through the use of a k-d tree as a multidimensional index structure to assist the manipulation of solutions. The performance of the approach is analyzed through computational experiments performed in the exact context using a state-of-the-art algorithm. Tests are also performed in the heuristic context, using an adaptation of a metaheuristic to the problem in question, which is also a contribution of the present work. According to the results, the proposal was effective in the exact context, presenting a speedup of up to 2.3 for bi-objective cases and 15.5 for 3-objective cases, but not effective in the heuristic context, having little impact on computational time. In all cases, however, there was a considerable reduction in the number of solution evaluations.
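  The operation a multidimensional index accelerates here is the dominance query against the archive of non-dominated solutions. The sketch below shows that operation in its naive linear-scan form (maximization assumed, hypothetical data); the thesis replaces the scan with a k-d tree so most comparisons are pruned away:

      def dominates(u, v):
          # u dominates v (maximization): no worse in every objective, better in one.
          return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

      def update_archive(archive, candidate):
          # Naive O(|archive|) update; a k-d tree over the archive prunes
          # most of these comparisons in the thesis' approach.
          if any(dominates(s, candidate) for s in archive):
              return archive                      # candidate is dominated: discard
          return [s for s in archive if not dominates(candidate, s)] + [candidate]

      archive = []
      for point in [(3, 5, 2), (4, 4, 4), (3, 5, 1), (5, 5, 5)]:
          archive = update_archive(archive, point)
      print(archive)   # [(5, 5, 5)]: the last point dominates all the others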
- Meta-heurísticas para resolução de alguns problemas de planejamento e controle da produção (Universidade Federal do Espírito Santo, 2018-08-03)
  Bissoli, Dayan de Castro; Amaral, André Renato Sales; Moraes, Renato Elias Nunes de; Mauri, Geraldo Regis; Lorenzoni, Luciano Lessa; Sousa, Jorge Pinho de
  This study addresses the resolution of three different problems widely encountered in the real context of production planning and control. Initially, a GRASP metaheuristic is proposed to solve an assembly-line balancing problem (SALBP-2). The proposed method presented results competitive with the literature, while remaining simple enough to be applied in real cases. Subsequently, the same method was used to solve the Job Shop Scheduling Problem (JSP). The GRASP developed for the JSP also presented good results, with a low average relative deviation from the best solutions known in the literature. Next, we approached an extension of the JSP, the Flexible Job Shop Scheduling Problem (FJSP). The JSP is limited to the sequencing of operations on fixed machines, whereas in the FJSP the assignment of an operation is not preset, so it can be processed on any of a set of alternative machines. The FJSP is therefore not restricted to sequencing; it extends to the assignment of operations to the appropriate machines (routing), and is more complex than the JSP because it also involves determining the machine assignment for each operation. To solve the FJSP, we proposed four metaheuristics: GRASP, Simulated Annealing (SA), Iterated Local Search (ILS), and Clustering Search (CS). SA alone presented inferior results; however, incorporating it into a hybrid version of ILS, which uses it as a local search, improved the results, especially on more complex instances. Given the hybrid character of CS, SA was also used there, in this case as the solution-generating metaheuristic; this approach also presented results superior to SA alone. Both ILS and CS generated results with values equal to or close to the best known solutions for an extensive set of FJSP instances, as well as providing some new best known values.
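  Since GRASP recurs in all three studies, a generic skeleton may help fix ideas: greedy randomized construction (whose greediness is governed by the usual alpha parameter of the restricted candidate list) followed by local search, repeated for a number of iterations. The problem-specific pieces (construct, local_search, cost) are injected; this is a hedged sketch, not the thesis code:

      import random

      def grasp(construct, local_search, cost, iterations=100, alpha=0.3, seed=0):
          # Generic GRASP loop: each iteration builds a greedy randomized
          # solution (alpha controls the restricted candidate list) and then
          # improves it with local search, keeping the best solution found.
          rng = random.Random(seed)
          best = None
          for _ in range(iterations):
              solution = local_search(construct(rng, alpha))
              if best is None or cost(solution) < cost(best):
                  best = solution
          return best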
- RDNA: arquitetura definida por resíduos para redes de data centers (Universidade Federal do Espírito Santo, 2018-08-24)
  Liberato, Alextian Bartholomeu; Ribeiro, Moisés Renato Nunes; Martinello, Magnos; Rothenberg, Christian Rodolfo Esteve; Sampaio, Leobino Nascimento; Mota, Vinícius Fernandes Soares; Villaça, Rodolfo da Silva
  Recently, we have seen the increasing use of information and communication technologies. Institutions and users simply require high-quality connectivity to their data, expecting instant access anytime, anywhere. An essential element for providing quality connectivity is the architecture of the communication network in Data Center Networks (DCNs), because a significant part of Internet traffic is based on data communication and processing that take place within the Data Center (DC) infrastructure. However, the routing protocols, forwarding model, and management currently in use prove insufficient to meet current demands for cloud connectivity, mainly because of the dependency on table lookup operations, which increases end-to-end latency. Besides, traditional recovery mechanisms use additional states in the switch tables, increasing the complexity of management routines and drastically reducing the scalability of route protection. Another difficulty is multicast communication within the DC: existing solutions are complex to implement and do not support group configuration at the currently required rates. In this context, this thesis explores the residue number system, centered on the Chinese remainder theorem (CRT), as a foundation for the design of a new routing system for DCNs. More specifically, we introduce the RDNA architecture, which advances the state of the art by simplifying the forwarding model in the core to the remainder of a division (modulo). In this sense, the route is defined as a residue between a route identifier and the local identifiers (prime numbers) assigned to the core switches. Edge switches receive inputs by configuring flows according to the network policy defined by the controller. Each flow is mapped at the edge to a primary and an emergency route identifier. The residue operations forward the packet through the respective output port. In failure situations, the emergency route identifier enables fast recovery by sending the packets through an alternate output port. RDNA is scalable, assuming a 2-tier Clos Network topology widely used in DCNs. To compare RDNA with other works in the literature, we analyzed scalability in terms of the number of bits required for unicast and multicast communication, varying the number of nodes in the network, the degree of the nodes, and the number of physical hosts for each topology. In unicast communication, RDNA reduced the header size by 4.5 times compared to the COXCast proposal. In multicast communication, a linear programming model was designed to minimize a polynomial function; RDNA reduced the header size by up to 50% for the same number of members per group. As proof of concept, two prototypes were implemented, one in the Mininet emulated environment and another on the NetFPGA SUME platform.
  The results showed that RDNA achieves deterministic latency in packet forwarding, 600 nanoseconds of switching time per core element, ultra-fast failure recovery on the order of microseconds, and no latency variation (no jitter) in the core network.
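  The modulo-forwarding idea is concrete enough to demonstrate in a few lines. In the sketch below (illustrative numbers, not from the thesis), each core switch owns a prime local identifier, and the route identifier is built with the Chinese remainder theorem so that taking the remainder at each switch yields that switch's output port:

      from math import prod

      def crt(residues, moduli):
          # Chinese remainder theorem: the unique R (mod prod(moduli)) with
          # R % m_i == r_i, for pairwise-coprime moduli. Uses the modular
          # inverse via pow(x, -1, m), available in Python 3.8+.
          M = prod(moduli)
          R = 0
          for r, m in zip(residues, moduli):
              Mi = M // m
              R += r * Mi * pow(Mi, -1, m)
          return R % M

      # Hypothetical path through three core switches with prime local IDs,
      # and the output port each switch must use for this flow.
      primes = [5, 7, 11]
      ports  = [2, 4, 6]
      route_id = crt(ports, primes)

      # Core forwarding is a single modulo per switch, with no table lookup:
      assert all(route_id % p == port for p, port in zip(primes, ports))
      print(route_id)   # one compact identifier carried in the packet header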
- Avaliação da aprendizagem em jogos digitais baseada em learning analytics sobre dados multimodais (Universidade Federal do Espírito Santo, 2018-09-21)
  Pereira Junior, Heraclito Amancio; Menezes, Crediné Silva de; Souza, Alberto Ferreira de; Castro Junior, Alberto Nogueira de; Queiroz, Sávio Silveira de; Tavares, Orivaldo de Lira; Cury, Davidson
  The use of digital games as a pedagogical tool has been successfully applied to developing the skills, abilities, and attitudes required of 21st-century professionals, in primary and secondary education as well as in vocational training. Despite this, one issue has worried educators who consider using digital games: how to assess learning with digital games? Assessment is an important part of the teaching-learning process. Its importance, especially with regard to learning based on computational resources, including digital games, led to the emergence of a research area called Learning Analytics, which "applies techniques and methods of Computer Science, Pedagogy, Sociology, Psychology, Neuroscience and Statistics to the analysis of data collected during educational processes". To better support these assessments, data collection has also considered multimodal data: data from the student's different manifestations during the learning process (touches, gestures, voice, and facial expressions), captured by sensors. Although publications indicate that methods, techniques, and tools to support learning assessment in computational learning environments have been researched, and that these studies have already obtained some results, they have not yet been sufficient to provide clear and comprehensive answers. In particular, with regard to digital games, there is still limited availability of consolidated resources for assessing student learning during play, which has been one of the major factors hindering broader educational use. This work contributes to the solution of this problem through: a computational platform, in the form of a framework, designed on the basis of the techniques and methods of Learning Analytics; a specialization of the Evidence-Centered Design (ECD) approach for designing learning assessments based on digital games; and a process that organizes the stages and activities of this type of assessment. Experiments reported here, using an instance of the framework, demonstrated the merit of the framework as an assessment tool, as well as that of the ECD specialization and the proposed process.
- CRF+LG: uma abordagem híbrida para o reconhecimento de entidades nomeadas em português (Universidade Federal do Espírito Santo, 2019-02-07)
  Pirovani, Juliana Pinheiro Campos; Oliveira, Elias Silva de; Laporte, Éric; Lima, Priscila Machado Vieira; Ciarelli, Patrick Marques; Gonçalves, Claudine Santos Badue
  Named Entity Recognition involves automatically identifying and classifying entities such as persons, places, and organizations, and is a very important task in Information Extraction. Named Entity Recognition systems can be developed using linguistic, machine learning, or hybrid approaches. This work proposes a hybrid approach, called CRF+LG, for Named Entity Recognition in Portuguese texts, in order to explore the advantages of both the linguistic and the machine learning approaches. The proposed approach uses Conditional Random Fields (CRF), considering the term classification obtained by a Local Grammar (LG) as an additional informed feature. Conditional Random Fields is a probabilistic method for structured prediction; local grammars are handmade rules for identifying expressions within text. The aim was to study this way of including human expertise (the Local Grammar) in the Conditional Random Fields machine learning approach and to analyze how it contributes to the approach's performance. To this end, a Local Grammar was built to recognize the 10 named entity categories of HAREM, a joint evaluation for Named Entity Recognition in Portuguese. Initially, the Golden Collections of the First and Second HAREM, considered a reference for Named Entity Recognition systems in Portuguese, were used as training and test sets, respectively, for evaluating CRF+LG. The proposed approach was then evaluated on two other datasets. The results obtained outperform the results of systems reported in the literature that were evaluated under equivalent conditions: a gain of approximately 8 percentage points in F-measure over a system that also used CRF, and of 2 points over a system that used Neural Networks. Some systems that used Neural Networks presented superior results, but they relied on massive corpora for unsupervised feature learning, which was not the case in this work. The Local Grammar built can be used on its own when no training set is available, and in conjunction with other machine learning techniques to improve their performance. We also analyzed the boundaries (lower bound and upper bound) of the proposed approach: the lower bound indicates the minimum performance, and the upper bound indicates the maximum gain that can be achieved for the task in question when using this approach.
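  One way to picture "the LG classification as an additional informed feature" is the CRFsuite-style feature function sketched below, where the tag suggested by the Local Grammar is added alongside ordinary lexical features. The feature names and the lg_tags input are hypothetical illustrations, not the thesis' actual feature set:

      def token_features(tokens, lg_tags, i):
          # Feature dict for token i; lg_tags[i] is the class suggested by the
          # Local Grammar (e.g., "PERSON" or "O"), injected as one extra
          # feature among the usual lexical ones.
          word = tokens[i]
          feats = {
              "word.lower": word.lower(),
              "word.istitle": word.istitle(),
              "suffix3": word[-3:],
              "lg.tag": lg_tags[i],            # the LG classification as a feature
          }
          if i > 0:
              feats["-1:word.lower"] = tokens[i - 1].lower()
              feats["-1:lg.tag"] = lg_tags[i - 1]
          else:
              feats["BOS"] = True
          return feats

      tokens  = ["Maria", "viajou", "para", "Lisboa"]
      lg_tags = ["PERSON", "O", "O", "PLACE"]
      X = [token_features(tokens, lg_tags, i) for i in range(len(tokens))]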
- Novas técnicas de amostragem tendenciosa para os algoritmos de análise de agrupamento k-médias e DBSCAN (Universidade Federal do Espírito Santo, 2019-03-28)
  Luchi, Diego; Varejão, Flávio Miguel; Carvalho, Alexandre Plastino de; Santos, Thiago Oliveira dos; Rodrigues, Alexandre Loureiros; Rauber, Thomas Walter
- Laura architecture: towards a simpler way of building situation-aware and business-aware final WSN/IOT applications (Universidade Federal do Espírito Santo, 2019-03-28)
  Teixeira, Sergio; Pereira Filho, José Gonçalves; Martinello, Magnos; Rosa, Pedro Frosi; Farias, Clever Ricardo Guareis de; Santos, Celso Alberto Saibel
  The explosion of smart objects has made companies rethink their Business Model (BM) using Wireless Sensor Networks (WSN) and the Internet of Things (IoT), aiming to improve their Business Processes (BP) and achieve competitiveness. Business environments are complex due to the wide variety of technologies, hardware, and software solutions that compose heterogeneous enterprise environments. On the other hand, putting real-world IoT scenarios into practice is still a challenge even for experienced developers, because it requires low-level programming skills and, at the same time, specific domain knowledge of a company's BM. This thesis proposes LAURA - Lean AUtomatic code generation for situation-aware and business-awaRe Applications, a flexible, service-oriented, and general open-source conceptual architecture designed to support the deployment of decoupled IoT applications. An empirical evaluation has shown that LAURA simplifies the development of final situation-aware or business-aware applications, reducing the need for specialized low-level IoT knowledge, while showing acceptable performance. LAURA also provides the freedom and independence to modify, adapt, or integrate its architecture according to the specific needs of stakeholders.
- Programmable, Expressive, and Agile Service Function Chaining for Edge Data Centers (Universidade Federal do Espírito Santo, 2019-08-23)
  Dominicini, Cristina Klippel; Martinello, Magnos; Mota, Vinicius Fernandes Soares; Pasquini, Rafael; Gaspary, Luciano Paschoal; Rothenberg, Christian Rodolfo Esteve
  The edge computing paradigm transfers processing power from large remote data centers (DCs) to distributed DCs at the edge of the network. This shift requires the ability to provide network functions virtualization (NFV) solutions that can effic
- Novel techniques for mapping and localization of self-driving cars using grid maps (Universidade Federal do Espírito Santo, 2019-09-02)
  Mutz, Filipe Wall; Souza, Alberto Ferreira de; Gonçalves, Claudine Santos Badue; França, Felipe Maia Galvão; Komati, Karin Satie; Santos, Thiago Oliveira dos
  This work proposes novel techniques for building grid maps of large-scale environments and for estimating the localization of self-driving cars in these maps. The mapping technique is employed to create occupancy, reflectivity, colour, and semantic grid maps. The localization is based on particle filters, and new methods for computing the particles' weights using semantic and colour information are presented. The deep neural network DeepLabv3+ is used for visual semantic segmentation of images captured by a camera. The estimation of the vehicle poses for mapping is modelled as a Simultaneous Localization and Mapping (SLAM) problem: the poses are obtained by using the GraphSLAM algorithm to fuse odometry and GPS data, and they are refined using loop-closure information. The optimized poses are used for building maps of the environment, and the self-driving car's localization is computed in relation to these maps. The mapping and localization techniques were evaluated in several complex, large-scale environments using a real self-driving car, the Intelligent and Autonomous Robotic Automobile (IARA). The impact of using different types of grid maps on localization accuracy, as well as the robustness of the localization to adverse operating conditions (e.g., variable illumination and intense traffic of vehicles and pedestrians), was evaluated quantitatively. As far as we know, the mapping and localization techniques, the methodology for producing the localization ground truth, and the evaluation of which type of grid map leads to more accurate localization are novelties.
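  As an illustration of weighting particles with semantic information, the toy sketch below scores a particle by the agreement between the semantic classes stored in the grid cells it observes and the labels produced by the segmentation network. This is a hypothetical minimal scheme, not the thesis' actual measurement model:

      import numpy as np

      def particle_weight(particle_cells, observed_labels, semantic_map, eps=1e-3):
          # Score a particle by how well the semantic classes stored in the
          # grid map cells it "sees" agree with the labels segmented from the
          # camera image; eps keeps weights strictly positive for resampling.
          map_labels = np.array([semantic_map[c] for c in particle_cells])
          agreement = (map_labels == observed_labels).mean()
          return agreement + eps

      # Toy map: cell -> semantic class id (e.g., 0=road, 1=building, 2=vegetation).
      semantic_map = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 2}
      w = particle_weight([(0, 0), (0, 1), (1, 1)], np.array([0, 1, 0]), semantic_map)
      print(w)   # 2 of 3 cells agree -> weight close to 0.667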