Informática
Permanent URI for this community
Graduate Program in Informatics (Programa de Pós-Graduação em Informática)
Center: CT
Phone: (27) 4009 2324 R*5126
Program URL: http://www.informatica.ufes.br/pt-br/pos-graduacao/PPGI
Browsing Informática by Author "Aguiar, Camila Zacché de"
Now showing 1 - 3 of 3
- Item: Concept maps mining for text summarization (Universidade Federal do Espírito Santo, 2017-03-31)
  Authors: Aguiar, Camila Zacché de; Zouaq, Amal; Cury, Davidson; Oliveira, Elias Silva de; Villavicencio, Aline; Menezes, Crediné Silva de
  Abstract: Concept maps are graphical tools for the representation and construction of knowledge. Concepts and relationships form the basis for learning and, therefore, concept maps have been extensively used in different situations and for different purposes in education, one of them being the representation of written text. Even a complex and grammatically difficult text can be represented by a concept map containing only the concepts and relationships that express its content in a simpler form. However, the manual construction of a concept map requires considerable time and effort to identify and structure knowledge, especially when the map should represent not the concepts of the author's cognitive structure but the concepts expressed in a text. Thus, several technological approaches have been proposed to facilitate the construction of concept maps from texts. This dissertation proposes a new approach to automatically build concept maps as a summarization of scientific texts. The summarization aims to produce a concept map as a condensed representation of the text while preserving its most important characteristics. Summarization facilitates the understanding of texts, since students must cope with the cognitive overload caused by the increasing amount of available textual information, an increase that can also be harmful to the construction of knowledge. We therefore hypothesized that summarizing a text as a concept map may help readers assimilate the knowledge in the text, while decreasing its complexity and the time needed to process it. In this context, we conducted a literature review covering the years 1994 to 2016 on approaches for the automatic construction of concept maps from texts. From it, we built a categorization to better identify and analyze the features and characteristics of these technological approaches, and we sought to identify their limitations and gather the best features of the related works to inform our approach. We also present a process for Concept Map Mining organized along four dimensions: Data Source Description, Domain Definition, Elements Identification, and Map Visualization. To develop a computational architecture that automatically builds concept maps as summaries of academic texts, this research produced the public tool CMBuilder, an online tool for the automatic construction of concept maps from texts, as well as a public Java API called ExtroutNLP, which contains libraries for information extraction and public services. To reach the proposed objective, we used methods from natural language processing and information retrieval. The main task is to extract propositions of the form (concept, relation, concept) from the text. Based on that, the research introduces a pipeline that comprises: grammar rules and depth-first search for extracting concepts and the relations between them from text; preposition mapping, anaphora resolution, and exploitation of named entities for concept labeling; concept ranking based on frequency and map topology; and summarization of propositions based on graph topology.
  Moreover, the approach proposes the use of machine learning techniques for clustering and classification, combined with a thesaurus, to define the text domain and build a conceptual vocabulary of the domain. Finally, an objective evaluation of the ExtroutNLP library yields a precision of 0.65 on the corpus. A qualitative evaluation of the concept maps built by the CMBuilder tool reaches 0.75/0.45 precision/recall for concepts and 0.57/0.23 for relationships in English, and 0.68/0.38 precision/recall for concepts and 0.41/0.19 for relationships in Portuguese. In addition, an experiment verifying whether the concept maps summarized by CMBuilder influence the understanding of the subject addressed in a text achieves 60% correct answers for maps extracted from short texts assessed with multiple-choice questions and 77% for maps extracted from long texts assessed with open-ended questions.
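To make the proposition-based pipeline in this abstract concrete, here is a minimal Java sketch of the core idea: propositions are (concept, relation, concept) triples, concepts are ranked by a simple frequency count (a stand-in for the frequency-and-topology ranking the abstract describes), and only propositions whose concepts rank highly are kept as the summary. The class names and example data are illustrative assumptions, not part of CMBuilder or ExtroutNLP.

```java
// Sketch only: hypothetical names, not the actual CMBuilder/ExtroutNLP API.
import java.util.*;
import java.util.stream.*;

public class PropositionSummarySketch {

    // A proposition of the form (concept, linking relation, concept).
    record Proposition(String source, String relation, String target) {}

    public static void main(String[] args) {
        List<Proposition> propositions = List.of(
            new Proposition("concept map", "represents", "knowledge"),
            new Proposition("concept map", "is built from", "text"),
            new Proposition("summarization", "produces", "concept map"),
            new Proposition("text", "contains", "propositions"),
            new Proposition("author", "writes", "text"));

        // Rank concepts by how often they appear in propositions
        // (a simplified proxy for frequency-plus-topology ranking).
        Map<String, Long> frequency = propositions.stream()
            .flatMap(p -> Stream.of(p.source(), p.target()))
            .collect(Collectors.groupingBy(c -> c, Collectors.counting()));

        // Keep only the top-ranked concepts.
        Set<String> topConcepts = frequency.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(4)
            .map(Map.Entry::getKey)
            .collect(Collectors.toSet());

        // The summary keeps only propositions whose two concepts are top-ranked.
        List<Proposition> summary = propositions.stream()
            .filter(p -> topConcepts.contains(p.source()) && topConcepts.contains(p.target()))
            .toList();

        summary.forEach(p ->
            System.out.printf("(%s) -[%s]-> (%s)%n", p.source(), p.relation(), p.target()));
    }
}
```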
- Item: Detecção e correção de inconsistências em mapas conceituais [Detection and correction of inconsistencies in concept maps] (Universidade Federal do Espírito Santo, 2018-09-18)
  Authors: Azeredo, Ramon Ahnert; Aguiar, Camila Zacché de; Gava, Tânia Barbosa Salles; Cury, Davidson; Magalhães, José Francisco de; Menezes, Crediné Silva de
  Abstract: Concept maps are graphical tools for knowledge organization and representation that hold a special place in educational environments, and tools have been built to support, ease, and extend their use. The purpose of this work is to build a tool that recognizes errors in concept maps and suggests fixes. To that end, a literature review was carried out to present the main concepts related to concept maps; then, concept maps from different groups were collected to identify the most common errors. After these two phases, the conceptual model of the tool is presented and implemented in order to run experiments. The results show that the most frequent errors involve linking phrases. Furthermore, the precision of the tool's features could be measured. This work also suggests that the proposed tool supports the construction of maps with a minimal formalism that are easier to interpret.
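As an illustration of the kind of check this abstract describes, the sketch below models a concept map as (concept, linking phrase, concept) triples and flags propositions whose linking phrase is missing or looks like a concept rather than a verb phrase. The heuristic, class names, and example data are assumptions for illustration only, not the actual tool.

```java
// Sketch only: a toy inconsistency check over linking phrases, not the tool described above.
import java.util.*;

public class LinkingPhraseCheckSketch {

    record Proposition(String source, String linkingPhrase, String target) {}

    // Rough heuristic (assumption): flag empty linking phrases and phrases that are a
    // single capitalized token, which often indicates a concept used as a linking phrase.
    static boolean isSuspicious(Proposition p) {
        String phrase = p.linkingPhrase() == null ? "" : p.linkingPhrase().trim();
        if (phrase.isEmpty()) return true;
        return phrase.split("\\s+").length == 1 && Character.isUpperCase(phrase.charAt(0));
    }

    public static void main(String[] args) {
        List<Proposition> map = List.of(
            new Proposition("Plant", "produces", "Oxygen"),
            new Proposition("Plant", "", "Photosynthesis"),     // missing linking phrase
            new Proposition("Water", "Molecule", "Hydrogen"));  // concept used as linking phrase

        for (Proposition p : map) {
            if (isSuspicious(p)) {
                System.out.printf("Possible inconsistency: (%s, \"%s\", %s)%n",
                    p.source(), p.linkingPhrase(), p.target());
            }
        }
    }
}
```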
- Item: INTEROPERABILIDADE SEMÂNTICA ENTRE CÓDIGOS FONTE BASEADA EM ONTOLOGIA [Ontology-based semantic interoperability between source code] (Universidade Federal do Espírito Santo, 2021-11-24)
  Authors (with ORCID and Lattes identifiers as listed): Aguiar, Camila Zacché de; Souza, Vitor Estevao Silva; https://orcid.org/0000000318695704; http://lattes.cnpq.br/2762374760685577; https://orcid.org/0000-0001-7945-6489; http://lattes.cnpq.br/1194248632540081; Garcia, Rogerio Eduardo; https://orcid.org/0000-0003-1248-528X; http://lattes.cnpq.br/8031012573259361; Amorim, Fernanda Araujo Baiao; https://orcid.org/0000-0001-7932-7134; http://lattes.cnpq.br/5068302552861597; Barcellos, Monalessa Perini; https://orcid.org/0000-0002-6225-9478; http://lattes.cnpq.br/8826584877205264; Guizzardi, Giancarlo; https://orcid.org/0000-0002-3452-553X; http://lattes.cnpq.br/5297252436860003
  Abstract: Source code is a well-formed sequence of computer instructions expressed in a programming language, composed of a set of symbols organized according to the language's syntax and semantics. The different representations of source code across programming languages create a heterogeneous context, as does the use of multiple programming languages in a single code base. This scenario prevents the direct exchange of information between source code written in different programming languages, requiring specialized knowledge of each language and a diversity of tools and practices. As a way to mitigate this heterogeneity, we apply semantic interoperability to ensure that shared information has its meaning understood and operationalized by code written in different programming languages. To do this, we adopt ontologies to ensure uniform interpretations that share a consistent common commitment to the source code domain. Besides acting as an interlanguage between different source codes, ontologies are widely accepted in the literature as tools for providing semantics and interoperability between entities of different natures. To apply ontologies to source code interoperability, this research presents a source code ontology network called SCON (Source Code Ontology Network) and an ontology-based method for source code interoperability called OSCIN (Ontology-based Source Code Interoperability). While SCON semantically represents common and consensual concepts of the source code domain, regardless of programming language, OSCIN applies this representation for different purposes in a unified way. The method is based on the source code subdomain to be represented, the programming language to be handled, and the application purpose to be served. To provide a set of solutions supporting the application of the OSCIN method to different source code subdomains, programming languages, and application purposes, this research presents OSCINF (Ontology-based Source Code Interoperability Framework), which generates the artifacts expected by the OSCIN method, and defines SABiOS (Systematic Approach for Building Ontologies with Scrum) for the construction of well-founded ontologies. Finally, this research evaluates source code interoperability by applying the OSCIN method to code smell detection, software metrics, and code migration across source code written in different programming languages.
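To illustrate the interoperability idea in this abstract, the sketch below maps elements extracted from Java and Python sources onto a small shared vocabulary of ontology concepts and then answers a language-independent query over it. The concept names, element names, and classes are illustrative assumptions and do not reflect the actual SCON ontology or the OSCIN/OSCINF APIs.

```java
// Sketch only: a toy "facts over a shared vocabulary" view of ontology-based interoperability.
import java.util.*;
import java.util.stream.*;

public class OntologyInteropSketch {

    // A fact asserting that a code element is an instance of a shared ontology concept.
    record Fact(String element, String concept, String language) {}

    public static void main(String[] args) {
        // Facts as they might be produced by language-specific extractors (hypothetical data).
        List<Fact> facts = List.of(
            new Fact("OrderService",        "Type",   "Java"),
            new Fact("OrderService.total",  "Method", "Java"),
            new Fact("order_service",       "Type",   "Python"),
            new Fact("order_service.total", "Method", "Python"));

        // A language-independent query over the shared vocabulary:
        // how many Method instances were extracted per language?
        Map<String, Long> methodsPerLanguage = facts.stream()
            .filter(f -> f.concept().equals("Method"))
            .collect(Collectors.groupingBy(Fact::language, Collectors.counting()));

        methodsPerLanguage.forEach((language, count) ->
            System.out.printf("%s: %d method(s) mapped to the shared vocabulary%n", language, count));
    }
}
```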