PhD Program in Computer Science (Doutorado em Ciência da Computação)
Browsing the PhD Program in Computer Science collection by subject "Alinhamento axiológico" (axiological alignment)
- Item: Magni calculi ratiocinatoris: a theoretical universal logical and axiological calculation framework for generative large language models (Universidade Federal do Espírito Santo, 2023-12-13). Brasil Junior, Samuel Meira; Varejão, Flavio Miguel; http://lattes.cnpq.br/6501574961643171; https://orcid.org/0000-0002-6478-4743; http://lattes.cnpq.br/1600831611942868; Rezende, Solange Oliveira; Boldt, Francisco de Assis; Santos, Thiago Oliveira dos; http://lattes.cnpq.br/5117339495064254; Rauber, Thomas Walter; https://orcid.org/0000-0002-6380-6584; http://lattes.cnpq.br/0462549482032704

This dissertation explores the idea of developing a theoretical universal framework as a calculus of thought, focusing on integrating value alignment into the logical reasoning of generative large language models. The research traces the historical search for a "calculus ratiocinator," a universal logical calculus underlying the "Characteristica Universalis," a universal language, and positions large language models as a contemporary manifestation of the latter. The study includes a conceptual review of foundation models' learning strategies, covering the pre-training of transformer-based LLMs, transfer learning, and in-context learning methodologies such as zero-shot and few-shot learning, chain-of-thought, tree-of-thoughts, self-consistency, and automatic prompt engineering. Building on this theoretical framework, the work presents a thorough literature review examining the logical reasoning abilities of large language models (LLMs) and evaluating their strengths and weaknesses, scalability, and the quality of the generated text. The research then focuses on fine-tuning a pre-trained LLM for value-aligned logical reasoning. It introduces Logical and Axiological Weights (LAW), a rank-based theory of weight adaptation that injects value-aligned logical reasoning weights into subnetworks of pre-trained models, utilizing parameter-efficient fine-tuning with LoRA/QLoRA.
The results of these methods are presented and discussed in detail. The work also explores prompting methodologies for value-aligned logical reasoning. These include Logic-of-Reasoning (LoR), an in-context learning approach that incorporates decomposition of the input into logical subproofs, tree-based and forest-based sequent calculi, and natural deduction prompts, as well as the Value-Aligned Logic-of-Reasoning (VALoR) prompt, which aligns the generated text with human values. The dissertation concludes with experimental results that validate the proposed methodologies. This research contributes to the field by offering a holistic approach to improving logical reasoning aligned with human values in generative large language models, and it provides a foundation for future work on human-aligned AI systems that can reason logically while upholding human values.
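The LAW method described in the abstract builds on LoRA-style low-rank weight adaptation. The following is a minimal, self-contained sketch of the underlying LoRA mechanism only; the dimensions, scaling factor, and single closed-form update step are illustrative assumptions, not the dissertation's actual training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions and LoRA rank, for illustration only.
d_out, d_in, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))   # frozen pre-trained weights

# LoRA adapters: B starts at zero so the adapted layer initially
# matches the frozen one; only A and B would be trained.
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))
alpha = 4.0                          # LoRA scaling hyperparameter
s = alpha / r

def adapted_forward(x):
    """Compute y = W x + (alpha / r) * B A x with W frozen."""
    return W @ x + s * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B == 0 the adapter is a no-op: the output equals the frozen layer's.
assert np.allclose(adapted_forward(x), W @ x)

# One exact least-squares update on B alone (W and A untouched),
# moving this input's output onto a target activation.
target = rng.normal(size=d_out)
h = A @ x
residual = adapted_forward(x) - target
B -= np.outer(residual, h) / (s * (h @ h))
```

The key property sketched here is that the frozen matrix `W` is never modified; all adaptation lives in the small factors `A` and `B`, which is what makes LoRA/QLoRA parameter-efficient.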
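The LoR/VALoR prompting described above combines stepwise logical decomposition with a value-alignment check, in-context. A hypothetical sketch of how such a prompt might be assembled follows; the template wording, the few-shot example, and the function name are assumptions for illustration, not the dissertation's actual prompt formats:

```python
# Illustrative few-shot example showing the decomposition-into-subproofs style.
FEW_SHOT_EXAMPLE = (
    "Premises: 1) All contracts require consent. 2) This agreement is a contract.\n"
    "Subproof: From (1) and (2), this agreement requires consent.\n"
    "Conclusion: This agreement requires consent.\n"
)

def build_valor_style_prompt(premises, question, values):
    """Compose a prompt asking for stepwise subproofs plus a value check."""
    lines = ["Reason step by step, proving each intermediate claim as a subproof.", ""]
    lines.append(FEW_SHOT_EXAMPLE)
    lines.append("Premises: " + " ".join(f"{i + 1}) {p}" for i, p in enumerate(premises)))
    lines.append(f"Question: {question}")
    # Value-alignment instruction appended after the logical task.
    lines.append("Before answering, check the conclusion against these values: "
                 + ", ".join(values) + ".")
    return "\n".join(lines)

prompt = build_valor_style_prompt(
    premises=["All users own their data.", "Alice is a user."],
    question="Does Alice own her data?",
    values=["privacy", "fairness"],
)
print(prompt)
```

The design choice sketched here is purely in-context: no weights change, and the value constraint is carried entirely by the prompt text, which is what distinguishes the prompting line of work from the LAW fine-tuning line.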