Magni calculi ratiocinatoris: a theoretical universal logical and axiological calculation framework for generative large language models

Date
2023-12-13
Authors
Brasil Junior, Samuel Meira
Publisher
Universidade Federal do Espírito Santo
Abstract
This dissertation explores the development of a theoretical universal framework as a calculus of thought, focusing on integrating value alignment into the logical reasoning of generative large language models. The research traces the historical search for a "calculus ratiocinator," a universal logical calculus underlying the "Characteristica Universalis," a universal language, and positions large language models as a contemporary manifestation of the latter. The study includes a conceptual review of foundation models' learning strategies, covering the pre-training of transformer-based LLMs, transfer learning, and in-context learning methodologies such as zero-shot and few-shot learning, chain-of-thought, tree-of-thoughts, self-consistency, and Automatic Prompt Engineer. Following the theoretical framework, the work presents a thorough literature review, examining the logical reasoning abilities of large language models (LLMs) and evaluating their strengths and weaknesses, scalability, and the quality of generated text. The research then focuses on fine-tuning a pre-trained LLM for value-aligned logical reasoning. It introduces Logical and Axiological Weights (LAW), a rank-based theory of weight adaptation that injects value-aligned logical reasoning weights into subnetworks of pre-trained models through parameter-efficient fine-tuning with LoRA/QLoRA. The results of these methods are presented and discussed in detail. Additionally, the work explores prompting methodologies for value-aligned logical reasoning, including Logic-of-Reasoning (LoR), an in-context learning approach that decomposes the input into logical subproofs, tree-based and forest-based sequent calculi, and natural deduction prompts. It also aligns the generated text with human values through the Value-Aligned Logic-of-Reasoning (VALoR) prompt.
The dissertation concludes by presenting experimental results that validate the proposed methodologies. This research contributes to the field by offering a holistic approach to improving logical reasoning aligned with human values in generative large language models. It provides a foundation for future work on developing more human-aligned AI systems that can reason logically while upholding human values.
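The LAW method summarized above builds on LoRA-style low-rank adapters, in which a frozen weight matrix W is adjusted by a trainable low-rank product B·A rather than updated directly. As a rough illustration only (not the dissertation's actual implementation), the following pure-Python sketch applies such an update to an input vector; all names, dimensions, and values are hypothetical:

```python
# Hypothetical toy sketch of a LoRA-style low-rank update:
# the effective weight is W + scale * (B @ A), where W is frozen,
# A is r x d_in, and B is d_out x r, with rank r << min(d_in, d_out).

def lora_forward(W, A, B, x, scale=1.0):
    """Compute (W + scale * B @ A) @ x without materializing the full sum."""
    base = [sum(w * xi for w, xi in zip(row, x)) for row in W]       # W @ x
    low = [sum(a * xi for a, xi in zip(row, x)) for row in A]        # A @ x (r-dim)
    delta = [sum(b * li for b, li in zip(row, low)) for row in B]    # B @ (A @ x)
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: frozen 2x2 identity weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]          # 1 x 2
B = [[0.5], [0.5]]        # 2 x 1
print(lora_forward(W, A, B, [2.0, 3.0]))  # [4.5, 5.5]
```

Only A and B would be trained, so the number of updated parameters grows with the rank r rather than with the full weight matrix; QLoRA applies the same idea over a quantized frozen base model.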
Keywords
Generative large language models, Logical reasoning, Axiological alignment, Fine-tuning of pre-trained models, In-context learning