Generalization of the regularized extreme learning machine for multiple-output problems

Date
2018-02-28
Authors
Inaba, Fernando Kentaro
Publisher
Universidade Federal do Espírito Santo
Abstract
Extreme Learning Machine (ELM) has recently gained popularity and has been successfully applied to a wide range of problems. Variants using regularization are now common practice in the state of the art of the ELM field. The most commonly used regularization is the ℓ2 norm, which improves generalization but yields a dense network. Regularization based on the elastic net has also been proposed, but mainly for regression and binary classification problems. In this thesis, a generalization of the regularized ELM (R-ELM) is proposed for multiclass classification and multitarget regression problems. The use of the ℓ2,1 and Frobenius norms provides an appropriate generalization; consequently, it is shown that R-ELM and OR-ELM are particular cases of the methods proposed in this thesis, termed GR-ELM and GOR-ELM, respectively. A further method proposed in this thesis is DGR-ELM, a variant of GR-ELM for data that are naturally distributed. The alternating direction method of multipliers (ADMM) is used to solve the resulting optimization problems, and DGR-ELM is implemented with the Message Passing Interface (MPI) in a Single Program, Multiple Data (SPMD) programming style. Extensive experiments show that GR-ELM, DGR-ELM, and GOR-ELM achieve training and testing performance similar to R-ELM and OR-ELM, while usually requiring less testing time due to the compactness of the resulting networks.
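To make the baseline concrete, the following is a minimal sketch of the ℓ2-regularized ELM (the R-ELM building block the abstract generalizes): random input weights and biases, a tanh hidden layer, and output weights obtained by ridge-regularized least squares. Function names, the choice of tanh, and the parameter C are illustrative assumptions, not the thesis's exact formulation (which additionally uses ℓ2,1 and Frobenius penalties solved via ADMM).

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50, C=1.0):
    """Sketch of an l2-regularized ELM for multitarget regression.

    X: (n_samples, n_features) inputs; T: (n_samples, n_targets) targets.
    C is an assumed regularization trade-off parameter (larger = weaker penalty).
    """
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Ridge-regularized least squares: beta = (H^T H + I/C)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Note that the ℓ2 penalty shrinks the output weights but leaves every hidden neuron active, which is exactly the density the thesis's ℓ2,1 (row-sparse) regularization is meant to address.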
Keywords
Extreme learning machines (ELM)