Classifier ensemble feature selection for automatic fault diagnosis

dc.contributor.advisor: Rauber, Thomas Walter
dc.contributor.advisor-co: Varejão, Flávio Miguel
dc.contributor.referee: Salles, Evandro Ottoni Teatini
dc.contributor.referee: Carvalho, André Carlos Ponce de Leon Ferreira de
dc.contributor.referee: Santos, Thiago Oliveira dos
dc.contributor.referee: Conci, Aura
dc.date.accessioned: 2018-08-02T00:04:07Z
dc.date.available: 2018-08-01
dc.date.available: 2018-08-02T00:04:07Z
dc.identifier.citation: BOLDT, Francisco de Assis. Classifier ensemble feature selection for automatic fault diagnosis. 2017. 112 f. Tese (Doutorado em Ciência da Computação) - Universidade Federal do Espírito Santo, Centro Tecnológico, Vitória, 2017. [por]
dc.identifier.uri: http://repositorio.ufes.br/handle/10/9872
dc.publisher: Universidade Federal do Espírito Santo [por]
dc.publisher.country: BR [por]
dc.publisher.course: Doutorado em Ciência da Computação [por]
dc.publisher.initials: UFES [por]
dc.publisher.program: Programa de Pós-Graduação em Informática [por]
dc.subject: Seleção de características (Computação) [por]
dc.subject: Classifier ensemble [en]
dc.subject: Feature selection [en]
dc.subject: Automatic fault diagnosis [en]
dc.subject.br-rjbn: Localização de falhas (Engenharia) [por]
dc.subject.br-rjbn: Classificadores (Linguística) [por]
dc.subject.cnpq: Ciência da Computação [por]
dc.subject.udc: 004
dc.title: Classifier ensemble feature selection for automatic fault diagnosis [en]
dc.type: doctorThesis [en]
dcterms.abstract: An efficient ensemble feature selection scheme applied to fault diagnosis is proposed, based on three hypotheses: (a) a fault diagnosis system does not need to be restricted to a single feature extraction model; on the contrary, it should use as many feature models as possible, since the extracted features are potentially discriminative and the pooled feature set is subsequently reduced by feature selection; (b) the feature selection process can be accelerated, without loss of classification performance, by combining feature selection methods so that faster but weaker methods remove potentially non-discriminative features and pass a smaller, filtered feature set to slower but stronger methods; (c) the optimal feature set for a multi-class problem may differ for each pair of classes, so feature selection should follow a one-versus-one scheme even when multi-class classifiers are used. However, since the number of classifiers grows exponentially with the number of classes, expensive techniques such as Error-Correcting Output Codes (ECOC) may have a prohibitive computational cost for large datasets; thus, a fast one-versus-one approach must be used to alleviate this computational demand. These three hypotheses are corroborated by experiments. The main hypothesis of this work is that using these three approaches together makes it possible to significantly improve the classification performance of a classifier that identifies conditions in industrial processes. Experiments have shown such an improvement for the 1-NN classifier on the industrial processes used as case studies. [en]
dcterms.creator: Boldt, Francisco de Assis
dcterms.format: Text [en]
dcterms.issued: 2017-07-14
dcterms.language: en
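As a rough illustration of hypothesis (b) from the abstract, the sketch below chains a fast, weaker univariate filter in front of a slower, stronger wrapper-style selector that feeds a 1-NN classifier. This is not the thesis implementation: the scikit-learn components, the wine dataset stand-in for a pooled fault-feature set, and the chosen numbers of retained features are illustrative assumptions only.

    # Minimal sketch of a cascaded feature selection pipeline (assumed components, not the thesis code).
    from sklearn.datasets import load_wine
    from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import cross_val_score

    # Stand-in data: a pooled set of 13 features from a 3-class problem.
    X, y = load_wine(return_X_y=True)

    cascade = Pipeline([
        # Fast but weak stage: univariate ANOVA filter discards likely non-discriminative features.
        ("fast_filter", SelectKBest(f_classif, k=8)),
        # Slow but strong stage: forward sequential search over the surviving features.
        ("slow_wrapper", SequentialFeatureSelector(
            KNeighborsClassifier(n_neighbors=1), n_features_to_select=4)),
        # Final classifier: 1-NN, as in the thesis case studies.
        ("clf", KNeighborsClassifier(n_neighbors=1)),
    ])

    print(cross_val_score(cascade, X, y, cv=5).mean())

The point of the cascade is only that the expensive wrapper search runs on the reduced feature set produced by the cheap filter, which is the acceleration argued for in hypothesis (b).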
Files
Original package
Name: Doctor_thesis.pdf
Size: 2.25 MB
Format: Adobe Portable Document Format