Kernel mixture model for probability density estimation in Bayesian classifiers

Estimating reliable class-conditional probabilities is a prerequisite for implementing Bayesian classifiers, and estimating probability density functions (PDFs) is likewise a fundamental problem for other probabilistic induction algorithms. The finite mixture model (FMM) can represent arbitrarily complex PDFs as a mixture of multimodal distributions, but it assumes that the mixture components follow a given distribution, which may not hold for real-world data. This paper presents a non-parametric kernel mixture model (KMM) based probability density estimation approach, in which the data sample of a class is assumed to be drawn from several unknown, independent hidden subclasses. Unlike traditional FMM schemes, we simply use the k-means clustering algorithm to partition the data sample into several independent components, and the regional density diversities of the components are combined using Bayes' theorem. On the basis of the proposed kernel mixture model, we present a three-step Bayesian classifier comprising partitioning, structure learning, and PDF estimation. Experimental results show that KMM improves the quality of the PDFs estimated by the conventional kernel density estimation (KDE) method, and that KMM-based Bayesian classifiers outperform existing Gaussian, GMM, and KDE-based Bayesian classifiers.

Journal Title: Data Mining and Knowledge Discovery, 2018, Vol. 32(3), pp. 675-707
Main Author: Zhang, Wenyu
Other Authors: Zhang, Zhenjiang ; Chao, Han-Chieh ; Tseng, Fan-Hsun
Format: Electronic Article
Language: English
Subjects: Kernel mixture model ; Probability density estimation ; Bayesian classifier ; Clustering
ID: ISSN: 1384-5810 ; E-ISSN: 1573-756X ; DOI: 10.1007/s10618-018-0550-5
Link: http://dx.doi.org/10.1007/s10618-018-0550-5
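The abstract describes the KMM construction at a high level: partition each class sample into components with k-means, estimate a kernel density per component, and recombine them via Bayes' theorem, i.e. p(x|c) = sum_j P(j|c) * p(x|j,c), where j ranges over the hidden subclasses found by clustering. The sketch below is a minimal illustration of that idea in Python, not the authors' implementation; the class name KMMClassDensity, the component count, and the bandwidth are hypothetical choices, and scikit-learn's KMeans and KernelDensity stand in for whatever partitioning and KDE machinery the paper actually uses.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

class KMMClassDensity:
    """Mixture-of-KDEs estimate of one class-conditional density p(x|c)."""

    def __init__(self, n_components=3, bandwidth=0.5):
        self.n_components = n_components  # assumed number of hidden subclasses
        self.bandwidth = bandwidth        # kernel bandwidth shared by all components

    def fit(self, X):
        # Partitioning step: split the class sample into components with k-means.
        km = KMeans(n_clusters=self.n_components, n_init=10).fit(X)
        self.kdes_, self.weights_ = [], []
        for j in range(self.n_components):
            Xj = X[km.labels_ == j]
            # PDF-estimation step: fit one KDE per component.
            self.kdes_.append(KernelDensity(bandwidth=self.bandwidth).fit(Xj))
            self.weights_.append(len(Xj) / len(X))  # empirical P(j | c)
        return self

    def logpdf(self, X):
        # log p(x|c) = log sum_j P(j|c) * p(x|j,c), computed stably in log space.
        comp = np.stack([np.log(w) + kde.score_samples(X)
                         for w, kde in zip(self.weights_, self.kdes_)])
        return np.logaddexp.reduce(comp, axis=0)

A Bayes classifier built on this would fit one such density per class and predict argmax over c of [log P(c) + logpdf_c(x)]; the paper's middle step, structure learning, has no analogue in this sketch.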
Staff View
recordid: springer_jour10.1007/s10618-018-0550-5
title: Kernel mixture model for probability density estimation in Bayesian classifiers
format: Article
creator:
  • Zhang, Wenyu
  • Zhang, Zhenjiang
  • Chao, Han-Chieh
  • Tseng, Fan-Hsun
subjects:
  • Kernel mixture model
  • Probability density estimation
  • Bayesian classifier
  • Clustering
ispartof: Data Mining and Knowledge Discovery, 2018, Vol.32(3), pp.675-707
description: Estimating reliable class-conditional probabilities is a prerequisite for implementing Bayesian classifiers, and estimating probability density functions (PDFs) is likewise a fundamental problem for other probabilistic induction algorithms. The finite mixture model (FMM) can represent arbitrarily complex PDFs as a mixture of multimodal distributions, but it assumes that the mixture components follow a given distribution, which may not hold for real-world data. This paper presents a non-parametric kernel mixture model (KMM) based probability density estimation approach, in which the data sample of a class is assumed to be drawn from several unknown, independent hidden subclasses. Unlike traditional FMM schemes, we simply use the k-means clustering algorithm to partition the data sample into several independent components, and the regional density diversities of the components are combined using Bayes' theorem. On the basis of the proposed kernel mixture model, we present a three-step Bayesian classifier comprising partitioning, structure learning, and PDF estimation. Experimental results show that KMM improves the quality of the PDFs estimated by the conventional kernel density estimation (KDE) method, and that KMM-based Bayesian classifiers outperform existing Gaussian, GMM, and KDE-based Bayesian classifiers.
language: eng
source: SpringerLink (Springer Science & Business Media B.V.)
publisher: Springer US, New York
date: 2018-05
journal abbreviation: Data Min Knowl Disc
peer reviewed: yes
identifier: ISSN: 1384-5810 ; E-ISSN: 1573-756X ; DOI: 10.1007/s10618-018-0550-5
fulltext: fulltext
issn:
  • 1384-5810 (print)
  • 1573-756X (electronic)
url: http://dx.doi.org/10.1007/s10618-018-0550-5

