Vol. 27 No. 1, Jan. 2014    JOURNAL OF NINGBO UNIVERSITY (NSEE)
Image Segmentation via Variational Mixture of Gaussians
ZHANG Yuan-yuan, ZHONG Yi-wei
( Faculty of Information Science and Technology, Ningbo University, Ningbo 315211, China )
Abstract: The Gaussian mixture model (GMM) has been used effectively in image segmentation. In this setting, the features of an image are described by a mixture model with K components. However, choosing the number of mixture components and estimating the model parameters remain open problems. Current approaches such as maximum likelihood estimation and sampling methods have well-known limitations. We therefore present an alternative algorithm based on the Bayesian variational method and apply it to image segmentation. This method incurs lower computational cost than sampling methods and handles the model selection problem naturally. During the iterative inference process, the algorithm automatically determines the number of mixture components from the observed data. Comparisons with classical segmentation methods on natural images from the Berkeley Segmentation Data Set suggest that our method achieves better segmentation performance.
Key words: image segmentation; variational inference; Gaussian mixture models; expectation-maximization
CLC number: TP391.41 Document code: A Article ID: 1001-5132(2014)01-0023-06
Image segmentation is an important problem in computer vision that aims to partition an image into disjoint regions such that pixels within the same region share similar visual characteristics. The Gaussian mixture model is a useful mathematical model for unsupervised image segmentation [1-3]. This approach models the image as a mixture of Gaussians, and segmentation is the process of assigning each pixel to the component it most likely belongs to. Image segmentation is thus converted into an inference problem: optimizing the model parameters to fit the data. A common inference technique is the Expectation-Maximization (EM) algorithm, an iterative procedure for obtaining maximum likelihood estimates [2,4-5].
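The EM-based segmentation pipeline described above can be sketched as follows. This is a minimal illustration using scikit-learn's GaussianMixture (which fits a GMM by EM), applied to a small synthetic two-region "image" rather than a real photograph; the image size, colors, and component count are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 32x32 RGB "image": left half dark, right half bright,
# standing in for a real photograph with two visually distinct regions.
rng = np.random.default_rng(0)
h, w = 32, 32
img = np.where(
    (np.arange(w) < w // 2)[None, :, None],
    rng.normal(0.2, 0.05, (h, w, 3)),
    rng.normal(0.8, 0.05, (h, w, 3)),
)

# Flatten pixels into an (N, 3) feature matrix and fit a K-component
# Gaussian mixture by EM (here K = 2 is fixed in advance).
X = img.reshape(-1, 3)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Segmentation: assign each pixel to the component it most likely
# belongs to, then restore the image layout.
labels = gmm.predict(X).reshape(h, w)
print(np.unique(labels))
```

Note that K must be supplied before fitting; the inadequacy of fixing K by hand is exactly the model selection problem discussed next.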
A key issue for the EM algorithm is the model selection problem: determining the number of components so as to avoid over-fitting or under-fitting. If the number is chosen inappropriately, convergence will be very slow and the clustering result may be poor. Many information criteria have been proposed to address this problem [6-10], such as the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC), which measure relative information loss. However, these criteria do not know what the true model is, so they cannot assess accuracy in an absolute sense; even if none of the candidate models fits well, they give no warning. Another crucial problem is how to guarantee that the EM
algorithm reaches the global optimum, which depends on the initial values of the model. A commonly used improvement is to run K-means to obtain a rough estimate and then use this estimate as the initial value for EM
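The contrast drawn above can be illustrated with scikit-learn, whose GaussianMixture and BayesianGaussianMixture implement EM with a fixed K and a variational Bayesian mixture, respectively; both initialize from K-means by default. This is an analogous off-the-shelf sketch, not the authors' own algorithm, and the toy data, candidate range, and prior value are illustrative assumptions. The information-criterion route must fit one model per candidate K, whereas a single variational fit drives the weights of superfluous components toward zero.

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

# Toy data drawn from a 3-component mixture, standing in for pixel features.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal((0.0, 0.0), 0.3, (200, 2)),
    rng.normal((4.0, 0.0), 0.3, (200, 2)),
    rng.normal((0.0, 4.0), 0.3, (200, 2)),
])

# Information-criterion route: fit a separate EM model for each candidate K
# and keep the one with the lowest BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)

# Variational route: start with a generous K = 8; the Dirichlet prior on the
# mixing weights prunes unneeded components within a single fit.
vb = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior=1e-2,
    max_iter=500,
    random_state=0,
).fit(X)
active_k = int(np.sum(vb.weights_ > 1e-2))

print(best_k, active_k)
```

On well-separated data both routes recover a small number of effective components, but the variational fit does so in one run and without an external criterion, which is the behavior the proposed method exploits.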
Received date: 2013−06−09. JOURNAL OF NINGBO UNIVERSITY ( NSEE ): http://journallg.nbu.edu.cn/
Foundation items: Supported by the National Natural Science Foundation of China (61175026); Discipline Project of Ningbo University (XKL09154).
The first author: ZHANG Yuan-yuan (1981−), female, Ninghai, Zhejiang, PhD, experimentalist; research domain: machine learning and computer vision.
E-mail: zhangyuanyuan@nbu.edu.cn