Color and Texture Segmentation Using an Unified MRF Model

Abstract

Color image segmentation hinges on two key issues: the proper choice of a color model and the choice of an appropriate image model. In this work, the Ohta (I1, I2, I3) color space is taken as the color model and different variants of Markov Random Field (MRF) models are proposed. In this regard, a Compound Markov Random Field (COMRF) model is proposed to take care of both inter-color-plane and intra-color-plane interactions. In continuation of this model, a Constrained Compound Markov Random Field (CCOMRF) model has been proposed to model color images. The color image segmentation problem has been formulated in an unsupervised framework. The performance of the proposed models has been compared with the standard MRF model and some state-of-the-art methods, and found to exhibit improved performance.

Share and Cite:

Panda, S. and Nanda, P. (2022) Color and Texture Segmentation Using an Unified MRF Model. Journal of Computer and Communications, 10, 139-164. doi: 10.4236/jcc.2022.106012.

1. Introduction

The problem of image segmentation has been an active research topic for quite a few years. It is connected with the object labeling process, that is, assigning a different label to each object (all pixels of an object receive the same value). The problem is further compounded in the real world environment, which is colored. Nowadays, color imagery has become an integral part of human life because of its tremendous use on the internet as well as on social media platforms. Since color image segmentation conveys much more information about the objects in a scene, it is a crucial issue while designing the front end of an automated machine vision system. Even though a good number of image segmentation techniques and strategies have been reported in the literature [1], the problem of color image segmentation remains a challenging task. Color image segmentation poses two major challenges: 1) devising an appropriate representation or model of the color components, and 2) ascertaining the intrinsic characteristics of the images. There has been a conscious effort by researchers to understand the underlying notion behind real world colors, and attempts have been made to devise different color spaces in both linear and nonlinear frameworks [2]. Because of the complex correlation among different color planes, devising an appropriate color model for real world images is a hard task. In a color image, besides the complexity of the color model, an appropriate image model, taking care of the spatial intrinsic characteristics, needs to be designed for proper image analysis. In the literature, different image models have been proposed for image restoration, filtering, segmentation, object detection, recognition, etc. These models can broadly be categorized as deterministic and stochastic models. Stochastic models have been used in various image analysis and computer vision applications. Specifically, “Markov Random Field (MRF) models” [3] [4] have the potentiality of modeling the spatial intrinsic characteristics of an image and have been extensively used in “image processing and computer vision” for nearly three decades. The MRF model and its variants have been found to be suitable models for many real world images. The MRF model is a stochastic model with nonlinear features, and appropriate modeling of a given image requires proper MRF model parameters. Because of the availability of different color spaces for real world color modeling and of the MRF model as a suitable image model, many image analysis and pattern recognition problems in a real world environment can be addressed. Therefore, model based color image segmentation and restoration can be an alternative to non parametric methods [3]. Researchers in the image analysis and computer vision community have proposed a wide variety of MRF model based supervised and unsupervised image segmentation schemes [4] for automated vision systems in real world environments.

In this work, an unsupervised segmentation algorithm is developed to segment the different regions of a color image. In this framework, the image labels as well as the image model parameters are estimated in a concurrent manner. The current unsupervised algorithm has been successfully tested on different real world images and texture images from the Berkeley database. For the sake of convenience, however, five results are presented and a comparison is made with Kato's method and the JSEG method. Since the image label estimates and the model parameter estimates depend on each other, determining optimal estimates of the image labels and model parameters is a hard problem. In order to ameliorate this issue, a “recursive scheme” has been presented, where the image labels and model parameters are estimated alternately. This recursive scheme yields “partial optimal solutions” instead of “global optimal solutions”. As far as the image model is concerned, two different MRF models, namely the “Compound Markov Random Field (COMRF) model” and the “Constrained Compound Markov Random Field (CCOMRF) model”, are proposed. The CCOMRF model has been found to possess the “unifying property” of modeling texture as well as scene. The Ohta $(I_1, I_2, I_3)$ color space has been used to model the input color image. The compound MRF model controls the correlation of the color planes for efficient modeling. In both MRF models, two types of clique potential functions, namely the Weak Membrane model and the Reward-Punishment model, have been used. In the present work, the model parameters are estimated in one step, and in the second step these estimated parameters are used to obtain the image label estimates. The parameter estimation problem has been cast using the “Maximum Conditional Pseudo Likelihood (MCPL)” principle, and the estimates of the model parameters have been obtained using the proposed “homotopy continuation algorithm”. The MCPL estimation problem reduces to solving a set of nonlinear equations whose zeros are the estimates of the MRF parameters. These zeros of the nonlinear function have been found by tracing the zero curve of the homotopy map. A fixed point based homotopy continuation method with the homotopy parameter $\lambda$ is developed, and an algorithm is developed to trace the zero curve of the homotopy map to determine the zeros of the desired function and hence the parameter estimates. The label estimation problem has been cast using the “Maximum a Posteriori (MAP)” estimation principle, and the proposed hybrid algorithm is used to obtain the MAP estimates.

In the following, Section 2 describes the related work while Section 3 presents the Compound MRF (COMRF) model. The proposed Constrained MRF (CMRF) model is given in Section 4, and Section 5 deals with the Constrained Compound MRF (CCOMRF) model. Section 6 presents the formulation in an unsupervised framework, Section 7 presents image label estimation, and model parameter estimation is presented in Section 8. The simulation results are presented in Section 9, and concluding remarks are given in Section 10.

Our Contributions

The list of contributions, along with the respective sections, can be summarized as follows:

A Compound Markov Random Field Color Image Model is proposed taking into account both the intra-color-plane as well as inter-color-plane interaction of pixels in the RGB and Ohta models (Section 3);

A new MRF model, called the Constrained Compound MRF (CCOMRF) model, is proposed and found to possess the unifying property of modeling color texture as well as scene images (Section 5);

An unsupervised color image segmentation scheme using the Homotopy Continuation method is proposed for simultaneous estimation of model parameters as well as image labels (Section 6);

An MRF-MAP based supervised image segmentation using the Homotopy Continuation method and a hybrid algorithm is proposed (Section 8).

2. Related Work

There has been a conscious effort to devise color models which, when used, cannot be distinguished from the originals by a human observer. Towards this end, many linear and nonlinear color models have been proposed [2] [5] earlier. The most commonly used linear models are the RGB and Ohta $(I_1, I_2, I_3)$ [5] models.

Ohta found an effective set of color features, namely $(R + G + B)/3$, $R - B$ and $(2G - R - B)/2$, while segmenting eight kinds of color pictures; these features are popularly known as the Ohta $(I_1, I_2, I_3)$ model. In the literature, the different color spaces used for segmentation have been investigated, and an overview of different color spaces from perceptual, historical and application specific viewpoints has been presented [6]. This overview [6] highlights both the potentiality and the limitations of different color spaces.
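For concreteness, the Ohta transform above can be written as a small NumPy function; this sketch assumes the image is stored as an H x W x 3 array and is not taken from the authors' implementation.

import numpy as np

def rgb_to_ohta(rgb):
    """Map an H x W x 3 RGB array to the Ohta features I1 = (R+G+B)/3,
    I2 = R - B, I3 = (2G - R - B)/2."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i1 = (r + g + b) / 3.0
    i2 = r - b
    i3 = (2.0 * g - r - b) / 2.0
    return np.stack([i1, i2, i3], axis=-1)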

For image restoration and segmentation applications, MRF models have been found to be very efficient [3] [4] [7]. Geman and Geman [3] have proposed binary line fields together with a random field model of the non edge pixels. Subsequently, Besag [4] proposed a segmentation scheme where the segmentation problem has been cast as a “pixel labeling problem” and the pixel labels have been estimated using the “Iterated Conditional Mode (ICM) algorithm”. In Besag's [4] formulation, the MRF model parameters have been estimated together with the image labels, and image restoration has been performed in the MAP framework. An MRF model has also been proposed combining color and texture features [8]. Kato et al. [8] have used perceptually uniform CIELUV color values as color features and a set of Gabor filters as texture features. Kato et al. [9], in their subsequent work, have proposed a new MRF model based segmentation scheme, where the model consists of three layers, two of which correspond to the two features and a special layer called the combined layer. This scheme has produced quite satisfactory results for multi class textured images.

In addition to the MRF based methods, another method known as JSEG has been proposed to segment color images in an unsupervised framework [10]. In this approach, a multiscale “J-image” has been created and a region growing strategy has been used for segmentation. An MRF based clustering approach [11] has been proposed for color image segmentation. In MRF based clustering, the MRF model has been applied over pre-segmented data obtained by grouping similar pixels into regions; clustering has therefore been carried out in a multidimensional feature space. In another work, the “Hidden Markov Random Field (HMRF) model” parameters have been used for segmentation of natural color texture images [12]. MRF model parameters have also been estimated by Kato et al. [13] for segmentation of color texture images, where the EM algorithm has successfully been used to estimate the model parameters. They have used perceptually uniform “CIE-L*U*V*” color values as color features and a set of “Gabor filters” as texture features. The parameters estimated by the EM algorithm have yielded appreciable segmentation results. Panjwani et al. [14] have proposed an unsupervised scheme for color textures using a MRF model. They have used the “Gaussian Markov Random Field (GMRF)” model for color textures, and the algorithm is region based, consisting of a region splitting phase and an agglomerative clustering phase. Color and motion have been jointly processed in an unsupervised scheme proposed by Lievin et al. [15]. In this work, a non-linear color transform relevant for hue segmentation is derived from a logarithmic model. The proposed “hierarchical segmentation scheme” is based on MRF modeling that combines hue and motion detection within a “spatiotemporal neighborhood”. Another “unsupervised color image segmentation scheme”, based on feature space, has been proposed by Guo et al. [16], where the feature space consists of two distinct source models and a valley. The model parameters have been estimated and a “labeling algorithm” has been developed to determine the segmentation. The segmentation process is completely autonomous.

An unsupervised color segmentation algorithm has been proposed [17] using a “multiscale texture model”. A new MRF model known as the “Associative Hierarchical Random Field (AHRF)” has been proposed [18], together with a new optimization algorithm. This work is a generalization of many previous “super-pixel based methods” in a random field framework; here the MAP estimation is carried out using graph cut based move-making algorithms. Besides, another unsupervised algorithm has been proposed [19] in the Expectation Maximization (EM) framework, where the model parameters and the pixel labels have been estimated simultaneously. Very recently, Karadag et al. [20] have proposed an unsupervised segmentation algorithm in the MRF framework, where the bottom up phase takes care of the model parameters while the top down segmentation maps are constructed from domain specific information. Besides, another color image segmentation scheme has been proposed by Chen et al. [21], who have used both the MRF and Dempster-Shafer evidence theory to obtain the segmentation. They have demonstrated it for two label segmentation; however, it can be extended to the multi label case. Akbas et al. [22] have proposed a segmentation scheme where the image structure has been represented by its segmentation graph derived from a low-level hierarchical multiscale image segmentation. A novel “Decoupled Active Contour (DAC)” method is proposed [23] to extract boundaries accurately; notions such as “Viterbi search resampling and Bayesian estimation” are the key steps of DAC. A method based on a multilocal creaseness analysis of the histogram has been proposed for shape extraction [24], and the resulting segmentation scheme has been found to be robust. A multiscale method using edge and intensity information has been proposed for brain MR image segmentation [25]. In [26], the notion of coupled nonlinear diffusion has been used for feature extraction and enhancement; these features have subsequently been used for segmentation. A novel technique based on the geometrical properties of “lattice auto-associative memories” has been proposed [27] for color image segmentation adhering to a different color space. Other than the MRF model, research has also focused on various other methods such as graph cut based methods, level set methods, histogram thresholding based methods, etc. A review of segmentation using two powerful attributes, namely color and texture, has been presented in [28]. An iterated region merging based graph cut algorithm has been presented in [29]. This is an extension of the standard graph cut algorithm: it starts from the user labeled sub-graph and works iteratively to label the surrounding un-segmented regions. With the same amount of user input, this algorithm can achieve better segmentation results than standard graph cuts when the object is extracted from a complex background.

A new segmentation algorithm called “Histogram Thresholding-Fuzzy C-means Hybrid (HTFCM)” is proposed in [30]. HTFCM consists of two modules: 1) a “histogram thresholding module” and 2) the “FCM module”. The histogram thresholding technique contains three phases: 1) a peak finding technique, 2) region initialization and 3) a merging process. An outdoor scene image segmentation algorithm based on “background recognition and perceptual organization” is proposed in [31]. It consists of a “Perceptual Organization Model (POM)” that captures the structural relationships among the constituent parts of structured objects. Boix et al. [32] have proposed a new consistency potential for image labeling, known as the “Harmony Potential”, and have presented a new “Conditional Random Field (CRF) model” for object class image segmentation.

3. Compound Markov Random Field (COMRF) Model

Stochastic models such as MRF models have been extensively used in image analysis [3] [4]. Over the last two decades, there have been extensive applications of MRF models, and towards this end many variants of MRF models have been proposed for gray scale as well as color images [2]. In the case of color images, the accuracy of color image segmentation greatly depends upon an appropriate color model as well as a proper image model. Therefore, in this work, attempts have been made to develop appropriate color as well as image models for image segmentation. It is known that the RGB color model is not a suitable color model for image segmentation because of the existence of “strong correlation” among the different color planes, whereas the Ohta model, because of its “weak correlation” among different planes, has been widely used for color image segmentation. Therefore, a Compound Markov Random Field model has been proposed to introduce controlled correlation among different color planes through the MRF model parameters. This model, together with the Ohta color space, has proved to be quite effective for image segmentation.

In the present section, a compound MRF model is developed which is based on both spatial and temporal modeling; in other words, a spatio-temporal MRF model is developed in the color space. In our earlier work [33], the notion of a Constrained Compound MRF model was proposed, but in this section the clique potential function is described in more detail. Since the Ohta color model is considered in this research, each plane, i.e. $I_1$, $I_2$, $I_3$, has been modeled as a MRF, and the inter-plane interactions among different planes have also been modeled as a MRF. This model thus treats the “intra color plane and inter color plane interactions” ($I_1$-$I_2$, $I_2$-$I_3$ and $I_3$-$I_1$) as a MRF model. The motivation is the existence of “strong correlation” between the different color planes of RGB and “weak correlation” among the different color planes of the Ohta $(I_1, I_2, I_3)$ model. The proposed spatio-temporal MRF model is mooted to ameliorate this limitation and thus introduces the notion of controlled correlation among different planes. This controlled correlation is expected to achieve segmentation results superior to those of the existing RGB or Ohta models. Hence, the proposed COMRF model takes care of “controlled correlation” among different planes, and the “degree of correlation” is governed by the associated parameters of the “clique potential function”. All images are assumed to be defined on a “discrete rectangular lattice” of size $M \times N$. Each site $x_{i,j}$ of the input image $X$ is modeled as a random variable taking a value from 0 to $G$ (gray values). Since the image is defined over two dimensions, the observed image $X$ is modeled as a random field and $x$ denotes its realization, the given image. Similarly, the segmented image is modeled as the label process $Z$ with $L$ labels. The three color planes of the Ohta model are presented in Figure 1(a), and each color plane ($I_1$, $I_2$ or $I_3$) is modeled separately as a MRF. The label process of the $k$th plane ($k = 1, 2, 3$) is denoted as $Z^k$. With the spatial interaction of the $Z^1$ plane modeled as a MRF, the joint probability distribution $P(Z^1 = z^1)$ is known to be “Gibbs distributed” and can be presented as follows

$P(Z^1 = z^1 \mid \theta) = \frac{1}{Z}\, e^{-U(z^1, \theta)}$ (1)

where $Z = \sum_{z^1} e^{-U(z^1, \theta)}$ is the “partition function”, $U(z^1, \theta)$ is called the energy function and is of the form $U(z^1, \theta) = \sum_{c \in C} V_c(z^1, \theta)$, with $V_c(z^1, \theta)$ known as the “clique potential function” and $\theta$ is the associated “clique parameter


Figure 1. (a) Interaction of the $I_1$, $I_2$, $I_3$ planes; (b) interaction of a single pixel of the $I_1$ plane with the $I_2$ plane.

vector”. Analogously, the spatial interactions of the $I_2$ and $I_3$ planes can be defined. This prior MRF model, incorporating the three spatial planes, results in an “energy function” of the following form

$U(z, \theta) = \sum_{i,j} V_c(z, \theta)$ (2)

where $V_c(z, \theta)$ denotes the “clique potential function” for the three color planes $I_1$, $I_2$ and $I_3$. In order to complete the model, the “inter plane interactions together with the intra plane interactions” have been taken care of. Thus, $Z$ has been modeled as a compound MRF, where the “spatio-temporal MRF model” takes care of the spatial as well as the temporal interactions. Figure 1(a) shows the interaction among different color planes and, as an illustration, Figure 1(b) represents the interaction of the $(i,j)$th pixel of the $I_2$ plane with the corresponding pixel of the $I_1$ plane, taking the first order neighborhood structure. The MRF prior with the above interaction can be presented as

$P(Z^2_{i,j} = z^2_{i,j} \mid Z^1_{k,l} = z^1_{k,l}, (k,l) \neq (i,j), (k,l) \in M \times N) = P(Z^2_{i,j} = z^2_{i,j} \mid Z^1_{k,l} = z^1_{k,l}, (k,l) \neq (i,j), (k,l) \in \eta^1_{i,j})$.

Let $z$ denote the labels of pixels taking care of all three color planes; in other words, $z$ denotes the pixel labels of the input color image. For example, $z_{i,j}$ refers to the $(i,j)$th pixel label comprising the three color components. The “prior probability” of $z$ is contributed by the “intra color plane interactions and inter color plane interactions” of pixels. Hence, the prior model of $z$ consists of the “clique potential functions $V_{cs}(z)$ and $V_{ct}(z)$” corresponding to “intra color plane interactions and inter color plane interactions” respectively. The vertical and horizontal line fields for the different color planes ($k = 1, 2, 3$) are denoted as $v^k$ and $h^k$ respectively, and are defined as follows. For the $k$th color plane, let $f_v(z^k_{i,j}, z^k_{i,j-1}) = |z^k_{i,j} - z^k_{i,j-1}|$. The vertical line field for each plane is set, i.e. $v^k_{i,j} = 1$ for $k = 1, 2, 3$, if $f_v(z^k_{i,j}, z^k_{i,j-1}) > \mathrm{Threshold}$; else $v^k_{i,j} = 0$. Similarly, for the horizontal line field, let $f_h(z^k_{i,j}, z^k_{i-1,j}) = |z^k_{i,j} - z^k_{i-1,j}|$. The horizontal line field for the $k$th plane is set, i.e. $h^k_{i,j} = 1$ for $k = 1, 2, 3$, if $f_h(z^k_{i,j}, z^k_{i-1,j}) > \mathrm{Threshold}$; else $h^k_{i,j} = 0$. Since the COMRF model takes care of “intra color plane as well as inter color plane interactions”, the prior probability distribution is given by (1), where the energy function is represented as,

$U(z, \theta) = U_s(z, \theta) + U_t(z, \theta)$ (3)

where,

$U_s(z, \theta) = \sum_{i,j} V_{cs}(z_{i,j})$ (4)

$U_t(z, \theta) = \sum_{i,j} V_{ct}(z_{i,j})$ (5)

Here, $U_s(z, \theta)$ and $U_t(z, \theta)$ represent the “intra-color-plane and inter-color-plane” energy functions respectively. $V_{cs}(z_{i,j})$ corresponds to the “intra-color-plane pixels” and $V_{ct}(z_{i,j})$ corresponds to the “inter-color-plane pixels”. Let $h^k_s$ for $k = 1, 2, 3$ denote the “horizontal line field” for each color plane in the intra-color-plane direction, and $h^k_t$ for $k = 1, 2, 3$ denote the “vertical line fields” in the inter-color-plane directions. Thus, the compound MRF model has the energy function given by (3). Equation (4) can be presented as,

$V_{cs}(z_{i,j}) = \sum_{k=1}^{3} \alpha^k [(z^k_{i,j} - z^k_{i,j-1})^2 (1 - v^k_{i,j}) + (z^k_{i,j} - z^k_{i-1,j})^2 (1 - h^k_{i,j})] + \beta^k [v^k_{i,j} + h^k_{i,j}]$. (6)

Here $z^1$, $z^2$, $z^3$ refer to the $(I_1, I_2, I_3)$ planes respectively. Equation (5) can be expressed as,

$V_{ct}(z_{i,j}) = \alpha^1 [(z^1_{i,j} - z^2_{i,j-1})^2 (1 - v^1_{i,j})] + \alpha^1 [(z^1_{i,j} - z^2_{i-1,j})^2 (1 - h^1_{i,j})] + \beta^1 [v^1_{i,j} + h^1_{i,j}] + \alpha^2 [(z^2_{i,j} - z^3_{i,j-1})^2 (1 - v^2_{i,j})] + \alpha^2 [(z^2_{i,j} - z^3_{i-1,j})^2 (1 - h^2_{i,j})] + \beta^2 [v^2_{i,j} + h^2_{i,j}] + \alpha^3 [(z^3_{i,j} - z^1_{i,j-1})^2 (1 - v^3_{i,j})] + \alpha^3 [(z^3_{i,j} - z^1_{i-1,j})^2 (1 - h^3_{i,j})] + \beta^3 [v^3_{i,j} + h^3_{i,j}]$. (7)

where the $z^1$ terms capture the interaction between the $I_1$-$I_2$ color planes, the $z^2$ terms the interaction between the $I_2$-$I_3$ color planes, and the $z^3$ terms the interaction between the $I_3$-$I_1$ color planes. Here we have assumed $\alpha^1 = \alpha^2 = \alpha^3 = \alpha$ and $\beta^1 = \beta^2 = \beta^3 = \beta$. The unknown parameters $[\alpha, \beta]^T$ have been chosen in an ad hoc manner. The boundary of a given segment is represented by edge pixels, and the line fields correspond to the edge pixels. Hence, it is not necessary to have a similarity measure for boundary pixels, and thus the “clique potential function” given by (7) consists of penalty functions only. Therefore, region formation, with its similarity measure, should not be contributed to by the boundary pixels.
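A minimal sketch of how the intra-color-plane term of Eq. (6) and its line fields could be evaluated is given below; the array layout, the single $(\alpha, \beta)$ pair and the hard threshold are assumptions for illustration, and the inter-color-plane term of Eq. (7) follows the same pattern with the neighbouring pixel taken from the adjacent plane.

import numpy as np

def intra_plane_energy(z, alpha, beta, threshold):
    """Sketch of the intra-color-plane energy built from Eq. (6).

    z: H x W x 3 label array (one channel per Ohta plane); alpha, beta: clique
    parameters (taken equal for all planes, as assumed in the text); threshold:
    the line-field threshold. Returns the summed intra-plane energy U_s(z).
    """
    energy = 0.0
    for k in range(3):
        zk = z[..., k].astype(np.float64)
        dv = np.abs(zk[:, 1:] - zk[:, :-1])      # |z[i,j] - z[i,j-1]|, vertical line field direction
        dh = np.abs(zk[1:, :] - zk[:-1, :])      # |z[i,j] - z[i-1,j]|, horizontal line field direction
        v = (dv > threshold).astype(np.float64)  # v[i,j] = 1 where an edge is declared
        h = (dh > threshold).astype(np.float64)  # h[i,j] = 1 where an edge is declared
        energy += alpha * np.sum(dv**2 * (1.0 - v)) + alpha * np.sum(dh**2 * (1.0 - h))
        energy += beta * np.sum(v + h)           # penalty for switching a line field on
    return energy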

4. Constrained Markov Random Field (CMRF) Model

In order to model textures besides the scene, the model needs to take care of the local properties of a given pixel. The MRF takes care of the spatial neighborhood; however, the dependency of the pixel on its neighborhood needs to be reinforced. This gives rise to a new a priori model for the label process.

A “discrete-time martingale” is a discrete-time “stochastic process” (i.e., a sequence of random variables $X_1, X_2, X_3, \ldots$) which satisfies, for all $n$,

$E(|X_n|) < \infty$

$E(X_{n+1} \mid X_1, X_2, X_3, \ldots, X_n) = X_n$

i.e., the “conditional expected value” of the next observation, given all of the past observations, is equal to the last observation. Let $Z(i)$, $i = 1, 2, \ldots, n$, be a “martingale sequence”. For all $i = 1, 2, \ldots, n$, $E[|Z(n)|] < \infty$ and $E[Z(n+1) \mid Z(1), \ldots, Z(n)] = Z(n)$. Now, let $Z_1, Z_2, \ldots, Z_n$ be the random variables associated with the labels of an image of size $n = N^2$. Therefore, $E[Z_{i,j} \mid Z_{k,l}, (k,l) \neq (i,j)] = Z_{i-1,j}$ for any $(k,l) \in \eta_{i,j}$, where $\eta_{i,j}$ is the neighborhood of $(i,j)$.

$E[Z_{i,j} \mid Z_{k,l}, (k,l) \neq (i,j)] = \sum_{z_{i,j} \in L} z_{i,j}\, P[Z_{i,j} = z_{i,j} \mid Z_{k,l} = z_{k,l}, (k,l) \neq (i,j)]$ (8)

Assuming that Z is a “Markov process”, one obtains,

$E[Z_{i,j} \mid Z_{k,l}, (k,l) \neq (i,j)] = \sum_{z_{i,j} \in L} z_{i,j}\, P[Z_{i,j} = z_{i,j} \mid Z_{k,l} = z_{k,l}, (k,l) \in \eta_{i,j}] = \sum_{z_{i,j} \in L} z_{i,j} \frac{P(Z = z)}{\sum_{z_{i,j} \in L} P(Z = z)}$. (9)

Since Z is a MRF,

$E[Z_{i,j} \mid Z_{k,l}, (k,l) \neq (i,j)] = \frac{\sum_{z_{i,j} \in L} z_{i,j}\, e^{-U(z)}}{\sum_{z_{i,j} \in L} e^{-U(z)}}$ (10)

Since $Z_{i,j}$ is a “martingale sequence”, $E[Z_{i,j} \mid Z_{k,l}, (k,l) \neq (i,j)] = z_{k,l}$, $(k,l) \in \eta_{i,j}$, so that

$z_{k,l} = \frac{\sum_{z_{i,j} \in L} z_{i,j}\, e^{-U(z)}}{\sum_{z_{i,j} \in L} e^{-U(z)}}$ (11)

Considering a first order neighborhood and choosing one of the neighborhood pixels, for example $z_{i-1,j}$, Equation (11) can be expressed as

$z_{i-1,j} = \frac{\sum_{z_{i,j} \in L} z_{i,j}\, e^{-U(z)}}{\sum_{z_{i,j} \in L} e^{-U(z)}}$

Instead of taking a single pixel $z_{i-1,j}$ from the neighborhood, the average of the neighborhood pixels is computed.
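The constrained term can be sketched as follows: the conditional expectation on the right-hand side of Eq. (11) is computed from the local energies of the candidate labels and compared with the first order neighbourhood average; the helper functions and the stability shift below are illustrative assumptions, not the authors' code.

import numpy as np

def conditional_expectation(local_energy, labels):
    """Right-hand side of Eq. (11): sum_z z * exp(-U(z)) / sum_z exp(-U(z)),
    with local_energy[m] the energy obtained by placing label labels[m]
    at the current site while all other sites are held fixed."""
    w = np.exp(-(np.asarray(local_energy) - np.min(local_energy)))  # shift for numerical stability
    return float(np.sum(np.asarray(labels) * w) / np.sum(w))

def neighbourhood_average(z, i, j):
    """First order neighbourhood average used in place of the single neighbour z[i-1, j]."""
    return 0.25 * (z[i - 1, j] + z[i + 1, j] + z[i, j - 1] + z[i, j + 1])

# The constrained penalty added to the energy is then lambda_c * (average - expectation)**2.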

5. Constrained Compound MRF (CCOMRF) Model

The constrained model proposed in the previous section needs to be combined with the COMRF model, which takes care of the spatial and temporal interactions. Thus, the constrained compound MRF model proposed in [34] [35] has the following energy function; it corresponds to the constrained neighborhood in the spatial framework only.

$U_{sc}(z) = \sum_{i,j} U_s(z_{i,j}) + \lambda_c \left\{ z^{avg}_{i,j} - \frac{\sum_{z_{i,j} \in L} z_{i,j}\, e^{-U(z_{i,j})}}{\sum_{z_{i,j} \in L} e^{-U(z_{i,j})}} \right\}^2$ (12)

where

$U_{sc}(z_{i,j}) = V_{cs}(z_{i,j}) + \lambda_c \left\{ z^{avg}_{i,j} - \frac{\sum_{z_{i,j} \in L} z_{i,j}\, e^{-U(z_{i,j})}}{\sum_{z_{i,j} \in L} e^{-U(z_{i,j})}} \right\}^2$ (13)

where $U_{sc}$ denotes the energy function corresponding to the “intra color plane interactions” and $V_{cs}(z_{i,j})$ is defined by (6), with

$z^{avg}_{i,j} = \frac{\sum_{z_{i,j} \in L} z_{i,j}\, e^{-U(z)}}{\sum_{z_{i,j} \in L} e^{-U(z)}}$ and $\lambda_c$ is the constrained model parameter.

The resultant energy function taking care of both “intra-color-plane and inter-color-plane interactions” with intra plane constraints is represented by

$U(z) = U_{sc}(z_{i,j}, \theta) + U_{tc}(z_{i,j}, \theta)$ (14)

where $U_{sc}(z_{i,j}, \theta)$ is defined by (13), $U_{tc}(z_{i,j}, \theta)$ is defined by (5), and $V_{cs}(z_{i,j})$ and $V_{ct}(z_{i,j})$ are given by (6) and (7) respectively. Different variants of MRF models are considered with two types of clique potential functions, i.e. the Weak Membrane model and the Reward-Punishment model. The clique potential function and the a priori energy function of the Weak Membrane model defined by Equations (12)-(16) have been used in our simulation. The energy function in the case of the first order anisotropic weak membrane model is given by

$U(z, \theta) = \sum_{i,j} \sum_{k=1}^{3} \left[ \alpha^k_1 \{(z^k_{i,j} - z^k_{i-1,j})^2 (1 - v^k_{i,j})\} + \alpha^k_2 \{(z^k_{i,j} - z^k_{i,j-1})^2 (1 - h^k_{i,j})\} + \alpha^k_3 \{(z^k_{i,j} - z^k_{i-1,j-1})^2 (1 - v'^k_{i,j})\} + \alpha^k_4 \{(z^k_{i,j} - z^k_{i+1,j-1})^2 (1 - h'^k_{i,j})\} \right] + \beta^k [h^k_{i,j} + v^k_{i,j} + h'^k_{i,j} + v'^k_{i,j}]$ (15)

where $h_{i,j}$, $v_{i,j}$ and $h'_{i,j}$, $v'_{i,j}$ are the horizontal and vertical line fields for the first order anisotropic weak membrane model. Besides this, the Reward-Punishment model is also considered in our simulation. The corresponding clique potential function is, in general, defined as

$V_c(z) = \begin{cases} -\delta_c & \text{if } |z_{i,j} - z_{i-1,j}| = 0 \\ +\delta_c & \text{if } |z_{i,j} - z_{i-1,j}| \neq 0 \end{cases}$ (16)

where $z_{i,j}$ and $z_{i-1,j}$ denote the pixel values at the sites $(i,j)$ and $(i-1,j)$ respectively. The clique potential function for the first order anisotropic reward punishment model can be expressed as follows

$V_c(z) = \begin{cases} -\delta_1 & \text{if } |z_{i,j} - z_{i-1,j}| = 0 \\ +\delta_1 & \text{if } |z_{i,j} - z_{i-1,j}| \neq 0 \\ -\delta_2 & \text{if } |z_{i,j} - z_{i,j-1}| = 0 \\ +\delta_2 & \text{if } |z_{i,j} - z_{i,j-1}| \neq 0 \\ -\delta_3 & \text{if } |z_{i,j} - z_{i+1,j}| = 0 \\ +\delta_3 & \text{if } |z_{i,j} - z_{i+1,j}| \neq 0 \\ -\delta_4 & \text{if } |z_{i,j} - z_{i,j+1}| = 0 \\ +\delta_4 & \text{if } |z_{i,j} - z_{i,j+1}| \neq 0 \end{cases}$ (17)
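A small sketch of the first order reward-punishment potential of Eq. (17) is shown below; the boundary handling and the default $\delta$ values are assumptions made only for illustration.

def reward_punishment_potential(z, i, j, delta=(1.0, 1.0, 1.0, 1.0)):
    """First order reward-punishment potential of Eq. (17): each of the four
    neighbours contributes -delta when its label matches z[i, j] and +delta
    when it differs. Out-of-image neighbours are simply skipped here."""
    neighbours = [(i - 1, j), (i, j - 1), (i + 1, j), (i, j + 1)]
    v = 0.0
    for d, (m, n) in zip(delta, neighbours):
        if 0 <= m < z.shape[0] and 0 <= n < z.shape[1]:
            v += -d if z[i, j] == z[m, n] else d
    return v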

The “local reinforcement” is also extended to the inter-color-plane interactions, hence introducing the notion of the constrained model in the inter-color-plane interactions. Thus, there is one clique potential function corresponding to intra-color-plane interactions and another corresponding to inter-color-plane interactions. In line with this, the constrained model of (12) is now applied to the intra as well as the inter color plane processes. The constraint is imposed among the $I_1$-$I_2$, $I_2$-$I_3$ and $I_3$-$I_1$ color planes. The energy function of the constrained compound model can be expressed as follows.

$U(z) = \sum_{i,j} \left[ U(z^1_{i,j}) + \lambda_c \left\{ z^{avg,2}_{i,j} - \frac{\sum_{z_{i,j} \in L} z^1_{i,j}\, e^{-U(z^1)}}{\sum_{z_{i,j} \in L} e^{-U(z^1)}} \right\}^2 + U(z^2_{i,j}) + \lambda_c \left\{ z^{avg,3}_{i,j} - \frac{\sum_{z_{i,j} \in L} z^2_{i,j}\, e^{-U(z^2)}}{\sum_{z_{i,j} \in L} e^{-U(z^2)}} \right\}^2 + U(z^3_{i,j}) + \lambda_c \left\{ z^{avg,1}_{i,j} - \frac{\sum_{z_{i,j} \in L} z^3_{i,j}\, e^{-U(z^3)}}{\sum_{z_{i,j} \in L} e^{-U(z^3)}} \right\}^2 \right]$ (18)

Accordingly, $U(z^2)$ and $U(z^3)$ are defined.

6. Unsupervised Framework

In this framework, neither the model parameters nor the image labels are assumed to be known. The estimates of the model parameters and the image labels are interdependent. Therefore, in an unsupervised scheme, the MAP estimation of the labels and the estimation of the model parameters are carried out concurrently. Thus, an estimation strategy needs to be developed which, using the observed image $X$, will yield an optimal pair $(Z^{opt}, \theta^{opt})$. Towards this end, the following joint optimality criterion is considered,

$(Z^{opt}, \theta^{opt}) = \arg\max_{z, \theta} P(Z = z \mid X = x, \theta)$ (19)

The pair $(Z^{opt}, \theta^{opt})$ estimated using (19) gives the globally optimal estimates. But the image labels $Z$ and the model parameters $\theta$ are initially unknown and are interdependent, which makes the problem very hard. In order to handle this situation, the problem can be reformulated to achieve suboptimal solutions instead of optimal ones. It may be noted that the function $P(Z = z \mid X = x, \theta)$ in (19) is maximized with respect to both $Z$ and $\theta$. The interdependency of parameters that makes the problem intractable can be handled using the notion of parameter splitting proposed by Wendell and Horter [36] in a deterministic framework. The approach suggested by Wendell and Horter [36] yields suboptimal solutions instead of optimal ones. Their approach is to split the parameter set into two sets and estimate the parameters recursively, and it has been shown that this recursive estimation eventually leads to partial optimal solutions. Since our formulation is in a stochastic framework, the same notion is adhered to, and the above problem is split into two separate problems of estimating the labels $Z$ and the parameters $\theta$. This can be expressed as follows.

$Z^* = \arg\max_{z} P(Z = z \mid X = x, \theta^*)$ (20)

$\theta^* = \arg\max_{\theta} P(Z = z^* \mid X = x, \theta)$. (21)

The estimates $Z^*$ and $\theta^*$ are not global maxima, but are almost always local optimal solutions [36]. With $\theta = \theta^*$, however, the estimate $z^*$ is globally optimal satisfying Equation (20), and analogously for $z = z^*$, $\theta^*$ is globally optimal satisfying Equation (21). Since neither $\theta^*$ nor $z^*$ is known, a recursive scheme is adopted where the model parameter estimation and the segmentation are alternated. Let $\theta^k = [\alpha^k, \beta^k]^T$ be the estimate of the model parameters at the $k$th iteration and $z^k$ be the corresponding estimate of the labels of the observed image. Since both $Z^*$ and $\theta^*$ are unknown, the model parameters $\theta$ and the image labels $Z$ are estimated recursively as,

$Z^{k+1} = \arg\max_{z} P(Z = z \mid X = x, \theta^k)$ (22)

$\theta^{k+1} = \arg\max_{\theta} P(Z = z^{k+1} \mid X = x, \theta)$. (23)

The image labels $Z^{k+1}$ at the $(k+1)$th iteration have been estimated using the Bayesian approach [3]. The MAP estimate of $Z$ in the Bayesian framework has been obtained by the proposed hybrid algorithm, a combination of SA and ICM. In one combined iteration, estimates of $z^k$ and $\theta^k$ are obtained. This recursion is continued and, after a finite number of steps, $z^*$ and $\theta^*$ are obtained, which are the partial optimal solutions for $Z$ and $\theta$. The problem in (23) has been solved by the maximum conditional pseudolikelihood approach, and the pseudolikelihood estimate of $\theta$ has been obtained by the proposed homotopy continuation method. The flow chart of the unsupervised algorithm is shown in Figure 2.

Salient Steps of the Unsupervised Algorithm

1) Initialize the parameter vector as $\theta^0$ and the pixel label estimates as $z^0$; for $k = 0, 1, 2, \ldots, N$ do

Figure 2. Flow chart of unsupervised image segmentation scheme.

2) Using $\theta^k$, the observed image $x$ and the initial segmented image $z^k$, obtain the MAP estimate of the labels $\hat{z}^{k+1}$.

3) With $\hat{z}^{k+1}$, obtain the MCPL estimate of the parameter vector $\hat{\theta}^{k+1}$, using the homotopy continuation based algorithm.

4) Compare $\hat{\theta}^{k+1}$ with the previous estimate $\hat{\theta}^k$; if $|\hat{\theta}^{k+1} - \hat{\theta}^k| < \mathrm{threshold}$, go to step 5; else set $\theta^k = \theta^{k+1}$ and go to step 2.

5) Set the estimate of the parameter vector $\theta^* = \hat{\theta}^{k+1}$.

6) Estimate $z^*$ (the segmented image) using $\theta^*$, $\hat{z}^{k+1}$ and the observed image $x$.
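The salient steps above amount to the following alternating loop; `map_estimate_labels` and `mcpl_estimate_parameters` are hypothetical placeholders for the hybrid SA+ICM estimator of Section 7 and the homotopy based MCPL estimator of Section 8, so this is a sketch of the control flow rather than the authors' implementation.

import numpy as np

def unsupervised_segmentation(x, theta0, z0, map_estimate_labels,
                              mcpl_estimate_parameters, max_iter=20, tol=1e-3):
    """Recursion of Eqs. (22)-(23): alternate MAP label estimation and MCPL
    parameter estimation until the parameter vector stops changing."""
    theta = np.asarray(theta0, dtype=float)
    z = z0
    for _ in range(max_iter):
        z = map_estimate_labels(x, z, theta)          # step 2: MAP estimate (hybrid SA + ICM)
        theta_new = mcpl_estimate_parameters(z)       # step 3: MCPL estimate (homotopy continuation)
        if np.max(np.abs(theta_new - theta)) < tol:   # step 4: convergence test
            theta = theta_new
            break
        theta = theta_new
    return z, theta                                   # partial optimal solution (z*, theta*)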

7. Image Label Estimation

The segmentation problem is formulated as a pixel labelling problem, where each pixel can be assigned a label from the set of labels $0$ to $L$. All the labels are defined over an image of size $S = M \times N$. Every pixel $(i, j)$ is modeled as a random variable denoted as $Z_{i,j}$. Thus, the given image is viewed as a realization $z$ of the label process $Z$. The posterior probability $P(Z = z \mid X = x, \hat{\theta})$ is maximized to obtain the label estimates $\hat{z}$. Thus, the optimality criterion at the $k$th combined iteration can be expressed as follows,

$\hat{z}^{k+1} = \arg\max_{z} P(Z = z \mid X = x, \hat{\theta}^k)$ (24)

where $\hat{\theta}$ denotes the estimate of the associated parameter vector of the MRF model and $\hat{z}^{k+1}$ denotes the estimate of the labels. Since $z$ is unknown, $P(Z = z \mid X = x, \hat{\theta}^k)$ in (24) cannot be computed directly. Using Bayes' theorem,

$P(Z = z \mid X = x, \hat{\theta}^k)$ can be expressed as

$P(Z = z \mid X = x, \hat{\theta}^k) = \frac{P(X = x \mid Z = z, \hat{\theta}^k)\, P(Z = z)}{P(X = x \mid \hat{\theta}^k)}$ (25)

Since the observed image $X$ is given, the denominator in (25), i.e. $P(X = x \mid \hat{\theta}^k)$, is a constant. $P(Z = z)$ is the a priori probability distribution of the labels. The degradation process is assumed to be a Gaussian process, denoted by $W$, with realization $w$. Hence $P(X = x \mid Z = z, \theta^k)$ in (25) can be written as $P(X = x \mid Z = z^{k+1}, \theta^k) = P(X = z^{k+1} + w^{k+1} \mid Z, \theta^k) = P(W = x^{k+1} - z^{k+1} \mid Z, \theta^k)$. Since $W$ is a Gaussian process, and there are three spectral components present in a color image, one obtains,

$P(W = x - z \mid Z, \theta^k) = \frac{1}{\sqrt{(2\pi)^n \det[\bar{K}]}}\, e^{-\frac{1}{2}(x - z)^T \bar{K}^{-1} (x - z)}$ (26)

where $\bar{K}$ is the covariance matrix. Hence, the corresponding minimization can be expressed as,

$\hat{z} = \arg\min_{z} \sum_{i,j} \left[ \sum_{k=1}^{3} \frac{(x^k - z^k)^2}{2\sigma^2} + V_{cs}(z^k_{i,j}) + V_{ct}(z^k_{i,j}) \right]$, (27)

where $V_{cs}(z_{i,j})$ and $V_{ct}(z_{i,j})$ are as defined by (6) and (7) respectively. Solving (27) yields the MAP estimate of the image labels and hence the segmentation. The color image has three spectral components $x^k$, $z^k$, $k = 1, 2, 3$, and $V_c$ is the clique potential function over all three spectral components.
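A minimal sketch of one ICM sweep minimizing the posterior energy of Eq. (27) is shown below; `local_prior_energy`, standing in for $V_{cs} + V_{ct}$ at a site, and the candidate label set are assumptions, and in the hybrid algorithm a few SA iterations would precede such deterministic sweeps.

import numpy as np

def icm_sweep(x, z, candidate_labels, sigma, local_prior_energy):
    """One ICM sweep of the minimization in Eq. (27): every site is visited and
    given the candidate label (a 3-component vector, one value per plane) that
    minimizes the local likelihood term plus the local clique potentials."""
    H, W, _ = z.shape
    for i in range(H):
        for j in range(W):
            best_label, best_energy = None, np.inf
            for lab in candidate_labels:
                likelihood = np.sum((x[i, j] - lab) ** 2) / (2.0 * sigma ** 2)
                energy = likelihood + local_prior_energy(z, i, j, lab)
                if energy < best_energy:
                    best_label, best_energy = lab, energy
            z[i, j] = best_label
    return z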

8. Model Parameter Estimation

Using the “ground truth” image $z$, the a priori model parameters are estimated. The associated MRF parameter vector of this “ground truth” image is $\theta$. Therefore, the problem can be stated as follows

$\phi^{k+1} = \arg\max_{\theta} P(Z = z^{k+1} \mid \theta)$ (28)

Since Z is a MRF, we have,

$\phi^{k+1} = \arg\max_{\theta} \frac{\exp(-U(z^{k+1}, \theta))}{\sum_{\xi} \exp(-U(\xi, \theta))}$ (29)

where $\xi$ ranges over all realizations of the label image $z$. Because of the denominator of (29), computation of the joint probability $P(Z = z^{k+1} \mid \theta)$ is an extremely difficult task. Here, the “pseudolikelihood function” $\hat{P}(Z = z^{k+1} \mid \theta)$ is maximized instead of the “likelihood function” $P(Z = z \mid \theta)$, where,

$\prod_{(i,j) \in L} P(Z_{i,j} = z^{k+1}_{i,j} \mid Z_{m,n} = z^{k+1}_{m,n}, (m,n) \in \eta_{i,j}, \theta) = \hat{P}(Z = z^{k+1} \mid \theta)$ (30)

From the definition of “marginal conditional probability”, it can be written as,

$P(Z_{i,j} = z^{k+1}_{i,j} \mid Z_{k,l} = z^{k+1}_{k,l}, (k,l) \neq (i,j), (i,j) \in L, \theta) = \frac{P(Z = z^{k+1} \mid \theta)}{\sum_{z_{i,j} \in M} P(Z = z \mid \theta)}$ (31)

Because of the MRF assumption,

$P(Z_{i,j} = z^{k+1}_{i,j} \mid Z_{m,n} = z^{k+1}_{m,n}, (m,n) \in \eta_{i,j}, \theta) = \frac{\exp\left(-\sum_{c \in C} V_c(z^{k+1}, \theta)\right)}{\sum_{z_{i,j} \in M} \exp\left(-\sum_{c \in C} V_c(z^{k+1}, \theta)\right)}$ (32)

Substituting Equation (32) in (30), the following is obtained.

$\hat{P}(Z = z^{k+1} \mid \theta) = \prod_{(i,j) \in L} \frac{\exp\left(-\sum_{c \in C} V_c(z^{k+1}, \theta)\right)}{\sum_{z_{i,j} \in M} \exp\left(-\sum_{c \in C} V_c(z^{k+1}, \theta)\right)}$ (33)

So, the maximization problem reduces to

$\arg\max_{\theta} \hat{P}(Z = z^{k+1} \mid \theta) = \arg\max_{\theta} \prod_{(i,j) \in L} \frac{\exp\left(-\sum_{c \in C} V_c(z^{k+1}, \theta)\right)}{\sum_{z_{i,j} \in M} \exp\left(-\sum_{c \in C} V_c(z^{k+1}, \theta)\right)}$ (34)

In (34), the summation in the denominator is taken over all possible labels $M$. Equation (34) is “highly nonlinear” in nature and no “a priori knowledge” of the solution is available. With $\theta$ the parameter vector $\theta = [\alpha, \beta, \sigma, \gamma]$, (34) reduces to a set of complex non-linear equations $f(\theta) = 0$. Since no “a priori knowledge” about the initial guess for determining the solution is available, a “globally convergent homotopy map” has been developed to find the solution starting from an arbitrary initial guess. It is very difficult to solve the resulting non-linear equations directly, and therefore a globally convergent “Homotopy Continuation method” is developed. The homotopy curve is shown in Figure 3.
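The curve tracing can be sketched with a simple fixed point homotopy $H(\theta, \lambda) = \lambda f(\theta) + (1 - \lambda)(\theta - \theta^0)$, stepping $\lambda$ from 0 to 1 with Newton corrections at each step; this predictor-corrector scheme is an assumption about the tracing step, not the authors' exact algorithm.

import numpy as np

def homotopy_trace(f, jac, theta0, steps=50, newton_iters=5):
    """Fixed point homotopy H(theta, lam) = lam*f(theta) + (1-lam)*(theta - theta0),
    traced by stepping lam from 0 to 1 and correcting with Newton iterations;
    the zero reached at lam = 1 is a zero of f, i.e. the MCPL estimate."""
    theta0 = np.asarray(theta0, dtype=float)
    theta = theta0.copy()
    identity = np.eye(theta.size)
    for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            residual = lam * f(theta) + (1.0 - lam) * (theta - theta0)
            jacobian = lam * jac(theta) + (1.0 - lam) * identity
            theta = theta - np.linalg.solve(jacobian, residual)
    return theta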

9. Simulation

A wide variety of examples has been considered in the simulations, but to illustrate the potentiality of the proposed models and the algorithm, two textured images and three general images are presented in this paper.

9.1. Synthetic Images

Texture images having two and five classes have been considered and are shown in Figure 4 and Figure 5. The proposed Constrained Compound MRF model has the unifying property of modeling texture as well as scene images, and both texture and scene images are considered here to validate this unifying modeling property. The two class synthetic image is shown in Figure 4(a) and the corresponding ground truth image is shown in Figure 4(b). Two types of clique potential functions, namely the Weak Membrane model and the Reward-Punishment model, have been considered. Figures 4(c)-(j) show the results obtained by the MRF, COMRF and CCOMRF models (with the Weak Membrane and Reward-Punishment potentials) and by the JSEG and Kato's methods. As observed from these figures, the MRF model with the weak membrane clique potential function could not classify the texture. This result improved, but misclassification persisted, with the compound MRF model using the same weak membrane potential. As observed from Figure 4(f), the classification with the Reward-Punishment model is better than that with the Weak Membrane model. The two classes could be classified properly with the CCOMRF model and also by the JSEG and Kato's methods. Since this is an unsupervised algorithm, the model parameters and the image labels have been estimated alternately; one combined iteration consists of one step of parameter estimation and one step of image label estimation. The model parameters obtained from our unsupervised algorithm are given in Table 1. These optimal values of the model parameters have been used to obtain the MAP estimate of the image labels by the hybrid algorithm. Our hybrid algorithm [34] consists of a few iterations of Simulated Annealing (SA) followed by the Iterated Conditional Mode (ICM) algorithm.

Figure 3. Homotopy curve.

Figure 4. (a) Original “two texture” image (200 × 200); (b) ground truth; (c) weak membrane MRF model using Hybrid Algorithm; (d) weak membrane COMRF model using Hybrid Algorithm; (e) weak membrane CCOMRF model using Hybrid Algorithm; (f) reward punishment MRF model using Hybrid Algorithm; (g) reward punishment COMRF model using Hybrid Algorithm; (h) reward punishment CCOMRF model using Hybrid Algorithm; (i) segmentation result showing JSEG method; (j) segmentation result showing Kato method.

Table 1. Parameters for images of different classes.

Figure 5. (a) Original “five Texture” image (128 × 128); (b) ground truth; (c) weak membrane MRF model using Hybrid Algorithm; (d) weak membrane COMRF model using Hybrid Algorithm; (e) weak membrane CCOMRF model using Hybrid Algorithm; (f) reward punishment MRF model using Hybrid Algorithm; (g) reward punishment COMRF model using Hybrid Algorithm; (h) reward punishment CCOMRF model using Hybrid Algorithm; (i) segmentation result showing JSEG method; (j) segmentation result showing Kato method.

As observed from Figure 4, the CCOMRF based algorithm with both clique potential functions yielded accurate classification, and this is reflected in the misclassification error, which is very low. The “Percentage of Misclassification Error (PME)”, measured against the Ground Truth image, is defined as

$\mathrm{PME} = \frac{\text{number of misclassified pixels in all the classes}}{\text{total number of pixels of the image}} \times 100$.

These results are shown in Figure 4(e) and Figure 4(h). Kato's method also classified the two classes accurately, while the other models did not produce appropriate results. This is reflected in the table showing the misclassification error.
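The PME above is a one-line computation once the estimated and ground-truth label maps are available; this sketch assumes both are stored as integer arrays of the same shape.

import numpy as np

def pme(z_estimated, z_ground_truth):
    """Percentage of Misclassification Error against the ground truth label map."""
    return 100.0 * np.count_nonzero(z_estimated != z_ground_truth) / z_ground_truth.size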

The next example considered is a five class textured image shown in Figure 5. The results obtained are shown in Figure 5(c) and Figure 5(g). As observed from Table 2 and Table 3, the CCOMRF model produced five classes and the percentages of misclassification error for the Weak Membrane model and the Reward-Punishment model are 4.25 and 3.25 respectively. As seen from Table 3, the percentages of misclassification for the MRF and COMRF models were 11.58 and 8.11 respectively, which are much higher than those of the CCOMRF model. Among these three models, the CCOMRF model could model the textures most efficiently. In this particular example, the result obtained by the CCOMRF model is comparable to those of the JSEG and Kato's methods.

9.2. Real Images

The next image considered for the experiments is the Red-house image shown in Figure 6(a); this image has a textural background together with other scene objects. The unifying property of our proposed Constrained Compound model

Figure 6. (a) “Red house” image (256 × 256); (b) ground truth; (c) weak membrane MRF model using Hybrid Algorithm; (d) weak membrane COMRF model using Hybrid Algorithm; (e) weak membrane CCOMRF model using Hybrid Algorithm; (f) reward punishment MRF model using Hybrid Algorithm; (g) reward punishment COMRF model using Hybrid Algorithm; (h) reward punishment CCOMRF model using Hybrid Algorithm; (i) segmentation result showing JSEG method; (j) segmentation result showing Kato method.

Table 2. Comparison of results with JSEG method and Kato’s method.

Table 3. Percentage (%) of misclassification error for different images using the “reward-punishment model”.

has been validated, and the corresponding ground truth image is shown in Figure 6(b). The unsupervised algorithm has been applied to this image, and the model parameters of the proposed COMRF and CCOMRF models have been estimated together with the image labels. The model parameters are given in Table 1. Figures 6(c)-(j) show the results obtained by our unsupervised algorithm and also by the JSEG and Kato's methods. The results obtained by the CCOMRF model with the Weak Membrane and the Reward-Punishment clique potential models are shown in Figure 6(e) and Figure 6(h) respectively. In the case of Figure 6(e), it can be observed that the grass portion of the ground has been classified correctly while some portions of the roof of the house have been misclassified, leading to a misclassification error of 2.31%. In the case of the Reward-Punishment model, as observed from Figure 6(h), the roof portion has been classified properly while some portions of the grass background, which is textural, have been misclassified. However, the major portions of the grass background have been segmented properly; thus, the percentage of misclassification error is 2.5%. Figure 6(d) shows the result obtained by the COMRF model with the weak membrane clique potential; it can be observed that some portions of the grass have been misclassified and hence the misclassification error is 7.95 percent, and this misclassification increased with the MRF model alone, as shown in Figure 6(f). This indicates that the MRF model alone does not possess the potentiality of modeling textured as well as non-textured objects in the scene. The results obtained by the JSEG and Kato's methods are also presented in Figure 6; these are not properly segmented, and their percentages of misclassification error are also higher than those of the CCOMRF models.

The other two examples considered are shown in Figure 7 and Figure 8. Figure 7 is the boat image, where the water and the reflection of the boat in the water pose a problem. The corresponding ground truth is shown in Figure 7(b). The image has textured as well as non textured objects, as seen from Figure 7(a). The parameters estimated by the unsupervised algorithm are presented in Figures 9(d)-(f). These figures show the convergence of the parameters for 7

Figure 7. (a) “Water boat” image (348 × 522); (b) ground truth; (c) weak membrane MRF model using Hybrid Algorithm; (d) weak membrane COMRF model using Hybrid Algorithm; (e) weak membrane CCOMRF model using Hybrid Algorithm; (f) reward punishment MRF model using Hybrid Algorithm; (g) reward punishment COMRF model using Hybrid Algorithm; (h) reward punishment CCOMRF model using Hybrid Algorithm; (i) segmentation result showing JSEG method; (j) segmentation result showing Kato method.

Figure 8. (a) “Berkeley crow” image (481 × 321); (b) ground truth; (c) weak membrane MRF model using Hybrid Algorithm; (d) weak membrane COMRF model using Hybrid Algorithm; (e) weak membrane CCOMRF model using Hybrid Algorithm; (f) reward punishment MRF model using Hybrid Algorithm; (g) reward punishment COMRF model using Hybrid Algorithm; (h) reward punishment CCOMRF model using Hybrid Algorithm; (i) segmentation result showing JSEG method; (j) segmentation result showing Kato method.

combined iterations, while Figures 9(a)-(c) show the estimates of the parameters for one combined iteration with the homotopy continuation method. Figure 7(e) shows the result with the CCOMRF model and the Weak Membrane potential; it could segment the image with few misclassified pixels, with a PME of 2.67. The result shown in Figure 7(h) did not improve on this, with a PME of 3.18. In this case also, the CCOMRF model exhibited the potentiality of modeling both texture and scene. The proposed CCOMRF model's performance has been found to be the best among the models. The model parameters and the misclassification errors are presented in Tables 1-3 respectively. The fourth example considered was the crow image from the Berkeley database; this image has non-uniform lighting conditions and is shown in Figure 8(a). The corresponding ground truth, manually constructed, is shown in Figure 8(b). In this case also, the results obtained by the CCOMRF model, shown in Figure 8(e) and Figure 8(h), have been superior to those of the other models. This is also reflected in the percentage of misclassification error. The proposed unsupervised algorithm has been applied with all the proposed MRF models. The model parameters $\alpha$, $\beta$ and $\sigma$ as obtained by the homotopy continuation algorithm are shown in Figures 9(a)-(c) respectively. The x-axis parameter $\lambda$ denotes

Figure 9. (a) Estimation of alpha for “Five Texture” image; (b) estimation of beta for “Five Texture” image; (c) estimation of lambda for “Five Texture” image; (d) combined iteration of alpha for “Water-boat” image; (e) combined iteration of beta for “water-boat” image; (f) combined iteration of lambda for “water-boat” image.

the homotopy parameter $\lambda$, which varies from 0 to 1. The value at zero corresponds to the arbitrary starting point, and the value at $\lambda = 1$ corresponds to the solution of the unknown function and hence to the MCPL estimates. Similarly, Figure 9(b) and Figure 9(c) correspond to the MCPL estimates obtained at $\lambda = 1$. These values have been used to obtain the image label estimates by the hybrid algorithm.

Thus, from our simulation results, it is concluded that the proposed CCOMRF model with the reward punishment potential performed well compared to the others. It has also been demonstrated that this model possesses the unifying property of modeling texture as well as non textured objects in the scene.

10. Conclusion

In this work, an unsupervised color image segmentation algorithm is proposed with two new image models, namely the COMRF and CCOMRF models. Here, image segmentation is viewed as the problem of recovering a “true” image consisting of a few “homogeneous regions” from a noisy image by labeling individual pixels according to region type. The proposed CCOMRF model is found to have the unifying property of modeling scene and texture images. The proposed compound MRF model has the potentiality of modeling color with the notion of controlled correlation. The model parameters have been estimated by the proposed “Homotopy Continuation method”. It has been found that the CCOMRF model produced better results, visually and numerically, than the other models. Further, this model was found to possess the “unifying property” of modeling scenes as well as texture images. The only parameter that was selected on a trial and error basis was $\sigma$, the degradation parameter. Currently, attempts are being made to reformulate the problem to estimate $\sigma$ along with all the other associated model parameters.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Pal, N.R. and Pal, S.K. (1993) A Review on Image Segmentation Techniques. Pattern Recognition, 26, 1277-1294.
https://doi.org/10.1016/0031-3203(93)90135-J
[2] Cheng, H.D., Jiang, X.H., Sun, Y. and Wang, J. (2001) Color Image Segmentation: Advances and Prospects. Pattern Recognition, 34, 2259-2281.
https://doi.org/10.1016/S0031-3203(00)00149-7
[3] Geman, S. and Geman, D. (1984) Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images. IEEE Transaction on Pattern Analysis and Machine Intelligence, 6, 721-741.
https://doi.org/10.1109/TPAMI.1984.4767596
[4] Besag, J. (1986) On the Statistical Analysis of Dirty Pictures. Journal of the Royal Statistical Society. Series B (Methodological), 68, 259-302.
https://doi.org/10.1111/j.2517-6161.1986.tb01412.x
[5] Ohta, Y.I., Kanade, T. and Sakai, T. (1980) Color Information for Region Segmentation. Computer Graphics and Image Processing, 13, 222-241.
https://doi.org/10.1016/0146-664X(80)90047-7
[6] Tkalcic, M. and Tasic, J.F. (2003) Colour Spaces—Perceptual, Historical and Applicational Background. IEEE Region 8 Eurocon Conference, Computer as a Tool, Vol. 1, 304-308.
[7] Li, S.Z. (1995) Markov Random Field Modeling in Computer Vision. Springer, Berlin.
https://doi.org/10.1007/978-4-431-66933-3
[8] Kato, Z., Pong, T.C. and Lee, J.C.M. (2001) Color Image Segmentation and Parameter Estimation in a Markovian Framework. Pattern Recognition Letters, 22, 309-321.
https://doi.org/10.1016/S0167-8655(00)00106-9
[9] Kato, Z., Pong, T.C. and Qian, S.G. (2002) Multicue MRF Image Segmentation: Combining Texture and Color Features. IEEE Computer Society ICPR, Vol. 22, 309-321.
https://doi.org/10.1109/ICPR.2002.1044836
[10] Deng, Y. and Manjunath, B.S. (2001) Unsupervised Segmentation of Color-Texture Regions in Images and Video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 800-810.
https://doi.org/10.1109/34.946985
[11] Mukherjee, J. (2002) MRF Clustering for Segmentation of Color Images. Pattern Recognition Letters, 23, 917-929.
https://doi.org/10.1016/S0167-8655(02)00022-3
[12] Destrempes, F, Max, M. and Francois, A.J. (2005) A Stochastic Method for Bayesian Estimation of Hidden Markov Random Field Models with Application to a Color Model. IEEE Transactions on Image Processing, 14, 1096-1108.
https://doi.org/10.1109/TIP.2005.851710
[13] Kato, Z. and Pong, T.C. (2006) A Markov Random Field Image Segmentation Model for Color Textured Images. Image and Vision Computing, 24, 1103-1114.
https://doi.org/10.1016/j.imavis.2006.03.005
[14] Panjwani, D. and Healey, G. (1995) Markov Random Field Models for Unsupervised Segmentation of Textured Color Images. IEEE Transaction on Pattern Analysis and Machine Intelligence, 17, 939-954.
https://doi.org/10.1109/34.464559
[15] Lievin, M. and Luthon, F. (2004) Nonlinear Color Space and Spatiotemporal MRF for Hierarchical Segmentation of Face Features in Video. IEEE Transaction on Image Processing, 13, 63-71.
https://doi.org/10.1109/TIP.2003.818013
[16] Guo, L., Hou, Y.M. and Lun, X.M. (2008) An Unsupervised Color Image Segmentation Algorithm Based on Context Information. Pattern Recognition and Artificial Intelligence, 21, 82-87.
[17] Scarpa, G., Gaetano, R., Haindl, M. and Zerubia, J. (2009) Hierarchical Mulitiple Markov Chain Model for Unsupervised Texture Segmentation. IEEE Transactions on Image Processing, 18, 1830-1843.
https://doi.org/10.1109/TIP.2009.2020534
[18] Ladicky, L., Russell, C., Kohli, P. and Torr, P.H.S. (2014) Associative Hierarchical Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 1056-1077.
https://doi.org/10.1109/TPAMI.2013.165
[19] Islam, M., Yearwood, J. and Vamplew, P. (2008) Unsupervised Color Textured Image Segmentation Using Cluster Ensembles and MRF Model. Advances in Computer and Information Sciences and Engineering (Springer Science), Vol. 21, Springer, Berlin, 323-328.
https://doi.org/10.1007/978-1-4020-8741-7_59
[20] Karadag, O.Z. and Vural, F.T.Y. (2014) Image Segmentation by Fusion of Low Level and Domain Specific Information via Markov Random Field Models. Pattern Recognition Letters, 46, 75-82.
https://doi.org/10.1016/j.patrec.2014.05.010
[21] Chen, Y., Cremers, A.B. and Cao, Z. (2014) Interactive Color Image Segmentation via Iterative Evidential Labeling. Information Fusion, 20, 292-304.
https://doi.org/10.1016/j.inffus.2014.03.007
[22] Akbas, E. and Ahuja, N. (2014) Low-Level Hierarchical Multiscale Segmentation Statistics of Natural Images. IEEE Transaction on Pattern Analysis and Machine Intelligence (PAMI), 36, 1900-1906.
https://doi.org/10.1109/TPAMI.2014.2299809
[23] Mishra, A.K., Fieguth, P.W. and Clausi, D.A. (2011) Decoupled Active Contour (DAC) for Boundary Detection. IEEE Transaction on Pattern Analysis and Machine Intelligence (PAMI), 33, 917-930.
https://doi.org/10.1109/TPAMI.2010.83
[24] Vazquez, E., Baldrich, R., Weijer, J. and Vanrell, M. (2011) Describing Reflectances for Color Segmentation Robust to Shadows, Highlights and Textures. IEEE Transaction on Pattern Analysis and Machine Intelligence (PAMI), 33, 917-930.
https://doi.org/10.1109/TPAMI.2010.146
[25] Niessen, W.J., Vincken, K.L., Weickert, J., Romeny ter Haar, B.M. and Viergever, M.A. (1999) Multiscale Segmentation of Three-Dimensional MR Brain Images. International Journal of Computer Vision, 31, 185-202.
https://doi.org/10.1023/A:1008070000018
[26] Brox, T., Rousson, M., Deriche, R. and Weickert, J. (2010) Colour, Texture and Motion in Level Set Based Segmentation and Tracking. Image and Vision Computing, 28, 376-390.
https://doi.org/10.1016/j.imavis.2009.06.009
[27] Urcid, G., Valdiviezo, J.C. and Ritter, G.X. (2012) Lattice Algebra Approach to Color Image Segmentation. Journal of Mathematical Imaging and Vision, 42, 150-162.
https://doi.org/10.1007/s10851-011-0302-2
[28] Ilea, D.E. and Whelan, P.F. (2011) Image Segmentation Based on the Integration of Colour-Texture Descriptors—A Review. Pattern Recognition, 44, 2479-2501.
https://doi.org/10.1016/j.patcog.2011.03.005
[29] Peng, B., Zhang, L., Zhang, D. and Yang, J. (2011) Image Segmentation by Iterated Region Merging with Localized Graph Cuts. Pattern Recognition, 44, 2527-2538.
https://doi.org/10.1016/j.patcog.2011.03.024
[30] Tan, K.S. and Isa, N.A.M. (2011) Color Image Segmentation Using Histogram Thresholding-Fuzzy C-Means Hybrid Approach. Pattern Recognition, 44, 1-15.
https://doi.org/10.1016/j.patcog.2010.07.013
[31] Cheng, C., Koschan, A., Chen, C.H., Page, D.L. and Abidi, M.A. (2012) Outdoor Scene Image Segmentation Based on Background Recognition and Perceptual Organization. IEEE Transactions on Image Processing, 21, 1007-1019.
https://doi.org/10.1109/TIP.2011.2169268
[32] Boix, X., Gonfaus, J.M., Van de Weijer, J., Bagdanov, A.D., Serrat, J. and Gonzalez, J. (2012) Fusing Global and Local Scale for Semantic Image Segmentation. International Journal of Computer Vision, 96, 83-102.
https://doi.org/10.1007/s11263-011-0449-8
[33] Panda, S. and Nanda, P.K. (2008) Color Image Segmentation Using Constrained Compound Markov Random Field Model and Homotopy Continuation Method. Proceeding of the First International Conference on Distributed Frameworks and Applications, Penang, 21-22 October 2008, 151-158.
https://doi.org/10.1109/ICDFMA.2008.4784429
[34] Panda, S. and Nanda, P.K. (2009) Unsupervised Color Image Segmentation Using Compound Markov Random Field Model. In: Proceeding of the Third International Conference on Pattern Recognition and Machine Intelligence, Indian Institute of Technology (IIT), Delhi and IUPRAI in Collaboration with Machine Intelligence Unit, Indian Statistical Institute, Kolkata, 1-6.
[35] Panda, S. and Nanda, P.K. (2009) Constrained Compound Markov Random Field Model with Graduated Penalty Function for Color Image Segmentation. IEEE International Conference on Control, Robotics and Cybernetics (ICCRC-2011), Delhi, 19-20 March 2009, VI-126-VI-132.
[36] Wendell, R.E. and Horter, A.P. (1976) Minimization of a Non-Separable Objective Function Subject to Disjoint Constraints. Operations Research, 24, 643-657.
https://doi.org/10.1287/opre.24.4.643

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.