An Introduction to Information Sets with an Application to Iris Based Authentication


This paper presents the information set, which originates from a fuzzy set on applying the Hanman-Anirban entropy function to represent the uncertainty. Each element of the information set is called an information value, which is the product of an information source value and its membership function value. The Hanman filter, which modifies the information set, is derived using a filtering function. The adaptive Hanman-Anirban entropy is formulated and its properties are given. It paves the way for a higher form of information sets called Hanman transforms that evaluate the information source based on the information obtained on it. Based on the information set, six features are derived: the Effective Gaussian Information source value (EGI), Total Effective Gaussian Information (TEGI), Energy Feature (EF), Sigmoid Feature (SF), Hanman Transform (HT) and Hanman Filter (HF) features. The performance of the new features is evaluated on the CASIA-IRIS-V3-Lamp database using both the Inner Product Classifier (IPC) and the Support Vector Machine (SVM). To tackle the problem of partially occluded eyes, a majority voting method is applied on the iris strips; this enables better performance than that obtained when only a single iris strip is used.

Share and Cite:

Hanmandlu, M. , Bansal, M. and Vasikarla, S. (2020) An Introduction to Information Sets with an Application to Iris Based Authentication. Journal of Modern Physics, 11, 122-144. doi: 10.4236/jmp.2020.111008.

1. Introduction

Representing the uncertainty in fuzzy sets, conceptualized in the pioneering work of Zadeh [1], is the main theme of this work. The fuzziness of a fuzzy set is called uncertainty by another exponent of fuzzy sets, Yager [2], who introduced the concept of specificity as an important measure of uncertainty in a fuzzy set or possibility distribution. Since any crisp set is deemed to have zero fuzziness, finding the difference between the uncertainty and the specificity [3] of a fuzzy subset containing one and only one element is one way of measuring the uncertainty. Representing the uncertainty in fuzzy sets by entropy functions is another way.

Most of the entropy functions were defined in the probabilistic domain, as an entropy measure gives the degree of uncertainty associated with a probability distribution. The Shannon entropy function [4], defined in the probabilistic domain, has a logarithmic gain function which creates problems with zero probability; so it is replaced with an exponential gain in the Pal and Pal entropy function [5]. The Hanman-Anirban entropy function [6] contains a polynomial exponential gain with free parameters which enable it to become a membership function.


The motivation for this work stems from two reasons: 1) to expand the scope of information sets in [6] by defining an adaptive exponential gain function that empowers a membership function to act as an agent, and 2) to develop higher forms of information sets, such as the Hanman transform, which helps evaluate the information source values by way of higher-level uncertainty representation, and the Hanman filter, which helps modify the information.

In our previous work [7] we introduced the information set and also developed some features and the inner product classifier (IPC) for authentication based on the ear. In the present work we embark on extending information sets to represent higher forms of uncertainty, in addition to formulating a new classifier. The original information set features were derived from the non-normalized Hanman-Anirban entropy, which is not suitable for representing higher forms of uncertainty because of its constant parameters; hence this entropy needs to be made adaptive by assuming its parameters to be variables. The power of the resulting adaptive entropy is immense, as it can tackle both time-varying and spatially varying situations. Our main consideration here is to examine the applicability and suitability of information set based features for the distinct and unique iris textures.

The paper is organized as follows: Section 2 introduces the information set, and Section 3 describes the extraction of features based on this set. Segmentation of the iris and the use of the information set based features for iris authentication are discussed in Section 4. The Inner Product Classifier (IPC) is described along with the formulation of the Hanman transform classifier in Section 5. The results of applying IPC to the iris database using the proposed features are given in Section 6, followed by the conclusions in Section 7.

2. An Introduction to Information Sets

Assume a fuzzy set formed from a set of gray levels $\{I_{ij}\}$, termed the information source values, and the corresponding membership function values $\{\mu_{ij}\}$. Each pair $(I_{ij}, \mu_{ij})$ in the fuzzy set becomes a product in the information set on representing the uncertainty in the information source values using the Hanman-Anirban entropy function, as proved later.

Probability vs. Possibility: We consider here two types of uncertainty: probabilistic uncertainty which results from the probability distribution of the information source values (gray levels) and possibilistic uncertainty which results from their possibility distribution. The uncertainty in the probability distribution is defined by the Shannon entropy function [4] as

$H_{Sh} = -\sum_{i,j} p_{ij} \log p_{ij}$ (1)

where $\sum_{i,j} p_{ij} = 1$. Pal and Pal [5] have used the exponential gain function in place of the logarithmic gain function to define

$H_{PP} = \sum_{i,j} p_{ij}\, e^{1 - p_{ij}}$ (2)

These two entropy functions give a measure of the probabilistic uncertainty. If we replace $p_{ij}$ by the normalized $I_{ij}$ in the range [0, 1], the logarithmic gain function $\log I_{ij}$ from (1) and the exponential gain function $e^{1 - I_{ij}}$ from (2) cannot model the possibility distribution of $I_{ij}$ due to the lack of parameters in them. Unlike a probability distribution, a possibility distribution requires a membership function, which in turn needs parameters to model the distribution. The Hanman-Anirban entropy function, being an information-theoretic entropy function, contains parameters in its exponential gain function, which we can use to convert the gain function into a membership function. The non-normalized form of this function is defined as

$H = \sum_{i,j} p_{ij}\, e^{-(a p_{ij}^3 + b p_{ij}^2 + c p_{ij} + d)}$ (3)

Just as (1) and (2), (3) is also probability based, but it can represent the possibility distribution of $I_{ij}$ after substituting $I_{ij}$ for $p_{ij}$ in (3) and then choosing the parameters in the exponential gain function as statistical quantities. The well-known membership functions used to represent a possibility distribution are the exponential and Gaussian membership functions, given by

$\mu_{ij}^{e} = e^{-\frac{|I_{ij} - I_{ref}|}{f_h(ref)^2}}$ (4)

$\mu_{ij}^{g} = e^{-\left[\frac{|I_{ij} - I_{ref}|}{\sqrt{2}\, f_h(ref)}\right]^2}$ (5)

where $\mu_{ij}^{e}$ is the exponential membership function, $\mu_{ij}^{g}$ is the Gaussian membership function and $I_{ref}$ is taken as $I_{max}$. The fuzzifier $f_h(ref)^2$ [8], which gives the spread of the information source values with respect to the reference, is defined as

$f_h(ref)^2 = \frac{\sum_{i=1}^{W}\sum_{j=1}^{W} (I_{ref} - I_{ij})^4}{\sum_{i=1}^{W}\sum_{j=1}^{W} (I_{ref} - I_{ij})^2}$ (6)
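As an illustration, the fuzzifier of Eq. (6) and the membership functions of Eqs. (4) and (5) can be sketched as follows (a minimal NumPy sketch; the function names and the guard for flat windows are our own assumptions, not part of the original method):

```python
import numpy as np

def fuzzifier_sq(window, I_ref=None):
    # f_h(ref)^2 of Eq. (6): ratio of fourth- to second-order deviations
    # of the gray levels from the reference value (here I_max).
    if I_ref is None:
        I_ref = window.max()
    d = I_ref - window.astype(float)
    s2 = (d ** 2).sum()
    return (d ** 4).sum() / s2 if s2 > 0 else 1.0  # guard for flat windows

def membership(window, kind="gaussian"):
    # Exponential (Eq. 4) or Gaussian (Eq. 5) membership values with
    # I_ref = I_max, as used in the text.
    I_ref = window.max()
    fh2 = fuzzifier_sq(window, I_ref)
    d = np.abs(window.astype(float) - I_ref)
    if kind == "exponential":
        return np.exp(-d / fh2)
    return np.exp(-(d / (np.sqrt(2.0) * np.sqrt(fh2))) ** 2)
```

Both functions return 1 at the reference gray level and decay toward 0 as the deviation grows, which is the behavior the derivation below relies on.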

This gives more spread than possible with variance. We will now consider a triangular membership function given by

$\mu_{ij}^{tr} = \frac{|I_{avg} - I_{ij}|}{I_{max}}$ (7)

Note that $I_{ref} = I_{max}$ is the maximum of $I_{ij}$ in a window or sub-image. Assuming $(a I_{ij}^3 + b I_{ij}^2 + c I_{ij} + d) > 0$ and setting the parameters as $a = 0$, $b = 0$, $c = \frac{1}{f_h(ref)^2}$, $d = -\frac{I_{ref}}{f_h(ref)^2}$, Equation (3) takes the following form, with the gain function becoming exponential:

$H^{e} = \sum_{i,j} I_{ij}\, \mu_{ij}^{e}$ (8)

Similarly, with another choice of parameters, $a = 0$, $b = \frac{1}{2 f_h(ref)^2}$, $c = -\frac{2 I_{ref}}{2 f_h(ref)^2}$, $d = \frac{I_{ref}^2}{2 f_h(ref)^2}$, Equation (3) takes another form, with the gain function becoming Gaussian:

$H^{g} = \sum_{i,j} I_{ij}\, \mu_{ij}^{g}$ (9)

It may be noted that in the derivation of (8) and (9) the parameters are chosen to be statistical, computed from the statistics of the sub-images in windows. We avoid normalizing the information $H$ in all equations, but normalization is inevitable during feature generation for practical reasons. For generality of the membership function we drop the superscripts $e$ and $g$ in Equations (8) and (9) and represent the information set as

$H(I) = \{I_{ij}\, \mu_{ij}\} = \{H_{ij}(I)\}$ with $H_{ij}(I) = I_{ij}\, \mu_{ij}$; $I_{ij} \in [0, 1]$ (10)

We can also derive the entropy function using the triangular membership function. Assuming $a = 0$, $b = 0$, $c = \frac{1}{I_{max}}$, $d = -\frac{I_{avg}}{I_{max}}$, we have $H^{tr} = \sum_{i,j} I_{ij}\, e^{-\mu_{ij}^{tr}} \approx \sum_{i,j} I_{ij}\, \bar{\mu}_{ij}^{tr}$ since, to first order, $e^{-\mu_{ij}^{tr}} \approx 1 - \mu_{ij}^{tr} = \bar{\mu}_{ij}^{tr}$. The information set denoted by $H^{tr}(I) = \{I_{ij}\, \bar{\mu}_{ij}^{tr}\}$ contains the complement of the membership function. In the context of information sets, the role of the membership function is enlarged by terming it an agent, which can be its complement, square or an intuitionistic form. The agent can take care of both spatially varying and time-varying information source values.

Definition of Information Set: A set of information source values can be converted into an Information set by representing the uncertainty in their distribution. The basic information set consists of a set of information values with each value being the product of information source value (property/attribute) and its membership value (agent in the general case). It is denoted by

$H(I) = \{I_{ij}\, \mu_{ij}\}$ (11)

Note that the membership function not only represents the distribution of information source values but also acts as an agent that helps generate different information sets such as $\{I_{ij}\, \mu_{ij}^2\}$, $\{I_{ij}^2\, \mu_{ij}\}$, $\{I_{ij}\, \mu_{ij}^{1/2}\}$.
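The basic information set of Eq. (11) and its agent variants can be sketched as follows (a minimal NumPy sketch; the `agent` keyword and function name are our own illustrative choices):

```python
import numpy as np

def information_set(window, mu, agent="identity"):
    # Information values H_ij = I_ij * agent(mu_ij), Eq. (11).
    # The agent generalizes the membership function: the text mentions
    # {I mu^2}, {I^2 mu} and {I mu^(1/2)} as alternative sets.
    I = window.astype(float)
    if agent == "identity":
        return I * mu
    if agent == "complement":
        return I * (1.0 - mu)
    if agent == "square":
        return I * mu ** 2
    if agent == "sqrt":
        return I * np.sqrt(mu)
    raise ValueError(agent)
```

Each choice of agent yields a different information set from the same fuzzy set, which is the flexibility the text attributes to the agent concept.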

Derivation of Information Sets by the Mamta-Hanman Entropy Function: The 2D non-normalized form of this entropy function [9] is given by

$H_{MH} = \sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij}^{\gamma}\, e^{-(c\, p_{ij}^{\alpha} + d)^{\beta}}$ (12)

This entropy function allows us to change not only the exponential gain function but also the information source values, thereby facilitating the generation of different types of information sets very easily. It is easy to derive (9) by fixing $p_{ij} = I_{ij}$, $c = \frac{1}{f_h(ref)^2}$, $d = -\frac{I_{ref}}{f_h(ref)^2}$, $\gamma = 1$, $\alpha = 1$ and $\beta = 2$. The exponential gain function in (12) becomes $\mu_{ij}^{\beta} = e^{-\left\{\frac{I_{ij} - I_{ref}}{f_h(ref)^2}\right\}^{\beta}}$, leading to $H = \sum_{i,j} I_{ij}^{\gamma}\, \mu_{ij}^{\beta}$ and the corresponding information set $H(I) = \{I_{ij}^{\gamma}\, \mu_{ij}^{\beta}\}$. This form allows us to derive different information sets $\{I_{ij}^{\gamma}\, \mu_{ij}^{2}\}$ and $\{I_{ij}^{\gamma}\, \mu_{ij}^{1/2}\}$ with $\beta = 2$ and $\beta = \frac{1}{2}$ respectively by converting the exponential gain function $e^{-(c\, p_{ij}^{\alpha} + d)^{\beta}}$ into a membership function $\mu_{ij}$.

2.1. Hanman Transforms

These transforms are a higher form of information sets. Note that information sets are the result of determining the uncertainty in the information source values, whereas the transforms will be shown to be the result of determining the uncertainty in the information source values by the information gathered on them. The formulation of transforms is only possible if the parameters in the Hanman-Anirban entropy function are varying, though they are assumed to be constant in [6]. We now present the adaptive entropy function and its properties.

2.2. The Adaptive Hanman-Anirban Entropy Function

The non-normalized Hanman-Anirban entropy function with varying parameters is called the adaptive entropy function, which is relevant to spatially varying and time-varying information source values. To this end, we modify this entropy function by taking the two parameters $a$ and $b$ as zeros and the other two parameters $c$ and $d$ as variables. The resulting adaptive entropy function is therefore:

$H_T(I) = \sum_{i=1}^{n}\sum_{j=1}^{n} I_{ij}\, e^{-(c_{ij} I_{ij} + d_{ij})}$ (13)

where $0 \le I_{ij} \le 1$. We will now prove that (13) satisfies the properties of an entropy function when $c_{ij}$ and $d_{ij}$ are varying.

The Proof of Properties:

1) The exponential gain function, also called the information gain, $I(p_{ij}) = e^{-(c_{ij} p_{ij} + d_{ij})}$ is continuous (here $I(p_{ij})$ should not be confused with $I_{ij}$, which stands for the information source) for all $p_{ij} \in [0, 1]$ and $c_{ij}, d_{ij} \in [0, 1]$, so $p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}$ is continuous, being the product of two continuous functions, and hence $H$, being the sum of continuous functions, is also continuous.

2) $I(p_{ij})$ is bounded, since $e^{-(c_{ij} p_{ij} + d_{ij})} < 1$, which means that $p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})} < 1$. As each term $p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}$ is bounded, $H$ is also bounded.

3) With an increase in $p_{ij}$, $I(p_{ij})$ decreases, since $c_{ij} > 0$.

4) When $p_{ij} = \frac{1}{n}$, $H$ is an increasing function of $n$:

$H = \sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})} = \sum_{i=1}^{n}\sum_{j=1}^{n} \frac{1}{n}\, e^{-(c_{ij}/n + d_{ij})} = e^{-(c_{ij}/n + d_{ij})}$ (14)

$\frac{\partial H}{\partial n} = \frac{c_{ij}}{n^2}\, e^{-(c_{ij}/n + d_{ij})} > 0$ (15)

so $H$ is an increasing function of $n$.

5) $H = \sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}$ is a concave function, where $p_{ij} \in [0, 1]$ and $\sum_{i=1}^{n}\sum_{j=1}^{n} p_{ij} = 1$.

The function is concave if the Hessian matrix is negative definite.

$\frac{\partial H}{\partial p_{ij}} = e^{-(c_{ij} p_{ij} + d_{ij})} - c_{ij}\, p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}$ (16)

$\frac{\partial^2 H}{\partial p_{ij}^2} = -2 c_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})} + c_{ij}^2\, p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})} = c_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}\, (c_{ij} p_{ij} - 2)$ (17)

which is negative, as $c_{ij}$ and $p_{ij}$ are in the range [0, 1]. At $p_{ij} = \frac{1}{n}$,

$\frac{\partial^2 H}{\partial p_{ij}^2} = c_{ij}\, e^{-(c_{ij}/n + d_{ij})}\left(\frac{c_{ij}}{n} - 2\right) < 0$ (18)

The Hessian matrix $H_F$, whose diagonal entries are the second-order partial derivatives $\frac{\partial^2 H}{\partial p_{ij}^2}$, has the following form:

$H_F = \begin{bmatrix} \beta_1 & 0 & \cdots & 0 \\ 0 & \beta_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \beta_n \end{bmatrix}$ (19)

where $\beta_i = c_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}\, (c_{ij} p_{ij} - 2) < 0$; hence all the eigenvalues of this Hessian matrix are negative. Thus the Hessian is negative definite and the entropy $H$ is concave.

6) The entropy $H$ is maximum when all $p_{ij}$'s are equal. In other words,

$p_{ij} = \frac{1}{n}, \quad \forall\, i, j$ (20)

In that case, $\beta_i = c_{ij}\, e^{-(c_{ij}/n + d_{ij})}\left(\frac{c_{ij}}{n} - 2\right) < 0, \forall i$.

7) The entropy is minimum if and only if all $p_{ij}$'s are equal to 0 except a single $p_{ij} = 1$.

Significance of the Adaptive Hanman-Anirban Entropy Function: We have already seen the role of the information gain as an agent when the parameters are constant. We will now examine its usefulness in the context of varying parameters, i.e. $I(p_{ij}) = e^{-(c_{ij} p_{ij} + d_{ij})}$. Taking the derivative of $I(p_{ij})$ w.r.t. $c_{ij}$ we get $\frac{\partial I(p_{ij})}{\partial c_{ij}} = -p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}$. This means that the absolute derivative of the information gain with respect to the parameter, i.e. $\left|\frac{\partial I(p_{ij})}{\partial c_{ij}}\right|$, gives the information value. When the information gain changes as a result of a change in the parameter responsible for modifying the information source value, i.e. $p_{ij}$ in this case, it produces the information set $\{p_{ij}\, e^{-(c_{ij} p_{ij} + d_{ij})}\}$ after adjusting the sign, as the information value must be positive. A higher form of the information set results if the parameter is itself an agent. We will now derive the transforms based on this concept.

2.3. The Adaptive Hanman-Anirban Entropy Function as the Transform

Fixing $d_{ij} = 0$, $c_{ij} = \mu_{ij}/I_{max}$ in (13), the entropy function takes a new incarnation called the Hanman transform, which transforms the spatial-domain information source values into the information domain as:

$H_T(I) = \sum_{i,j} I_{ij}\, e^{-\mu_{ij} I_{ij}/I_{max}}$ (21)

Here, the exponential gain is made a function of the information value $\mu_{ij} I_{ij}$, which has already been shown to be a measure of the uncertainty. The new gain function, termed an agent, is a function of the information value.

Note that the information source value weighted by this new agent in the Hanman transform (21) gives a better representation of the uncertainty. The division of $\mu_{ij}$ by the maximum gray level in a window, $I_{max}$, is necessitated by the fact that this ratio serves as a better statistic than $\mu_{ij}$ alone in (21). Note that if the information source values are already normalized, no division is needed.

Proof: The zero-order transform can be obtained if we take $c_{ij} = 0$, $d_{ij} = \mu_{ij}$ in (13), leading to $H_T(\mu) = \sum_{i,j} I_{ij}\, e^{-\mu_{ij}} \approx \sum_{i,j} I_{ij}(1 - \mu_{ij})$. Similarly we can have $H_T(p) = \sum_{i,j} I_{ij}\, e^{-p_{ij}} \approx \sum_{i,j} I_{ij}(1 - p_{ij})$. Note that the deviations of the possibility distribution and the probability distribution from unity are what cause the uncertainty in the information source values. Here the agents are $e^{-\mu_{ij}}$ and $e^{-p_{ij}}$. In the case of the Laplace transform the agent is $e^{-st}$, where $s = 1/t$. If we choose $d_{ij} = i\, \mu_{ij} t$, then the agent is $e^{-i \mu_{ij} t}$ as in the Fourier transform, which is complex. On the other hand, $H_T(p) = \sum_{i,j} p_{ij}\, e^{-p_{ij}} \approx \sum_{i,j} p_{ij}(1 - p_{ij})$ is not a transform, as it is a function of $p_{ij}$ only. The first-order transforms are $\sum_{i,j} I_{ij}\, e^{-I_{ij} \mu_{ij}}$ and $\sum_{i,j} I_{ij}\, e^{-I_{ij} p_{ij}}$, where $I_{ij} \mu_{ij}$ and $I_{ij} p_{ij}$ represent the possibilistic and probabilistic information values respectively. In view of this discussion, the definition of a transform now follows.

Definition of Transform: The gain function in the adaptive entropy function can be a function of the probabilistic information (distribution) or possibilistic information (distribution) and it weights the information source values giving rise to the first order (zero order) transform.

The Relevance of Transforms to the Real Life Scenario: The information source values received by our senses are perceived by the mind as the information values; hence these are natural variables just as the fuzzy variables. That is, using the information values perceived by the agent on the information source values, the entropy improves its uncertainty representation.

The Relation between Information Sets and Hanman Transforms: The information sets are derived directly from the Hanman-Anirban entropy function and those derived from the adaptive Hanman-Anirban entropy function are higher form of information sets. The latter are useful for the representation of time varying and spatially varying information source values.

The Heterogeneous Transforms:

If the agent $e^{-\mu_{ij} I_{ij}/I_{max}}$ is instead built from another information source $I'_{ij}$, along with its membership function $\mu'_{ij}$ and reference parameter $I'_{max}$, then (21) becomes what is called the heterogeneous transform:

$\bar{H}_T(I) = \sum_{i,j} I_{ij}\, e^{-\mu'_{ij} I'_{ij}/I'_{max}}$ (22)

In this the agent from a different information source evaluates the information source of interest.

Algorithm for Hanman Transform Features

1) Compute the membership function value for each gray level in a window of size W × W. In our experimental study we have used $\mu_{ij}^{e}$ from Equation (4) for computing the membership value.

2) Obtain the normalized information value by dividing the information value by the maximum gray level in the window.

3) Multiply the normalized information value from Step 2 with the corresponding gray level in Equation (21).

4) Repeat Steps 2 and 3 in a window and sum all the products to get a feature value.

5) Form a feature vector by repeating Steps 1 - 4 on all windows of an iris strip.
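The five steps above can be sketched as follows (a minimal NumPy sketch; the non-overlapping window scan, the function name and the flat-window guard are our own assumptions):

```python
import numpy as np

def hanman_transform_features(strip, W=7):
    # Feature vector per Eq. (21): for each non-overlapping W x W window,
    # sum I_ij * exp(-(mu_ij * I_ij) / I_max).  The exponential membership
    # of Eq. (4) is used, following the experimental study in the text.
    H, Wd = strip.shape
    feats = []
    for r in range(0, H - W + 1, W):
        for c in range(0, Wd - W + 1, W):
            win = strip[r:r + W, c:c + W].astype(float)
            I_max = win.max() if win.max() > 0 else 1.0
            d = win.max() - win                      # deviation from I_ref
            s2 = (d ** 2).sum()
            fh2 = (d ** 4).sum() / s2 if s2 > 0 else 1.0   # Eq. (6)
            mu = np.exp(-d / fh2)                    # Eq. (4)
            feats.append((win * np.exp(-(mu * win) / I_max)).sum())
    return np.array(feats)
```

Running this on an iris strip yields one feature per window; different window sizes W give different feature vectors, as Step 5 suggests.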

2.4. Hanman Filter

The information sets derived from fuzzy sets may not always possess desirable characteristics. By modifying the information sets with certain functions or operators it is possible to get better features. Such modification of the information is required to meet certain objectives, such as better classification or a new interpretation.

Let us see how to modify the information $H_{ij} = I_{ij}\, \mu_{ij}$ at a pixel in a window. This is done by taking the membership function as a function of a parameter $s$. The modified $H_{ij}$ is defined as

$H_{ij}(s) = I_{ij}\, \mu_{ij}(s)$ (23)

The dependency of the membership function in (23) on $s$ is incorporated as

$\mu_{ij}(s) = e^{-\left|\frac{I_{ij} - I_{avg}}{s\, f_h(ref)^2}\right|}$ for $s \in \{0.4, 0.6, 0.8, 1\}$ (24)

In type-1 fuzzy sets the fuzzifier is constant, as in (24), whereas type-2 fuzzy sets result from varying $f_h(ref)^2$. Here the membership function depends on scale. We will modify $\{H_{ij}(s)\}$ by using an agent to provide new content through the Hanman-Anirban entropy function with the substitutions $p_{ij}(s) = H_{ij}(s)$, $c = 0$, $d_{ij} = -\log(\cos 2\pi F_{ij}(s, u))$, leading to

$H_F(I) = \sum_{i=1}^{n}\sum_{j=1}^{n} H_{ij}(s)\, \cos 2\pi F_{ij}(s, u)$ (25)

where the parametric frequency of the cosine function is defined as $F_{ij}(s, u) = \frac{F_{max}}{2^{u/2}}\left[\frac{I_{ij} - I_{avg}}{2^{s}}\right]$, with $u \in \{1, 2, 3\}$, $s \in \{0.4, 0.6, 0.8, 1\}$ and $F_{max} = 0.1$. We can write the r.h.s. of Equation (25) as $H_{ij}(s)\, \cos 2\pi F_{ij}(s, u) = I_{ij}\, \mu_{ij}(s)\, \cos 2\pi F_{ij}(s, u)$, which is a product of the information value and the cosine function. This filter differs from the Gabor filter, which convolves the image with the product of Gaussian and cosine functions; we place no such restriction on $\mu_{ij}(s)$ in (25). By using $F_{ij}$ we can create several information images having varied frequency components. These images are aggregated to get a composite image. Next, windows of varying size are used to partition this image and the values within a window are averaged to get a feature value.

Definition of First-Order Filter: If $\mu_{ij}(s)$ is a function of $I_{ij}$ as in (24), then (25) is termed the first-order Hanman filter.

Definition of Zero-Order Filter: If $\mu_{ij}(s)$ is not a function of $I_{ij}$ but only a constant, then (25) is termed the zero-order Hanman filter. Let us choose $d = \frac{i^2 + j^2}{s\, f_h(ref)^2}$ in the exponential gain function, keeping $a = b = c = 0$; this converts the zero-order Hanman filter into a Gabor-like form given by

$\mu_{ij}(s) = e^{-\left(\frac{i^2 + j^2}{s\, f_h(ref)^2}\right)}$ (26)

We can fix $s$ in (25) to any value. In the general case $s$ is fixed to the window size, i.e. $s = w$. Then we have

$\mu_{ij}(s) = e^{-\left(\frac{i^2 + j^2}{w^2\, f_h(ref)^2}\right)}$ (27)

An algorithm for the extraction of Hanman filter features is as follows: 1) Generate 12 information sets using a window of size W × W for W = 7 from an image for 3 values of u and four values of s, 2) Form the composite information set by aggregating all 12 sets, 3) Consider the average value in a window as the feature, 4) Repeat Steps 1 - 3 on all windows in an iris image to produce a feature vector, and 5) Generate different feature vectors corresponding to different values of W.
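The algorithm above can be sketched as follows (a minimal NumPy sketch: the fuzzifier is computed with respect to $I_{avg}$, and the exact form of $F_{ij}(s, u) = \frac{F_{max}}{2^{u/2}} \cdot \frac{I_{ij} - I_{avg}}{2^s}$ follows our reading of the text, so both are assumptions rather than the authors' exact implementation):

```python
import numpy as np

def hanman_filter_features(strip, W=7, F_max=0.1):
    # First-order Hanman filter features (Eq. 25): build 12 information
    # images for u in {1,2,3} and s in {0.4,0.6,0.8,1}, aggregate them
    # into a composite image, then average each W x W window.
    I = strip.astype(float)
    I_avg = I.mean()
    d = I - I_avg
    s2 = (d ** 2).sum()
    fh2 = (d ** 4).sum() / s2 if s2 > 0 else 1.0   # fuzzifier w.r.t. I_avg
    composite = np.zeros_like(I)
    for s in (0.4, 0.6, 0.8, 1.0):
        mu_s = np.exp(-np.abs(d / (s * fh2)))              # Eq. (24)
        for u in (1, 2, 3):
            F = (F_max / 2 ** (u / 2)) * (d / 2 ** s)      # F_ij(s, u)
            composite += I * mu_s * np.cos(2 * np.pi * F)  # Eq. (25)
    H, Wd = strip.shape
    feats = [composite[r:r + W, c:c + W].mean()
             for r in range(0, H - W + 1, W)
             for c in range(0, Wd - W + 1, W)]
    return np.array(feats)
```

Repeating the window averaging for several values of W gives the different feature vectors mentioned in Step 5.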

The Utility of the Hanman Filter: Its utility lies in the choice of a suitable type of function that can modify the information. Consider the example of charcoal, whose elements may be represented as $\{I_i\, \mu_i\}$, whereas the elements of burning charcoal may be represented as the product of the information value and a function of the temperature of the charcoal, i.e. $\{I_i\, \mu_i\, f(T^4)\}$.

The Difference between the Hanman Transform and the Hanman Filter: The function of Hanman transform is to evaluate the information source values by the gain function using the information already obtained on it while the function of Hanman filter is to modify the information using a suitable function. They lead to higher forms of the information sets because the gain functions used are functions of information values.

Hanman Filter Features

An Example: Let us consider a window of size 5 × 5 from an iris strip. The original gray levels are represented by $I_{ij}^{0}$, the normalized gray levels by $I_{ij}$, the probability distribution by $p_{ij}$ and the membership function values (Gaussian) by $\mu_{ij}$. Features of the first-order HF are extracted using Equation (25) and those of the zero-order HF are extracted using Equation (25) with the membership of Equation (26).

Two typical feature values for three values of frequency change (u) and two values of scale change (s) are shown in Table 1. A comparison of recognition rates for different feature types is shown in Table 2, in which the basic information values yield the highest recognition rate (3rd column) and the next highest recognition rate is given by a variant of the Hanman transform (5th column) that evaluates the information source values based on the membership function values instead of the information values, as in the Hanman transform (7th column).

2.5. Divergence

If two membership functions in the role of agents evaluate the same information source value, we get divergent information. Let $I_{ij}$ be the set of information source values and let $\mu_{ij}^{1}$ and $\mu_{ij}^{2}$ be two membership functions that look at $I_{ij}$ differently. Then the divergent information is expressed as

Table 1. Typical feature values of Hanman filter.

Table 2. Comparison of different features based on the results of authentication.

$H_D = \sum_{i,j} I_{ij}\left(\mu_{ij}^{1} - \mu_{ij}^{2}\right)$ (28)

The divergent evaluation simply follows from Hanman Transform as given by

$H_{DE} = \sum_{i,j} I_{ij}\, e^{-I_{ij}(\mu_{ij}^{1} - \mu_{ij}^{2})}$ (29)

We can use this measure in quantifying the quality of evaluation of any information source.

2.6. Random Information

By changing the membership function values randomly, one can distort the distribution pattern present in the information values. If $r$ is a random number, the basic information can be turned into random information by using:

$H_R = \sum_{i,j} r\, I_{ij}\, \mu_{ij}$ (30)

The corresponding random evaluation is expressed as,

$H_{RE} = \sum_{i,j} I_{ij}\, e^{-r I_{ij} \mu_{ij}}$ (31)

Assuming $r\, I_{ij}\, \mu_{ij} = I_{ij}\, \bar{\mu}_{ij}$ as the complementary information, $H_{RE}$ can be termed the twisted information. This leads to the twisted evaluation expressed as

$H_{TE} = \sum_{i,j} I_{ij}\, e^{-I_{ij} \bar{\mu}_{ij}}$ (32)

3. Derivation of Information Set Based Features

3.1. Effective Information Source Value

This feature directly emerges from the definition of the basic information set. The Effective Information source value from the kth window is computed from:

$\bar{I}_k = \frac{\sum_{i,j} \mu_{ij} I_{ij}}{\sum_{i,j} \mu_{ij}}$ (33)

Replacing μ i j with the Gaussian membership function μ i j g in (33) leads to what we term as Effective Gaussian Information (EGI):

$\bar{I}_g(k) = \frac{\sum_{i,j} \mu_{ij}^{g} I_{ij}}{\sum_{i,j} \mu_{ij}^{g}}$ (34)

3.2. Total Effective Gaussian Information (TEGI)

Just as above, this feature also comes directly from the basic information. TEGI is defined as the product of the Effective Gaussian Information $\bar{I}_g(k)$ and the effective Gaussian membership function value $\bar{\mu}_g(k)$, given by

$\bar{I}_T(k) = \bar{I}_g(k)\, \bar{\mu}_g(k)$ (35)

where $\bar{\mu}_g(k)$ is computed using:

$\bar{\mu}_g(k) = \frac{\sum_{i,j} \mu_{ij}^{g} I_{ij}}{\sum_{i,j} I_{ij}}$

We can also use $\mu_{ij}^{e}$ instead of $\mu_{ij}^{g}$, or any arbitrary function, but we have adopted only $\mu_{ij}^{g}$ in our study.
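The EGI and TEGI features of Eqs. (34) and (35) can be sketched per window as follows (a minimal NumPy sketch; the function name is our own):

```python
import numpy as np

def egi_tegi(window, mu_g):
    # Effective Gaussian Information (Eq. 34) and Total Effective
    # Gaussian Information (Eq. 35) for one window, given the Gaussian
    # membership values mu_g of its gray levels.
    I = window.astype(float)
    I_bar = (mu_g * I).sum() / mu_g.sum()      # Eq. (34): EGI
    mu_bar = (mu_g * I).sum() / I.sum()        # effective membership value
    return I_bar, I_bar * mu_bar               # (EGI, TEGI)
```

When the membership is uniform, EGI reduces to the plain window mean, which is a quick sanity check on the formula.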

3.3. Energy Features (EF)

From (12) we can write the gain function as $\left\{e^{-\left(\frac{I_{ij} - I_{avg}}{I_{max}}\right)}\right\}^2 = \{e^{-\mu_{ij}^{tr}}\}^2 \approx (\bar{\mu}_{ij}^{tr})^2$. Here we have converted the gain function into the triangular function. Hence the energy feature from the kth window, taking $\gamma = 1$, is written as:

$E_k = \frac{1}{m \times n} \sum_{i=1}^{m}\sum_{j=1}^{n} I_{ij}\, (\bar{\mu}_{ij}^{tr})^2$ (36)

It may be noted that the choice of an appropriate membership function is an important issue that is sidestepped here by opting for an experimentally proven function.

3.4. Sigmoid Features (SF)

Unlike the energy features, these features are the result of passing the information values $\{\mu_{ij}^{tr} I_{ij}\}$ through the sigmoid function; SF is expressed as

$S_k = \frac{1}{m \times n} \sum_{i=1}^{m}\sum_{j=1}^{n} \frac{I_{avg}}{1 + e^{-\mu_{ij}^{tr} I_{ij}}}$ (37)

where I a v g is the average gray level in the kth window.

To extract features, an iris strip is divided into windows of size 7 × 7 and the gray levels are normalized. The number of features equals the number of non-overlapping windows fitted into an iris strip. The classification of features is performed using the Inner Product Classifier (IPC) of [7].
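The energy and sigmoid features of Eqs. (36) and (37) can be sketched per window as follows (a minimal NumPy sketch; the function name and the flat-window guard are our own assumptions):

```python
import numpy as np

def energy_sigmoid_features(window):
    # Energy feature (Eq. 36) and sigmoid feature (Eq. 37) for one window,
    # using the triangular membership of Eq. (7) and its complement.
    I = window.astype(float)
    m, n = I.shape
    I_avg = I.mean()
    I_max = I.max() if I.max() > 0 else 1.0
    mu_tr = np.abs(I_avg - I) / I_max            # Eq. (7)
    mu_bar = 1.0 - mu_tr                         # complement of membership
    E = (I * mu_bar ** 2).sum() / (m * n)        # Eq. (36)
    S = (I_avg / (1.0 + np.exp(-mu_tr * I))).sum() / (m * n)   # Eq. (37)
    return E, S
```

Applying this to every non-overlapping 7 × 7 window of a normalized iris strip yields the EF and SF feature vectors described above.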

4. Formulation of Inner Product Classifier (IPC)

This classifier makes use of the error vectors between the training feature vectors of a user and a single test feature vector. As our objective is to obtain the error vector of least disorder, we generate all possible t-normed error vectors by applying a t-norm on two error vectors of a user at a time. As each t-normed error vector involves two training feature vectors, these are averaged to get the aggregated training feature vector. The inner product of each t-normed error vector and the corresponding aggregated training feature vector must be the least for a training sample to represent a user. The infimum of all the least inner products over all users gives the identity of a user. This is the concept behind the design of IPC.

Before presenting an algorithm, let us denote the number of users by $N_l$, the number of training samples per user by $N_i$ and the number of feature values by $N_r$. The features are normalized using:

$H_f = \frac{H_f - \min(H_f)}{\max(H_f) - \min(H_f)}$ (38)

where $H_f$ denotes an information set based feature such as the Effective Gaussian Information $\bar{I}_g(k)$ (EGI), Total Effective Gaussian Information $\bar{I}_T(k)$ (TEGI), energy feature $E_k$ (EF), sigmoid feature $S_k$ (SF), Hanman transform $H_T$ (HT) feature or Hanman filter $H_F$ (HF) feature. Note that $H_f$ stands for any one of the feature types $\{\bar{I}_g(k), \bar{I}_T(k), E_k, S_k, H_T, H_F\}$.

Algorithm for IPC

1) Compute the error vector $e_i^l(j)$ for user $l$ between the feature vector $H_{tr}^l(i,j)$ of the $i$th training sample of that user and the feature vector $H_{te}^l(j)$ of the unknown test sample, given by

$e_i^l(j) = \left| H_{tr}^l(i,j) - H_{te}^l(j) \right|$ (39)

where $i = 1, 2, \ldots, N_i$; $j = 1, 2, \ldots, N_r$; $l = 1, 2, \ldots, N_l$; here $i$ stands for the $i$th sample of the $l$th user, $N_i$ is the number of samples per user and $N_r$ is the number of feature values.

2) Compute the normed error vectors from all possible pairs $(i, k)$ of error vectors $(e_i^l(j), e_k^l(j))$ belonging to the $l$th user using the Frank t-norm as follows:

$C_{ik}(j,l) = t_F\left(e_i^l(j), e_k^l(j)\right)$ (40)

where $t_F$ is the Frank t-norm given by:

$t_F = \log_{\psi}\left[1 + \frac{\left(\psi^{e_i^l(j)} - 1\right)\left(\psi^{e_k^l(j)} - 1\right)}{\psi - 1}\right]$ for $\psi = 2$

As $i, k = 1, \ldots, N_i$, the number of pairs $(i, k)$ generated from (40) is $N_c = \sum_{i=2}^{N_i} (N_i - i + 1)$. Let $q = 1, \ldots, N_c$ be the index over the pairs.
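The Frank t-norm of Step 2 can be sketched as follows (a minimal NumPy sketch; `frank_tnorm` is our own name):

```python
import numpy as np

def frank_tnorm(x, y, psi=2.0):
    # Frank t-norm used in Eq. (40):
    # t_F = log_psi(1 + (psi^x - 1)(psi^y - 1) / (psi - 1))
    num = (psi ** x - 1.0) * (psi ** y - 1.0)
    return np.log(1.0 + num / (psi - 1.0)) / np.log(psi)
```

Like any t-norm it satisfies the boundary conditions $t_F(x, 1) = x$ and $t_F(x, 0) = 0$, and is never larger than the smaller argument, which is why it produces a low-disorder combination of two error vectors.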

3) Find the average feature value of ith and kth training samples from

$f_q(j,l) = f_{ik}(j,l) = \frac{H_{tr}^l(i,j) + H_{tr}^l(k,j)}{2}$ (41)

The above normed error vectors $C_q(j,l)$ act as support vectors and the average feature vectors $f_q(j,l)$ act as weights. The necessary and sufficient condition is that the inner product of $C_q(j,l)$ and $f_q(j,l)$ must be the least for the training sample to be matched with the test sample.

4) Evaluate the inner product from

$E_q(l) = \sum_{j=1}^{N_r} C_q(j,l)\, f_q(j,l); \quad i \ne k$ (42)

Then $h(l) = \min_q \{E_q(l)\}$ is the error measure associated with the $l$th user. While matching, whichever user yields the minimum of $h(l)$ over all $l$ provides the identity of the test user, i.e. that user owns the matched training sample.
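Steps 1) to 4) can be sketched end to end as follows (a minimal NumPy sketch; the dictionary-based interface and function names are our own assumptions, and the pair averaging follows the text's description of averaging the two training feature vectors in Step 3):

```python
import numpy as np
from itertools import combinations

def ipc_identify(train, test, psi=2.0):
    # Inner Product Classifier sketch. `train` maps a user label to an
    # array of shape (N_i, N_r) of normalized training feature vectors;
    # `test` is one feature vector of length N_r.  For each user, Frank-
    # t-normed error vectors (Eq. 40) are weighted by averaged training
    # vectors (Eq. 41); the least inner product (Eq. 42) is h(l), and the
    # user with the overall minimum h(l) is the claimed identity.
    def frank(x, y):
        return np.log(1 + (psi ** x - 1) * (psi ** y - 1) / (psi - 1)) / np.log(psi)
    scores = {}
    for user, F in train.items():
        E = np.abs(F - test)                       # Eq. (39)
        best = np.inf
        for i, k in combinations(range(len(F)), 2):
            C = frank(E[i], E[k])                  # Eq. (40)
            f = 0.5 * (F[i] + F[k])                # Eq. (41)
            best = min(best, float(C @ f))         # Eq. (42)
        scores[user] = best                        # h(l)
    return min(scores, key=scores.get)
```

A test vector close to a user's training samples makes all of that user's error vectors, and hence the t-normed inner products, small, so that user wins the final minimum.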

Extensions of IPC

Assume that the exponential membership function of $C_q(j,l)$ is $\mu_q(j,l) = e^{-C_q(j,l)}$ and that the corresponding information value is $C_q(j,l)\, \mu_q(j,l)$. Then replacing $f_q(j,l)$ in (42) with the exponential of this information value gives the Hanman transform classifier, expressed as

$E_q^T(l) = \sum_{j=1}^{N_r} C_q(j,l)\, e^{-C_q(j,l)\, \mu_q(j,l)}; \quad i \ne k$ (43)

Another extension is the weighted Hanman transform classifier, obtained by combining (42) and (43) as

$E_q^{wT}(l) = \sum_{j=1}^{N_r} f_q(j,l)\, C_q(j,l)\, e^{-C_q(j,l)\, \mu_q(j,l)}; \quad i \ne k$ (44)

5. Application to Iris Based Authentication

The above information set based features are now applied to iris textures to demonstrate their effectiveness in the authentication of users. Many approaches to iris recognition are in vogue in the literature, but they fail to yield good recognition rates on partially occluded irises. As texture is a regional concept, the proposed approach proceeds with the granularization of an image by varying the window size on the iris strip so as to get an appropriate texture representation. Moreover, the proposed information set based approach is capable of modifying the information on the texture to facilitate easy classification. No new approach to iris segmentation is attempted; we have used existing methods for segmentation. In this case study our emphasis is mainly on texture representation and classification using the information set based features.

5.1. A Brief Review of Iris as a Biometric

Iris has been a topic of interest for person authentication ever since the pioneering works of Daugman [10] and Wildes [11]. In iris recognition, the onus is on selecting the most suitable features that enable accurate classification. As iris is endowed with a specific texture, it can be used for investigating new texture representations and classifiers.

The Gabor filter has played a significant role in characterizing the iris texture by way of iris codes generated from the phase information; it is one of the best tools to characterize and classify textures [12]. The advantage of the Gabor filter is its ability to quantify the spatial-frequency content of texture. It may be noted that better recognition of irises can only stem from a better understanding of textures. Even nearly 20 years after the inception of iris technology, efforts are still ongoing to find better features and classifiers [13] [14].

5.2. Literature Survey

The original works of Daugman [10] and Wildes [11] are the harbingers of iris-based personal authentication. Daugman [10] [15] uses Gabor wavelet phase information whereas Wildes uses the Laplacian of Gaussian filter at multiple scales as features. Some important contributions on iris recognition are now discussed.

Segmentation of the iris texture region plays a pivotal role in iris recognition. Different approaches such as morphological operations [16] and thresholding using histogram curve analysis [17] are used for segmentation. Camus and Wildes [18] have presented a method that does not rely on edge detection by the Hough transform for segmentation. The method of Du et al. [19] determines the accuracy of iris recognition for a partial iris image. There are a host of problems, such as non-circular shapes of the iris and pupil and off-axis images, which have prompted special consideration [20] [21]. It has been shown that better iris segmentation helps improve the overall performance of iris recognition [22]. Many new methods of iris segmentation can be found in [23].

Gabor filter features are the most sought after so far as the texture is concerned [24]. Other feature extraction methods like Hilbert transform [25], Wavelet based filters [26] are also extensively used in the literature. About the classification algorithms, mention may be made of the correlation of phase information from windows [27], Support Vector Machine (SVM) [28] apart from simple Euclidean distance classifiers.

Practical implementation of iris-based biometrics requires faster and more efficient data storage; a possible solution using FPGAs is suggested in [29]. Irises can be spoofed from their iris codes, and counterfeiting measures to circumvent this are developed in [30]. Factors affecting the quality of iris images captured at visible wavelengths are investigated in [31]. Concerns regarding quality degradation due to compression techniques are dispelled in [32]. The quality of iris images and its effect on recognition rates are analysed with respect to the visible area of the iris texture region [33]. An attempt to enable iris recognition using directional wavelets is made in [34]. A new methodology for biometric recognition using the periocular region (the facial region close to the eye) rather than texture features from the visible iris under Near-Infrared (NIR) lighting is discussed in [35], whereas iris recognition using score-level fusion on video frames is presented in [36].

5.3. Segmentation of Iris and Generation of Strips

Segmentation forms a very important part of iris recognition, as is evident from its effect on performance improvement [22]. Though segmentation is not the main concern of this paper, we discuss the segmentation methodology briefly. Iris segmentation is done using the Hough-transform based approach of [37]: the Canny edge detector [38] is applied to obtain an edge map, and the Hough transform then detects the circular boundaries within it. For strip generation, polar-to-rectangular conversion is employed without recourse to interpolation. A sample image from the database and the corresponding iris strip are depicted in Figure 1. The iris strips are affected by the occlusion of the eyes due to eyelids and eyelashes, as evident from Figure 1(b). To rectify this problem, the iris strip is juxtaposed with itself, and the middle portion of the resulting strip is bereft of occlusion, as in Figure 2(b). These middle rectangular strips are enhanced and normalized before feature extraction.
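The polar-to-rectangular unwrapping without interpolation (i.e. nearest-pixel sampling) can be sketched as below. This is an illustrative sketch, not the code of [37]; the pupil centre and the two radii are assumed to come from the Hough-transform step, and the 48 × 270 strip size matches the one reported later in the paper:

```python
import numpy as np

def unwrap_iris(img, cx, cy, r_pupil, r_iris, height=48, width=270):
    """Nearest-pixel polar-to-rectangular conversion of the iris ring.

    img: 2-D grayscale eye image; (cx, cy): pupil centre from the
    Hough transform; r_pupil, r_iris: pupillary and limbic radii.
    Returns a height x width iris strip (rows = radius, cols = angle).
    """
    radii = np.linspace(r_pupil, r_iris, height)
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    # Sample the nearest pixel at each (radius, angle) grid point
    xs = np.rint(cx + radii[:, None] * np.cos(thetas)).astype(int)
    ys = np.rint(cy + radii[:, None] * np.sin(thetas)).astype(int)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    return img[ys, xs]
```

Rounding to the nearest pixel rather than interpolating keeps the conversion cheap, at the cost of slight aliasing along the angular axis.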

Figure 1. Sample image of iris and the rectangular strip that is generated from it.

Figure 2. Generation of iris strip devoid of occlusions and eyelids.

The database, CASIA-Iris-V3-Lamp [39], collected using a hand-held iris sensor, has eye images of 411 people with at least 10 images per user. The intra-class variation was introduced into the database by turning the lamps on or off during acquisition. The experiments were carried out on 4100 left-eye images of 411 people with a training-to-test sample ratio of 9:1 using k-fold validation. This database also contains some samples having rotation, translation, occlusion and illumination effects, as shown in Figure 3.

6. Results and Discussion

The extracted features from each iris strip are EGI, TEGI, SF, EF, HF and HT. The dimensions of all test strips are normalized before matching with the training strips.

6.1. The Features Used for Comparison

The performance of the above feature types is evaluated and compared with that of the conventional Gabor filter using SVM [40]. After numerous trials the parameters of the Gabor filter are set as follows: standard deviations σx = 3 and σy = 3; phase offset 0; aspect ratio 1; orientations θ = π/4, π/2, 3π/4 and π; wavelengths λ = 1, 2, 3.
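A Gabor filter bank with the stated parameters can be sketched as follows. This is a direct numpy construction of the real part of a Gabor kernel under a single isotropic σ (σx = σy = 3, as set above); the kernel size is an assumption, not taken from the paper:

```python
import numpy as np

def gabor_kernel(sigma=3.0, theta=np.pi / 4, lambd=2.0,
                 psi=0.0, gamma=1.0, ksize=15):
    """Real part of a Gabor kernel: Gaussian envelope times a cosine
    carrier. sigma: std. dev., theta: orientation, lambd: wavelength,
    psi: phase offset, gamma: aspect ratio (values as in Section 6.1)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier

# Bank over the four orientations and three wavelengths stated above
bank = [gabor_kernel(theta=t, lambd=l)
        for t in (np.pi / 4, np.pi / 2, 3 * np.pi / 4, np.pi)
        for l in (1.0, 2.0, 3.0)]
```

Convolving an iris strip with each of the 12 kernels and pooling the responses is the usual route to Gabor texture features.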

6.2. Performance Evaluation of the Proposed Features

As shown in Table 3, IPC and Linear SVM (SVML) show comparable results with the proposed features and Gabor features, but Polynomial SVM (SVMP) gives good results only with HT and HF. The accuracies are the mean values of the recognition rates under k-fold validation. IPC gives the best recognition rate of 98.1% with EF while SVML gives the best recognition rate of 99.2% with SF. The recognition rates with the Gabor filter are 90.3% and 97.3% using IPC and SVML respectively. As the Gabor features are very numerous (more than 10,000), all classifiers run about 10 times slower on them.

To tackle the problem of partially occluded eyes, we will apply the majority voting on the iris strips which enables better performance than that of the individual iris strips.

Figure 3. Example iris images in CASIA-Iris-Lamp.

Table 3. Features and their mean recognition rates with different classifiers after k fold validation.

6.3. Majority Voting

As noted in [41], certain regions of an iris strip, such as the middle region, possess the most discriminative texture. Significant texture regions are present in the iris at different radial distances from the pupillary boundary. This may be attributed to the fact that for some persons the iris texture is spread over the entire region between the pupillary and limbic boundaries [41], while for the majority of people the iris texture features lie closer to the pupillary boundary. Aggregating the results from iris strips of different sizes enhances the overall recognition rate. In a few cases, correct classification is obtained with the small-sized iris strips; hence the need for considering features from iris strips of different sizes.

Based on the above observation, the iris region between the pupillary and limbic boundaries is divided into strips of three partial sizes along with the full size. The number of features depends upon the window size chosen to partition an iris; in our study the window size is taken as 7. The feature vector lengths corresponding to iris strips of 1/4, 1/2, 3/4 and full size are 78, 156, 234 and 273 respectively. The original iris strip size is 48 × 270. The accuracy achieved with IPC on each strip size is given in the 3rd column of Table 4. The maximum recognition rate is obtained on the 3/4 size strip by all feature types. The features extracted closer to the pupillary boundary have lower detection accuracy than those closer to the middle of the iris region.

At the matching stage, each region of the test iris strip is matched with the corresponding regions of all the training strips, considering one of the six feature types at a time using IPC. To improve on the results of IPC on individual strips, the majority voting method is applied to the results of the four iris strips obtained using one feature type at a time. The identity of the test user is that of whichever training sample gets the maximum votes from the four strips of different sizes (acting like four classifiers) [42].
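The majority-voting step can be sketched as below; `decisions` holds the user identity returned by IPC on each of the four strip sizes, and the names are illustrative:

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse the per-strip identities (e.g. from strips of 1/4, 1/2,
    3/4 and full size) into one decision; on a tie, the identity
    voted for first wins."""
    identity, votes = Counter(decisions).most_common(1)[0]
    return identity

# e.g. three of the four strip sizes vote for user 17
print(majority_vote([17, 17, 42, 17]))   # -> 17
```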

Table 4. Majority voting results for different features with IPC.

As mentioned above, when the decisions from an individual feature type on strips of different sizes are combined using the majority voting method, the final decision is as shown in the penultimate column of Table 4. A further enhancement in the recognition rate is obtained when the results from all iris strips are combined using the classification accuracies of the individual feature types as weights, similar to ranks [43], using IPC. The combined recognition rate from all feature types on all four strips then attains 100%, as shown in the last column of Table 4. By applying majority voting to the matching results of the four iris strips of different sizes, the effect of occlusions can be minimized to a great extent.
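The accuracy-weighted fusion described above can be sketched as follows; the votes, weights and user count are illustrative, not values from the paper:

```python
import numpy as np

def weighted_vote(decisions, accuracies, n_users):
    """Each strip's vote is weighted by the classification accuracy of
    the feature type that produced it (cf. rank-level fusion [43])."""
    scores = np.zeros(n_users)
    for user, w in zip(decisions, accuracies):
        scores[user] += w
    return int(np.argmax(scores))

# A 2-2 split in plain voting is resolved by the higher-accuracy votes
print(weighted_vote([3, 3, 5, 5], [0.90, 0.92, 0.99, 0.98], n_users=10))  # -> 5
```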

A similar segmental approach to iris recognition is proposed in [44]. Instead of the accept option that we have used in the majority voting method, a reject option can also be used to flag a possible erroneous classification when the accept option fails to reach a consensus.

6.4. A Comparison with the Existing Methods

We have also compared the performance of our features (Table 4; the results correspond to the 3/4 size iris strip) with that of existing features such as PCA, ICA [45], Local Binary Patterns (LBP) [46], Gabor [24] and Log-Gabor [47] on the same database using k-fold validation, in Table 5. The highest performance (99.35%) is obtained with HF, EF, SF and HT using IPC, whereas the highest performance of 96.2% is obtained with ICA using SVML.

6.5. Verification Evaluation

At the verification level, IPC is compared with Euclidean distance classifier (EC) on the proposed features. The performance of IPC and EC is shown in terms of two separate ROCs on six features denoted by EF, HF, SF, EGI, TEGI, and HT also judged by the recognition rates.

The Euclidean distance based ROC plot in Figure 4(b) shows a maximum GAR of 93.3% at an FAR of 0.1% with the HF features. A maximum GAR of 99% at an FAR of 0.1% is achieved with HT by IPC in the ROC of Figure 4(a). The performance of IPC is better than that of EC, as shown in Figure 4(a) and Figure 4(b).
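Operating points such as GAR at a fixed FAR can be computed from lists of genuine and impostor match scores as sketched below. This is a generic sketch, not the authors' evaluation code; the scores are treated as distances, so acceptance means a score below the threshold:

```python
import numpy as np

def gar_at_far(genuine, impostor, far_target=0.001):
    """Pick a distance threshold whose false accept rate does not
    exceed far_target and return the genuine accept rate there."""
    genuine = np.sort(np.asarray(genuine))
    impostor = np.sort(np.asarray(impostor))
    # Threshold = the far_target-quantile of the impostor distances
    k = int(np.floor(far_target * len(impostor)))
    thresh = impostor[min(k, len(impostor) - 1)]
    gar = float(np.mean(genuine < thresh))
    return gar, thresh
```

Sweeping `far_target` over a range of values traces out the ROC curves of Figure 4.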

At the verification level the proposed features are also compared with the Gabor filter, as it is extensively used for iris. As shown in Figure 4(a), the proposed features perform better than the Gabor filter.

Table 5. Comparison of the existing features using SVML.

(a) (b)

Figure 4. ROC of average authentication by k-fold validation using different features with (a) IPC; (b) EC.

7. Conclusions

This paper moots the important concept of a transform to represent a higher form of uncertainty. This transform, derived from the adaptive Hanman-Anirban entropy function, is called the Hanman transform (HT). Such transforms have immense potential as they cater to both spatially varying and time-varying situations. As the information need not be in a desirable form, this paper shows how to modify the information sets using a filter function, resulting in the Hanman Filter (HF) of zero order and first order. In addition to these two feature types, we have formulated four more: Effective Gaussian Information source value (EGI), Total Effective Gaussian Information (TEGI), Energy Feature (EF) and Sigmoid Feature (SF). These features are extracted from the rectangular iris strip by partitioning it into windows of different sizes. The performance of IPC is similar to that of SVML, but consistent across all feature types. IPC gives the best results on EF whereas SVML gives the best results on SF. Of all the feature types, EF and HT have an edge over the others. Thus the new features and IPC are shown to be effective on the iris database.

The results of authentication using iris strips of four sizes show that the 3/4 size strips yield the best results on all feature types using IPC. An application of majority voting to the authentication results obtained with a single feature type on all four strips provides 99.8% accuracy, whereas the second-level majority voting with six feature types on all four strips achieves 100% accuracy.

This paper makes several contributions that include: 1) proof of properties of the adaptive Hanman-Anirban entropy, 2) extension of information sets to Hanman filter and Hanman transforms, 3) derivation of information set based features, viz., EGI, TEGI, EF and SF and validation of these features on iris based authentication, and 4) formulation of Hanman Transform classifier.

One ramification of this work is that we can generate a plethora of features from information sets for tackling different kinds of problems, though we have chosen iris to vindicate the effectiveness of our features.


Acknowledgements

This research work was funded by the Department of Science and Technology (DST), Government of India. We acknowledge the CASIA-IrisV3 database from the Chinese Academy of Sciences, Institute of Automation.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


References

[1] Zadeh, L.A. (1965) Information and Control, 8, 338-353.
[2] Yager, R.R. (2008) Fuzzy Sets and Systems, 159, 2193-2210.
[3] Yager, R.R. (1992) Fuzzy Sets and Systems, 50, 279-292.
[4] Shannon, C.E. (1948) Bell System Technical Journal, 27, 379-423.
[5] Pal, N.R. and Pal, S.K. (1992) Information Sciences, 66, 113-117.
[6] Hanmandlu, M. and Das, A. (2011) Defence Science Journal, 61, 415-430.
[7] Mamta and Hanmandlu, M. (2013) Expert Systems with Applications, 40, 6478-6490.
[8] Hanmandlu, M., Jha, D. and Sharma, R. (2003) Pattern Recognition Letters, 24, 81-87.
[9] Mamta and Hanmandlu, M. (2014) Expert Systems with Applications, 36, 269-286.
[10] Daugman, J. (1993) IEEE Transactions on Pattern Analysis and Machine Intelligence, 15, 1148-1161.
[11] Wildes, R.P. (1997) Proceedings of the IEEE, 85, 1348-1363.
[12] Daugman, J. (1998) IEEE Transactions on Acoustics Speech and Signal Processing, 36, 1169-1179.
[13] He, Z., Tan, T., Sun, Z. and Qiu, X. (2009) IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 1670-1684.
[14] Bowyer, K.W., Hollingsworth, K. and Flynn, P.J. (2008) Computer Vision and Image Understanding, 110, 281-307.
[15] Daugman, J. (1998) Journal of the Optical Society of America A, 2, 1160-1170.
[16] Bonney, B., Ive, R., Etter, D. and Du, Y. (2004) Iris Pattern Extraction Using Bit Planes and Standard Deviations. 38th Asilomar Conference on Signals, Systems, and Computers, Vol. 1, 582-586.
[17] Lili, P. and Mei, X. (2005) The Algorithm of Iris Image Processing. 4th IEEE Workshop on Automatic Identification Advanced Technologies, Vol. 1, 134-138.
[18] Camus, T.A. and Wildes, R.P. (2002) Reliable and Fast Eye Finding in Close-Up Images. 16th International Conference on Pattern Recognition, Quebec, Vol. 1, 389-394.
[19] Du, Y., Bonney, B., Ives, R., Etter, D. and Schultz, R. (2005) Analysis of Partial Iris Recognition Using a 1-D Approach. International Conference on Acoustics, Speech, Signal Processing, Philadelphia, Vol. 2, 961-964.
[20] Pillai, J.K., Patel, V.M., Chellappa, R. and Ratha, N.K. (2011) IEEE Transactions on Pattern Analysis and Machine Intelligence, 33, 1877-1893.
[21] Abhyankar, A., Hornak, L. and Schuckers, S. (2005) Off-Angle Iris Recognition Using Bi-Orthogonal Wavelet Network System. 4th IEEE Workshop Automatic Identification Advanced Technologies, Buffalo, 16-18 October 2005, 239-244.
[22] Vatsa, M., Singh, R. and Noore, A. (2008) IEEE Transactions on Systems, Man, and Cybernetic B, 38, 1021-1035.
[23] Daugman, J. (2007) IEEE Transactions on Systems, Man, and Cybernetics B, 37, 1167-1175.
[24] Ma, L., Wang, Y. and Tan, T. (2002) Iris Recognition Based on Multichannel Gabor Filtering. Proceedings of the 5th Asian Conference on Computer Vision, Vol. 1, 279-283.
[25] Tisse, C., Martin, L., Torres, L. and Robert, M. (2002) Person Identification Technique Using Human Iris Recognition. 15th International Conference on Vision Interface, Calgary, 27-29 May 2002, 294-299.
[26] Huang, H. and Hu, G. (2005) Iris Recognition Based on Adjustable Scale Wavelet Transform. 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Shanghai, 1-4 September 2005, 7533-7536.
[27] Miyazawa, K., Ito, K., Aoki, T., Kobayashi, K. and Nakajima, H. (2005) An Efficient Iris Recognition Algorithm Using Phase-Based Image Matching. International Conference on Image Processing, Genoa, Vol. 2, 49-52.
[28] Roy, K. and Bhattacharya, P. (2006) Iris Recognition with Support Vector Machines. Lecture Notes in Computer Science Vol. 3832, International Conference on Biometrics, Hong Kong, 5-7 January 2006, 486-492.
[29] Rakvic, R.N., Ulis, B.J., Broussard, R.P., Ives, R.W. and Steiner, N. (2009) IEEE Transactions on Information Forensics and Security, 4, 812-823.
[30] Venugopalan, S. and Savvides, M. (2011) IEEE Transactions on Information Forensics and Security, 6, 385-396.
[31] Proença, H. (2011) IEEE Transactions on Information Forensics and Security, 6, 82-95.
[32] Daugman, J. and Downing, C. (2008) IEEE Transactions on Information Forensics and Security, 3, 52-62.
[33] Belcher, C. and Du, Y. (2008) IEEE Transactions on Information Forensics and Security, 3, 572-578.
[34] Velisavljevic, V. (2009) IEEE Transactions on Information Forensics and Security, 4, 410-418.
[35] Park, U., Jillela, R.R., Ross, A. and Jain, A.K. (2011) IEEE Transactions on Information Forensics and Security, 6, 96-106.
[36] Hollingsworth, K., Peters, T., Bowyer, K.W. and Flynn, P.J. (2009) IEEE Transactions on Information Forensics and Security, 4, 837-849.
[37] Masek, L. and Kovesi, P. (2003) MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. The University of Western Australia, Crawley.
[38] Canny, J.F. (1986) IEEE Transactions on Pattern Analysis and Machine Intelligence, 8, 679-697.
[39] CASIA-IrisV3-Lamp Database, Chinese Academy of Sciences, Institute of Automation.
[40] Burges, C.J.C. (1998) Data Mining and Knowledge Discovery, 2, 121-167.
[41] Hollingsworth, P.K., Bowyer, K.W. and Flynn, P.J. (2009) IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 964-973.
[42] Narasimhamurthy, A. (2005) IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 1988-1995.
[43] Monwar, M.M. and Gavrilova, M.L. (2009) IEEE Transaction on system Man and Cybernetics B, 39, 867-878.
[44] Sayeed, F., Hanmandlu, M., Ansari, A.Q. and Vasikarla, S. (2011) Iris Recognition Using Segmental Euclidean Distances. 8th International Conference on Information Technology: New Generations, Las Vegas, 11-13 April 2011, 520-525.
[45] Wang, Y. and Han, J.-Q. (2005) Iris Recognition Using Independent Component Analysis. Proceedings of the 4th International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005, 18-21.
[46] Sun, Z., Tan, T. and Qiu, X. (2006) Graph Matching Iris Image Blocks with Local Binary Pattern. International Conference on Biometrics, Hong Kong, 5-7 January 2006, 366-372.
[47] Seif, A., Zewail, R., Saeb, M. and Hamdy, N. (2003) Iris Identification Based on Log-Gabor Filtering. IEEE 46th Midwest Symposium on Circuits and Systems, Vol. 1, 333-336.

Copyright © 2023 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.