Jensen Inequality of Bivariate Function in the G-Expectation Framework

Abstract

In the G-expectation framework, Wang [1] first obtained the Jensen inequality for one-dimensional functions. In this paper, under somewhat stronger conditions, we obtain the Jensen inequality for bivariate functions based on Wang's proof method, and we give some examples to illustrate its application.


1. Introduction

As is well known, expected utility theory has been widely used in mathematical finance, especially in measuring risk preference and risk aversion. However, because the classical mathematical expectation is linear, the von Neumann expected utility cannot accurately measure risk aversion. Economists therefore hoped to find a tool that retains certain properties of the classical expectation while measuring risk aversion accurately. Driven by this problem, Peng [2] introduced a nonlinear expectation, the g-expectation, via backward stochastic differential equations in 1997, and in 2006 Peng [3] [4] [5] introduced a new nonlinear expectation, the G-expectation, through the nonlinear heat equation and established a systematic theoretical framework. Both are nonlinear expectations, but compared with the g-expectation, the G-expectation does not have to be built on a given probability space and is more general. In the G-expectation framework, Peng obtained the central limit theorem [6] and the law of large numbers [7] under nonlinear expectations.

Since Peng's pioneering work, many scholars have devoted themselves to the study of related problems and obtained a wealth of results. Bai and Buckdahn [8] gave some applications of G-expectation in risk measurement and studied the problem of optimal risk transfer and the convolution formula under G-expectations. Gao [9] studied pathwise properties and homeomorphic flows for stochastic differential equations driven by G-Brownian motion. In 2009, Hu and Peng [10] gave the representation theorem of G-expectation, proved the existence of a weakly compact family of probability measures, and studied the paths of G-Brownian motion.

When the expected utility function describes risk aversion and risk preference, Jensen's inequality for mathematical expectations plays an important role. Because of its importance, many scholars have studied the Jensen inequality in different settings. In the g-expectation framework, Li [11] proved the Jensen inequality for g-expectation when the function $g$ is convex, concave, or piecewise. Jiang [12] gave sufficient and necessary conditions for the Jensen inequality for g-expectation. Moreover, Jiang [13] proved the Jensen inequality for bivariate functions when $g$ is a sublinear generator. Correspondingly, in the G-expectation framework, Wang [1] studied the Jensen inequality for one-dimensional functions under sufficient and necessary conditions and illustrated its significant application in G-martingale theory. However, the theorems in [1] do not carry over to bivariate functions under weaker conditions. Thus, in this paper, based on Wang's proof method and under some reasonable conditions, we obtain the Jensen inequality for bivariate functions in the G-expectation framework. Moreover, we use some examples to illustrate its application.

This paper is organized as follows. In Section 2, we present a brief review of the primary concepts in the G-framework, including the definition and some useful properties of G-expectation. Then we give the basic concept of G-Brownian motion and the computation of $\hat{\mathbb{E}}[|B_t|^n]$. In Section 3, we prove the G-Jensen inequality for bivariate functions under stronger conditions and give some examples of the Jensen inequality for bivariate functions.

2. Preliminaries and Notation

In this section, we give some basic theory of G-expectation and G-Brownian motion; more details can be found in [3] [4] [5]. Let $\Omega$ be a given set and let $\mathcal{H}$ be a vector lattice of real functions defined on $\Omega$ containing 1, such that $X_1, \dots, X_n \in \mathcal{H}$ implies $\varphi(X_1, \dots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l.\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l.\mathrm{Lip}}(\mathbb{R}^n)$ denotes the space of locally Lipschitz functions satisfying the condition:

$|\varphi(x) - \varphi(y)| \le C(1 + |x|^m + |y|^m)\,|x - y|, \quad x, y \in \mathbb{R}^n,$ (1)

for some constant $C > 0$ and integer $m \in \mathbb{N}$ depending on $\varphi$. For each $T \in [0, \infty)$, let

L i p ( F T ) : = { φ ( B t 1 , , B t n ) : 0 t 1 , , t n T , φ C l . L i p ( n ) , n }

where $(B_t)_{t \ge 0}$ is the canonical process. Let $L_{ip}(\mathcal{F}) := \bigcup_{n=1}^{\infty} L_{ip}(\mathcal{F}_n)$. For a given $p \ge 1$, we denote by $L_G^p(\mathcal{F})$ the completion of $L_{ip}(\mathcal{F})$ under the norm $\|X\|_p := (\hat{\mathbb{E}}[|X|^p])^{1/p}$. Then let $G(x)$ be the monotone and sublinear function

$G(x) = \frac{1}{2}\left( \overline{\sigma}^2 x^+ - \underline{\sigma}^2 x^- \right), \quad x \in \mathbb{R},$

where $x^+ = \max\{0, x\}$, $x^- = (-x)^+$, and $0 \le \underline{\sigma} \le \overline{\sigma}$.
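To make the sublinear function concrete, the following minimal Python sketch evaluates $G$; the volatility bounds $\underline{\sigma} = 0.5$ and $\overline{\sigma} = 1$ are illustrative assumptions, not values fixed by the paper.

```python
def G(a: float, sigma_low: float = 0.5, sigma_high: float = 1.0) -> float:
    """Sublinear function G(a) = (1/2)(sigma_high^2 * a^+ - sigma_low^2 * a^-)."""
    a_plus, a_minus = max(a, 0.0), max(-a, 0.0)
    return 0.5 * (sigma_high ** 2 * a_plus - sigma_low ** 2 * a_minus)

# G is monotone (a <= b implies G(a) <= G(b)) and sublinear.
print(G(1.0), G(-1.0))  # 0.5 -0.125 with the bounds above
```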

2.1. G-Expectation and Its Properties

Firstly, we introduce some notations about G-expectations.

Definition 1. [3] A nonlinear expectation $\hat{\mathbb{E}} : \mathcal{H} \to \mathbb{R}$ is a functional satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have

a) Monotonicity: If $X \ge Y$, then $\hat{\mathbb{E}}[X] \ge \hat{\mathbb{E}}[Y]$.

b) Preserving of constants: $\hat{\mathbb{E}}[c] = c$, $c \in \mathbb{R}$.

c) Sub-additivity: $\hat{\mathbb{E}}[X] - \hat{\mathbb{E}}[Y] \le \hat{\mathbb{E}}[X - Y]$.

d) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda \hat{\mathbb{E}}[X]$, $\lambda \ge 0$.

e) Constant translatability: $\hat{\mathbb{E}}[X + c] = \hat{\mathbb{E}}[X] + c$, $c \in \mathbb{R}$.

The triple $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a sublinear expectation space. If only c) and d) are satisfied, $\hat{\mathbb{E}}$ is called a sublinear functional.

Remark 1. If the inequality in c) is an equality, then $\hat{\mathbb{E}}$ is a linear expectation on $\mathcal{H}$. Moreover, the sublinear expectation $\hat{\mathbb{E}}$ can be represented as the upper expectation of a subset of linear expectations $\{E_\theta : \theta \in \Theta\}$, i.e., $\hat{\mathbb{E}}[X] = \sup_{\theta \in \Theta} E_\theta[X]$. In most cases, this subset is treated as an uncertain model of probabilities $\{P_\theta : \theta \in \Theta\}$, and the notion of sublinear expectation provides a robust way to measure a risk loss $X$.
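As a minimal illustration of this representation, the sketch below builds a toy upper expectation on a three-point sample space; the scenario probability vectors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy upper expectation: E_hat[X] = max over theta of E_theta[X].
scenarios = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.5],
])

def E_hat(X: np.ndarray) -> float:
    """Sublinear expectation as a supremum of linear expectations."""
    return float(np.max(scenarios @ X))

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 1.0, -1.0])

print(E_hat(X + Y) <= E_hat(X) + E_hat(Y))         # sub-additivity: True
print(np.isclose(E_hat(X + 2.0), E_hat(X) + 2.0))  # constant translatability: True
```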

The following simple properties are very useful in sublinear analysis.

Lemma 1. [3] 1) Let $X, Y \in \mathcal{H}$ be such that $\hat{\mathbb{E}}[Y] = -\hat{\mathbb{E}}[-Y]$. Then we have

$\hat{\mathbb{E}}[X + Y] = \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y].$

In particular, if $\hat{\mathbb{E}}[Y] = \hat{\mathbb{E}}[-Y] = 0$, then $\hat{\mathbb{E}}[X + Y] = \hat{\mathbb{E}}[X]$.

2) From property d) of the G-expectation, it is easy to deduce that

$\hat{\mathbb{E}}[\lambda X] = \lambda^+ \hat{\mathbb{E}}[X] + \lambda^- \hat{\mathbb{E}}[-X], \quad \lambda \in \mathbb{R}.$

3) For arbitrary $X, Y \in \mathcal{H}$, we have

$\left| \hat{\mathbb{E}}[X] - \hat{\mathbb{E}}[Y] \right| \le \hat{\mathbb{E}}[X - Y] \vee \hat{\mathbb{E}}[Y - X] \le \hat{\mathbb{E}}[|X - Y|].$

These properties of the G-expectation will be used frequently in this article, as they simplify our calculations.
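As a quick numerical check of Lemma 1(2), the following sketch reuses the toy scenario family from the previous sketch (all numbers illustrative) and verifies $\hat{\mathbb{E}}[\lambda X] = \lambda^+ \hat{\mathbb{E}}[X] + \lambda^- \hat{\mathbb{E}}[-X]$ for both signs of $\lambda$.

```python
import numpy as np

scenarios = np.array([[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]])

def E_hat(X):
    return float(np.max(scenarios @ X))

X = np.array([1.0, -2.0, 3.0])
for lam in (2.5, -2.5):
    lhs = E_hat(lam * X)
    rhs = max(lam, 0.0) * E_hat(X) + max(-lam, 0.0) * E_hat(-X)
    print(np.isclose(lhs, rhs))  # True for both a positive and a negative lambda
```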

Now let us introduce the notion of G-Brownian motion.

Definition 2. [5] A $d$-dimensional process $(B_t)_{t \ge 0}$ on a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a G-Brownian motion if the following properties are satisfied:

1) $B_0(\omega) = 0$;

2) For each $t, s \ge 0$, the increment $B_{t+s} - B_t$ and $B_s$ are identically distributed, and for arbitrary $n \in \mathbb{N}$ and $\{t_1, t_2, \dots, t_n\} \subset [0, t]$, $B_{t+s} - B_t$ is independent of $(B_{t_1}, B_{t_2}, \dots, B_{t_n})$.

Just as in the classical situation, the increment process $(B_{t+s} - B_s)_{t \ge 0}$ is independent of $\mathcal{F}_s$; in fact, it is again a G-Brownian motion since, just as classically, the increments of $B$ are identically distributed. We now introduce some computation formulas for the standard G-Brownian motion.

Lemma 2. [3] For each $n = 0, 1, 2, \dots$ and $0 \le s \le t$, we have

$\hat{\mathbb{E}}[B_t - B_s \mid \mathcal{F}_s] = 0, \qquad \hat{\mathbb{E}}[|B_t - B_s|^n \mid \mathcal{F}_s] = \hat{\mathbb{E}}[|B_t - B_s|^n] = \frac{1}{\sqrt{2\pi (t-s)}} \int_{-\infty}^{+\infty} |x|^n e^{-\frac{x^2}{2(t-s)}} \, dx.$

Exactly as in classical cases, we have

$\hat{\mathbb{E}}[(B_t - B_s)^2] = t - s, \qquad \hat{\mathbb{E}}[|B_t - B_s|^3] = \frac{2\sqrt{2}\,(t-s)^{3/2}}{\sqrt{\pi}},$
$\hat{\mathbb{E}}[(B_t - B_s)^4] = 3(t-s)^2, \qquad \hat{\mathbb{E}}[|B_t - B_s|^5] = \frac{8\sqrt{2}\,(t-s)^{5/2}}{\sqrt{\pi}}.$ (2)
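For a standard G-Brownian motion these are classical Gaussian absolute moments, so the closed forms in (2) can be checked by direct numerical integration of the formula in Lemma 2. A minimal sketch (the value of $t - s$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def abs_moment(n: int, v: float) -> float:
    """E[|X|^n] for X ~ N(0, v), by numerical integration as in Lemma 2."""
    integrand = lambda x: abs(x) ** n * np.exp(-x ** 2 / (2 * v))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value / np.sqrt(2 * np.pi * v)

v = 0.7  # plays the role of t - s
print(np.isclose(abs_moment(2, v), v))
print(np.isclose(abs_moment(3, v), 2 * np.sqrt(2) * v ** 1.5 / np.sqrt(np.pi)))
print(np.isclose(abs_moment(4, v), 3 * v ** 2))
print(np.isclose(abs_moment(5, v), 8 * np.sqrt(2) * v ** 2.5 / np.sqrt(np.pi)))
```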

2.2. Bivariate Convex Function

Definition 3. [14] Assume that the bivariate function $f(x, y)$ is defined on the region $D$. If for all points $(x + \Delta x, y + \Delta y), (x - \Delta x, y - \Delta y) \in D$ we have

$f(x, y) - \frac{1}{2}\left[ f(x + \Delta x, y + \Delta y) + f(x - \Delta x, y - \Delta y) \right] \le 0,$

then we call $f(x, y)$ a convex function on the region $D$.

Lemma 3. [14] Assume that the bivariate function $f(x, y)$ has continuous first-order partial derivatives on the convex region $D$. Then $f(x, y)$ is convex if and only if for all $(x_1, y_1), (x_2, y_2) \in D$,

$f(x_2, y_2) \ge f(x_1, y_1) + f_x(x_1, y_1)(x_2 - x_1) + f_y(x_1, y_1)(y_2 - y_1).$

Lemma 4. [14] Assume that the bivariate function $f(x, y)$ has second-order partial derivatives on the convex region $D$. Then $f(x, y)$ is convex if and only if the Hessian matrix
$\begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix}$
is positive semi-definite.
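As a small illustration of Lemma 4, the following sketch checks convexity via the Hessian; the function $h(x, y) = x^4 + e^{-y}$ is our illustrative choice (it is Example 1 below with $m = 2$).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Illustrative choice for this sketch: h(x, y) = x**4 + exp(-y).
h = x ** 4 + sp.exp(-y)

H = sp.hessian(h, (x, y))
print(H)  # Matrix([[12*x**2, 0], [0, exp(-y)]])

# H is diagonal with nonnegative entries for all (x, y), hence positive
# semi-definite everywhere, so h is convex on R^2 by Lemma 4.
```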

3. Demonstrations

Using Wang’s proof method, we can easily obtain the following theorem.

Theorem 1. Assume that the function $h(x, y) : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ has second-order partial derivatives and satisfies the inequality

$h(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y]) \le \hat{\mathbb{E}}[h(X, Y)],$ (3)

where $X, Y \in L_G^1(\mathcal{F})$ and $h(X, Y) \in L_G^1(\mathcal{F})$. Then $h(x, y)$ is a viscosity subsolution of the following equation:

$G\left( \frac{\partial^2 h}{\partial x^2}(x, y)\, b^2 + \frac{\partial^2 h}{\partial y^2}(x, y)\, d^2 + 2 \frac{\partial^2 h}{\partial x \partial y}(x, y)\, b d \right) = 0, \quad \forall (b, d) \in \mathbb{R}^2.$ (4)
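The following sketch illustrates why a convex $h$ is consistent with the subsolution property: its quadratic form inside $G$ is nonnegative, and $G$ is nonnegative on nonnegative arguments. Both the sample function $h(x, y) = x^4 + e^{-y}$ and the volatility bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sig_low, sig_high = 0.5, 1.0  # illustrative volatility bounds

def G(a: float) -> float:
    return 0.5 * (sig_high ** 2 * max(a, 0.0) - sig_low ** 2 * max(-a, 0.0))

def hess(x: float, y: float) -> np.ndarray:
    """Hessian of the sample convex function h(x, y) = x**4 + exp(-y)."""
    return np.array([[12.0 * x ** 2, 0.0], [0.0, np.exp(-y)]])

# For convex h, the quadratic form (b, d) Hess(h) (b, d)^T is nonnegative,
# so G of it is nonnegative, consistent with h being a subsolution of (4).
ok = True
for _ in range(1000):
    x, y, b, d = rng.uniform(-3.0, 3.0, size=4)
    q = float(np.array([b, d]) @ hess(x, y) @ np.array([b, d]))
    ok = ok and G(q) >= 0.0
print(ok)  # True
```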

Based on Theorem 1, we obtain the following Jensen inequality for bivariate functions.

Theorem 2. Assume that the function $h(x, y) : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ has second-order partial derivatives and is non-increasing with respect to at least one variable. Then the following two conditions are equivalent:

1) The function $h$ is convex;

2) The Jensen inequality based on the G-expectation holds:

$h(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y]) \le \hat{\mathbb{E}}[h(X, Y)],$ (5)

where $X, Y \in L_G^1(\mathcal{F})$ and $h(X, Y) \in L_G^1(\mathcal{F})$.

Proof: (i) $\Rightarrow$ (ii). Suppose for the moment that the convex function $h(x, y)$ is non-increasing with respect to the variable $y$. For each $(X, Y)$ and $(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y])$, Lemma 3 gives

$h(X, Y) \ge h(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y]) + h_x(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y])(X - \hat{\mathbb{E}}[X]) + h_y(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y])(Y - \hat{\mathbb{E}}[Y]).$

Let $l := h_x(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y])$ and $k := h_y(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y])$. Clearly $k \le 0$.

Then we have

$\hat{\mathbb{E}}[h(X, Y)] - h(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y]) \ge \hat{\mathbb{E}}\left[ l(X - \hat{\mathbb{E}}[X]) + k(Y - \hat{\mathbb{E}}[Y]) \right] = \hat{\mathbb{E}}\left[ l(X - \hat{\mathbb{E}}[X]) - (-k)(Y - \hat{\mathbb{E}}[Y]) \right] \ge \hat{\mathbb{E}}\left[ l(X - \hat{\mathbb{E}}[X]) \right] - \hat{\mathbb{E}}\left[ (-k)(Y - \hat{\mathbb{E}}[Y]) \right] = \hat{\mathbb{E}}\left[ l(X - \hat{\mathbb{E}}[X]) \right],$
where the last equality uses $-k \ge 0$ and $\hat{\mathbb{E}}[Y - \hat{\mathbb{E}}[Y]] = 0$.

Now we only need to consider $\hat{\mathbb{E}}[l(X - \hat{\mathbb{E}}[X])]$. By Lemma 1(2),

$\hat{\mathbb{E}}\left[ l(X - \hat{\mathbb{E}}[X]) \right] = l^+ \hat{\mathbb{E}}\left[ X - \hat{\mathbb{E}}[X] \right] + l^- \hat{\mathbb{E}}\left[ -(X - \hat{\mathbb{E}}[X]) \right].$

Since $l^+ \hat{\mathbb{E}}[X - \hat{\mathbb{E}}[X]] = 0$ and $l^- \hat{\mathbb{E}}[-(X - \hat{\mathbb{E}}[X])] \ge 0$, we conclude

$\hat{\mathbb{E}}[h(X, Y)] - h(\hat{\mathbb{E}}[X], \hat{\mathbb{E}}[Y]) \ge 0.$

(ii) $\Rightarrow$ (i). We argue by contradiction. Suppose that $h(x, y)$ is not convex. Then there exist constants $\alpha, \beta, p, q$ such that the inequality $\rho(x, y) \ge h(x, y)$ does not hold throughout the domain $D = \{(x, y) : \alpha \le x \le \beta, \ p \le y \le q\}$, where

$\rho(x, y) = \frac{h(\beta, p) - h(\alpha, p)}{\beta - \alpha}(x - \alpha) + \frac{h(\alpha, q) - h(\alpha, p)}{q - p}(y - p) + h(\alpha, p).$

Define a new function

$\rho_\delta(x, y) = \rho(x, y) - \delta\left[ (x - \alpha)(x - \beta) + (y - p)(y - q) - 2 \right], \quad (x, y) \in D.$

There exist a constant $\delta_0 > 0$ and a point $(x^*, y^*) \in D$ such that $h(x^*, y^*) > \rho_{\delta_0}(x^*, y^*)$. For this fixed $\delta_0$, assume that the maximum of the function $h(x, y) - \rho_{\delta_0}(x, y)$ over $D$ is achieved at a point $(\bar{x}, \bar{y})$, and let $l = h(\bar{x}, \bar{y}) - \rho_{\delta_0}(\bar{x}, \bar{y})$. We then consider the function

$h_{\delta_0}(x, y) = \rho_{\delta_0}(x, y) + l, \quad (x, y) \in D.$

Obviously, $h \le h_{\delta_0}$ on $D$, with equality at $(\bar{x}, \bar{y})$. According to Theorem 1, $h(x, y)$ is a viscosity subsolution of Equation (4), which yields, for all $(b, d) \in \mathbb{R}^2$,

$G\left( \frac{\partial^2 h_{\delta_0}}{\partial x^2}(\bar{x}, \bar{y})\, b^2 + \frac{\partial^2 h_{\delta_0}}{\partial y^2}(\bar{x}, \bar{y})\, d^2 + 2 \frac{\partial^2 h_{\delta_0}}{\partial x \partial y}(\bar{x}, \bar{y})\, b d \right) \ge 0.$ (6)

Since $h_{\delta_0}$ is a quadratic polynomial, Inequality (6) can be rewritten as

$G\left( -2\delta_0 (b^2 + d^2) \right) \ge 0.$

According to the definition of $G(x)$, this means $-\delta_0 (b^2 + d^2)\, \underline{\sigma}^2 \ge 0$, which contradicts $\delta_0 > 0$ when $\underline{\sigma} > 0$. Therefore, $h(x, y)$ is a convex function. $\square$

Remark 2. Similarly, for an $n$-variable function, if we suppose that the function has second-order partial derivatives and is non-increasing with respect to at least $n - 1$ of its variables, we can obtain the corresponding Jensen inequality in the G-expectation framework. We now give some examples to illustrate the application of the Jensen inequality for bivariate functions.

Example 1. Assume that $B_t$ and $Z_s$ are standard G-Brownian motions, so that $\hat{\mathbb{E}}[B_t] = \hat{\mathbb{E}}[Z_s] = 0$. Consider the function $h(x, y) = x^{2m} + e^{-y}$, $(x, y) \in \mathbb{R}^2$, $m \in \mathbb{N}$.

Obviously, the function $h(x, y)$ is convex and satisfies $h_y \le 0$ everywhere. According to Theorem 2, we obtain

$\hat{\mathbb{E}}\left[ B_t^{2m} + e^{-Z_s} \right] \ge \left( \hat{\mathbb{E}}[B_t] \right)^{2m} + e^{-\hat{\mathbb{E}}[Z_s]} = 0^{2m} + e^0 = 1.$

From this example, we can see that the Jensen inequality for bivariate functions can be used to prove inequalities or to estimate G-expectations. Based on this inequality, we can also use a bivariate expected utility function to define uncertain preferences.
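As a crude numerical sanity check of Example 1, one can bound the G-expectation from below by a finite family of constant-volatility scenarios. This sketch assumes (purely for illustration) that $B_t$ and $Z_s$ are sampled independently under each scenario with bounds $\underline{\sigma} = 0.5$, $\overline{\sigma} = 1$; since any scenario-wise mean is a lower bound for the upper expectation, exceeding $h(0, 0) = 1$ already confirms the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
m, t, s = 2, 1.0, 1.0
sig_low, sig_high = 0.5, 1.0  # illustrative volatility bounds

def h(x, y):
    return x ** (2 * m) + np.exp(-y)

# Under each constant-volatility scenario, sample B_t ~ N(0, sb^2 t) and
# Z_s ~ N(0, sz^2 s), assumed independent in this sketch.
g = rng.standard_normal((2, 200_000))
means = [
    np.mean(h(sb * np.sqrt(t) * g[0], sz * np.sqrt(s) * g[1]))
    for sb in np.linspace(sig_low, sig_high, 6)
    for sz in np.linspace(sig_low, sig_high, 6)
]

lower_bound = max(means)  # a lower bound for the upper expectation E_hat[h]
print(lower_bound >= h(0.0, 0.0))  # True: consistent with (5), whose RHS equals 1
```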

4. Conclusion

In this work, we suppose that the bivariate function has second-order partial derivatives and is non-increasing with respect to at least one variable. Under these conditions, we obtain the Jensen inequality for bivariate functions in the G-expectation framework, and we give some examples to illustrate its application. As discussed in Section 1, this work focuses on the Jensen inequality for bivariate functions under G-expectation. Our future efforts will focus on proving the Jensen inequality for multivariate functions and exploring the conditions under which it holds.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Wang, W. and Chen, Z. (n.d.) Jensen's Inequality for G-Expectation. Submitted to Comptes Rendus de l'Académie des Sciences, Paris, Series I.
[2] Peng, S. (1997) Backward Stochastic Differential Equations and Related g-Expectation. In: El Karoui, N. and Mazliak, L., Eds., Pitman Research Notes in Mathematics Series, 364, Longman, Harlow, 141-159.
[3] Peng, S. (2007) G-Expectation, G-Brownian Motion and Related Stochastic Calculus of Itô Type. Stochastic Analysis and Applications, Abel Symposia 2, Springer, 541-567. https://doi.org/10.1007/978-3-540-70847-6_25
[4] Peng, S. (2008) Multi-Dimensional G-Brownian Motion and Related Stochastic Calculus under G-Expectation. Stochastic Processes and their Applications, 118, 2223-2253. https://doi.org/10.1016/j.spa.2007.10.015
[5] Peng, S. (2010) Nonlinear Expectations and Stochastic Calculus under Uncertainty. arXiv:1002.4546.
[6] Peng, S. (2008) A New Central Limit Theorem under Sublinear Expectations. arXiv:0803.2656v1.
[7] Peng, S. (2007) Law of Large Numbers and Central Limit Theorem under Nonlinear Expectations. arXiv:math/0702358v1.
[8] Bai, X. and Buckdahn, R. (2009) Inf-Convolution of G-Expectations. arXiv:0910.5398v1.
[9] Gao, F. (2009) Pathwise Properties and Homeomorphic Flows for Stochastic Differential Equations Driven by G-Brownian Motion. Stochastic Processes and their Applications, 119, 3356-3382. https://doi.org/10.1016/j.spa.2009.05.010
[10] Hu, M. and Peng, S. (2009) On Representation Theorem of G-Expectations and Paths of G-Brownian Motion. Acta Mathematicae Applicatae Sinica, English Series, 25, 539-546. https://doi.org/10.1007/s10255-008-8831-1
[11] Li, B. (2000) Jensen Inequality of g-Expectation and Its Applications. Journal of Shandong University, 35, 413-417.
[12] Jiang, L. and Chen, Z. (2004) On Jensen's Inequality for g-Expectation. Chinese Annals of Mathematics, 25, 401-412. https://doi.org/10.1142/S0252959904000378
[13] Jiang, L. (2003) Jensen's Inequality of Bivariate Function for g-Expectation. Journal of Shandong University, 38, 13-17.
[14] Fang, K., Zhu, X. and Liu, H. (2008) The Discriminant Conditions of the Bivariate Convex Function. Pure Mathematics and Applied Mathematics, 24, 97-101.
