Entropy Formulation for Triply Nonlinear Degenerate Elliptic-Parabolic-Hyperbolic Equation with Zero-Flux Boundary Condition
1. Introduction
Let Ω be a bounded open set of
with a Lipschitz boundary ∂Ω and
the outward unit normal to ∂Ω. We consider the triply nonlinear degenerate parabolic-elliptic-hyperbolic problem with zero-flux boundary condition:
(P)
The particularity of this problem is its strong degeneracy. For practical reasons and physical considerations, we suppose that [0, 1] is the invariant domain of solutions of (P) and that there exist two particular values of the unknown u. We denote them by
and
(with
) such that
(resp.
) is strictly increasing only on
(resp.
) and has a flat region elsewhere (see Figure 1). Problem (P) is thus of mixed elliptic-parabolic-hyperbolic type with an absorption term, and it combines the difficulties of nonlinear conservation laws with those of nonlinear degenerate diffusion equations. For inspiration, we refer to Kruzkov [1] for the purely hyperbolic case (
) and to Carrillo [2] for the degenerate parabolic problem. We need a notion of solution strong enough to deal with both existence and uniqueness. Indeed, the notion of weak solution generally leads to non-uniqueness unless
is strictly increasing; it is thus necessary to adopt an entropy formulation. The notion of entropy solution we use is adapted from the founding paper [3] , in the case where
and
. Several authors have studied the type of degenerate equation we consider.
Some of these authors ( [3] [4] ) proved existence and uniqueness under the hypothesis that the convection flux f is Lipschitz continuous, requiring in addition that
(H)
This hypothesis is necessary to keep the solution in [0, 1] when the initial datum
belongs to [0, 1], in the sense that
and hypothesis (H) below holds. The main idea of this paper is to keep this hypothesis while requiring that the initial datum
belongs to [0, 1]. We suppose that the function b satisfies:
(1.1)
A simple choice is to take
and
. Our assumptions on b and
do not ensure that the structure condition
(S.C)
holds. The presence of the absorption term in the equation requires us to assume that it is increasing. Furthermore, we suppose:
(1.2)
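For the reader's convenience, the qualitative shape of the degenerate nonlinearities described above can be illustrated numerically. The following sketch is purely illustrative: the thresholds u_c = 0.3, u_s = 0.7 and the quadratic profile are assumptions for the example, not the functions of this paper.

```python
# Purely illustrative: a continuous nondecreasing nonlinearity that is
# strictly increasing only on [uc, us] and flat elsewhere on [0, 1].
# The thresholds uc, us and the quadratic profile are hypothetical.
uc, us = 0.3, 0.7

def phi(u):
    """Flat on [0, uc] and on [us, 1], strictly increasing on [uc, us]."""
    v = min(max(u, uc), us)  # clamp u to [uc, us]: phi is flat outside
    return (v - uc) ** 2     # any strictly increasing profile on [uc, us]

# flat regions
assert phi(0.0) == phi(0.2) == 0.0
assert phi(0.8) == phi(1.0) == phi(us)
# strict monotonicity on (uc, us)
grid = [uc + k * (us - uc) / 10 for k in range(11)]
vals = [phi(u) for u in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))
```

Any nonlinearity with this clamp-then-increase structure exhibits the degeneracy pictured in Figure 1.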
Triply nonlinear problems of the form (P) were first addressed by Ouaro and Touré (see [5] and the references therein) and by Ouaro [6] .
Figure 1. Convection and diffusion flow.
Well-posedness results are obtained in dimension
, under very general coercivity conditions; see also the works of Bénilan and Touré ( [7] and the references therein). Andreianov and Wittbold investigate in [8] the continuous dependence of solutions of a degenerate elliptic-parabolic equation without a structure condition relating b and f. They prove existence by a bi-monotonicity and penalization method, as in [9] . In [10] , Andreianov et al. obtain a general continuous dependence result on the data for our kind of triply nonlinear problem with the help of a structure condition. They show a similar result for the degenerate elliptic problem, which corresponds to the case of
and general non-decreasing surjective
. In our case the function
is bounded, continuous, and strictly increasing. If (S.C) fails, the convergence of approximate solutions to (P) is known for a particular monotone approximation method developed by Ammar and Wittbold [9] . This approach leads to an existence result that bypasses (S.C). Notice that some essential arguments of the uniqueness results in these works are specific to the case
. For the Neumann boundary condition, also called the zero-flux boundary condition, it is easy to prove uniqueness of solutions for which the boundary condition is satisfied in the sense of a strong boundary trace of the normal component of the flux
. Unfortunately, we are able to establish this additional solution regularity only for the stationary problem (S) associated with (P), and only in one space dimension.
The paper is divided into three parts. In Section 2, we generalize the notion of entropy solution of [3] , where
, and of [4] in the purely hyperbolic case. In Section 3, we prove first the existence and then the uniqueness of the entropy solution.
2. Formulation of Entropy Solution
2.1. Definition of Entropy Solution
We need the notion of weak solution for (P) with additional “entropy” conditions.
Definition 2.1. A measurable function u taking values in [0, 1] is called an entropy solution of the initial-boundary value problem (P) if it satisfies the following conditions:
and
, with
(2.1)
and
,
, with
,
(2.2)
Here
represents the
-dimensional Hausdorff measure and
is the duality pairing between
and
It is well known that the distributional derivative
of
can be identified with an element of the space
. More exactly, we have
(2.3)
for all
with
and
.
We obtain the notions of entropy sub-solution and entropy super-solution, respectively, if we replace (2.2) by one of the following inequalities:
(2.4)
(2.5)
Remark 2.2.
Obviously, a function u is an entropy solution if and only if u is entropy sub-solution and entropy super-solution simultaneously.
Our notion of entropy solution coincides with the definition of [3] in the case
,
; in that case, assumption (S.C) is trivially satisfied.
Notice that if u satisfies (2.2), then, using (1.1) and (H), u also satisfies (2.1).
Let us stress that, if (H) is satisfied, then in particular the zero-flux boundary condition
is verified literally in the weak sense (see for example [3] and [11] ). A forthcoming work will consider (P) when assumption (H) is dropped. We expect that the boundary condition will then have to be relaxed.
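The entropy inequalities above are built on Kruzkov-type entropy pairs. As a purely illustrative sketch (the smooth flux f(u) = u(1 − u) is a hypothetical choice, not the flux of the paper), the pair η(u, k) = |u − k|, q(u, k) = sign(u − k)(f(u) − f(k)) can be checked numerically for its two basic properties: symmetry in (u, k), and the compatibility relation ∂_u q = sign(u − k) f′(u) away from u = k.

```python
# Kruzkov entropy pair: eta(u, k) = |u - k|, q(u, k) = sign(u - k)(f(u) - f(k)).
# The flux f below is a hypothetical illustration, not the flux of the paper.
def f(u):
    return u * (1.0 - u)

def sign(x):
    return (x > 0) - (x < 0)

def eta(u, k):   # Kruzkov entropy
    return abs(u - k)

def q(u, k):     # associated entropy flux
    return sign(u - k) * (f(u) - f(k))

# symmetry of the entropy flux in (u, k)
assert q(0.2, 0.5) == q(0.5, 0.2)
assert eta(0.5, 0.5) == 0.0 and q(0.5, 0.5) == 0.0
# compatibility d/du q(u, k) = sign(u - k) f'(u) away from u = k
u, k, h = 0.8, 0.5, 1e-6
dq = (q(u + h, k) - q(u - h, k)) / (2 * h)
df = (f(u + h) - f(u - h)) / (2 * h)
assert abs(dq - sign(u - k) * df) < 1e-9
```

The compatibility relation is exactly what makes these pairs admissible entropy–entropy-flux pairs for a scalar conservation law.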
2.2. Dissipative Property
We propose here an essential property of entropy solutions, based on the idea of J. Carrillo [2] .
Proposition 2.3. Let
,
. Then for all
, for all
and for every entropy solution u of (P), we have
(2.6)
where
.
Proof. The proof of Proposition 2.3 rests on two ingredients. First, for all
and for all
, one has:
.
Secondly,
a.e. on the set
.
One takes as test function in (2.1)
with
and
is an approximation of the
function; one then uses the chain rule (see [12] ) and passes to the limit
. For more details see [3] . ▄
3. Existence and Uniqueness Result
3.1. Existence of Entropy Solution
The main result of this subsection is the following theorem:
Theorem 3.1. Assume that (1.1), (1.2) and (H) hold. Then there exists an entropy solution u of (P).
3.1.1. Bi-Monotonicity Approach
Since we are not in the case where (S.C) holds, we use the particular multi-step approximation approach of Ammar and Wittbold (see [9] ).
Theorem 3.2. Let
, be a sequence of data converging to
in the following sense:
(3.1)
(3.2)
(3.3)
There exists a weak solution
of
the analogue of (P) with corresponding data
.
Proof. We use the particular multi-step approximation approach of Ammar and Wittbold [9] . We replace b by
,
by
and
, by
. Hence
are Lipschitz continuous on
.
We obtain the following equation:
Take
, hence
is invertible; one puts the problem into the doubly nonlinear framework and obtains the following problem
where
;
and
. Using classical methods (cf. Andreianov and Gazibo [3] ), one shows that there exists a weak solution
for the corresponding problem
. ▄
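The invertibility gained in this regularization step can be illustrated numerically. The sketch below is purely illustrative: the degenerate b, the perturbation b_ε(u) = b(u) + εu, and the bisection inversion are assumptions for the example, not the full construction of [9].

```python
# Illustration of the regularization idea: a degenerate nondecreasing b
# (flat on part of [0, 1]) is replaced by b_eps(u) = b(u) + eps*u, which is
# strictly increasing, hence invertible; we invert it by bisection.
def b(u):
    return max(0.0, u - 0.5)    # hypothetical b, flat on [0, 0.5]

def b_eps(u, eps=1e-2):
    return b(u) + eps * u       # strictly increasing perturbation

def b_eps_inv(y, eps=1e-2, lo=0.0, hi=1.0, tol=1e-12):
    """Invert b_eps on [lo, hi] by bisection (continuous and increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if b_eps(mid, eps) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u = 0.73
assert abs(b_eps_inv(b_eps(u)) - u) < 1e-9   # b_eps is indeed invertible
```

In the flat region of b the inverse is driven entirely by the εu term, which is exactly why the regularized problem fits the doubly nonlinear framework.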
Theorem 3.3. Let
be the weak solution of
the analogue of (P) with corresponding data
. Then
is also an entropy solution of
in the sense of Definition 2.1 and converges to u, an entropy solution of (P), in
weakly-star, up to a subsequence. Furthermore:
(3.4)
(3.5)
(3.6)
3.1.2. A priori Estimates
Lemma 3.4. Let
, be a sequence of data satisfying the assumptions of Theorem 3.2. Assume that the corresponding data
verifies (1.1), (1.2), (H). Let
be an entropy solution of
. Then there exist
such that:
(3.7)
(3.8)
(3.9)
(3.10)
Proof. Since
is an entropy solution, it is also an entropy sub-solution and an entropy super-solution of
. Take
in (2.5) and
and use (H) and (1.2); then
(3.11)
Let us introduce the function
By (1.1), we have
. In the same way,
satisfies (2.4); take now
, we prove that
i.e.
(3.12)
We use the test function
in the weak formulation of
. The duality product between
and
is treated via the standard chain rule argument
(3.13)
Take
. Since
, we obtain the inequality
with some
independent of n. Note that the functions
are locally uniformly bounded because they are monotone and converge pointwise to
respectively.
Therefore the right-hand side of the above inequality is bounded uniformly in
, thanks to (3.12) and the uniform bounds on the data
in
. The uniform estimate of the left-hand side follows. We then estimate
in
by the Poincaré inequality and
follows. ▄
Lemma 3.5. Let
be the weak solution of
the analogue of (P) with corresponding data
. For
, we have
(3.14)
Proof. Let
. Multiplying the first equation of
by
and integrating in
we get:
Take now
and integrate in t. By the Fubini theorem and the estimates of Lemma 3.4, there appears a factor
on the right-hand side, and we get
(3.15)
with
. Here C is a constant independent of
. Take
which is a continuous function, and let
be the modulus of continuity of
on
and
be its inverse and set
and let w be the inverse of W; notice that
. Denote by
and
. Then
Since
we have:
Therefore (3.15) implies
(3.16)
Since the left-hand side of (3.16) tends to zero as
, we deduce (3.14). ▄
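The modulus of continuity appearing in this proof can be computed numerically for a given function. The sketch below uses a hypothetical g(u) = u² on [0, 1] and a uniform grid (both illustrative assumptions) to approximate ω(r) = sup{ |g(x) − g(y)| : |x − y| ≤ r } and check its basic properties.

```python
# Numerical sketch of a modulus of continuity on [0, 1]:
# omega(r) = sup{ |g(x) - g(y)| : |x - y| <= r }, approximated on a grid.
# The function g and the grid size are illustrative assumptions.
def g(u):
    return u ** 2               # any continuous function on [0, 1]

N = 200
grid = [i / N for i in range(N + 1)]

def omega(r):
    return max(abs(g(x) - g(y))
               for x in grid for y in grid if abs(x - y) <= r + 1e-15)

# omega is nondecreasing with omega(0) = 0, as a modulus of continuity must be
assert omega(0.0) == 0.0
assert omega(0.1) <= omega(0.2) <= omega(1.0)
# for g(u) = u^2 on [0, 1] one expects omega(r) = r(2 - r); spot-check:
assert abs(omega(0.2) - 0.2 * (2 - 0.2)) < 1e-2
```

Inverting such a nondecreasing ω (as the proof does with W and w) is well defined precisely because ω is nondecreasing and vanishes at 0.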
The proof of Theorem 3.1 is a direct consequence of Theorem 3.2 and Theorem 3.3.
Having established Theorem 3.2, let us now prove Theorem 3.3.
Proof of Theorem 3.3. There exists a function
constructed by means of the nonlinear semigroup theory (see, e.g., [3] [13] ), such that
is the unique integral solution to the abstract evolution problem associated with
(here and below, we refer to Andreianov and Gazibo [3] , Ammar and Wittbold [9] , Ammar and Redwane [14] for details). One then shows that
coincides with the unique entropy solution of
, the existence of this entropy solution being already shown. Further, the whole set
verifies the uniform a priori estimates of Lemmas 3.4 and 3.5. We then pass to the limit in
in the following order: first
, then
.
While letting
, we use the fact that
is Lipschitz continuous. The fundamental estimates for the semigroup solutions permit us to show that
are uniformly continuous on
with values in
; thus we get the strong precompactness of
. Thus, up to a subsequence,
converge to
which is an entropy solution of problem
corresponding to the data
. Finally, we use the inequalities
which follow readily from the comparison principle (this comes from the uniqueness of
). The monotonicity argument yields the strong convergence of
. Passing to the limit in
we conclude that the limit u is an entropy solution of the original problem (P) (one can use Lemmas 3.6 and 3.7 of [3] ). ▄
3.2. Uniqueness of Entropy Solution in One Space Dimension
3.2.1. Stationary Problem
Let us stress that to our knowledge the problem of uniqueness is still open in multiple space dimensions. The definition of strong traces of the solution with respect to the lateral boundary of the domain Ω is possible if for example the diffusion term
is such that
is continuous up to the boundary
. If there existed “sufficiently many” solutions (in the sense of [3] , [4] ; see Definition 3.9 below) having this regularity, uniqueness would follow. Unfortunately, for the moment we can obtain this regularity only for the stationary problem associated with (P), and only in one space dimension.
Now, we consider the stationary problem associated to (P)
(S)
where
.
Remark 3.6.
1) If we suppose that
is bijective, then, performing a change of unknown, one puts the problem into the doubly nonlinear framework in the form
. Existence and uniqueness follow (see [3] ).
2) If
is independent of t and u is a solution of (S), then u is also a solution of (P) with source term
. Then we can deduce from Definition 2.1 and Proposition 2.3 their equivalent forms for the stationary problem.
Definition 3.7. A measurable function u taking values in [0, 1] is an entropy solution of (S) if u is a weak solution of (S) and
and for all
,
,
(3.17)
Proposition 3.8. Let
; then for all
, for all
, and for every entropy solution u of (S), we have:
(3.18)
In the next subsection, we give a definition of the so-called “trace-regular entropy solution”.
3.2.2. Trace Regular Entropy Solution
Definition 3.9. An entropy solution is called a trace-regular solution of (P) if the normal component of the total flux
, has
strong trace
at the boundary of the Lipschitz domain, i.e., for
(3.19)
The difficulty is that the regularized zero-flux boundary condition does not permit control over the tangential derivatives (with respect to
) of the solution. Thus, boundary flux traces of the solution seem hard to obtain, and we need the concepts of domains with Lipschitz deformable boundaries and of traces (see [15] , [16] for more details).
Remark 3.10. Notice that if the normal component of the flux
is a continuous function, then it satisfies (3.19).
From now on, we will suppose that
is a bounded interval of
. We have the following property:
Proposition 3.11. For all
, the problem (S) admits a solution u such that
is continuous up to the boundary, i.e.,
.
Moreover,
is zero at
and
.
Since
is bijective, the proof of Proposition 3.11 is identical to the proof of Proposition 4.8 of [3] .
The main result of this section is the following theorem:
Theorem 3.12. Suppose that
is a bounded interval of
. Then (P) admits a unique
such that u is an entropy solution of (P).
3.2.3. Abstract Evolution Problem
We present now the problem (P) under the abstract form of an evolution equation governed by an accretive operator, in order to apply classical results of the nonlinear semigroup theory (see, e.g., [17] ).
Let us define the (possibly multivalued) operator
by its resolvent:
Consider the abstract equation:
(E)
For an operator
, denote by
its range, by
its domain and by
,
their closures in
, respectively.
Let us stress that for
,
due to Proposition 3.11.
Recall (cf. [17] ) that an operator A is accretive if
for all
, where for
the bracket
is defined by
.
If A is accretive and
for some
, then A is m-accretive.
Proposition 3.13. Let
,
. Then for
(3.20)
Proof (sketch). The proof of Proposition 3.13 is actually contained in the proof of Theorem 3.17 below, due to Remark 2.2. In fact, a simpler argument applies, because both
and
have strong trace in the context of the stationary problem (S). ▄
Somewhat abusively, we will write
for the set of all measurable functions from [a, b] to [0, 1].
Proposition 3.14. The following properties hold true.
1)
is accretive in
.
2) For all
sufficiently small,
contains
.
3)
.
Proof.
1) Let
,
. Applying Proposition 3.13 with
in (3.20) and the standard properties of the bracket (see [17] ), we get
We deduce that
, so that
is accretive.
2) For
, consider the problem
Notice that the notion of solution for
is analogous to Definition 3.7. Let
and
then, there exists
, an entropy solution of
(see Proposition 3.11) such that
.
Hence
and therefore
, which was to be shown.
3) Let
be the set of piecewise constant functions from [a, b] to [0, 1]. Then
is dense in
. Take
,
where the
are disjoint intervals. There exists
, an entropy solution of
, i.e., we have
. For
, for all
. We get
(3.21)
For every i, one can construct
such that
, as
, supp
,
and
in
with
. Take
and
in (3.21).
Then, for all
,
a.e. on
. We conclude by the Lebesgue theorem that
in
. In conclusion,
is dense in
and therefore, it is also dense in
. ▄
3.2.4. Integral Solution and Uniqueness
Now, we can exploit the notion of integral solution (see, e.g., [7] [17] ).
Definition 3.15. Suppose that
. A function
is an integral solution of (E) if
In particular, the integral solution is unique.
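For background, we recall the shape this inequality typically takes in the nonlinear semigroup framework (Bénilan's notion; see, e.g., [17] ). The display below is a standard general statement for an abstract problem u′ + Au ∋ f, reproduced here for the reader's convenience; it is not necessarily the exact display of Definition 3.15.

```latex
% Benilan integral solution of u' + A u \ni f: for every (\hat u, \hat w) \in A
% and all 0 \le s \le t \le T,
\| u(t) - \hat u \|_{L^1(\Omega)} - \| u(s) - \hat u \|_{L^1(\Omega)}
  \;\le\; \int_s^t \big[\, u(\tau) - \hat u ,\; f(\tau) - \hat w \,\big] \,\mathrm{d}\tau .
```

Here the bracket [·, ·] is the one recalled in the definition of accretivity above.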
Theorem 3.17. Let
. Let v be an entropy solution of (P) and u be an entropy solution of (S). Then
(3.22)
In particular,
is an integral solution of (E).
Proof of Theorem 3.17. We adopt the doubling-of-variables method of Kruzkov [1] in the sense of [3] [18] . We compare a regular solution with an entropy solution. Keep in mind that, by the result of [19] , an entropy solution v of (P) is automatically time-continuous with values in
. We consider
an entropy solution of (P) and
an entropy solution of (S). Consider a nonnegative function
having the property that
for each
,
for each
. As in [3] , we denote
,
and
,
their complements in
To simplify the notation, take
,
and
.
In (2.6), take
,
,
and integrate over
. We get
(3.23)
In the same way, in (2.2) take
,
, integrate over
, and use the fact that
in
. We get
(3.24)
Since
, by adding (3.23) to (3.24) we obtain
(3.25)
In (3.18), take
,
,
and integrate over
(3.26)
Since
is an entropy solution, take in (3.17)
, integrate over and use the fact that in
in
.
(3.27)
By adding (3.26) to (3.27), we obtain
(3.28)
Now, sum (3.25) and (3.28) to obtain
(3.29)
Next, following the idea of [3] we consider the test function
, where
,
,
and
. Then,
and
.
Due to this choice
.
By Proposition 3.11,
. Therefore we have
when
, i.e., as
. We conclude that
It remains to study the limit, as
We use the change of variable
with
,
where
,
and
.
For z given,
converges to
in
and
converges to
in
. From Lemma 4.14 of [3] , we deduce that for all
where
,
and
.
Then
converges to
independently of z. Moreover, from the definition of
one finds easily the uniform
bound
, for n large enough. Hence by the Lebesgue theorem,
We have shown that the limit of
equals zero. The passage to the limit in the other terms of (3.29) is straightforward. Finally, (3.29) gives, for
Hence
Thus,
is an integral solution of (E).
Now, the claim of Theorem 3.12 is a direct consequence of the fact that if u is the entropy solution then
is an integral solution, and of Corollary 3.16. ▄
Acknowledgements
The author is grateful to Boris Andreianov for his attention to this work and for very stimulating discussions. He also thanks Stanislas Ouaro for discussions and remarks.