1. Introduction
Several years ago, Barry Simon investigated a new approach to inverse spectral theory for the half-line Schrödinger operator in [3].
A new A-function, introduced in [1] [2] [3], is related to the Weyl-Titchmarsh function by the following relation:
(1)
where
for all a.
In [3], the key discovery is that
satisfies the following integro-differential equation:
(2)
Given the fact that
(3)
(at least in the
sense),
can be determined directly from
. And
can be calculated from
(which is essentially the inverse Laplace transform of the data), by solving an equation which does not involve
. Thus the inverse problem of determining q from m becomes the problem of solving the integro-differential Equation (2). Properties of (2) are discussed in [4] [5] [6] [7]. To construct numerical solvers for this integro-differential equation, one needs to study sets of exact analytic solutions.
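Although the displayed equations are not reproduced here, the A-equation of [3] has the convolution form A_x(α, x) = A_α(α, x) + ∫_0^α A(β, x) A(α − β, x) dβ. Taking that form for (2), a minimal sketch of an explicit marching scheme could look as follows; the grid, the one-sided difference, and all names are our illustrative choices, not a solver proposed in this paper.

    import numpy as np

    def march_A_equation(A0, h, steps):
        """Sketch: forward-Euler marching for A_x = A_alpha + conv(A, A),
        where conv(A, A)(alpha) = int_0^alpha A(b) A(alpha - b) db.
        A0: samples of A(alpha, 0) at alpha = 0, h, ..., (m - 1) h."""
        A = np.asarray(A0, dtype=float).copy()
        m = A.size
        for _ in range(steps):
            dA = (A[1:] - A[:-1]) / h                        # one-sided A_alpha
            conv = np.array([np.trapz(A[:k + 1] * A[k::-1], dx=h)
                             for k in range(m - 1)])         # trapezoidal convolution
            A = A[:-1] + h * (dA + conv)                     # one step in x
            m -= 1                                           # domain of dependence shrinks
        return A

Each x-step loses one α-grid point, which is exactly why exact analytic solutions, such as those constructed below, are valuable as benchmarks for any such scheme.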
In this paper, we study a larger class of analytic solutions of (2), of the form
(4)
This ansatz is motivated by the explicit example in [1], where
is calculated for Bargmann potentials using inverse scattering theory (which is valid only under restrictive assumptions). Our aim is to determine the behavior of such solutions for all
and to do so using only (2).
Substituting (4) in (2), we find that
, and
satisfy the nonlinear equations:
(5)
Then we give a method for solving (5) explicitly in Section 3. The idea is to introduce new variables
, the symmetric functions of
(
), that is
. Via this “change of variable”, (5) yields a new nonlinear system:
(6)
This nonlinear system turns out to be solvable. In a 2004 J. Math. Phys. paper, Calogero proved that a certain family of n-body problems is solvable, and his model includes the system (6); the method we use in this paper, however, is different from his approach. Our method also reveals an insightful connection to scattering problems. In Section 3, we first find n constants of motion for the system (6), which allow us to reduce it to a first order nonlinear system. Explicitly, we will prove
Theorem 1. (i) Suppose that, for every x in an open interval I,
are solutions of the second order nonlinear system (6). Then on I,
solves the first order system
(7)
for
. Here
are constants and
.
(ii) Conversely, if
solves (7) with
and
for
, then (6) holds.
The latter is then solved by finding a nonlinear analogue of the method of integrating factors (Theorem 13).
We note that
are the zeros of polynomials with coefficients
. Calogero pointed out in [8] [9] that some nonlinear systems can be linearized by a nonlinear mapping between the coefficients of a polynomial and its zeros, and are thus integrable. The novelty in this paper is that the nonlinear mapping from
to
relates the system (5) to a solvable yet still nonlinear system. Interestingly, a system similar to (6) arises ([10] [11]) if one seeks potentials for which the large-frequency WKB series is finite and yields solutions of the corresponding Schrödinger equations (with no error).
Section 4 shows how we obtain analytic examples of (2) by following this systematic procedure.
2. The g Equation
As described in the introduction, we relate a large class of exact solutions of the A-Equation (2) to a second order nonlinear system (5).
Without loss of generality, we assume
for all
. Then the following proposition follows by direct calculation.
Proposition 2. If
is of the form (4), and satisfies (2), then
, and
satisfy (5). Conversely, if
satisfy (5), then the function
solves (2).
Our goal is to solve (5) explicitly. To begin with, we need some notation. Let
be the l-th symmetric function on
,
:
(8)
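Concretely, these symmetric functions are the coefficients of a monic polynomial built from the z_j (sign conventions vary; since the display (8) is not reproduced, the convention below is our assumption). A short SymPy sketch:

    import sympy as sp

    def symmetric_functions(z):
        """[sigma_1, ..., sigma_n], read off from
        prod_j (t + z_j) = t^n + sigma_1 t^(n-1) + ... + sigma_n."""
        t = sp.Symbol('t')
        return sp.Poly(sp.Mul(*[t + zj for zj in z]), t).all_coeffs()[1:]

    z1, z2, z3 = sp.symbols('z1 z2 z3')
    print(symmetric_functions([z1, z2, z3]))
    # [z1 + z2 + z3, z1*z2 + z1*z3 + z2*z3, z1*z2*z3]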
Lemma 3. If
are distinct, and
are the symmetric functions on
,
,
, then the matrix
(9)
is invertible.
Proof. Suppose the matrix is not invertible; then there exists a non-zero vector
, such that
(10)
Then
(11)
Evaluate the above at
:
(12)
We assumed
are distinct, so
for all
. This contradicts our assumption and proves that the given matrix must be invertible.
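The argument above is the classical Vandermonde argument. Assuming the matrix (9) is of Vandermonde type in the distinct values z_j (our reading of the elided display), the nonvanishing of its determinant can be checked symbolically:

    import sympy as sp

    z = sp.symbols('z1 z2 z3')
    V = sp.Matrix(3, 3, lambda i, j: z[i]**j)   # rows (1, z_i, z_i^2)
    print(sp.factor(V.det()))
    # up to sign: (z1 - z2)*(z1 - z3)*(z2 - z3), nonzero for distinct z_j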
3. A Transformed System and Explicit Solutions
3.1. Non-Linear Integrable Equation
To solve Equation (5) explicitly, we construct a nonlinear mapping from
to new dependent variables c. We take
to be the j-th symmetric function of
,
(13)
For convenience, we define
for
or
.
Proposition 4. If
satisfy Equation (5), then
as defined by (13), satisfy the system:
(14)
(15)
Proof. It follows directly by calculation that for every
,
(16)
And for
, we have
(17)
Conversely, we have
Proposition 5. If
satisfy the system (14), and
are the distinct roots of the polynomial with coefficients
, then
satisfy the system (5).
Proof. As in the previous calculations, for every
, we have
(18)
By assumption,
. The proof of Lemma 3 then shows that the matrix
(
), where
is invertible. Thus
and
satisfy (5).
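Passing from the coefficients c back to the g's is thus a polynomial root-finding problem. A generic numerical sketch (the monic normalization and the function name are our assumptions):

    import numpy as np

    def roots_from_coefficients(c):
        """Zeros of the monic polynomial t^n + c[0] t^(n-1) + ... + c[n-1]."""
        return np.roots([1.0] + list(c))

    # The coefficients built from the zeros {1, 2, 3} are [-6, 11, -6]:
    print(np.sort(roots_from_coefficients([-6.0, 11.0, -6.0])))   # [1. 2. 3.]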
3.2. Second Order Nonlinear System to First Order Nonlinear System
We now identify n constants of motion for the system (14), which allow us to reduce the second order system to a first order system.
Proposition 6. The nonlinear system (14) has the following constants of motion:
(19)
for all the
. Here
, when
or
.
Proof. Since
,
, we can write
(20)
(21)
Multiplying the first of these equations by
and the second equation by
, and subtracting, we have
so that,
(22)
Similarly, from
and
we find
Using this equation we obtain (compare with (22))
It follows by induction that
This identity, together with (22), shows that
for
.
We can also write (19) as
(23)
for
. Here
are constants and
.
Theorem 1 establishes the equivalence of the first order system (23) and the second order system (14).
Proof of Theorem 1.
(i) This result follows directly from Proposition 6.
(ii) Let
. Differentiate (23), for
,
If we write the above equation in matrix form, we have
(24)
The coefficient matrix of Equation (24) above is a Sylvester resultant matrix. A well-known theorem from linear algebra expands the determinant of a Sylvester resultant matrix as the resultant of the two underlying polynomials,
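This classical determinant-resultant identity is easy to verify with SymPy; the construction below follows the standard textbook layout of the Sylvester matrix (our sketch, not the specific matrix of (24)):

    import sympy as sp

    def sylvester(p, q, t):
        """Sylvester matrix of p (degree m) and q (degree n);
        its determinant equals resultant(p, q)."""
        P, Q = sp.Poly(p, t), sp.Poly(q, t)
        m, n = P.degree(), Q.degree()
        S = sp.zeros(m + n, m + n)
        for i in range(n):                       # n shifted rows of p's coefficients
            for j, a in enumerate(P.all_coeffs()):
                S[i, i + j] = a
        for i in range(m):                       # m shifted rows of q's coefficients
            for j, b in enumerate(Q.all_coeffs()):
                S[n + i, i + j] = b
        return S

    t, a = sp.symbols('t a')
    p, q = t**2 + a*t + 1, t + 2
    assert sp.simplify(sylvester(p, q, t).det() - sp.resultant(p, q, t)) == 0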
The coefficient matrix of (24) is nonsingular if and only if
and
are coprime for
. Let
be the polynomial
(25)
We observe that
, and since
is j-th symmetric function of
, we have
(26)
If
and
are not coprime, they have a common root
, such that
. Let
,
be the two distinct square roots of
. Substitute
and
in (26); this yields
and
respectively. Thus there exist
,
such that
,
.
Since
are assumed distinct, we obtain
, which contradicts the fact that
.
Therefore
and
must be coprime for
, and (24) has the unique trivial solution:
Thus
solve the second order system (14).
3.3. Method of Integrating Factors
We have reduced the second order non-linear system (14) to the first order non-linear system (23). To solve the latter system explicitly, we begin by writing it in matrix form. Let
(27)
We will assume n is even from now on; when n is odd, we obtain similar results. Equation (23) can be written as a matrix equation:
(28)
We will show that the nonlinear system (28) can be solved explicitly.
Let
(29)
The nonlinear system (28) can be written as
(30)
where
(31)
(32)
Our goal is to find an integrating factor M such that after multiplication on the left by M, (30) takes the form
(33)
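As a scalar analogue of what we are after (our illustration; the paper itself works at the matrix level), recall the classical integrating factor for a first order linear equation:

    y'(x) + p(x)\,y(x) = g(x), \qquad \mu(x) = e^{\int p(x)\,dx}
    \quad\Longrightarrow\quad \bigl(\mu(x)\,y(x)\bigr)' = \mu(x)\,g(x).

Multiplication by μ turns the left-hand side into an exact derivative that can be integrated directly; M is to play the same role for the nonlinear system (30).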
Thus we would like to find an
matrix M and an
matrix N such that
(34)
(35)
(36)
This leads to
(37)
is the first column vector of
and
is the last column vector of
.
For any
matrix
which satisfies (37), and
,
We must have
,
, for
.
Let
; then these conditions show that N must be of the form
(38)
and moreover
, for
.
To find M, we now rewrite (34) and (36) in matrix form,
(39)
Thus each of the n rows of M solves an over-determined linear system, consisting of
equations and
unknowns.
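Whether such an overdetermined system is solvable is the usual rank criterion (Rouché-Capelli): the augmented matrix must have the same rank as the coefficient matrix. A small numerical sketch (names are ours):

    import numpy as np

    def is_consistent(A, b):
        """A x = b is solvable iff rank([A | b]) == rank(A)."""
        return (np.linalg.matrix_rank(np.column_stack([A, b]))
                == np.linalg.matrix_rank(A))

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(is_consistent(A, np.array([1.0, 2.0, 3.0])))   # True
    print(is_consistent(A, np.array([1.0, 2.0, 4.0])))   # False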
Studying the structure of the matrix
, we notice the following algebraic identity,
Lemma 7.
As an immediate corollary, we have the following
Lemma 8. Given a nontrivial solution
of (28), the overdetermined system (39) is solvable only if N satisfies
. Moreover, we also have
.
For each overdetermined system
(40)
where
is the i-th column vector of
, Lemma 7 and Lemma 8 show that the rank of the augmented matrix is less than
. The over-determined system has at most
linearly independent equations.
Let
denote the sub-matrix of
obtained by deleting the first row, the n-th row, the (n + 1)-th row, the first column, and the last column of
. We observe that
is also a Sylvester resultant matrix. The determinant of a Sylvester resultant matrix is the resultant of the two polynomials
Two cases need to be considered here. (1)
and
are coprime: Then
is nonsingular. The augmented matrix of (40) has the same rank
as the corresponding coefficient matrix. Thus (40) is solvable. (2)
and
are not coprime: We then use the following result of Laidacker [12]: let
be the greatest common divisor of two polynomials
,
, then the rank of the Sylvester resultant matrix is
. Thus the rank of the augmented matrix must also be
, if (40) is to be solvable.
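Laidacker's rank formula (the rank equals the sum of the two degrees minus the degree of the greatest common divisor) can be checked symbolically, reusing the sylvester helper sketched after (24):

    import sympy as sp

    t = sp.Symbol('t')
    p = (t - 1)**2 * (t + 3)        # degree 3
    q = (t - 1) * (t + 5)           # degree 2; gcd(p, q) = t - 1
    S = sylvester(p, q, t)
    d = sp.degree(sp.gcd(p, q), t)
    print(S.rank(), sp.degree(p, t) + sp.degree(q, t) - d)   # both are 4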
We now establish an algebraic fact about the Sylvester matrix.
Lemma 9. Suppose that
, and that
, and
do not vanish simultaneously. Then the following algebraic system is always solvable.
Proof. Let
be the polynomial with coefficients consisting of the i-th row of
and let
be the polynomial with coefficients consisting of the i-th row of
in echelon form. It should be noted that
is a linear combination of the polynomials of
. The algebraic system is solvable if and only if each zero row of
in echelon form corresponds to a zero row of the augmented matrix in echelon form.
Suppose the i-th row of
in echelon form is zero, that is
From the structure of
,
Let
be the greatest common divisor of the two polynomials
and
, let
,
, where
. The above shows that there exists a polynomial
such that
We need to prove
Using the above identities, the left side of the equation can be written as
The left side of this equation vanishes, since the condition in the hypothesis can be represented as
.
Inspired by the fact that
and by Lemma 9, we prove the existence of N by constructing
such that
. To be more specific, let
(41)
where
are arbitrary distinct non-zero constants.
Lemma 8 also shows that (39) is solvable only if
satisfies
.
To calculate
, we first introduce some new notation. For each
, we define
as follows,
(42)
(43)
Then, we have
(44)
only if
for
, i.e.
.
The following proposition proves that N satisfies both
and
.
Proposition 10. If (28) is satisfied and
are defined as above, then
(45)
Proof. Straightforward calculation.
Recall that our objective is to construct an integrating factor to reduce (23) to an algebraic system. As described above, let
(46)
Thus
(47)
(48)
We assume
in this transformation.
Lemma 11.
if
is odd and
if
is even.
N can be rewritten as
(49)
With
as given above, we can define a matrix
,
(50)
The following proposition shows that
is a solution of
(39), i.e. M is an integrating factor of (30).
Proposition 12. Given
, fj defined by (46), and
(
) the distinct roots of
, M solves (34), (35), and (36).
Proof. Straightforward calculation shows
thus,
. A similar calculation proves
. At the same time, we have
Theorem 13. If
, then
. Conversely, if
and
, then
; moreover, the ck satisfy the linear system
(51)
The system (28) is integrable and equivalent to this linear system.
Remark 14.
can be solved directly from Equation (48). The algebraic system (51) will then lead to the solution
.
Proof. Multiplying Equation (30) by
, and using Proposition 12, yields
. Further, multiplying both sides by
, we have
(52)
Thus
. The fact that
forces the constant to be zero.
Conversely, if
, since
is a
matrix and
are assumed distinct,
, so the dimension of the kernel is 1. The solution
which satisfies
and
is unique. Since the first entry of
is
, it follows that
.
This proves that (28) is integrable and provides a procedure to obtain explicit solutions from the linear system (51).
4. Exact Analytic Examples
In this section, we illustrate the procedure of Sections 2 and 3 with a simple exact analytic example.
We explicitly discuss the case where
in (4). Then
. We construct the non-linear mapping from
to
,
(53)
Then
satisfy (14). To solve for c, we reduce the second order system (14) to the first order system (30):
(54)
Given
,
where
, we construct M of the form (50)
(55)
where
are solutions of (48):
We can write
with
.
After multiplying (54) by M on the left, the first order system is solved explicitly. Indeed,
satisfy the linear system (51). Let
(56)
Then (51) takes the form
(57)
(58)
with solution:
(59)
(60)
To invert the mapping (53), we find
,
as the roots of the equation
This gives the following exact solutions of the A-equation:
Theorem 15. For any distinct non-zero complex
, and
, there exists a solution
of the A-equation with the form
, where
,
for
.
Proof. This theorem is a direct corollary of the results in Section 3.
Remark 16. This theorem does not cover all solutions of the form
. Consider an example from [1],
(61)
(62)
Working through the procedure in Section 3, we get
Here the values of the constants are:
so that
and
are not distinct.
This leads to
. Proposition 10 holds, but the nonlinear transformation (46) is not defined.
5. Conclusion
A large class of exact solutions to the A-equation was found in this work. Techniques used in our approach include a nonlinear transformation between the coefficients of a polynomial and its zeros, constants of motion, and an interesting integrating factor method. The nonlinear system studied here is of interest not only for its connection to inverse problems: it represents a larger category of integrable systems than C-integrable systems and is worth further investigation.
Acknowledgement
Special thanks go to the reviewers for their valuable suggestions.