Generalized Invertibility of Operators through Spectral Sets

Abstract

If an operator is not invertible, we ask whether there is a subspace such that the reduction of the operator to that subspace is invertible. In this paper we give a spectral approach to generalized inverses, considering the subspace determined by the range of the spectral projection associated with an operator and a spectral set containing the point 0. We compare the cases where 0 is a simple pole of the resolvent function, 0 is a pole of order n of the resolvent function, 0 is an isolated point of the spectrum, and 0 is contained in a circularly isolated spectral set.

Share and Cite:

Salgado-Matias, E. , Djordjević, S. and Kantún-Montiel, G. (2023) Generalized Invertibility of Operators through Spectral Sets. Advances in Linear Algebra & Matrix Theory, 13, 21-35. doi: 10.4236/alamt.2023.132002.

1. Introduction

Let X be a Banach space, and let B(X) stand for the set (algebra) of bounded linear operators on X. We say that an operator T ∈ B(X) is invertible if there exists an operator S ∈ B(X) such that

TS = ST = I,

where I stands for the identity operator on any space (Ix = x for all x) (for more details about basic properties of generalized inverses, see [1] ).

If T is not invertible, it is convenient to look for a closed subspace of X such that the restriction of T to that subspace is invertible. In that case, we are interested in the algebraic conditions that characterize such a situation. These characterizations give us a sort of generalization of the notion of invertibility and, accordingly, the corresponding operators will be called generalized inverses.

In this paper, we explore the case of the subspace determined by the range of the spectral projection. After some preliminaries in the next section, in Section 3 we study the case when 0 is a simple pole of the resolvent function, and the results are extended in the next section to the case of poles of order n. In Section 5, we deal with isolated points and, in the final section, we explore the case of clusters containing the point 0.

2. Preliminaries

Let M and N be closed subspaces of a Banach space X. If every x ∈ X can be uniquely decomposed as x = u + v with u ∈ M and v ∈ N, then we write X = M ⊕ N and we say that M and N decompose X. In this case, there exists a bounded linear projection P such that R(P) = M and N(P) = N.

For a bounded linear operator T ∈ B(X), we say that a subspace M of X is invariant if T(M) ⊆ M, and T|M denotes the restriction of T to M. If two closed subspaces M and N are invariant for T and X = M ⊕ N, then we say that M and N decompose T. If P is the projection onto M along N, then M and N decompose T if and only if P and T commute.

Recall that the spectrum of T is the (nonempty compact) set

σ(T) := { λ ∈ ℂ : T − λI is not invertible },

and the resolvent set of T is

ρ(T) := ℂ \ σ(T).

If M and N decompose T, then σ(T) = σ(T|M) ∪ σ(T|N). However, σ(T|M) and σ(T|N) may not be disjoint.

For λ ∈ ρ(T), the resolvent function is defined as

R_λ(T) := (T − λI)^(-1).

This function is analytic in ρ ( T ) .

A spectral set for T is a subset Λ of σ ( T ) such that Λ is open and closed in the relative topology of σ ( T ) . For every spectral set Λ for T, there is a Cauchy contour C that separates Λ from σ ( T ) \ Λ .

For a spectral set Λ and any Cauchy contour separating Λ from σ ( T ) \ Λ we define

P(Λ) := (1/2πi) ∮_C R_λ(T) dλ.

This integral does not depend on the particular choice of the Cauchy contour separating Λ from σ ( T ) \ Λ since any such Cauchy contour can be continuously deformed in ρ ( T ) to C.

Let Λ be a spectral set for T. Then P(Λ) is a bounded projection that commutes with T. We call P(Λ) the spectral projection of the spectral set Λ associated to T. In this case, since P(Λ) and T commute, R(P(Λ)) and N(P(Λ)) = R(I − P(Λ)) are invariant subspaces for T. Now, if M := R(P(Λ)) and N := N(P(Λ)), then M and N decompose T, σ(T|M) = Λ and σ(T|N) = σ(T) \ Λ. Notice that, in this case, σ(T|M) ∩ σ(T|N) = ∅ (for more details, see [2] ).
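In finite dimensions, P(Λ) can be evaluated numerically. The following Python sketch is a hypothetical example with T = diag(2, 0): it discretizes the contour integral over the circle |λ| = 1, which separates the spectral set {0} from {2}. (It uses the (λI − T)^(-1) sign convention for the resolvent, so that the integral yields the projection onto the 0-eigenspace along the 2-eigenspace.)

```python
import cmath

def resolvent(lam):
    # (lambda*I - T)^(-1) for T = diag(2, 0) is diag(1/(lambda-2), 1/lambda)
    return [[1 / (lam - 2), 0], [0, 1 / lam]]

def spectral_projection(n_points=400):
    # discretize (1/2*pi*i) * integral over the circle |lambda| = 1
    P = [[0j, 0j], [0j, 0j]]
    dt = 2 * cmath.pi / n_points
    for k in range(n_points):
        lam = cmath.exp(1j * k * dt)     # point on the contour
        dlam = 1j * lam * dt             # lambda'(t) dt
        R = resolvent(lam)
        for i in range(2):
            for j in range(2):
                P[i][j] += R[i][j] * dlam
    return [[P[i][j] / (2j * cmath.pi) for j in range(2)] for i in range(2)]

P = spectral_projection()
# P is numerically diag(0, 1): the spectral projection of {0}
```

Because the integrand is analytic and periodic on the contour, the simple quadrature above converges extremely fast; 400 points already give the projection to machine precision.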

3. Simple Poles of the Resolvent

It is clear that T(R(T)) ⊆ R(T) and, since x ∈ N(T) implies Tx = 0 ∈ N(T), we have T(N(T)) ⊆ N(T). It follows that R(T) and N(T) are invariant subspaces for T.

Suppose R(T) = R(T^2) and N(T) = N(T^2). If x ∈ R(T) ∩ N(T), then x = Ty for some y ∈ X and Tx = 0. Now, since Tx = T^2 y = 0, we have y ∈ N(T^2) = N(T). Thus x = Ty = 0. This shows R(T) ∩ N(T) = {0}.

Let T1 := T|R(T). Then R(T1) = R(T^2) = R(T). Let x ∈ X; since Tx ∈ R(T) = R(T1), there exists y ∈ R(T) such that Tx = T1 y = Ty. Now let z := x − y; then z ∈ N(T) and we have X = R(T) + N(T). Now, since N(T) and R(T) + N(T) are closed, we have that R(T) is closed. Hence X = R(T) ⊕ N(T).

Conversely, if X = R(T) ⊕ N(T), we have R(T) = T(X) = T(R(T) ⊕ N(T)) = R(T^2) + {0} = R(T^2). Now, let x ∈ N(T^2) and y = Tx; then y ∈ R(T) ∩ N(T) = {0}, thus Tx = 0 and x ∈ N(T). Since N(T) ⊆ N(T^2) always holds, we get N(T) = N(T^2). Thus, we have the following:

Proposition 1. Let T ∈ B(X). Then R(T) = R(T^2) and N(T) = N(T^2) if and only if X = R(T) ⊕ N(T).

From the above proposition it is clear that, if X = R(T) ⊕ N(T), then R(T) and N(T) decompose T. Moreover, in the following theorem we give a matrix form for the operator T.

Theorem 2. Let T ∈ B(X). If X = R(T) ⊕ N(T), then the restriction T1 = T|R(T) : R(T) → R(T) is invertible, T2 = T|N(T) : N(T) → N(T) is the zero operator, and we have the following matrix form for T:

T = [ T1 0 ; 0 0 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T).

Proof. Let T have the following matrix form with respect to X = R(T) ⊕ N(T):

T = [ T1 T3 ; T4 T2 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T).

It is clear that T1 is one-to-one and onto; thus it is invertible. Also, T2 x = Tx = 0 for every x ∈ N(T), hence T2 is the zero operator on N(T).

For T3 : N(T) → R(T), we have T3 x = Tx = 0 for every x ∈ N(T). Also, for T4 : R(T) → N(T), since T4 x ∈ R(T) ∩ N(T) = {0}, we have T4 = 0.

For certain calculations, it is useful to have an algebraic characterization of our situation.

Theorem 3. Let T ∈ B(X). Then X = R(T) ⊕ N(T) if and only if there exists S ∈ B(X) such that T = TST and ST = TS.

Proof. If X = R(T) ⊕ N(T), then, by Theorem 2, we have the following matrix form with respect to this decomposition:

T = [ T1 0 ; 0 0 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T).

Now, let us define the operator

S := [ T1^(-1) 0 ; 0 0 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T).

A direct verification shows that T = TST and ST = TS.

Conversely, suppose that there exists S ∈ B(X) such that T = TST and ST = TS. Then, from

R(T) = R(TST) = R(T^2 S) ⊆ R(T^2) ⊆ R(T),

we have R(T) = R(T^2). Also, from

N(T) = N(TST) = N(ST^2) ⊇ N(T^2) ⊇ N(T),

we have N(T) = N(T^2).

Remark. It is easy to see that the operator S in the previous theorem satisfies even more: STS = S. In general, if for some T ∈ B(X) there exists an operator V ∈ B(X) such that T = TVT and VT = TV, then letting V′ := VTV, a direct calculation shows that T = TV′T, V′ = V′TV′ and V′T = TV′.

Definition 1. We say that T ∈ B(X) is group invertible if there exists S ∈ B(X) such that

T = TST, S = STS, ST = TS.

In this case, we say that S is the group inverse for T. We denote the group inverse by S = T#.

Remark. (i) The group inverse is unique if it exists. Indeed, suppose that S and S′ are both group inverses for T. First note that T·TS = T(ST) = (TS)T = TST = T. Hence

TS = (TS′T)S = (S′T)(TS) = S′(T·TS) = S′T = TS′,

and therefore

S = STS = S(TS′) = (ST)S′ = (TS′)S′ = (S′T)S′ = S′TS′ = S′.

(ii) By the uniqueness of the group inverse, Theorem 3 gives us a matrix form for the group inverse of T. For an operator V that satisfies T = TVT and VT = TV, we can find a matrix representation

V = [ T1^(-1) 0 ; 0 V2 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T),

where V2 : N(T) → N(T) is an arbitrary operator. Then T# = VTV is the group inverse of T and has the following matrix representation:

T# = [ T1^(-1) 0 ; 0 0 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T).
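On a finite-dimensional space, the three defining identities of the group inverse can be verified with exact rational arithmetic. A minimal Python sketch with the hand-picked matrix T = [[2, 2], [0, 0]], which satisfies T^2 = 2T, so that T# = T/4:

```python
from fractions import Fraction

def matmul(A, B):
    # product of two matrices given as nested lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

T = [[Fraction(2), Fraction(2)], [Fraction(0), Fraction(0)]]
S = [[x / 4 for x in row] for row in T]    # candidate group inverse T/4

assert matmul(matmul(T, S), T) == T        # T = T S T
assert matmul(matmul(S, T), S) == S        # S = S T S
assert matmul(S, T) == matmul(T, S)        # S T = T S
```

Here R(T) is spanned by (1, 0) and N(T) by (1, −1), so X = R(T) ⊕ N(T) holds, matching Theorem 3.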

It is clear that an invertible operator is group invertible. In the case when T is not invertible but group invertible, we can say more about the point 0 ∈ σ(T).

Theorem 4. Let T ∈ B(X). The operator T is group invertible if and only if 0 is a simple pole of the resolvent.

Proof. Suppose T is group invertible. Then we have the following matrix form for T:

T = [ T1 0 ; 0 0 ] : R(T) ⊕ N(T) → R(T) ⊕ N(T),

with T1 invertible.

Suppose R_λ(T) has a Laurent expansion at 0 given by

R_λ(T) = Σ_{n=−∞}^{+∞} B_n λ^n.

Since T1 is invertible, we have 0 ∈ ρ(T1); thus the resolvent function (T1 − λI)^(-1) is analytic at 0, which implies that the principal part of the Laurent series of (T1 − λI)^(-1) has no terms. On the other hand, since T2 = 0, we have the Laurent series (T2 − λI)^(-1) = −λ^(-1) I.

Now, since R(T) and N(T) decompose T, they decompose B_n for every n. Thus, the principal part of the Laurent series of (T − λI)^(-1) has only the term B_(−1) λ^(-1). Therefore 0 is a simple pole of the resolvent function R_λ(T).

Conversely, suppose 0 is a simple pole of the resolvent. Let P := P({0}) be the spectral projection associated with T and the spectral set {0}. Then X = N(P) ⊕ R(P). Moreover, since TP = PT, we have that R(P) and N(P) decompose T. Thus, we have the following matrix form:

T = [ T1 0 ; 0 T2 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

Since σ(T|N(P)) = σ(T) \ {0}, we have that T1 is invertible, and since 0 is a simple pole, T2 = T|R(P) = 0. Now, define the operator S by

S = [ T1^(-1) 0 ; 0 0 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

A direct verification shows that S is the group inverse for T.

In the following characterization of group invertibility using projections, we see the central role played by the spectral projection of the spectral set {0}.

Theorem 5. T is group invertible if and only if there exists a projection P such that T + P is invertible and T P = P T = 0 . Furthermore, P is the spectral projection of the spectral set {0} associated to T.

Proof. Suppose T is group invertible with group inverse S. From T = TST we have that TS is a projection. Thus, I − TS is also a projection. Now, T(I − TS) = (I − TS)T = 0, and a direct verification shows that T + (I − TS) is invertible with inverse S + I − TS. Thus, I − TS is the desired projection.

Conversely, suppose there is a projection P ∈ B(X) such that T + P is invertible and TP = PT = 0. Let Q be the inverse of T + P. Then QT = I − QP. Since T commutes with T + P, we have that T also commutes with the inverse Q of T + P. Let S := Q − P. Then

TS = TQ − TP = QT − PT = ST,

TST = TQT = T(I − QP) = T − TQP = T.

Finally, a direct verification shows that T # = S T S .

Now, since T and P commute, the operator T has the following matrix form with respect to P:

T = [ T1 0 ; 0 T2 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

Since T2(R(P)) = TP(X) = 0, we have T2 = T|R(P) = 0. Now, since T + P is invertible and R(P) and N(P) decompose T, (T + P)|N(P) is invertible. But P|N(P) = 0, hence T1 = T|N(P) : N(P) → N(P) is invertible. Thus, for T − λI we have the following matrix form:

T − λI = [ T1 − λI 0 ; 0 −λI ].

Since T1 is invertible, T1 − λI is invertible for sufficiently small λ. Then, for sufficiently small λ ≠ 0, we have that T − λI is invertible. It follows that 0 is an isolated point of σ(T). Hence {0} is a spectral set for T. Let C be a Cauchy contour separating {0} from σ(T) \ {0}. The spectral projection of the spectral set {0} associated to T is

P({0}) = (1/2πi) ∮_C (T − λI)^(-1) dλ = (1/2πi) ∮_C (T|N(P) − λI)^(-1) (I − P) dλ + (1/2πi) ∮_C (T|R(P) − λI)^(-1) P dλ = 0 + (1/2πi) ∮_C (T|R(P) − λI)^(-1) P dλ = P,

since the first integrand is analytic inside C and T|R(P) = 0.

Notice that, from the above theorem, if T is group invertible, then I − TT# is the spectral projection associated to the spectral set {0}. Regarding the proof of Theorem 5, a similar methodology has been adopted in the works of Liu [3] and [4] .
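The construction in the proof of Theorem 5 can be traced on the same hypothetical matrix T = [[2, 2], [0, 0]] with group inverse S = T/4: the projection P = I − TS satisfies TP = PT = 0, and T + P is invertible with inverse S + I − TS.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

I2 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
T = [[Fraction(2), Fraction(2)], [Fraction(0), Fraction(0)]]
S = [[x / 4 for x in row] for row in T]           # group inverse of T

TS = matmul(T, S)
P = [[I2[i][j] - TS[i][j] for j in range(2)] for i in range(2)]  # P = I - TS
Z = [[Fraction(0)] * 2 for _ in range(2)]

assert matmul(P, P) == P                          # P is a projection
assert matmul(T, P) == Z and matmul(P, T) == Z    # TP = PT = 0

TpP = [[T[i][j] + P[i][j] for j in range(2)] for i in range(2)]  # T + P
SpP = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]  # S + I - TS
assert matmul(TpP, SpP) == I2                     # (T + P)^(-1) = S + I - TS
```

Here P = I − TT# is exactly the spectral projection of the spectral set {0}, in agreement with the remark above.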

Theorem 6. Let T ∈ B(X). The operator T is group invertible if and only if there exist two closed invariant subspaces M and N for T such that X = M ⊕ N, T|M is invertible and T|N = 0.

Proof. Let T ∈ B(X) be group invertible. Then, by Theorems 2 and 3, we can choose M = R(T) and N = N(T).

Conversely, suppose that there exist two closed subspaces M and N of X such that X = M ⊕ N, T(M) ⊆ M, T(N) ⊆ N, T|M is invertible and T|N = 0. Then T(X) = T(M ⊕ N) = T(M) = M and T^2(X) = M. Also, for any nonzero vector x = x1 + x2 in N(T), with x1 ∈ M and x2 ∈ N, we have 0 = Tx = T(x1 + x2) = Tx1 + Tx2. Thus Tx1 = −Tx2 ∈ M ∩ N = {0} and, by invertibility of T|M, we have x1 = 0 and x = x2 ∈ N. Since N ⊆ N(T), it follows that N(T) = N. Similarly we can prove that N(T^2) = N. Hence X = R(T) ⊕ N(T) by Proposition 1, and using Theorem 3 we see that T is group invertible.

4. Poles of the Resolvent of Order n

For an operator T ∈ B(X), let us define T^0 = I. We have the following chains:

X = R(T^0) ⊇ R(T^1) ⊇ R(T^2) ⊇ R(T^3) ⊇ ⋯,

{0} = N(T^0) ⊆ N(T^1) ⊆ N(T^2) ⊆ N(T^3) ⊆ ⋯

From R(T^(n+1)) = T(R(T^n)) we see that if R(T^n) = R(T^(n+1)) for some n ∈ ℕ, then R(T^n) = R(T^(n+k)) for every k ∈ ℕ. In a similar way, from N(T^(n+1)) = { x : Tx ∈ N(T^n) } we have that if N(T^n) = N(T^(n+1)) for some n ∈ ℕ, then N(T^n) = N(T^(n+k)) for every k ∈ ℕ.

The descent d(T) of T is the smallest n ∈ ℕ such that R(T^n) = R(T^(n+1)). If there is no such n, then we define d(T) = ∞.

The ascent a(T) of T is the smallest n ∈ ℕ such that N(T^n) = N(T^(n+1)). If there is no such n, then we define a(T) = ∞.
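For matrices, ascent and descent can be read off from the ranks of the powers T^n, since the nullity of T^n is the dimension minus the rank. The following Python sketch (a hypothetical example over the rationals: an invertible 1 × 1 block together with a 2 × 2 nilpotent Jordan block, so that a(T) = d(T) = 2) computes both:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(A):
    # Gaussian elimination over the rationals (exact arithmetic)
    M = [row[:] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

F = Fraction
T = [[F(2), F(0), F(0)],   # invertible part (2)
     [F(0), F(0), F(1)],   # 2 x 2 nilpotent Jordan block
     [F(0), F(0), F(0)]]

Tn = [[F(int(i == j)) for j in range(3)] for i in range(3)]  # T^0 = I
ranks = []
for _ in range(4):
    ranks.append(rank(Tn))
    Tn = matmul(Tn, T)

nullities = [3 - r for r in ranks]
d = next(n for n in range(3) if ranks[n] == ranks[n + 1])          # descent
a = next(n for n in range(3) if nullities[n] == nullities[n + 1])  # ascent
# a == d == 2, illustrating the theorem below
```

The ranks of T^0, …, T^3 come out as [3, 2, 1, 1], so both chains stabilize at n = 2.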

Theorem 7. If a(T) < ∞ and d(T) < ∞, then p := a(T) = d(T). Moreover,

X = N(T^p) ⊕ R(T^p).

Proof. Let p = a(T) and q = d(T).

Fix j ≥ 1 and suppose x ∈ R(T^p) ∩ N(T^j). Then there exists y ∈ X such that x = T^p y. From T^(p+j) y = T^j x = 0, we have y ∈ N(T^(p+j)) = N(T^p). Thus x = T^p y = 0. Hence R(T^p) ∩ N(T^j) = {0}.

Fix k ≥ 1 and let x ∈ X be arbitrary. From T^q x ∈ R(T^q) = R(T^(q+k)), there exists y ∈ X such that T^q x = T^(q+k) y. Then T^q (x − T^k y) = T^q x − T^(q+k) y = 0. Hence x = T^k y + (x − T^k y) ∈ R(T^k) + N(T^q).

Now, letting j = q and k = p, and since N(T^q) and R(T^p) + N(T^q) are closed, we have

X = R(T^p) ⊕ N(T^q).

It remains to show that p = q. Let u ∈ N(T^(q+1)). We have u = u1 + u2 with u1 ∈ R(T^p) and u2 ∈ N(T^q). Then u1 = u − u2 ∈ N(T^(q+1)) ∩ R(T^p) = {0}. Thus u1 = 0 and u = u2 ∈ N(T^q). Hence N(T^q) = N(T^(q+1)) and p ≤ q.

Let v ∈ R(T^p). We have v = v1 + v2 with v1 ∈ R(T^(p+1)) and v2 ∈ N(T^q). Then v2 = v − v1 ∈ R(T^p) ∩ N(T^q) = {0}. Thus v2 = 0 and v = v1 ∈ R(T^(p+1)). Hence R(T^p) = R(T^(p+1)) and q ≤ p.

If the ascent and the descent of T are finite, then we have a nice matrix form, as shown in the following theorem. Recall that an operator T is nilpotent if there exists n ∈ ℕ such that T^n = 0.

Theorem 8. Let T ∈ B(X). If X = R(T^n) ⊕ N(T^n), then the restriction T1 = T|R(T^n) : R(T^n) → R(T^n) is invertible, T2 = T|N(T^n) : N(T^n) → N(T^n) is a nilpotent operator, and we have the following matrix form for T:

T = [ T1 0 ; 0 T2 ] : R(T^n) ⊕ N(T^n) → R(T^n) ⊕ N(T^n).

Proof. Consider the following matrix form for T with respect to X = R(T^n) ⊕ N(T^n):

T = [ T1 T3 ; T4 T2 ] : R(T^n) ⊕ N(T^n) → R(T^n) ⊕ N(T^n).

First, consider T1 : R(T^n) → R(T^n). Let y ∈ R(T^n) = R(T^(n+1)); then there exists x such that y = T^(n+1) x = T(T^n x), and it follows that T1 is onto. Now let x ∈ N(T) ∩ R(T^n); then there exists y such that x = T^n y and 0 = Tx = T(T^n y) = T^(n+1) y. Hence y ∈ N(T^(n+1)) = N(T^n) and x = T^n y = 0. It follows that T1 is one-to-one. Therefore T1 is invertible.

For T2 : N(T^n) → N(T^n), note that T2^n = T^n|N(T^n) = 0; hence T2 is nilpotent.

For T3 : N(T^n) → R(T^n): for every x ∈ N(T^n) = N(T^(n+1)) we have 0 = T^(n+1) x = T^n(Tx), thus T3 x ∈ N(T^n). But T3 x ∈ R(T^n); hence T3 x ∈ R(T^n) ∩ N(T^n) = {0} and it follows that T3 = 0.

Finally, for T4 : R(T^n) → N(T^n): for x ∈ R(T^n) we have Tx ∈ R(T^(n+1)) = R(T^n), thus T4 x ∈ R(T^n). But T4 x ∈ N(T^n); hence T4 x ∈ R(T^n) ∩ N(T^n) = {0} and it follows that T4 = 0.

We have a characterization similar to the one for group invertibility.

Theorem 9. Let T ∈ B(X) and n ∈ ℕ. Then X = R(T^n) ⊕ N(T^n) if and only if there exists S ∈ B(X) such that T^n = T^n S T, S = STS and ST = TS.

Proof. If X = R(T^n) ⊕ N(T^n), then we have the following matrix form with respect to this decomposition:

T = [ T1 0 ; 0 T2 ] : R(T^n) ⊕ N(T^n) → R(T^n) ⊕ N(T^n),

with T1 invertible and T2 nilpotent. Now, let us define the operator

S := [ T1^(-1) 0 ; 0 0 ] : R(T^n) ⊕ N(T^n) → R(T^n) ⊕ N(T^n).

A direct verification shows that T^n = T^n S T, S = STS and ST = TS.

Conversely, suppose that there exists S ∈ B(X) such that T^n = T^n S T, S = STS and ST = TS. Then, from

R(T^n) = R(T^n S T) = R(T^(n+1) S) ⊆ R(T^(n+1)) ⊆ R(T^n),

we have R(T^n) = R(T^(n+1)). Also, from

N(T^n) = N(T^n S T) = N(S T^(n+1)) ⊇ N(T^(n+1)) ⊇ N(T^n),

we have N(T^n) = N(T^(n+1)). Now, since both chains stabilize at n, the result follows from Proposition 1 applied to T^n.

Remark. (i) We say that T ∈ B(X) is Drazin invertible if there exist S ∈ B(X) and n ∈ ℕ such that

T^n = S T^(n+1), S = STS, TS = ST.

The least n that satisfies the above condition is called the Drazin index of T. In this case, we say that S is the Drazin inverse for T. We denote the Drazin inverse by S = T^d.

(ii) Notice that from S = STS we get that (TS)^2 = TS and (ST)^2 = ST.

(iii) The Drazin inverse is unique if it exists. Indeed, suppose T ∈ B(X) is Drazin invertible and S and S̃ are Drazin inverses for T. Then

S = STS = S(TS)^n = S^(n+1) T^n = S^(n+1) T^n S̃ T = S^(n+1) T^n (S̃T)^(n+1) = S^(n+1) T^(2n+1) S̃^(n+1).

In a similar way, S̃ = S^(n+1) T^(2n+1) S̃^(n+1), and S = S̃.

The group and the Drazin inverses are related as follows:

Theorem 10. T is Drazin invertible if and only if T^n is group invertible for some n ∈ ℕ.

Proof. Suppose T is Drazin invertible with Drazin inverse S and Drazin index n. Then T^n = T^n S T = T^n (ST)^n = T^n S^n T^n, S^n = S^n T^n S^n and T^n S^n = S^n T^n, so S^n is the group inverse of T^n.

Conversely, suppose T^n is group invertible for some n ∈ ℕ. Then, by Theorem 3 and Proposition 1 applied to T^n, X = R(T^n) ⊕ N(T^n). From Theorem 9, T is Drazin invertible.
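The Drazin identities can be checked on the same hypothetical block matrix used earlier (an invertible 1 × 1 block plus a 2 × 2 nilpotent Jordan block): the Drazin inverse inverts the invertible block and sends the nilpotent block to 0, and the index is 2.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

F = Fraction
T = [[F(2), F(0), F(0)],   # invertible block T1 = (2)
     [F(0), F(0), F(1)],   # nilpotent block T2 with T2^2 = 0
     [F(0), F(0), F(0)]]
# candidate Drazin inverse: invert T1, zero out the nilpotent block
S = [[F(1, 2), F(0), F(0)],
     [F(0), F(0), F(0)],
     [F(0), F(0), F(0)]]

T2 = matmul(T, T)
T3 = matmul(T2, T)
assert matmul(S, T3) == T2             # T^2 = S T^3: index <= 2
assert matmul(S, T2) != T              # T != S T^2: index is exactly 2
assert matmul(S, matmul(T, S)) == S    # S = S T S
assert matmul(S, T) == matmul(T, S)    # S T = T S
```

Consistently with Theorem 10, T^2 = diag(4, 0, 0) is group invertible with group inverse S^2 = diag(1/4, 0, 0).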

Now we characterize the Drazin invertibility using the spectrum.

Theorem 11. Let T ∈ B(X). The operator T is Drazin invertible with index n if and only if 0 is a pole of the resolvent of order n.

Proof. Suppose T is Drazin invertible of index n. Then we have the following matrix form for T:

T = [ T1 0 ; 0 T2 ] : R(T^n) ⊕ N(T^n) → R(T^n) ⊕ N(T^n),

with T1 invertible and T2 nilpotent.

Suppose R_λ(T) has a Laurent expansion at 0 given by

R_λ(T) = Σ_{k=−∞}^{+∞} B_k λ^k.

Since T1 is invertible, we have 0 ∈ ρ(T1); thus the resolvent function (T1 − λI)^(-1) is analytic at 0, which implies that the principal part of the Laurent series of (T1 − λI)^(-1) has no terms. On the other hand, since T2 is nilpotent of degree n, the principal part of the Laurent series of (T2 − λI)^(-1) has n terms.

Now, since R(T^n) and N(T^n) decompose T, they decompose B_k for every k. Thus, the principal part of the Laurent series of (T − λI)^(-1) has exactly n terms. Therefore 0 is a pole of order n of the resolvent function R_λ(T).

Conversely, suppose 0 is a pole of the resolvent of order n. Let P := P({0}) be the spectral projection associated with T and the spectral set {0}. Then X = N(P) ⊕ R(P). Moreover, since TP = PT, we have that R(P) and N(P) decompose T. Thus, we have the following matrix form:

T = [ T1 0 ; 0 T2 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

Since σ(T|N(P)) = σ(T) \ {0}, we have that T1 is invertible. Since σ(T|R(P)) = {0} and 0 is a pole of order n, we have T2^n = 0. Now, define the operator S by

S = [ T1^(-1) 0 ; 0 0 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

A direct verification shows that S is the Drazin inverse for T.

Slight modifications in the proofs of Theorems 5 and 6 give us the following:

Theorem 12. (i) T is Drazin invertible if and only if there exists a projection P such that T + P is invertible and TP = PT is nilpotent. Moreover, P is the spectral projection of the spectral set {0} associated to T.

(ii) The operator T is Drazin invertible if and only if there exist two closed invariant subspaces M and N for T such that X = M ⊕ N, T|M is invertible and T|N is nilpotent.

5. Isolated Points of the Spectrum

Let T ∈ B(X). The quasinilpotent part of T is defined by

H_0(T) = { x ∈ X : lim_{n→∞} ||T^n x||^(1/n) = 0 }

and the analytical core is the set

K(T) = { x ∈ X : there exist a sequence (u_n) ⊆ X and a constant δ > 0 such that x = u_0, T u_(n+1) = u_n and ||u_n|| ≤ δ^n ||x|| for all n }.

It is known that both H_0(T) and K(T) are (not necessarily closed) T-invariant subspaces of X, with T(K(T)) = K(T). Furthermore, 0 is an isolated point of σ(T) if and only if X = H_0(T) ⊕ K(T) with K(T) closed. Additionally, in this case R(P({0})) = H_0(T) and N(P({0})) = K(T) (for more details, see [5] ).
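The defining growth condition of H_0(T) can be probed numerically. A hypothetical 2 × 2 illustration: for T = [[2, 1], [0, 0]] the spectrum is {0, 2} and 0 is isolated; the vector (1, −2) spans N(T) ⊆ H_0(T), while (1, 0) has ||T^n x||^(1/n) → 2 and so lies outside H_0(T).

```python
def apply(T, x):
    # matrix-vector product for a matrix given as nested lists
    return [sum(t * xi for t, xi in zip(row, x)) for row in T]

def growth(T, x, n=30):
    # estimate lim ||T^n x||^(1/n) using the max-norm
    for _ in range(n):
        x = apply(T, x)
    return max(abs(c) for c in x) ** (1.0 / n)

T = [[2.0, 1.0], [0.0, 0.0]]
g_in = growth(T, [1.0, -2.0])   # 0.0: this vector is in the quasinilpotent part
g_out = growth(T, [1.0, 0.0])   # close to 2.0: this vector is not
```

Here K(T) is the span of (1, 0) (the 2-eigenspace), so X = H_0(T) ⊕ K(T), matching the equivalence above.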

Theorem 13. Let T ∈ B(X). If X = H_0(T) ⊕ K(T) with K(T) closed, then the restriction T1 = T|K(T) : K(T) → K(T) is invertible, T2 = T|H_0(T) : H_0(T) → H_0(T) is quasinilpotent, and we have the following matrix form for T:

T = [ T1 0 ; 0 T2 ] : K(T) ⊕ H_0(T) → K(T) ⊕ H_0(T).

Moreover, for the operator T^D defined by

T^D = [ T1^(-1) 0 ; 0 0 ] : K(T) ⊕ H_0(T) → K(T) ⊕ H_0(T),

we have

T(I − T^D T) is quasinilpotent, T^D = T^D T T^D, T T^D = T^D T.

Proof. From N(T) ∩ K(T) = {0} we have that T1 is one-to-one, and from T(K(T)) = K(T) we have that T1 is onto. Therefore T1 is invertible.

Let x ∈ H_0(T); then lim_{n→∞} ||T2^n x||^(1/n) = 0, thus T2 is quasinilpotent.

Finally, from T(K(T)) ⊆ K(T) and T(H_0(T)) ⊆ H_0(T), the matrix form follows.

Definition 2. We say that T ∈ B(X) is Koliha-Drazin invertible if there exists S ∈ B(X) such that

T(I − ST) is quasinilpotent, S = STS, TS = ST.

In this case, we say that S is the Koliha-Drazin inverse for T. We denote the Koliha-Drazin inverse by S = T^D.

The next theorem gives some basic properties of the Koliha-Drazin inverse.

Theorem 14. (i) The Koliha-Drazin inverse is unique if it exists.

(ii) The operator T is Koliha-Drazin invertible if and only if 0 is an isolated point of the spectrum.

(iii) The operator T is Koliha-Drazin invertible if and only if there exist two closed invariant subspaces M and N for T such that X = M ⊕ N, T|M is invertible and T|N is quasinilpotent.

6. Clusters

Let r(T, x) := lim sup_{n→∞} ||T^n x||^(1/n), and let S(T, x) be the set of all sequences (x_n) in X such that T x_(n+1) = x_n for all n ≥ 1 and T x_1 = x.

Let T ∈ B(X) and r > 0. The generalized quasinilpotent part and the generalized analytic core are defined by

H_r(T) := { x ∈ X : r(T, x) < r },

K_r(T) := { x ∈ X : there exists (x_n) ∈ S(T, x) with lim sup_{n→∞} ||x_n||^(1/n) < r^(-1) }.

We have that T(H_r(T)) = H_r(T) and T(K_r(T)) = K_r(T).

Theorem 15. Let T ∈ B(X) and r > 0. If X = H_r(T) ⊕ K_r(T), then the restriction T1 = T|K_r(T) : K_r(T) → K_r(T) is invertible, T2 = T|H_r(T) : H_r(T) → H_r(T) satisfies σ(T1) ∩ σ(T2) = ∅, and we have the following matrix form for T:

T = [ T1 0 ; 0 T2 ] : K_r(T) ⊕ H_r(T) → K_r(T) ⊕ H_r(T).

Proof. Let T1 := T|K_r(T) and T2 := T|H_r(T). Since N(T) ⊆ H_r(T), it follows that T1 is one-to-one. Also, since T(K_r(T)) = K_r(T), it follows that T1 is onto. Hence T1 : K_r(T) → K_r(T) is invertible.

For every x ∈ H_r(T) we have lim sup_{n→∞} ||T^n x||^(1/n) < r, and the spectral radius satisfies r(T2) < r. Now let x ∈ K_r(T) and let (x_n) ∈ S(T, x). Since T1^n x_n = x and T1^n is invertible, we have lim sup_{n→∞} ||T1^(-n) x||^(1/n) = lim sup_{n→∞} ||x_n||^(1/n) < r^(-1). Then, for every λ ∈ σ(T1), we have |λ| > r. Therefore σ(T1) ∩ σ(T2) = ∅.

Finally, from T(H_r(T)) = H_r(T) and T(K_r(T)) = K_r(T), the matrix form follows.

Theorem 16. Let T ∈ B(X) and r > 0. If X = H_r(T) ⊕ K_r(T), then there exists S such that σ(T − TST) ∩ σ(TST) = {0}, S = STS and TS = ST.

Proof. Since X = H_r(T) ⊕ K_r(T), we have

T = [ T1 0 ; 0 T2 ] : K_r(T) ⊕ H_r(T) → K_r(T) ⊕ H_r(T),

with T1 invertible and σ(T1) ∩ σ(T2) = ∅. Let S ∈ B(X) be defined by

S = [ T1^(-1) 0 ; 0 0 ] : K_r(T) ⊕ H_r(T) → K_r(T) ⊕ H_r(T).

Then a direct verification shows S = STS and ST = TS. From S = STS it follows that ST and I − ST are projections, and the matrix form shows that R(ST) = K_r(T) and R(I − ST) = H_r(T). Then T(ST) = T1 ⊕ 0 and T(I − ST) = 0 ⊕ T2. Thus σ(TST) = σ(T1) ∪ {0} and σ(T − TST) = σ(T2) ∪ {0}. Since σ(T1) ∩ σ(T2) = ∅, it follows that σ(TST) ∩ σ(T − TST) = {0}.

We say that T ∈ B(X) is Λ-Drazin invertible if there exists S ∈ B(X) such that

σ(T − TST) ∩ σ(TST) = {0}, S = STS,

TS = ST, and σ(T − TST) = Λ.

In this case, we say that S is the Λ-Drazin inverse for T. We denote the Λ-Drazin inverse by S = T^(D,Λ) (for more details, see [6] ).

Theorem 17. Let T ∈ B(X). The operator T is Λ-Drazin invertible if and only if Λ is a spectral set for T such that 0 ∈ Λ.

Proof. Suppose T is Λ-Drazin invertible and let S = T^(D,Λ). Then TS and I − TS are projections which commute with T. It follows that X = R(TS) ⊕ R(I − TS), and R(TS) and R(I − TS) are invariant for T.

Let T1 := T|R(TS) and T2 := T|R(I − TS). If x ∈ N(T1), then x ∈ N(T) ∩ R(TS); thus x = TSx = STx = 0. Hence T1 is one-to-one. Now, if y ∈ R(TS), then from Sy = STSy = TSSy we have Sy ∈ R(TS). Let x = Sy; then Tx = TSy = y. Hence T1 is onto. It follows that T1 is invertible.

Since R(TS) and R(I − TS) decompose T, we have σ(T) = σ(T1) ∪ σ(T2). Also, σ(TST) = σ(T1) ∪ {0} and σ(T − TST) = σ(T2) ∪ {0}. From σ(TST) ∩ σ(T − TST) = {0} and 0 ∉ σ(T1), it follows that σ(T1) ∩ σ(T2) = ∅. Hence σ(T1) and σ(T2) are disjoint spectral sets for T.

Now, if x ∈ N(T), then (I − TS)x = x and x ∈ R(I − TS). It follows that N(T) ⊆ N(T|R(I − TS)) = N(T2). Thus 0 ∈ σ(T2) ∪ {0} = σ(T − TST) = Λ. Hence Λ is a spectral set of σ(T) and 0 ∈ Λ.

Conversely, suppose Λ is a spectral set for T with 0 ∈ Λ. Let P = P(Λ) be the spectral projection of the spectral set Λ associated to T; then we have the following matrix form:

T = [ T1 0 ; 0 T2 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

Since σ(T1) = σ(T|N(P)) = σ(T) \ Λ and 0 ∈ Λ, we have that T1 is invertible. Let S ∈ B(X) be defined by

S = [ T1^(-1) 0 ; 0 0 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P).

Then a direct verification shows S = STS, TS = ST and P = I − ST. Since T − TST = T(I − ST) = TP and

TP = [ 0 0 ; 0 T2 ],

we have σ(T − TST) = {0} ∪ σ(T2) = {0} ∪ σ(T|R(P)) = {0} ∪ Λ = Λ. Also, since TST = T(I − P) and

T(I − P) = [ T1 0 ; 0 0 ],

we have σ(TST) = σ(T(I − P)) = σ(T1) ∪ {0}. Finally, since σ(T1) and σ(T2) are disjoint, we have σ(T − TST) ∩ σ(TST) = {0}.

Remark. From the proof of Theorem 15 we see that if for some r > 0 we have X = K_r(T) ⊕ H_r(T), then σ(T2) is contained in the disc { λ : |λ| < r } and σ(T1) is contained in the annulus { λ : r < |λ| }. To see that the converse also holds, suppose that for some r > 0 the circle { λ : |λ| = r } does not meet σ(T), so that Λ := σ(T) ∩ { λ : |λ| < r } is a spectral set for T, and let P be the spectral projection associated to T and Λ. Then T is Λ-Drazin invertible with inverse S = T^(D,Λ). From the matrix forms

T = [ T1 0 ; 0 T2 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P),

S = [ T1^(-1) 0 ; 0 0 ] : N(P) ⊕ R(P) → N(P) ⊕ R(P),

we see that r(T2) < r and r(S) = r(T1^(-1)) < r^(-1). We will show that N(P) = K_r(T) and R(P) = H_r(T).

N(P) ⊆ K_r(T): let x ∈ N(P) and let x_n := S^n x. Then T x_(n+1) = T S^(n+1) x = (TS) S^n x = S^n x = x_n and T x_1 = TSx = x, while lim sup_n ||x_n||^(1/n) = lim sup_n ||S^n x||^(1/n) ≤ r(S) < r^(-1). Hence x ∈ K_r(T). K_r(T) ⊆ N(P): let x ∈ K_r(T) and let (x_n) ∈ S(T, x) with lim sup_n ||x_n||^(1/n) < r^(-1). Then, since x = T^n x_n, lim sup_n ||Px||^(1/n) = lim sup_n ||(TP)^n P x_n||^(1/n) ≤ r(TP) lim sup_n ||x_n||^(1/n) ≤ r(T2) r^(-1) < r r^(-1) = 1. Since ||Px||^(1/n) → 1 whenever Px ≠ 0, it follows that Px = 0 and x ∈ N(P).

R(P) ⊆ H_r(T): let x ∈ R(P); then lim sup_n ||T^n x||^(1/n) = lim sup_n ||T2^n x||^(1/n) ≤ r(T2) < r, so x ∈ H_r(T). H_r(T) ⊆ R(P): let x ∈ H_r(T). Since I − P = TS = (TS)^n = T^n S^n, we have lim sup_n ||(I − P)x||^(1/n) = lim sup_n ||S^n T^n x||^(1/n) ≤ r(S) r(T, x) < r^(-1) r = 1. Hence (I − P)x = 0 and x ∈ R(P).

7. Conclusions and Final Remarks

Let T ∈ B(X). If there exists S such that T = TST, we say that S is an inner inverse for T. If S = STS holds, then S is called an outer inverse for T. If S is an inner inverse for T, then STS is an outer inverse for T, so inner invertibility implies outer invertibility. If S is both an inner and an outer inverse for T, then S is called a reflexive inverse for T. Neither inner nor outer inverses are unique. However, if a reflexive inverse for T commutes with T, then it is unique (since it is the group inverse for T).

Outer inverses are not unique. However, if we prescribe the range and null space of the outer inverse, it becomes unique. All inverses discussed in this paper are classes of outer inverses, and in every class the range and null space of the outer inverse coincide with the null space and range, respectively, of the spectral projection associated to a spectral set containing the point 0.

If T is invertible, then λ ∈ σ(T) if and only if λ^(-1) ∈ σ(T^(-1)); note that in this case λ ≠ 0. Now, if T is Λ-Drazin invertible for Λ = {0}, from the matrix form we can write T = T1 ⊕ T2 and T^(D,Λ) = T1^(-1) ⊕ 0 with σ(T1) = σ(T) \ {0} and σ(T2) = {0}. Since σ(T) = σ(T1) ∪ σ(T2), σ(T1) ∩ σ(T2) = ∅ and σ(T^(D,Λ)) = σ(T1^(-1)) ∪ {0}, it follows that for every λ ≠ 0,

λ ∈ σ(T^(D,Λ)) if and only if λ^(-1) ∈ σ(T).

Generalized inverses satisfying the above condition are called spectral inverses. Thus, the group, Drazin and Koliha-Drazin inverses are spectral inverses.
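The reciprocal relation can be checked on a hypothetical diagonal example, where the Λ-Drazin inverse for Λ = {0} simply inverts the nonzero diagonal entries:

```python
from fractions import Fraction

F = Fraction
# T = diag(3, 2, 0) with Λ = {0}; its Drazin (= group) inverse is diag(1/3, 1/2, 0)
T = [F(3), F(2), F(0)]
TD = [F(1, 3), F(1, 2), F(0)]

nonzero = lambda d: {x for x in d if x != 0}
# nonzero spectra are reciprocal: λ ∈ σ(T^D) \ {0} iff 1/λ ∈ σ(T) \ {0}
assert nonzero(TD) == {F(1) / x for x in nonzero(T)}
```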

Acknowledgements

This work was partially supported by CONAHCYT (Mexico).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Djordjević, D.S. and Rakočević, V. (2008) Lectures on Generalized Inverses. University of Niš, Niš.
[2] Caradus, S.R., Pfaffenberger, W.E. and Yood, B. (1974) Calkin Algebras and Algebras of Operators on Banach Spaces. Marcel Dekker, Inc., New York.
https://doi.org/10.1201/9781315138718
[3] Alali, A., Ali, S., Hassan, N., Sahng, Y., Mahnashi, A.M. and Assiry, A. (2023) Algebraic Structure Graphs over the Commutative Ring Zm: Exploring Topological Indices and Entropies Using M-Polynomials. Mathematics, 11, 3833.
https://doi.org/10.3390/math11183833
[4] Sharma, M., Rajat, K. and Shang, Y. (2021) On g-Noncommuting Graph of a Finite Group Relative to Its Subgroups. Mathematics, 9, 3147.
https://doi.org/10.3390/math9233147
[5] Aiena, P. (2004) Fredholm and Local Spectral Theory, with Applications to Multipliers. Kluwer Academic Publishers, Dordrecht.
https://doi.org/10.1007/1-4020-2525-4
[6] Dajic, A. and Koliha, J.J. (2007) The σg-Drazin Inverse and the Generalized Mbekhta Decomposition. Integral Equations and Operator Theory, 57, 309-326.
https://doi.org/10.1007/s00020-006-1454-0

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.