Introduction to Lie Algebras and Their Representations

Abstract

This paper grew out of my desire to contribute to this beautiful field of mathematics, which I have encountered in recent years. I am not aware of any papers on Lie algebras from our Balkan region; the present work is only an introduction, and in the near future, in collaboration with several professors from abroad, I intend to write a book in our mother tongue on Lie groups and Lie algebras. The main content of this paper follows the books that have been published on Lie algebras, from basic definitions and examples, through structure theory and the Killing form, to the classification and root systems. In my opinion, this paper matters for the study of Lie algebras because it will be helpful to all those who write papers on algebra, and because the planned book will be written in Montenegrin, which is understood by some 70 percent of the population of the region. For me, this work has the significance of being useful to all who need it.


1. Introduction

[1] Lie theory has its roots in the work of Sophus Lie, who studied certain transformation groups that are now called Lie groups. His work led to the discovery of Lie algebras. By now, both Lie groups and Lie algebras have become essential to many parts of mathematics and theoretical physics. In the meantime, Lie algebras have become a central object of interest in their own right, not least because of their description by the Serre relations, whose generalizations have been very important.

The study of Lie algebras requires a thorough understanding of linear algebra and of group and ring theory.

Definition 1.1. A nonempty set $G$ equipped with an operation on it is said to form a group under that operation if the operation obeys the following laws, called the group axioms:

● Closure: For any $a, b \in G$, we have $ab \in G$.

● Associativity: For any $a, b, c \in G$, we have $a(bc) = (ab)c$.

● Identity: There exists an element $e \in G$ such that for all $a \in G$ we have $ae = ea = a$. Such an element $e \in G$ is called the identity in $G$.

● Inverse: For each $a \in G$ there exists an element $a^{-1} \in G$ such that $aa^{-1} = a^{-1}a = e$. Such an element $a^{-1} \in G$ is called the inverse of $a$ in $G$.

Definition 1.2. Let $M_{m,n}(k)$ be the set of $m \times n$ matrices whose entries are in a field $k$. We will denote the $(i, j)$ entry of an $m \times n$ matrix $A$ by $A_{ij}$ or $a_{ij}$, and also write

$$A = [a_{ij}] = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$$

We will use the special notations

$$M_n(k) = M_{n,n}(k), \qquad k^n = M_{n,1}(k).$$

$M_{m,n}(k)$ is a $k$-vector space with the operations of matrix addition and scalar multiplication. The zero vector is the $m \times n$ zero matrix $P_{m,n}$, which we will often denote by $P$ when the size is clear from the context. The matrices $E_{rs}$ with $r = 1, \ldots, m$; $s = 1, \ldots, n$ and

$$(E_{rs})_{ij} = \delta_{ir}\delta_{js} = \begin{cases} 1 & \text{if } i = r \text{ and } j = s \\ 0 & \text{otherwise} \end{cases}$$

form a basis of $M_{m,n}(k)$; hence its dimension as a $k$-vector space is

$$\dim_k M_{m,n}(k) = mn$$

We will denote the standard basis vectors of $k^n = M_{n,1}(k)$ by

$$e_r = E_{r1} \quad (r = 1, \ldots, n).$$

As well as being a $k$-vector space of dimension $n^2$, $M_n(k)$ is also a ring with the usual addition and multiplication of square matrices, with zero $P_n = P_{n,n}$ and the $n \times n$ identity matrix $I_n$ as its unity; $M_n(k)$ is not commutative except when $n = 1$. Later we will see that $M_n(k)$ is also an important example of a finite-dimensional $k$-algebra. The ring $M_n(k)$ acts on $k^n$ by left multiplication, giving $k^n$ the structure of a left $M_n(k)$-module.

Definition 1.3: But $M_n(k)$ is not just a vector space. It also has a multiplication which distributes over addition (on either side):

$$A(B + C) = AB + AC$$

$$(B + C)A = BA + CA$$

Such a system is called an algebra. When we use the word algebra we will always mean one with a two-sided multiplicative identity. For $M_n(k)$,

$$I = \begin{pmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{pmatrix}$$

is the multiplicative identity.

Definition 1.4: If $G$ is an algebra, $x \in G$ is a unit if there exists some $y \in G$ such that $xy = 1 = yx$, i.e., if it has a multiplicative inverse.

Definition 1.5: The group of units in the algebra $M_n(\mathbb{R})$ is denoted by $GL(n, \mathbb{R})$, in $M_n(\mathbb{C})$ by $GL(n, \mathbb{C})$, and in $M_n(k)$ by $GL(n, k)$. These are the general linear groups.

Proposition 1.6. The determinant function $\det : M_n(k) \to k$ has the following properties.

1) For $A, B \in M_n(k)$, $\det(AB) = \det A \det B$.

2) $\det I_n = 1$.

3) $A \in M_n(k)$ is invertible if and only if $\det A \neq 0$.

We will use the notation

$$GL_n(k) = \{ A \in M_n(k) : \det A \neq 0 \}$$

for the set of invertible $n \times n$ matrices (also known as the set of units of the ring $M_n(k)$), and

$$SL_n(k) = \{ A \in M_n(k) : \det A = 1 \} \subseteq GL_n(k)$$

for the set of $n \times n$ unimodular matrices.

Definition 1.7: A matrix $A \in M_{n \times n}(F)$ is called orthogonal if $AA^t = I$.

Theorem 1.8: If $A$ is a square matrix of order $n > 1$, then $A(\operatorname{adj} A) = (\operatorname{adj} A)A = |A| I_n$. The $(i, j)$th element of $A(\operatorname{adj} A)$ is:

$$\sum_{k=1}^{n} a_{ik} A_{jk} = \begin{cases} |A| & i = j \\ 0 & i \neq j \end{cases}$$

where $A_{jk}$ denotes the cofactor of $a_{jk}$. Therefore $A(\operatorname{adj} A)$ is a scalar matrix with diagonal elements all equal to $|A|$:

$$A(\operatorname{adj} A) = |A| I_n$$

$$(\operatorname{adj} A)A = |A| I_n$$

where $|A|$ represents the determinant of $A$.

Proposition 1.9. When $|A| \neq 0$, the following equation gives the inverse of $A$:

$$A^{-1} = \frac{\operatorname{adj} A}{|A|}$$
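As a concrete illustration of Theorem 1.8 and Proposition 1.9, here is a minimal Python sketch (assuming numpy is available; it is not part of the original exposition) that builds the adjugate from cofactors and checks both identities:

```python
import numpy as np

def adjugate(A):
    """adj(A)[i, j] is the (j, i) cofactor, i.e. the transpose of the cofactor matrix."""
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor obtained by deleting row j and column i of A.
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(2)))    # A adj(A) = |A| I_n
print(np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A)))  # A^{-1} = adj(A)/|A|
```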

2. Basic Definitions and First Examples

The study of Lie algebras requires a thorough understanding of linear algebra and of group and ring theory. The following provides a cursory review of these subjects as they will appear within the scope of this paper.

Definition 2.1. A vector space $V$ over a field $F$ is a Lie algebra if there exists a bilinear multiplication $V \times V \to V$, denoted $(x, y) \mapsto [x, y]$, such that:

1) It is skew-symmetric: $[x, x] = 0$ for all $x$ in $V$ (this is equivalent to $[x, y] = -[y, x]$ since $\operatorname{char} F \neq 2$).

2) It satisfies the Jacobi identity: $[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0$ for all $x, y, z \in V$.

Example 2.2: Let $V$ be an $n$-dimensional vector space and let $\operatorname{End}(V)$ be the set of all linear maps $V \to V$, with the associative multiplication $(x, y) \mapsto xy$ given by functional composition; observe that $\operatorname{End}(V)$ is an associative algebra over $F$. Let us define a new operation on $\operatorname{End}(V)$ by $(x, y) \mapsto xy - yx$. If we denote $xy - yx$ by $[x, y]$, then $\operatorname{End}(V)$ together with the map $[\cdot\,, \cdot]$ satisfies Definition 2.1, and is thus a Lie algebra.

Proof. The first two bracket axioms are satisfied immediately. The only thing left to prove is the Jacobi identity. Given $x, y, z \in \operatorname{End}(V)$, we have by use of the bracket operation:

$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]]$$

$$= x(yz - zy) - (yz - zy)x + y(zx - xz) - (zx - xz)y + z(xy - yx) - (xy - yx)z$$

$$= xyz - xzy - yzx + zyx + yzx - yxz - zxy + xzy + zxy - zyx - xyz + yxz$$

$$= (xyz - xyz) + (xzy - xzy) + (yzx - yzx) + (zyx - zyx) + (yxz - yxz) + (zxy - zxy) = 0$$

Definition 2.3. The Lie algebra $\operatorname{End}(V)$ with bracket $[x, y] = xy - yx$ is denoted by $gl(V)$, the general linear algebra.
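This bracket is easy to experiment with numerically. The following short sketch (an illustration, assuming numpy) checks the Jacobi identity for the commutator on randomly chosen $3 \times 3$ matrices:

```python
import numpy as np

def bracket(x, y):
    """Commutator bracket [x, y] = xy - yx on matrices."""
    return x @ y - y @ x

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
print(np.allclose(jacobi, np.zeros((3, 3))))  # True: the Jacobi identity holds in gl(V)
```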

Example 2.4. We can show that the real vector space $\mathbb{R}^3$ is a Lie algebra under the cross product. Recall the following cross product properties, where $a$, $b$ and $c$ represent arbitrary vectors and $\alpha$, $\beta$ and $\gamma$ represent arbitrary scalars:

1) $a \times b = -(b \times a)$

2) $a \times (\beta b + \gamma c) = \beta(a \times b) + \gamma(a \times c)$ and

$(\alpha a + \beta b) \times c = \alpha(a \times c) + \beta(b \times c)$

Proof. Note, $a \times a = -(a \times a)$ by property (1), letting $b = a$; therefore $a \times a = 0$. By the above properties, the cross product is both skew-symmetric (property 1) and bilinear (property 2).

By the vector triple product expansion, $x \times (y \times z) = y(x \cdot z) - z(x \cdot y)$. To show that the cross product satisfies the Jacobi identity, we have:

$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = x \times (y \times z) + y \times (z \times x) + z \times (x \times y)$$

$$= [y(x \cdot z) - z(x \cdot y)] + [z(y \cdot x) - x(y \cdot z)] + [x(z \cdot y) - y(z \cdot x)] = 0$$

Example 2.5. The Lie algebra $sl(2, \mathbb{C})$ (or $A_1$): the $2 \times 2$ matrices of trace 0. A basis is given by the three matrices

$$H = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \quad X_+ = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad X_- = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$

One computes $[H, X_+] = 2X_+$, $[H, X_-] = -2X_-$, $[X_+, X_-] = H$. This Lie algebra and these relations will play a considerable role later on.

The standard skew-symmetric (exterior) form $\det[X\ Y] = x_1 y_2 - x_2 y_1$ on $\mathbb{C}^2$ is invariant under $sl(2, \mathbb{C})$ (precisely because of the vanishing of the trace), and so $sl(2, \mathbb{C})$ is identical with $sp(1, \mathbb{C})$.
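The commutation relations above can be confirmed by direct matrix computation; here is a minimal sketch for illustration (assuming numpy):

```python
import numpy as np

H  = np.array([[1, 0], [0, -1]])
Xp = np.array([[0, 1], [0, 0]])   # X_+
Xm = np.array([[0, 0], [1, 0]])   # X_-

bracket = lambda a, b: a @ b - b @ a

print(np.array_equal(bracket(H, Xp),  2 * Xp))  # [H, X+] =  2 X+
print(np.array_equal(bracket(H, Xm), -2 * Xm))  # [H, X-] = -2 X-
print(np.array_equal(bracket(Xp, Xm), H))       # [X+, X-] = H
```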

Example 2.6. L is itself a left L-module.

The left action of $L$ on $L$ is defined as $x \cdot y = [x, y]$. Then we have

$$[[x, y], z] = [x, [y, z]] - [y, [x, z]]$$

which is a consequence of the Jacobi identity. This shows that $L$ is a left $L$-module. This is called the adjoint module. We define $\operatorname{ad} x : L \to L$ by

$$\operatorname{ad} x \cdot y = [x, y] \quad \text{for } x, y \in L$$

Then we have

$$\operatorname{ad}[x, y] = \operatorname{ad} x \operatorname{ad} y - \operatorname{ad} y \operatorname{ad} x$$

Now let $V$ be a left $L$-module, $U$ be a subspace of $V$ and $H$ a subspace of $L$. We define $HU$ to be the subspace of $V$ spanned by all elements of the form $xu$ for $x \in H$, $u \in U$.

A submodule of $V$ is a subspace $U$ of $V$ such that $LU \subseteq U$. In particular $V$ is a submodule of $V$ and the zero subspace $O = \{0\}$ is a submodule of $V$. A proper submodule of $V$ is a submodule distinct from $V$ and $O$. An $L$-module $V$ is called irreducible if it has no proper submodules. $V$ is called completely reducible if it is a direct sum of irreducible submodules. $V$ is called indecomposable if $V$ cannot be written as a direct sum of two proper submodules. Of course every irreducible $L$-module is indecomposable, but the converse need not be true.

We may also define right L-modules, but we shall mainly work with left L-modules, and L-modules will be assumed to be left L-modules unless otherwise stated.

Example 2.7. Let $o(n)$ be the subspace of $gl(n)$ consisting of skew-symmetric matrices, that is, $A^T = -A$. Then

$$(AB - BA)^T = B^T A^T - A^T B^T = (-B)(-A) - (-A)(-B) = -(AB - BA)$$

so that $o(n)$ is closed under $[A, B] = AB - BA$, and hence is a Lie algebra.

To define a Lie bracket on a vector space with basis $e_1, \ldots, e_n$ we need to specify the structure constants $c_{lm}^r$, that is, elements of $k$ such that

$$[e_l, e_m] = \sum_{r=1}^{n} c_{lm}^r e_r$$

For example,

$$H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad X_+ = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad X_- = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

is a basis of the vector space $sl(2)$.
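To make the definition concrete, one can extract the structure constants of $sl(2)$ in the basis $(H, X_+, X_-)$ by expanding each bracket in that basis. A sketch for illustration (assuming numpy; the coordinate extraction via least squares is one possible implementation choice):

```python
import numpy as np

H  = np.array([[1., 0.], [0., -1.]])
Xp = np.array([[0., 1.], [0., 0.]])
Xm = np.array([[0., 0.], [1., 0.]])
basis = [H, Xp, Xm]

# Columns of B are the flattened basis matrices; solving B c = vec(m)
# expresses a traceless matrix m in the chosen basis.
B = np.column_stack([b.ravel() for b in basis])
coords = lambda m: np.linalg.lstsq(B, m.ravel(), rcond=None)[0]

# c[l][m] holds the coefficients c^r_{lm} in [e_l, e_m] = sum_r c^r_{lm} e_r.
c = [[coords(a @ b - b @ a) for b in basis] for a in basis]
print(np.round(c[0][1], 6))  # [H, X+]  -> (0, 2, 0), i.e. 2 X+
print(np.round(c[1][2], 6))  # [X+, X-] -> (1, 0, 0), i.e. H
```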

Definition 2.8. If $\mathfrak{g}$ and $\mathfrak{g}'$ are Lie algebras, a homomorphism $\varphi : \mathfrak{g} \to \mathfrak{g}'$ is a linear map such that

$$\varphi([x, y]) = [\varphi(x), \varphi(y)] \quad \text{for all } x, y \in \mathfrak{g}$$

As usual, the identity map is a homomorphism, and the composition of two homomorphisms is another homomorphism.

Definition 2.9. A bijective homomorphism is an isomorphism (its inverse is clearly also an isomorphism). We write $\mathfrak{g} \cong \mathfrak{g}'$ to mean that $\mathfrak{g}$ and $\mathfrak{g}'$ are isomorphic, i.e. there exists an isomorphism between them.

Proposition 2.10. [2] Any two-dimensional Lie algebra is either abelian or is isomorphic to $l_2$.

Proof. Let $\mathfrak{g}$ be a two-dimensional non-abelian Lie algebra with basis $x, y$. We need to find another basis $u, v$ of $\mathfrak{g}$ such that $[u, v] = v$. Suppose that $[x, y] = ax + by$. At least one of $a$ and $b$ is nonzero, since otherwise $\mathfrak{g}$ would be abelian. If $b \neq 0$, let $u = b^{-1}x$ and $v = ax + by$. Then

$$[u, v] = b^{-1}[x, ax + by] = [x, y] = ax + by = v.$$

If $b = 0$, let $u = -a^{-1}y$, $v = x$. Then

$$[u, v] = -a^{-1}[y, x] = a^{-1}[x, y] = a^{-1} \cdot ax = x = v.$$

So, in either case we are done.

The most important three-dimensional Lie algebra is $sl_2$, consisting of all $2 \times 2$ matrices with trace 0. The standard basis of $sl_2$ is $e, h, f$, where

$$e = e_{12} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad h = e_{11} - e_{22} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad f = e_{21} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

The Lie bracket is completely specified by the brackets of these basis elements, which are given by:

Proposition 2.11. [2]

$$[e, f] = h, \quad [h, e] = 2e, \quad [h, f] = -2f.$$

Proof: We get

$$[e, f] = [e_{12}, e_{21}] = e_{11} - e_{22} = h,$$

$$[h, e] = [e_{11}, e_{12}] - [e_{22}, e_{12}] = e_{12} + e_{12} = 2e,$$

$$[h, f] = [e_{11}, e_{21}] - [e_{22}, e_{21}] = -e_{21} - e_{21} = -2f.$$

Example 2.12. Suppose that $\mathfrak{g}$ is a three-dimensional Lie algebra with basis $x_1, x_2, x_3$ such that

$$[x_1, x_2] = x_1 + x_2,$$

$$[x_1, x_3] = ax_1 + x_3,$$

$$[x_2, x_3] = x_2 + bx_3,$$

for some $a, b \in \mathbb{C}$. Determine $a$ and $b$.
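Taking the printed brackets at face value (signs could conceivably have been lost in typesetting), one can let a computer impose the Jacobi identity on the structure constants and solve for $a$ and $b$. A sketch using sympy (an illustration, not part of the source):

```python
import sympy as sp

a, b = sp.symbols('a b')
# Brackets as printed: [x1,x2] = x1 + x2, [x1,x3] = a*x1 + x3, [x2,x3] = x2 + b*x3,
# stored as coefficient vectors and extended bilinearly / antisymmetrically.
table = {(0, 1): [1, 1, 0], (0, 2): [a, 0, 1], (1, 2): [0, 1, b]}

def br(u, v):
    """Bracket of coordinate vectors u, v in the basis x1, x2, x3."""
    out = [0, 0, 0]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            coeffs = table[(i, j)] if i < j else [-t for t in table[(j, i)]]
            for r in range(3):
                out[r] += u[i] * v[j] * coeffs[r]
    return out

x1, x2, x3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
t1, t2, t3 = br(x1, br(x2, x3)), br(x2, br(x3, x1)), br(x3, br(x1, x2))
jacobi = [sp.expand(t1[r] + t2[r] + t3[r]) for r in range(3)]
print(sp.solve(jacobi, [a, b]))  # the Jacobi identity forces a = 1, b = -1
```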

Example 2.13: Let $V$ be a vector space over $F$, and let $\langle \cdot\,, \cdot \rangle$ be a non-degenerate skew-symmetric bilinear form on $V$. The symplectic Lie algebra $sp(V)$ consists of the operators $T$ on $V$ that leave $\langle \cdot\,, \cdot \rangle$ infinitesimally invariant: $\langle TX, Y \rangle + \langle X, TY \rangle = 0$.

3. Basic Structure of a Lie Algebra

Subalgebra

Definition 3.1: A subspace $K$ of a Lie algebra $L$ is called a (Lie) subalgebra if $[x, y] \in K$ whenever $x, y \in K$.

Example 3.2: Important subalgebras of $gl_n$ are the following spans of subsets of the standard basis: the diagonal subalgebra

$$D_n = \left\{ \begin{pmatrix} * & & 0 \\ & \ddots & \\ 0 & & * \end{pmatrix} \right\} = \mathbb{C}\{ e_{ii} \mid 1 \leq i \leq n \}$$

the upper-triangular subalgebra

$$\alpha_n = \left\{ \begin{pmatrix} * & \cdots & * \\ & \ddots & \vdots \\ 0 & & * \end{pmatrix} \right\} = \mathbb{C}\{ e_{ij} \mid 1 \leq i \leq j \leq n \}$$

and the strictly upper-triangular subalgebra

$$\beta_n = \left\{ \begin{pmatrix} 0 & * & * \\ & \ddots & * \\ 0 & & 0 \end{pmatrix} \right\} = \mathbb{C}\{ e_{ij} \mid 1 \leq i < j \leq n \}$$

The dimensions of these Lie algebras are as follows:

$$\dim D_n = n, \quad \dim \alpha_n = \binom{n+1}{2}, \quad \dim \beta_n = \binom{n}{2}$$

Definition 3.3: A subspace $I$ of a Lie algebra $L$ is called an ideal of $L$ if $x \in I$, $y \in L$ together imply $[x, y] \in I$. The construction of ideals in Lie algebras is analogous to the construction of normal subgroups in group theory.

By skew-symmetry, all Lie algebra ideals are automatically two-sided: if $[x, y] \in I$, then $[y, x] = -[x, y] \in I$. The zero subspace of a Lie algebra $L$ and $L$ itself are trivial ideals contained in every Lie algebra.

Proposition 3.4: [3]

1) If $H$, $K$ are subalgebras of $L$ so is $H \cap K$.

2) If $H$, $K$ are ideals of $L$ so is $H \cap K$.

3) If $H$ is an ideal of $L$ and $K$ a subalgebra of $L$ then $H + K$ is a subalgebra of $L$.

4) If $H$, $K$ are ideals of $L$ then $H + K$ is an ideal of $L$.

Proof. 1) $H \cap K$ is a subspace of $L$ and $[H \cap K, H \cap K] \subseteq [H, H] \cap [K, K] \subseteq H \cap K$. Thus $H \cap K$ is a subalgebra.

2) This time we have $[H \cap K, L] \subseteq [H, L] \cap [K, L] \subseteq H \cap K$. Thus $H \cap K$ is an ideal of $L$.

3) $H + K$ is a subspace of $L$. Also $[H + K, H + K] \subseteq [H, H] + [H, K] + [K, H] + [K, K] \subseteq H + K$, since $[H, H] \subseteq H$, $[H, K] \subseteq H$, $[K, K] \subseteq K$. Thus $H + K$ is a subalgebra.

4) This time we have $[H + K, L] \subseteq [H, L] + [K, L] \subseteq H + K$. Thus $H + K$ is an ideal of $L$.

Derivations and homomorphisms

The derivative of a product of two functions $f$ and $g$ is a linear operation that satisfies the Leibniz rule:

1) $(fg)' = f'g + fg'$

2) $(\alpha f)' = \alpha f'$, where $\alpha$ is a scalar.

Given an algebra $A$ over a field $F$, a derivation of $A$ is a linear map $\delta$ such that $\delta(fg) = f\delta(g) + \delta(f)g$ for all $f, g \in A$. The set of all derivations of $A$ is denoted by $\operatorname{Der}(A)$. Given $\delta \in \operatorname{Der}(A)$, $f, g \in A$ and $\alpha \in F$,

$$(\alpha\delta)(fg) = \alpha\delta(fg) = \alpha(f\delta(g) + \delta(f)g) = \alpha f\delta(g) + \alpha\delta(f)g$$

so $\alpha\delta$ again satisfies the Leibniz rule (using that scalars commute: $\alpha f = f\alpha$, where $F$ is a field); hence $\operatorname{Der}(A)$ is closed under scalar multiplication.

Definition 3.5. Let $L$ be a Lie algebra over an arbitrary field $F$. Let $L^2$ and $Z(L)$ denote the derived algebra and the center of $L$, respectively. A derivation of $L$ is an $F$-linear transformation $\alpha : L \to L$ such that $\alpha([x, y]) = [\alpha(x), y] + [x, \alpha(y)]$ for all $x, y \in L$. We denote by $\operatorname{Der}(L)$ the vector space of all derivations of $L$, which itself forms a Lie algebra with respect to the commutator of linear transformations, called the derivation algebra of $L$. For all $x \in L$, the map $\operatorname{ad} x : L \to L$ given by $y \mapsto [x, y]$ is a derivation called the inner derivation corresponding to $x$. Clearly, the space $\operatorname{IDer}(L) = \{ \operatorname{ad} x \mid x \in L \}$ of inner derivations is an ideal of $\operatorname{Der}(L)$.

Proposition 3.6: Let $\mathfrak{g}$ be any Lie algebra. For any $x \in \mathfrak{g}$, define a linear transformation

$$\operatorname{ad}_\mathfrak{g}(x) : \mathfrak{g} \to \mathfrak{g}, \quad y \mapsto [x, y]$$

Then $\operatorname{ad}_\mathfrak{g} : \mathfrak{g} \to gl(\mathfrak{g})$ is a representation of $\mathfrak{g}$ on $\mathfrak{g}$ itself.

Proof: It is clear that $\operatorname{ad}_\mathfrak{g}$ is a linear map. We need to show that $\operatorname{ad}_\mathfrak{g}([x, y]) = [\operatorname{ad}_\mathfrak{g}(x), \operatorname{ad}_\mathfrak{g}(y)]$ for all $x, y \in \mathfrak{g}$, i.e. that

$$[[x, y], z] = [x, [y, z]] - [y, [x, z]] \quad \text{for all } x, y, z \in \mathfrak{g}.$$

This is, however, just a form of the Jacobi identity.

Definition 3.7: [4] The map $L \to \operatorname{Der} L$ sending $x$ to $\operatorname{ad} x$ is called the adjoint representation of $L$. This is akin to taking the ad homomorphism $\mathfrak{g} \to gl(\mathfrak{g})$. To show ad is a homomorphism, note that

$$\operatorname{ad}([x, y]) = [\operatorname{ad}(x), \operatorname{ad}(y)] = \operatorname{ad}(x)\operatorname{ad}(y) - \operatorname{ad}(y)\operatorname{ad}(x)$$

holds if and only if

$$[[x, y], z] = [x, [y, z]] - [y, [x, z]]$$

if and only if

$$0 = [x, [y, z]] + [y, [z, x]] + [z, [x, y]]$$

That is, if and only if the Jacobi identity is satisfied, where $[[x, y], z] = -[z, [x, y]]$ and $-[y, [x, z]] = [y, [z, x]]$ by skew-symmetry.

Example 3.8: [4] The set of all inner derivations $\operatorname{ad} x$, $x \in L$, is an ideal of $\operatorname{Der}(L)$. Let $\delta \in \operatorname{Der}(L)$. By definition of inner derivations, for all $y \in L$:

$$[\delta, \operatorname{ad} x](y) = (\delta(\operatorname{ad} x) - (\operatorname{ad} x)\delta)(y) = \delta[x, y] - \operatorname{ad} x(\delta(y)) = [\delta(x), y] + [x, \delta(y)] - [x, \delta(y)] = \operatorname{ad}(\delta(x))(y)$$

Therefore $[\delta, \operatorname{ad} x] = \operatorname{ad}(\delta(x))$ is again inner, and the set of inner derivations is an ideal of $\operatorname{Der}(L)$.

Example 3.9: [2] In the adjoint representation of $sl_2$, the element $e$ is represented by the linear transformation $\operatorname{ad}(e)$ of $sl_2$, which sends any element $x$ to $[e, x]$. On the standard basis, this linear transformation acts as follows:

$$\operatorname{ad}(e)e = [e, e] = 0, \quad \operatorname{ad}(e)h = [e, h] = -2e, \quad \operatorname{ad}(e)f = [e, f] = h.$$

Making similar calculations for $\operatorname{ad}(h)$ and $\operatorname{ad}(f)$, we deduce that the representing matrices (in the ordered basis $e, h, f$) are:

$$\operatorname{ad}(e) : \begin{pmatrix} 0 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad \operatorname{ad}(h) : \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{pmatrix}, \quad \operatorname{ad}(f) : \begin{pmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}$$

Since $Z(sl_2) = \{0\}$ (or alternatively because these matrices are clearly linearly independent), this is a faithful representation of $sl_2$.
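These representing matrices can be generated mechanically. A short sketch for illustration (assuming numpy; not taken from [2]) builds the matrix of $\operatorname{ad}(x)$ column by column in the ordered basis $(e, h, f)$:

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]
B = np.column_stack([b.ravel() for b in basis])
coords = lambda m: np.linalg.lstsq(B, m.ravel(), rcond=None)[0]

def ad(x):
    """Matrix of ad(x): columns are the coordinates of [x, b] for b in (e, h, f)."""
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

print(np.round(ad(e)))  # [[0,-2, 0], [0, 0, 1], [0, 0, 0]]
print(np.round(ad(h)))  # [[2, 0, 0], [0, 0, 0], [0, 0,-2]]
print(np.round(ad(f)))  # [[0, 0, 0], [-1,0, 0], [0, 2, 0]]
```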

Definition 3.10: An automorphism of $A$ is an isomorphism of $A$ onto itself. $\operatorname{Aut} A$ denotes the group of all such.

The set of inner automorphisms of a ring, or associative algebra $A$, is given by conjugation by a unit $a$, using right conjugation:

$$\phi_a : A \to A$$

$$\phi_a(x) = a^{-1}xa$$

Given $x, y \in A$:

$$\phi_a(xy) = a^{-1}(xy)a = a^{-1}xa\,a^{-1}ya = (a^{-1}xa)(a^{-1}ya) = \phi_a(x)\phi_a(y)$$

so $\phi_a$ is an (invertible) homomorphism, with inverse $\phi_{a^{-1}}$.

Example 3.11: [4] Let $L$ be a Lie algebra such that $L = sl(n, F)$, and let $g \in GL(n, F)$. The map $\phi : L \to L$ defined by $x \mapsto -gx^t g^{-1}$ ($x^t$ = the transpose of $x$) belongs to $\operatorname{Aut} L$. When $n = 2$ and $g$ = the identity matrix, we can prove that this automorphism is inner.

$$\operatorname{tr}(-gx^t g^{-1}) = -\operatorname{tr}(gg^{-1}x^t) = -\operatorname{tr}(x^t) = -\operatorname{tr}(x)$$

$$\operatorname{tr}(x) = 0 \Rightarrow \operatorname{tr}(-gx^t g^{-1}) = 0$$

Therefore, the map is a linear automorphism of $sl(n, F)$. If we apply the transpose to the commutator, for $x, y \in L$, we have:

$$[x, y]^t = (xy - yx)^t = (xy)^t - (yx)^t = y^t x^t - x^t y^t = [y^t, x^t]$$

Therefore:

$$\phi[x, y] = -g[x, y]^t g^{-1} = -g[y^t, x^t]g^{-1} = -g(y^t x^t - x^t y^t)g^{-1} = gx^t y^t g^{-1} - gy^t x^t g^{-1}$$

$$= gx^t g^{-1}gy^t g^{-1} - gy^t g^{-1}gx^t g^{-1} = [gx^t g^{-1}, gy^t g^{-1}] = [\phi(x), \phi(y)]$$

Therefore, $\phi$ is a homomorphism, hence an automorphism of $L$; for $n = 2$ and $g$ the identity, it is inner.

Cartan’s Criterion

Theorem 3.12. (Cartan’s Criterion). Let L be a subalgebra of gl(V), V finite-dimensional. Suppose that T r ( x y ) = 0 for all x [ L , L ] , y L , The L is solvable.

Proof: As remarked at the beginning of it will suffice to prove that [ L , L ] is nilpotent, or just that all x in [ L , L ] are nilpotent endomorphisms. For this we apply the above lemma to the situation: Vas given, A = [ L , L ] , B = L , so M = { x g l ( V ) | [ x , L ] [ L , L ] } . Obviously L M . Our hypothesis is that T r ( x y ) = 0 for x [ L , L ] , y L , whereas to conclude from the lemma that each x [ L , L ] is nilpotent we need the stronger statement: T r ( x y ) = 0 for x [ L , L ] , y M .

Now if [ x , y ] is a typical generator of [ L , L ] , and if z M , then identity (⋅) above shows that T r ( [ x , y ] z ) = T r ( x [ y , z ] ) = T r ( [ y , z ] x ) . By definition of M, [ y , z ] [ L , L ] , so the right side is 0 by hypothesis.

The Killing form

Definition 3.13. Let $L$ be a finite-dimensional Lie algebra over $F$. We define a map

$$L \times L \to F$$

$$x, y \mapsto \langle x, y \rangle$$

given by

$$\langle x, y \rangle = \operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}(y))$$

for $x, y \in L$. We refer to $\langle \cdot\,, \cdot \rangle$ as the Killing form on $L$.

Proposition 3.14. Let $L$ be a finite-dimensional Lie algebra over $F$. The Killing form on $L$ is a symmetric bilinear form. Moreover, we have

$$\langle [x, y], z \rangle = \langle x, [y, z] \rangle$$

Proof:

$$\langle [x, y], z \rangle = \operatorname{tr}(\operatorname{ad}[x, y]\operatorname{ad}(z)) = \operatorname{tr}((\operatorname{ad}(x)\operatorname{ad}(y) - \operatorname{ad}(y)\operatorname{ad}(x))\operatorname{ad}(z))$$

$$= \operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}(y)\operatorname{ad}(z)) - \operatorname{tr}(\operatorname{ad}(y)\operatorname{ad}(x)\operatorname{ad}(z))$$

$$= \operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}(y)\operatorname{ad}(z)) - \operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}(z)\operatorname{ad}(y))$$

$$= \operatorname{tr}(\operatorname{ad}(x)(\operatorname{ad}(y)\operatorname{ad}(z) - \operatorname{ad}(z)\operatorname{ad}(y)))$$

$$= \operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}[y, z]) = \langle x, [y, z] \rangle$$

Lemma 3.15. Let $L$ be a finite-dimensional Lie algebra over $F$. Let $I$ be an ideal of $L$. Define

$$I^{\perp} = \{ x \in L : \langle x, I \rangle = 0 \}$$

Then $I^{\perp}$ is an ideal of $L$.

Proof: It is evident that $I^{\perp}$ is an $F$-subspace of $L$. Let $x \in I^{\perp}$, $y \in L$ and $z \in I$.

Then

$$\langle [x, y], z \rangle = \langle x, [y, z] \rangle = 0$$

since $[y, z] \in I$ and $x \in I^{\perp}$. It follows that $[x, y] \in I^{\perp}$.

Example 3.16: [5] $sl(2, \mathbb{C})$. We write the elements as $X = aX_+ + bH + cX_-$.

From the brackets between the basis vectors one finds, in the ordered basis $(X_+, H, X_-)$, the matrix expressions

$$\operatorname{ad} H = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{bmatrix}, \quad \operatorname{ad} X_+ = \begin{bmatrix} 0 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \quad \operatorname{ad} X_- = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}$$

and then the values $\operatorname{tr}(\operatorname{ad} H \operatorname{ad} H)$ etc. of the coefficients of the Killing form, with the result

$$\kappa(X, X) = 8(b^2 + ac) \quad (= 4\operatorname{tr} X^2).$$

The bilinear form $\kappa(X, Y)$ is then obtained by polarization.
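The computation in Example 3.16 can be reproduced mechanically. A sketch for illustration (assuming numpy; not taken from [5]) computes the Gram matrix of the Killing form in the ordered basis $(X_+, H, X_-)$:

```python
import numpy as np

H  = np.array([[1., 0.], [0., -1.]])
Xp = np.array([[0., 1.], [0., 0.]])
Xm = np.array([[0., 0.], [1., 0.]])
basis = [Xp, H, Xm]                 # ordered basis (X+, H, X-)
B = np.column_stack([b.ravel() for b in basis])
coords = lambda m: np.linalg.lstsq(B, m.ravel(), rcond=None)[0]
ad = lambda x: np.column_stack([coords(x @ b - b @ x) for b in basis])

# kappa(u, v) = tr(ad u . ad v); the Gram matrix encodes the whole form.
K = np.array([[np.trace(ad(u) @ ad(v)) for v in basis] for u in basis])
print(np.round(K))  # [[0, 0, 4], [0, 8, 0], [4, 0, 0]]
```

With $X = aX_+ + bH + cX_-$, this Gram matrix gives $\kappa(X, X) = 8b^2 + 2 \cdot 4ac = 8(b^2 + ac)$, agreeing with the formula above.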

Lemma 3.17: [6] Let $F$ have characteristic zero and be algebraically closed. Let $n$ be a positive integer. For $x, y \in gl(n, F)$ define

$$t(x, y) = \operatorname{tr}(xy)$$

The function $t : gl(n, F) \times gl(n, F) \to F$ is an associative, symmetric bilinear form. If $L$ is a Lie subalgebra of $gl(n, F)$, $L$ is simple, and the restriction of $t$ to $L \times L$ is non-zero, then the restriction of $t$ to $L \times L$ is non-degenerate.

Proof. It is clear that $t$ is $F$-linear in each variable. Also, $t$ is symmetric because $\operatorname{tr}(xy) = \operatorname{tr}(yx)$ for $x, y \in gl(n, F)$. To see that $t$ is associative, let $x, y, z \in gl(n, F)$. Then

$$t(x, [y, z]) = \operatorname{tr}(x(yz - zy)) = \operatorname{tr}(xyz) - \operatorname{tr}(xzy) = \operatorname{tr}(xyz) - \operatorname{tr}(yxz) = \operatorname{tr}((xy - yx)z) = t([x, y], z)$$

Assume that $L$ is a subalgebra of $gl(n, F)$, $L$ is simple, and the restriction of $t$ to $L \times L$ is non-zero. Let $J = \{ y \in L : t(x, y) = 0 \text{ for all } x \in L \}$. We need to prove that $J = 0$. We claim that $J$ is an ideal of $L$. Let $y \in L$ and $z \in J$; we need to see that $[y, z] \in J$. Let $x \in L$. Now $t(x, [y, z]) = t([x, y], z) = 0$ because $z \in J$.

It follows that $J$ is an ideal. Since $L$ is simple, $J = 0$ or $J = L$. If $J = L$, then the restriction of $t$ to $L \times L$ is zero, a contradiction. Hence, $J = 0$.

Homomorphisms and Representations

Definition 3.18. A linear transformation $\varphi : L \to L'$ is called a homomorphism if $\varphi([x, y]) = [\varphi(x), \varphi(y)]$ for all $x, y \in L$. $\varphi$ is called a monomorphism if its kernel is zero, an epimorphism if its image equals $L'$, and an isomorphism if $\varphi$ is both a monomorphism and an epimorphism, that is, if $\varphi$ is bijective.

[7] A representation of a Lie algebra $L$ is a homomorphism $\varphi : L \to gl(V)$ ($V$ = vector space over $F$). Although we require $L$ to be finite-dimensional, it is useful to allow $V$ to be of arbitrary dimension: $gl(V)$ makes sense in any case. However, for the time being the only important example to keep in mind is the adjoint representation $\operatorname{ad} : L \to gl(L)$ introduced above, which sends $x$ to $\operatorname{ad} x$, where $\operatorname{ad} x(y) = [x, y]$. (The image of ad is in $\operatorname{Der} L \subseteq gl(L)$, but this does not concern us at the moment.) It is clear that ad is a linear transformation. To see that it preserves the bracket, we calculate:

$$[\operatorname{ad} x, \operatorname{ad} y](z) = \operatorname{ad} x \operatorname{ad} y(z) - \operatorname{ad} y \operatorname{ad} x(z) = \operatorname{ad} x([y, z]) - \operatorname{ad} y([x, z])$$

$$= [x, [y, z]] - [y, [x, z]] = [x, [y, z]] + [[x, z], y] = [[x, y], z] = \operatorname{ad}[x, y](z)$$

What is the kernel of ad? It consists of all $x \in L$ for which $\operatorname{ad} x = 0$, i.e., for which $[x, y] = 0$ (all $y \in L$). So $\ker \operatorname{ad} = Z(L)$. This already has an interesting consequence: if $L$ is simple, then $Z(L) = 0$, so that $\operatorname{ad} : L \to gl(L)$ is a monomorphism. This means that any simple Lie algebra is isomorphic to a linear Lie algebra.

Example 3.19. The special linear group $SL(n, F)$ is the kernel of the homomorphism

$$\det : GL(n, F) \to F^{\times} = \{ x \in F \mid x \neq 0 \}$$

where $F$ is a field.

The Lie algebra isomorphism theorems can be proved in a manner analogous to the isomorphism theorems of ring theory.

Automorphisms


Definition 3.21. An automorphism of $L$ is an isomorphism of $L$ onto itself. $\operatorname{Aut} L$ denotes the group of all such.

An automorphism of the form $\exp(\operatorname{ad} x)$, with $\operatorname{ad} x$ nilpotent, i.e., $(\operatorname{ad} x)^k = 0$ for some $k > 1$, is called inner.


Definition 3.23. The subgroup of $\operatorname{Aut} L$ generated by the inner automorphisms $\exp(\operatorname{ad} x)$, with $\operatorname{ad} x$ nilpotent, is denoted $\operatorname{Int} L$. For $\sigma \in \operatorname{Aut} L$, $x \in L$,

$$\sigma(\operatorname{ad} x)\sigma^{-1} = \operatorname{ad}\sigma(x), \quad \text{whence} \quad \sigma \exp(\operatorname{ad} x)\sigma^{-1} = \exp(\operatorname{ad}\sigma(x))$$

[4] Example 3.24. Let $\sigma$ be the automorphism of $sl(2, F)$ given by the following: let $L = sl(2, F)$, with standard basis $(x, y, h)$. Define $\sigma = \exp\operatorname{ad} x \cdot \exp\operatorname{ad}(-y) \cdot \exp\operatorname{ad} x$. We can show that $\sigma(x) = -y$, $\sigma(y) = -x$, $\sigma(h) = -h$.

We have $[x, y] = h$, $[h, x] = 2x$, $[h, y] = -2y$. Since $\operatorname{ad} x$ and $\operatorname{ad} y$ are nilpotent, each exponential reduces to the finite sum $1 + \operatorname{ad} z + \frac{1}{2!}(\operatorname{ad} z)^2$.

$$\sigma(x) = \exp\operatorname{ad} x\, \exp\operatorname{ad}(-y)\,(x) = \exp\operatorname{ad} x \left( x - [y, x] + \tfrac{1}{2!}[y, [y, x]] \right) = \exp\operatorname{ad} x\,(x + h - y)$$

$$= (x + h - y) + [x, x + h - y] + \tfrac{1}{2!}[x, [x, x + h - y]] = (x + h - y) + (-2x - h) + x = -y$$

$$\sigma(y) = \exp\operatorname{ad} x\, \exp\operatorname{ad}(-y) \left( y + [x, y] + \tfrac{1}{2!}[x, [x, y]] \right) = \exp\operatorname{ad} x\, \exp\operatorname{ad}(-y)\,(y + h - x)$$

$$= \exp\operatorname{ad} x \left( (y + h - x) + (-[y, h] + [y, x]) + \tfrac{1}{2!}[y, [y, y + h - x]] \right)$$

$$= \exp\operatorname{ad} x\,\big((y + h - x) + (-2y - h) + y\big) = \exp\operatorname{ad} x\,(-x) = -x$$

$$\sigma(h) = \exp\operatorname{ad} x\, \exp\operatorname{ad}(-y)\,\big(h + [x, h] + \tfrac{1}{2!}[x, [x, h]]\big) = \exp\operatorname{ad} x\, \exp\operatorname{ad}(-y)\,(h - 2x)$$

$$= \exp\operatorname{ad} x \left( (h - 2x) + (-[y, h] + 2[y, x]) + \tfrac{1}{2!}[y, [y, h - 2x]] \right) = \exp\operatorname{ad} x\,\big((h - 2x) + (-2y - 2h) + 2y\big)$$

$$= \exp\operatorname{ad} x\,(-h - 2x) = (-h - 2x) + [x, -h - 2x] = (-h - 2x) + 2x = -h$$
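Because $\operatorname{ad} x$ and $\operatorname{ad} y$ are nilpotent, the exponentials are exact finite sums and the whole computation can be checked mechanically. A sketch using numpy and scipy (an illustration, not taken from [4]):

```python
import numpy as np
from scipy.linalg import expm

x = np.array([[0., 1.], [0., 0.]])   # standard basis (x, y, h) of sl(2, F)
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [x, y, h]
B = np.column_stack([b.ravel() for b in basis])
coords = lambda m: np.linalg.lstsq(B, m.ravel(), rcond=None)[0]
ad = lambda m: np.column_stack([coords(m @ b - b @ m) for b in basis])

# sigma = exp(ad x) exp(ad -y) exp(ad x), acting on coordinate vectors.
sigma = expm(ad(x)) @ expm(ad(-y)) @ expm(ad(x))
for name, v in [('x', x), ('y', y), ('h', h)]:
    print(name, '->', np.round(sigma @ coords(v), 6))
# x -> (0, -1, 0) = -y,   y -> (-1, 0, 0) = -x,   h -> (0, 0, -1) = -h
```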

4. The Classical Lie Algebras

Classical algebras are finite-dimensional Lie algebras. Let $F$ have characteristic zero and be algebraically closed. The classical Lie algebras over $F$ are $sl(l + 1, F)$, $so(2l + 1, F)$, $sp(2l, F)$ and $so(2l, F)$ (for $l$ a positive integer); each is realized as an algebra of matrices cut out by conditions involving the trace or a symmetric or skew-symmetric bilinear form.

$sl(l + 1, F)$ - special linear algebra ($l \geq 1$)

The set of all endomorphisms of $V$ having trace zero is denoted by $sl(V)$, the special linear algebra. The bracket on $sl(l + 1, F)$ is inherited from $gl(l + 1, F)$, and is defined by $[X, Y] = XY - YX$ for $X, Y \in sl(l + 1, F)$.

Indeed,

$$\operatorname{tr}[x, y] = \operatorname{tr}(xy) - \operatorname{tr}(yx) = 0,$$

since the trace is linear and invariant under cyclic permutation. So actually not only is the bracket of two endomorphisms in $sl(V)$ back in the subspace; the bracket of any two endomorphisms of $gl(V)$ lands in $sl(V)$. In other words: $[gl(V), gl(V)] = sl(V)$.

Choosing a basis, we will write the algebra as $sl(l + 1, F)$. It should be clear that the dimension is $(l + 1)^2 - 1$, since this is the kernel of a single linear functional on the $(l + 1)^2$-dimensional $gl(l + 1, F)$, but let's exhibit a basis anyway. All the basic matrices $e_{ij}$ with $i \neq j$ are traceless, so they're all in $sl(l + 1, F)$. Along the diagonal, $\operatorname{tr}(e_{ii}) = 1$, so we need linear combinations that cancel each other out. It's particularly convenient to define

$$h_i = e_{ii} - e_{i+1,i+1} \quad (1 \leq i \leq l)$$

So we've got the $(l + 1)^2$ basic matrices, but we take away the $l + 1$ along the diagonal. Then we add back the $l$ new matrices $h_i$, getting $(l + 1)^2 - 1$ matrices in our standard basis for $sl(l + 1, F)$, verifying the dimension.

We sometimes refer to the isomorphism class of $sl(l + 1, F)$ as $A_l$.

$so(2l + 1, F)$ - special orthogonal algebra ($l \geq 2$)

Here $V$ is a vector space of dimension $2l + 1$. Let $F$ have characteristic zero and be algebraically closed, and let $l$ be a positive integer. Let $s \in gl(2l + 1, F)$ be the matrix

$$s = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & I_l \\ 0 & I_l & 0 \end{bmatrix}$$

corresponding to the orthogonal algebra.

Here, $I_l$ is the $l \times l$ identity matrix. Recall that a matrix $A \in M_{n \times n}(F)$ is called orthogonal if $AA^t = I$. The orthogonal algebra $so(2l + 1, F)$, a subalgebra of $gl(V)$, consists of all $x \in gl(2l + 1, F)$ with $sx = -x^t s$.

We sometimes refer to the isomorphism class of $so(2l + 1, F)$ as $B_l$.

$sp(2l, F)$ - symplectic algebra ($l \geq 3$)

Here $V$ is a vector space with $\dim V = 2l$. Let $F$ have characteristic zero and be algebraically closed, and let $l$ be a positive integer. Let $s \in gl(2l, F)$ be the skew-symmetric matrix

$$s = \begin{bmatrix} 0 & I_l \\ -I_l & 0 \end{bmatrix}$$

Here, $I_l$ is the $l \times l$ identity matrix. Recall that a matrix $A \in M_{n \times n}(F)$ is called skew-symmetric if $A^t = -A$. The symplectic algebra $sp(2l, F)$ consists of all $x \in gl(2l, F)$ with $sx = -x^t s$; we have $sp(2l, F) \subseteq sl(2l, F)$.

We sometimes refer to the isomorphism class of $sp(2l, F)$ as $C_l$.

$so(2l, F)$ - orthogonal algebra ($l \geq 4$)

Here $V$ is a vector space with $\dim V = 2l$. Let $F$ have characteristic zero and be algebraically closed, and let $l$ be a positive integer. Let $s \in gl(2l, F)$ be the matrix

$$s = \begin{bmatrix} 0 & I_l \\ I_l & 0 \end{bmatrix}$$

Here, $I_l$ is the $l \times l$ identity matrix. We define $so(2l, F)$ to be the Lie subalgebra of $gl(2l, F)$ consisting of all $x$ with $sx = -x^t s$. We have $so(2l, F) \subseteq sl(2l, F)$.

We sometimes refer to the isomorphism class of $so(2l, F)$ as $D_l$.
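Each of these form-defined algebras is the solution set of the linear condition $x^t s + sx = 0$ inside $gl(n, F)$, so its dimension can be checked by elementary linear algebra. A sketch for $l = 3$ (assuming numpy; an illustration, not part of the source):

```python
import numpy as np

def algebra_dim(s):
    """Dimension of {x in gl(n) : x^T s + s x = 0}, via the rank of the condition."""
    n = s.shape[0]
    rows = [(e.reshape(n, n).T @ s + s @ e.reshape(n, n)).ravel()
            for e in np.eye(n * n)]
    return n * n - np.linalg.matrix_rank(np.array(rows))

l = 3
Z, I = np.zeros((l, l)), np.eye(l)
s_sp = np.block([[Z, I], [-I, Z]])   # skew-symmetric form: sp(2l), dim 2l^2 + l
s_so = np.block([[Z, I], [ I, Z]])   # symmetric form:      so(2l), dim 2l^2 - l
print(algebra_dim(s_sp))  # 21
print(algebra_dim(s_so))  # 15
```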

5. Root Systems

Definition 5.1: [6] Let $V$ be a finite-dimensional vector space over $\mathbb{R}$, and let $(\cdot\,, \cdot)$ be an inner product (here it is a positive definite, bilinear, symmetric form). By definition, $(\cdot\,, \cdot) : V \times V \to \mathbb{R}$ is a symmetric bilinear form such that $(x, x) > 0$ for all non-zero $x \in V$. Let $v \in V$ be non-zero. We define the reflection determined by $v$ to be the unique $\mathbb{R}$-linear map $s_v : V \to V$ such that $s_v(v) = -v$ and $s_v(w) = w$ for all $w \in (\mathbb{R}v)^{\perp}$. A calculation shows that $s_v$ is given by the formula

$$s_v(x) = x - \frac{2(x, v)}{(v, v)}v$$

for $x \in V$. Another calculation also shows that $s_v$ preserves the inner product $(\cdot\,, \cdot)$, i.e.,

$$(s_v(x), s_v(y)) = (x, y)$$

for $x, y \in V$; that is, $s_v$ is in the orthogonal group $O(V)$. Evidently,

$$\det(s_v) = -1$$

We will write

$$\langle x, y \rangle = \frac{2(x, y)}{(y, y)}$$

for $x, y \in V$. We note that the function $\langle \cdot\,, \cdot \rangle : V \times V \to \mathbb{R}$ is linear in the first variable; however, this function is not linear in the second variable. We have

$$s_v(x) = x - \langle x, v \rangle v$$

for $x \in V$.
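A direct implementation of the reflection formula makes these properties easy to check. A minimal sketch (assuming numpy; an illustration, not taken from [6]):

```python
import numpy as np

def s(v, x):
    """Reflection s_v(x) = x - 2 (x, v)/(v, v) v in the hyperplane orthogonal to v."""
    return x - 2 * np.dot(x, v) / np.dot(v, v) * v

v = np.array([1.0, 1.0])
x = np.array([2.0, 0.0])
y = np.array([0.5, -1.0])
print(s(v, v))                                             # [-1. -1.], i.e. s_v(v) = -v
print(np.isclose(np.dot(s(v, x), s(v, y)), np.dot(x, y)))  # True: inner product preserved
```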

Definition 5.2: A root system $R$ in $V$ is a finite set $R \subseteq V$ such that:

1) The set $R$ is finite, does not contain 0, and spans $V$.

2) If $\alpha \in R$, then $\alpha$ and $-\alpha$ are the only scalar multiples of $\alpha$ that are contained in $R$.

3) If $\alpha \in R$ then $s_\alpha(R) = R$, so that $s_\alpha$ permutes the elements of $R$.

4) For all $\alpha, \beta \in R$ we have $\langle \alpha, \beta \rangle \in \mathbb{Z}$.

Proposition 5.3: [6] Let the notation be as in the discussion preceding the proposition. The subset $\Phi$ of the inner product space $V$ is a root system.

Proof. It is clear that (1) is satisfied, and (2) is satisfied. To see that (3) is satisfied, let $\alpha, \beta \in \Phi$. Then

$$s_\alpha(\beta) = \beta - \frac{2(\beta, \alpha)}{(\alpha, \alpha)}\alpha = \beta - \beta(h_\alpha)\alpha$$

and we have $\beta - \beta(h_\alpha)\alpha \in \Phi$. It follows that $s_\alpha(\beta) \in \Phi$, so that (3) is satisfied. To prove that (4) holds, again let $\alpha, \beta \in \Phi$. We have

$$\langle \alpha, \beta \rangle = \frac{2(\alpha, \beta)}{(\beta, \beta)}$$

and

$$\frac{2(\alpha, \beta)}{(\beta, \beta)} = \alpha(h_\beta) \in \mathbb{Z}$$

Definition 5.4: [8] Let $W := \langle s_\alpha : \alpha \in R \rangle \subseteq GL(V)$ be the group generated by the reflections $s_\alpha$. The group $W$ is called the Weyl group of $R$.

Theorem 5.5: [6] (Weyl’s Theorem). Let F be algebraically closed and have characteristic zero. Let L be a finite-dimensional semi-simple Lie algebra over F. If ( φ , V ) is a finite-dimensional representation of L, then V is a direct sum of irreducible representations of L.

Proof. By induction, to prove the theorem it will suffice to prove that if W is a proper, non-zero L-subspace of V, then W has a complement, i.e., there exists an L-subspace W' of V such that V = W W . Let W be a proper, non-zero L-subspace of V.

We first claim that W has a complement in the case that dim W = dim V 1 . Assume that dim W = dim V 1 .

We will first prove our claim when W is irreducible; assume that W is irreducible. The kernel ker ( φ ) of φ : L g l ( V ) is an ideal of L. By replacing φ : L g l ( V ) by the representation φ : L / ker ( φ ) g l ( V ) , we may assume that φ is faithful. Consider the quotient V = W . By assumption, this is a one-dimensional L-module. Since [ L , L ] acts by zero on any one-dimensional L-module, and since L = [ L , L ] . It follows that L acts by zero on V = W . This implies that φ ( L ) V W . In particular, if C is the Casmir operator1 for φ then C V W . Hence, ker(C) is an L-submodule of V; we will prove that V = W ker ( C ) , so that ker(C) is a complement to W. To prove that ker(C) is a complement to W it will suffice to prove that W ker ( C ) = 0 and dim ker ( C ) = 1 . Consider the restriction C | W of C to W. This is an L-map from W to W.

Since W is irreducible, there exists a constant a F such that C ( w ) = a w for w W . Fix an ordered basis w 1 , , w t for W, and let v V . Then w 1 , , w t v is an ordered basis for V, and the matrix of C in this basis has the form

[ a a 0 ]

It follows that t r ( C ) = ( dim W ) a . On the other hand,, we have t r ( C ) = dim L . It follows that ( dim W ) a = dim L , and in particular, a 0 . Thus, C is injective on W and maps onto W. Therefore, W ker ( C ) = 0 , and dim ker ( C ) = dim V dim i m ( C ) = dim V dim W = 1 . This proves our claim in the case that W is irreducible.

We will now prove our claim by induction on dim V. We cannot have dim V = 0 or 1 because W is non-zero and proper by assumption. Suppose that dim V = 2 . Then dim W = 1 , so that W is irreducible, and the claim follows from the previous paragraph. Assume now that dim V 3 , and that for all L-modules A with dim A < dim V , if B is an L-submodule of A of codimension one, then B has a complement. If W is irreducible, then W has a complement by the previous paragraph. Assume that W is not irreducible, and let W 1 be a L-submodule of W such that 0 < dim W 1 < dim W . Consider the L-submodule W = W 1 of V = W 1 . This L-submodule has co-dimension one in V = W 1 , and dim V = W 1 < dim V . By the induction hypothesis, there exists an L-submodule U of V = W 1 such that

V / W 1 = U W / W 1

We have dim U = 1 . Let p : V V / W 1 be the quotient map, and set M = p 1 ( U ) . Then M is an L-submodule of V, W 1 M , and M / W 1 = U . We have

dim M = dim W 1 + dim U = 1 + dim W 1

Since dim M = 1 + dim W 1 < 1 + dim W dim V , we can apply the induction hypothesis again: let W 2 be an L-submodule of M that is a complement to W 1 in M, i.e.,

M = W 1 W 2

Theorem 5.6: [6] Let $V$ be a finite-dimensional vector space over $\mathbb{R}$ equipped with an inner product $(\cdot\,, \cdot)$. The Cauchy-Schwarz inequality asserts that

$$|(x, y)| \leq \|x\| \|y\|$$

for $x, y \in V$. It follows that if $x, y \in V$ are nonzero, then

$$-1 \leq \frac{(x, y)}{\|x\| \|y\|} \leq 1$$

If $x, y \in V$ are nonzero, then we define the angle between $x$ and $y$ to be the unique number $0 \leq \theta \leq \pi$ such that

$$(x, y) = \|x\| \|y\| \cos\theta$$

The inner product measures the angle between two vectors, though it is a bit more complicated in that the lengths of $x$ and $y$ are also involved. The term angle does make sense geometrically. For example, suppose that $V = \mathbb{R}^2$: project $x$ onto $y$ to obtain $ty$, and write $x = z + ty$ with $z$ orthogonal to $y$.

Then we have

$$x = z + ty$$

Taking the inner product with $y$, we get

$$(x, y) = (z, y) + (ty, y) = 0 + t(y, y) = t\|y\|^2$$

$$t = \frac{(x, y)}{\|y\|^2}$$

On the other hand,

$$\cos\theta = \frac{t\|y\|}{\|x\|}$$

$$t = \frac{\|x\|}{\|y\|}\cos\theta$$

If we equate the two formulas for $t$ we get $(x, y) = \|x\| \|y\| \cos\theta$. We say that two vectors are orthogonal if $(x, y) = 0$; this is equivalent to the angle between $x$ and $y$ being $\pi/2$. If $(x, y) > 0$, then we will say that $x$ and $y$ form an acute angle; this is equivalent to $0 \leq \theta < \pi/2$. If $(x, y) < 0$, then we will say that $x$ and $y$ form an obtuse angle; this is equivalent to $\pi/2 < \theta \leq \pi$. Non-zero vectors also define some useful geometric objects. Let $v \in V$ be non-zero. We may consider three sets that partition $V$:

$$\{ x \in V : (x, v) > 0 \}, \quad P = \{ x \in V : (x, v) = 0 \}, \quad \{ x \in V : (x, v) < 0 \}$$

The first set consists of the vectors that form an acute angle with $v$, the middle set is the hyperplane $P$ orthogonal to $\mathbb{R}v$, and the last set consists of the vectors that form an obtuse angle with $v$. We refer to the first and last sets as the half-spaces defined by $P$. Of course, $v$ lies in the first half-space. The formula for the reflection $s_v$ shows that

$$(s_v(x), v) = -(x, v)$$

for $x$ in $V$, so that $s_v$ sends one half-space into the other half-space. Also, $s_v$ acts by the identity on $P$. Multiplication by $-1$ also sends one half-space into the other half-space; however, while multiplication by $-1$ preserves $P$, it is not the identity on $P$.

Example 5.7: $A_1$: The only rank 1 root system is $V = \mathbb{R}$ with inner product $(x, y) = xy$ and roots $R = \{\alpha, -\alpha\}$, $\alpha \neq 0$. Its Weyl group is given by $W = \mathbb{Z}/2$. We call this root system $A_1$. This is the root system of $sl_2$.

Example 5.8: [8] $A_1 \times A_1$: Take $V = \mathbb{R}^2$ with the usual inner product. Then $R = \{e_1, -e_1, e_2, -e_2\}$ with the standard basis vectors is a root system. Note that this is $A_1 \times A_1$ and therefore not irreducible. Here $W = \mathbb{Z}/2 \times \mathbb{Z}/2$ (Figure 1).

Figure 1. Root systems of rank 2.

$A_2$: Let $\alpha, \beta$ be roots of equal length with $\langle \alpha, \beta \rangle = -1$ (angle $2\pi/3$). Then $W = S_3$. We call this root system $A_2$; it appears as the root system of $sl_3$.

$B_2$: Let $\alpha = e_1$, $(\alpha, \alpha) = 1$, and $\beta = e_2 - e_1$, $(\beta, \beta) = 2$; then $\alpha$, $\alpha + \beta$ are short roots and $\beta$, $2\alpha + \beta$ long roots. Then $W$ is the symmetry group of the square, i.e. $W = D_8$, the dihedral group of order 8. This is the root system of $sp_4$ and $so_5$.

$G_2$: The dihedral group $D_{12}$ also appears as the Weyl group of a root system, called $G_2$.

Example 5.9: Let $V$ be a vector space over $\mathbb{R}$ with an inner product $(\cdot\,, \cdot)$. Let $x, y \in V$ and assume that $x$ and $y$ are both non-zero. The following are equivalent:

1) The vectors $x$ and $y$ are linearly dependent.

2) We have $(x, y)^2 = (x, x)(y, y) = \|x\|^2\|y\|^2$.

3) The angle between $x$ and $y$ is 0 or $\pi$.

Proof. Let $\theta$ be the angle between $x$ and $y$. We have

$$(x, y)^2 = \|x\|^2\|y\|^2\cos^2\theta$$

Assume that $(x, y)^2 = (x, x)(y, y) = \|x\|^2\|y\|^2$. Then $(x, y)^2 = \|x\|^2\|y\|^2 \neq 0$, and $\cos^2\theta = 1$, so that $\cos\theta = \pm 1$. This implies that $\theta = 0$ or $\theta = \pi$.

Suppose again that $(x, y)^2 = (x, x)(y, y)$. We have

$$\left( y - \frac{(x, y)}{(x, x)}x,\; y - \frac{(x, y)}{(x, x)}x \right) = (y, y) - 2\frac{(x, y)}{(x, x)}(x, y) + \frac{(x, y)^2}{(x, x)^2}(x, x) = (y, y) - \frac{(x, y)^2}{(x, x)} = (y, y) - \frac{(x, x)(y, y)}{(x, x)} = (y, y) - (y, y) = 0$$

It follows that $y - \frac{(x, y)}{(x, x)}x = 0$, so that $x$ and $y$ are linearly dependent.

Example 5.10: [6] Let $V$ be a finite-dimensional vector space over $\mathbb{R}$ equipped with an inner product $(\cdot\,, \cdot)$, and let $R$ be a root system in $V$. Let $\alpha, \beta \in R$, and assume that $\alpha \neq \pm\beta$ and $\|\beta\| \geq \|\alpha\|$. Let $\theta$ be the angle between $\alpha$ and $\beta$. Exactly one of the possibilities in Figure 2 holds.

Proof. By assumption $\|\beta\| \geq \|\alpha\|$. We have $(\beta, \beta) = \|\beta\|^2 \geq (\alpha, \alpha) = \|\alpha\|^2$, so that

$$|\langle \beta, \alpha \rangle| = \frac{2|(\beta, \alpha)|}{(\alpha, \alpha)} \geq \frac{2|(\alpha, \beta)|}{(\beta, \beta)} = |\langle \alpha, \beta \rangle|$$

Figure 2. Possible angles and ratio of norms between pairs of roots.
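The table behind Figure 2 is the standard one from the theory (see, e.g., [6]); under the assumption $\|\beta\| \geq \|\alpha\|$ it reads:

⟨α, β⟩   ⟨β, α⟩   θ       ‖β‖²/‖α‖²
0        0        π/2     undetermined
1        1        π/3     1
−1       −1       2π/3    1
1        2        π/4     2
−1       −2       3π/4    2
1        3        π/6     3
−1       −3       5π/6    3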

We have that $\langle \alpha, \beta \rangle$ and $\langle \beta, \alpha \rangle$ are integers, and $\langle \alpha, \beta \rangle\langle \beta, \alpha \rangle = 4\cos^2\theta \in \{0, 1, 2, 3\}$. These facts imply that the possibilities for $\langle \alpha, \beta \rangle$ and $\langle \beta, \alpha \rangle$ are as in the table.

Assume first that $\langle \beta, \alpha \rangle = \langle \alpha, \beta \rangle = 0$. From above, $\langle \alpha, \beta \rangle\langle \beta, \alpha \rangle = 4\cos^2\theta$.

It follows that $\cos\theta = 0$, so that $\theta = \pi/2 = 90°$.

Assume next that $\langle \beta, \alpha \rangle \neq 0$. Now

$$\frac{\langle \beta, \alpha \rangle}{\langle \alpha, \beta \rangle} = \frac{2(\beta, \alpha)}{(\alpha, \alpha)} \cdot \frac{(\beta, \beta)}{2(\alpha, \beta)} = \frac{(\beta, \beta)}{(\alpha, \alpha)}$$

so that $\langle \beta, \alpha \rangle / \langle \alpha, \beta \rangle$ is positive and

$$\sqrt{\frac{\langle \beta, \alpha \rangle}{\langle \alpha, \beta \rangle}} = \frac{\|\beta\|}{\|\alpha\|}$$

This yields the $\|\beta\|/\|\alpha\|$ column. Finally,

$$\langle \alpha, \beta \rangle = \frac{2(\alpha, \beta)}{(\beta, \beta)} = \frac{2\|\alpha\|\|\beta\|\cos\theta}{\|\beta\|^2}$$

$$\langle \alpha, \beta \rangle = 2\frac{\|\alpha\|}{\|\beta\|}\cos\theta$$

so that

$$\cos\theta = \frac{1}{2}\frac{\|\beta\|}{\|\alpha\|}\langle \alpha, \beta \rangle$$

This gives the $\cos\theta$ column.

Definition 5.11: $R$ is simply laced if all the roots are of the same length (e.g. $A_1$, $A_1 \times A_1$, $A_2$, but not $B_2$, $G_2$).

Example 5.12: [6] Let $V = \mathbb{R}^2$ equipped with the usual inner product $(\cdot\,, \cdot)$, and let $R$ be a root system in $V$. Let $\gamma$ be the length of the shortest root in $R$. Let $S$ be the set of pairs $(\alpha, \beta)$ of non-collinear roots such that $\|\alpha\| = \gamma$, the angle $\theta$ between $\alpha$ and $\beta$ is obtuse, and $\beta$ is to the left of $\alpha$. The set $S$ is non-empty.

Fix a pair $(\alpha, \beta)$ in $S$ such that $\theta$ is maximal. Then

1) ($A_2$ root system) If $\theta = 120°$ (so that $\|\alpha\| = \|\beta\|$) then $R$, $\alpha$, and $\beta$ are as in Figure 3.

2) ($B_2$ root system) If $\theta = 135°$ (so that $\|\beta\| = \sqrt{2}\|\alpha\|$) then $R$, $\alpha$, and $\beta$ are as in Figure 4.

3) ($G_2$ root system) If $\theta = 150°$ (so that $\|\beta\| = \sqrt{3}\|\alpha\|$) then $R$, $\alpha$, and $\beta$ are as in Figure 5.

Proof. Let $(\alpha, \beta)$ be a pair of non-collinear roots in $R$ such that $\|\alpha\| = \gamma$; such a pair must exist because $R$ contains a basis which includes $\alpha$. If the angle between $\alpha$ and $\beta$ is acute, then the angle between $\alpha$ and $-\beta$ is obtuse. Thus, there exists a pair of roots $(\alpha, \beta)$ in $R$ such that $\|\alpha\| = \gamma$ and the angle between $\alpha$ and $\beta$ is obtuse. If $\beta$ is to the right of $\alpha$, then $-\beta$ forms an acute angle with $\alpha$ and is to the left of $\alpha$; in this case, $s_\alpha(\beta)$ forms an obtuse angle with $\alpha$ and $s_\alpha(\beta)$ is to the left of $\alpha$. It follows that $S$ is non-empty.

Figure 3. A2 root system θ = 120˚.

Figure 4. B2 root system θ = 135˚.

Figure 5. G2 root system θ = 150˚.

Assume that $\theta = 120°$, so that $\|\alpha\| = \|\beta\|$. It follows that $\alpha, \beta, \alpha + \beta, -\alpha, -\beta, -\alpha - \beta \in R$. By geometry, $\|\alpha + \beta\| = \|\alpha\| = \|\beta\|$. It follows that $R$ contains the vectors in 1. Assume that $R$ contains a root $\delta$ other than $\pm\alpha, \pm\beta, \pm(\alpha + \beta)$. We see that $\delta$ must lie halfway between two adjacent roots from this list. This implies that $\theta$ is not maximal, a contradiction.

Assume that $\theta = 135°$, so that $\|\beta\| = \sqrt{2}\|\alpha\|$. We have $\alpha + \beta, 2\alpha + \beta \in R$. It follows that $R$ contains $\alpha, \beta, \alpha + \beta, 2\alpha + \beta, -\alpha, -\beta, -\alpha - \beta, -2\alpha - \beta$, so that $R$ contains the vectors in 2. Assume that $R$ contains a root $\delta$ other than these eight.

Then $\delta$ must make an angle strictly less than 30° with one of $\alpha, \beta, \alpha + \beta, 2\alpha + \beta, -\alpha, -\beta, -\alpha - \beta, -2\alpha - \beta$, contradicting the maximality of $\theta$.

Assume that $\theta = 150°$, so that $\|\beta\| = \sqrt{3}\|\alpha\|$. We have $\alpha + \beta, 2\alpha + \beta, 3\alpha + \beta \in R$. By geometry, the angle between $\alpha$ and $3\alpha + \beta$ is 30°, and the angle between $\beta$ and $3\alpha + \beta$ is 120°; hence $\beta + (3\alpha + \beta) = 3\alpha + 2\beta \in R$. It now follows that $R$ contains the vectors in 3. Assume that $R$ contains a root $\delta$ other than $\pm\alpha, \pm\beta, \pm(\alpha + \beta), \pm(2\alpha + \beta), \pm(3\alpha + \beta), \pm(3\alpha + 2\beta)$. Then $\delta$ must make an angle strictly less than 30° with one of these vectors, contradicting the maximality of $\theta$.

6. The Future Perspective of This Paper

The future perspective of this paper is to support me in writing a book on Lie algebras. In addition, each of the topics treated here could become a paper of its own: every section creates an opportunity for research, because from each one there are many directions to investigate and write about. My plan for the future is to compare many related facts and to pursue applications of Lie algebras in everyday life. Comparing all these disciplines with Lie algebras, we see their close connection and the need to apply and use them properly.

NOTES

¹Let $L$ be a Lie algebra over $F$, let $V$ be a finite-dimensional $F$-vector space, and let $\varphi : L \to gl(V)$ be a representation. Define $\beta_V : L \times L \to F$ by $\beta_V(x, y) = \operatorname{tr}(\varphi(x)\varphi(y))$ for $x, y \in L$.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Erdmann, K. and Wildon, M.J. (2006) Introduction to Lie Algebras. Springer, London.
https://www.springer.com/gp/book/9781846280405
[2] Henderson, A. (2012) Representations of Lie Algebras: An Introduction Through gl(n). Cambridge University Press, Cambridge, 18-19, 21-27.
https://www.cambridge.org/me/academic/subjects/mathematics/algebra/representations-lie-algebras-introduction-through-gln?format=PB&isbn=9781107653610
[3] Carter, R. (2005) Lie Algebras of Finite and Affine Type. Cambridge University Press, Cambridge.
http://www.cambridge.org/9780521851381
[4] Talley, A.R. (2017) An Introduction to Lie Algebra. Master's Thesis, California State University, San Bernardino, 38-40.
https://scholarworks.lib.csusb.edu/cgi/viewcontent.cgi?article=1668&context=etd
[5] Samelson, H. (1988) Notes on Lie Algebras. 15-16.
https://pi.math.cornell.edu/~hatcher/Other/Samelson-LieAlg.pdf
[6] Roberts, B. (2018-2019) Lie Algebras. Course Notes, University of Idaho, Moscow, 56-57, 85-86.
https://www.freebookcentre.net/maths-books-download/Lie-Algebras-by-Brooks-Roberts.html
[7] Humphreys, J.E. (2000) Introduction to Lie Algebras and Representation Theory. Springer, New York.
https://www.springer.com/gp/book/9780387900537
[8] Grojnowski, I., Laugwitz, R. and Seidler, H. (2010) Introduction to Lie Algebras and Their Representations. Unofficial Lecture Notes, University of Cambridge, Cambridge, 33-36.
https://book4you.org/book/2692749/b705ea?id=2692749&secret=b705ea&signAll=1&ts=2233
