Representations of Lie Groups

Abstract

In this paper, the most important linear groups are classified: those that one most often meets when studying linear groups, as well as their applications to Lie groups. In addition to the introductory part, we treat general linear groups, special linear groups, orthogonal groups, symplectic groups, cyclic groups, and dihedral groups: generators and relations. The paper is rounded out with brief definitions, examples, and proofs, as well as several problems. If you ask why this paper, I will simply say that it is one of the ways I contribute to the community and try to be part of this little world of science.


1. Introduction

Algebra is the mathematical discipline that arose from the problem of solving equations [1]. If one starts with the integers $\mathbb{Z}$, one knows that every equation $a + x = b$, where $a$ and $b$ are integers, has a unique solution. However, the equation $ax = b$ does not necessarily have a solution in $\mathbb{Z}$, or it might have infinitely many solutions (take $a = b = 0$). So let us enlarge $\mathbb{Z}$ to the rational numbers $\mathbb{Q}$, consisting of all fractions $c/d$, where $d \neq 0$. Then both equations have a unique solution in $\mathbb{Q}$, provided that $a \neq 0$ for the equation $ax = b$. So, $\mathbb{Q}$ is a field. If, for example, one takes the solutions of an equation such as $x^2 - 5 = 0$ and forms the set of all numbers of the form $a + b\sqrt{5}$, where $a$ and $b$ are rational, we get a larger field, denoted by $\mathbb{Q}(\sqrt{5})$ and called an algebraic number field. In the study of fields obtained by adjoining the roots of polynomial equations, a new notion arose, namely, the symmetries of the field that permute the roots of the equation. Évariste Galois (1811-1832) coined the term group for these symmetries, and now this group is called the Galois group of the field. While still a teenager, Galois showed that the roots of an equation are expressible by radicals if and only if the group of the equation has a property now called solvability. This stunning result settled the 350-year-old question of whether the roots of every polynomial equation are expressible by radicals.

1Linear Groups: The Accent on Infinite Dimensionality explores some of the main results and ideas in the study of infinite-dimensional linear groups. The theory of finite-dimensional linear groups is one of the best-developed algebraic theories. The array of articles devoted to this topic is enormous, and there are many monographs concerned with matrix groups, ranging from old, classical texts to ones published more recently. However, in the case when the dimension is infinite (and such cases arise quite often), the reality is quite different.

The situation in the study of infinite-dimensional linear groups resembles the one that developed in the theory of groups during the transition from the study of finite groups to the study of infinite groups, which took place about one hundred years ago. It is well known that this transition was extremely fruitful and led to the development of a rich and central branch of algebra: infinite group theory.

Group theory arose from the study of polynomial equations [2]. The solvability of an equation is determined by a group of permutations of its roots; before Abel [1824] and Galois [1830] mastered this relationship, it led Lagrange [1770] and Cauchy [1812] to investigate permutations and prove forerunners of the theorems that bear their names. The term "group" was coined by Galois. Interest in groups of transformations, and in what we now call the classical groups, grew after 1850; thus, Klein's Erlangen Program [1872] emphasized their role in geometry. Modern group theory began when the axiomatic method was applied to these results; Burnside's Theory of Groups of Finite Order [1897] marks the beginning of a new discipline, abstract algebra, in that structures are defined by axioms, and the nature of their elements is irrelevant.

Definition 1.1. A group is a set G, together with a map of $G \times G$ into G, with the following properties:

· Closure: For all $x, y \in G$, we have

$$xy \in G$$

· Associativity: For all $x, y, z \in G$,

$$(xy)z = x(yz)$$

· There exists an element e in G such that for all $x \in G$,

$$xe = ex = x$$

The element e is unique, and is called the identity element of the group, or simply the identity.

· And for all $x \in G$, there exists $x^{-1} \in G$ with

$$x x^{-1} = x^{-1} x = e$$

Such an element $x^{-1}$ is called an inverse of x.

Definition 1.2. A group G is said to be commutative, or abelian, if for all $x, y \in G$ we have $xy = yx$. A group that is not abelian is said to be nonabelian.
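To make the axioms concrete, here is a minimal Python sketch (an illustration added here, not part of the original text) that checks closure, associativity, identity, inverses, and commutativity for the integers modulo 5 under addition:

```python
# Check the group axioms for Z_5 = {0, 1, 2, 3, 4} under addition mod 5.
G = range(5)
op = lambda x, y: (x + y) % 5

assert all(op(x, y) in G for x in G for y in G)              # closure
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x in G for y in G for z in G)                 # associativity
e = 0
assert all(op(x, e) == x and op(e, x) == x for x in G)       # identity
assert all(any(op(x, y) == e for y in G) for x in G)         # inverses
assert all(op(x, y) == op(y, x) for x in G for y in G)       # abelian
print("(Z_5, +) is an abelian group.")
```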

Proposition 1.3. (Uniqueness of the Identity). Let G be a group, and let $e, f \in G$ be such that for all $x \in G$,

$$ex = xe = x$$

$$fx = xf = x$$

Then $e = f$.

Proof. Since $e$ is an identity, we have

$$ef = f$$

On the other hand, since $f$ is an identity, we have

$$ef = e$$

Thus $e = ef = f$.

Proposition 1.4 (Uniqueness of Inverses). Let G be a group, $e$ the (unique) identity of G, and $x, y, z \in G$. Suppose that

$$xy = yx = e$$

$$xz = zx = e$$

Then $y = z$.

Proof: We know that $xy = xz = e$. Multiplying on the left by $y$ gives $y(xy) = y(xz)$.

By associativity, this gives

$$(yx)y = (yx)z$$

and so

$$ey = ez$$

$$y = z.$$

Proposition 1.5. [1] For all $x, y \in G$, we have $(xy)^{-1} = y^{-1}x^{-1}$.

Proof: Let $w = y^{-1}x^{-1}$. Then it suffices to show that $w(xy) = e$. But

$$w(xy) = (wx)y = ((y^{-1}x^{-1})x)y = (y^{-1}(x^{-1}x))y = (y^{-1}e)y = y^{-1}y = e.$$
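Since invertible matrices form a group under multiplication (Section 3 below), this identity can also be tested numerically; the following NumPy sketch (added here for illustration; random Gaussian matrices are invertible with probability one) confirms it:

```python
import numpy as np

# Proposition 1.5 in GL(3, R): the inverse of a product is the
# product of the inverses in the reverse order.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3))
y = rng.standard_normal((3, 3))

lhs = np.linalg.inv(x @ y)
rhs = np.linalg.inv(y) @ np.linalg.inv(x)
print(np.allclose(lhs, rhs))   # True
```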

If $x_1, x_2, \ldots, x_n$ are arbitrary elements of a group G, then the expression $x_1 x_2 \cdots x_n$ will stand for $x_1(x_2 \cdots x_n)$, where $x_2 \cdots x_n = x_2(x_3 \cdots x_n)$, and so on. This gives an inductive definition of the product of an arbitrary finite number of elements of G. Moreover, by associativity, pairs ( ) of parentheses can be inserted or removed in the expression $x_1 x_2 \cdots x_n$ without making any change in the group element being represented, provided the new expression makes sense. (For example, you can't have an empty pair of parentheses, and the number of left parentheses has to be the same as the number of right parentheses.)


Definition 1.6. A subgroup of a group G is a subset H of G with the following properties:

1) The identity is an element of H.

2) If $h \in H$, then $h^{-1} \in H$.

3) If $h_1, h_2 \in H$, then $h_1 h_2 \in H$.

Proposition 1.7. A subset H of a group G is a subgroup if and only if H is a group under the group operations of G. That is, H is closed under the group operation and contains the identity of G, and the inverse of an element of H is its inverse in G.

Definition 1.8. Let G and H be groups. A homomorphism from G to H is a map $\varphi : G \to H$ such that, for all $x, y$ in G, $\varphi(xy) = \varphi(x)\varphi(y)$.

Proposition 1.9. Let G and H be groups and $\varphi : G \to H$ a homomorphism. Then for all $x, y$ in G,

$$\varphi(xy^{-1}) = \varphi(x)\varphi(y)^{-1} \quad \text{and} \quad \varphi(yx^{-1}) = \varphi(y)\varphi(x)^{-1}$$

Proof: We have

$$\varphi(xy^{-1})\varphi(y) = \varphi((xy^{-1})y) = \varphi(x),$$

since $\varphi$ is a homomorphism. Multiplying on the right by $\varphi(y)^{-1}$ gives the first identity; the second follows by interchanging $x$ and $y$.

Definition 1.10. A homomorphism from G to H which is a bijection is an isomorphism. In that case, we say that G and H are isomorphic, and write $G \cong H$.

Definition 1.11. A bijective homomorphism φ from a group to itself is an automorphism.

Example 1.12. Let $\mathbb{R}^*$ denote the nonzero real numbers. Multiplication by $a \in \mathbb{R}^*$ defines a bijection $\mu_a : \mathbb{R} \to \mathbb{R}$ given by $\mu_a(r) = ar$. The distributive law for $\mathbb{R}$ says that

$$\mu_a(r + s) = a(r + s) = ar + as = \mu_a(r) + \mu_a(s)$$

for all $r, s \in \mathbb{R}$. Thus $\mu_a : \mathbb{R} \to \mathbb{R}$ is a homomorphism for the additive group structure on $\mathbb{R}$. Since $a \neq 0$, $\mu_a$ is in fact an isomorphism. Furthermore, the associative and commutative laws for $\mathbb{R}$ imply that $a(rs) = (ar)s = (ra)s = r(as)$. Hence,

$$\mu_a(rs) = r\mu_a(s).$$

2. The Classical Groups

In this section we study the structure of a classical group G and its Lie algebra [3]. We choose a matrix realization of G such that the diagonal subgroup $H \subset G$ is a maximal torus; by elementary linear algebra every conjugacy class of semisimple elements intersects H. Using the unipotent elements in G, we show that the groups GL(n,R), SL(n,R), SO(n,R), and U(n,C) are connected (as Lie groups and as algebraic groups). This group and its Lie algebra play a basic role in the structure of the other classical groups and Lie algebras. We decompose the Lie algebra of a classical group under the adjoint action of a maximal torus and find the invariant subspaces (called root spaces) and the corresponding characters (called roots). The commutation relations of the root spaces are encoded by the set of roots; we use this information to prove that the classical (trace-zero) Lie algebras are simple (or semisimple). In the final section, we develop some general Lie algebra methods (solvable Lie algebras, the Killing form) and show that every semisimple Lie algebra has a root-space decomposition with the same properties as those of the classical Lie algebras.

Definition 1.13. The classical groups are the groups of invertible linear transformations of finite-dimensional vector spaces over the real, complex, and quaternion fields, together with the subgroups that preserve a volume form or a bilinear form.

Proposition 1.14. The determinant function $\det : M_n(k) \to k$ has the following properties.

i) For $A, B \in M_n(k)$, $\det(AB) = \det A \det B$.

ii) $\det I_n = 1$.

iii) $A \in M_n(k)$ is invertible if and only if $\det A \neq 0$.

3. General Linear Groups GL(n,R)

$$GL(n, \mathbb{R}) = \{ A = [a_{ij}]_{n \times n} \mid \det A \neq 0 \}$$

· Closure property

Let $A, B \in GL(n, \mathbb{R})$; then $\det A \neq 0$ and $\det B \neq 0$. Since

$$\det(AB) = \det A \det B \neq 0,$$

it follows that $AB \in GL(n, \mathbb{R})$.

· Associative property

$$A(BC) = (AB)C, \quad \forall A, B, C \in GL(n, \mathbb{R})$$

· Existence of identity

$$I_{n \times n} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}_{n \times n}$$

satisfies $\det I = 1 \neq 0$ and

$$A I_{n \times n} = A = I_{n \times n} A$$

· Existence of inverse

Let $A \in GL(n, \mathbb{R})$, so $\det A \neq 0$. Then

$$A^{-1} = \frac{1}{\det A} (\operatorname{adj} A)$$

and, since $\det(\operatorname{adj} A) = (\det A)^{n-1}$,

$$\det A^{-1} = \frac{(\det A)^{n-1}}{(\det A)^n} = \frac{1}{\det A} \neq 0,$$

so $A^{-1} \in GL(n, \mathbb{R})$. Indeed,

$$A A^{-1} = A \, \frac{1}{\det A}(\operatorname{adj} A) = \frac{A (\operatorname{adj} A)}{\det A} = \frac{(\det A) I}{\det A} = I$$

· Commutativity

$(GL(n, \mathbb{R}), \cdot)$ is a group, but not an abelian group, because in general $AB \neq BA$.
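As a numerical illustration (added here; it assumes only NumPy, and random Gaussian matrices are invertible with probability one), the following sketch checks closure, the adjugate formula for the inverse, and noncommutativity for n = 3:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Closure: det(AB) = det(A) det(B) != 0.
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# Inverse via the adjugate: A^{-1} = adj(A) / det(A).
adjA = np.linalg.inv(A) * np.linalg.det(A)   # adj(A), recovered from A^{-1}
print(np.allclose(A @ (adjA / np.linalg.det(A)), np.eye(3)))

# Noncommutativity: generically AB != BA.
print(not np.allclose(A @ B, B @ A))
```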

Let F denote either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$, and let V be a finite-dimensional vector space over F [3]. The set of invertible linear transformations from V to V will be denoted by GL(V). This set has a group structure under composition of transformations, with identity element the identity transformation $I(x) = x$ for all $x \in V$. The group GL(V) is the first of the classical groups. To study it in more detail, we recall some standard terminology related to linear transformations and their matrices. Let V and W be finite-dimensional vector spaces over F. Let $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_m\}$ be bases for V and W, respectively. If $T : V \to W$ is a linear map then

$$T v_j = \sum_{i=1}^{m} a_{ij} w_i \quad \text{for } j = 1, \ldots, n$$

with $a_{ij} \in F$. The numbers $a_{ij}$ are called the matrix coefficients or entries of T with respect to the two bases, and the $m \times n$ array

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

is called the matrix of T with respect to the two bases.

Let $S : W \to U$ be another linear transformation, with U an l-dimensional vector space with basis $\{u_1, \ldots, u_l\}$, and let B be the matrix of S with respect to the bases $\{w_1, \ldots, w_m\}$ and $\{u_1, \ldots, u_l\}$. Then the matrix of $S \circ T$ with respect to the bases $\{v_1, \ldots, v_n\}$ and $\{u_1, \ldots, u_l\}$ is given by BA, the product being the usual product of matrices.

We denote the space of all $n \times n$ matrices over F by $M_n(F)$, and we denote the $n \times n$ identity matrix by I (or $I_n$ if the size of the matrix needs to be indicated); it has entries $\delta_{ij} = 1$ if $i = j$ and 0 otherwise. Let V be an n-dimensional vector space over F with basis $\{v_1, \ldots, v_n\}$. If $T : V \to V$ is a linear map, we write $\mu(T)$ for the matrix of T with respect to this basis. If $T, S \in GL(V)$ then the preceding observations imply that $\mu(ST) = \mu(S)\mu(T)$. Furthermore, if $T \in GL(V)$ then $\mu(TT^{-1}) = \mu(T^{-1}T) = \mu(\mathrm{Id}) = I$. The matrix $A \in M_n(F)$ is said to be invertible if there is a matrix $B \in M_n(F)$ such that $AB = BA = I$. We note that a linear map $T : V \to V$ is in GL(V) if and only if its matrix $\mu(T)$ is invertible. We also recall that a matrix $A \in M_n(F)$ is invertible if and only if its determinant is nonzero.

We will use the notation $GL(n, F)$ for the set of $n \times n$ invertible matrices with coefficients in F. Under matrix multiplication $GL(n, F)$ is a group with the identity matrix as identity element. We note that if V is an n-dimensional vector space over F with basis $\{v_1, \ldots, v_n\}$, then the map $\mu : GL(V) \to GL(n, F)$ corresponding to this basis is a group isomorphism. The group $GL(n, F)$ is called the general linear group of rank n.

If $\{w_1, \ldots, w_n\}$ is another basis of V, then there is a matrix $g = [g_{ij}] \in GL(n, F)$ such that

$$w_j = \sum_{i=1}^{n} g_{ij} v_i \quad \text{and} \quad v_j = \sum_{i=1}^{n} h_{ij} w_i \quad \text{for } j = 1, \ldots, n,$$

with $[h_{ij}]$ the inverse matrix to $[g_{ij}]$. Suppose that T is a linear transformation from V to V, that $A = [a_{ij}]$ is the matrix of T with respect to the basis $\{v_1, \ldots, v_n\}$, and that $B = [b_{ij}]$ is the matrix of T with respect to the other basis $\{w_1, \ldots, w_n\}$. Then

$$T w_j = T\Big( \sum_i g_{ij} v_i \Big) = \sum_i g_{ij} T v_i = \sum_i g_{ij} \Big( \sum_k a_{ki} v_k \Big) = \sum_l \Big( \sum_{k,i} h_{lk} a_{ki} g_{ij} \Big) w_l$$

for $j = 1, \ldots, n$. Thus $B = g^{-1} A g$ is similar to the matrix A.
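A short NumPy check (an added sketch, not from the original) of the change-of-basis formula: $B = g^{-1}Ag$ represents the same linear map, so it must share the basis-independent invariants of A, such as trace and determinant:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))   # matrix of T in the basis {v_1, ..., v_n}
g = rng.standard_normal((4, 4))   # change-of-basis matrix (invertible a.s.)
B = np.linalg.inv(g) @ A @ g      # matrix of T in the basis {w_1, ..., w_n}

print(np.isclose(np.trace(A), np.trace(B)))             # similar matrices: same trace
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # same determinant
```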

4. Special Linear Group SL(n,R)

$$SL(n, \mathbb{R}) = \{ A = [a_{ij}]_{n \times n} \in GL(n, \mathbb{R}) \mid \det A = 1 \}$$

· Closure property

Let $A, B \in SL(n, \mathbb{R})$; then $\det A = 1$ and $\det B = 1$. Since

$$\det(AB) = \det A \det B = 1 \cdot 1 = 1,$$

it follows that $AB \in SL(n, \mathbb{R})$.

· Associative property

$$A(BC) = (AB)C, \quad \forall A, B, C \in SL(n, \mathbb{R})$$

· Existence of identity

$$I_{n \times n} = [a_{ij}]_{n \times n}, \quad a_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$

satisfies $\det I = 1$ and

$$A I_{n \times n} = A = I_{n \times n} A$$

· Existence of inverse

Let $A \in SL(n, \mathbb{R})$, so $\det A = 1$. Then $A^{-1}$ exists, and

$$\det A^{-1} = (\det A)^{-1} = \frac{1}{\det A} = \frac{1}{1} = 1,$$

so $A^{-1} \in SL(n, \mathbb{R})$.

· Commutativity

$(SL(n, \mathbb{R}), \cdot)$ is a group, but not an abelian group, because in general $AB \neq BA$.
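For a concrete check (added here as an illustration), take two shear matrices in $SL(2, \mathbb{R})$, each of determinant one by inspection:

```python
import numpy as np

# Two elements of SL(2, R): upper and lower triangular shears, each with det = 1.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],
              [3.0, 1.0]])

print(np.isclose(np.linalg.det(A @ B), 1.0))              # closure
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0))   # inverse stays in SL(2, R)
print(not np.allclose(A @ B, B @ A))                      # not abelian
```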

The special linear group $SL(n, \mathbb{R})$ is the set of all elements A of $M_n(\mathbb{R})$ such that $\det(A) = 1$ [3]. Since $\det(AB) = \det(A)\det(B)$ and $\det(I) = 1$, we see that the special linear group is a subgroup of $GL(n, \mathbb{R})$.

We note that if V is an n-dimensional vector space over F with basis $\{v_1, \ldots, v_n\}$, and if $\mu : GL(V) \to GL(n, \mathbb{R})$ is the map previously defined, then the group

$$\mu^{-1}(SL(n, \mathbb{R})) = \{ T \in GL(V) : \det(\mu(T)) = 1 \}$$

is independent of the choice of basis, by the change of basis formula. We denote this group by SL(V).

5. Orthogonal and Special Orthogonal Group

$$O(n, \mathbb{R}) = \{ A = [a_{ij}]_{n \times n} \in GL(n, \mathbb{R}) \mid A A^T = I_n \}$$

Definition 4.1. The matrix obtained from a matrix A by interchanging its rows and columns is called the transpose of A and is denoted by $A^T$. Accordingly, if $A = [a_{ij}]$ is an $m \times n$ matrix, then $A^T = [a_{ji}]$ is an $n \times m$ matrix:

$$A = (a_{ij}) \implies A^T = (a_{ji})$$

Definition 4.2.

1) A square matrix with $A^T = A$ is called a symmetric matrix. If $A = [a_{ij}]$ is a symmetric matrix, then $a_{ij} = a_{ji}$ for each $i, j$.

2) A square matrix with $A^T = -A$ is called a skew-symmetric matrix. If $A = [a_{ij}]$ is skew-symmetric, then $a_{ij} = -a_{ji}$ for each $i, j$. Thus, in a skew-symmetric matrix, the main diagonal elements are always zero.

Theorem 4.3. Let A and B be two matrices of the same size and let r be a scalar. Then:

1) $(A + B)^T = A^T + B^T$

2) $(rA)^T = rA^T$

3) $(A^T)^T = A$

4) $(AB)^T = B^T A^T$

Theorem 4.4. A square matrix A is orthogonal if and only if its column vectors form an orthonormal set.

Proof: Let A be an $n \times n$ matrix and let $a_1, a_2, \ldots, a_n$ be the column vectors of A. The $(i, j)$ entry of $A^T A$ is the product of the i-th row of $A^T$ (which is the column $a_i$) with the j-th column of A:

$$(A^T A)_{ij} = a_i^T a_j = a_i \cdot a_j = \sum_{k=1}^{n} a_{ki} a_{kj}$$

Hence

$$A^T A = \begin{bmatrix} a_1 \cdot a_1 & a_1 \cdot a_2 & \cdots & a_1 \cdot a_n \\ a_2 \cdot a_1 & a_2 \cdot a_2 & \cdots & a_2 \cdot a_n \\ \vdots & \vdots & \ddots & \vdots \\ a_n \cdot a_1 & a_n \cdot a_2 & \cdots & a_n \cdot a_n \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I_n$$

if and only if

$$a_i \cdot a_j = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$

that is, if and only if the column vectors of A form an orthonormal set.

Example 4.5. The matrix

$$A = \begin{bmatrix} 3/\sqrt{11} & -1/\sqrt{6} & 1/\sqrt{66} \\ 1/\sqrt{11} & 2/\sqrt{6} & 4/\sqrt{66} \\ 1/\sqrt{11} & 1/\sqrt{6} & -7/\sqrt{66} \end{bmatrix}_{3 \times 3}$$

is an orthogonal matrix. Note that $A^T A = I_n$:

$$A^T A = \begin{bmatrix} \frac{9+1+1}{11} & \frac{-3+2+1}{\sqrt{66}} & \frac{3+4-7}{\sqrt{726}} \\ \frac{-3+2+1}{\sqrt{66}} & \frac{1+4+1}{6} & \frac{-1+8-7}{\sqrt{396}} \\ \frac{3+4-7}{\sqrt{726}} & \frac{-1+8-7}{\sqrt{396}} & \frac{1+16+49}{66} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I_3$$
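The following NumPy snippet (added here; the entry signs follow the reconstruction of the example above) verifies the computation:

```python
import numpy as np

A = np.array([[3 / np.sqrt(11), -1 / np.sqrt(6),  1 / np.sqrt(66)],
              [1 / np.sqrt(11),  2 / np.sqrt(6),  4 / np.sqrt(66)],
              [1 / np.sqrt(11),  1 / np.sqrt(6), -7 / np.sqrt(66)]])

print(np.allclose(A.T @ A, np.eye(3)))          # A^T A = I_3: A is orthogonal
print(np.isclose(abs(np.linalg.det(A)), 1.0))   # det A = +-1
```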

Consider the determinant function restricted to $O(n)$, $\det : O(n) \to \mathbb{R}^*$.

For $A \in O(n)$,

$$(\det A)^2 = \det A^T \det A = \det(A^T A) = \det I_n = 1,$$

which implies that $\det A = \pm 1$. Thus, we have

$$O(n) = O(n)^+ \cup O(n)^-,$$

where

$$O(n)^+ = \{ A \in O(n) : \det A = 1 \}, \quad O(n)^- = \{ A \in O(n) : \det A = -1 \}.$$

6. Special Orthogonal Group

$$SO(n, \mathbb{R}) = \{ A = [a_{ij}]_{n \times n} \in GL(n, \mathbb{R}) \mid A A^T = I_n, \ \det A = 1 \}$$

SO(n) is a subgroup of the orthogonal group O(n), and is also known as the special orthogonal group or the rotation group.

We now discuss the orthogonal and special orthogonal groups, O(n) and SO(n) [4]. An $n \times n$ real matrix A is said to be orthogonal if the column vectors that make up A are orthonormal, that is, if

$$\sum_{i=1}^{n} A_{ij} A_{ik} = \delta_{jk}$$

Equivalently, A is orthogonal if it preserves the inner product, namely, if $\langle x, y \rangle = \langle Ax, Ay \rangle$ for all vectors $x, y$ in $\mathbb{R}^n$. (Angled brackets denote the usual inner product on $\mathbb{R}^n$, $\langle x, y \rangle = \sum_i x_i y_i$.) Still another equivalent definition is that A is orthogonal if $A^{tr} A = I$, i.e., if $A^{tr} = A^{-1}$. ($A^{tr}$ is the transpose of A, $(A^{tr})_{ij} = A_{ji}$.)

Since $\det A^{tr} = \det A$, we see that if A is orthogonal, then $\det(A^{tr} A) = (\det A)^2 = \det I = 1$. Hence $\det A = \pm 1$ for all orthogonal matrices A.

This formula tells us, in particular, that every orthogonal matrix must be invertible. Moreover, if A is an orthogonal matrix, then

$$\langle A^{-1} x, A^{-1} y \rangle = \langle A(A^{-1} x), A(A^{-1} y) \rangle = \langle x, y \rangle$$

Thus, the inverse of an orthogonal matrix is orthogonal. Furthermore, the product of two orthogonal matrices is orthogonal, since if A and B both preserve inner products, then so does AB. Thus, the set of orthogonal matrices forms a group.

The set of all $n \times n$ real orthogonal matrices is the orthogonal group O(n), and it is a subgroup of $GL(n, \mathbb{C})$. The limit of a sequence of orthogonal matrices is orthogonal, because the relation $A^{tr} A = I$ is preserved under limits. Thus O(n) is a matrix Lie group.

The set of $n \times n$ orthogonal matrices with determinant one is the special orthogonal group SO(n). Clearly this is a subgroup of O(n), and hence of $GL(n, \mathbb{C})$.

Moreover, both orthogonality and the property of having determinant one are preserved under limits, and so SO(n) is a matrix Lie group. Since elements of O(n) already have determinant ±1, SO(n) is "half" of O(n).

7. Unitary Groups

$$U(n, \mathbb{C}) = \{ A = [a_{ij}]_{n \times n} \in GL(n, \mathbb{C}) \mid A \overline{A^T} = I \}$$

For $A = [a_{ij}] \in M_n(\mathbb{C})$,

$$A^* = (\overline{A})^T = \overline{A^T}$$

is the Hermitian conjugate of A, i.e., $(A^*)_{ij} = \overline{a_{ji}}$.

Definition 3.5. [5] For a matrix A whose entries are complex numbers, the matrix obtained by replacing each entry with its complex conjugate is called the conjugate of A and is denoted by $\overline{A}$.

Simple Example 5.1: The matrix

$$A = \begin{bmatrix} 6 & 4+3i & 5i & 3i \\ 1-i & 3-2i & i & 1+i \end{bmatrix}$$

has conjugate

$$\overline{A} = \begin{bmatrix} 6 & 4-3i & -5i & -3i \\ 1+i & 3+2i & -i & 1-i \end{bmatrix}$$

Theorem 5.2. Let A and B be two matrices and k any scalar. Then:

1) $\overline{\overline{A}} = A$

2) $\overline{kA} = \bar{k}\,\overline{A}$

3) $\overline{A + B} = \overline{A} + \overline{B}$

4) $\overline{AB} = \overline{A}\,\overline{B}$

Example 5.3: Let

$$A = \begin{bmatrix} 1 & i & 1-i \\ -i & 3 & 2+i \\ 1+i & 5-i & 3 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1-i & i \\ 0 & 3 \\ 2-2i & -i \end{bmatrix}$$

Note that $A^* = (\overline{A})^T$:

$$A^* = \left( \overline{\begin{bmatrix} 1 & i & 1-i \\ -i & 3 & 2+i \\ 1+i & 5-i & 3 \end{bmatrix}} \right)^T = \begin{bmatrix} 1 & -i & 1+i \\ i & 3 & 2-i \\ 1-i & 5+i & 3 \end{bmatrix}^T = \begin{bmatrix} 1 & i & 1-i \\ -i & 3 & 5+i \\ 1+i & 2-i & 3 \end{bmatrix}$$

$$B^* = \left( \overline{\begin{bmatrix} 1-i & i \\ 0 & 3 \\ 2-2i & -i \end{bmatrix}} \right)^T = \begin{bmatrix} 1+i & -i \\ 0 & 3 \\ 2+2i & i \end{bmatrix}^T = \begin{bmatrix} 1+i & 0 & 2+2i \\ -i & 3 & i \end{bmatrix}$$

Definition 5.4. A square matrix with $A^* = (\overline{A})^T = A$ is called a Hermitian matrix. If the square matrix $A = [a_{ij}]$ is Hermitian, then $a_{ij} = \overline{a_{ji}}$. The diagonal elements of a Hermitian matrix are real numbers.

Simple Example 5.5: The matrix

$$A = \begin{bmatrix} 1 & -i & 1+i \\ i & 4 & 2-i \\ 1-i & 2+i & 3 \end{bmatrix}$$

is a Hermitian matrix.

Definition 5.6: [5] A square matrix with $A^* = (\overline{A})^T = -A$ is called a skew-Hermitian matrix. If the square matrix $A = [a_{ij}]$ is skew-Hermitian, then $a_{ij} = -\overline{a_{ji}}$. The diagonal elements of a skew-Hermitian matrix are 0 or purely imaginary.

Simple Example 5.7: The matrix

$$A = \begin{bmatrix} i & 1-i & 5 \\ -1-i & 2i & i \\ -5 & i & 0 \end{bmatrix}$$

is a skew-Hermitian matrix.

Example 5.8: Let

$$A = \begin{bmatrix} \tfrac{1}{2}(1+i) & \tfrac{1}{2}(1+i) \\ \tfrac{1}{2}(1-i) & \tfrac{1}{2}(-1+i) \end{bmatrix}$$

Then

$$A A^* = A \left( \overline{A} \right)^T = \begin{bmatrix} \tfrac{1}{2}(1+i) & \tfrac{1}{2}(1+i) \\ \tfrac{1}{2}(1-i) & \tfrac{1}{2}(-1+i) \end{bmatrix} \begin{bmatrix} \tfrac{1}{2}(1-i) & \tfrac{1}{2}(1+i) \\ \tfrac{1}{2}(1-i) & \tfrac{1}{2}(-1-i) \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2}+\tfrac{1}{2} & \tfrac{i}{2}-\tfrac{i}{2} \\ -\tfrac{i}{2}+\tfrac{i}{2} & \tfrac{1}{2}+\tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

so A is unitary.
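A quick NumPy verification of this example (added here; the signs match the matrix as reconstructed above):

```python
import numpy as np

A = 0.5 * np.array([[1 + 1j,  1 + 1j],
                    [1 - 1j, -1 + 1j]])

print(np.allclose(A @ A.conj().T, np.eye(2)))   # A A* = I: A is unitary
print(np.isclose(abs(np.linalg.det(A)), 1.0))   # |det A| = 1
```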

Theorem 5.9: Let

$$A = \begin{bmatrix} \tfrac{1}{\sqrt{2}}(a_{11} + ib_{11}) & \tfrac{1}{\sqrt{2}}(a_{12} + ib_{12}) & \cdots & \tfrac{1}{\sqrt{2}}(a_{1n} + ib_{1n}) \\ \vdots & \vdots & \ddots & \vdots \\ \tfrac{1}{\sqrt{2}}(a_{n1} + ib_{n1}) & \tfrac{1}{\sqrt{2}}(a_{n2} + ib_{n2}) & \cdots & \tfrac{1}{\sqrt{2}}(a_{nn} + ib_{nn}) \end{bmatrix}$$

Then

$$A \overline{A^T} = I$$

if and only if the columns $a_1, \ldots, a_n$ of A are orthonormal,

$$\langle a_i, a_j \rangle = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$

For example, for the diagonal matrix with entries $\tfrac{1}{\sqrt{2}}(1+i)$,

$$\begin{bmatrix} \tfrac{1}{\sqrt{2}}(1+i) & & \\ & \ddots & \\ & & \tfrac{1}{\sqrt{2}}(1+i) \end{bmatrix} \begin{bmatrix} \tfrac{1}{\sqrt{2}}(1-i) & & \\ & \ddots & \\ & & \tfrac{1}{\sqrt{2}}(1-i) \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2}(2) & & \\ & \ddots & \\ & & \tfrac{1}{2}(2) \end{bmatrix} = I$$

An $n \times n$ complex matrix A is said to be unitary if the column vectors of A are orthonormal [4], that is, if

$$\sum_{i=1}^{n} \overline{A_{ij}} A_{ik} = \delta_{jk}$$

Equivalently, A is unitary if it preserves the inner product, namely, if $\langle x, y \rangle = \langle Ax, Ay \rangle$ for all vectors $x, y$ in $\mathbb{C}^n$. (Angled brackets here denote the inner product on $\mathbb{C}^n$, $\langle x, y \rangle = \sum_i \overline{x_i} y_i$. We adopt the convention of putting the complex conjugate on the left.) Still another equivalent definition is that A is unitary if $A^* A = I$, i.e., if $A^* = A^{-1}$. ($A^*$ is the adjoint of A, $(A^*)_{ij} = \overline{A_{ji}}$.)

Since $\det A^* = \overline{\det A}$, we see that if A is unitary, then $\det(A^* A) = |\det A|^2 = \det I = 1$. Hence $|\det A| = 1$ for all unitary matrices A. This in particular shows that every unitary matrix is invertible. The same argument as for the orthogonal group shows that the set of unitary matrices forms a group.

The set of all $n \times n$ unitary matrices is the unitary group U(n), and it is a subgroup of $GL(n, \mathbb{C})$. The limit of unitary matrices is unitary, so U(n) is a matrix Lie group. The set of unitary matrices with determinant one is the special unitary group SU(n). It is easy to check that SU(n) is a matrix Lie group. Note that a unitary matrix can have determinant $e^{i\theta}$ for any θ, and so SU(n) is a smaller subset of U(n) than SO(n) is of O(n). (Specifically, SO(n) has the same dimension as O(n), whereas SU(n) has dimension one less than that of U(n).)

8. Symplectic Groups

Consider an $n \times n$ real skew-symmetric matrix A [6], i.e., one for which $A^T = -A$. For such a matrix,

$$\det A^T = \det(-A) = (-1)^n \det A,$$

giving

$$\det A = (-1)^n \det A$$

The most interesting case occurs if $\det A \neq 0$, when n must be even, and we then write $n = 2m$. The standard example of this is built up using the $2 \times 2$ block

$$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$

If $m \geq 1$ we have the non-degenerate skew-symmetric matrix

$$J_{2m} = \begin{bmatrix} J & O_2 & \cdots & O_2 \\ O_2 & J & \cdots & O_2 \\ \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & \cdots & J \end{bmatrix}$$

The matrix group

$$Sp_{2m}(\mathbb{R}) = \{ A \in GL_{2m}(\mathbb{R}) : A^T J_{2m} A = J_{2m} \} \leq GL_{2m}(\mathbb{R})$$

is called the $2m \times 2m$ (real) symplectic group.

We will now look at the coordinate-free version of these groups. A bilinear form B is called skew-symmetric if $B(v, w) = -B(w, v)$. If B is skew-symmetric and non-degenerate, then $\dim V$ must be even, since the matrix of B relative to any basis for V is skew-symmetric and has nonzero determinant.
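As an added sketch (assuming only NumPy), we can build $J_{2m}$ with a Kronecker product and verify the defining condition $A^T J_{2m} A = J_{2m}$; in dimension 2 any matrix of determinant one is symplectic, e.g. a shear:

```python
import numpy as np

def J2m(m):
    """J_{2m}: m copies of the 2x2 block J = [[0, 1], [-1, 0]] on the diagonal."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(m), J)

A = np.array([[1.0, 5.0],
              [0.0, 1.0]])         # det A = 1, so A lies in Sp_2(R)
J = J2m(1)
print(np.allclose(A.T @ J @ A, J))   # True: A^T J A = J
```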

9. Cyclic Groups

The next class of groups we will consider consists of the cyclic groups. Before defining these groups, we need to explain how exponents work. If G is a group and $x \in G$, then for a positive integer m, $x^m = x \cdots x$ (m factors). We define $x^{-m}$ to be $(x^{-1})^m$. Also, $x^0 = 1$. Then the usual laws of exponentiation hold for all integers m, n:

1) $x^m x^n = x^{m+n}$

2) $(x^m)^n = x^{mn}$

Definition 7.1. A group G is said to be cyclic if there exists an element $x \in G$ such that for every $y \in G$ there is an integer m such that $y = x^m$. Such an element x is said to generate G.
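As an added illustration, the rotation of the plane through $2\pi/6$ generates a cyclic group of order 6 inside SO(2): every element is a power $r^m$ of the single generator r:

```python
import numpy as np

theta = 2 * np.pi / 6
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by 60 degrees

print(np.allclose(np.linalg.matrix_power(r, 6), np.eye(2)))   # r^6 = identity
# The six powers 1, r, r^2, ..., r^5 exhaust the group generated by r.
```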

10. Dihedral Groups: Generators and Relations

In the next example, we give an illustration of a group G that is described by giving a set of its generators and the relations the generators satisfy. The group we will study is called the dihedral group. We will see in due course that the dihedral groups are the symmetry groups of the regular polygons in the plane. As we will see in this example, defining a group by giving generators and relations does not necessarily reveal much information about the group.

Example 8.1. [2] Let us now verify that D(2) (see Example 8.2 below for the general definition of the dihedral group D(m)) is a group. Since the multiplication of words is associative, it follows from the requirements $a^2 = b^2 = 1$ and $ab = ba$ that every word can be collapsed to one of $1, a, b, ab, ba$. But $ab = ba$, so $D(2) = \{1, a, b, ab\}$. To see that D(2) is closed under multiplication, we observe that

$$a(ab) = a^2 b = b, \quad b(ab) = (ba)b = ab^2 = a, \quad (ab)a = (ba)a = ba^2 = b, \quad (ab)(ab) = (ba)(ab) = ba^2 b = b^2 = 1.$$

Therefore, D(2) is closed under multiplication, so it follows from our other remarks that D(2) is a group. Note that the order of D(2) is 4.

Example 8.2. [2] (Dihedral Groups) The dihedral groups are groups that are defined by specifying two generators a and b and also specifying the relations that the generators satisfy. When we define a group by generators and relations, we consider all words in the generators, in this case a and b: these are all the strings or products $x_1 x_2 \cdots x_n$, where each $x_i$ is either a or b, and n is an arbitrary positive integer. For example, abbaabbaabbaabba is a word with n = 16. Two words are multiplied together by placing them side by side.

Thus,

$$(x_1 x_2 \cdots x_n)(y_1 y_2 \cdots y_p) = x_1 x_2 \cdots x_n y_1 y_2 \cdots y_p.$$

This produces an associative binary operation on the set of words. The next step is to impose some relations that a and b satisfy. Suppose $m > 1$. The dihedral group D(m) is defined to be the set of all words in a and b with the above multiplication, subject to the following relations:

$$a^m = b^2 = 1, \quad ab = ba^{m-1}.$$

It is understood that the cyclic groups $\langle a \rangle$ and $\langle b \rangle$ have orders m and two, respectively. By the relations, $a^{-1} = a^{m-1}$ and $b = b^{-1}$. For example, if $m = 3$, then $a^3 = b^2 = 1$, so

$$aaabababbb = (aaa)(bab)(ab)(bb) = (a^2)(ab) = a^3 b = b.$$

The reader can show that $D(3) = \{1, a, a^2, b, ab, ba\}$. For example, $a^2 b = a(ab) = a(ba^2) = (ab)a^2 = ba^4 = ba$. Hence, D(3) has order 6. We will give a more convincing argument in due course.
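A small word-reduction sketch in Python (added here; it encodes the normal form $b^j a^i$ implied by the relations, using $ab = ba^{m-1}$ for m = 3) confirms that D(3) has exactly 6 elements:

```python
from itertools import product

def reduce_word(word, m=3):
    """Reduce a word in a, b to the normal form (j, i), meaning b^j a^i,
    using a^m = b^2 = 1 and a b = b a^(m-1)."""
    i = j = 0
    for letter in word:
        if letter == 'a':
            i = (i + 1) % m
        else:                    # (b^j a^i) b = b^(j+1) a^(-i), since a^i b = b a^(-i)
            j = (j + 1) % 2
            i = (-i) % m
    return (j, i)

elements = {reduce_word(w) for n in range(1, 7) for w in product('ab', repeat=n)}
elements.add((0, 0))             # the empty word, i.e., the identity
print(len(elements))             # 6: the order of D(3)
```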

11. Quaternionic Groups

We recall some basic properties of the quaternions [3]. Consider the four-dimensional real vector space $\mathbb{H}$ consisting of the $2 \times 2$ complex matrices

$$w = \begin{bmatrix} x & y \\ -\bar{y} & \bar{x} \end{bmatrix}$$

with $x, y \in \mathbb{C}$.

One checks directly that $\mathbb{H}$ is closed under multiplication in $M_2(\mathbb{C})$. If $w \in \mathbb{H}$ then $w^* \in \mathbb{H}$ and

$$w^* w = w w^* = (|x|^2 + |y|^2) I$$

(where $w^*$ denotes the conjugate-transpose matrix). Hence every nonzero element of $\mathbb{H}$ is invertible. Thus $\mathbb{H}$ is a division algebra (or skew field) over $\mathbb{R}$. This division algebra is a realization of the quaternions. The more usual way of introducing the quaternions is to consider the vector space $\mathbb{H}$ over $\mathbb{R}$ with basis $\{1, i, j, k\}$. Define a multiplication so that 1 is the identity and

$$i^2 = j^2 = k^2 = -1;$$

$$ij = -ji = k, \quad ki = -ik = j, \quad jk = -kj = i;$$

then extend the multiplication to $\mathbb{H}$ by linearity relative to real scalars. To obtain an isomorphism between this version of $\mathbb{H}$ and the $2 \times 2$ complex matrix version, take

$$1 = I, \quad i = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}, \quad j = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad k = \begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix}$$

where i is a fixed choice of $\sqrt{-1}$. The conjugation $w \mapsto w^*$ satisfies $(uv)^* = v^* u^*$. In terms of real components, $(a + bi + cj + dk)^* = a - bi - cj - dk$ for $a, b, c, d \in \mathbb{R}$. It is useful to write quaternions in complex form as $x + jy$ with $x, y \in \mathbb{C}$; however, note that the conjugation is then given as

$$(x + jy)^* = \bar{x} + \bar{y} j^* = \bar{x} - jy$$
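The $2 \times 2$ matrix realization above can be checked directly in NumPy (an added sketch; `qi`, `qj`, `qk` denote the matrices chosen for i, j, k):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

for q in (qi, qj, qk):
    assert np.allclose(q @ q, -I2)       # i^2 = j^2 = k^2 = -1
assert np.allclose(qi @ qj, qk)          # ij = k
assert np.allclose(qj @ qk, qi)          # jk = i
assert np.allclose(qk @ qi, qj)          # ki = j

# w* w = (|x|^2 + |y|^2) I for w = [[x, y], [-conj(y), conj(x)]]:
x, y = 1 + 2j, 3 - 1j
w = np.array([[x, y], [-np.conjugate(y), np.conjugate(x)]])
print(np.allclose(w.conj().T @ w, (abs(x)**2 + abs(y)**2) * I2))
```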

On the 4n-dimensional real vector space $\mathbb{H}^n$ we define multiplication by $a \in \mathbb{H}$ on the right:

$$(u_1, \ldots, u_n) a = (u_1 a, \ldots, u_n a)$$

We note that $u \cdot 1 = u$ and $u(ab) = (ua)b$. We can therefore think of $\mathbb{H}^n$ as a vector space over $\mathbb{H}$. Viewing elements of $\mathbb{H}^n$ as $n \times 1$ column vectors, we define $Au$ for $u \in \mathbb{H}^n$ and $A \in M_n(\mathbb{H})$ by matrix multiplication. Then $A(ua) = (Au)a$ for $a \in \mathbb{H}$; hence A defines a quaternionic linear map. Here matrix multiplication is defined as usual, but one must be careful about the order of multiplication of the entries.

We can make $\mathbb{H}^n$ into a 2n-dimensional vector space over $\mathbb{C}$ in many ways; for example, we can embed $\mathbb{C}$ into $\mathbb{H}$ as any of the subfields

$$\mathbb{R}1 + \mathbb{R}i, \quad \mathbb{R}1 + \mathbb{R}j, \quad \mathbb{R}1 + \mathbb{R}k$$

Using the first of these embeddings, we write $z = x + jy \in \mathbb{H}^n$ with $x, y \in \mathbb{C}^n$, and likewise $C = A + jB \in M_n(\mathbb{H})$ with $A, B \in M_n(\mathbb{C})$. The maps

$$z \mapsto \begin{bmatrix} x \\ y \end{bmatrix} \quad \text{and} \quad C \mapsto \begin{bmatrix} A & -\overline{B} \\ B & \overline{A} \end{bmatrix}$$

identify $\mathbb{H}^n$ with $\mathbb{C}^{2n}$ and $M_n(\mathbb{H})$ with the real subalgebra of $M_{2n}(\mathbb{C})$ consisting of matrices T such that

$$T J = J \overline{T}$$

where

$$J = \begin{bmatrix} 0 & -I \\ I & 0 \end{bmatrix}$$

The Future Perspective of This Paper

The hope for this paper is that it will help some readers to better understand the introductory part of the theory of linear groups, while for others, like me, it may serve as an incentive for further work and engagement in science.

NOTES

1https://www.routledge.com/Linear-Groups-The-Accent-on-Infinite-Dimensionality/Dixon-Kurdachenko-Subbotin/p/book/9781138542808.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Carrell, J.B. (2017) Groups, Matrices, and Vector Spaces: A Group Theoretic Approach to Linear Algebra. Department of Mathematics, University of British Columbia, Vancouver.
http://www.uop.edu.pk/ocontents/Groups,%20Matrices,%20and%20Vector%20Spaces%20A%20Group%20Theoretic%20Approach%20to%20Linear%20Algebra.pdf
[2] Grillet, P.A. (2007) Abstract Algebra. Department of Mathematics, Tulane University, New Orleans.
http://dobrochan.ru/src/pdf/1204/Grillet_P._A._-_Abstract_Algebra_(2007)(684).pdf
[3] Goodman, R. and Wallach, N.R. (2009) Symmetry, Representations, and Invariants. Department of Mathematics, Rutgers University, Piscataway, and University of California, San Diego.
https://www.maths.ed.ac.uk/~v1ranick/papers/goodwallx.pdf
[4] Hall, B.C. (2000) An Elementary Introduction to Groups and Representations. Department of Mathematics, University of Notre Dame, Notre Dame.
https://arxiv.org/abs/math-ph/0005032
[5] Agargun, A.G. and Ozdag, H. (2007) Linear Algebra and Calculus Problems. Yildiz Technical University, Istanbul.
https://www.cimri.com/kaynak-kitaplari/en-ucuz-lineer-cebir-ve-cozumlu-problemleri-a-goksel-agargunhulya-ozdag-9781111138769-fiyatlari,528793
[6] Baker, A. (2002) Matrix Groups: An Introduction to Lie Group Theory. Department of Mathematics, University of Glasgow, Glasgow, UK.
http://inis.jinr.ru/sl/vol1/UH/_Ready/Mathematics/Algebra/Baker%20A.%20Matrix%20groups,%20an%20introduction%20to%20Lie%20groups%20(Springer,%202002)(L)(173s).pdf
