Representations of Lie Groups

Abstract

In this paper, the most important linear groups are classified: those that one often meets when studying linear groups, as well as their application to Lie groups. In addition to the introductory part, we treat general linear groups, special linear groups, orthogonal groups, symplectic groups, cyclic groups, and dihedral groups, together with their generators and relations. The paper is rounded out with brief definitions, examples and proofs, as well as several problems. When asked why this paper, I will just say that it is one of the ways I contribute to the community and try to be a part of this little world of science.


Hasić, A. (2021) Representations of Lie Groups. Advances in Linear Algebra & Matrix Theory, 11, 117-134. doi: 10.4236/alamt.2021.114009.

1. Introduction

Algebra is the mathematical discipline that arose from the problem of solving equations [1]. If one starts with the integers Z, one knows that every equation $a+x=b$, where a and b are integers, has a unique solution. However, the equation $ax=b$ does not necessarily have a solution in Z, or it might have infinitely many solutions (take $a=b=0$ ). So let us enlarge Z to the rational numbers Q, consisting of all fractions c/d, where $d\ne 0$. Then both equations have a unique solution in Q, provided that $a\ne 0$ for the equation $ax=b$. So, Q is a field. If, for example, one takes the solutions of an equation such as ${x}^{2}-5=0$ and forms the set of all numbers of the form $a+b\sqrt{5}$, where a and b are rational, we get a larger field, denoted by $Q\left(\sqrt{5}\right)$, called an algebraic number field. In the study of fields obtained by adjoining the roots of polynomial equations, a new notion arose, namely, the symmetries of the field that permute the roots of the equation. Evariste Galois (1811-1832) coined the term group for these symmetries, and now this group is called the Galois group of the field. While still a teenager, Galois showed that the roots of an equation are expressible by radicals if and only if the group of the equation has a property now called solvability. This stunning result solved the 350-year-old question of whether the roots of every polynomial equation are expressible by radicals.

The monograph Linear Groups: The Accent on Infinite Dimensionality explores some of the main results and ideas in the study of infinite-dimensional linear groups. The theory of finite-dimensional linear groups is one of the best developed algebraic theories. The array of articles devoted to this topic is enormous, and there are many monographs concerned with matrix groups, ranging from old, classical texts to ones published more recently. However, in the case when the dimension is infinite (and such cases arise quite often), the reality is quite different.

The situation with the study of infinite-dimensional linear groups is like the situation that has developed in the theory of groups, in the transition from the study of finite groups to the study of infinite groups which appeared about one hundred years ago. It is well known that this transition was extremely efficient and led to the development of a rich and central branch of algebra: Infinite group theory.

Group theory arose from the study of polynomial equations [2]. The solvability of an equation is determined by a group of permutations of its roots; before Abel [1824] and Galois [1830] mastered this relationship, it led Lagrange [1770] and Cauchy [1812] to investigate permutations and prove forerunners of the theorems that bear their names. The term “group” was coined by Galois. Interest in groups of transformations, and in what we now call the classical groups, grew after 1850; thus, Klein’s Erlanger Programm [1872] emphasized their role in geometry. Modern group theory began when the axiomatic method was applied to these results; Burnside’s Theory of Groups of Finite Order [1897] marks the beginning of a new discipline, abstract algebra, in that structures are defined by axioms, and the nature of their elements is irrelevant.

Definition 1.1. A group is a set G, together with a map of $G×G$ into G with the following properties:

· Closure: For all $x,y\in G$, we have

$x\cdot y\in G$

· Associativity: For all $x,y,z\in G$,

$\left(xy\right)z=x\left( y z \right)$

· There exists an element e in G such that for all $x\in G$,

$x\cdot e=e\cdot x=x$

The element e is unique, and is called the identity element of the group, or simply the identity.

· Inverses: For each $x\in G$, there exists ${x}^{\prime }\in G$ with

$x\cdot {x}^{\prime }={x}^{\prime }\cdot x=e$

The element ${x}^{\prime }$ is called an inverse of x.

Definition 1.2. A group G is said to be commutative or abelian if for all $x,y\in G$, we have $x\cdot y=y\cdot x$. A group that is not abelian is said to be nonabelian.
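Definitions 1.1 and 1.2 can be checked mechanically for a small concrete example. The following Python sketch (the choice of the integers modulo 6 under addition is mine, not from the paper) verifies closure, associativity, identity, inverses, and commutativity by brute force.

```python
# Brute-force check of the group axioms (Definition 1.1) and commutativity
# (Definition 1.2) for the integers modulo n under addition; n = 6 is an
# arbitrary choice, not from the paper.
n = 6
G = range(n)
op = lambda x, y: (x + y) % n

# Closure and associativity
assert all(op(x, y) in G for x in G for y in G)
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x in G for y in G for z in G)

# Identity e = 0 and an inverse for every element
e = 0
assert all(op(x, e) == x and op(e, x) == x for x in G)
assert all(any(op(x, y) == e for y in G) for x in G)

# This group is abelian
assert all(op(x, y) == op(y, x) for x in G for y in G)
```

The same brute-force pattern works for any finite set with a candidate operation.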

Proposition 1.3. (Uniqueness of the Identity). Let G be a group, and let $e,f\in G$ be such that for all $x\in G$,

$e\cdot x=x\cdot e=x$

$f\cdot x=x\cdot f=x$

Then $e=f$.

Proof. Since e is an identity, we have

$e\cdot f=f$

On the other hand, since f is an identity, we have

$e\cdot f=e$

Thus $e=e\cdot f=f$.

Proposition 1.4 (Uniqueness of Inverses). Let G be a group, e the (unique) identity of G, and $x,y,z\in G$. Suppose that

$x\cdot y=y\cdot x=e$

$x\cdot z=z\cdot x=e$

Then $y=z$.

Proof: We know that $x\cdot y=x\cdot z=e$. Multiplying on the left by y gives $y\cdot \left(x\cdot y\right)=y\cdot \left(x\cdot z\right)$.

By associativity, this gives

$\left(y\cdot x\right)\cdot y=\left(y\cdot x\right)\cdot z$

and so

$e\cdot y=e\cdot z$

$y=z$.

Proposition 1.5. [1] For all $x,y\in G$, we have ${\left(xy\right)}^{-1}={y}^{-1}{x}^{-1}$ .

Proof: Let $w={y}^{-1}{x}^{-1}$. Then it suffices to show that $w\left(xy\right)=1$. But

$w\left(xy\right)=\left(wx\right)y=\left(\left({y}^{-1}{x}^{-1}\right)x\right)y=\left({y}^{-1}\left({x}^{-1}x\right)\right)y=\left({y}^{-1}1\right)y={y}^{-1}y=1.$
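Proposition 1.5 can be illustrated numerically in the group of invertible matrices. A minimal NumPy sketch follows (the use of random 3×3 matrices is an arbitrary choice of mine); note that the reversed order of the inverses matters.

```python
# Numerical illustration of Proposition 1.5, (xy)^{-1} = y^{-1} x^{-1},
# for invertible 3x3 real matrices (sizes and seed are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # shifted to be safely invertible
y = rng.standard_normal((3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(x @ y)
rhs = np.linalg.inv(y) @ np.linalg.inv(x)
assert np.allclose(lhs, rhs)

# The order matters: x^{-1} y^{-1} is generally different
assert not np.allclose(lhs, np.linalg.inv(x) @ np.linalg.inv(y))
```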

If ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ are arbitrary elements of a group G, then the expression ${x}_{1}{x}_{2}\cdots {x}_{n}$ will stand for ${x}_{1}\left({x}_{2}\cdots {x}_{n}\right)$, where ${x}_{2}\cdots {x}_{n}={x}_{2}\left({x}_{3}\cdots {x}_{n}\right)$ and so on. This gives an inductive definition of the product of an arbitrary finite number of elements of G. Moreover, by associativity, pairs ( $\cdots$ ) of parentheses can be inserted or removed in the expression ${x}_{1}{x}_{2}\cdots {x}_{n}$ without making any change in the group element being represented, provided the new expression makes sense. (For example, you can’t have an empty pair of parentheses, and the number of left parentheses has to be the same as the number of right parentheses).

$\left({y}^{-1}{x}^{-1}\right)\left(xy\right)={y}^{-1}\left(\left({x}^{-1}x\right)y\right)={y}^{-1}\left(1y\right)={y}^{-1}y=1.$

Definition 1.6. A subgroup of a group G is a subset H of G with the following properties:

1) The identity is an element of H.

2) If $h\in H$, then ${h}^{-1}\in H$.

3) If ${h}_{1},{h}_{2}\in H$, then ${h}_{1}\cdot {h}_{2}\in H$.

Proposition 1.7. A subset H of a group G is a subgroup if and only if H is a group under the group operations of G. That is, H is closed under the group operation and contains the identity of G, and the inverse of an element of H is its inverse in G.

Definition 1.8. Let G and H be groups. A homomorphism from G to H is a map $\phi :G\to H$ such that, for all $x,y$ in G, $\phi \left(x\cdot y\right)=\phi \left(x\right)\phi \left(y\right)$.

Proposition 1.9. Let G and H be groups and $\phi :G\to H$ be a homomorphism. Then for all $x,y$ in G,

$\phi \left(x{y}^{-1}\right)=\phi \left(x\right)\phi {\left(y\right)}^{-1}$ and $\phi \left(y{x}^{-1}\right)=\phi \left(y\right)\phi {\left(x\right)}^{-1}$

Proof: We have

$\phi \left(x{y}^{-1}\right)\phi \left(y\right)=\phi \left(\left(x{y}^{-1}\right)y\right)=\phi \left(x\cdot e\right)=\phi \left( x \right)$

since $\phi$ is a homomorphism. Multiplying on the right by $\phi {\left(y\right)}^{-1}$ gives the first identity; the second follows by exchanging the roles of x and y.

Definition 1.10. A homomorphism from G to H which is a bijection is an isomorphism. In that case, we say that G and H are isomorphic, and write $G\cong H$.

Definition 1.11. A bijective homomorphism $\phi$ from a group to itself is an automorphism.

Example 1.12. Let ${R}^{*}$ denote the nonzero real numbers. Multiplication by $a\in {R}^{*}$ defines a bijection ${\mu }_{a}:R\to R$ given by ${\mu }_{a}\left(r\right)=ar$. The distributive law for R says that

${\mu }_{a}\left(r+s\right)=a\left(r+s\right)=ar+as={\mu }_{a}\left(r\right)+{\mu }_{a}\left( s \right)$

for all $r,s\in R$. Thus ${\mu }_{a}:R\to R$ is a homomorphism for the additive group structure on R. Since $a\ne 0$, ${\mu }_{a}$ is in fact an isomorphism. Furthermore, the associative and commutative laws for R imply that $a\left(rs\right)=\left(ar\right)s=\left(ra\right)s=r\left(as\right)$. Hence,

${\mu }_{a}\left(rs\right)=r{\mu }_{a}\left(s\right)$.

2. The Classical Groups

In this section we study the structure of a classical group G and its Lie algebra [3]. We choose a matrix realization of G such that the diagonal subgroup $H\subset G$ is a maximal torus; by elementary linear algebra every conjugacy class of semisimple elements intersects H. Using the unipotent elements in G, we show that the groups GL(n,R), SL(n,R), SO(n,R), and U(n,C) are connected (as Lie groups and as algebraic groups). This group and its Lie algebra play a basic role in the structure of the other classical groups and Lie algebras. We decompose the Lie algebra of a classical group under the adjoint action of a maximal torus and find the invariant subspaces (called root spaces) and the corresponding characters (called roots). The commutation relations of the root spaces are encoded by the set of roots; we use this information to prove that the classical (trace-zero) Lie algebras are simple (or semisimple). In the final section of the chapter, we develop some general Lie algebra methods (solvable Lie algebras, Killing form) and show that every semisimple Lie algebra has a root-space decomposition with the same properties as those of the classical Lie algebras.

Definition 1.13. The classical groups are the groups of invertible linear transformations of finite-dimensional vector spaces over the real, complex, and quaternion fields, together with the subgroups that preserve a volume form or a bilinear form.

Proposition 1.14. The determinant function $\mathrm{det}:{M}_{n}\left(k\right)\to k$ has the following properties.

i) For $A,B\in {M}_{n}\left(k\right)$, $\mathrm{det}\left(AB\right)=\mathrm{det}A\cdot \mathrm{det}B$.

ii) $\mathrm{det}{I}_{n}=1$.

iii) $A\in {M}_{n}\left(k\right)$ is invertible if and only if $\mathrm{det}A\ne 0$.
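The three properties in Proposition 1.14 can be verified numerically. The sketch below uses NumPy with two arbitrary 2×2 example matrices of my own choosing.

```python
# Numerical check of Proposition 1.14 (properties of det) for two
# arbitrary 2x2 example matrices.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det A = 1
B = np.array([[0.0, 1.0], [-1.0, 3.0]])  # det B = 1

# i) det(AB) = det A * det B
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
# ii) det I = 1
assert np.isclose(np.linalg.det(np.eye(2)), 1.0)
# iii) det A != 0, so A is invertible
assert np.linalg.det(A) != 0
assert np.allclose(A @ np.linalg.inv(A), np.eye(2))
```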

3. General Linear Groups GL(n,R)

$GL\left(n,R\right)=\left\{A={\left[{a}_{ij}\right]}_{n×n}|\mathrm{det}A\ne 0,\left(|A|\ne 0\right)\right\}$

· Closure property

Let $A,B\in GL\left(n,R\right)$ then $|A|\ne 0,|B|\ne 0$

$|A\cdot B|=|A|\cdot |B|$

$|A|\ne 0,|B|\ne 0$

$|A|\cdot |B|\ne 0$ i.e.

$|A\cdot B|\ne 0$, so $A\cdot B\in GL\left(n,R\right)$

$|A|=\mathrm{det}A,|B|=\mathrm{det}B$

$\mathrm{det}\left(AB\right)=\mathrm{det}A\cdot \mathrm{det}B$

$⇒\mathrm{det}A\ne 0,\mathrm{det}B\ne 0$

$⇒\mathrm{det}\left(AB\right)\ne 0$

$⇒A\cdot B\in GL\left(n,R\right)$

· Associative property

$A\cdot \left(B\cdot C\right)=\left(A\cdot B\right)\cdot C,\forall A,B,C\in GL\left(n,R\right)$

· Existence of identity

${I}_{n×n}={\left[\begin{array}{ccccc}1& 0& 0& \cdots & 0\\ 0& 1& 0& \cdots & 0\\ 0& 0& 1& \cdots & 0\\ 0& 0& 0& \cdots & 1\end{array}\right]}_{n×n}$. Such that

$|I|=1\ne 0$

$A\cdot {I}_{n×n}=A={I}_{n×n}\cdot A$

· Existence of inverse

Let $A\in GL\left(n,R\right)$, $|A|\ne 0$.

${A}^{-1}=\frac{1}{|A|}\left(adjA\right)$

$|{A}^{-1}|=|\frac{1}{|A|}\left(adjA\right)|=\frac{|adjA|}{{|A|}^{n}}=\frac{{|A|}^{n-1}}{{|A|}^{n}}=\frac{1}{|A|}\ne 0$

$A\cdot {A}^{-1}=A\cdot \frac{1}{|A|}\left(adjA\right)=\frac{A\cdot \left(adjA\right)}{|A|}$

$⇒\frac{|A|\cdot I}{|A|}=I$

· Commutativity

$\left(GL\left(n,R\right),\cdot \right)$ is a group but not an abelian group, because in general $AB\ne BA$.
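The group properties listed above for GL(n,R) can also be confirmed numerically. A minimal NumPy sketch for n = 2 follows; the matrices A and B are arbitrary invertible examples, not from the paper.

```python
# Numerical check of the GL(2, R) group properties listed above;
# A and B are arbitrary invertible example matrices.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 7.0]])   # det = 1, so A is in GL(2, R)
B = np.array([[0.0, -1.0], [1.0, 0.0]])  # det = 1, so B is in GL(2, R)
I = np.eye(2)

# Closure: det(AB) = det A * det B != 0
assert np.linalg.det(A @ B) != 0
# Identity and inverse
assert np.allclose(A @ I, A) and np.allclose(I @ A, A)
assert np.allclose(A @ np.linalg.inv(A), I)
# GL(2, R) is not abelian
assert not np.allclose(A @ B, B @ A)
```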

Let F denote either the field of real numbers R or the field of complex numbers C, and let V be a finite-dimensional vector space over F [3]. The set of invertible linear transformations from V to V will be denoted by GL(V). This set has a group structure under composition of transformations, with identity element the identity transformation $I\left(x\right)=x$ for all $x\in V$. The group GL(V) is the first of the classical groups. To study it in more detail, we recall some standard terminology related to linear transformations and their matrices. Let V and W be finite-dimensional vector spaces over F. Let $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$ and $\left\{{w}_{1},\cdots ,{w}_{m}\right\}$ be bases for V and W, respectively. If $T:V\to W$ is a linear map then

$T{v}_{j}=\underset{i=1}{\overset{m}{\sum }}{a}_{ij}{w}_{i}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}j=1,\cdots ,n$

with ${a}_{ij}\in F$. The numbers ${a}_{ij}$ are called the matrix coefficients or entries of T with respect to the two bases, and the $m×n$ array

$A=\left[\begin{array}{cccc}{a}_{11}& {a}_{12}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& \cdots & {a}_{2n}\\ ⋮& ⋮& \ddots & ⋮\\ {a}_{m1}& {a}_{m2}& \cdots & {a}_{mn}\end{array}\right]$

Let $S:W\to U$ be another linear transformation, with U an l-dimensional vector space with basis $\left\{{u}_{1},\cdots ,{u}_{l}\right\}$, and let B be the matrix of S with respect to the bases $\left\{{w}_{1},\cdots ,{w}_{m}\right\}$ and $\left\{{u}_{1},\cdots ,{u}_{l}\right\}$. Then the matrix of $S\circ T$ with respect to the bases $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$ and $\left\{{u}_{1},\cdots ,{u}_{l}\right\}$ is given by BA, with the product being the usual product of matrices.

We denote the space of all $n×n$ matrices over F by ${M}_{n}\left(F\right)$, and we denote the $n×n$ identity matrix by I (or ${I}_{n}$ if the size of the matrix needs to be indicated); its entries are ${\delta }_{ij}=1$ if $i=j$ and 0 otherwise. Let V be an n-dimensional vector space over F with basis $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$. If $T:V\to V$ is a linear map we write $\mu \left(T\right)$ for the matrix of T with respect to this basis. If $T,S\in GL\left(V\right)$ then the preceding observations imply that $\mu \left(S\circ T\right)=\mu \left(S\right)\mu \left(T\right)$. Furthermore, if $T\in GL\left(V\right)$ then $\mu \left(T\circ {T}^{-1}\right)=\mu \left({T}^{-1}\circ T\right)=\mu \left(Id\right)=I$. The matrix $A\in {M}_{n}\left(F\right)$ is said to be invertible if there is a matrix $B\in {M}_{n}\left(F\right)$ such that $AB=BA=I$. We note that a linear map $T:V\to V$ is in $GL\left(V\right)$ if and only if its matrix $\mu \left(T\right)$ is invertible. We also recall that a matrix $A\in {M}_{n}\left(F\right)$ is invertible if and only if its determinant is nonzero.

We will use the notation $GL\left(n,F\right)$ for the set of $n×n$ invertible matrices with coefficients in F. Under matrix multiplication $GL\left(n,F\right)$ is a group with the identity matrix as identity element. We note that if V is an n-dimensional vector space over F with basis $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$, then the map $\mu :GL\left(V\right)\to GL\left(n,F\right)$ corresponding to this basis is a group isomorphism. The group $GL\left(n,F\right)$ is called the general linear group of rank n.

If $\left\{{w}_{1},\cdots ,{w}_{n}\right\}$ is another basis of V, then there is a matrix $g\in GL\left(n,F\right)$ such that

${w}_{j}=\underset{i=1}{\overset{n}{\sum }}{g}_{ij}{v}_{i}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{v}_{j}=\underset{i=1}{\overset{n}{\sum }}{h}_{ij}{w}_{i}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}j=1,\cdots ,n$

with $\left[{h}_{ij}\right]$ the inverse matrix to $\left[{g}_{ij}\right]$. Suppose that T is a linear transformation from V to V, that $A=\left[{a}_{ij}\right]$ is the matrix of T with respect to the basis $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$, and that $B=\left[{b}_{ij}\right]$ is the matrix of T with respect to the basis $\left\{{w}_{1},\cdots ,{w}_{n}\right\}$. Then

$T{w}_{j}=T\left(\underset{i}{\sum }{g}_{ij}{v}_{i}\right)=\underset{i}{\sum }{g}_{ij}T{v}_{i}=\underset{i}{\sum }{g}_{ij}\left(\underset{k}{\sum }{a}_{ki}{v}_{k}\right)=\underset{l}{\sum }\left(\underset{k}{\sum }\underset{i}{\sum }{h}_{lk}{a}_{ki}{g}_{ij}\right){w}_{l}$

for $j=1,\cdots ,n$. Thus $B={g}^{-1}Ag$; that is, B is similar to the matrix A.
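The computation above says that the matrices A and $B={g}^{-1}Ag$ of the same transformation T in two bases are similar, so they share trace, determinant, and eigenvalues. A NumPy sketch illustrates this (the matrices A and g below are arbitrary examples of mine).

```python
# Similar matrices B = g^{-1} A g represent the same linear map in two
# bases, so they share trace, determinant, and eigenvalues (A and g are
# arbitrary examples; g is invertible since det g = 1).
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])  # matrix of T in the first basis
g = np.array([[1.0, 1.0], [1.0, 2.0]])  # change-of-basis matrix
B = np.linalg.inv(g) @ A @ g            # matrix of T in the new basis

assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.allclose(np.sort(np.linalg.eigvals(B)),
                   np.sort(np.linalg.eigvals(A)))
```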

4. Special Linear Group SL(n,R)

$SL\left(n,R\right)=\left\{A={\left[{a}_{ij}\right]}_{n×n}\in GL\left(n,R\right)|\mathrm{det}A=1,\left(|A|=1\right)\right\}$

· Closure property

Let $A,B\in SL\left(n,R\right)$ then $|A|=1,|B|=1$

$|A\cdot B|=|A|\cdot |B|$

$|A|=1,|B|=1$

$|A|\cdot |B|=1\cdot 1=1$ i.e.

$|A\cdot B|=1$

$A\cdot B\in SL\left(n,R\right)$

$|A|=\mathrm{det}A,|B|=\mathrm{det}B$

$\mathrm{det}\left(AB\right)=\mathrm{det}A\cdot \mathrm{det}B$

$⇒\mathrm{det}A=1,\mathrm{det}B=1$

$⇒\mathrm{det}\left(AB\right)=1\cdot 1=1$

$⇒A\cdot B\in SL\left(n,R\right)$

· Associative property

$A\cdot \left(B\cdot C\right)=\left(A\cdot B\right)\cdot C,\forall A,B,C\in SL\left(n,R\right)$

· Existence of identity

${I}_{n×n}={\left[{a}_{ij}\right]}_{n×n}=\left\{\begin{array}{l}1,i=j\\ 0,i\ne j\end{array}$

Such that

$|I|=1$

$A\cdot {I}_{n×n}=A={I}_{n×n}\cdot A$

· Existence of inverse

Let $A\in SL\left(n,R\right)$,

$|A|=1$.

${A}^{-1}\text{\hspace{0.17em}}\text{exists and}$

$|{A}^{-1}|={|A|}^{-1}=\frac{1}{|A|}=\frac{1}{1}=1$

${A}^{-1}\in SL\left(n,R\right)$

· Commutativity

$\left(SL\left(n,R\right),\cdot \right)$ is a group but not an abelian group, because in general $AB\ne BA$.
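As with GL(n,R), the closure and inverse properties of SL(n,R) can be checked numerically; the matrices in the sketch below are arbitrary determinant-one examples of mine.

```python
# Numerical check of the SL(2, R) properties above; A and B are arbitrary
# matrices with determinant 1.
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 2.0]])  # det = 4 - 3 = 1
B = np.array([[1.0, 5.0], [0.0, 1.0]])  # det = 1

# Closure: det(AB) = det A * det B = 1
assert np.isclose(np.linalg.det(A @ B), 1.0)
# The inverse stays in SL(2, R): det(A^{-1}) = 1 / det A = 1
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0)
# Not abelian
assert not np.allclose(A @ B, B @ A)
```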

The special linear group $SL\left(n,R\right)$ is the set of all elements A of ${M}_{n}\left(R\right)$ such that $\mathrm{det}\left(A\right)=1$ [3]. Since $\mathrm{det}\left(AB\right)=\mathrm{det}\left(A\right)\mathrm{det}\left(B\right)$ and $\mathrm{det}\left(I\right)=1$, we see that the special linear group is a subgroup of $GL\left(n,R\right)$.

We note that if V is an n-dimensional vector space over F with basis $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$, and if $\mu :GL\left(V\right)\to GL\left(n,R\right)$ is the map previously defined, then the group

${\mu }^{-1}\left(SL\left(n,R\right)\right)=\left\{T\in GL\left(V\right):\mathrm{det}\left(\mu \left(T\right)\right)=1\right\}$

is independent of the choice of basis, by the change-of-basis formula. We denote this group by SL(V).

5. Orthogonal and Special Orthogonal Group

$O\left(n,R\right)=\left\{A={\left[{a}_{ij}\right]}_{n×n}\in GL\left(n,R\right)|A{A}^{\text{T}}={I}_{n}\right\}$

Definition 4.1. The matrix obtained by interchanging the rows and columns of a matrix A is called the transpose of A and is denoted by ${A}^{\text{T}}$. Accordingly, if $A=\left[{a}_{ij}\right]$ is an $m×n$ matrix, then ${A}^{\text{T}}=\left[{a}_{ji}\right]$ is an $n×m$ matrix.

$A=\left({a}_{ij}\right)\to {A}^{\text{T}}=\left({a}_{ji}\right)$

Definition 4.2.

1) A square matrix with ${A}^{\text{T}}=A$ is called a symmetric matrix. If $A=\left[{a}_{ij}\right]$ is a symmetric matrix then ${a}_{ij}={a}_{ji}$ for each $i,j$.

2) A square matrix with ${A}^{\text{T}}=-A$ is called a skew-symmetric (antisymmetric) matrix. If $A=\left[{a}_{ij}\right]$ is a skew-symmetric matrix then ${a}_{ij}=-{a}_{ji}$ for each $i,j$. Thus, in a skew-symmetric matrix, the main diagonal elements are always zero.

Theorem 4.3. Let A, B be two matrices of the same order and r a scalar. Then:

1) ${\left(A+B\right)}^{\text{T}}={A}^{\text{T}}+{B}^{\text{T}}$

2) ${\left(rA\right)}^{\text{T}}=r{A}^{\text{T}}$

3) ${\left({A}^{\text{T}}\right)}^{\text{T}}=A$

4) ${\left(AB\right)}^{\text{T}}={B}^{\text{T}}{A}^{\text{T}}$

Theorem 4.4. A square matrix A is orthogonal if and only if its column vectors form an orthonormal set.

Proof: Let A be an $n×n$ orthogonal matrix and let ${a}_{1},{a}_{2},\cdots ,{a}_{n}$ be the column vectors of A. Then the entries of ${A}^{\text{T}}A$ are the inner products of these columns:

${\left({A}^{\text{T}}A\right)}_{ij}={a}_{i}^{\text{T}}{a}_{j}={a}_{i}\cdot {a}_{j}$

Therefore,

${A}^{\text{T}}A={I}_{n}$

$A=\left[\begin{array}{cccc}{a}_{11}& {a}_{12}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}& \cdots & {a}_{2n}\\ ⋮& ⋮& \ddots & ⋮\\ {a}_{n1}& {a}_{n2}& \cdots & {a}_{nn}\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{A}^{\text{T}}=\left[\begin{array}{cccc}{a}_{11}& {a}_{21}& \cdots & {a}_{n1}\\ {a}_{12}& {a}_{22}& \cdots & {a}_{n2}\\ ⋮& ⋮& \ddots & ⋮\\ {a}_{1n}& {a}_{2n}& \cdots & {a}_{nn}\end{array}\right]$

so the entry in row i and column j of ${A}^{\text{T}}A$ is

${\left({A}^{\text{T}}A\right)}_{ij}={a}_{1i}{a}_{1j}+{a}_{2i}{a}_{2j}+\cdots +{a}_{ni}{a}_{nj}={a}_{i}\cdot {a}_{j}$

if and only if

${a}_{i}{a}_{j}=\left\{\begin{array}{l}1,i=j\\ 0,i\ne j\end{array}$

${A}^{\text{T}}A=\left[\begin{array}{cccc}1+0+\cdots +0& 0+0+\cdots +0& \cdots & 0+0+\cdots +0\\ 0+0+\cdots +0& 0+1+\cdots +0& \cdots & 0+0+\cdots +0\\ ⋮& ⋮& \ddots & ⋮\\ 0+0+\cdots +0& 0+0+\cdots +0& \cdots & 0+0+\cdots +1\end{array}\right]$

${A}^{\text{T}}A=\left[\begin{array}{cccc}1& 0& \cdots & 0\\ 0& 1& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 1\end{array}\right]={I}_{n}$

Example 4.5. The matrix

$A={\left[\begin{array}{ccc}3/\sqrt{11}& -1/\sqrt{6}& -1/\sqrt{66}\\ 1/\sqrt{11}& 2/\sqrt{6}& -4/\sqrt{66}\\ 1/\sqrt{11}& 1/\sqrt{6}& 7/\sqrt{66}\end{array}\right]}_{3×3}$ is orthogonal matrix. Note that

${A}^{\text{T}}A={I}_{n}$

$\begin{array}{l}\left[\begin{array}{ccc}3/\sqrt{11}& 1/\sqrt{11}& 1/\sqrt{11}\\ -1/\sqrt{6}& 2/\sqrt{6}& 1/\sqrt{6}\\ -1/\sqrt{66}& -4/\sqrt{66}& 7/\sqrt{66}\end{array}\right]\left[\begin{array}{ccc}3/\sqrt{11}& -1/\sqrt{6}& -1/\sqrt{66}\\ 1/\sqrt{11}& 2/\sqrt{6}& -4/\sqrt{66}\\ 1/\sqrt{11}& 1/\sqrt{6}& 7/\sqrt{66}\end{array}\right]\\ =\left[\begin{array}{ccc}\left(9+1+1\right)/11& \left(-3+2+1\right)/\sqrt{66}& \left(-3-4+7\right)/\sqrt{726}\\ \left(-3+2+1\right)/\sqrt{66}& \left(1+4+1\right)/6& \left(1-8+7\right)/\sqrt{396}\\ \left(-3-4+7\right)/\sqrt{726}& \left(1-8+7\right)/\sqrt{396}& \left(1+16+49\right)/66\end{array}\right]\\ =\left[\begin{array}{ccc}11/11& 0/\sqrt{66}& 0/\sqrt{726}\\ 0/\sqrt{66}& 6/6& 0/\sqrt{396}\\ 0/\sqrt{726}& 0/\sqrt{396}& 66/66\end{array}\right]=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]={I}_{3×3}\end{array}$
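The verification above can be reproduced numerically; the NumPy sketch below rebuilds the same 3×3 matrix and checks ${A}^{\text{T}}A={I}_{3}$ and the orthonormality of the columns.

```python
# Numerical verification that the 3x3 matrix from the example above is
# orthogonal: its columns are orthonormal and A^T A = I_3.
import numpy as np

s11, s6, s66 = np.sqrt(11), np.sqrt(6), np.sqrt(66)
A = np.array([[3/s11, -1/s6, -1/s66],
              [1/s11,  2/s6, -4/s66],
              [1/s11,  1/s6,  7/s66]])

assert np.allclose(A.T @ A, np.eye(3))              # A^T A = I_3
assert np.allclose(np.linalg.norm(A, axis=0), 1.0)  # unit columns (Theorem 4.4)
assert np.isclose(np.linalg.det(A), 1.0)            # here det A = +1
```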

Consider the determinant function restricted to $O\left(n\right)$, $\mathrm{det}:O\left(n\right)\to {R}^{*}$.

For $A\in O\left(n\right)$,

${\left(\mathrm{det}A\right)}^{2}=\mathrm{det}{A}^{\text{T}}\mathrm{det}A=\mathrm{det}\left({A}^{\text{T}}A\right)=\mathrm{det}{I}_{n}=1,$

which implies that $\mathrm{det}A=±1$. Thus, we have

$O\left(n\right)=O{\left(n\right)}^{+}\cup O{\left(n\right)}^{-},$

where

$O{\left(n\right)}^{+}=\left\{A\in O\left(n\right):\mathrm{det}A=1\right\},O{\left(n\right)}^{-}=\left\{A\in O\left(n\right):\mathrm{det}A=-1\right\}.$
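The decomposition of O(n) by determinant sign can be illustrated with a rotation (determinant 1) and a reflection (determinant −1) in O(2); the angle in the sketch below is an arbitrary choice of mine.

```python
# A rotation (det = +1) and a reflection (det = -1) in O(2), illustrating
# the decomposition of O(n) by determinant sign; theta is arbitrary.
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
rotation = np.array([[c, -s], [s, c]])    # in O(2)^+, i.e. SO(2)
reflection = np.array([[c, s], [s, -c]])  # in O(2)^-

for M in (rotation, reflection):
    assert np.allclose(M.T @ M, np.eye(2))  # both are orthogonal
assert np.isclose(np.linalg.det(rotation), 1.0)
assert np.isclose(np.linalg.det(reflection), -1.0)
```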

6. Special Orthogonal Group

$SO\left(n,R\right)=\left\{A={\left[{a}_{ij}\right]}_{n×n}\in GL\left(n,R\right)|A{A}^{\text{T}}={I}_{n},\mathrm{det}A=1\right\}$

SO(n) is a subgroup of the orthogonal group O(n), known as the special orthogonal group or the rotation group.

We now describe the orthogonal and special orthogonal groups O(n) and SO(n) [4]. An $n×n$ real matrix A is said to be orthogonal if the column vectors that make up A are orthonormal, that is, if

$\underset{i=1}{\overset{n}{\sum }}{A}_{ij}{A}_{ik}={\delta }_{jk}$

Equivalently, A is orthogonal if it preserves the inner product, namely, if $〈x,y〉=〈Ax,Ay〉$ for all vectors $x,y$ in ${R}^{n}$. (Angled brackets denote the usual inner product on ${R}^{n}$, $〈x,y〉={\sum }_{i}{x}_{i}{y}_{i}$ ) Still another equivalent definition is that A is orthogonal if ${A}^{tr}A=I$, i.e., if ${A}^{tr}={A}^{-1}$. ( ${A}^{tr}$ is the transpose of A, ${\left({A}^{tr}\right)}_{ij}={A}_{ji}$.).

Since $\mathrm{det}{A}^{tr}=\mathrm{det}A$, we see that if A is orthogonal, then $\mathrm{det}\left({A}^{tr}A\right)={\left(\mathrm{det}A\right)}^{2}=\mathrm{det}I=1$. Hence $\mathrm{det}A=±1$ for all orthogonal matrices A.

This formula tells us, in particular, that every orthogonal matrix must be invertible. But if A is an orthogonal matrix, then

$〈{A}^{-1}x,{A}^{-1}y〉=〈A\left({A}^{-1}x\right),A\left({A}^{-1}y\right)〉=〈x,y〉$

Thus, the inverse of an orthogonal matrix is orthogonal. Furthermore, the product of two orthogonal matrices is orthogonal, since if A and B both preserve inner products, then so does AB. Thus, the set of orthogonal matrices forms a group.

The set of all $n×n$ real orthogonal matrices is the orthogonal group O(n), and is a subgroup of $GL\left(n,C\right)$. The limit of a sequence of orthogonal matrices is orthogonal, because the relation ${A}^{tr}A=I$ is preserved under limits. Thus O(n) is a matrix Lie group.

The set of $n×n$ orthogonal matrices with determinant one is the special orthogonal group SO(n). Clearly this is a subgroup of O(n), and hence of $GL\left(n,C\right)$.

Moreover, both orthogonality and the property of having determinant one are preserved under limits, and so SO(n) is a matrix Lie group. Since elements of O(n) already have determinant ±1, SO(n) is “half” of O(n).

7. Unitary Groups

$U\left(n,C\right)=\left\{A={\left[{a}_{ij}\right]}_{n×n}\in GL\left(n,C\right)|A\stackrel{¯}{{A}^{\text{T}}}=I\right\}$

For $A=\left[{a}_{ij}\right]\in {M}_{n}\left(C\right)$,

${A}^{*}={\left(\stackrel{¯}{A}\right)}^{\text{T}}=\stackrel{¯}{{A}^{\text{T}}}$,

is the Hermitian conjugate of A, i.e., ${\left({A}^{*}\right)}_{ij}=\stackrel{¯}{{a}_{ji}}$.

Definition 3.5. [5] In a matrix A whose elements are complex numbers, the matrix obtained by replacing each element with its complex conjugate is called the conjugate of A and is denoted by $\stackrel{¯}{A}$.

Simple Example 5.1:

$A=\left[\begin{array}{ccc}6& 4+3i& 5i\\ 3& i& 1-i\\ 3-2i& -i& 1+i\end{array}\right]$ has the conjugate

$\stackrel{¯}{A}=\left[\begin{array}{ccc}6& 4-3i& -5i\\ 3& -i& 1+i\\ 3+2i& i& 1-i\end{array}\right]$

Theorem 5.2. Let A, B be two matrices and k any scalar.

1) $\stackrel{¯}{\stackrel{¯}{A}}=A$

2) $\left(\stackrel{¯}{kA}\right)=\stackrel{¯}{k}\text{ }\stackrel{¯}{A}$

3) $\stackrel{¯}{\left(A+B\right)}=\stackrel{¯}{A}+\stackrel{¯}{B}$

4) $\stackrel{¯}{\left(AB\right)}=\stackrel{¯}{A}\text{ }\stackrel{¯}{B}$

Example 5.3: Let

$A=\left[\begin{array}{ccc}1& -i& 1-i\\ i& -3& 2+i\\ 1+i& 5-i& 3\end{array}\right]$ and $B=\left[\begin{array}{ccc}1-i& i& 0\\ 3& 2-2i& -i\end{array}\right]$

Note that,

${A}^{*}={\left(\stackrel{¯}{A}\right)}^{\text{T}}$

${A}^{*}={\left(\left[\stackrel{¯}{\begin{array}{ccc}1& -i& 1-i\\ i& -3& 2+i\\ 1+i& 5-i& 3\end{array}}\right]\right)}^{\text{T}}={\left(\left[\begin{array}{ccc}1& i& 1+i\\ -i& -3& 2-i\\ 1-i& 5+i& 3\end{array}\right]\right)}^{\text{T}}=\left[\begin{array}{ccc}1& -i& 1-i\\ i& -3& 5+i\\ 1+i& 2-i& 3\end{array}\right]$

${B}^{*}={\left(\stackrel{¯}{\left[\begin{array}{ccc}1-i& i& 0\\ 3& 2-2i& -i\end{array}\right]}\right)}^{\text{T}}={\left(\left[\begin{array}{ccc}1+i& -i& 0\\ 3& 2+2i& i\end{array}\right]\right)}^{\text{T}}=\left[\begin{array}{cc}1+i& 3\\ -i& 2+2i\\ 0& i\end{array}\right]$
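The conjugate-transpose computations of Example 5.3 map directly to NumPy, where `.conj().T` performs exactly the two steps used above. In the sketch below, the square matrix C used to illustrate the product rule $(AC)^*=C^*A^*$ is my own example, since the B of Example 5.3 is rectangular.

```python
# The Hermitian conjugate A* = (conj A)^T of the matrix A from Example 5.3,
# computed with NumPy's .conj().T; the square matrix C for the product rule
# (AC)* = C* A* is an arbitrary example.
import numpy as np

A = np.array([[1, -1j, 1 - 1j],
              [1j, -3, 2 + 1j],
              [1 + 1j, 5 - 1j, 3]])

A_star = A.conj().T
assert np.array_equal(A_star, A.T.conj())   # conjugating and transposing commute
assert np.array_equal(A_star.conj().T, A)   # (A*)* = A

C = np.array([[1 - 1j, 1j, 0],
              [3, 2 - 2j, -1j],
              [0, 1, 1]])
assert np.allclose((A @ C).conj().T, C.conj().T @ A.conj().T)
```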

Definition 5.4. A square matrix with ${\left(\stackrel{¯}{A}\right)}^{\text{T}}=A$ is called a Hermitian matrix. If the square matrix $A=\left[{a}_{ij}\right]$ is Hermitian, then ${a}_{ij}=\stackrel{¯}{{a}_{ji}}$. The diagonal elements of a Hermitian matrix are real numbers.

Simple Example 5.5:

$A=\left[\begin{array}{ccc}1& i& 1+i\\ -i& -4& 2-i\\ 1-i& 2+i& 3\end{array}\right]$ is a Hermitian matrix.

Definition 5.6: [5] A square matrix with ${\left(\stackrel{¯}{A}\right)}^{\text{T}}=-A$ is called a skew-Hermitian matrix. If the square matrix $A=\left[{a}_{ij}\right]$ is skew-Hermitian, then ${a}_{ij}=-\stackrel{¯}{{a}_{ji}}$. The diagonal elements of a skew-Hermitian matrix are zero or purely imaginary.

Simple Example 5.7: The matrix

$A=\left[\begin{array}{ccc}i& 1-i& 5\\ -1-i& 2i& -i\\ -5& -i& 0\end{array}\right]$ is a skew-Hermitian matrix.

Example 5.8: Let $A=\left[\begin{array}{cc}\frac{1}{2}\left(1+i\right)& \frac{1}{2}\left(1+i\right)\\ \frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(-1+i\right)\end{array}\right]$

$\begin{array}{c}A{A}^{*}=\left[\begin{array}{cc}\frac{1}{2}\left(1+i\right)& \frac{1}{2}\left(1+i\right)\\ \frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(-1+i\right)\end{array}\right]{\left(\left[\stackrel{¯}{\begin{array}{cc}\frac{1}{2}\left(1+i\right)& \frac{1}{2}\left(1+i\right)\\ \frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(-1+i\right)\end{array}}\right]\right)}^{\text{T}}\\ =\left[\begin{array}{cc}\frac{1}{2}\left(1+i\right)& \frac{1}{2}\left(1+i\right)\\ \frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(-1+i\right)\end{array}\right]{\left(\left[\begin{array}{cc}\frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(1-i\right)\\ \frac{1}{2}\left(1+i\right)& \frac{1}{2}\left(-1-i\right)\end{array}\right]\right)}^{\text{T}}\end{array}$

$=\left[\begin{array}{cc}\frac{1}{2}\left(1+i\right)& \frac{1}{2}\left(1+i\right)\\ \frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(-1+i\right)\end{array}\right]\left[\begin{array}{cc}\frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(1+i\right)\\ \frac{1}{2}\left(1-i\right)& \frac{1}{2}\left(-1-i\right)\end{array}\right]=\left[\begin{array}{cc}\frac{1}{2}+\frac{1}{2}& \frac{1}{2}-\frac{1}{2}\\ \frac{1}{2}-\frac{1}{2}& \frac{1}{2}+\frac{1}{2}\end{array}\right]=\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]$
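The computation of Example 5.8 can be confirmed numerically; the sketch below checks that $A{A}^{*}={A}^{*}A=I$ and that $|\mathrm{det}A|=1$.

```python
# Numerical confirmation that the matrix A of Example 5.8 is unitary:
# A A* = A* A = I, and consequently |det A| = 1.
import numpy as np

A = 0.5 * np.array([[1 + 1j, 1 + 1j],
                    [1 - 1j, -1 + 1j]])

A_star = A.conj().T
assert np.allclose(A @ A_star, np.eye(2))
assert np.allclose(A_star @ A, np.eye(2))
assert np.isclose(abs(np.linalg.det(A)), 1.0)
```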

Theorem 5.9: A matrix $A={\left[{a}_{ij}\right]}_{n×n}\in {M}_{n}\left(C\right)$ satisfies

$A\stackrel{¯}{{A}^{\text{T}}}=I$

if and only if its row vectors ${a}_{1},\cdots ,{a}_{n}$ are orthonormal with respect to the Hermitian inner product, that is,

${a}_{i}\cdot \stackrel{¯}{{a}_{j}}=\underset{k=1}{\overset{n}{\sum }}{a}_{ik}\stackrel{¯}{{a}_{jk}}=\left\{\begin{array}{l}1,i=j\\ 0,i\ne j\end{array}$

For example, for $A=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}1& i\\ i& 1\end{array}\right]$ we have

$A\stackrel{¯}{{A}^{\text{T}}}=\frac{1}{2}\left[\begin{array}{cc}1& i\\ i& 1\end{array}\right]\left[\begin{array}{cc}1& -i\\ -i& 1\end{array}\right]=\frac{1}{2}\left[\begin{array}{cc}2& 0\\ 0& 2\end{array}\right]=I$

An $n×n$ complex matrix A is said to be unitary if the column vectors of A are orthonormal [4], that is, if

$\underset{i=1}{\overset{n}{\sum }}\stackrel{¯}{{A}_{ij}}{A}_{ik}={\delta }_{jk}$

Equivalently, A is unitary if it preserves the inner product, namely, if $〈x,y〉=〈Ax,Ay〉$ for all vectors x, y in ${C}^{n}$. (Angled brackets here denote the inner product on ${C}^{n}$, $〈x,y〉={\sum }_{i}\stackrel{¯}{{x}_{i}}{y}_{i}$. We will adopt the convention of putting the complex conjugate on the left.) Still another equivalent definition is that A is unitary if ${A}^{*}A=I$, i.e., if ${A}^{*}={A}^{-1}$. (Here ${A}^{*}$ is the adjoint of A, ${\left({A}^{*}\right)}_{ij}=\stackrel{¯}{{A}_{ji}}$.)

Since $\mathrm{det}{A}^{*}=\stackrel{¯}{\mathrm{det}A}$, we see that if A is unitary, then $\mathrm{det}\left({A}^{*}A\right)={|\mathrm{det}A|}^{2}=\mathrm{det}I=1$. Hence $|\mathrm{det}A|=1$, for all unitary matrices A. This in particular shows that every unitary matrix is invertible. The same argument as for the orthogonal group shows that the set of unitary matrices forms a group.

The set of all $n×n$ unitary matrices is the unitary group U(n), and is a subgroup of $GL\left(n,C\right)$. The limit of unitary matrices is unitary, so U(n) is a matrix Lie group. The set of unitary matrices with determinant one is the special unitary group SU(n). It is easy to check that SU(n) is a matrix Lie group. Note that a unitary matrix can have determinant ${\text{e}}^{i\theta }$ for any θ, and so SU(n) is a smaller subset of U(n) than SO(n) is of O(n). (Specifically, SO(n) has the same dimension as O(n), whereas SU(n) has dimension one less than that of U(n)).
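These equivalent characterizations are easy to check numerically. The sketch below (assuming NumPy; not part of the original text) applies each of them to the 2 × 2 matrix from the worked example earlier in this section.

```python
import numpy as np

# The 2x2 matrix from the worked example: A = (1/2) [[1+i, 1+i], [1-i, -1+i]].
A = 0.5 * np.array([[1 + 1j, 1 + 1j],
                    [1 - 1j, -1 + 1j]])

# Condition 1: A* A = I, i.e. the adjoint is the inverse.
assert np.allclose(A.conj().T @ A, np.eye(2))

# Condition 2: the columns are orthonormal, sum_i conj(A_ij) A_ik = delta_jk.
gram = np.einsum('ij,ik->jk', A.conj(), A)
assert np.allclose(gram, np.eye(2))

# Consequence: |det A| = 1 for every unitary matrix.
assert np.isclose(abs(np.linalg.det(A)), 1.0)
```

Here `np.einsum` just spells out the index formula for column orthonormality; `A.conj().T @ A` computes the same Gram matrix.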

8. Symplectic Groups

Consider an $n×n$ real skew-symmetric matrix A [6], i.e., one for which ${A}^{\text{T}}=-A$. For such a matrix,

$\mathrm{det}{A}^{\text{T}}=\mathrm{det}\left(-A\right)={\left(-1\right)}^{n}\mathrm{det}A$

and, since $\mathrm{det}{A}^{\text{T}}=\mathrm{det}A$, this gives

$\mathrm{det}A={\left(-1\right)}^{n}\mathrm{det}A$

The most interesting case occurs when $\mathrm{det}A\ne 0$ ; then n must be even, and we write n = 2m. The standard example is built up from the 2 × 2 block

$J=\left[\begin{array}{cc}0& 1\\ -1& 0\end{array}\right]$

If $m\ge 1$ we have the non-degenerate skew symmetric matrix

${J}_{2m}=\left[\begin{array}{cccc}J& {O}_{2}& \cdots & {O}_{2}\\ {O}_{2}& J& \cdots & {O}_{2}\\ ⋮& ⋮& \ddots & ⋮\\ {O}_{2}& {O}_{2}& \cdots & J\end{array}\right]$

The matrix group

$S{P}_{2m}\left(R\right)=\left\{A\in G{L}_{2m}\left(R\right):{A}^{\text{T}}{J}_{2m}A={J}_{2m}\right\}\le G{L}_{2m}\left(R\right)$

is called the $2m×2m$ (real) symplectic group.

We will now look at the coordinate-free version of these groups. A bilinear form B is called skew-symmetric if $B\left(v,w\right)=-B\left(w,v\right)$. If B is skew-symmetric and non-degenerate, then $\mathrm{dim}V$ must be even, since the matrix of B relative to any basis for V is skew-symmetric and has nonzero determinant.
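The defining condition ${A}^{\text{T}}{J}_{2m}A={J}_{2m}$ is easy to test numerically. Below is a sketch (assuming NumPy; the helper function names are my own, not from the paper) that constructs the block-diagonal matrix ${J}_{2m}$ and checks membership in $S{P}_{2m}\left(R\right)$.

```python
import numpy as np

def J2m(m):
    """Block-diagonal matrix with m copies of J = [[0, 1], [-1, 0]]."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(m), J)

def is_symplectic(A, m, tol=1e-10):
    """Check the defining condition A^T J_{2m} A = J_{2m}."""
    J = J2m(m)
    return np.allclose(A.T @ J @ A, J, atol=tol)

m = 2
assert is_symplectic(np.eye(2 * m), m)   # the identity is symplectic
assert is_symplectic(J2m(m), m)          # J_{2m} itself is symplectic
assert not is_symplectic(2 * np.eye(2 * m), m)  # scaling by 2 breaks it
```

The last check illustrates that, unlike the orthogonal condition, the symplectic condition is not preserved under nonunit scalar multiples.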

9. Cyclic Groups

The next class of groups we will consider consists of the cyclic groups. Before defining these groups, we need to explain how exponents work. If G is a group and $x\in G$, then if m is a positive integer, ${x}^{m}=x\cdots x$ (m factors). We define ${x}^{-m}$ to be ${\left({x}^{-1}\right)}^{m}$. Also, ${x}^{0}=1$. Then the usual laws of exponentiation hold for all integers m, n:

1) ${x}^{m}{x}^{n}={x}^{m+n}$

2) ${\left({x}^{m}\right)}^{n}={x}^{mn}$

Definition 7.1. A group G is said to be cyclic if there exists an element $x\in G$ such that for every $y\in G$, there is an integer m such that $y={x}^{m}$. Such an element x is said to generate G.
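As a concrete instance of Definition 7.1, the m-th roots of unity in C form a cyclic group of order m under multiplication, generated by $x={\text{e}}^{2\pi i/m}$. A short numerical sketch (illustrative, not from the paper):

```python
import cmath

m = 6
x = cmath.exp(2j * cmath.pi / m)  # generator: a primitive m-th root of unity

# Every element of the group is a power of the generator x.
group = [x ** k for k in range(m)]

# x^m = 1, and the powers x^0, ..., x^{m-1} are pairwise distinct.
assert abs(x ** m - 1) < 1e-12
rounded = {round(z.real, 9) + 1j * round(z.imag, 9) for z in group}
assert len(rounded) == m

# The exponent laws from the text: x^a x^b = x^{a+b} and (x^a)^b = x^{ab}.
assert abs(x ** 2 * x ** 3 - x ** 5) < 1e-12
assert abs((x ** 2) ** 3 - x ** 6) < 1e-12
```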

10. Dihedral Groups: Generators and Relations

In the next example, we give an illustration of a group G that is described by giving a set of its generators and the relations the generators satisfy. The group we will study is called the dihedral group. We will see in due course that the dihedral groups are the symmetry groups of the regular polygons in the plane. As we will see in this example, defining a group by giving generators and relations does not necessarily reveal much information about the group.

Example 8.1. [2] Let us verify that D(2), the group generated by two elements a and b subject to the relations ${a}^{2}={b}^{2}=1$ and $ab=ba$, is indeed a group. Since the multiplication of words is associative, it follows from the relations ${a}^{2}={b}^{2}=1$ and $ab=ba$ that every word can be collapsed to one of $1,a,b,ab,ba$. But $ab=ba$, so $D\left(2\right)=\left\{1,a,b,ab\right\}$. To see that D(2) is closed under multiplication, we observe that

$\begin{array}{l}a\left(ab\right)={a}^{2}b=b,b\left(ab\right)=\left(ba\right)b=a{b}^{2}=a,\\ \left(ab\right)a=\left(ba\right)a=b{a}^{2}=b\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(ab\right)\left(ab\right)=\left(ba\right)\left(ab\right)=b{a}^{2}b={b}^{2}=1.\end{array}$

Therefore, D(2) is closed under multiplication, so it follows from our other remarks that D(2) is a group. Note that the order of D(2) is 4.
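The closure computation above can be mirrored in a few lines of code. The sketch below (an illustration; the helper `reduce_d2` is my own, not from the text) reduces words over {a, b} to the normal forms 1, a, b, ab using ${a}^{2}={b}^{2}=1$ and $ab=ba$.

```python
def reduce_d2(word):
    """Normal form in D(2): since ab = ba, only the parities of the letter
    counts matter, and a^2 = b^2 = 1 cancels pairs."""
    na = word.count('a') % 2
    nb = word.count('b') % 2
    return 'a' * na + 'b' * nb or '1'

elements = ['1', 'a', 'b', 'ab']
for u in elements:
    for v in elements:
        w = (u + v).replace('1', '')   # concatenate, dropping identity symbols
        assert reduce_d2(w) in elements  # closure: the product is in D(2)
```

This reproduces the multiplication table worked out by hand above, e.g. `reduce_d2('abab')` gives `'1'`, matching $\left(ab\right)\left(ab\right)=1$.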

Example 8.2. [2] (Dihedral Groups) The dihedral groups are groups that are defined by specifying two generators a and b and also specifying the relations that the generators satisfy. When we define a group by generators and relations, we consider all words in the generators, in this case a and b: these are all the strings or products ${x}_{1}{x}_{2}\cdots {x}_{n}$, where each xi is either a or b, and n is an arbitrary positive integer. For example, abbaabbaabbaabba is a word with n = 16. Two words are multiplied together by placing them side by side.

Thus,

$\left({x}_{1}{x}_{2}\cdots {x}_{n}\right)\left({y}_{1}{y}_{2}\cdots {y}_{p}\right)={x}_{1}{x}_{2}\cdots {x}_{n}{y}_{1}{y}_{2}\cdots {y}_{p}.$

This produces an associative binary operation on the set of words. The next step is to impose some relations that a and b satisfy. Suppose m > 1. The dihedral group D(m) is defined to be the set of all words in a and b with the above multiplication that we assume is subject to the following relations:

${a}^{m}={b}^{2}=1,ab=b{a}^{m-1}.$

It is understood that the cyclic groups $〈a〉$ and $〈b〉$ have orders m and 2, respectively. It follows that ${a}^{-1}={a}^{m-1}$ and ${b}^{-1}=b$. For example, if m = 3, then ${a}^{3}={b}^{2}=1$, so

$aaabababbb=\left(aaa\right)\left(bab\right)\left(ab\right)\left(bb\right)=\left({a}^{2}\right)\left(ab\right)={a}^{3}b=b.$

The reader can show that $D\left(3\right)=\left\{1,a,{a}^{2},b,ab,ba\right\}$. For example, ${a}^{2}b=a\left(ab\right)=a\left(b{a}^{2}\right)=\left(ab\right){a}^{2}=b{a}^{4}=ba$. Hence, D(3) has order 6. We will give a more convincing argument in due course.
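Under the relations ${a}^{m}={b}^{2}=1$ and $ab=b{a}^{m-1}$, every element of D(m) collapses to a normal form ${a}^{i}{b}^{j}$ with $0\le i<m$ and $j\in \left\{0,1\right\}$. The sketch below (illustrative; the encoding of elements as pairs (i, j) is my own) multiplies elements in this normal form, using $b{a}^{k}={a}^{-k}b$, and re-checks the D(3) computations from the text.

```python
def mul(x, y, m):
    """Multiply a^i b^j by a^k b^l in D(m), returning normal form (index, flip)."""
    (i, j), (k, l) = x, y
    if j == 0:                          # a^i a^k b^l
        return ((i + k) % m, l)
    return ((i - k) % m, (j + l) % 2)   # a^i (b a^k) b^l = a^{i-k} b^{1+l}

m = 3
e, a, b = (0, 0), (1, 0), (0, 1)

# The defining relations: a^3 = b^2 = 1 and ab = ba^2.
assert mul(mul(a, a, m), a, m) == e
assert mul(b, b, m) == e
assert mul(a, b, m) == mul(b, (2, 0), m)

# The computation from the text: a^2 b = ba.
a2b = mul(mul(a, a, m), b, m)
assert a2b == mul(b, a, m)

# D(3) = {1, a, a^2, b, ab, ba} has exactly 6 elements.
assert len({(i, j) for i in range(3) for j in range(2)}) == 6
```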

11. Quaternionic Groups

We recall some basic properties of the quaternions [3]. Consider the four-dimensional real vector space H consisting of the 2 × 2 complex matrices

$w=\left[\begin{array}{cc}x& -\stackrel{¯}{y}\\ y& \stackrel{¯}{x}\end{array}\right]$

with $x,y\in C$.

One checks directly that H is closed under multiplication in ${M}_{2}\left(C\right)$. If $w\in H$ then ${w}^{*}\in H$ and

${w}^{*}w=w{w}^{*}=\left({|x|}^{2}+{|y|}^{2}\right)I$

(where ${w}^{*}$ denotes the conjugate-transpose matrix). Hence every nonzero element of H is invertible. Thus H is a division algebra (or skew field) over R. This division algebra is a realization of the quaternions. The more usual way of introducing the quaternions is to consider the vector space H over R with basis $f=\left\{1,i,j,k\right\}$. Define a multiplication so that 1 is the identity and

${i}^{2}={j}^{2}={k}^{2}=-1;$

$ij=-ji=k,ki=-ik=j,jk=-kj=i;$

then extend the multiplication to H by linearity relative to real scalars. To obtain an isomorphism between this version of H and the 2 × 2 complex matrix version, take

$1=I,i=\left[\begin{array}{cc}i& 0\\ 0& -i\end{array}\right],j=\left[\begin{array}{cc}0& 1\\ -1& 0\end{array}\right],k=\left[\begin{array}{cc}0& i\\ i& 0\end{array}\right]$

where i is a fixed choice of $\sqrt{-1}$. The conjugation $w\to {w}^{*}$ satisfies ${\left(uv\right)}^{*}={v}^{*}{u}^{*}$. In terms of real components, ${\left(a+bi+cj+dk\right)}^{*}=a-bi-cj-dk$ for $a,b,c,d\in R$. It is useful to write quaternions in complex form as $x+jy$ with $x,y\in C$ ; however, note that the conjugation is then given as

${\left(x+jy\right)}^{*}=\stackrel{¯}{x}+\stackrel{¯}{y}{j}^{*}=\stackrel{¯}{x}-jy$
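These identities can be checked numerically. The following sketch (an illustration assuming NumPy, not part of the original text) builds quaternions in the 2 × 2 complex realization, verifies ${w}^{*}w=\left({|x|}^{2}+{|y|}^{2}\right)I$, and checks the relations among the matrices representing i, j, k given above.

```python
import numpy as np

def quat(x, y):
    """Element of H in the 2x2 complex realization from the text:
    [[x, -conj(y)], [y, conj(x)]] with x, y complex."""
    return np.array([[x, -np.conj(y)], [y, np.conj(x)]])

# w* w = w w* = (|x|^2 + |y|^2) I, so every nonzero quaternion is invertible.
x, y = 1 + 2j, 3 - 1j
w = quat(x, y)
norm2 = abs(x) ** 2 + abs(y) ** 2
assert np.allclose(w.conj().T @ w, norm2 * np.eye(2))
assert np.allclose(w @ w.conj().T, norm2 * np.eye(2))

# The matrices representing i, j, k satisfy i^2 = j^2 = k^2 = -1,
# ij = -ji = k, ki = j, jk = i.
i_m = np.array([[1j, 0], [0, -1j]])
j_m = np.array([[0, 1], [-1, 0]])
k_m = np.array([[0, 1j], [1j, 0]])
for q in (i_m, j_m, k_m):
    assert np.allclose(q @ q, -np.eye(2))
assert np.allclose(i_m @ j_m, k_m) and np.allclose(j_m @ i_m, -k_m)
assert np.allclose(k_m @ i_m, j_m) and np.allclose(j_m @ k_m, i_m)
```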

On the 4n-dimensional real vector space ${H}^{n}$ we define multiplication by $a\in H$ on the right:

$\left({u}_{1},\cdots ,{u}_{n}\right)\cdot a=\left({u}_{1}a,\cdots ,{u}_{n}a\right)$

We note that $u\cdot 1=u$ and $u\cdot \left(ab\right)=\left(u\cdot a\right)\cdot b$. We can therefore think of ${H}^{n}$ as a vector space over H. Viewing elements of ${H}^{n}$ as $n×1$ column vectors, we define Au for $u\in {H}^{n}$ and $A\in {M}_{n}\left(H\right)$ by matrix multiplication. Then $A\left(u\cdot a\right)=\left(Au\right)\cdot a$ for $a\in H$ ; hence A defines a quaternionic linear map. Here matrix multiplication is defined as usual, but one must be careful about the order of multiplication of the entries.

We can make ${H}^{n}$ into a 2n-dimensional vector space over C in many ways; for example, we can embed C into H as any of the subfields

$R1+Ri,R1+Rj,R1+Rk$

Using the first of these embeddings, we write $z=x+jy\in {H}^{n}$ with $x,y\in {C}^{n}$, and likewise $C=A+jB\in {M}_{n}\left(H\right)$ with $A,B\in {M}_{n}\left(C\right)$. The maps

$z\to \left[\begin{array}{c}x\\ y\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}C\to \left[\begin{array}{cc}A& \stackrel{¯}{-B}\\ B& \stackrel{¯}{A}\end{array}\right]$

identify ${H}^{n}$ with ${C}^{2n}$ and ${M}_{n}\left(H\right)$ with the real subalgebra of ${M}_{2n}\left(C\right)$ consisting of matrices T such that

$JT=\stackrel{¯}{T}J$

where

$J=\left[\begin{array}{cc}0& I\\ -I& 0\end{array}\right]$
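The identification above can be verified numerically: for any $A,B\in {M}_{n}\left(C\right)$, the image matrix $T=\left[\begin{array}{cc}A& -\stackrel{¯}{B}\\ B& \stackrel{¯}{A}\end{array}\right]$ satisfies $JT=\stackrel{¯}{T}J$. A sketch (assuming NumPy; not from the paper):

```python
import numpy as np

n = 2
rng = np.random.default_rng(0)
# Random complex n x n blocks A and B, so T represents some C = A + jB.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

T = np.block([[A, -np.conj(B)], [B, np.conj(A)]])
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# The defining condition of the image of M_n(H) inside M_{2n}(C).
assert np.allclose(J @ T, np.conj(T) @ J)
```

Both sides of the check equal $\left[\begin{array}{cc}B& \stackrel{¯}{A}\\ -A& \stackrel{¯}{B}\end{array}\right]$, which one can confirm by block multiplication.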

The Future Perspective of This Paper

The hope for this paper is that it will help some readers better understand the introductory theory of linear groups, and that it will encourage others, as it has the author, to pursue further work in science.

NOTES

1https://www.routledge.com/Linear-Groups-The-Accent-on-Infinite-Dimensionality/Dixon-Kurdachenko-Subbotin/p/book/9781138542808.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

[1] Carrell, J.B. (2017) Groups, Matrices, and Vector Spaces: A Group Theoretic Approach to Linear Algebra. Department of Mathematics, University of British Columbia, Vancouver, Canada. http://www.uop.edu.pk/ocontents/Groups,%20Matrices,%20and%20Vector%20Spaces%20A%20Group%20Theoretic%20Approach%20to%20Linear%20Algebra.pdf

[2] Grillet, P.A. (2007) Abstract Algebra. Department of Mathematics, Tulane University, USA. http://dobrochan.ru/src/pdf/1204/Grillet_P._A._-_Abstract_Algebra_(2007)(684).pdf

[3] Goodman, R. and Wallach, N.R. (2009) Symmetry, Representations, and Invariants. Department of Mathematics, Rutgers University, Piscataway. https://www.maths.ed.ac.uk/~v1ranick/papers/goodwallx.pdf

[4] Hall, B.C. (2000) An Elementary Introduction to Groups and Representations. Department of Mathematics, University of Notre Dame, Notre Dame. https://arxiv.org/abs/math-ph/0005032

[5] Agargun, A.G. and Ozdag, H. (2007) Linear Algebra and Calculus Problems. Yildiz Teknik University, Turkey. https://www.cimri.com/kaynak-kitaplari/en-ucuz-lineer-cebir-ve-cozumlu-problemleri-a-goksel-agargunhulya-ozdag-9781111138769-fiyatlari,528793

[6] Baker, A. (2002) Matrix Groups: An Introduction to Lie Group Theory. Department of Mathematics, University of Glasgow, Glasgow, UK. http://inis.jinr.ru/sl/vol1/UH/_Ready/Mathematics/Algebra/Baker%20A.%20Matrix%20groups,%20an%20introduction%20to%20Lie%20groups%20(Springer,%202002)(L)(173s).pdf