Root Systems Lie Algebras

Amor Hasić^{1}, Fatih Destović^{2}

^{1}Department of Computer Science, International University of Novi Pazar, Novi Pazar, Serbia.

^{2}Faculty of Educational Sciences, University of Sarajevo, Sarajevo, Bosnia and Herzegovina.

**DOI:** 10.4236/alamt.2023.131001

A root system is any collection of vectors satisfying the properties enjoyed by the roots of a semisimple Lie algebra. If g is semisimple, then its root system R can be described as a system of vectors in a Euclidean vector space that possesses some remarkable symmetries and completely determines the Lie algebra g. The purpose of this paper is to show how essential the root system is to the Lie algebra. In addition, the paper discusses the connection between root systems and Weyl chambers, and presents Dynkin diagrams, which are an integral part of the theory of root systems.


Hasić, A. and Destović, F. (2023) Root Systems Lie Algebras. *Advances in Linear Algebra & Matrix Theory*, **13**, 1-20. doi: 10.4236/alamt.2023.131001.

1. Introduction

A root system in mathematics is a configuration of vectors in Euclidean space that satisfies certain geometric properties. The concept is fundamental in the theory of Lie algebras and Lie groups, especially in the theory of classification and representation of semisimple Lie algebras. Since Lie groups (and some analogues such as algebraic groups) and Lie algebras became important in many parts of mathematics during the twentieth century, the apparently special nature of root systems belies the number of areas in which they are applied.

Definition 1.1. A root system
$\left(V,R\right)$ consists of a finite-dimensional real vector space *V* with an inner product (*i.e.* a Euclidean vector space) together with a finite set *R* of nonzero vectors in *V*, such that the following properties hold:

a) The vectors in *R* span *V*.

b) If *α* is in *R* and *c* is a real number, then
$c\alpha $ is in *R* only if
$c=\pm 1$ ; a root system with this property is called a *reduced* root system.

c) For any two roots $\alpha ,\beta $ , the number

${\mu}_{\alpha \beta}=2\frac{\left(\alpha ,\beta \right)}{\left(\beta ,\beta \right)}$

is an integer.

The dimension of *V* is called the rank of the root system and the elements of *R* are called roots.

d) Let ${s}_{\alpha}:V\to V$ be defined by

${s}_{\alpha}\cdot \beta =\beta -2\frac{\left(\beta ,\alpha \right)}{\left(\alpha ,\alpha \right)}\alpha $

for
$\beta \in V$ . We require in addition that
${s}_{\alpha}$ maps *R* to itself for every
$\alpha \in R$ . A direct calculation shows that
${s}_{\alpha}$ preserves the inner product
$\left(\cdot ,\cdot \right)$ , *i.e.*,

$\left({s}_{\alpha}\left(\alpha \right),{s}_{\alpha}\left(\beta \right)\right)=\left(\alpha ,\beta \right)$

for $\alpha ,\beta \in V$ , that is, ${s}_{\alpha}$ is in the orthogonal group $O\left(V\right)$ . Evidently,

$\mathrm{det}\left({s}_{\alpha}\right)=\pm 1.$
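The reflection of Definition 1.1(d) can be checked numerically. The following is a minimal sketch (the helper names `inner` and `reflect` are ours, not the paper's) implementing $s_\alpha$ for vectors in $R^n$ and verifying that it preserves the inner product and sends α to −α.

```python
# Sketch of the reflection s_alpha from Definition 1.1(d); helper names are ours.
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(alpha, beta):
    """s_alpha(beta) = beta - 2 (beta, alpha)/(alpha, alpha) * alpha."""
    c = 2 * inner(beta, alpha) / inner(alpha, alpha)
    return tuple(b - c * a for a, b in zip(alpha, beta))

# s_alpha preserves the inner product and negates alpha itself:
alpha, beta = (1.0, 0.0), (0.3, 0.7)
sa, sb = reflect(alpha, alpha), reflect(alpha, beta)
assert abs(inner(sa, sb) - inner(alpha, beta)) < 1e-12
assert sa == (-1.0, 0.0)   # s_alpha(alpha) = -alpha
```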

Theorem 1.2. Suppose that *α* and *β* are linearly independent roots. Then

1. $c\left(\beta ,\alpha \right)c\left(\alpha ,\beta \right)=0,1,2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}3$ .

2. If $c\left(\alpha ,\beta \right)=0$ , then $c\left(\beta ,\alpha \right)=c\left(\alpha ,\beta \right)=0$ .

3. If $\left(\alpha ,\beta \right)<0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then $c\left(\alpha ,\beta \right)=-1$ and

$c\left(\beta ,\alpha \right)=-\left(\beta ,\beta \right)/\left(\alpha ,\alpha \right)=-1,-2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}-3.$

4. If $\left(\alpha ,\beta \right)>0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then $c\left(\alpha ,\beta \right)=1$ and

$c\left(\beta ,\alpha \right)=\left(\beta ,\beta \right)/\left(\alpha ,\alpha \right)=1,2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}3.$

5. If $\left(\alpha ,\beta \right)>0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then $-2\left(\alpha ,\beta \right)+\left(\beta ,\beta \right)=0$ . If $\left(\alpha ,\beta \right)<0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then $2\left(\alpha ,\beta \right)+\left(\beta ,\beta \right)=0$ .

Proof. 1. Since *α* and *β* are linearly independent, the Cauchy–Schwarz inequality is strict:

${\left(\alpha ,\beta \right)}^{2}<\left(\alpha ,\alpha \right)\left(\beta ,\beta \right)$ .

Hence

$c\left(\beta ,\alpha \right)c\left(\alpha ,\beta \right)=4\frac{{\left(\alpha ,\beta \right)}^{2}}{\left(\alpha ,\alpha \right)\left(\beta ,\beta \right)}<4$

Since $c\left(\alpha ,\beta \right)$ and $c\left(\beta ,\alpha \right)$ are integers, we must have

$c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)=0,1,2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}3.$

2. If $c\left(\alpha ,\beta \right)=0$ , then $\left(\alpha ,\beta \right)=0$ , and hence $c\left(\beta ,\alpha \right)=0$ as well.

3. If $\left(\alpha ,\beta \right)<0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then $\left(\beta ,\beta \right)\ge \left(\alpha ,\alpha \right)$ , so

$2\frac{\left(\alpha ,\beta \right)}{\left(\beta ,\beta \right)}\ge 2\frac{\left(\beta ,\alpha \right)}{\left(\alpha ,\alpha \right)}$

So $\left|c\left(\alpha ,\beta \right)\right|\le \left|c\left(\beta ,\alpha \right)\right|$ . Both factors are negative, so by Part (1), $c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)=1,2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}3$ . Since $c\left(\alpha ,\beta \right)$ is the factor of smaller absolute value, we must have $c\left(\alpha ,\beta \right)=-1$ , whence

$2\left(\alpha ,\beta \right)=-\left(\beta ,\beta \right)$ . Thus

$c\left(\beta ,\alpha \right)=2\frac{\left(\beta ,\alpha \right)}{\left(\alpha ,\alpha \right)}=-\frac{\left(\beta ,\beta \right)}{\left(\alpha ,\alpha \right)}=-1,-2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}-3.$

4. If $\left(\alpha ,\beta \right)>0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then

$2\frac{\left(\alpha ,\beta \right)}{\left(\beta ,\beta \right)}\le 2\frac{\left(\beta ,\alpha \right)}{\left(\alpha ,\alpha \right)}$

So $c\left(\alpha ,\beta \right)\le c\left(\beta ,\alpha \right)$ . By Part (1), $c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)=1,2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}3$ . Since $c\left(\alpha ,\beta \right)$ is the smaller positive factor, we must have $c\left(\alpha ,\beta \right)=1$ , whence $2\left(\alpha ,\beta \right)=\left(\beta ,\beta \right)$ . Thus

$c\left(\beta ,\alpha \right)=2\frac{\left(\beta ,\alpha \right)}{\left(\alpha ,\alpha \right)}=\frac{\left(\beta ,\beta \right)}{\left(\alpha ,\alpha \right)}.$

5. Suppose that $\left(\alpha ,\beta \right)>0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ . Then by Part (4), $c\left(\alpha ,\beta \right)=1$ , so $-2\left(\alpha ,\beta \right)+\left(\beta ,\beta \right)=0$ . On the other hand, if $\left(\alpha ,\beta \right)<0$ and $\Vert \alpha \Vert \le \Vert \beta \Vert $ , then by Part (3), $c\left(\alpha ,\beta \right)=-1$ , whence $2\left(\alpha ,\beta \right)+\left(\beta ,\beta \right)=0$ .
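Theorem 1.2 can be illustrated on a concrete pair of roots. The sketch below (the helper name `cartan_int` is ours, not the paper's) takes the short root (1, 0) and the long root (−1, 1) of the B2 system and checks parts 1 and 3.

```python
# Numerical check of Theorem 1.2, parts 1 and 3, on a pair of B2 roots.
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def cartan_int(a, b):
    """c(a, b) = 2 (a, b) / (b, b), an integer for roots of a root system."""
    return 2 * inner(a, b) / inner(b, b)

alpha, beta = (1, 0), (-1, 1)          # short and long root of B2
assert inner(alpha, beta) < 0 and inner(alpha, alpha) <= inner(beta, beta)
p = cartan_int(alpha, beta) * cartan_int(beta, alpha)
assert p in (0, 1, 2, 3)               # part 1
assert cartan_int(alpha, beta) == -1   # part 3: the smaller factor is -1
assert cartan_int(beta, alpha) == -inner(beta, beta) / inner(alpha, alpha) == -2
```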

The novelty of this work lies in the connection of root systems with Lie algebras. In addition, the importance of Dynkin diagrams and Weyl chambers for the root system and the Lie algebra is shown. Modern figures and graphs, produced with modern tools, are used to convey the content more clearly.

2. Basic Theory of Root Systems

Theorem 2.1. [1] Let *V* be a finite-dimensional vector space over *R* equipped with an inner product
$(,)$ . The Cauchy–Schwarz inequality asserts that

$\left|\left(u,v\right)\right|\le \Vert u\Vert \Vert v\Vert $

for $u,v\in V$ . It follows that if $u,v\in V$ are nonzero, then

$-1\le \frac{\left(u,v\right)}{\Vert u\Vert \Vert v\Vert}\le 1$

If
$u,v\in V$ are nonzero, then we define the angle between *u* and *v* to be the unique number
$0\le \theta \le \pi $ such that

$\left(u,v\right)=\Vert u\Vert \Vert v\Vert \mathrm{cos}\theta $

The inner product measures the angle between two vectors, though it is a bit more complicated in that the lengths of *u* and *v* are also involved. The term “angle” does make sense geometrically.

Proposition 2.2. [2] Suppose that *α* and *β* are linearly independent roots and
$\Vert \alpha \Vert \le \Vert \beta \Vert $ . Let *θ* be the angle between *α *and *β*. Then we have the following Table 1 & Figure 1.

Proof. Here $\mathrm{cos}\theta =\left(\alpha ,\beta \right)/\left(\Vert \alpha \Vert \Vert \beta \Vert \right)$ , so $\left|\mathrm{cos}\theta \right|=\sqrt{c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)}/2$ . Now $c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)=0,1,2$ , or 3 and $\frac{{\Vert \beta \Vert}^{2}}{{\Vert \alpha \Vert}^{2}}=\left|c\left(\beta ,\alpha \right)\right|$ . Moreover, $\mathrm{cos}\theta $ has the same sign as $c\left(\alpha ,\beta \right)$ . This gives us Table 1 below.

Table 1. The possible values of $c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)$ , the corresponding angles, and the length ratio $\frac{{\Vert \beta \Vert}^{2}}{{\Vert \alpha \Vert}^{2}}=\left|c\left(\beta ,\alpha \right)\right|$ .

| $c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)$ | *θ* | ${\Vert \beta \Vert}^{2}/{\Vert \alpha \Vert}^{2}$ |
| --- | --- | --- |
| 0 | $\pi /2$ | undetermined |
| 1 | $\pi /3$ or $2\pi /3$ | 1 |
| 2 | $\pi /4$ or $3\pi /4$ | 2 |
| 3 | $\pi /6$ or $5\pi /6$ | 3 |

Figure 1. The allowed angles and length ratios, for the case of an acute angle.
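The angles above follow from $\mathrm{cos}^{2}\theta =c\left(\alpha ,\beta \right)c\left(\beta ,\alpha \right)/4$ ; a minimal numerical check of the acute representatives:

```python
# Minimal check: cos^2(theta) = product/4 forces acute angles of
# 90, 60, 45 and 30 degrees for products 0, 1, 2, 3.
import math

for product, degrees in [(0, 90), (1, 60), (2, 45), (3, 30)]:
    theta = math.acos(math.sqrt(product) / 2)   # the acute representative
    assert round(math.degrees(theta)) == degrees
```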

Lemma 2.3. Let *α* and *β* be roots and let *θ* be the angle between them. If $0<\theta <\pi /2$ , then $\alpha -\beta \in R$ .

Proof. If 1, 2 or 3 is written as a product of two positive integers, then one of the factors is 1. Up to swapping *α* and *β* we can assume that
${\mu}_{\beta \alpha}=1$ . The reflection in *α* sends *β* to
$\beta -{\mu}_{\beta \alpha}\alpha $ , thus
$\beta -\alpha \in R$ .

Definition 2.4. Suppose
$\left({V}_{1},R\right)$ and
$\left({V}_{2},Q\right)$ are root systems. *Consider the vector space,*
${V}_{1}\oplus {V}_{2}$ with the natural inner product determined by the inner products on
${V}_{1}$ and
${V}_{2}$ . Then
$R\cup Q$ is a root system in
${V}_{1}\oplus {V}_{2}$ , called the direct sum of *R* and *Q*.

Definition 2.5.* *A root system
$\left(V,R\right)$ is called reducible if there exists an orthogonal decomposition
$V={V}_{1}\oplus {V}_{2}$ with
$\mathrm{dim}{V}_{1}>0$ and
$\mathrm{dim}{V}_{2}>0$ such that every element of *R* is either in
${V}_{1}$ or in
${V}_{2}$ . If no such decomposition exists,
$\left(V,R\right)$ is called irreducible.

Example 2.6. [3] The following Figure 2 shows the root systems of rank 1 and 2. All of them are indecomposable (except ${A}_{1}+{A}_{1}$ ), and reduced (except $B{C}_{1}$ and $B{C}_{2}$ ).

Proposition 2.7. Every root system of rank two is isomorphic to one of the systems in Figure 2.

Proof: Suppose that $V={R}^{2}$ and let $R\subset {R}^{2}$ be a root system. Let *θ* be the smallest angle occurring between any two vectors in *R*. Since the elements of *R* span *R*^{2}, we can find two linearly independent vectors *α* and *β* in *R*. If the angle between *α* and *β* is greater than $\frac{\pi}{2}$ , then the angle between *α* and −*β* is less than $\frac{\pi}{2}$ ; therefore, the minimum angle is at most $\frac{\pi}{2}$ .
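The classification behind Proposition 2.7 can be explored computationally: starting from two simple roots and repeatedly applying the reflections $s_\alpha$ reproduces a full rank-two system. A sketch for B2 (the function names are ours, not the paper's):

```python
# Sketch: closing a pair of B2 simple roots under all reflections s_a
# recovers the full rank-two system of Figure 2 (8 roots).
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(a, b):
    """s_a(b) = b - 2 (b, a)/(a, a) a, rounded to tame float noise."""
    c = 2 * inner(b, a) / inner(a, a)
    return tuple(round(x - c * y, 9) for x, y in zip(b, a))

roots = {(1.0, 0.0), (-1.0, 1.0)}      # the two simple roots of B2
while True:
    new = {reflect(a, b) for a in roots for b in roots} - roots
    if not new:
        break
    roots |= new
assert len(roots) == 8                 # B2 has 8 roots
```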

3. Root Systems For Classical Complex Lie Algebras

Definition 3.1. A root
$\alpha \in V$ is said to be simple if *α* is positive and *α* is not the sum of two positive roots. The collection Γ of all simple roots is called a simple system of roots.

Lemma 3.2. Let
$\Gamma =\left\{{\alpha}_{1},\cdots ,{\alpha}_{n}\right\}$ be a simple system of roots. Then every positive root *δ* can be written as
$\delta ={\alpha}_{{i}_{1}}+{\alpha}_{{i}_{2}}+\cdots +{\alpha}_{{i}_{k}}$ , where each initial partial sum
${\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{j}}$ (
$1\le j\le k$ ) is a root.

Proof. For every positive root
$\delta ={\displaystyle {\sum}_{i=1}^{n}{\mu}_{i}{\alpha}_{i}}$ , the height of *δ* is defined to be the positive number ht
$\delta ={\displaystyle {\sum}_{i=1}^{n}{\mu}_{i}}$ . We prove this lemma by induction on ht *δ*. If ht
$\delta =1$ , then *δ* is simple and there is nothing to prove. So assume that *m* > 1 and that the lemma’s conclusion holds for all positive roots of height < *m*. Now suppose that *δ* is a positive root of height *m*. Since
$\left(\delta ,\delta \right)>0$ , there exists a simple root
${\alpha}_{i}$ with
$\left(\delta ,{\alpha}_{i}\right)>0$ , and then
$\delta -{\alpha}_{i}$ is a root, of height
$m-1$ . Now apply the induction hypothesis to the root
$\delta -{\alpha}_{i}$ : we get
$\delta -{\alpha}_{i}={\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{m-1}}$ , where each initial partial sum is a root. Then
$\delta ={\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{m-1}}+{\alpha}_{i}$ . Thus *δ* satisfies the conclusion of the lemma, completing the induction step as well as the proof.
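Lemma 3.2 can be verified directly in type A3, where the positive roots are $e_i - e_j$ ( $i<j$ ) and the simple roots are ${\alpha}_{k}={e}_{k}-{e}_{k+1}$ . A sketch (the helper names are ours):

```python
# Illustration of Lemma 3.2 in A3: every positive root e_i - e_j equals
# alpha_i + ... + alpha_{j-1}, and every initial partial sum is again a root.
n = 4

def e(i):                         # standard basis vector of R^4
    return tuple(1 if k == i else 0 for k in range(n))

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

simple = [sub(e(i), e(i + 1)) for i in range(n - 1)]              # alpha_1..alpha_3
positive = {sub(e(i), e(j)) for i in range(n) for j in range(n) if i < j}
for i in range(n):
    for j in range(i + 1, n):
        partial = (0,) * n
        for k in range(i, j):     # build e_i - e_j step by step
            partial = add(partial, simple[k])
            assert partial in positive        # each partial sum is a root
```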

Definition 3.3. The root system Λ is decomposable if Λ is a union $\Lambda ={\Lambda}_{1}\cup {\Lambda}_{2}$ with ${\Lambda}_{1}\ne \varnothing $ , ${\Lambda}_{2}\ne \varnothing $ , and ${\Lambda}_{1}\perp {\Lambda}_{2}$ .

Definition 3.4. If Γ is a simple system of roots in Λ, we say that Γ is decomposable if Γ is a union $\Gamma ={\Gamma}_{1}\cup {\Gamma}_{2}$ , with ${\Gamma}_{1}\ne \varnothing $ , ${\Gamma}_{2}\ne \varnothing $ , and ${\Gamma}_{1}\perp {\Gamma}_{2}$ .

Figure 2. The rank-two root systems.

Lemma 3.4. Let Γ be a simple system of roots in Λ. Then Λ is decomposable if and only if Γ is decomposable.

Proof. Suppose that Λ is decomposable, with
$\Lambda ={\Lambda}_{1}\cup {\Lambda}_{2}$ . For
$i=1,2$ , let
${\Gamma}_{i}={\Lambda}_{i}\cap \Gamma $ . Then neither
${\Gamma}_{1}$ nor
${\Gamma}_{2}$ can be empty. For if, say,
${\Gamma}_{1}=\varnothing $ , then
${\Gamma}_{2}=\Gamma $ , which implies that
${\Lambda}_{1}\perp \Gamma $ . Since Γ is a basis of *E*, we conclude that
${\Lambda}_{1}\perp \Lambda $ , and so
${\Lambda}_{1}=\varnothing $ , a contradiction.

Conversely, suppose that Γ is decomposable, with
$\Gamma ={\Gamma}_{1}\cup {\Gamma}_{2}$ . We arrange the elements of Γ so that
${\Gamma}_{1}=\left\{{\alpha}_{1},\cdots ,{\alpha}_{r}\right\}$ and
${\Gamma}_{2}=\left\{{\alpha}_{r+1},\cdots ,{\alpha}_{l}\right\}$ . Now let
$\delta \in \Lambda $ . We claim that *δ* is a linear combination of elements of
${\Gamma}_{1}$ or *δ* is a linear combination of elements of
${\Gamma}_{2}$ . To prove this claim, we may assume that *δ* is positive. Now suppose, to the contrary, that *δ* is a linear combination

$\delta ={\displaystyle \underset{i=1}{\overset{r}{\sum}}{\mu}_{i}{\alpha}_{i}}+{\displaystyle \underset{i=r+1}{\overset{l}{\sum}}{\mu}_{i}{\alpha}_{i}}$

where both sums on the right are nonzero. By Lemma 3.2, write
$\delta ={\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{k}}$ , where each initial partial sum is a root. Without loss of generality, we can assume that
${\alpha}_{{i}_{1}}\in {\Gamma}_{1}$ . Let *s* be the smallest integer such that
${\alpha}_{{i}_{s}}\in {\Gamma}_{2}$ . Then
$\gamma ={\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{s-1}}+{\alpha}_{{i}_{s}}$ is a root.

Now consider the root ${r}_{{i}_{s}}\left(\gamma \right)$ , the reflection of *γ* in ${\alpha}_{{i}_{s}}$ . Since ${\alpha}_{{i}_{1}},\cdots ,{\alpha}_{{i}_{s-1}}$ lie in ${\Gamma}_{1}$ and are therefore orthogonal to ${\alpha}_{{i}_{s}}$ , this root equals

${r}_{{i}_{s}}\left({\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{s-1}}+{\alpha}_{{i}_{s}}\right)={\alpha}_{{i}_{1}}+\cdots +{\alpha}_{{i}_{s-1}}-{\alpha}_{{i}_{s}},$

which is not a linear combination of simple roots with nonnegative integer coefficients, a contradiction. This proves the claim.

Using the claim, we now let ${\Lambda}_{1}$ be the set of roots which are linear combinations of elements of ${\Gamma}_{1}$ , and let ${\Lambda}_{2}$ be the set of roots which are linear combinations of elements of ${\Gamma}_{2}$ . Then ${\Lambda}_{1}\ne \varnothing $ , ${\Lambda}_{2}\ne \varnothing $ , ${\Lambda}_{1}\perp {\Lambda}_{2}$ , and $\Lambda ={\Lambda}_{1}\cup {\Lambda}_{2}$ . Thus Λ is decomposable.


Example 3.6. [2] Let *E* be a two-dimensional inner product space. We will show that, up to isometry, there are only three possible indecomposable simple systems of roots Γ on *E*. Suppose that
$\Gamma =\left\{{\alpha}_{1},{\alpha}_{2}\right\}$ . Then
$\left({\alpha}_{1},{\alpha}_{2}\right)\ne 0$ , since Γ is indecomposable. We may assume that
$\Vert {\alpha}_{1}\Vert \le \Vert {\alpha}_{2}\Vert $ .
Since distinct simple roots satisfy $\left({\alpha}_{1},{\alpha}_{2}\right)<0$ , Theorem 1.2 gives
$c\left({\alpha}_{1},{\alpha}_{2}\right)=-1$ and
$c\left({\alpha}_{2},{\alpha}_{1}\right)=-1,-2\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}-3$ (Figures 3-5).

Figure 3. $c\left({\alpha}_{2},{\alpha}_{1}\right)=-1\Rightarrow \Vert {\alpha}_{1}\Vert =\Vert {\alpha}_{2}\Vert $ and $\mathrm{cos}\theta =-\frac{1}{2}$ .

Figure 4. $c\left({\alpha}_{2},{\alpha}_{1}\right)=-2\Rightarrow \Vert {\alpha}_{2}\Vert =\sqrt{2}\Vert {\alpha}_{1}\Vert $ and $\mathrm{cos}\theta =-\frac{\sqrt{2}}{2}$ .

Figure 5. $c\left({\alpha}_{2},{\alpha}_{1}\right)=-3\Rightarrow \Vert {\alpha}_{2}\Vert =\sqrt{3}\Vert {\alpha}_{1}\Vert $ and $\mathrm{cos}\theta =-\frac{\sqrt{3}}{2}$ .

Let
$\Gamma =\left\{{\alpha}_{1},\cdots ,{\alpha}_{l}\right\}$ be a simple system of roots in Λ. We introduce a partial ordering
$\prec $ on Λ as follows: if
$\alpha ,\beta \in \Lambda $ , then
$\alpha \prec \beta $ if and only if
$\beta -\alpha ={\displaystyle {\sum}_{i}{\mu}_{i}{\alpha}_{i}}$ , where each
${\mu}_{i}$ is a nonnegative integer and at least one
${\mu}_{i}$ is positive. It is clear that
$\prec $ is indeed a partial order on Λ. Of course,
$\prec $ depends on the choice of Γ. Recall that the simple system Γ was obtained via a lexicographic order < on *E*, under which each simple root *α _{i}* is a positive root.

4. Weyl Chambers

Definition 4.1. Let *R* be a root system in *V*. For each root
$\alpha \in R$ , let

${W}_{\alpha}=\left\{x\in V:\left(x,\alpha \right)=0\right\}$

be the hyperplane orthogonal to *α*. The hyperplanes ${W}_{\alpha}$ subdivide *V* into finitely many polyhedral convex cones.

Also, recall that a vector
$v\in V$ is regular with respect to *R* if and only if

$v\in {V}_{reg}\left(R\right)=V-{\displaystyle {\cup}_{\alpha \in R}{W}_{\alpha}}$

Evidently,
${V}_{reg}$ is an open subset of *V*. A path component of the space
${V}_{reg}$ is called a Weyl chamber of *V* with respect to *R*.

If *C* is a Weyl chamber, then
$-C=\left\{x\in V:-x\in C\right\}$ is also a Weyl chamber. It is called the Weyl chamber opposite to *C*. A hyperplane
$P\subset V$ is called a wall of the Weyl chamber *C* if
$P\cap C=\varnothing $ and
$P\cap \stackrel{\xaf}{C}$ contains a nonempty subset open in *P*.
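The chamber count can be checked empirically in rank two. For B2 the four hyperplanes ${W}_{\alpha}$ cut the plane into 8 chambers; the sketch below (a sampling approach of our own, not the paper's method) identifies each chamber by the sign pattern of $\left(x,\alpha \right)$ over the positive roots.

```python
# Counting the Weyl chambers of B2 empirically: sample directions in the
# plane and record the sign pattern of (x, alpha) over the positive roots.
# Each realised pattern corresponds to one open chamber; B2 has 8.
import math

pos_roots = [(1, 0), (0, 1), (1, 1), (1, -1)]
patterns = set()
for k in range(720):
    t = 2 * math.pi * (k + 0.5) / 720            # offset avoids the walls
    x = (math.cos(t), math.sin(t))
    signs = tuple(1 if x[0] * a + x[1] * b > 0 else -1 for a, b in pos_roots)
    patterns.add(signs)
assert len(patterns) == 8
```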

A subsystem Π of a root system Λ is called a system of simple roots (or a base) of the system Λ if Π is linearly independent and each
$\beta \in \Lambda $ can be represented in the form

$\beta =\underset{\alpha \in \Pi}{{\displaystyle \sum}}\text{\hspace{0.05em}}{\eta}_{\alpha}\alpha $

where the
${\eta}_{\alpha}$ are integers which are simultaneously either all nonnegative or all nonpositive. In the first case *β* is said to be positive (*β* > 0), in the second negative (*β* < 0) with respect to Π.

Lemma 4.2. [3] For any Weyl chamber *C* the system Π(*C*) is a system of simple roots. The roots that are positive (negative) with respect to Π(*C*) coincide with *C*-positive (respectively, *C*-negative) roots. The correspondence
$C\to \Pi \left(C\right)$ between the Weyl chambers and systems of simple roots is bijective. For any Weyl chamber *C*, we have

$C=\left\{x\in V:\left(x,\alpha \right)>0\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\alpha \in \Pi \left(C\right)\right\},$

$\stackrel{\xaf}{C}=\left\{x\in V:\left(x,\alpha \right)\ge 0\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\alpha \in \Pi \left(C\right)\right\}.$

The walls of the Weyl chamber *C* are the hyperplanes
${W}_{\alpha}$ where
$\alpha \in \Pi \left(C\right)$ .

Proof. The closure of *C* consists of *C* together with the points
$x\in V$ with
$x\notin C$ for which there exists a sequence
${\left({x}_{n}\right)}_{n=1}^{\infty}$ of elements of *C* such that
${x}_{n}\to x$ as
$n\to \infty $ . Let *x* be a point of this second type. Assume that there exists
$\alpha \in \Pi \left(C\right)$ such that
$\left(x,\alpha \right)<0$ . Since
$\left({x}_{n},\alpha \right)\to \left(x,\alpha \right)$ as
$n\to \infty $ , there exists a positive integer *n* such that
$\left({x}_{n},\alpha \right)<0$ . This contradicts
${x}_{n}\in C$ . It follows that
$\stackrel{\xaf}{C}$ is contained in
$\left\{x\in V:\left(x,\alpha \right)\ge 0\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\alpha \in \Pi \left(C\right)\right\}$ . Conversely, let *x* be in this set; we need to prove that
$x\in \stackrel{\xaf}{C}$ . Let
${x}_{0}\in C$ and consider the sequence
${\left(x+\left(1/n\right){x}_{0}\right)}_{n=1}^{\infty}$ . Evidently this sequence converges to *x* and is contained in *C*. It follows that *x* is in
$\stackrel{\xaf}{C}$ . This proves the first assertion of the lemma.

For the second assertion, let
$v\in V$ . If
$v\in {V}_{reg}\left(R\right)$ , then *v* is by definition in some Weyl chamber. Assume now that
$v\notin {V}_{reg}\left(R\right)$ . Define

$p:V\to R\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{by}\text{\hspace{0.17em}}\text{\hspace{0.05em}}p\left(x\right)=\underset{\alpha \in R}{{\displaystyle \prod}}\left(x,\alpha \right)$

The function *p* is a non-zero polynomial function on *V*, and the set of zeros of *p* is exactly
${\cup}_{\alpha \in R}{W}_{\alpha}$ ; thus
$p\left(v\right)=0$ . Since *p* is a non-zero polynomial function on *V*, *p* cannot vanish on an open set. Hence, for each positive integer *n*, there exists
${v}_{n}$ such that
$\Vert v-{v}_{n}\Vert <1/n$ and
$p\left({v}_{n}\right)\ne 0$ . The sequence
${\left({v}_{n}\right)}_{n=1}^{\infty}$ converges to *v* and is contained in
${V}_{reg}\left(R\right)$ ; in particular, every element of the sequence is contained in some Weyl chamber. Since the number of Weyl chambers of *V* with respect to *R* is finite, some chamber *C* contains a subsequence
${\left({v}_{{n}_{k}}\right)}_{k=1}^{\infty}$ . We then have
$\left({v}_{{n}_{k}},\alpha \right)\ge 0$ for all
$\alpha \in \Pi \left(C\right)$ and all positive integers *k*. Taking limits, we find that
$\left(v,\alpha \right)\ge 0$ for all
$\alpha \in \Pi \left(C\right)$ , so that
$v\in \stackrel{\xaf}{C}$ .

Example.4.3. [3] Using the lexicographic order with respect to the basis composed of the weights
${\epsilon}_{i}$ , one can easily construct systems of simple roots
${\Pi}_{g}$ , in the root systems
${\Delta}_{g}$ , of the classical Lie algebras *g*,

$g=g{l}_{n}\left(C\right)$ or $s{l}_{n}\left(C\right)$ ;

${\Pi}_{g}=\left\{{\alpha}_{1},\cdots ,{\alpha}_{n-1}\right\}$ , where ${\alpha}_{i}={\epsilon}_{i}-{\epsilon}_{i+1}$

${\Delta}_{g}^{+}=\left\{{\epsilon}_{i}-{\epsilon}_{j}|i<j;i,j=1,\cdots ,n\right\}$

The corresponding Weyl chamber consists of the set of diagonal matrices $\text{diag}\left({x}_{1},\cdots ,{x}_{n}\right)$ such that ${x}_{1}>{x}_{2}>\cdots >{x}_{n}$ ,

$g=s{o}_{2l}\left(C\right),l\ge 2;$

${\Pi}_{g}=\left\{{\alpha}_{1},\cdots ,{\alpha}_{l}\right\}$ where ${\alpha}_{i}={\epsilon}_{i}-{\epsilon}_{i+1}$ $\left(i=1,\cdots ,l-1\right)$ , ${\alpha}_{l}={\epsilon}_{l-1}+{\epsilon}_{l}$

${\Delta}_{g}^{+}=\left\{{\epsilon}_{i}\pm {\epsilon}_{j}|i<j;i,j=1,\cdots ,l\right\}$

$g=s{o}_{2l+1}\left(C\right),l\ge 1;$

${\Pi}_{g}=\left\{{\alpha}_{1},\cdots ,{\alpha}_{l}\right\}$ where ${\alpha}_{i}={\epsilon}_{i}-{\epsilon}_{i+1}$ $\left(i=1,\cdots ,l-1\right)$ , ${\alpha}_{l}={\epsilon}_{l}$

${\Delta}_{g}^{+}=\left\{{\epsilon}_{i}\pm {\epsilon}_{j},{\epsilon}_{i}|i<j;i,j=1,\cdots ,l\right\}$

$g=s{p}_{2l}\left(C\right),l\ge 1;$

${\Pi}_{g}=\left\{{\alpha}_{1},\cdots ,{\alpha}_{l}\right\}$ where ${\alpha}_{i}={\epsilon}_{i}-{\epsilon}_{i+1}$ $\left(i=1,\cdots ,l-1\right)$ , ${\alpha}_{l}=2{\epsilon}_{l}$

${\Delta}_{g}^{+}=\left\{{\epsilon}_{i}\pm {\epsilon}_{j},2{\epsilon}_{i}|i<j;i,j=1,\cdots ,l\right\}$
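A quick sanity check of the positive-root counts implicit in Example 4.3 (the counting function is ours, not the paper's): $s{o}_{2l}$ has ${l}^{2}-l$ positive roots, while $s{o}_{2l+1}$ and $s{p}_{2l}$ have ${l}^{2}$ each.

```python
# Sanity check of the positive-root counts for the B, C and D families.
def counts(l):
    # roots e_i +/- e_j with i < j, common to all three families
    pairs = [(i, j, s) for i in range(l) for j in range(l) if i < j for s in (1, -1)]
    # D adds nothing; B adds the l short roots e_i; C adds the l long roots 2 e_i
    return len(pairs), len(pairs) + l, len(pairs) + l

for l in (2, 3, 4):
    d, b, c = counts(l)
    assert d == l * l - l and b == l * l and c == l * l
```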

Lemma 4.4. Let *α* and *β* be roots that are neither orthogonal nor proportional, with $\Vert \alpha \Vert \ge \Vert \beta \Vert $ . Then

${\Vert \alpha \Vert}^{2}=k{\Vert \beta \Vert}^{2}$ , $k=1,2,3$

and

${\left(\mathrm{cos}\angle \left(\alpha ,\beta \right)\right)}^{2}=\frac{{\left(\alpha ,\beta \right)}^{2}}{{\Vert \alpha \Vert}^{2}{\Vert \beta \Vert}^{2}}=\frac{k}{4}$

Proof: We know that

$2\frac{\left(\alpha ,\beta \right)}{{\Vert \alpha \Vert}^{2}},2\frac{\left(\alpha ,\beta \right)}{{\Vert \beta \Vert}^{2}}\in Z$ * *

Taking the product,

$\frac{4{\left(\alpha ,\beta \right)}^{2}}{{\Vert \alpha \Vert}^{2}{\Vert \beta \Vert}^{2}}\in Z$

but *α* and *β* are neither proportional nor perpendicular, so
$\frac{{\left(\alpha ,\beta \right)}^{2}}{{\Vert \alpha \Vert}^{2}{\Vert \beta \Vert}^{2}}=\frac{k}{4}$ where *k* = 1, 2, or 3. Since
$\Vert \alpha \Vert \ge \Vert \beta \Vert $ , the first term in the first equation is the smaller in absolute value, hence

$\left|2\frac{\left(\alpha ,\beta \right)}{{\Vert \alpha \Vert}^{2}}\right|=1$

Straightforward manipulations of this imply what we want.

Corollary 4.5. Suppose *α*,*β* are distinct simple roots and
$\left(\alpha ,\beta \right)\ne 0$ . Then

$\angle \left(\alpha ,\beta \right)=\{\begin{array}{l}120\u02da\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1\\ 135\u02da\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=2\\ 150\u02da\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=3\end{array}$

with
${\Vert \alpha \Vert}^{2}=k{\Vert \beta \Vert}^{2}$ *,* or vice versa.

5. Cartan Matrices And Dynkin Diagrams

Definition 5.1. A system $\Phi =\left\{{\alpha}_{1},\cdots ,{\alpha}_{s}\right\}$ of vectors is said to be admissible if
${a}_{ij}=c\left({\alpha}_{i},{\alpha}_{j}\right)$ is a nonpositive integer for all
$i\ne j$ . The integer square matrix
$A\left(\Phi \right)=\left({a}_{ij}\right)$ is called the *matrix of the system *Φ. Let
${m}_{ij}={a}_{ij}{a}_{ji}$ and let
${\theta}_{ij}$ be the angle between the vectors
${\alpha}_{i}$ and
${\alpha}_{j}$ (
$i\ne j$ ). For an admissible system Φ the numbers
${m}_{ij}$ and the angles
${\theta}_{ij}$ can assume only the

following values: ${m}_{ij}=0,1,2,3,4$ ; ${\theta}_{ij}=\pi \left(1-\frac{1}{{\mu}_{ij}}\right)$ , where ${\mu}_{ij}=2,3,4,6,\infty $ , respectively.

Definition 5.2. The *Dynkin diagram* of an admissible system of vectors is the graph described above in which the edge joining the vertices numbered by *i* and *j* (
$i\ne j$ ,
${m}_{ij}>0$ ) is of multiplicity
${m}_{ij}$ . If
$\Vert {\alpha}_{i}\Vert <\Vert {\alpha}_{j}\Vert $ , then the corresponding edge is oriented by an arrow pointing from the *j*-th vertex towards the *i*-th one.

Theorem 5.3. The *Dynkin diagrams* of the classical simple Lie algebras *g* are given in the following list.

The *A _{n}* root lattice—that is, the lattice generated by the An roots—is most easily described as the set of integer vectors in
${R}^{n+1}$ whose components sum to zero (Figure 6).

Figure 6. *A _{n}* root system.

Example: The *A*_{3} root lattice is known to crystallographers as a face-centered cubic lattice.

Simple roots in *A*_{3}

The *A*_{3} root system (as well as other third-order root systems) can be modeled in the Zometool Construction set (Figures 7-9).

Model of the root system in the Zometool system (Root system, Wikipedia). Brian C. Hall, *Lie Groups, Lie Algebras, and Representations*, Fig. 8.16, p. 229.

Figure 7. The roots in *A*_{3} make up the vertices of a cuboctahedron.

Brian C. Hall, *Lie Groups, Lie Algebras, and Representations*, Fig. 8.17, p. 229.

Figure 8. The roots in *A*_{3} lie at the midpoints of the edges of a cube.

The *B _{n}* root lattice—that is, the lattice generated by the *B _{n}* roots—is the set of all integer vectors in ${R}^{n}$ (Figure 9).

Figure 9. *B _{n}* root system.

Example:

*B*_{1} is isomorphic to *A*_{1} via scaling by
$\sqrt{2}$ , and is therefore not a distinct root system (Figure 10).

Simple roots in *B*_{4}

Brian C. Hall, *Lie Groups, Lie Algebras, and Representations*, Fig. 8.18, p. 230.

Figure 10. The *B*_{3} root system, with the elements of the base in *dark gray*.

The *C _{n}* root lattice—that is, the lattice generated by the *C _{n}* roots—is the set of integer vectors in ${R}^{n}$ whose components sum to an even integer (Figure 11).

Figure 11. *C _{n}* root system.

Example:

*C*_{2} is isomorphic to *B*_{2} via scaling by
$\sqrt{2}$ and a 45 degree rotation, and is therefore not a distinct root system (Figure 12, Figure 13).

Simple roots in *C*_{4 }

Brian C. Hall, *Lie Groups, Lie Algebras, and Representations*, Fig. 8.20, p. 229.

Figure 12. Root system *C*_{3} with the elements of the base in *dark gray*.* *

Brian C. Hall, *Lie Groups, Lie Algebras, and Representations*, Fig. 8.31, p. 230.

Figure 13. The *C*_{3} root system consists of the vertices of an octahedron, together with the midpoints of the edges of the octahedron.

The *D _{n}* root lattice—that is, the lattice generated by the *D _{n}* roots—is the set of integer vectors in ${R}^{n}$ whose components sum to an even integer (Figure 14).

Figure 14. *D _{n}* root system.

Example:

*D*_{3} coincides with *A*_{3}, and is therefore not a distinct root system. The 12 *D*_{3} root vectors are expressed as the vertices of a lower-symmetry construction of the cuboctahedron.

*D*_{4} has an additional symmetry called triality. The 24 *D*_{4} root vectors are expressed as the vertices of a lower-symmetry construction of the 24-cell (Figure 15).

Simple roots in *D*_{4 }

Figure 15. Root system *D*_{3}.

72 vertices of the polytope ${1}_{22}$ represent the root vectors of *E*_{6} (Figure 16, Figure 17, Figure 18 & Figure 19).

Figure 16. *E*_{6} root system.

Figure 17. *E*_{6} root system [5] .

126 vertices of the polytope ${2}_{31}$ represent the root vectors of *E*_{7}.

Figure 18. *E*_{7} root system.

Figure 19. *E*_{7} root system [5] .

240 vertices of the polytope ${4}_{21}$ represent the root vectors of *E*_{8} (Figure 20, Figure 21).

Figure 20. *E*_{8} root system.

Root system (Wikipedia).

Figure 21. *E*_{8} root system.

The *F*_{4} root lattice—that is, the lattice generated by the *F*_{4} root system—is the set of points in *R*^{4} such that either all the coordinates are integers or all the coordinates are half-integers (a mixture of integers and half-integers is not allowed). This lattice is isomorphic to the lattice of Hurwitz quaternions (Figure 22, Figure 23).
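The "no mixing" condition is consistent because the all-integer and all-half-integer vectors are closed under addition; a minimal sanity check of this (using exact rational arithmetic; the helper name `in_lattice` is ours):

```python
# Sanity check of the F4 lattice description: sums of all-integer and
# all-half-integer vectors never produce a mixed vector.
from fractions import Fraction as F

def in_lattice(v):
    ints = all(x.denominator == 1 for x in v)
    halves = all(x.denominator == 2 for x in v)
    return ints or halves

a = (F(1), F(0), F(-1), F(2))                    # all integers
b = (F(1, 2), F(1, 2), F(-1, 2), F(3, 2))        # all half-integers
s = tuple(x + y for x, y in zip(a, b))           # integer + half -> half
s2 = tuple(x + y for x, y in zip(b, b))          # half + half -> integer
assert in_lattice(a) and in_lattice(b) and in_lattice(s) and in_lattice(s2)
```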

Figure 22. *F*_{4} root system.

Simple roots in *F*_{4 }

Figure 23. *F*_{4} root system [5] .

The *G*_{2} root lattice—that is, the lattice generated by the *G*_{2} roots—is the same as the *A*_{2} root lattice (Figure 24).

Figure 24. *G*_{2} root system.

Simple roots in *G*_{2}

The root system *G*_{2} has 12 roots, which form the vertices of a hexagram. One choice of simple roots is (
${\alpha}_{1},\beta ={\alpha}_{2}-{\alpha}_{1}$ ) where
${\alpha}_{i}={e}_{i}-{e}_{i+1}$ for
$i=1,2$ .
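With this choice, ${\alpha}_{1}={e}_{1}-{e}_{2}$ and $\beta =-{e}_{1}+2{e}_{2}-{e}_{3}$ in ${R}^{3}$ , and closing the pair under reflections recovers all 12 roots. A sketch (function names are ours):

```python
# Generating the 12 roots of G2 from the simple roots above by closing
# under the reflections s_a; the roots come in two lengths.
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(a, b):
    c = 2 * inner(b, a) / inner(a, a)     # always an integer for roots
    return tuple(round(x - c * y) for x, y in zip(b, a))

roots = {(1, -1, 0), (-1, 2, -1)}         # alpha_1 and beta = alpha_2 - alpha_1
while True:
    new = {reflect(a, b) for a in roots for b in roots} - roots
    if not new:
        break
    roots |= new
assert len(roots) == 12
assert sorted({inner(a, a) for a in roots}) == [2, 6]   # short and long roots
```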

Example 5.4. [4] The extended Dynkin diagrams of simple classical Lie algebras are of the following form (each diagram contains *n*+1 vertices, the right column lists the standard notation for each of the diagrams) (Figure 25):

Figure 25. Simple classical Dynkin diagrams.

Lemma 5.5. A Coxeter-Dynkin graph is a tree (Figure 26).

Proof. Suppose, to the contrary, that there are circuits. Let ${\alpha}_{1},\cdots ,{\alpha}_{n}$ be the vertices of a minimal circuit.

Figure 26. Coxeter-Dynkin graph.

Since the circuit is minimal, no root ${\alpha}_{i}$ is connected to a root ${\alpha}_{j}$ in the circuit unless $j\equiv \left(i+1\right)\mathrm{mod}n$ or $j\equiv \left(i-1\right)\mathrm{mod}n$ . Suppose now that ${\alpha}_{i}$ and ${\alpha}_{j}$ are consecutive roots in the circuit. We claim that

$\frac{1}{2}\left({\alpha}_{i},{\alpha}_{i}\right)+2\left({\alpha}_{i},{\alpha}_{j}\right)+\frac{1}{2}\left({\alpha}_{j},{\alpha}_{j}\right)\le 0$

To show this, we may assume that $\Vert {\alpha}_{i}\Vert \le \Vert {\alpha}_{j}\Vert $ . Then obviously,

$\frac{1}{2}\left({\alpha}_{i},{\alpha}_{i}\right)-\frac{1}{2}\left({\alpha}_{j},{\alpha}_{j}\right)\le 0$

Moreover, by Theorem 1.2(5), since $\left({\alpha}_{i},{\alpha}_{j}\right)<0$ ,

$2\left({\alpha}_{i},{\alpha}_{j}\right)+\left({\alpha}_{j},{\alpha}_{j}\right)=0$

Adding the left hand sides of the last two relations above, we obtain inequality
$\frac{1}{2}\left({\alpha}_{i},{\alpha}_{i}\right)+2\left({\alpha}_{i},{\alpha}_{j}\right)+\frac{1}{2}\left({\alpha}_{j},{\alpha}_{j}\right)\le 0$ . Thus, in particular,
$\frac{1}{2}\left({\alpha}_{i},{\alpha}_{i}\right)+2\left({\alpha}_{i},{\alpha}_{i+1}\right)+\frac{1}{2}\left({\alpha}_{i+1},{\alpha}_{i+1}\right)\le 0$ for all
$i=1,\cdots ,n$ , where the index
$i+1$ is counted modulo *n*. Adding these inequalities, we obtain

$0\ge \underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left(\frac{1}{2}\left({\alpha}_{i},{\alpha}_{i}\right)+2\left({\alpha}_{i},{\alpha}_{i+1}\right)+\frac{1}{2}\left({\alpha}_{i+1},{\alpha}_{i+1}\right)\right)=\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({\alpha}_{i},{\alpha}_{i}\right)+2\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({\alpha}_{i},{\alpha}_{i+1}\right)$

On the other hand,

$0\le \left(\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\text{\hspace{0.05em}}{\alpha}_{i},\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\text{\hspace{0.05em}}{\alpha}_{i}\right)=\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({\alpha}_{i},{\alpha}_{i}\right)+\underset{i\ne j}{{\displaystyle \sum}}\left({\alpha}_{i},{\alpha}_{j}\right)=\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({\alpha}_{i},{\alpha}_{i}\right)+2\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({\alpha}_{i},{\alpha}_{i+1}\right)$

by our remark at the beginning of the proof about adjacent vertices: non-consecutive roots in the circuit are not connected, hence orthogonal, so only the terms $\left({\alpha}_{i},{\alpha}_{i+1}\right)$ survive in the cross sum. The two displayed relations together force ${\Vert {\sum}_{i=1}^{n}{\alpha}_{i}\Vert}^{2}=0$ , hence ${\sum}_{i=1}^{n}{\alpha}_{i}=0$ . But this is a contradiction, since the ${\alpha}_{i}$ are linearly independent.
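As a quick sanity check of the key inequality in this proof, one can evaluate $\frac{1}{2}\left(\alpha ,\alpha \right)+2\left(\alpha ,\beta \right)+\frac{1}{2}\left(\beta ,\beta \right)$ numerically for pairs of simple roots joined by a single, double, and triple edge. The sketch below is an illustration, not part of the original argument; the coordinates are one standard realization of the A2, B2 and G2 simple roots, chosen here for the example.

```python
# Check: for two simple roots alpha, beta joined by an edge,
#   (1/2)(alpha, alpha) + 2(alpha, beta) + (1/2)(beta, beta) <= 0,
# with equality exactly when the roots have equal length (single edge).

def ip(u, v):
    """Euclidean inner product of two vectors given as tuples."""
    return sum(x * y for x, y in zip(u, v))

def edge_quantity(a, b):
    """The quantity (1/2)(a,a) + 2(a,b) + (1/2)(b,b) from the proof."""
    return 0.5 * ip(a, a) + 2 * ip(a, b) + 0.5 * ip(b, b)

# One common coordinate realization of each pair of simple roots.
pairs = {
    "A2": ((1, 0), (-0.5, 3 ** 0.5 / 2)),   # equal lengths, single edge
    "B2": ((1, -1), (0, 1)),                # double edge
    "G2": ((1, 0), (-1.5, 3 ** 0.5 / 2)),   # triple edge
}

for name, (a, b) in pairs.items():
    q = edge_quantity(a, b)
    print(name, round(q, 6), q <= 1e-12)
```

Running this gives 0 for A2 (the equality case of the inequality) and strictly negative values for B2 and G2, in agreement with the proof.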

Lemma 5.6. In a Dynkin diagram, suppose that roots *γ* and *δ* are joined by a simple edge. Delete *γ* and *δ*, replace them by the single root *γ* + *δ*, and join *γ* + *δ* to every root that was connected to *γ* or to *δ*, by the same types of edges. The resulting configuration is again a Dynkin diagram.

Proof. Note first that since *γ* and *δ* are connected, we have
$\left(\gamma ,\delta \right)<0$ and thus *γ* + *δ* is a root. Moreover, since the edge is simple,
$c\left(\gamma ,\delta \right)=c\left(\delta ,\gamma \right)=-1$ , which gives
$\left(\gamma ,\gamma \right)=\left(\delta ,\delta \right)$ and
$2\left(\gamma ,\delta \right)+\left(\gamma ,\gamma \right)=0$ . Hence
$\left(\gamma +\delta ,\gamma +\delta \right)=\left(\gamma ,\gamma \right)+2\left(\gamma ,\delta \right)+\left(\delta ,\delta \right)=\left(\gamma ,\gamma \right)$ .

Let *S* be the collection of roots *β* in the Dynkin diagram such that
$\beta \ne \gamma $ ,
$\beta \ne \delta $ , and *β* is connected to *γ* or *δ*.

So let
$\beta \in S$ . Without loss of generality, we may assume that *β* is connected to *γ*; it cannot be connected to *δ* as well, since the diagram contains no circuits. Then
$\left(\delta ,\beta \right)=0$ , and so
$c\left(\gamma +\delta ,\beta \right)=c\left(\gamma ,\beta \right)$ .

Moreover,

$c\left(\beta ,\gamma +\delta \right)=\frac{2\left(\beta ,\gamma +\delta \right)}{\left(\gamma +\delta ,\gamma +\delta \right)}=\frac{2\left(\beta ,\gamma \right)}{\left(\gamma ,\gamma \right)}=c\left(\beta ,\gamma \right).$

Hence

$c\left(\gamma +\delta ,\beta \right)c\left(\beta ,\gamma +\delta \right)=c\left(\gamma ,\beta \right)c\left(\beta ,\gamma \right).$

This shows that the number of bonds in the edge joining *β* and *γ* + *δ* is the same as the number of bonds in the edge joining *β* and *γ*.

Finally, since
$\Vert \gamma +\delta \Vert =\Vert \gamma \Vert $ , the direction of the edge joining *β* and *γ* + *δ* is the same as the direction of the edge joining *β* and *γ* (Figure 27).

Example

Figure 27. Dynkin diagrams: the edge joining *β* and *γ* + *δ*.
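Lemma 5.6 can also be checked on a concrete diagram. The sketch below is an illustration under an assumed realization (the standard embedding of the A3 simple roots as $e_{i}-e_{i+1}$ in ${\mathbb{R}}^{4}$ , not taken from the text): it contracts the simple edge joining ${\alpha}_{2}$ and ${\alpha}_{3}$ and verifies that the norm of the new root and its Cartan integers with the remaining root ${\alpha}_{1}$ are unchanged, exactly as the proof asserts.

```python
# Illustrating Lemma 5.6 on A3:  alpha1 -- alpha2 -- alpha3.
# Contract the simple edge joining gamma = alpha2 and delta = alpha3
# into the single root gamma + delta, and check that the edge to the
# remaining root alpha1 is preserved.

def ip(u, v):
    """Euclidean inner product of two vectors given as tuples."""
    return sum(x * y for x, y in zip(u, v))

def cartan(a, b):
    """Cartan integer c(a, b) = 2(a, b) / (b, b)."""
    return 2 * ip(a, b) / ip(b, b)

# Simple roots of A3 realized in R^4 as e_i - e_{i+1}.
a1 = (1, -1, 0, 0)
a2 = (0, 1, -1, 0)
a3 = (0, 0, 1, -1)
gd = tuple(x + y for x, y in zip(a2, a3))   # gamma + delta = e2 - e4

# ||gamma + delta||^2 = ||gamma||^2, as computed in the proof.
assert ip(gd, gd) == ip(a2, a2)

# c(gamma + delta, beta) = c(gamma, beta) and c(beta, gamma + delta) = c(beta, gamma).
assert cartan(gd, a1) == cartan(a2, a1)
assert cartan(a1, gd) == cartan(a1, a2)
print("contraction preserves the edge:", cartan(a1, gd), cartan(gd, a1))
```

Both Cartan integers come out as −1, i.e. the contracted diagram still has a simple edge from ${\alpha}_{1}$ to the new root, as the lemma predicts.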

6. The Future Perspective of This Paper

The future of this work lies in the continued study of Lie algebras; the beauty of the subject reveals itself through sustained research. Its close connections with other mathematical disciplines create abundant opportunities for further investigation. The primary goal of this paper was to be useful to anyone studying Lie algebras. In addition, the paper has shown the role of Dynkin diagrams in the root system, the beauty of these diagrams, and their close connection with Weyl chambers.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.



Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.