The Application of Linear Ordinary Differential Equations

Abstract

In this article, we will explore the applications of linear ordinary differential equations (linear ODEs) in Physics and other branches of mathematics, and examine the matrix method for solving linear ODEs. Although linear ODEs have a comparatively simple form, they are effective in solving certain physical and geometrical problems. We will begin by introducing fundamental knowledge in Linear Algebra and proving the existence and uniqueness of solutions of ODEs. Then, we will concentrate on finding the solutions of ODEs and introducing the matrix method for solving linear ODEs. Eventually, we will apply the conclusions we’ve gathered from the previous parts to solving problems concerning Physics and differential curves. The matrix method is of great importance in higher-dimensional computations, as it allows multiple variables to be calculated at the same time, thus reducing the complexity.


Cui, H. (2020) The Application of Linear Ordinary Differential Equations. Applied Mathematics, 11, 1292-1315. doi: 10.4236/am.2020.1112088.

1. Introduction

Ordinary Differential Equations (ODEs) have a broad range of applications in other subjects, such as Physics and Differential Geometry. Under specific circumstances, the problems encountered in these related areas are extremely difficult to solve using real numbers as parameters. To solve higher-dimensional cases, we need to use constant matrices as parameters (H. Cartan, 1981) . Ordinary differential equations (ODEs) are fundamental for the study of differential equations with multiple variables (Sheldon Axler, 2015) . Although linear ODEs have a comparatively simple form, they are effective in solving certain physical and geometrical problems (Zoltan Vizvari, 2020) . Various studies exist combining matrices with linear ODEs, yet most are purely theoretical and very few of them aim to discover the application of such a computational method in simplifying calculations in other related subjects.

In this article, we will explore the applications of the matrix method in linear ordinary differential equations (linear ODEs) in Physics and other branches of Mathematics. We will begin by introducing fundamental knowledge in Linear Algebra and proving the existence and uniqueness of solutions of ODEs. Then, we will concentrate on finding the solutions of ODEs and introducing the matrix method for solving linear ODEs. Eventually, we will apply the conclusions we’ve gathered from the previous parts to solving problems concerning Physics and differential curves. Our ultimate aim is to be able to solve any given linear ordinary differential equation using constant matrices as parameters, and to apply such a computational method to solving the evolution function of the movement of a free electron in a constant three-dimensional electric field and to proving the Frenet formula for calculating the torsion of a 3-D curve (Fage M.K., 1974) .

Let us begin with some basic examples of how ODEs can be used to solve physical problems.

Example 1 (Radioactive chains of decay)

The differential equation for the number N of radioactive nuclei is

$\frac{\text{d}N}{\text{d}t}=-kN$ (1.1)

where k is the decay constant:

1) Give the evolution function $N\left(t\right)$ with initial condition $N\left(0\right)={N}_{0}$.

2) We now look at a chain, where the original nucleus decays into another radioactive nucleus. Let α, β, γ be three types of nuclei. α transfers to β with decay constant ${k}_{1}$, whereas β transfers to γ with decay constant ${k}_{2}$. Assume that there are ${N}_{0}$ α in the beginning; find the evolution functions of the numbers of nuclei α, β and γ.

3) If ${k}_{2}>{k}_{1}$, show that after a long term evolution, the ratio of the number of β to that of α tends to $\frac{{k}_{1}}{{k}_{2}-{k}_{1}}$.
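As a quick check of parts 1)-3), the closed-form solutions of the chain can be evaluated numerically. The decay constants and initial number below are illustrative values, not taken from the text:

```python
import math

# Decay chain alpha -> beta -> gamma.  k1, k2 and N0 are illustrative values.
k1, k2, N0 = 0.5, 2.0, 1000.0

def N_alpha(t):
    # Solution of N' = -k1 * N with N(0) = N0.
    return N0 * math.exp(-k1 * t)

def N_beta(t):
    # Solution of N_beta' = k1 * N_alpha - k2 * N_beta with N_beta(0) = 0.
    return N0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

# For large t the ratio N_beta / N_alpha approaches k1 / (k2 - k1).
ratio_limit = k1 / (k2 - k1)
print(N_beta(50.0) / N_alpha(50.0), ratio_limit)
```

For these values the ratio at t = 50 already agrees with the limit to many digits, since the correction decays like $e^{-(k_2-k_1)t}$.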

Example 2 (Motion in the liquid)

In this question, we consider a ball sinking in a liquid under the influence of gravity. Three forces act vertically on this body: the gravity, the buoyancy and the viscous force ${F}_{visc}=6\pi \eta rv$ (Stokes' drag, abbreviated ${F}_{visc}=\alpha v$).

1) Write the differential equation for this motion and solve it. One may assume that the gravity is greater than the buoyancy.

2) Find the terminal velocity.
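A minimal sketch of the solution, writing the constant net force as $(m-m_l)g$ where $m_l$ is the mass of displaced liquid; all numeric values are illustrative, not from the text:

```python
import math

# Sinking ball: m v' = (m - m_l) * g - alpha * v, where m_l accounts for
# buoyancy and alpha is the viscous coefficient.  Illustrative values only.
m, m_l, alpha, g = 2.0, 0.5, 4.0, 9.8

v_inf = (m - m_l) * g / alpha        # terminal velocity: the net force vanishes

def v(t):
    # Solution with v(0) = 0: exponential relaxation toward v_inf.
    return v_inf * (1.0 - math.exp(-alpha * t / m))

print(v(0.0), v(1.0), v_inf)
```

The velocity rises monotonically from 0 and saturates at the terminal velocity on the time scale $m/\alpha$.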

The first two examples can be solved mainly through separation of variables and the resolving kernel for linear ODEs. The third example is, however, difficult to solve because the cross product with the magnetic field B turns the coefficient into a three-dimensional matrix instead of a constant. To find the answer for a problem like this, we need to introduce the matrix method for solving linear ODEs.

Example 3 (Motion of a charged particle in an electromagnetic field)

A charged particle with mass m and charge q moves in an electromagnetic field with a constant magnetic field B and a changing electric field $E\left(t\right)$. We make the following assumption: the rate of change of $E\left(t\right)$ is small enough that the magnetic field it generates is negligible with respect to B. The evolution equation is

$m\frac{\text{d}v}{\text{d}t}=qv\times B+qE\left(t\right)$ (1.2)

where m signifies the mass of the particle, v the velocity, q the charge, B the magnetic field and $E\left(t\right)$ the electric field depending on time, with the initial conditions $v\left(0\right)={v}_{0},x\left(0\right)=0$.

1) What is the resolving kernel of this linear differential equation?

2) What is the evolution function of the velocity, i.e., find $v\left(t\right)$.

3) Find the trajectory of this particle, i.e., find $x\left(t\right)$.

From the above examples, especially the last one, we see that even if the differential equation is relatively easy, we do not know in general how to solve it if we cannot integrate the equation directly. Our goal in this article is to be able to solve differential equations of the type of Example 3.
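To preview the matrix method, here is a sketch for the special case E = 0 of Example 3: for a constant B, the cross product $v\times B$ equals $Mv$ for a fixed antisymmetric matrix M, so the velocity evolves by a matrix exponential. Field and initial values below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

# m v' = q v x B  becomes  v' = M v  with antisymmetric M, so v(t) = expm(M t) v0.
q, m = 1.0, 1.0
B = np.array([0.0, 0.0, 2.0])
M = (q / m) * np.array([[0.0,   B[2], -B[1]],
                        [-B[2], 0.0,   B[0]],
                        [B[1], -B[0],  0.0]])    # (q/m) * (v x B) = M v
v0 = np.array([1.0, 0.0, 0.5])
v = expm(M * 0.3) @ v0
print(v, np.linalg.norm(v))   # the speed is conserved since M is antisymmetric
```

Because M is antisymmetric, expm(M t) is a rotation: the speed and the component of v along B stay constant, which is the familiar cyclotron motion.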

2. Linear Algebra

2.1. Vector Spaces and Linear Maps

Definition 1 (Vector spaces) A set V containing two operations that satisfy the eight axioms listed below is considered a vector space (math world) :

Vector Addition: $+:V\times V\to V$

Scalar Multiplication: $\cdot :\text{R}\times V\to V$

To be a vector space, all elements X, Y, Z in V and any scalars r, s in R must satisfy the following axioms:

1) $X+Y=Y+X$.

2) $\left(X+Y\right)+Z=X+\left(Y+Z\right)$.

3) For all X, $0+X=X+0=X$.

4) For any X, there exists a $-X$ such that $X+\left(-X\right)=0$.

5) $r\left(sX\right)=\left(rs\right)X$.

6) $\left(r+s\right)X=rX+sX$.

7) $r\left(X+Y\right)=rX+rY$.

8) $1X=X$.

Definition 2 (Basis) We consider vectors ${v}_{1},\cdots ,{v}_{n}\in V$ a basis of $V$ if:

1) ${v}_{1},\cdots ,{v}_{n}$ generate $V$.

2) ${v}_{1},\cdots ,{v}_{n}$ are linearly independent.

Definition 3 A map $\varphi :V\to W$ of vector spaces is a linear map, if $\forall v,w\in V$, $\alpha ,\beta \in \text{R}$, $\varphi \left(\alpha v+\beta w\right)=\alpha \varphi \left(v\right)+\beta \varphi \left(w\right)$.

2.2. Relations of Matrices and Linear Maps

Having given the definition of linear maps and matrices, we can show that the information in a linear map can be completely depicted through a matrix given a chosen basis.

What we would like to find out is $\varphi \left(x\right)$, given $x\in V$. First, we find a basis ${v}_{1},\cdots ,{v}_{n}$, and write X under this basis:

$x={x}_{1}{v}_{1}+\cdots +{x}_{n}{v}_{n}$ (2.1)

Then, since $\varphi \left(x\right)={x}_{1}\varphi \left({v}_{1}\right)+\cdots +{x}_{n}\varphi \left({v}_{n}\right)$, to find $\varphi \left(x\right)$, all we need to know is $\varphi \left({v}_{i}\right)$ for every i. When $\varphi \left({v}_{i}\right)$ is given for every i, every coefficient ${a}_{ij}$ is found, so that we can depict $\varphi \left({v}_{i}\right)$ for every i through an $n\times n$ matrix:

$\varphi \left({v}_{i}\right)={a}_{i1}{v}_{1}+\cdots +{a}_{in}{v}_{n}$ (2.2)

Therefore, $\varphi \left(x\right)$ can be expressed as:

$\varphi \left(x\right)=\left[\begin{array}{ccc}{x}_{1}& \cdots & {x}_{n}\end{array}\right]\left[\begin{array}{ccc}{a}_{11}& \cdots & {a}_{1n}\\ ⋮& \ddots & ⋮\\ {a}_{n1}& \cdots & {a}_{nn}\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ ⋮\\ {v}_{n}\end{array}\right]$ (2.3)

which tells us that given a basis, a linear map can be completely depicted through a matrix.

Additionally, linear map and matrix satisfy the following relations:

1) Linear map $\varphi$ can be represented by matrix ${A}_{\varphi }$

2) Linear map $\varphi +\psi$ can be represented by matrix ${A}_{\varphi }+{A}_{\psi }$

3) Linear map $\lambda \varphi$ can be represented by matrix $\lambda {A}_{\varphi }$

4) Linear map $\varphi \circ \psi$ can be represented by matrix ${A}_{\varphi }{A}_{\psi }$

2.3. Exponential Map of Matrices

In this subsection, we are going to give the definition of the exponential map of matrices, which is a generalization of exponential of real numbers. We will also present some basic properties of exponential maps of matrices.

Firstly, we know that for a complex number x, ${\text{e}}^{x}$ can be expressed using the sum of an infinite Taylor series:

${\text{e}}^{x}=\underset{n=0}{\overset{\infty }{\sum }}\frac{{x}^{n}}{n!}$ (2.4)

Then, we present some basic properties of the exponential map ${\text{e}}^{x}$ :

1) ${\text{e}}^{i\theta }=\mathrm{cos}\left(\theta \right)+i\mathrm{sin}\left(\theta \right)$

2) ${\text{e}}^{x+iy}={\text{e}}^{x}\left(\mathrm{cos}\left(y\right)+i\mathrm{sin}\left(y\right)\right)$

3) ${\text{e}}^{f\left(x\right)}=\underset{n=0}{\overset{\infty }{\sum }}\frac{f{\left(x\right)}^{n}}{n!}$

4) ${\text{e}}^{x+y}={\text{e}}^{x}{\text{e}}^{y}$

5) $\frac{\text{d}}{\text{d}t}{\text{e}}^{at}=a{\text{e}}^{at}$

Having got these properties, we now substitute “x” in ${\text{e}}^{x}$ with a matrix A, so that ${\text{e}}^{A}=\mathrm{exp}\left(A\right)=\underset{n=0}{\overset{\infty }{\sum }}\frac{{A}^{n}}{n!}$. After the substitution, the exponential map for matrices still satisfies most of the properties mentioned above, yet specifically, we need to notice here that ${\text{e}}^{A+B}\ne {\text{e}}^{A}{\text{e}}^{B}$ for most matrices A and B, since the multiplication of matrices does not fit the commutative law. Our aim now is to be able to calculate ${\text{e}}^{A}$ using a relatively easy approach.

Given a matrix, its exponential is generally hard, even impossible to calculate just by following its definition. A procedure of calculating the exponential map will be given after the introduction of Jordan normal forms (JNF).
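Nevertheless, for well-conditioned matrices the defining series can simply be summed in code. The sketch below compares the partial sums for the matrix used later in Section 2.4 against the value $e^2\begin{bmatrix}0&1\\-1&2\end{bmatrix}$ obtained there by the Jordan-form computation:

```python
import numpy as np

# Partial sums of the defining series e^A = sum_n A^n / n!.
def exp_series(A, terms=30):
    result = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])
    for n in range(terms):
        result += term
        term = term @ A / (n + 1)   # next term: A^(n+1) / (n+1)!
    return result

A = np.array([[1.0, 1.0], [-1.0, 3.0]])                      # matrix of Section 2.4
expected = np.exp(2) * np.array([[0.0, 1.0], [-1.0, 2.0]])   # e^2 [[0,1],[-1,2]]
print(exp_series(A))
print(expected)
```

Thirty terms are far more than enough here; for larger or badly scaled matrices the direct series is numerically fragile, which is one motivation for the Jordan-form procedure below.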

2.4. Jordan Normal Forms

We are going to admit the following theorem of the existence of Jordan normal forms.

Theorem 1 For any n × n matrix A, there exists a matrix

${J}_{A}=\left(\begin{array}{ccc}{J}_{{\lambda }_{1},{e}_{1}}& & 0\\ & \ddots & \\ 0& & {J}_{{\lambda }_{k},{e}_{k}}\end{array}\right)$ (2.5)

such that A is similar to ${J}_{A}$. Here

${J}_{\lambda ,e}=\left(\begin{array}{cccc}\lambda & 1& & 0\\ & \ddots & \ddots & \\ & & \ddots & 1\\ 0& & & \lambda \end{array}\right)$ (2.6)

and ${\lambda }_{1},\cdots ,{\lambda }_{k}\in \text{C}$ are not necessarily distinct.

For a proof, see for example .

In this subsection, we will present how to find the Jordan normal form and the transition matrix for a given matrix. The finding of JNF and transition matrix follows the following basic procedure:

1) Calculate the characteristic polynomial of the given matrix.

2) Let the characteristic polynomial equal to zero, and calculate the eigenvalues,

3) Put the eigenvalues on the main diagonal, and decide the numbers of “1s” on the up-right corner of every small Jordan matrix through comparing the number of set of eigenvectors that the potential solutions have.

4) Having found JNF, use $AQ=Q{J}_{A}$ to calculate the transition matrix, then find its inverse.

To clarify the process, we should now analyze an example: to calculate the Jordan normal form ${J}_{A}$ and transition matrix Q of $A=\left[\begin{array}{cc}1& 1\\ -1& 3\end{array}\right]$ : Firstly, calculate the characteristic polynomial, and let it equal zero:

$\left(1-\lambda \right)\left(3-\lambda \right)+1={\left(\lambda -2\right)}^{2}=0$. (2.7)

Then solve the eigenvalues, we have ${\lambda }_{1}={\lambda }_{2}=2$. Therefore, the Jordan normal form of A is either $\left[\begin{array}{cc}2& 1\\ 0& 2\end{array}\right]$ or $\left[\begin{array}{cc}2& 0\\ 0& 2\end{array}\right]$

Then, since Av = 2v has only one solution up to scale, matrix A has only one eigenvector. The matrix $\left[\begin{array}{cc}2& 0\\ 0& 2\end{array}\right]$, however, has two linearly independent eigenvectors. Therefore, only $\left[\begin{array}{cc}2& 1\\ 0& 2\end{array}\right]$ can be the JNF of A.

Having got the JNF, we use $AQ=Q{J}_{A}$ to calculate the transition matrix. Such an equation can be solved using the method of undetermined coefficients. Let Q be $\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]$, and we can easily get a = 1; b = 0; c = 1; d = 1. Therefore, the transition matrix is $Q=\left[\begin{array}{cc}1& 0\\ 1& 1\end{array}\right]$.
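The hand computation can be cross-checked with sympy's `jordan_form`, which returns a transition matrix P and Jordan form J with $A = PJP^{-1}$ (P need not coincide with the Q found above, since transition matrices are not unique):

```python
from sympy import Matrix

# Cross-check of the hand computation: sympy returns (P, J) with A = P J P^-1.
A = Matrix([[1, 1], [-1, 3]])
P, J = A.jordan_form()
print(J)                      # the Jordan normal form of A
print(P * J * P.inv() == A)   # P plays the role of the transition matrix
```

Since the computation is exact (rational arithmetic), the similarity $A = PJP^{-1}$ holds identically, not merely to rounding error.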

2.5. Calculation of the Exponential Map of Matrices

Besides the properties of exponential maps mentioned above, we would like to further prove that ${\text{e}}^{QP{Q}^{-1}}=Q{\text{e}}^{P}{Q}^{-1}$ ; where P is a matrix, Q its transition matrix, and ${Q}^{-1}$ the inverse of Q (Chantladze T., 2000) .

Proof: by definition, ${\text{e}}^{QP{Q}^{-1}}=\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(QP{Q}^{-1}\right)}^{n}}{n!}$.

Since ${\left(QP{Q}^{-1}\right)}^{n}=QP{Q}^{-1}QP{Q}^{-1}Q\cdots QP{Q}^{-1}=Q{P}^{n}{Q}^{-1}$,

$\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(QP{Q}^{-1}\right)}^{n}}{n!}=\underset{n=0}{\overset{\infty }{\sum }}Q\frac{{P}^{n}}{n!}{Q}^{-1}=Q\left(\underset{n=0}{\overset{\infty }{\sum }}\frac{{P}^{n}}{n!}\right){Q}^{-1}=Q{\text{e}}^{P}{Q}^{-1}$ (2.8)

Therefore, ${\text{e}}^{QP{Q}^{-1}}=Q{\text{e}}^{P}{Q}^{-1}$.

Having got such a property, we could then calculate the given example: ${\text{e}}^{A}$, where

$A=\left[\begin{array}{cc}1& 1\\ -1& 3\end{array}\right]$ (2.9)

To simplify our calculation, we have to first find the Jordan normal form of A as well as the transition matrix and its inverse matrix. As has been calculated in 2.4,

${J}_{A}=\left[\begin{array}{cc}2& 1\\ 0& 2\end{array}\right]$ (2.10)

$Q=\left[\begin{array}{cc}1& 0\\ 1& 1\end{array}\right]$ (2.11)

${Q}^{-1}=\left[\begin{array}{cc}1& 0\\ -1& 1\end{array}\right]$ (2.12)

According to the definition of the exponential map, and since the diagonal and nilpotent parts commute,

${\text{e}}^{{J}_{A}}={\text{e}}^{\left[\begin{array}{cc}2& 0\\ 0& 2\end{array}\right]+\left[\begin{array}{cc}0& 1\\ 0& 0\end{array}\right]}={\text{e}}^{\left[\begin{array}{cc}2& 0\\ 0& 2\end{array}\right]}{\text{e}}^{\left[\begin{array}{cc}0& 1\\ 0& 0\end{array}\right]}={\text{e}}^{2}\left[\left[\begin{array}{cc}1& 0\\ 0& 1\end{array}\right]+\left[\begin{array}{cc}0& 1\\ 0& 0\end{array}\right]+0+0+\cdots \right]={\text{e}}^{2}\left[\begin{array}{cc}1& 1\\ 0& 1\end{array}\right]$ (2.13)

Therefore, ${\text{e}}^{A}=Q{\text{e}}^{{J}_{A}}{Q}^{-1}$ can be represented as:

${\text{e}}^{A}={\text{e}}^{2}\left[\begin{array}{cc}1& 0\\ 1& 1\end{array}\right]\left[\begin{array}{cc}1& 1\\ 0& 1\end{array}\right]\left[\begin{array}{cc}1& 0\\ -1& 1\end{array}\right]={\text{e}}^{2}\left[\begin{array}{cc}0& 1\\ -1& 2\end{array}\right]=\left[\begin{array}{cc}0& {\text{e}}^{2}\\ -{\text{e}}^{2}& 2{\text{e}}^{2}\end{array}\right]$ (2.14)

Basically, the calculation of ${\text{e}}^{A}$ follows the following procedure:

Step 1: Calculate the JNF of A and find the transition matrix Q.

Step 2: Calculate ${\text{e}}^{{J}_{A}}$.

Step 3: ${\text{e}}^{A}=Q{\text{e}}^{{J}_{A}}{Q}^{-1}$.

The example presented here is relatively easy, but it presents clearly the procedure of calculating the exponential maps of matrices. In the following section, we will utilize the exponential map of matrices to develop the matrix method for solving linear ordinary differential equations.

3. Ordinary Differential Equations

In this section, we are going to prove the central result of ODE theory, the Picard-Lindelöf theorem, on the existence and uniqueness of solutions of initial value problems:

$\left\{\begin{array}{l}{x}^{\prime }\left(t\right)=F\left(t,x\left(t\right)\right)\\ x\left(a\right)={x}_{0}\end{array}$ (3.1)

where F is a Lipschitz function with respect to vector x. For the proof reference, see . This theorem is quite useful in theory. For example, in Section 6, we are going to present a direct application of this theorem to differential geometry.

According to the definition, $F\left(t,x\left(t\right)\right)$ is continuous in a bounded region $U\in {\text{R}}^{1+n}$, and satisfies in U the Lipschitz condition in $x$: $|F\left(t,{x}_{1}\right)-F\left(t,{x}_{2}\right)|\le k|{x}_{1}-{x}_{2}|$, where k is independent of $t,{x}_{1},{x}_{2}$. Since F is continuous in U, we have $|F\left(t,x\left(t\right)\right)|\le M$ for some $M\in \text{R}$, $\forall \left(t,x\left(t\right)\right)\in U$. Then let R be defined as a rectangle $\left\{\left(t,x\left(t\right)\right)\in {\text{R}}^{1+n}:|t-a|\le h,|x\left(t\right)-{x}_{0}|\le j\right\}$ such that $Mh\le j$. What we would like to prove now is that $\forall t$ such that $|t-a|\le h$, the 1st order differential equation ${x}^{\prime }\left(t\right)=F\left(t,x\left(t\right)\right)$ has one unique solution $x\left(t\right)$ for which $x\left(a\right)={x}_{0}$.

First, we need to define the following functions:

$\begin{array}{l}{x}_{0}\left(t\right)={x}_{0}\\ {x}_{1}\left(t\right)={x}_{0}+{\int }_{a}^{t}F\left(\tau ,{x}_{0}\left(\tau \right)\right)\text{d}\tau \\ {x}_{2}\left(t\right)={x}_{0}+{\int }_{a}^{t}F\left(\tau ,{x}_{1}\left(\tau \right)\right)\text{d}\tau \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}⋮\\ {x}_{n}\left(t\right)={x}_{0}+{\int }_{a}^{t}F\left(\tau ,{x}_{n-1}\left(\tau \right)\right)\text{d}\tau \end{array}$ (3.2)

and we want to prove that $x\left(t\right)=\underset{n\to \infty }{\mathrm{lim}}{x}_{n}\left(t\right)$ is the solution we are looking for.

To begin with, we need to show that every ${x}_{n}\left(t\right)$ is bounded by the rectangle R, where $|t-a|\le h$. Assume that ${x}_{n-1}\left(t\right)$ is bounded by R, we have:

$|{x}_{n}\left(t\right)-{x}_{0}|=|{\int }_{a}^{t}F\left(\tau ,{x}_{n-1}\left(\tau \right)\right)\text{d}\tau |\le M|t-a|\le Mh\le j$ (3.3)

which tells us that ${x}_{n}\left(t\right)$ is bounded by R. Therefore, since ${x}_{0}\left(t\right)={x}_{0}$ is clearly bounded by R, and such a conclusion passes from each n − 1 to n, we can conclude that ${x}_{n}\left(t\right)$ is bounded by R for every n through induction.

Next, we will prove

$|{x}_{n}\left(t\right)-{x}_{n-1}\left(t\right)|\le \frac{M{k}^{n-1}{|t-a|}^{n}}{n!}$ (3.4)

Suppose that this inequality holds for n; we consider $|{x}_{n+1}\left(t\right)-{x}_{n}\left(t\right)|$. According to the Lipschitz condition, we also get $|F\left(\tau ,{x}_{n}\left(\tau \right)\right)-F\left(\tau ,{x}_{n-1}\left(\tau \right)\right)|\le k|{x}_{n}\left(\tau \right)-{x}_{n-1}\left(\tau \right)|$. Therefore, we have

$|{x}_{n+1}\left(t\right)-{x}_{n}\left(t\right)|\le {\int }_{a}^{t}|F\left(\tau ,{x}_{n}\left(\tau \right)\right)-F\left(\tau ,{x}_{n-1}\left(\tau \right)\right)|\text{d}\tau \le k{\int }_{a}^{t}|{x}_{n}\left(\tau \right)-{x}_{n-1}\left(\tau \right)|\text{d}\tau$ (3.5)

which tells us that

$|{x}_{n+1}\left(t\right)-{x}_{n}\left(t\right)|\le k{\int }_{a}^{t}\frac{M{k}^{n-1}{|\tau -a|}^{n}}{n!}\text{d}\tau =\frac{M{k}^{n}{|t-a|}^{n+1}}{\left(n+1\right)!}$ (3.6)

When n = 1, such a conclusion holds true:

$|{x}_{1}\left(t\right)-{x}_{0}\left(t\right)|=|{\int }_{a}^{t}F\left(\tau ,{x}_{0}\left(\tau \right)\right)\text{d}\tau |\le M|t-a|$ (3.7)

so according to induction, the inequality

$|{x}_{n}\left(t\right)-{x}_{n-1}\left(t\right)|\le \frac{M{k}^{n-1}{|t-a|}^{n}}{n!}$ (3.8)

is true.

Then, we will prove that for $|t-a|\le h$, the sequence ${x}_{n}\left(t\right)$ converges. Based on (3.8), we have

$\underset{n=1}{\overset{\infty }{\sum }}|{x}_{n}\left(t\right)-{x}_{n-1}\left(t\right)|\le \underset{n=1}{\overset{\infty }{\sum }}\frac{M{k}^{n-1}{h}^{n}}{n!}=\frac{M}{k}\left({\text{e}}^{kh}-1\right)$

Since the series $\underset{n=1}{\overset{\infty }{\sum }}\left({x}_{n}\left(t\right)-{x}_{n-1}\left(t\right)\right)$ converges absolutely and uniformly for $|t-a|\le h$, we know that

${x}_{n}\left(t\right)={x}_{0}+\underset{m=1}{\overset{n}{\sum }}\left({x}_{m}\left(t\right)-{x}_{m-1}\left(t\right)\right)$ converges uniformly for $|t-a|\le h$. Therefore, the limit $x\left(t\right)$ is continuous, as a uniform limit of continuous functions.

And now we can prove the existence of the solution by showing $x\left(t\right)$ satisfying (3.1). Because ${x}_{n}\left(t\right)$ converges uniformly on $|t-a|\le h$ and F is Lipschitz, we know that $F\left(t,{x}_{n}\left(t\right)\right)$ tends uniformly to $F\left(t,x\left(t\right)\right)$. Let n approach infinity for

${x}_{n+1}\left(t\right)={x}_{0}+{\int }_{a}^{t}F\left(\tau ,{x}_{n}\left(\tau \right)\right)\text{d}\tau$ (3.9)

we can get

$x\left(t\right)={x}_{0}+{\int }_{a}^{t}F\left(\tau ,x\left(\tau \right)\right)\text{d}\tau$ (3.10)

Since $F\left(t,x\left(t\right)\right)$ is continuous, $x\left(t\right)$ has $F\left(t,x\left(t\right)\right)$ as its derivative and $x\left(a\right)={x}_{0}$.

Having proved the existence of the solution $x\left(t\right)$, we will further prove its uniqueness given $x\left(a\right)={x}_{0}$. Let us assume that there exists another different solution $y\left(t\right)$ with $y\left(a\right)={x}_{0}$. Subtracting the integral equations for the two solutions and applying the Lipschitz condition, we have

$|x\left(t\right)-y\left(t\right)|\le k{\int }_{a}^{t}|x\left(\tau \right)-y\left(\tau \right)|\text{d}\tau$ (3.11)

However, since both solutions are bounded by R,

$|x\left(t\right)-y\left(t\right)|\le 2j$ (3.12)

hence $|x\left(t\right)-y\left(t\right)|\le 2jk|t-a|$. We can successively obtain the upper bound of $|x\left(t\right)-y\left(t\right)|$ on $|t-a|\le h$ by repeating such argument, which eventually gives us

$|x\left(t\right)-y\left(t\right)|\le \frac{2j{k}^{n}{|t-a|}^{n}}{n!}$ (3.13)

which approaches zero. Therefore, $x\left(t\right)=y\left(t\right)$ on interval $|t-a|\le h$, which contradicts the assumption that $y\left(t\right)$ is different from $x\left(t\right)$, and this proves the uniqueness of solution $x\left(t\right)$.
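The Picard iterates defined in (3.2) are not only a proof device; they can be computed numerically. A sketch for $x'=x$, $x(0)=1$ on $[0,1]$, whose solution is $e^t$ (the trapezoidal quadrature is an implementation choice, not part of the theorem):

```python
import math

# Picard iteration x_{n+1}(t) = x0 + integral_a^t F(tau, x_n(tau)) dtau,
# with the integral evaluated by the trapezoidal rule on a grid.
def picard(F, x0, t_grid, iterations):
    xs = [x0] * len(t_grid)
    for _ in range(iterations):
        new = [x0]
        acc = 0.0
        for i in range(1, len(t_grid)):
            dt = t_grid[i] - t_grid[i - 1]
            acc += 0.5 * dt * (F(t_grid[i - 1], xs[i - 1]) + F(t_grid[i], xs[i]))
            new.append(x0 + acc)
        xs = new
    return xs

ts = [i / 100 for i in range(101)]            # grid on [0, 1]
approx = picard(lambda t, x: x, 1.0, ts, 20)
print(approx[-1], math.exp(1.0))              # both close to 2.71828...
```

After twenty iterations the iteration error is negligible and only the $O(h^2)$ quadrature error remains, so the final value agrees with $e$ to about four decimal places.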

4. Matrix Method for Linear ODE

The preceding section concerning the existence and uniqueness of solutions of ODEs with a given boundary condition is quite useful theoretically. It has the disadvantage, though, of not giving an explicit expression of the solution, which is demanded in many physical problems. In this section, we are going to focus on a special kind of ODE: the linear ODE, and give an explicit expression of solutions using the “resolving kernel” (Halas Zdenek, 2005) .

4.1. Linear ODEs

An nth degree ordinary differential equation is called an nth degree linear ordinary differential equation if the variable function x and all of its derivatives up to nth order are of first degree. Its normal form can be written as

${x}^{\left(n\right)}+{a}_{1}\left(t\right){x}^{\left(n-1\right)}+\cdots +{a}_{n}\left(t\right)x=f\left(t\right)$ (4.1)

where ${a}_{i}\left(t\right)$ and $f\left(t\right)$ are continuous functions on their domains. The linear ordinary differential equation is called homogeneous if $f\left(t\right)=0$, and a homogeneous linear ODE has the following 2 properties:

1) If ${x}_{1}$ is a solution for a homogeneous linear ODE, then $c{x}_{1}$ (c is a constant) is also a solution.

2) If ${x}_{1}$ is a solution for such a homogeneous differential equation, and ${x}_{2}$ is another solution, then ${x}_{1}+{x}_{2}$ is also a solution.

The linear ODE can also be written in the following form:

$\frac{\text{d}}{\text{d}t}X\left(t\right)=A\left(t\right)X\left(t\right)+g\left(t\right)$ (4.2)

$X\left(a\right)={X}_{0}$ (4.3)

where $A\left(t\right)$ is a continuous matrix, $g\left(t\right)$ a continuous vector function of t, and ${X}_{0}$ an initial value. Such an ODE, as has been proven in section three, has a unique solution. The approach to find its solution will be thoroughly discussed in sections 4.2 and 4.3.

4.2. Resolving Kernel

Definition 4 (Resolving kernel) Let $R\left(t,s\right)$ be the unique solution of the following linear ODE:

$\frac{\text{d}}{\text{d}t}R\left(t,s\right)=A\left(t\right)R\left(t,s\right)$ (4.4)

$R\left(s,s\right)=I$ (4.5)

$R\left(t,s\right)$ is called the Resolving Kernel of the linear ODE:

$\frac{\text{d}}{\text{d}t}X\left(t\right)=A\left(t\right)X\left(t\right)$ (4.6)

Property 1: $R\left(t,u\right)R\left(u,s\right)=R\left(t,s\right)$ for all t, u, s.

Proof: Given u, s, by the definition of $R$, $R\left(t,u\right)R\left(u,s\right)$, viewed as a function of t, is the unique solution of:

$\frac{\text{d}}{\text{d}t}Y\left(t\right)=A\left(t\right)Y\left(t\right),\text{\hspace{0.17em}}Y\left(u\right)=R\left(u,s\right)$ (4.7)

By the definition of $R$, $R\left(t,s\right)$ also satisfies:

$\frac{\text{d}}{\text{d}t}R\left(t,s\right)=A\left(t\right)R\left(t,s\right),\text{\hspace{0.17em}}R\left(u,s\right)=R\left(u,s\right)$ (4.8)

Therefore, based on the uniqueness of the solution of the differential equations above, $R\left(t,u\right)R\left(u,s\right)=R\left(t,s\right)$.

Property 2: $R\left(t,s\right)$ is reversible.

Proof: We apply Property 1 with u = s and evaluate at t:

$R\left(t,s\right)R\left(s,t\right)=R\left(t,t\right)=I$ (4.9)

Therefore, $R\left(t,s\right)$ is reversible. The inverse is $R\left(s,t\right)$.

Property 3: $\mathrm{det}R\left(t,s\right)=\mathrm{exp}\left({\int }_{s}^{t}\mathrm{Tr}A\left(\tau \right)\text{d}\tau \right)$, where Tr() means matrix trace and Det() means matrix determinant.

Proof: Let ${e}_{1},\cdots ,{e}_{n}$ be a basis of ${\text{R}}^{n}$, then let $f\left(t\right)=R\left(t\right){e}_{1}\wedge \cdots \wedge R\left(t\right){e}_{n}=\mathrm{det}\left(R\left(t\right)\right){e}_{1}\wedge \cdots \wedge {e}_{n}$, where $R\left(t\right)=R\left(t,s\right)$ :

$\frac{\text{d}}{\text{d}t}f\left(t\right)=A\left(t\right)R\left(t\right){e}_{1}\wedge \cdots \wedge R\left(t\right){e}_{n}+\cdots +R\left(t\right){e}_{1}\wedge \cdots \wedge A\left(t\right)R\left(t\right){e}_{n}$ (4.10)

Since inserting $A\left(t\right)$ into each slot in turn and summing yields the trace:

$\begin{array}{l}A\left(t\right)R\left(t\right){e}_{1}\wedge \cdots \wedge R\left(t\right){e}_{n}+\cdots +R\left(t\right){e}_{1}\wedge \cdots \wedge A\left(t\right)R\left(t\right){e}_{n}\\ =\mathrm{Tr}A\left(t\right)\text{\hspace{0.17em}}R\left(t\right){e}_{1}\wedge \cdots \wedge R\left(t\right){e}_{n}=\left[\mathrm{Tr}A\left(t\right)\right]\left[\mathrm{det}\left(R\left(t\right)\right)\right]{e}_{1}\wedge \cdots \wedge {e}_{n}\end{array}$ (4.11)

Therefore,

$\frac{\text{d}}{\text{d}t}\mathrm{det}R\left(t,s\right)=\mathrm{Tr}A\left(t\right)\mathrm{det}R\left(t,s\right)$ (4.12)

Since $\mathrm{det}R\left(s,s\right)=\mathrm{det}I=1$,

$\mathrm{det}R\left(t,s\right)=\mathrm{exp}\left({\int }_{s}^{t}\mathrm{Tr}A\left(\tau \right)\text{d}\tau \right)$ (4.13)
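Property 3 (Liouville's formula) is easy to check numerically for a constant coefficient matrix, where the resolving kernel is a matrix exponential (this anticipates Section 4.4; the matrix and time are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Liouville's formula for constant A, where R(t, 0) = e^{A t}:
# det R(t, 0) = exp(Tr(A) * t).
A = np.array([[1.0, 1.0], [-1.0, 3.0]])
t = 0.7
R = expm(A * t)
print(np.linalg.det(R), np.exp(np.trace(A) * t))   # the two values agree
```

Here Tr(A) = 4, so the determinant of the kernel grows like $e^{4t}$ regardless of the off-diagonal entries.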

4.3. Solving Linear ODEs

After showing the definition of Resolving Kernel and several of its properties, we are now going to apply it into solving certain linear ODEs:

How to solve:

$\left\{\begin{array}{l}\frac{\text{d}}{\text{d}t}X\left(t\right)=A\left(t\right)X\left(t\right)+g\left(t\right)\\ X\left(a\right)={X}_{0}\end{array}$ (4.14)

Let $R\left(t,s\right)$ be the resolving kernel of $\frac{\text{d}}{\text{d}t}X=A\left(t\right)X$. X can be written as $X\left(t\right)=R\left(t,a\right)Y\left(t\right)$ (variation of constants), and the equation can be written as $\frac{\text{d}}{\text{d}t}\left(R\left(t,a\right)Y\left(t\right)\right)=A\left(t\right)R\left(t,a\right)Y\left(t\right)+g\left(t\right)$, which, after cancelling the terms coming from (4.4), equals to:

$R\left(t,a\right)\frac{\text{d}}{\text{d}t}Y\left(t\right)=g\left(t\right)$ (4.15)

Since $R\left(t,a\right)$ is reversible with inverse $R\left(a,t\right)$ (Property 2), integrating gives

$Y\left(t\right)={X}_{0}+{\int }_{a}^{t}R\left(a,\tau \right)g\left(\tau \right)\text{d}\tau$ (4.16)

And since $R\left(t,a\right)R\left(a,\tau \right)=R\left(t,\tau \right)$ (Property 1), the solution can be expressed as

$X\left(t\right)=R\left(t,a\right){X}_{0}+{\int }_{a}^{t}R\left(t,\tau \right)g\left(\tau \right)\text{d}\tau$ (4.17)
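For constant A and constant g (with a = 0 and A invertible), the integral in (4.17) can be carried out in closed form, which gives a sketch that is easy to verify by differentiating numerically; all numeric values are illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Variation of constants for constant A and constant g (a = 0, A invertible):
# X(t) = e^{At} X0 + A^{-1} (e^{At} - I) g.
# Verified by checking X'(t) = A X(t) + g with a central difference.
A = np.array([[1.0, 1.0], [-1.0, 3.0]])
g = np.array([1.0, -2.0])
X0 = np.array([0.5, 1.0])

def X(t):
    E = expm(A * t)
    return E @ X0 + np.linalg.solve(A, (E - np.eye(2)) @ g)

t, h = 0.4, 1e-6
lhs = (X(t + h) - X(t - h)) / (2 * h)   # numerical derivative X'(t)
rhs = A @ X(t) + g
print(np.max(np.abs(lhs - rhs)))        # close to zero
```

The closed form follows from (4.17) because $\int_0^t e^{A(t-\tau)}\,\text{d}\tau = A^{-1}(e^{At}-I)$ when A is invertible.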

4.4. Linear ODEs with Constant Coefficients

Given $\frac{\text{d}}{\text{d}t}X\left(t\right)=AX\left(t\right)$, where matrix A is constant, we will prove the following theorem:

Theorem 2 Let $R\left(t,s\right)$ be the resolving kernel of the ODE:

$\frac{\text{d}}{\text{d}t}X\left(t\right)=AX\left(t\right)$ (4.18)

$X\left(a\right)={X}_{0}$ (4.19)

$R\left(t,s\right)$ can then be expressed as $R\left(t,s\right)={\text{e}}^{A\left(t-s\right)}$

Proof: By the property $\frac{\text{d}}{\text{d}t}{\text{e}}^{at}=a{\text{e}}^{at}$ applied to the constant matrix A,

$\frac{\text{d}}{\text{d}t}{\text{e}}^{A\left(t-s\right)}=A{\text{e}}^{A\left(t-s\right)},\text{\hspace{0.17em}}{\text{e}}^{A\left(s-s\right)}=I$ (4.20)

so ${\text{e}}^{A\left(t-s\right)}$ satisfies (4.4) and (4.5), and by uniqueness, $R\left(t,s\right)={\text{e}}^{A\left(t-s\right)}$.
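For a constant A, Property 1 of the resolving kernel reduces to the semigroup law of the matrix exponential, which the following sketch checks numerically (matrix and times are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# For constant A the resolving kernel is R(t, s) = e^{A(t-s)}.  Property 1,
# R(t, u) R(u, s) = R(t, s), is the semigroup law of the exponential.
A = np.array([[1.0, 1.0], [-1.0, 3.0]])
R = lambda t, s: expm(A * (t - s))
t, u, s = 1.2, 0.5, -0.3
print(np.max(np.abs(R(t, u) @ R(u, s) - R(t, s))))   # ~ 0
```

Note that the semigroup law holds here because $A(t-u)$ and $A(u-s)$ commute; for a time-dependent $A(t)$ one must keep the kernel $R(t,s)$ itself.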

5. Applications in Physics

Having introduced basic knowledge about linear algebra and ODEs, as well as the matrix method for linear ODEs, we are now going to apply them to solving the physics problems presented in the introduction:

1) (Radioactive chains of decay) The differential equation for the number N of radioactive nuclei is

$\frac{\text{d}N}{\text{d}t}=-kN$ (5.1)

where k is the decay constant. This is a problem that can be solved through ordinary approaches. To solve for N, all we have to do is apply integration after separating the variables, and we get: $N\left(t\right)={N}_{0}{\text{e}}^{-kt}$.

Then, we will discuss some more complex circumstances. Now look at a chain, where the original nucleus decays into another radioactive nucleus. Let $\alpha ,\beta ,\gamma$ be three types of nuclei: $\alpha$ transfers to $\beta$ with decay constant ${k}_{1}$, whereas $\beta$ transfers to $\gamma$ with decay constant ${k}_{2}$. Assume that there are N nuclei of type $\alpha$ in the beginning.

Based on the information given, we can list the following differential equations, which characterize the rate of change of each kind of nucleus:

$\frac{\text{d}{N}_{\alpha }}{\text{d}t}=-{k}_{1}{N}_{\alpha },\text{\hspace{0.17em}}\text{where}\text{\hspace{0.17em}}{N}_{\alpha }\left(0\right)=N$ (5.2)

$\frac{\text{d}{N}_{\beta }}{\text{d}t}=-{k}_{2}{N}_{\beta }+{k}_{1}{N}_{\alpha },\text{\hspace{0.17em}}\text{where}\text{\hspace{0.17em}}{N}_{\beta }\left(0\right)=0$ (5.3)

$\frac{\text{d}{N}_{\gamma }}{\text{d}t}={k}_{2}{N}_{\beta },\text{\hspace{0.17em}}\text{where}\text{\hspace{0.17em}}{N}_{\gamma }\left(0\right)=0$ (5.4)

The first equation can be solved easily through the approach mentioned at the very beginning. The result is:

${N}_{\alpha }\left(t\right)=N{\text{e}}^{-{k}_{1}t}$ (5.5)

The second equation can be solved using

${N}_{\beta }\left(t\right)=R\left(t\right){N}_{\beta }\left(0\right)+{\int }_{0}^{t}R\left(t,\tau \right)f\left(\tau \right)\text{d}\tau$ (5.6)

where $R\left(t\right)={\text{e}}^{-{k}_{2}t}$, $R\left(t,\tau \right)={\text{e}}^{-{k}_{2}\left(t-\tau \right)}$, and $f\left(\tau \right)={k}_{1}N{\text{e}}^{-{k}_{1}\tau }$. Such an approach is usually utilized in solving linear ODEs and was thoroughly introduced in the earlier sections of this article:

$\begin{array}{c}{N}_{\beta }\left(t\right)={\int }_{0}^{t}{\text{e}}^{-{k}_{2}\left(t-\tau \right)}{k}_{1}N{\text{e}}^{-{k}_{1}\tau }\text{d}\tau ={k}_{1}N{\text{e}}^{-{k}_{2}t}{\int }_{0}^{t}{\text{e}}^{\left({k}_{2}-{k}_{1}\right)\tau }\text{d}\tau \\ =\frac{{k}_{1}N{\text{e}}^{-{k}_{2}t}}{{k}_{2}-{k}_{1}}\left({\text{e}}^{\left({k}_{2}-{k}_{1}\right)t}-1\right)=\frac{{k}_{1}N}{{k}_{2}-{k}_{1}}\left({\text{e}}^{-{k}_{1}t}-{\text{e}}^{-{k}_{2}t}\right)\end{array}$ (5.7)

Having got ${N}_{\beta }\left(t\right)$, ${N}_{\gamma }\left(t\right)$ can be easily calculated through integration of $\frac{\text{d}{N}_{\gamma }}{\text{d}t}={k}_{2}{N}_{\beta }$ with ${N}_{\gamma }\left(0\right)=0$:

${N}_{\gamma }\left(t\right)=N+\frac{{k}_{1}N}{{k}_{2}-{k}_{1}}{\text{e}}^{-{k}_{2}t}-\frac{{k}_{2}N}{{k}_{2}-{k}_{1}}{\text{e}}^{-{k}_{1}t}$ (5.8)

So far, the three chains are all solved. Then, to evaluate the ratio of the number of β nuclei to that of α nuclei after a long time of evolution, we need to calculate:

$\frac{{N}_{\beta }\left(t\right)}{{N}_{\alpha }\left(t\right)}=\frac{{k}_{1}}{{k}_{2}-{k}_{1}}\left(1-{\text{e}}^{\left({k}_{1}-{k}_{2}\right)t}\right)$ (5.9)

Letting time t approach infinity (assuming ${k}_{2}>{k}_{1}$), we get:

$\underset{t\to +\infty }{\mathrm{lim}}\frac{{N}_{\beta }\left(t\right)}{{N}_{\alpha }\left(t\right)}=\frac{{k}_{1}}{{k}_{2}-{k}_{1}}$ (5.10)

So far, this example is thoroughly solved.
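The decay-chain solution can be sanity-checked numerically. The sketch below (NumPy, with sample values for ${k}_{1},{k}_{2},N$ chosen for illustration) confirms that the total number of nuclei is conserved, that ${N}_{\beta }$ satisfies its differential equation, and that the long-time ratio approaches ${k}_{1}/\left({k}_{2}-{k}_{1}\right)$:

```python
import numpy as np

# Closed-form solutions of the decay chain; k1, k2, N are sample values
k1, k2, N = 0.3, 0.7, 1000.0

def N_alpha(t):
    return N * np.exp(-k1 * t)

def N_beta(t):
    return k1 * N / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

def N_gamma(t):
    return N + k1 * N / (k2 - k1) * np.exp(-k2 * t) - k2 * N / (k2 - k1) * np.exp(-k1 * t)

t = np.linspace(0.0, 30.0, 2001)

# Total number of nuclei is conserved along the chain
assert np.allclose(N_alpha(t) + N_beta(t) + N_gamma(t), N)

# N_beta satisfies dN_beta/dt = -k2*N_beta + k1*N_alpha (central differences)
h = 1e-6
dNb = (N_beta(t + h) - N_beta(t - h)) / (2 * h)
assert np.allclose(dNb, -k2 * N_beta(t) + k1 * N_alpha(t), atol=1e-4)

# The ratio N_beta/N_alpha tends to k1/(k2-k1) when k2 > k1
assert abs(N_beta(50.0) / N_alpha(50.0) - k1 / (k2 - k1)) < 1e-6
```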

2) (Linear motion of a particle in liquids) In this question, we consider a ball sinking in a liquid under the influence of gravity. There are three forces acting vertically on this body: the gravity, the buoyancy and the viscous force.

The first thing we would like to do is to find the differential equation that characterizes the motion. The viscous force, as is mentioned in the question stem, is in direct proportion to the velocity v. The gravitational force points downwards, and the viscous force and buoyancy point upwards. Therefore, according to

Newton's second law, the net force $\sum F=m\frac{\text{d}v}{\text{d}t}$ can be expressed as $mg-\rho Vg-\alpha v$, where m is the mass, $\rho$ the density of the liquid, V the volume of the ball, g the gravitational constant, v the velocity, and $\alpha$ a constant. Substituting $mg-\rho Vg$ with ${M}^{\prime }g$, where ${M}^{\prime }=m-\rho V$ is a constant, we can get:

$\frac{\text{d}v}{\text{d}t}=-\frac{\alpha }{m}v+\frac{{M}^{\prime }g}{m}$ (5.11)

Then, since the equation is a first-order linear ODE, it can be solved using the approach in Example 1:

$v\left(t\right)=R\left(t\right){v}_{0}+{\int }_{0}^{t}R\left(t,\tau \right)f\left(\tau \right)\text{d}\tau$ (5.12)

where $R\left(t\right)={\text{e}}^{-\frac{\alpha }{m}t}$, $R\left(t,\tau \right)={\text{e}}^{-\frac{\alpha }{m}\left(t-\tau \right)}$, $f\left(\tau \right)=\frac{{M}^{\prime }g}{m}$, and ${v}_{0}$ represents the initial velocity. This differential equation can be solved in the following way:

$\begin{array}{c}v\left(t\right)={\text{e}}^{-\frac{\alpha }{m}t}{v}_{0}+\frac{{M}^{\prime }g}{m}{\text{e}}^{-\frac{\alpha }{m}t}{\int }_{0}^{t}{\text{e}}^{\frac{\alpha }{m}\tau }\text{d}\tau ={\text{e}}^{-\frac{\alpha }{m}t}{v}_{0}+\frac{{M}^{\prime }g}{m}{\text{e}}^{-\frac{\alpha }{m}t}\cdot \frac{m}{\alpha }\left({\text{e}}^{\frac{\alpha }{m}t}-1\right)\\ =\left({v}_{0}-\frac{{M}^{\prime }g}{\alpha }\right){\text{e}}^{-\frac{\alpha }{m}t}+\frac{{M}^{\prime }g}{\alpha }\end{array}$ (5.13)

So far, the differential equation which characterizes the motion of the ball has been solved. To find the ultimate velocity of the ball, let time t approach infinity, and we can get:

$\underset{t\to +\infty }{\mathrm{lim}}v\left(t\right)=\underset{t\to +\infty }{\mathrm{lim}}\left[\left({v}_{0}-\frac{{M}^{\prime }g}{\alpha }\right){\text{e}}^{-\frac{\alpha }{m}t}+\frac{{M}^{\prime }g}{\alpha }\right]=\frac{{M}^{\prime }g}{\alpha }$ (5.14)
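The closed-form velocity and the terminal velocity ${M}^{\prime }g/\alpha$ can be verified numerically; the sketch below uses hypothetical sample values for the parameters:

```python
import numpy as np

# Sample parameters (hypothetical values): mass m, drag alpha, effective weight M'g
m, alpha, Mp_g, v0 = 2.0, 0.5, 9.0, 0.0

def v(t):
    # Closed-form solution (5.13)
    return (v0 - Mp_g / alpha) * np.exp(-alpha / m * t) + Mp_g / alpha

t = np.linspace(0.0, 60.0, 1001)
h = 1e-6
dv = (v(t + h) - v(t - h)) / (2 * h)

# v(t) satisfies dv/dt = -(alpha/m) v + M'g/m
assert np.allclose(dv, -alpha / m * v(t) + Mp_g / m, atol=1e-6)

# Terminal velocity is M'g/alpha
assert abs(v(1000.0) - Mp_g / alpha) < 1e-9
```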

3) (Harmonic oscillations) A harmonic oscillation is a linear movement (along an axis) in which the restoring force is always directed against, and proportional to, the displacement from the equilibrium position.

We will first assume that there is no friction force on the oscillating object. Since the force is always directed against and proportional to the distance to the position of equilibrium, the force can be expressed as $-kx$, where k is a constant and x represents the displacement. Then we will have the following equation:

$m\frac{{\text{d}}^{2}x}{\text{d}{t}^{2}}=-kx,\text{\hspace{0.17em}}x\left(0\right)=0,\text{\hspace{0.17em}}v\left(0\right)={v}_{0}$ (5.15)

where $x\left(0\right)=0$ represents the initial position and $v\left(0\right)={v}_{0}$ represents the initial velocity.

We will then use the matrix method directly to solve this equation as practice, though this might make the calculation more difficult. First, let $X=\left[\begin{array}{c}x\\ {x}^{\prime }\end{array}\right]$ be the matrix representing the displacement and velocity:

$\frac{\text{d}X}{\text{d}t}=\left[\begin{array}{c}{x}^{\prime }\\ {x}^{″}\end{array}\right]=\left[\begin{array}{c}{x}^{\prime }\\ -\frac{k}{m}x\end{array}\right]=\left[\begin{array}{cc}0& 1\\ -\frac{k}{m}& 0\end{array}\right]\left[\begin{array}{c}x\\ {x}^{\prime }\end{array}\right]=AX$ (5.16)

Then we write the resolving kernel of this equation:

$R\left(t\right)={\text{e}}^{At}$ (5.17)

Since ${\left(At\right)}^{2}=\left[\begin{array}{cc}-\frac{k}{m}{t}^{2}& 0\\ 0& -\frac{k}{m}{t}^{2}\end{array}\right]=-\frac{k}{m}{t}^{2}I$, we can solve ${\text{e}}^{At}$ using its series expansion:

$\begin{array}{c}{\text{e}}^{At}=\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(At\right)}^{n}}{n!}=\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(-1\right)}^{n}{\left(\sqrt{\frac{k}{m}}t\right)}^{2n}}{\left(2n\right)!}I+\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(-1\right)}^{n}{\left(\sqrt{\frac{k}{m}}t\right)}^{2n}}{\left(2n+1\right)!}At\\ =\left[\begin{array}{cc}\mathrm{cos}\sqrt{\frac{k}{m}}t& \sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}}t\\ -\sqrt{\frac{k}{m}}\mathrm{sin}\sqrt{\frac{k}{m}}t& \mathrm{cos}\sqrt{\frac{k}{m}}t\end{array}\right]\end{array}$ (5.18)

Having got $R\left(t\right)={\text{e}}^{At}$, we can calculate $X\left(t\right)$ using $X\left(t\right)=R\left(t\right)X\left(0\right)$:

$X\left(t\right)=\left[\begin{array}{cc}\mathrm{cos}\sqrt{\frac{k}{m}}t& \sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}}t\\ -\sqrt{\frac{k}{m}}\mathrm{sin}\sqrt{\frac{k}{m}}t& \mathrm{cos}\sqrt{\frac{k}{m}}t\end{array}\right]\left[\begin{array}{c}0\\ {v}_{0}\end{array}\right]=\left[\begin{array}{c}\sqrt{\frac{m}{k}}{v}_{0}\mathrm{sin}\sqrt{\frac{k}{m}}t\\ {v}_{0}\mathrm{cos}\sqrt{\frac{k}{m}}t\end{array}\right]$ (5.19)

Therefore, $x\left(t\right)={v}_{0}\sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}}t$ and $v\left(t\right)={v}_{0}\mathrm{cos}\sqrt{\frac{k}{m}}t$.
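The closed form of ${\text{e}}^{At}$ for the oscillator matrix can be compared against a direct series evaluation; the sketch below uses sample values for k, m, and ${v}_{0}$:

```python
import numpy as np

k, m, v0 = 4.0, 1.0, 2.0
w = np.sqrt(k / m)          # angular frequency sqrt(k/m)
A = np.array([[0.0, 1.0], [-k / m, 0.0]])

def expm_series(M, terms=60):
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t = 1.3
# Closed form: cos(wt) I + (sin(wt)/w) A
closed = np.array([[np.cos(w * t),      np.sin(w * t) / w],
                   [-w * np.sin(w * t), np.cos(w * t)]])
assert np.allclose(expm_series(A * t), closed)

# x(t), v(t) from X(t) = e^{At} X(0) with X(0) = [0, v0]
X = closed @ np.array([0.0, v0])
assert np.allclose(X, [v0 / w * np.sin(w * t), v0 * np.cos(w * t)])
```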

Then, to consider a more complex situation, we assume that there exists a friction force which is also proportional to the velocity and directed opposite to the velocity. We can rewrite the differential equation that characterizes the motion as

$m\frac{{\text{d}}^{2}x}{\text{d}{t}^{2}}=-kx-r\frac{\text{d}x}{\text{d}t}$ (5.20)

Write $X=\left[\begin{array}{c}x\\ {x}^{\prime }\end{array}\right]$ so that $\frac{\text{d}X}{\text{d}t}=\left[\begin{array}{c}{x}^{\prime }\\ -\frac{k}{m}x-\frac{r}{m}{x}^{\prime }\end{array}\right]=\left[\begin{array}{cc}0& 1\\ -\frac{k}{m}& -\frac{r}{m}\end{array}\right]\left[\begin{array}{c}x\\ {x}^{\prime }\end{array}\right]=AX$, where $X\left(0\right)=\left[\begin{array}{c}0\\ {v}_{0}\end{array}\right]$

Since now the matrix A becomes more complex, the approach previously used in the non-frictional circumstance is no longer applicable. To calculate the resolving kernel, we need to use the method mentioned in Section 2: first find the Jordan normal form (JNF) and the transition matrix of matrix A. The first step is

to solve $\mathrm{det}\left(\lambda I-A\right)=0$:

$\mathrm{det}\left(\lambda I-A\right)=\mathrm{det}\left[\begin{array}{cc}\lambda & -1\\ \frac{k}{m}& \lambda +\frac{r}{m}\end{array}\right]=\lambda \left(\lambda +\frac{r}{m}\right)+\frac{k}{m}={\lambda }^{2}+\frac{r}{m}\lambda +\frac{k}{m}=0$ (5.21)

and we can get the solutions ${\lambda }_{1}=\frac{-r+\sqrt{{r}^{2}-4km}}{2m},{\lambda }_{2}=\frac{-r-\sqrt{{r}^{2}-4km}}{2m}$. Now, the JNF ${J}_{A}$ can either be $\left[\begin{array}{cc}{\lambda }_{1}& 0\\ 0& {\lambda }_{2}\end{array}\right]$ or $\left[\begin{array}{cc}{\lambda }_{1}& 1\\ 0& {\lambda }_{1}\end{array}\right]$. Since the previous equation has two distinct solutions (when ${r}^{2}\ne 4km$), we know that ${J}_{A}$ should be the first one, since the matrix has two distinct eigenvalues.

Having got ${J}_{A}$, we can then calculate the transition matrix Q by putting the eigenvectors together as columns: $Q=\left[\begin{array}{cc}1& 1\\ {\lambda }_{1}& {\lambda }_{2}\end{array}\right]$. Writing $Q=\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]$, ${Q}^{-1}$ can be calculated using ${Q}^{-1}=\frac{1}{\mathrm{det}Q}\left[\begin{array}{cc}d& -b\\ -c& a\end{array}\right]$, and we can get the result

${Q}^{-1}=\frac{1}{{\lambda }_{2}-{\lambda }_{1}}\left[\begin{array}{cc}{\lambda }_{2}& -1\\ -{\lambda }_{1}& 1\end{array}\right]$ (5.22)

Then, ${\text{e}}^{tA}$ can be expressed as

${\text{e}}^{tA}=Q{\text{e}}^{t{J}_{A}}{Q}^{-1}=\frac{1}{{\lambda }_{2}-{\lambda }_{1}}\left[\begin{array}{cc}1& 1\\ {\lambda }_{1}& {\lambda }_{2}\end{array}\right]\left[\begin{array}{cc}{\text{e}}^{t{\lambda }_{1}}& 0\\ 0& {\text{e}}^{t{\lambda }_{2}}\end{array}\right]\left[\begin{array}{cc}{\lambda }_{2}& -1\\ -{\lambda }_{1}& 1\end{array}\right]=\frac{1}{{\lambda }_{2}-{\lambda }_{1}}\left[\begin{array}{cc}{\lambda }_{2}{\text{e}}^{t{\lambda }_{1}}-{\lambda }_{1}{\text{e}}^{t{\lambda }_{2}}& -{\text{e}}^{t{\lambda }_{1}}+{\text{e}}^{t{\lambda }_{2}}\\ {\lambda }_{1}{\lambda }_{2}\left({\text{e}}^{t{\lambda }_{1}}-{\text{e}}^{t{\lambda }_{2}}\right)& -{\lambda }_{1}{\text{e}}^{t{\lambda }_{1}}+{\lambda }_{2}{\text{e}}^{t{\lambda }_{2}}\end{array}\right]$ (5.23)

Then

$X\left(t\right)={\text{e}}^{tA}X\left(0\right)=\frac{{v}_{0}}{{\lambda }_{2}-{\lambda }_{1}}\left[\begin{array}{c}-{\text{e}}^{t{\lambda }_{1}}+{\text{e}}^{t{\lambda }_{2}}\\ -{\lambda }_{1}{\text{e}}^{t{\lambda }_{1}}+{\lambda }_{2}{\text{e}}^{t{\lambda }_{2}}\end{array}\right]$ (5.24)

so that the displacement equation is $x\left(t\right)=\frac{{v}_{0}}{{\lambda }_{2}-{\lambda }_{1}}\left({\text{e}}^{t{\lambda }_{2}}-{\text{e}}^{t{\lambda }_{1}}\right)$, where

${\lambda }_{1}=\frac{-r+\sqrt{{r}^{2}-4km}}{2m},\text{\hspace{0.17em}}{\lambda }_{2}=\frac{-r-\sqrt{{r}^{2}-4km}}{2m}$ (5.25)

Now there are two kinds of situations we need to discuss. If ${r}^{2}-4km>0$, both eigenvalues are real,

then $x\left(t\right)$ equals

$x\left(t\right)=\frac{{v}_{0}}{{\lambda }_{2}-{\lambda }_{1}}\left({\text{e}}^{t\frac{-r-\sqrt{{r}^{2}-4km}}{2m}}-{\text{e}}^{t\frac{-r+\sqrt{{r}^{2}-4km}}{2m}}\right)$ (5.26)

However, if ${r}^{2}-4km<0$, the eigenvalues are complex, and using $\mathrm{sin}\theta =\frac{{\text{e}}^{i\theta }-{\text{e}}^{-i\theta }}{2i}$, $x\left(t\right)$ equals

$x\left(t\right)=\frac{2m{v}_{0}}{\sqrt{4km-{r}^{2}}}{\text{e}}^{-\frac{r}{2m}t}\mathrm{sin}\left(\frac{\sqrt{4km-{r}^{2}}}{2m}t\right)$ (5.27)

Since both eigenvalues have negative real part, in either case the displacement decays, and the final behavior of the damped oscillation can be written as

$\underset{t\to +\infty }{\mathrm{lim}}x\left(t\right)=0$ (5.28)
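The underdamped closed form can be checked directly against the equation of motion $m{x}^{″}=-kx-r{x}^{\prime }$ by finite differences; the sketch below uses sample values with ${r}^{2}<4km$:

```python
import numpy as np

# Underdamped case r^2 < 4km (sample values)
m, k, r, v0 = 1.0, 5.0, 1.0, 1.0
w = np.sqrt(4 * k * m - r**2) / (2 * m)   # damped angular frequency

def x(t):
    # Closed-form displacement (5.27): (v0/w) e^{-rt/2m} sin(wt)
    return v0 / w * np.exp(-r / (2 * m) * t) * np.sin(w * t)

t = np.linspace(0.0, 20.0, 501)
h = 1e-5
xp = (x(t + h) - x(t - h)) / (2 * h)              # x'
xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2     # x''

# m x'' = -k x - r x'
assert np.allclose(m * xpp, -k * x(t) - r * xp, atol=1e-4)
# Initial conditions x(0) = 0, x'(0) = v0
assert abs(x(0.0)) < 1e-12
assert abs((x(h) - x(-h)) / (2 * h) - v0) < 1e-6
```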

Having solved the three questions above, we're now going to take a look at the one concerning electromagnetism, which cannot be solved without utilizing the matrix method for linear ODEs:

Example 4 (Motion of a charged particle in an electromagnetic field)

A charged particle with mass m and charge q moves in an electromagnetic field with a constant magnetic field B and a changing electric field $E\left(t\right)$. We make the assumption that the rate of change of $E\left(t\right)$ is small enough that the magnetic field it generates is negligible with respect to B. The evolution equation is

$m\frac{\text{d}v}{\text{d}t}=qv×B+qE\left(t\right)$ (5.29)

where m signifies the mass of the particle, v the velocity, q the charge, B the magnetic field and $E\left(t\right)$ the electric field depending on time, with the initial condition $v\left(0\right)={v}_{0}$. Here, the cross product $v×B$ is linear in v, so it can be represented by a matrix acting on v; this matrix is

$B=\left[\begin{array}{ccc}0& {B}_{3}& -{B}_{2}\\ -{B}_{3}& 0& {B}_{1}\\ {B}_{2}& -{B}_{1}& 0\end{array}\right]$ (5.30)

To solve such a differential equation, we need to first find the Jordan normal form of B and the transition matrix as well as its inverse. The first step is to calculate the eigenvalues of B: we have to calculate the determinant of $\lambda I-B$, where λ suggests each eigenvalue and I the identity:

$\mathrm{det}\left(\lambda I-B\right)=\mathrm{det}\left[\begin{array}{ccc}\lambda & -{B}_{3}& {B}_{2}\\ {B}_{3}& \lambda & -{B}_{1}\\ -{B}_{2}& {B}_{1}& \lambda \end{array}\right]=\lambda \left({\lambda }^{2}+{B}_{1}^{2}+{B}_{2}^{2}+{B}_{3}^{2}\right)$ (5.31)

Let $\mathrm{det}\left(\lambda I-B\right)=0$; we get ${\lambda }_{1}=0,{\lambda }_{2}=i|B|,{\lambda }_{3}=-i|B|$, where $|B|=\sqrt{{B}_{1}^{2}+{B}_{2}^{2}+{B}_{3}^{2}}$. Since the three eigenvalues of B are different, the matrix B is a diagonalizable matrix. Therefore, the Jordan normal form of B will have 0, $i|B|$, and $-i|B|$ on its main diagonal, and zeros in the rest of its entries:

${J}_{B}=\left[\begin{array}{ccc}0& 0& 0\\ 0& i|B|& 0\\ 0& 0& -i|B|\end{array}\right]$ (5.32)

Then we need to calculate the transition matrix Q:

When $\lambda =0$, we solve $\left[\begin{array}{ccc}0& {B}_{3}& -{B}_{2}\\ -{B}_{3}& 0& {B}_{1}\\ {B}_{2}& -{B}_{1}& 0\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=0$, and we get ${V}_{1}=\left[\begin{array}{c}{B}_{1}\\ {B}_{2}\\ {B}_{3}\end{array}\right]$.

When $\lambda =i|B|$, solving the equation

$\left[\begin{array}{ccc}-i|B|& {B}_{3}& -{B}_{2}\\ -{B}_{3}& -i|B|& {B}_{1}\\ {B}_{2}& -{B}_{1}& -i|B|\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=0$ (5.33)

we get ${V}_{2}=\left[\begin{array}{c}i|B|{B}_{2}-{B}_{1}{B}_{3}\\ -i|B|{B}_{1}-{B}_{2}{B}_{3}\\ {B}_{1}^{2}+{B}_{2}^{2}\end{array}\right]$. When $\lambda =-i|B|$, solving the equation

$\left[\begin{array}{ccc}i|B|& {B}_{3}& -{B}_{2}\\ -{B}_{3}& i|B|& {B}_{1}\\ {B}_{2}& -{B}_{1}& i|B|\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=0$ (5.34)

we get ${V}_{3}=\left[\begin{array}{c}-i|B|{B}_{2}-{B}_{1}{B}_{3}\\ i|B|{B}_{1}-{B}_{2}{B}_{3}\\ {B}_{1}^{2}+{B}_{2}^{2}\end{array}\right]$.

To get the transition matrix, all we have to do is to put together the three eigenvectors we have got:

$Q=\left[\begin{array}{ccc}{B}_{1}& i|B|{B}_{2}-{B}_{1}{B}_{3}& -i|B|{B}_{2}-{B}_{1}{B}_{3}\\ {B}_{2}& -i|B|{B}_{1}-{B}_{2}{B}_{3}& i|B|{B}_{1}-{B}_{2}{B}_{3}\\ {B}_{3}& {B}_{1}^{2}+{B}_{2}^{2}& {B}_{1}^{2}+{B}_{2}^{2}\end{array}\right]$ (5.35)

The next step is to calculate the inverse matrix of Q. Since the process of calculation is very complicated, here we will directly give the result in terms of the adjugate:

${Q}^{-1}=\frac{1}{\mathrm{det}Q}\mathrm{adj}\left(Q\right)$ (5.36)

Then, we can calculate the resolving kernel through using the matrix method for solving ODEs mentioned before. Writing the evolution equation as $\frac{\text{d}v}{\text{d}t}=\frac{q}{m}Bv+\frac{q}{m}E\left(t\right)$, we have

$R\left(t,\tau \right)={\text{e}}^{\left(t-\tau \right)\frac{q}{m}B}=Q{\text{e}}^{\left(t-\tau \right)\frac{q}{m}{J}_{B}}{Q}^{-1}=Q\left[\begin{array}{ccc}1& 0& 0\\ 0& {\text{e}}^{i\frac{q}{m}|B|\left(t-\tau \right)}& 0\\ 0& 0& {\text{e}}^{-i\frac{q}{m}|B|\left(t-\tau \right)}\end{array}\right]{Q}^{-1}$ (5.37)

Having gotten the resolving kernel, the evolution function of the velocity can be expressed as

$v\left(t\right)=R\left(t,0\right){v}_{0}+\frac{q}{m}{\int }_{0}^{t}R\left(t,\tau \right)E\left(\tau \right)\text{d}\tau$ (5.38)
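The eigenstructure of the cross-product matrix can be verified numerically; the sketch below uses hypothetical sample field components and checks both the spectrum $\left\{0,i|B|,-i|B|\right\}$ and the eigenvector for $\lambda =i|B|$:

```python
import numpy as np

# Sample field components (hypothetical values); here |B| = 3
B1, B2, B3 = 1.0, 2.0, 2.0
normB = np.sqrt(B1**2 + B2**2 + B3**2)

# Matrix representation of v -> v x B
B = np.array([[0.0,  B3, -B2],
              [-B3, 0.0,  B1],
              [ B2, -B1, 0.0]])

# Eigenvalues of the skew-symmetric matrix are 0, i|B|, -i|B|
eig = np.linalg.eigvals(B)
assert np.allclose(eig.real, 0.0)
assert np.allclose(np.sort(eig.imag), [-normB, 0.0, normB])

# Eigenvector for lambda = i|B| as given in (5.33)
V2 = np.array([1j * normB * B2 - B1 * B3,
               -1j * normB * B1 - B2 * B3,
               B1**2 + B2**2])
assert np.allclose(B @ V2, 1j * normB * V2)
```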

6. Applications in Curves

6.1. Inner Product in R3

Assume that there are two vectors, $u=\left({u}_{1},{u}_{2},{u}_{3}\right)$ and $v=\left({v}_{1},{v}_{2},{v}_{3}\right)$, in ${R}^{3}$. We define the inner product of u and v to be $〈u,v〉={u}_{1}{v}_{1}+{u}_{2}{v}_{2}+{u}_{3}{v}_{3}$. The modulus of u is $|u|=\sqrt{〈u,u〉}$. The calculation of the inner product has the following properties:

1) $〈u,v〉=〈v,u〉$

2) If $〈u,v〉=0$ with $u,v\ne 0$, then u is perpendicular to v

3) $〈\lambda u,v〉=〈u,\lambda v〉=\lambda 〈u,v〉$, where $\lambda$ is a constant

4) $〈u+w,v〉=〈u,v〉+〈w,v〉$

Let $u\left(t\right)$ and $v\left(t\right)$ be two differentiable maps of an open interval into ${R}^{3}$; we have

$\frac{\text{d}}{\text{d}t}〈u\left(t\right),v\left(t\right)〉=〈{u}^{\prime }\left(t\right),v\left(t\right)〉+〈u\left(t\right),{v}^{\prime }\left(t\right)〉$ (6.1)

Proof: Writing the inner product in coordinates and applying the product rule to each term ${u}_{i}\left(t\right){v}_{i}\left(t\right)$ gives the formula directly.
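The product rule (6.1) for inner products can be spot-checked numerically on a pair of sample curves (the specific curves below are chosen only for illustration):

```python
import numpy as np

# Sample differentiable maps u, v : R -> R^3
def u(t):
    return np.array([np.cos(t), np.sin(t), t])

def v(t):
    return np.array([t**2, 1.0, np.exp(-t)])

t, h = 0.8, 1e-6
# d/dt <u, v> by a central difference
lhs = (u(t + h) @ v(t + h) - u(t - h) @ v(t - h)) / (2 * h)
du = (u(t + h) - u(t - h)) / (2 * h)
dv = (v(t + h) - v(t - h)) / (2 * h)
# <u', v> + <u, v'>
assert abs(lhs - (du @ v(t) + u(t) @ dv)) < 1e-6
```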

6.2. Regular Curve

Definition 5 (Regular curve) A curve $\alpha \left(t\right):I\to {R}^{3}$ is called a regular curve if ${\alpha }^{\prime }\left(t\right)\ne 0$ for every $t\in I$.

In this case, we can find another parameter s such that under this parameter, the velocity is constantly 1 for α:

$|\frac{\text{d}\alpha }{\text{d}s}|=1$ (6.2)

Proof: We try to find $s\left(t\right)$ such that $\alpha$, viewed as a function of s, is of velocity 1.

That is, $|\frac{\text{d}\alpha }{\text{d}s}|=|\frac{\text{d}\alpha }{\text{d}t}|\cdot \frac{\text{d}t}{\text{d}s}=1$.

$\frac{\text{d}s}{\text{d}t}=|{\alpha }^{\prime }\left(t\right)|$ (6.3)

Therefore, $s\left(t\right)$ is strictly increasing. Then, to find $s\left(t\right)$, all we need to do is to calculate an integral:

$s\left(t\right)={\int }_{{t}_{0}}^{t}|{\alpha }^{\prime }\left(\tau \right)|\text{d}\tau$ (6.4)
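The arc-length integral (6.4) can be evaluated numerically. As a sketch, for the circle $\alpha \left(t\right)=\left(R\mathrm{cos}t,R\mathrm{sin}t,0\right)$ the speed is the constant R, so $s\left(t\right)=Rt$ (R and the interval below are sample values):

```python
import numpy as np

# Arc length s(t) = integral of |alpha'(tau)| for alpha(t) = (R cos t, R sin t, 0)
R = 3.0
t = np.linspace(0.0, 2.0, 10001)
speed = np.linalg.norm(
    np.stack([-R * np.sin(t), R * np.cos(t), np.zeros_like(t)]), axis=0)

# Trapezoidal rule for s(2); |alpha'| = R, so the exact value is 2R
s = np.sum((speed[1:] + speed[:-1]) / 2 * np.diff(t))
assert abs(s - 2 * R) < 1e-8
```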

Example 5 Given a regular curve $\alpha \left(t\right)$, find a reparametrization $\beta \left(s\right)$ such that β and α are the same curve, but β is of unit speed.

First, we calculate the derivative and the modulus of the derivative of $\alpha$:

$|{\alpha }^{\prime }\left(t\right)|$ (6.5)

Then, find $s\left(t\right)$:

$s\left(t\right)={\int }_{{t}_{0}}^{t}|{\alpha }^{\prime }\left(\tau \right)|\text{d}\tau$ (6.6)

t therefore equals $t\left(s\right)$, the inverse function of $s\left(t\right)$, and

$\beta \left(s\right)=\alpha \left(t\left(s\right)\right)$ (6.7)

6.3. Vector Products in R3

Definition 6 (Vector Product) Let V be a three-dimensional vector space, and let u, v be vectors in such a vector space. The vector product, $u×v$, is defined to be the vector in V satisfying $〈u×v,w〉=\mathrm{det}\left(u,v,w\right)$ for every $w\in V$.

The vector product has the following properties:

1) $u×v=-v×u$

2) $\left(\lambda u\right)×v=u×\left(\lambda v\right)=\lambda \left(u×v\right)$

3) If $u×v=0$ with $u,v\ne 0$, then u and v are parallel.

4) $\left(u+w\right)×v=u×v+w×v$

5) $〈u×v,u〉=〈u×v,v〉=0$

6) ${|u×v|}^{2}={|u|}^{2}{|v|}^{2}-{〈u,v〉}^{2}$

7) ${\left(u\left(t\right)×v\left(t\right)\right)}^{\prime }={u}^{\prime }\left(t\right)×v\left(t\right)+u\left(t\right)×{v}^{\prime }\left(t\right)$
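Properties 1), 5), and 6) of the vector product can be checked numerically on random sample vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)

w = np.cross(u, v)
# Property 5: u x v is perpendicular to both u and v
assert abs(w @ u) < 1e-12 and abs(w @ v) < 1e-12
# Property 6 (Lagrange identity): |u x v|^2 = |u|^2 |v|^2 - <u,v>^2
assert abs(w @ w - ((u @ u) * (v @ v) - (u @ v)**2)) < 1e-12
# Property 1: anti-symmetry
assert np.allclose(np.cross(v, u), -w)
```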

6.4. Curvature

Definition 7 (Curvature) Let $\alpha \left(t\right)$ be a regular curve, and let $\beta \left(s\right)$ be the reparametrization of $\alpha$ such that $|{\beta }^{\prime }\left(s\right)|=1$. The curvature of $\alpha$, k, is defined to be $k\left(s\right)=|{\beta }^{″}\left(s\right)|$.

Property 1: If $\alpha$ is a straight line, the curvature $k=0$.

Property 2: If $\alpha$ is a circle of radius R, the curvature $k=\frac{1}{R}$.

Example 6 Find the curvature of the circle of radius R, $\alpha \left(s\right)=\left(R\mathrm{cos}\frac{s}{R},R\mathrm{sin}\frac{s}{R},0\right)$:

We have already found the reparametrization of $\alpha$: the parameter s above is already the arc-length parameter, since $|{\alpha }^{\prime }\left(s\right)|=|\left(-\mathrm{sin}\frac{s}{R},\mathrm{cos}\frac{s}{R},0\right)|=1$.

To find the curvature k, we just have to calculate $|{\alpha }^{″}\left(s\right)|$, which equals $|\left(-\frac{1}{R}\mathrm{cos}\frac{s}{R},-\frac{1}{R}\mathrm{sin}\frac{s}{R},0\right)|=\frac{1}{R}$, verifying Property 2.
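The same computation can be done numerically, approximating ${\beta }^{\prime }$ and ${\beta }^{″}$ of the arc-length parametrized circle by finite differences (R is a sample value):

```python
import numpy as np

# Arc-length parametrized circle of radius R
R = 2.5

def beta(s):
    return np.array([R * np.cos(s / R), R * np.sin(s / R), 0.0])

s, h = 0.7, 1e-5
# Unit speed: |beta'(s)| = 1
d1 = (beta(s + h) - beta(s - h)) / (2 * h)
assert abs(np.linalg.norm(d1) - 1.0) < 1e-8
# Curvature k = |beta''(s)| = 1/R
d2 = (beta(s + h) - 2 * beta(s) + beta(s - h)) / h**2
assert abs(np.linalg.norm(d2) - 1.0 / R) < 1e-4
```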

6.5. Torsion

From now on, we assume that $R\left(t\right)$ is of arc-length parameter (i.e, $\begin{array}{c}\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(At\right)}^{n}}{n!}=\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(-\frac{k}{m}{t}^{2}\right)}^{n}{I}^{n}}{\left(2n\right)!}+\underset{n=0}{\overset{\infty }{\sum }}\frac{{\left(-\frac{k}{m}{t}^{2}\right)}^{n}At}{\left(2n+1\right)!}\\ =\frac{{\left(-1\right)}^{n}{\left(\sqrt{\frac{k}{m}t}\right)}^{2n}I}{\left(2n\right)!}+\frac{{\left(-1\right)}^{n}{\left(\sqrt{\frac{k}{m}t}\right)}^{2n}At}{\left(2n+1\right)!}\\ =\left[\begin{array}{cc}\mathrm{cos}\sqrt{\frac{k}{m}t}& 0\\ 0& \mathrm{cos}\sqrt{\frac{k}{m}t}\end{array}\right]+\left[\begin{array}{cc}0& \sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}t}\\ -\sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}t}& 0\end{array}\right]\\ =\left[\begin{array}{cc}\mathrm{cos}\sqrt{\frac{k}{m}t}& \sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}t}\\ -\sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}t}& \mathrm{cos}\sqrt{\frac{k}{m}t}\end{array}\right]\end{array}$ ).

Definition 8 The normal vector $R\left(t\right)$ is defined to be $X\left(t\right)$ .

Theorem 3 For a given curve $X\left(t\right)=R\left(t\right)X\left(0\right)$ of arc-length parameter, $R\left(t\right)X\left(0\right)=\left[\begin{array}{cc}\mathrm{cos}\sqrt{\frac{k}{m}}t& \sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}}t\\ -\sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}}t& \mathrm{cos}\sqrt{\frac{k}{m}}t\end{array}\right]×\left[\begin{array}{c}0\\ {v}_{0}\end{array}\right]=\left[\begin{array}{c}\sqrt{\frac{m}{k}}{v}_{0}\mathrm{sin}\sqrt{\frac{k}{m}}t\\ {v}_{0}\mathrm{cos}\sqrt{\frac{k}{m}}t\end{array}\right]$ is perpendicular to $X\left(t\right)={v}_{0}\sqrt{\frac{m}{k}}\mathrm{sin}\sqrt{\frac{k}{m}}t$ .

Proof: Since $V\left(t\right)={v}_{0}\mathrm{cos}\sqrt{\frac{k}{m}}t$. Then

$m\frac{{\text{d}}^{2}x}{\text{d}{t}^{2}}=-kx-r\frac{\text{d}x}{\text{d}t}\cdot X\left(t\right)=\left[\begin{array}{c}x\\ {x}^{\prime }\end{array}\right]$.

Therefore, $\frac{\text{d}X}{\text{d}t}=\left[\begin{array}{c}{x}^{\prime }\\ {x}^{″}\end{array}\right]=\left[\begin{array}{c}{x}^{\prime }\\ -\frac{k}{m}x-\frac{r}{m}{x}^{\prime }\end{array}\right]=\left[\begin{array}{cc}0& 1\\ -\frac{k}{m}& -\frac{r}{m}\end{array}\right]×\left[\begin{array}{c}x\\ {x}^{\prime }\end{array}\right]$, and $X\left(t\right)={\text{e}}^{tA}X\left(0\right)$ is perpendicular to $X\left(0\right)=\left[\begin{array}{c}x\left(0\right)\\ {x}^{\prime }\left(0\right)\end{array}\right]=\left[\begin{array}{c}0\\ {v}_{0}\end{array}\right]$.

Definition 9 The binormal vector of curve a at s is defined to be $A\left(t\right)$.

The modulus of $\mathrm{det}\left(\lambda I-A\right)=\mathrm{det}\left[\begin{array}{cc}\lambda & -1\\ \frac{k}{m}& \lambda +\frac{{r}^{2}}{m}\end{array}\right]=0$ is constantly 1, since

$\lambda \left(\lambda +\frac{r}{m}\right)+\frac{k}{m}={\lambda }^{2}+\frac{r}{m}\lambda +\frac{k}{m}=0$.

We now present the definition of torsion, a concept that will be further explored based on our previous introduction concerning solving linear ODEs.

Definition 10 (Torsion) For a regular curve ${\lambda }_{1}=\frac{-r+\sqrt{{r}^{2}-4km}}{2m},{\lambda }_{2}=\frac{-r-\sqrt{{r}^{2}-4km}}{2m}$ of arc-length parameter, whose binormal vector is ${J}_{A}$ , $\left[\begin{array}{cc}{\lambda }_{1}& 0\\ 0& {\lambda }_{2}\end{array}\right]$ defined to be the torsion ofthe curve.

Lemma: $\left[\begin{array}{cc}{\lambda }_{1}& 1\\ 0& {\lambda }_{2}\end{array}\right]$

Proof: ${J}_{A}$ can be expressed as

${J}_{A}$

so $AQ=Q{J}_{A}$, and since $\left[\begin{array}{cc}1& 1\\ \frac{-r-\sqrt{{r}^{2}-4km}}{2m}& \frac{-r+\sqrt{{r}^{2}-4km}}{2m}\end{array}\right]$ is perpendicular to $Q=\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]$, we can then conclude that ${Q}^{-1}$ is perpendicular to ${Q}^{-1}=\frac{1}{\mathrm{det}Q}\left[\begin{array}{cc}d& -b\\ -c& a\end{array}\right]$. Therefore, ${Q}^{-1}=\frac{1}{\frac{\sqrt{{r}^{2}-4mk}}{m}}\left[\begin{array}{cc}\frac{-r+\sqrt{{r}^{2}-4mk}}{2m}& -1\\ \frac{r+\sqrt{{r}^{2}-4mk}}{2m}& 1\end{array}\right]$, as ${\text{e}}^{tA}$ is also perpendicular to $\begin{array}{c}Q{\text{e}}^{\left[\begin{array}{cc}{\lambda }_{1}& 0\\ 0& {\lambda }_{2}\end{array}\right]}{Q}^{-1}=Q×\left[\begin{array}{cc}{\text{e}}^{t{\lambda }_{1}}& 0\\ 0& {\text{e}}^{t{\lambda }_{1}}{Q}^{-1}\end{array}\right]=\left[\begin{array}{cc}1& 1\\ {\lambda }_{1}& {\lambda }_{2}\end{array}\right]×\left[\begin{array}{cc}{\text{e}}^{t{\lambda }_{1}}& 0\\ 0& {\text{e}}^{t{\lambda }_{2}}\end{array}\right]×\left[\begin{array}{cc}{\lambda }_{2}& -1\\ -{\lambda }_{1}& 1\end{array}\right]\\ =\left[\begin{array}{cc}{\text{e}}^{t{\lambda }_{1}}& {\text{e}}^{t{\lambda }_{2}}\\ {\lambda }_{1}{\text{e}}^{t{\lambda }_{1}}& {\lambda }_{2}{\text{e}}^{t{\lambda }_{2}}\end{array}\right]×\left[\begin{array}{cc}{\lambda }_{2}& -1\\ -{\lambda }_{1}& 1\end{array}\right]\\ =\left[\begin{array}{cc}{\lambda }_{2}{\text{e}}^{t{\lambda }_{1}}-{\lambda }_{1}{\text{e}}^{t{\lambda }_{2}}& -{\text{e}}^{t{\lambda }_{1}}+{\text{e}}^{t{\lambda }_{2}}\\ 0& -{\lambda }_{1}{\text{e}}^{t{\lambda }_{1}}+{\lambda }_{2}{\text{e}}^{t{\lambda }_{2}}\end{array}\right]\end{array}$.

Now that assume we are given the curvature $\begin{array}{c}X\left(t\right)={\text{e}}^{tA}X\left(0\right)=\left[\begin{array}{cc}{\lambda }_{2}{\text{e}}^{t{\lambda }_{1}}-{\lambda }_{1}{\text{e}}^{t{\lambda }_{2}}& -{\text{e}}^{t{\lambda }_{1}}+{\text{e}}^{t{\lambda }_{2}}\\ 0& -{\lambda }_{1}{\text{e}}^{t{\lambda }_{1}}+{\lambda }_{2}{\text{e}}^{t{\lambda }_{2}}\end{array}\right]×\left[\begin{array}{c}0\\ {v}_{0}\end{array}\right]\\ =\left[\begin{array}{c}\left(-{\text{e}}^{t{\lambda }_{1}}+{\text{e}}^{t{\lambda }_{2}}\right){v}_{0}\\ -{\lambda }_{1}{\text{e}}^{t{\lambda }_{1}}+{\lambda }_{2}{\text{e}}^{t{\lambda }_{2}}{v}_{0}\end{array}\right]\end{array}$ as well as the torsion $x\left(t\right)={v}_{0}\left({\text{e}}^{t{\lambda }_{2}}-{\text{e}}^{-t{\lambda }_{1}}\right)$ of an arclength curve ${\lambda }_{1}=\frac{-r+\sqrt{{r}^{2}-4km}}{2m},{\lambda }_{2}=\frac{-r-\sqrt{{r}^{2}-4km}}{2m}$, we need to calculate $\sqrt{{r}^{2}-4km}>0$ the binormal vector, $x\left(t\right)$ the normal vector, and ${v}_{0}\left({\text{e}}^{t\frac{-r-\sqrt{{r}^{2}-4km}}{2m}}-{\text{e}}^{-t\frac{-r+\sqrt{{r}^{2}-4km}}{2m}}\right)$ the velocity vector. Our first step is to find the way to represent $\sqrt{{r}^{2}-4km}<0$, $x\left(t\right)$, and ${v}_{0}\left({\text{e}}^{-\frac{r}{2m}t}\left({\text{e}}^{it\sqrt{4mk-{r}^{2}}}-{\text{e}}^{-it\sqrt{4mk-{r}^{2}}}\right)\right)$ using $\mathrm{sin}\theta =\frac{{\text{e}}^{i\theta }-{\text{e}}^{-i\theta }}{2i}$, $x\left(t\right)$, $2{v}_{0}{\text{e}}^{-\frac{-r}{2m}t}\left(\mathrm{sin}\sqrt{4km-{r}^{2}}\right)t$ as well as $E\left(t\right)$ and $E\left(t\right)$.

As defined, $m\frac{\text{d}v}{\text{d}t}=qv×B+qE\left(t\right)$. The magnetic force $qv×B$ is perpendicular to both $v$ and $B$, and the cross product $v×B$ can be written as the matrix product $Bv$, where $B=\left[\begin{array}{ccc}0& {B}_{3}& -{B}_{2}\\ -{B}_{3}& 0& {B}_{1}\\ {B}_{2}& -{B}_{1}& 0\end{array}\right]$ is skew-symmetric. As $\mathrm{det}\left(\lambda I-B\right)=\mathrm{det}\left[\begin{array}{ccc}\lambda & -{B}_{3}& {B}_{2}\\ {B}_{3}& \lambda & -{B}_{1}\\ -{B}_{2}& {B}_{1}& \lambda \end{array}\right]=\lambda \left({\lambda }^{2}+{B}_{1}^{2}+{B}_{2}^{2}+{B}_{3}^{2}\right)$, the roots of $\mathrm{det}\left(\lambda I-B\right)=0$ are ${\lambda }_{1}=0,{\lambda }_{2}=i|B|,{\lambda }_{3}=-i|B|$, where

$|B|=\sqrt{{B}_{1}^{2}+{B}_{2}^{2}+{B}_{3}^{2}}$ (6.8)
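A quick numerical check (with assumed field components) that the matrix built from $B$ is indeed skew-symmetric and has spectrum $\left\{0,i|B|,-i|B|\right\}$:

```python
import numpy as np

# A sample (hypothetical) magnetic field vector.
B1, B2, B3 = 0.6, -1.1, 2.0
B = np.array([[0.0,  B3, -B2],
              [-B3, 0.0,  B1],
              [ B2, -B1, 0.0]])
normB = np.sqrt(B1**2 + B2**2 + B3**2)   # |B|, Equation (6.8)

# det(lambda I - B) = lambda (lambda^2 + |B|^2), so the spectrum is {0, ±i|B|}.
eigvals = np.linalg.eigvals(B)
assert np.allclose(eigvals.real, 0.0, atol=1e-12)
assert np.allclose(np.sort(eigvals.imag), [-normB, 0.0, normB])
```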

Therefore, we have the following equation for the magnetic force:

$qv×B=q\left[\begin{array}{ccc}0& {B}_{3}& -{B}_{2}\\ -{B}_{3}& 0& {B}_{1}\\ {B}_{2}& -{B}_{1}& 0\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]$ (6.9)

and the differential equation concerning $v$ can be written in the following form:

$\frac{\text{d}v}{\text{d}t}=\frac{q}{m}Bv+\frac{q}{m}E\left(t\right)$ (6.10)

To find the eigenvectors of $B$, we solve $\left(B-\lambda I\right)v=0$ for each eigenvalue. For $\lambda =0$, $\left[\begin{array}{ccc}0& {B}_{3}& -{B}_{2}\\ -{B}_{3}& 0& {B}_{1}\\ {B}_{2}& -{B}_{1}& 0\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=0$ gives ${V}_{1}=\left[\begin{array}{c}{B}_{1}\\ {B}_{2}\\ {B}_{3}\end{array}\right]$. For $\lambda =i|B|$, solving $\left[\begin{array}{ccc}-i|B|& {B}_{3}& -{B}_{2}\\ -{B}_{3}& -i|B|& {B}_{1}\\ {B}_{2}& -{B}_{1}& -i|B|\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=0$ gives ${V}_{2}=\left[\begin{array}{c}i|B|{B}_{2}-{B}_{1}{B}_{3}\\ -i|B|{B}_{1}-{B}_{2}{B}_{3}\\ {B}_{1}^{2}+{B}_{2}^{2}\end{array}\right]$. For $\lambda =-i|B|$, solving $\left[\begin{array}{ccc}i|B|& {B}_{3}& -{B}_{2}\\ -{B}_{3}& i|B|& {B}_{1}\\ {B}_{2}& -{B}_{1}& i|B|\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=0$ gives the conjugate ${V}_{3}=\overline{{V}_{2}}=\left[\begin{array}{c}-i|B|{B}_{2}-{B}_{1}{B}_{3}\\ i|B|{B}_{1}-{B}_{2}{B}_{3}\\ {B}_{1}^{2}+{B}_{2}^{2}\end{array}\right]$.
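The eigenvectors can be verified numerically. The sketch below uses hypothetical components ${B}_{1},{B}_{2},{B}_{3}$; note the sign convention chosen here, with the second component of ${V}_{2}$ equal to $-i|B|{B}_{1}-{B}_{2}{B}_{3}$ and ${V}_{3}=\overline{{V}_{2}}$, is the one that satisfies $B{V}_{2}=i|B|{V}_{2}$:

```python
import numpy as np

B1, B2, B3 = 0.6, -1.1, 2.0          # hypothetical field components
nB = np.sqrt(B1**2 + B2**2 + B3**2)  # |B|
B = np.array([[0.0,  B3, -B2],
              [-B3, 0.0,  B1],
              [ B2, -B1, 0.0]])

V1 = np.array([B1, B2, B3])
V2 = np.array([1j*nB*B2 - B1*B3, -1j*nB*B1 - B2*B3, B1**2 + B2**2])
V3 = np.conj(V2)

assert np.allclose(B @ V1, 0)            # eigenvalue 0
assert np.allclose(B @ V2,  1j*nB * V2)  # eigenvalue  i|B|
assert np.allclose(B @ V3, -1j*nB * V3)  # eigenvalue -i|B|
```

With these conventions one also finds $\mathrm{det}\left[{V}_{1}\text{ }{V}_{2}\text{ }{V}_{3}\right]=-2i{|B|}^{3}\left({B}_{1}^{2}+{B}_{2}^{2}\right)$, so the three eigenvectors are linearly independent whenever ${B}_{1}^{2}+{B}_{2}^{2}\ne 0$.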

Writing $Q=\left[\begin{array}{ccc}{B}_{1}& i|B|{B}_{2}-{B}_{1}{B}_{3}& -i|B|{B}_{2}-{B}_{1}{B}_{3}\\ {B}_{2}& -i|B|{B}_{1}-{B}_{2}{B}_{3}& i|B|{B}_{1}-{B}_{2}{B}_{3}\\ {B}_{3}& {B}_{1}^{2}+{B}_{2}^{2}& {B}_{1}^{2}+{B}_{2}^{2}\end{array}\right]$, we have $BQ=Q{J}_{B}$ with ${J}_{B}=\left[\begin{array}{ccc}0& 0& 0\\ 0& i|B|& 0\\ 0& 0& -i|B|\end{array}\right]$. A direct computation gives $\mathrm{det}\left(Q\right)=-2i{|B|}^{3}\left({B}_{1}^{2}+{B}_{2}^{2}\right)$, which is nonzero whenever ${B}_{1}^{2}+{B}_{2}^{2}\ne 0$, so ${Q}^{-1}=\frac{1}{\mathrm{det}\left(Q\right)}\mathrm{adj}\left(Q\right)$ exists. The resolvent of the linear ODE (6.10) is therefore

$R\left(t,\tau \right)={\text{e}}^{\left(t-\tau \right)\frac{q}{m}B}=Q{\text{e}}^{\left(t-\tau \right)\frac{q}{m}{J}_{B}}{Q}^{-1}=Q\left[\begin{array}{ccc}1& 0& 0\\ 0& {\text{e}}^{\left(t-\tau \right)\frac{q}{m}i|B|}& 0\\ 0& 0& {\text{e}}^{\left(t-\tau \right)\frac{q}{m}\left(-i|B|\right)}\end{array}\right]{Q}^{-1}$.

According to our linear ODE theory, given the initial value $v\left(0\right)$ there exists a unique solution, namely $v\left(t\right)=R\left(t,0\right)v\left(0\right)+{\int }_{0}^{t}R\left(t,\tau \right)\frac{q}{m}E\left(\tau \right)\text{d}\tau$. Taking $v\left(0\right)=0$, this determines $v\left(t\right)$, and the position follows from $x\left(t\right)=x\left(0\right)+{\int }_{0}^{t}v\left(\tau \right)\text{d}\tau$ with $x\left(0\right)=0$. Till now, we have proved the existence and uniqueness of the solution $v\left(t\right),x\left(t\right)$.
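The variation-of-constants formula $v\left(t\right)=R\left(t,0\right)v\left(0\right)+{\int }_{0}^{t}R\left(t,\tau \right)\frac{q}{m}E\left(\tau \right)\text{d}\tau$ can be validated against a direct numerical integration of $m\frac{\text{d}v}{\text{d}t}=qv×B+qE\left(t\right)$. All parameter values and the sample field $E\left(t\right)$ below are hypothetical:

```python
import numpy as np

# Hypothetical setup: charge q, mass m, constant B field, oscillating E field.
q, m = 1.0, 2.0
B1, B2, B3 = 0.3, -0.4, 1.2
Bmat = np.array([[0.0, B3, -B2], [-B3, 0.0, B1], [B2, -B1, 0.0]])
E = lambda t: np.array([np.cos(t), 0.0, np.sin(2 * t)])

# Resolvent R(t, tau) = e^{(t - tau)(q/m)B}, built from the eigendecomposition.
lam, Q = np.linalg.eig((q / m) * Bmat.astype(complex))
Qinv = np.linalg.inv(Q)
R = lambda t, tau: (Q @ np.diag(np.exp((t - tau) * lam)) @ Qinv).real

def v_exact(t, v0, n=2000):
    """v(t) = R(t,0) v(0) + integral_0^t R(t,tau) (q/m) E(tau) dtau (trapezoid rule)."""
    taus = np.linspace(0.0, t, n)
    vals = np.array([R(t, tau) @ ((q / m) * E(tau)) for tau in taus])
    dt = taus[1] - taus[0]
    integral = dt * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))
    return R(t, 0.0) @ v0 + integral

# Cross-check against a direct RK4 integration of v' = (q/m)(B v + E).
def v_rk4(t, v0, n=2000):
    f = lambda s, v: (q / m) * (Bmat @ v + E(s))
    h, v, s = t / n, v0.astype(float), 0.0
    for _ in range(n):
        k1 = f(s, v); k2 = f(s + h/2, v + h/2*k1)
        k3 = f(s + h/2, v + h/2*k2); k4 = f(s + h, v + h*k3)
        v, s = v + h/6*(k1 + 2*k2 + 2*k3 + k4), s + h
    return v

v0 = np.zeros(3)                     # v(0) = 0 as in the text
assert np.allclose(v_exact(3.0, v0), v_rk4(3.0, v0), atol=1e-4)
```

Since $\frac{q}{m}B$ is skew-symmetric, $R\left(t,\tau \right)$ is a rotation matrix, which is why the homogeneous part of the motion preserves $|v|$.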

Finally, we have the following equations:

$v\left(t\right)=R\left(t,0\right)v\left(0\right)+{\int }_{0}^{t}R\left(t,\tau \right)\frac{q}{m}E\left(\tau \right)\text{d}\tau ,\text{ }x\left(t\right)={\int }_{0}^{t}v\left(\tau \right)\text{d}\tau$ (6.11)

Therefore, we have determined the motion $x\left(t\right)$ of a charged particle in a constant magnetic field $B$ subject to a time-dependent electric field $E\left(t\right)$.

7. Conclusion

Based on the six sections above, we see that linear ODEs have a vast range of applications in Physics and Geometry. They allow us to characterize certain evolution functions and to solve equations relating the rate of change of a function to its value. Moreover, by applying the matrix method for solving linear ODEs, we are able to simplify and solve equations of higher dimension. Overall, the matrix method is useful and effective for calculating multiple variables at the same time.

Acknowledgements

Some physics background: a theoretical expression for the viscous force on a ball at low speed is given by Stokes' law, $F=6\pi \eta rv$, where $r$ represents the radius of the ball, $v$ the speed, and $\eta$ the viscosity of the fluid. In the question, we may write the viscous force directly as $-rv$, with $r$ the damping coefficient.
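As a small illustration of Stokes' law (all numeric values below are assumed, not taken from the text):

```python
import math

# Stokes' law F = 6*pi*eta*r*v for a small ball moving slowly through a fluid.
eta = 1.0e-3      # assumed viscosity (roughly that of water), Pa*s
r = 1.0e-3        # assumed ball radius, m
v = 0.05          # assumed speed, m/s

F = 6 * math.pi * eta * r * v       # viscous drag force, N
r_damp = 6 * math.pi * eta * r      # effective damping coefficient: F = r_damp * v
assert F > 0
assert math.isclose(F, r_damp * v)
```

The constant $6\pi \eta r$ plays the role of the damping coefficient $r$ in the oscillator equation $m\stackrel{¨}{x}+r\stackrel{˙}{x}+kx=0$ studied earlier.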

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

Cartan, H. (1981) Differential Calculus. Hermann.

Axler, S. (2015) Linear Algebra Done Right. Springer. https://doi.org/10.1007/978-3-319-11080-6

Vizvari, Z., Sari, Z., Klincsik, M. and Odry, P. (2020) Exact Schemes for Second-Order Linear Differential Equations in Self-Adjoint Cases. Advances in Difference Equations, 2020. https://doi.org/10.1186/s13662-020-02957-7

Fage, M.K. and Maximey, G.A. (1974) On Approximate Solution of Systems of Linear Ordinary Differential Equations. Acta Universitatis Carolinae. Mathematica et Physica, 15, 17.

Vector Space. https://mathworld.wolfram.com/VectorSpace.html

Chantladze, T., Lomtatidze, A. and Ugulava, D. (2000) Conjugacy and Disconjugacy Criteria for Second Order Linear Ordinary Differential Equations. Archivum Mathematicum, 36, 313-323.

Zdeněk, H. (2005) Continuous Dependence of Inverse Fundamental Matrices of Generalized Linear Ordinary Differential Equations on a Parameter. Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica, 44, 39-48.

Schäfke, R. and Schmidt, D. (2006) The Connection Problem for General Linear Ordinary Differential Equations at Two Regular Singular Points with Applications in the Theory of Special Functions. SIAM Journal on Mathematical Analysis, 11, 848-862. https://doi.org/10.1137/0511076