1. Introduction
Deep Learning is one of the Machine Learning methods and it has attracted much attention recently. Learning in general is divided into supervised learning and unsupervised learning. In this paper we discuss only supervised learning. For a general introduction to Deep Learning, see for example [1]. The monographs [2] [3] [4] are standard textbooks.
*Dedicated to the memory of Professor Ichiro Yokota.
Deep Learning is based on big data, so supervised learning gives a heavy load to the computer. In order to alleviate the burden we usually use a practical method called the minibatch (a collection of randomly selected small data from the big data; see Figure 5). Although the method is commonly used, it is not widely understood why it is so effective from the mathematical viewpoint. We point out in Theorem II that a certain condition, which is practically satisfied in ordinary applications, is essential for the effectiveness of the minibatch method.
In this paper we present a short and concise review of Deep Learning for non-experts and provide a mathematical reinforcement to the minibatch from the viewpoint of Linear Algebra. Theorems I and II in the text are our main results, and some related problems are presented.
After reading this paper, readers are encouraged to tackle a remarkable paper [5] which is definitely a monumental achievement.
2. Simple Neural Network
For a general introduction to Neuron Dynamics see for example [6] [7].
For non-experts let us draw a neuron model based on the step function $f(x)$. In Figure 1 the set $\{x_1, x_2, \ldots, x_n\}$ is the input data, $y$ is the output data and the set $\{w_1, w_2, \ldots, w_n\}$ is the weights of synaptic connections. The parameter $\theta$ is the threshold of the neuron.
Here, the output is
$$y = f\left(\sum_{i=1}^{n} w_i x_i - \theta\right),$$
where $f(x)$ is a simple step function defined by
$$f(x) = \begin{cases} 1 & (x \ge 0) \\ 0 & (x < 0). \end{cases}$$
Since an algorithm called backpropagation in neural network systems uses derivatives of the activation functions, the simple step function $f(x)$ is not suitable (it is not differentiable at the origin and its derivative vanishes elsewhere). Therefore, we usually use a function called the sigmoid function instead of the step function (see Figure 2). This function is used for a nonlinear compression of data.
Definition The sigmoid function is defined by
$$\sigma(x) = \frac{1}{1 + e^{-x}}. \qquad (1)$$
It is easy to see that the function satisfies $0 < \sigma(x) < 1$ and that its derivatives are expressed as polynomials in the original function:
$$\sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr), \qquad \sigma''(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr)\bigl(1 - 2\sigma(x)\bigr). \qquad (2)$$
The graph is as follows (Figure 2).
Comment In Mathematics this is a famous function called the standard logistic function, while in Deep Learning this is called the sigmoid function.
A general logistic function
$$f(x) = \frac{L}{1 + e^{-\lambda(x - x_0)}}$$
has three parameters $L$, $\lambda$ and $x_0$: the overall height $L$, the parameter $\lambda$ specifying the inverse width of the transient region and the location $x_0$ of the center.
The standard logistic function is characterized by $L = 1$, $\lambda = 1$, $x_0 = 0$. See the lecture note [8] as to why the function is so important.
Let us draw a neuron model based on the sigmoid function; see Figure 3.
The most important thing is to improve the set of weights of the synaptic connections $\{w_1, w_2, \ldots, w_n\}$ by hard learning. For the purpose we prepare $m$ input data and teacher signals. In the following we assume $\theta = 0$ and $m \le n$ for simplicity. This causes no problem for the present explanation; there is a suitable value of $\theta$ for each specific application.
For $j = 1, 2, \ldots, m$ we set
$$x^{j} = \bigl(x_1^{j}, x_2^{j}, \ldots, x_n^{j}\bigr)^{T}, \qquad w = (w_1, w_2, \ldots, w_n)^{T},$$
where $x^{j}$ is an input datum, $d_j$ is the corresponding teacher signal and $T$ denotes the transpose of a vector (or a matrix).
Our method is to determine the weights of synaptic connections which match the output data to the given teacher signals as closely as possible. For the purpose we consider a square error like
$$E = \sum_{j=1}^{m} E_j, \qquad (3)$$
$$E_j = \frac{1}{2}\left(d_j - \sigma\left(\sum_{i=1}^{n} w_i x_i^{j}\right)\right)^{2} \qquad (4)$$
and determine $\{w_1, w_2, \ldots, w_n\}$ so that $E$ is minimized.
We want to realize this purpose by repeated learning. For that we consider a discrete dynamical system. Namely, we set
$$E(t) = \sum_{j=1}^{m} \frac{1}{2}\left(d_j - \sigma\left(\sum_{i=1}^{n} w_i(t)\, x_i^{j}\right)\right)^{2} \qquad (5)$$
for $t = 0, 1, 2, \ldots$.
The initial value $\{w_1(0), w_2(0), \ldots, w_n(0)\}$ is given appropriately (the choice is not so important).
2.1. Gradient Descent
For a general introduction to Gradient Descent see for example [9] .
The gradient descent is performed by changing $w_i(t)$ to
$$w_i(t+1) = w_i(t) - \epsilon\, \frac{\partial E(t)}{\partial w_i(t)} \quad (i = 1, 2, \ldots, n), \qquad (6)$$
where $\epsilon$ is a small positive constant, and we evaluate the error $E(t+1)$ with the renewed weights $\{w_i(t+1)\}$.
This renewal of the weights is the learning process. This is very popular in Mathematics1. We repeat this procedure for $T$ steps, where $T$ is taken large enough.
Let us proceed. We set $u_j = \sum_{i=1}^{n} w_i x_i^{j}$ and write $w_i = w_i(t)$, $\tilde{w}_i = w_i(t+1)$ for simplicity ($t$ omitted). From (5) a simple calculation gives
$$\frac{\partial E}{\partial w_i} = -\sum_{j=1}^{m} \bigl(d_j - \sigma(u_j)\bigr)\,\sigma'(u_j)\, x_i^{j}$$
and from (6) we obtain
$$\tilde{w}_i = w_i + \epsilon \sum_{j=1}^{m} \bigl(d_j - \sigma(u_j)\bigr)\,\sigma'(u_j)\, x_i^{j}.$$
By substituting this into (5) we obtain
$$E(t+1) = \sum_{j=1}^{m} \frac{1}{2}\left(d_j - \sigma\left(\sum_{i=1}^{n} \tilde{w}_i\, x_i^{j}\right)\right)^{2},$$
which is complicated enough. The renewal of the weights terminates when all $\partial E / \partial w_i = 0$, which is a critical point as shown in the next subsection.
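The repeated learning (5), (6) is easy to try on a computer. Below is a minimal NumPy sketch for a single sigmoid neuron with $\theta = 0$; the data, the constant $\epsilon$ and the number of steps $T$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n, m = 10, 4                        # dimension of input, number of data
X = rng.normal(size=(m, n))         # rows are the input data x^1, ..., x^m
d = rng.uniform(0.1, 0.9, size=m)   # teacher signals d_1, ..., d_m
w = rng.normal(size=n)              # initial weights (the choice is not so important)
eps, T = 0.5, 10000                 # learning constant and number of steps

for t in range(T):
    u = X @ w                               # u_j = sum_i w_i x_i^j
    y = sigmoid(u)
    grad = -((d - y) * y * (1.0 - y)) @ X   # dE/dw_i, using sigma' = sigma(1 - sigma)
    w = w - eps * grad                      # the renewal (6)

print(0.5 * np.sum((d - sigmoid(X @ w)) ** 2))  # E after learning: close to 0
```

Random data of this size are almost surely linearly independent (see the next subsection), so the error indeed decreases towards the unique global minimum.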
2.2. Number of Critical Points
Here we evaluate the number of critical points to be encountered in the process. First of all, let us show the definition of a critical point with a simple example.
Definition Let $f : \mathbf{R}^{n} \longrightarrow \mathbf{R}$ be a smooth function. Then a critical (or stationary) point $p \in \mathbf{R}^{n}$ is defined by
$$\frac{\partial f}{\partial x_1}(p) = \frac{\partial f}{\partial x_2}(p) = \cdots = \frac{\partial f}{\partial x_n}(p) = 0. \qquad (7)$$
A critical point is not necessarily a point giving a local minimum or a local maximum (for example, the origin is a critical point of $f(x, y) = x^{2} - y^{2}$, a saddle point).
We make an important assumption with respect to the input data.
Assumption (good data) For $m \le n$ the input data
$$x^{1},\ x^{2},\ \ldots,\ x^{m} \qquad (8)$$
are linearly independent.
The assumption tells us that there is a set of integers $\{i_1, i_2, \ldots, i_m\} \subset \{1, 2, \ldots, n\}$ satisfying
$$\det\begin{pmatrix} x_{i_1}^{1} & x_{i_2}^{1} & \cdots & x_{i_m}^{1} \\ x_{i_1}^{2} & x_{i_2}^{2} & \cdots & x_{i_m}^{2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{i_1}^{m} & x_{i_2}^{m} & \cdots & x_{i_m}^{m} \end{pmatrix} \ne 0. \qquad (9)$$
Let us recall
$$E = \sum_{j=1}^{m} \frac{1}{2}\left(d_j - \sigma\left(\sum_{i=1}^{n} w_i x_i^{j}\right)\right)^{2}$$
and try to find whether $E$ has a critical point. Suppose that $E$ has a critical point at the weights $\{w_1, w_2, \ldots, w_n\}$, namely
$$\frac{\partial E}{\partial w_1} = \frac{\partial E}{\partial w_2} = \cdots = \frac{\partial E}{\partial w_n} = 0.$$
Let us set $u_j = \sum_{i=1}^{n} w_i x_i^{j}$ and $a_j = \bigl(d_j - \sigma(u_j)\bigr)\,\sigma'(u_j)$ for simplicity. The equation can be written as
$$\frac{\partial E}{\partial w_i} = -\sum_{j=1}^{m} a_j\, x_i^{j} = 0 \quad (i = 1, 2, \ldots, n)$$
and it implies
$$\sum_{j=1}^{m} a_j\, x_{i_1}^{j} = \sum_{j=1}^{m} a_j\, x_{i_2}^{j} = \cdots = \sum_{j=1}^{m} a_j\, x_{i_m}^{j} = 0,$$
or in matrix form
$$\begin{pmatrix} x_{i_1}^{1} & x_{i_1}^{2} & \cdots & x_{i_1}^{m} \\ x_{i_2}^{1} & x_{i_2}^{2} & \cdots & x_{i_2}^{m} \\ \vdots & \vdots & \ddots & \vdots \\ x_{i_m}^{1} & x_{i_m}^{2} & \cdots & x_{i_m}^{m} \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
Since the determinant of the coefficient matrix is non-zero from (9) we have
$$a_1 = a_2 = \cdots = a_m = 0$$
and
$$d_j - \sigma(u_j) = 0 \quad (j = 1, 2, \ldots, m)$$
because $\sigma'(u_j) > 0$. This means
$$E = 0.$$
The result shows that there is no critical point in this process except for $E = 0$. Therefore, when modifying $\{w_1, w_2, \ldots, w_n\}$ by the gradient descent (6) successively there is no danger of being trapped by points taking local minima. Let us summarize the result :
Theorem I Under the assumption (8) there is only one global minimum, $E = 0$, and no other critical point.
Next, let us explain how to choose linearly independent data. We assume that $n$ is huge and $m$ is much smaller than $n$.
For $m$ input data $\{x^{1}, x^{2}, \ldots, x^{m}\} \subset \mathbf{R}^{n}$ we consider the $m \times m$ Gram determinant
$$G = \det\begin{pmatrix} (x^{1}, x^{1}) & (x^{1}, x^{2}) & \cdots & (x^{1}, x^{m}) \\ (x^{2}, x^{1}) & (x^{2}, x^{2}) & \cdots & (x^{2}, x^{m}) \\ \vdots & \vdots & \ddots & \vdots \\ (x^{m}, x^{1}) & (x^{m}, x^{2}) & \cdots & (x^{m}, x^{m}) \end{pmatrix}, \qquad (10)$$
where $(x^{i}, x^{j})$ denotes the inner product. If $m$ is not large, evaluation of the Gram determinant is not so difficult. It is well known that the input data are linearly independent if the Gram determinant is non-zero (see for example [10]).
Let us give a short explanation for non-experts. For simplicity we consider the case of $m = 3$ and set
$$a = x^{1}, \quad b = x^{2}, \quad c = x^{3} \in \mathbf{R}^{n}.$$
Then
$$G = \det\begin{pmatrix} (a, a) & (a, b) & (a, c) \\ (b, a) & (b, b) & (b, c) \\ (c, a) & (c, b) & (c, c) \end{pmatrix}$$
and we assume that $\{a, b, c\}$ are linearly dependent. For example, we can set
$$c = \alpha a + \beta b.$$
Short calculation after substituting this into the Gram determinant above gives
$$G = \det\begin{pmatrix} (a, a) & (a, b) & \alpha(a, a) + \beta(a, b) \\ (b, a) & (b, b) & \alpha(b, a) + \beta(b, b) \\ (c, a) & (c, b) & \alpha(c, a) + \beta(c, b) \end{pmatrix}$$
and some fundamental operations of determinant (subtracting $\alpha$ times the first column and $\beta$ times the second column from the third column) give
$$G = 0.$$
Namely, we have shown
$$\{a, b, c\} \ \text{linearly dependent} \Longrightarrow G = 0.$$
Taking the contraposition gives
$$G \ne 0 \Longrightarrow \{a, b, c\} \ \text{linearly independent}.$$
Comment For $m \le n$ we denote by $M(m, n; \mathbf{R})$ the set of all $m \times n$ matrices over $\mathbf{R}$ and denote
$$\widetilde{M}(m, n; \mathbf{R}) = \{X \in M(m, n; \mathbf{R}) \mid \operatorname{rank} X = m\}.$$
It is well known that the subset $\widetilde{M}(m, n; \mathbf{R})$ is open and dense in $M(m, n; \mathbf{R})$. Therefore, if we choose $m$ vectors from $\mathbf{R}^{n}$ randomly, they are almost always linearly independent.
3. Deep Neural Network
Deep Learning is the heart of Artificial Intelligence (AI) and we provide a concise review for non-experts. Deep Learning is carried out by a deep neural network consisting of the input layer ($K$ components), the (multiplex) intermediate layers ($L$, $M$, ... components) and the output layer ($N$ components). See Figure 4 (in which two intermediate layers are drawn).
3.1. Backpropagation
First, let us show a figure of a deep neural network2 (the figure is short in intermediate layers).
Let us explain some notation. $x = (x_1, x_2, \ldots, x_K)^{T}$ is the input data. In this case, the weights of synaptic connections are represented by matrices $W$, $V$, $U$ given by
$$W = (w_{ij}) \in M(L, K; \mathbf{R}), \quad V = (v_{ij}) \in M(M, L; \mathbf{R}), \quad U = (u_{ij}) \in M(N, M; \mathbf{R}) \qquad (11)$$
and our aim is educating these matrices.
The synaptic connections are expressed as
$$y = \sigma(Wx), \quad z = \sigma(Vy) \quad \text{and} \quad o = \sigma(Uz),$$
respectively, where the sigmoid function $\sigma$ acts on each component of a vector. See Figure 4 once more.
A comment is in order. The flow of data is as follows:
$$x \longrightarrow y = \sigma(Wx) \longrightarrow z = \sigma(Vy) \longrightarrow o = \sigma(Uz). \qquad (12)$$
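A minimal NumPy sketch of the flow (12) may help non-experts; the layer sizes $K, L, M, N$ and the random matrices below are placeholders, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

K, L, M, N = 8, 6, 5, 3          # sizes of the input, two intermediate and output layers
W = rng.normal(size=(L, K))      # weight matrices as in (11)
V = rng.normal(size=(M, L))
U = rng.normal(size=(N, M))

x = rng.normal(size=K)           # input data
y = sigmoid(W @ x)               # first intermediate layer
z = sigmoid(V @ y)               # final intermediate layer
o = sigmoid(U @ z)               # output layer
print(o)
```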
Now, if $d = (d_1, d_2, \ldots, d_N)^{T}$ is a set of teacher signals, then the square error becomes
$$E = \frac{1}{2}\sum_{k=1}^{N}\bigl(d_k - o_k\bigr)^{2}.$$
Let us consider the (discrete) time evolution of the model. For $t = 0, 1, 2, \ldots$, we must determine $\{W(t), V(t), U(t)\}$ and we expect
$$E(t) \approx 0$$
if $t$ is large enough (the end of Learning !).
Assume now that the system is in the $t$-th step:
$$E(t) = \frac{1}{2}\sum_{k=1}^{N}\bigl(d_k - o_k(t)\bigr)^{2}, \qquad o(t) = \sigma\Bigl(U(t)\,\sigma\bigl(V(t)\,\sigma(W(t)\,x)\bigr)\Bigr). \qquad (13)$$
Then the time evolution of the weights of synaptic connections is determined by the gradient descent
$$w_{ij}(t+1) = w_{ij}(t) - \epsilon\,\frac{\partial E(t)}{\partial w_{ij}(t)}, \quad v_{ij}(t+1) = v_{ij}(t) - \epsilon\,\frac{\partial E(t)}{\partial v_{ij}(t)}, \quad u_{ij}(t+1) = u_{ij}(t) - \epsilon\,\frac{\partial E(t)}{\partial u_{ij}(t)}, \qquad (14)$$
which is a natural generalization of that of Section 2.1.
Before the calculation let us make a comment on the derivatives of a composite function.
Comment Let us explain with a simple example. For a composite function
$$f(x) = g\bigl(h(x)\bigr)$$
the derivative is given by
$$f'(x) = g'\bigl(h(x)\bigr)\, h'(x).$$
A composite function is in general complicated, while its derivative is not so complicated.
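As a toy numerical check, the chain rule for the composite function $\sigma(\sigma(x))$, the case relevant to the layered network below, can be verified as follows; the sample points are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsigmoid(x):
    return sigmoid(x) * (1.0 - sigmoid(x))     # the identity (2)

f = lambda x: sigmoid(sigmoid(x))              # a composite function
x = np.linspace(-4.0, 4.0, 81)
h = 1e-5
numerical = (f(x + h) - f(x - h)) / (2.0 * h)  # central finite difference
chain_rule = dsigmoid(sigmoid(x)) * dsigmoid(x)
print(np.max(np.abs(numerical - chain_rule)))  # tiny: the chain rule holds
```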
3.2. Calculation
Let us perform the calculation of (14).
1) Case of $u_{ij}$: From $o = \sigma(Uz)$ we obtain
$$\frac{\partial E}{\partial u_{ij}} = -\bigl(d_i - o_i\bigr)\,\sigma'\Bigl(\sum_{k=1}^{M} u_{ik} z_k\Bigr)\, z_j.$$
2) Case of $v_{ij}$: From $z = \sigma(Vy)$ and $o = \sigma(Uz)$ we have
$$\frac{\partial E}{\partial v_{ij}} = -\sum_{k=1}^{N}\bigl(d_k - o_k\bigr)\,\sigma'\Bigl(\sum_{l=1}^{M} u_{kl} z_l\Bigr)\, u_{ki}\,\sigma'\Bigl(\sum_{l=1}^{L} v_{il} y_l\Bigr)\, y_j.$$
3) Case of $w_{ij}$: From $y = \sigma(Wx)$, $z = \sigma(Vy)$ and $o = \sigma(Uz)$ we have
$$\frac{\partial E}{\partial w_{ij}} = -\sum_{k=1}^{N}\sum_{l=1}^{M}\bigl(d_k - o_k\bigr)\,\sigma'\Bigl(\sum_{p=1}^{M} u_{kp} z_p\Bigr)\, u_{kl}\,\sigma'\Bigl(\sum_{p=1}^{L} v_{lp} y_p\Bigr)\, v_{li}\,\sigma'\Bigl(\sum_{p=1}^{K} w_{ip} x_p\Bigr)\, x_j.$$
The calculation so far is based on one input datum, for simplicity. However, we must treat a lot of data simultaneously. Let us denote by $a$ the number of data to be considered.
For $j = 1, 2, \ldots, a$ we set
$$x^{j} = \bigl(x_1^{j}, \ldots, x_K^{j}\bigr)^{T}, \quad d^{j} = \bigl(d_1^{j}, \ldots, d_N^{j}\bigr)^{T}, \quad y^{j} = \sigma(Wx^{j}), \quad z^{j} = \sigma(Vy^{j}), \quad o^{j} = \sigma(Uz^{j}).$$
By taking all these into consideration the square error function in (13) changes to
$$E(t) = \sum_{j=1}^{a} \frac{1}{2}\sum_{k=1}^{N}\bigl(d_k^{j} - o_k^{j}(t)\bigr)^{2} \qquad (15)$$
at the $t$-th step.
In this case the calculation is very simple. For example, since
$$\frac{\partial E}{\partial u_{ik}} = -\bigl(d_i - o_i\bigr)\,\sigma'\Bigl(\sum_{l=1}^{M} u_{il} z_l\Bigr)\, z_k$$
for one datum, it is only changed to
$$\frac{\partial E}{\partial u_{ik}} = -\sum_{j=1}^{a}\bigl(d_i^{j} - o_i^{j}\bigr)\,\sigma'\Bigl(\sum_{l=1}^{M} u_{il} z_l^{j}\Bigr)\, z_k^{j}.$$
The calculation for both $v_{ij}$ and $w_{ij}$ is quite similar (omitted).
In the following we make a mysterious assumption.
Assumption We assume that the data in the final step of the intermediate layers are linearly independent:
$$\{z^{1}, z^{2}, \ldots, z^{a}\} \ (\subset \mathbf{R}^{M}) \ \text{are linearly independent}. \qquad (16)$$
See Figure 4 and the flow of data (12).
We can show that there is no critical point in our system except for $E = 0$. Therefore, when modifying $\{W, V, U\}$ by the gradient descent (14) (with (13) replaced by (15)) successively there is no danger of being trapped by points taking local minima.
Proof: The proof is essentially the same as that of Section 2.2. However, the proof is the heart of the paper, so we repeat it.
A critical point is defined by the equations
$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial v_{ij}} = \frac{\partial E}{\partial u_{ij}} = 0 \quad \text{for all } i, j.$$
From (15) we have, for each fixed $i$,
$$\frac{\partial E}{\partial u_{ik}} = -\sum_{j=1}^{a}\bigl(d_i^{j} - o_i^{j}\bigr)\,\sigma'\Bigl(\sum_{l=1}^{M} u_{il} z_l^{j}\Bigr)\, z_k^{j} = 0 \quad (k = 1, 2, \ldots, M)$$
and set
$$a_j = \bigl(d_i^{j} - o_i^{j}\bigr)\,\sigma'\Bigl(\sum_{l=1}^{M} u_{il} z_l^{j}\Bigr),$$
and it implies
$$\sum_{j=1}^{a} a_j\, z_k^{j} = 0 \quad (k = 1, 2, \ldots, M).$$
By (16) there is, as in Section 2.2, a set of integers $\{k_1, k_2, \ldots, k_a\} \subset \{1, 2, \ldots, M\}$ whose corresponding determinant is non-zero. Therefore, its matrix form becomes
$$\begin{pmatrix} z_{k_1}^{1} & z_{k_1}^{2} & \cdots & z_{k_1}^{a} \\ z_{k_2}^{1} & z_{k_2}^{2} & \cdots & z_{k_2}^{a} \\ \vdots & \vdots & \ddots & \vdots \\ z_{k_a}^{1} & z_{k_a}^{2} & \cdots & z_{k_a}^{a} \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_a \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
Since the determinant of the coefficient matrix is non-zero from (16) we have
$$a_1 = a_2 = \cdots = a_a = 0,$$
and $\sigma' > 0$ gives
$$d_i^{j} - o_i^{j} = 0 \quad (j = 1, 2, \ldots, a).$$
Moreover, these equations are independent of $i$, so that we finally obtain
$$d_i^{j} = o_i^{j} \quad (i = 1, \ldots, N;\ j = 1, \ldots, a), \qquad \text{that is, } E = 0.$$
Let us summarize the result :
Theorem II Under the assumption (16) there is only one global minimum, $E = 0$, and no other critical point.
Let us emphasize that the assumption (16) is “local” because it refers to the linear independence of the data of the layer one step before the last of the flow:
$$x^{j} \longrightarrow y^{j} \longrightarrow z^{j} \longrightarrow o^{j}.$$
A question arises naturally.
Problem I What is the meaning of the assumption in the total flow of data ?
Last in this section let us comment on the minibatch of Deep Learning. When the number of input data $A$ is huge, the calculation of the time evolution of the weights of synaptic connections will give a heavy load to the computer. In order to alleviate it, the minibatch is practical and very useful; see Figure 5.
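To make the idea concrete, here is a minimal NumPy sketch of the minibatch procedure for the single neuron of Section 2, with arbitrary illustrative numbers: at each step a small random subset of the $A$ data is drawn and one gradient descent step is performed on that subset only.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n, A, a = 20, 10000, 32             # input dimension, number of data A, minibatch size a
X = rng.normal(size=(A, n))         # the big data
d = rng.uniform(0.1, 0.9, size=A)   # teacher signals
w = rng.normal(size=n)
eps, T = 0.5, 2000                  # learning constant and number of steps

for t in range(T):
    idx = rng.choice(A, size=a, replace=False)   # randomly selected small data
    Xb, db = X[idx], d[idx]                      # the minibatch
    y = sigmoid(Xb @ w)
    grad = -((db - y) * y * (1.0 - y)) @ Xb      # gradient evaluated on the minibatch only
    w = w - eps * grad

print(0.5 * np.sum((d - sigmoid(X @ w)) ** 2) / A)  # average square error over all A data
```

Each step thus touches only $a$ of the $A$ data, which is what alleviates the computational load.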
Then a question also arises naturally.
Problem II Is there a relation between A and T?
One may conjecture
$$T = \left[\frac{A}{a}\right],$$
where $[k]$ is the Gauss symbol, which gives the greatest integer less than or equal to $k$. For example, $[3.14] = 3$. However, I do not believe this is correct.
4. Concluding Remarks
Nowadays, studying Deep Learning is standard for university students around the world. However, it is not so easy for them because Deep Learning requires a lot of knowledge.
In this paper, we treated the minibatch of Deep Learning, gave a mathematical reinforcement to it from the viewpoint of Linear Algebra and presented some related problems. We expect that young researchers will solve the problems in the near future.
As a related topic of Artificial Intelligence, there is Information Retrieval. Since there is no more space, we only refer to some relevant works [11] [12] [13] .
Acknowledgements
We wish to thank Ryu Sasaki for useful suggestions and comments.
NOTES
1However, to choose $\epsilon$ in (6) correctly is a very hard problem.
2Figure 4 is of course not perfect because drawing a figure of a big size is not easy.