Derivation of Gaussian Probability Distribution: A New Approach

Abstract

The celebrated de Moivre–Laplace limit theorem shows that the probability density function of the Gaussian distribution arises from the binomial probability mass function under specified conditions. The de Moivre–Laplace approach is cumbersome, as it relies heavily on many lemmas and theorems. This paper presents an alternative, less rigorous method of deriving the Gaussian distribution from a basic random experiment, conditional on some assumptions.

Share and Cite:

Adeniran, A. , Faweya, O. , Ogunlade, T. and Balogun, K. (2020) Derivation of Gaussian Probability Distribution: A New Approach. Applied Mathematics, 11, 436-446. doi: 10.4236/am.2020.116031.

1. Introduction

A well-celebrated, fundamental probability distribution for the class of continuous random variables is the classical Gaussian distribution, named after the German mathematician Carl Friedrich Gauss in 1809.

Definition 1.1 Let $\mu$ and $\sigma$ be constants with $-\infty <\mu <\infty$ and $\sigma >0$ . The function

$f\left(x;\mu ,\sigma \right)=\frac{1}{\sqrt{2\pi {\sigma }^{2}}}{\text{e}}^{-\frac{1}{2}{\left(\frac{x-\mu }{\sigma }\right)}^{2}};\text{ }\text{ }\text{for}\text{ }-\infty <x<\infty$ (1)

is called the normal probability density function of a random variable X with parameters $\mu$ and $\sigma$.
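As an illustrative numerical check of Definition 1.1 (an aside, not part of the original derivation), the density (1) can be evaluated and integrated by the midpoint rule; the parameter values $\mu =2$ and $\sigma =0.5$ below are hypothetical:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Equation (1): (1 / sqrt(2*pi*sigma^2)) * exp(-((x - mu) / sigma)^2 / 2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / math.sqrt(2 * math.pi * sigma ** 2)

# The density peaks at x = mu, with height 1 / (sigma * sqrt(2*pi))
peak = normal_pdf(2.0, mu=2.0, sigma=0.5)

# Midpoint-rule integral over a wide interval; should be very close to 1
n, lo, hi = 100_000, 2.0 - 10.0, 2.0 + 10.0
dx = (hi - lo) / n
area = sum(normal_pdf(lo + (i + 0.5) * dx, 2.0, 0.5) * dx for i in range(n))
```

The peak equals $1/\left(\sigma \sqrt{2\pi }\right)$, and the computed area matches 1 to within the quadrature error.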

Both in theory and in applications, the Gaussian distribution is, without equivocation, the most essential and most widely referenced distribution in statistics.

The well-known method of deriving this distribution first appeared in the second edition of The Doctrine of Chances by Abraham de Moivre (hence the name de Moivre–Laplace limit theorem), published in 1738. The mathematical statement of this popular theorem follows.

Theorem 1.1 (de Moivre–Laplace limit theorem) As n grows large ( $n\to \infty$ ), for x in the neighborhood of np and for moderate values of p ( $p\ne 0$ and $p\ne 1$ ), we can approximate

$\left(\begin{array}{c}n\\ x\end{array}\right){p}^{x}{q}^{n-x}\approx \frac{1}{\sqrt{2\pi npq}}{\text{e}}^{-\frac{{\left(x-np\right)}^{2}}{2npq}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}p+q=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}p,q>0.$ (2)

Explicitly, the theorem asserts the following: suppose $n\in {ℤ}^{+}$, and let p and q be probabilities with $p+q=1$. The function

$b\left(x;n,p\right)=\left(\begin{array}{c}n\\ x\end{array}\right){p}^{x}{\left(1-p\right)}^{n-x}\text{ }\text{ }\text{for}\text{ }\text{\hspace{0.17em}}x=0,1,2,\cdots ,n$ (3)

called the binomial probability function, converges to the probability density function of the normal distribution, with mean np and standard deviation $\sqrt{np\left(1-p\right)}$, as $n\to \infty$.
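The approximation asserted by Theorem 1.1 can be illustrated numerically. The following sketch compares the two sides of Equation (2); the values n = 1000, p = 0.3 and x = 310 (near np = 300) are chosen purely for illustration:

```python
import math

def binom_pmf(x, n, p):
    # Left-hand side of Equation (2): C(n, x) * p^x * q^(n - x)
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

def normal_approx(x, n, p):
    # Right-hand side of Equation (2): a normal density with mean np and variance npq
    var = n * p * (1 - p)
    return math.exp(-((x - n * p) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p, x = 1000, 0.3, 310
exact = binom_pmf(x, n, p)
approx = normal_approx(x, n, p)
rel_error = abs(exact - approx) / exact
```

For n this large the two sides agree to within roughly one percent, and the agreement improves as n grows.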

Although de Moivre proved the result only for $p=\frac{1}{2}$, the proof was later extended and generalized to all values of p (the probability of success in any trial) such that p is neither too small nor too large. Feller's treatment of the result was subsequently expounded upon, and the uniqueness property of the moment generating function technique has also been used to prove the same theorem.

In this paper, we attempt to answer the following question: is there an alternative procedure for deriving the Gaussian probability density function, apart from the de Moivre–Laplace limit theorem approach, which relies heavily on many lemmas and theorems (the Stirling approximation formula, the Maclaurin series expansion, etc.)?

2. Existing Technique

This section presents a summary proof of the existing de Moivre–Laplace limit theorem. First and foremost, we state, with proof, the most important lemma underlying the theorem: the Stirling approximation principle.

Lemma 2.1 (Stirling Approximation Principle) Given an integer $n;n>0$, the factorial of a large number n can be replaced with the approximation

$n!\approx \sqrt{2\pi n}{\left(\frac{n}{\text{e}}\right)}^{n}$

Proof 2.1 This lemma can be derived using the integral definition of the factorial,

$n!=\Gamma \left(n+1\right)={\int }_{0}^{\infty }\text{ }{x}^{n}{\text{e}}^{-x}\text{d}x$ (4)
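As a quick sanity check of Equation (4) (an aside, not part of the proof), the gamma function should reproduce the factorial at positive integers; Python's math.gamma evaluates this Euler integral in closed form:

```python
import math

# Equation (4): n! = Gamma(n + 1), the Euler integral of the second kind.
# Compare the exact factorial with math.gamma for small integers.
pairs = [(math.factorial(n), math.gamma(n + 1)) for n in range(1, 10)]
```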

Note that the derivative of the logarithm of the integrand can be written

$\frac{\text{d}}{\text{d}x}\mathrm{ln}\left({x}^{n}{\text{e}}^{-x}\right)=\frac{\text{d}}{\text{d}x}\left(n\mathrm{ln}x-x\right)=\frac{n}{x}-1$ (5)

The integrand is sharply peaked, with the important contribution coming only from near $x=n$. Therefore, let $x=n+\delta$, where $|\delta |\ll n$, and write

$\begin{array}{c}\mathrm{ln}\left({x}^{n}{\text{e}}^{-x}\right)=n\mathrm{ln}\left(n+\delta \right)-\left(n+\delta \right)=n\mathrm{ln}\left[n\left(1+\frac{\delta }{n}\right)\right]-\left(n+\delta \right)\\ =n\left[\mathrm{ln}\left(n\right)+\mathrm{ln}\left(1+\frac{\delta }{n}\right)\right]-\left(n+\delta \right)\end{array}$ (6)

Recall that the Maclaurin series of $f\left(x\right)=\mathrm{ln}\left(1+x\right)$ is $x-\frac{1}{2}{x}^{2}+O\left({x}^{3}\right)$. Therefore,

$\mathrm{ln}\left({x}^{n}{\text{e}}^{-x}\right)=n\left[\mathrm{ln}\left(n\right)+\frac{\delta }{n}-\frac{1}{2}\left(\frac{{\delta }^{2}}{{n}^{2}}\right)+\cdots \right]-\left(n+\delta \right)=\mathrm{ln}\left({n}^{n}\right)-n-\frac{{\delta }^{2}}{2n}+\cdots$ (7)

Taking the exponential on both sides of the preceding Equation (7) gives

${x}^{n}{\text{e}}^{-x}\approx {\text{e}}^{\mathrm{ln}\left({n}^{n}\right)-n-\frac{{\delta }^{2}}{2n}}={\text{e}}^{\mathrm{ln}\left({n}^{n}\right)}{\text{e}}^{-n}{\text{e}}^{-\frac{{\delta }^{2}}{2n}}={\left(\frac{n}{\text{e}}\right)}^{n}{\text{e}}^{-\frac{{\delta }^{2}}{2n}}$ (8)

Plugging (8) into the integral expression for $n!$, that is, (4), and noting that $x=n+\delta$ implies $\text{d}x=\text{d}\delta$ with lower limit $\delta =-n$, gives

$n!\approx {\int }_{-n}^{\infty }\text{ }{\left(\frac{n}{\text{e}}\right)}^{n}{\text{e}}^{-\frac{{\delta }^{2}}{2n}}\text{d}\delta \approx {\left(\frac{n}{\text{e}}\right)}^{n}{\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-\frac{{\delta }^{2}}{2n}}\text{d}\delta$ (9)

where the lower limit has been extended to $-\infty$ because the integrand is negligibly small for $\delta <-n$.

From (9), let $I={\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-\frac{{\delta }^{2}}{2n}}\text{d}\delta$ and introduce $\kappa$ as a dummy variable such that

${I}^{2}={\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-\frac{{\delta }^{2}}{2n}}\text{d}\delta ×{\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-\frac{{\kappa }^{2}}{2n}}\text{d}\kappa ={\int }_{-\infty }^{\infty }{\int }_{-\infty }^{\infty }\text{ }{\text{e}}^{-\frac{1}{2n}\left({\delta }^{2}+{\kappa }^{2}\right)}\text{d}\delta \text{d}\kappa$ (10)

Transforming ${I}^{2}$ to polar coordinates, $\delta =\rho \mathrm{cos}\left(\theta \right)$ and $\kappa =\rho \mathrm{sin}\left(\theta \right)$, implies ${\delta }^{2}+{\kappa }^{2}={\rho }^{2}$, with the Jacobian (J) of the transformation given as

$J=|\begin{array}{cc}\frac{\partial \delta }{\partial \rho }& \frac{\partial \delta }{\partial \theta }\\ \frac{\partial \kappa }{\partial \rho }& \frac{\partial \kappa }{\partial \theta }\end{array}|=|\begin{array}{cc}\mathrm{cos}\left(\theta \right)& -\rho \mathrm{sin}\left(\theta \right)\\ \mathrm{sin}\left(\theta \right)& \rho \mathrm{cos}\left(\theta \right)\end{array}|=\rho$ (11)

Hence, with the substitution $u=\frac{{\rho }^{2}}{2n}$ (so that $n\text{d}u=\rho \text{d}\rho$),

$\begin{array}{c}{I}^{2}={\int }_{0}^{2\pi }{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{\rho }^{2}}{2n}}|J|\text{d}\rho \text{d}\theta ={\int }_{0}^{2\pi }{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{\rho }^{2}}{2n}}\rho \text{d}\rho \text{d}\theta \\ =-n{\int }_{0}^{2\pi }{\left[{\text{e}}^{-u}\right]}_{0}^{\infty }\text{d}\theta =n{\int }_{0}^{2\pi }\text{ }\text{d}\theta =2\pi n\end{array}$ (12)

Therefore, $I=\sqrt{2\pi \text{ }n}$. Substituting for I in (9) gives

$n!\approx \sqrt{2\pi \text{ }n}{\left(\frac{n}{\text{e}}\right)}^{n}$ $⊡$ (13)
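Lemma 2.1 is easy to test numerically. Since the next term of Stirling's series multiplies (13) by $\left(1+\frac{1}{12n}\right)$, the relative error of the approximation should fall just below $\frac{1}{12n}$; a short sketch:

```python
import math

def stirling(n):
    # Equation (13): sqrt(2*pi*n) * (n / e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def rel_error(n):
    # Relative error of Stirling's approximation against the exact factorial
    return abs(math.factorial(n) - stirling(n)) / math.factorial(n)

err_10, err_50 = rel_error(10), rel_error(50)
```

The error shrinks roughly like $\frac{1}{12n}$, which is why the approximation is adequate even for moderate n.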

We now begin with proof of theorem (1.1) using the popular existing technique.

Proof 2.2 Using the result of lemma (2.1), Equation (3) can be rewritten as

$\begin{array}{c}f\left(x;n,p\right)\approx \frac{\sqrt{2\pi n}{\left(\frac{n}{\text{e}}\right)}^{n}}{\sqrt{2\pi x}{\left(\frac{x}{\text{e}}\right)}^{x}\sqrt{2\pi \left(n-x\right)}{\left(\frac{n-x}{\text{e}}\right)}^{n-x}}{p}^{x}{\left(1-p\right)}^{n-x}\\ =\frac{1}{\sqrt{2\pi }}\frac{{n}^{n+\frac{1}{2}}}{{x}^{x+\frac{1}{2}}{\left(n-x\right)}^{n-x+\frac{1}{2}}}{p}^{x}{\left(1-p\right)}^{n-x}\end{array}$ (14)

Multiplying both the numerator and the denominator of Equation (14) by $\sqrt{n}\equiv {n}^{\frac{1}{2}}$ gives

$\begin{array}{c}f\left(x;n,p\right)\approx \frac{1}{\sqrt{2\pi n}}\frac{{n}^{n+\frac{1}{2}+\frac{1}{2}+x-x}}{{x}^{x+\frac{1}{2}}{\left(n-x\right)}^{n-x+\frac{1}{2}}}{p}^{x}{\left(1-p\right)}^{n-x}\\ =\frac{1}{\sqrt{2\pi n}}{\left(\frac{x}{n}\right)}^{-x-\frac{1}{2}}{\left(\frac{n-x}{n}\right)}^{-n+x-\frac{1}{2}}{p}^{x}{\left(1-p\right)}^{n-x}\end{array}$ (15)

Since x is in the neighborhood of np, change variables to $x=np+\epsilon$, where $\epsilon$ measures the distance between the mean np of the binomial and the measured quantity x. Rewrite (15) in terms of $\epsilon$ and simplify further as follows

$\begin{array}{l}f\left(x;n,p\right)\approx \frac{1}{\sqrt{2\pi n}}{\left(\frac{np+\epsilon }{n}\right)}^{-x-\frac{1}{2}}{\left(\frac{n-np-\epsilon }{n}\right)}^{-n+x-\frac{1}{2}}{p}^{x}{\left(1-p\right)}^{n-x}\\ \approx \frac{1}{\sqrt{2\pi n}}{\left[p\left(1+\frac{\epsilon }{np}\right)\right]}^{-x-\frac{1}{2}}{\left[\left(1-p\right)\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)\right]}^{-n+x-\frac{1}{2}}{p}^{x}{\left(1-p\right)}^{n-x}\\ \approx \frac{1}{\sqrt{2\pi n}}{\left(1+\frac{\epsilon }{np}\right)}^{-x-\frac{1}{2}}{\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)}^{-n+x-\frac{1}{2}}{p}^{-\frac{1}{2}}{\left(1-p\right)}^{-\frac{1}{2}}\end{array}$

to get

$f\left(x;n,p\right)\approx \frac{1}{\sqrt{2\pi np\left(1-p\right)}}{\left(1+\frac{\epsilon }{np}\right)}^{-x-\frac{1}{2}}{\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)}^{-n+x-\frac{1}{2}}$ (16)

Note that $x=\mathrm{exp}\left(\mathrm{ln}x\right)$. Therefore, rewriting (16) in exponential form gives

$\begin{array}{l}f\left(x;n,p\right)\approx \frac{1}{\sqrt{2\pi np\left(1-p\right)}}\mathrm{exp}\left[\mathrm{ln}\left\{{\left(1+\frac{\epsilon }{np}\right)}^{-x-\frac{1}{2}}{\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)}^{-n+x-\frac{1}{2}}\right\}\right]\\ \approx \frac{1}{\sqrt{2\pi np\left(1-p\right)}}\mathrm{exp}\left[\left(-x-\frac{1}{2}\right)\mathrm{ln}\left(1+\frac{\epsilon }{np}\right)+\left(-n+x-\frac{1}{2}\right)\mathrm{ln}\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)\right]\end{array}$

$\begin{array}{c}f\left(x;n,p\right)\approx \frac{1}{\sqrt{2\pi np\left(1-p\right)}}\mathrm{exp}\left[\left(-np-\epsilon -\frac{1}{2}\right)\mathrm{ln}\left(1+\frac{\epsilon }{np}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(-n\left(1-p\right)+\epsilon -\frac{1}{2}\right)\mathrm{ln}\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)\right]\end{array}$ (17)

Suppose $f\left(x\right)=\mathrm{ln}\left(1+x\right)$; by the Maclaurin series, $f\left(x\right)=x-\frac{1}{2}{x}^{2}+O\left({x}^{3}\right)$, and similarly $\mathrm{ln}\left(1-x\right)=-x-\frac{1}{2}{x}^{2}+O\left({x}^{3}\right)$. So that $\mathrm{ln}\left(1+\frac{\epsilon }{np}\right)\approx \frac{\epsilon }{np}-\frac{1}{2}{\left(\frac{\epsilon }{np}\right)}^{2}$ and $\mathrm{ln}\left(1-\frac{\epsilon }{n\left(1-p\right)}\right)\approx -\frac{\epsilon }{n\left(1-p\right)}-\frac{1}{2}{\left(\frac{\epsilon }{n\left(1-p\right)}\right)}^{2}$. As a result,

$\begin{array}{l}f\left(x;n,p\right)\\ \approx \frac{1}{\sqrt{2\pi np\left(1-p\right)}}\mathrm{exp}\left[-\epsilon +\frac{{\epsilon }^{2}}{2np}-\frac{{\epsilon }^{2}}{np}+\epsilon +\frac{{\epsilon }^{2}}{2n\left(1-p\right)}-\frac{{\epsilon }^{2}}{n\left(1-p\right)}\right]\\ =\frac{1}{\sqrt{2\pi np\left(1-p\right)}}\mathrm{exp}\left[-\frac{1}{2}\left(\frac{{\epsilon }^{2}}{np\left(1-p\right)}\right)\right]\end{array}$ (18)

Recall that $x=np+\epsilon$, which implies ${\epsilon }^{2}={\left(x-np\right)}^{2}$. From the binomial distribution, $np=\mu$ and $np\left(1-p\right)={\sigma }^{2}$, so $\sqrt{np\left(1-p\right)}=\sigma$. Making the appropriate substitutions in Equation (18) yields

$f\left(x;n,p\right)\approx \frac{1}{\sqrt{2\pi {\sigma }^{2}}}\mathrm{exp}\left[-\frac{1}{2}\frac{{\left(x-\mu \right)}^{2}}{{\sigma }^{2}}\right]=\frac{1}{\sigma \sqrt{2\pi }}{\text{e}}^{-\frac{1}{2}{\left(\frac{x-\mu }{\sigma }\right)}^{2}};\text{ }\text{ }\text{for}\text{ }-\infty <x<\infty$ (19)

This confirms the theorem. $⊡$

We recommend that readers interested in the detailed proof of the theorem consult the studies listed in the references.

3. The Proposed Technique

Suppose a random experiment of throwing a needle or any other dart-like object at the origin of the Cartesian plane is performed, with the aim of hitting the centre (see Figure 1).

Due to human inconsistency, or lack of perfection, varying results of the throws generate random errors. To make the derivation possible and less rigorous, we make the following assumptions:

1) The errors are independent of the orientation of the coordinate system.

2) Errors in perpendicular directions are independent. This means that being too high doesn’t alter the probability of being off to the right.

Figure 1. The possible results of the dart experiment.

3) Small errors are more likely than large errors. That is, throws are more likely to land in region P than in either Q or R, since region P is closer to the target (the origin). Similarly, for the same reason, region Q is more likely than region R. Furthermore, there is a higher tendency of hitting region V than either S or T, since V has the bigger surface area and the distances from the origin are approximately the same.

From Figure 2, let the probability of the needle falling in the vertical strip from x to $x+\Delta \text{ }x$ be denoted $p\left(x\right)\Delta x$. Similarly, let the probability of the needle falling in the horizontal strip from y to $y+\Delta \text{ }y$ be $p\left(y\right)\Delta y$. Obviously, the function cannot be constant, owing to the stochastic nature of the experiment. In this study, our interest is to obtain the form and characteristics of the function $p\left(x\right)$. By the second assumption, the probability of the needle falling in the shaded region ABCD (see Figure 2) is

$p\left(x\right)\Delta x\cdot p\left(y\right)\Delta y$

Note that any region r units from the origin with area $\Delta \text{ }x\Delta \text{ }y$ has the same probability, which is a consequence of the assumption that the errors do not depend on the orientation. We can therefore say that

$p\left(x\right)\Delta x\cdot p\left(y\right)\Delta y=p\left(x\right)p\left(y\right)\Delta x\Delta y=g\left(r\right)\Delta x\Delta y$ (20)

where

$g\left(r\right)=p\left(x\right)p\left(y\right)$ (21)

From the fundamental rules of calculus, differentiating both sides of Equation (21) with respect to $\theta$ (using the product rule) gives

$0=p\left(x\right)\frac{\text{d}}{\text{d}\theta }p\left(y\right)+p\left(y\right)\frac{\text{d}}{\text{d}\theta }p\left(x\right)$ (22)

Here, ${g}^{\prime }\left(r\right)=0$ since $g\left(.\right)$ is independent of orientation. By transformation to polar coordinates, $x=r\mathrm{cos}\theta$ and $y=r\mathrm{sin}\theta$, we can rewrite the derivatives in Equation (22) as

Figure 2. The typical example of the experiment.

$0=p\left(x\right)\frac{\text{d}}{\text{d}\theta }p\left(y=r\mathrm{sin}\theta \right)+p\left(y\right)\frac{\text{d}}{\text{d}\theta }p\left(x=r\mathrm{cos}\theta \right)$ (23)

Using chain rule of differentiation, (23) becomes

$0=p\left(x\right){p}^{\prime }\left(y\right)r\mathrm{cos}\theta -p\left(y\right){p}^{\prime }\left(x\right)r\mathrm{sin}\theta$ (24)

Rewriting Equation (24) by replacing $r\mathrm{cos}\theta$ with x and $r\mathrm{sin}\theta$ with y yields

$0=p\left(x\right){p}^{\prime }\left(y\right)x-p\left(y\right){p}^{\prime }\left(x\right)y$ (25)

The above differential equation can be put in a form such that it can be solved using variable separable technique as

$\frac{{p}^{\prime }\left(x\right)}{p\left(x\right)x}=\frac{{p}^{\prime }\left(y\right)}{p\left(y\right)y}$ (26)

Since x and y are independent, this differential equation can hold for all x and y if and only if each side of (26) is equal to a constant. That is, if

$\frac{{p}^{\prime }\left(x\right)}{p\left(x\right)x}=\frac{{p}^{\prime }\left(y\right)}{p\left(y\right)y}=c.$ (27)

Consider $\frac{{p}^{\prime }\left(x\right)}{p\left(x\right)x}=c$ in (27) and rearrange to have

$\frac{{p}^{\prime }\left(x\right)}{p\left(x\right)}=cx.$ (28)

Integrating Equation (28) gives

$\mathrm{ln}p\left(x\right)=\frac{c{x}^{2}}{2}+{k}_{1}\text{ }\text{ }\text{so}\text{\hspace{0.17em}}\text{that}\text{ }\text{\hspace{0.17em}}\text{ }p\left(x\right)=k{\text{e}}^{\frac{c{x}^{2}}{2}};\text{ }\text{ }\text{where}\text{\hspace{0.17em}}k={\text{e}}^{{k}_{1}}.$ (29)
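The solution (29) can be verified directly against the separable Equation (28); in the sketch below the constants c and k are arbitrary illustrative values (c negative, in line with the third assumption):

```python
import math

c, k = -2.0, 1.3  # hypothetical constants, chosen only for illustration

def p(x):
    # Solution (29): p(x) = k * exp(c * x^2 / 2)
    return k * math.exp(c * x * x / 2)

# Central-difference check of the ODE (28): p'(x) = c * x * p(x)
x, h = 0.7, 1e-6
numeric_deriv = (p(x + h) - p(x - h)) / (2 * h)
analytic_deriv = c * x * p(x)
```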

By the third assumption, the constant of integration c must be negative; replacing c with $-c$, where $c>0$, we write the probability function (29) as

$p\left(x\right)=k{\text{e}}^{-\frac{c}{2}{x}^{2}};\text{ }\text{ }\text{where}\text{ }\text{\hspace{0.17em}}c\in {ℝ}^{+}$ (30)

If there is a horizontal shift of the target from the origin to an arbitrary point $\mu$, which now marks the new centre/target, then the probability function in (30) becomes

$p\left(x\right)=k{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}$ (31)

Differentiating (31) and setting the derivative equal to zero gives

${p}^{\prime }\left(x\right)=-ck\left(x-\mu \right){\text{e}}^{-\frac{c{\left(x-\mu \right)}^{2}}{2}}=0$ (32)

since ${\text{e}}^{-\frac{c{\left(x-\mu \right)}^{2}}{2}}\ne 0$, this implies $x=\mu$. Therefore, Equation (31) attains its maximum value at $x=\mu$ and has points of inflection at $x=\mu ±\frac{1}{\sqrt{c}}$. Obviously, (31) has given

us the basic form of the Gaussian distribution, with constants k and c and with the domain of X running from $-\infty$ to $\infty$. Therefore, for Equation (31) to be regarded as a proper probability density function, the total area under the curve must be 1. That is

${\int }_{-\infty }^{\infty }\text{ }k{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x=1$ (33)

For a function $f\left(x\right)$ symmetric about $\mu$, ${\int }_{-\infty }^{\infty }\text{ }f\left(x\right)\text{d}x=2{\int }_{\mu }^{\infty }\text{ }f\left(x\right)\text{d}x$. Applying this property to Equation (33) yields

${\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x=\frac{1}{2k}.$ (34)

Squaring both sides of (34) gives

${\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x\cdot {\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(y-\mu \right)}^{2}}\text{d}y=\frac{1}{2k}×\frac{1}{2k}$ (35)

This is possible since x and y are just dummy variables. Recall that x and y are also independent, so we can write the product on the LHS of (35) as a double integral to produce

${\int }_{\mu }^{\infty }{\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}\left[{\left(x-\mu \right)}^{2}+{\left(y-\mu \right)}^{2}\right]}\text{d}x\text{d}y=\frac{1}{4{k}^{2}}.$ (36)

Putting $x-\mu =z⇒\text{d}x=\text{d}z$ and $y-\mu =w⇒\text{d}y=\text{d}w$ in the preceding Equation (36) gives

${\int }_{0}^{\infty }{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}\left[{z}^{2}+{w}^{2}\right]}\text{d}z\text{d}w=\frac{1}{4{k}^{2}}.$ (37)

The double integral (37) can be evaluated using polar coordinates, $z=r\mathrm{cos}\theta$ and $w=r\mathrm{sin}\theta$, with the Jacobian (J) of the transformation given as

$J=|\frac{\partial \left(z,w\right)}{\partial \left(r,\theta \right)}|=|\begin{array}{cc}\frac{\text{d}z}{\text{d}r}& \frac{\text{d}z}{\text{d}\theta }\\ \frac{\text{d}w}{\text{d}r}& \frac{\text{d}w}{\text{d}\theta }\end{array}|=|\begin{array}{cc}\mathrm{cos}\theta & -r\mathrm{sin}\theta \\ \mathrm{sin}\theta & r\mathrm{cos}\theta \end{array}|=r,$ (38)

and

${z}^{2}+{w}^{2}={\left(r\mathrm{cos}\theta \right)}^{2}+{\left(r\mathrm{sin}\theta \right)}^{2}={r}^{2}.$ (39)

So, Equation (37) now becomes

${\int }_{0}^{\frac{\pi }{2}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{c{r}^{2}}{2}}|J|\text{d}r\text{d}\theta ={\int }_{0}^{\frac{\pi }{2}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{c{r}^{2}}{2}}r\text{d}r\text{d}\theta =\frac{1}{4{k}^{2}}.$ (40)

Evaluating the double integral ${\int }_{0}^{\frac{\pi }{2}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{c{r}^{2}}{2}}r\text{d}r\text{d}\theta$ in Equation (40) by first letting $u=\frac{c{r}^{2}}{2}$, and then solving the resulting equation for k, yields

$k=\sqrt{\frac{c}{2\pi }}$ (41)
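The normalizing constant (41) can be confirmed numerically: with $k=\sqrt{\frac{c}{2\pi }}$, the function (31) should integrate to 1. The values of c and $\mu$ below are hypothetical:

```python
import math

c, mu = 4.0, 1.5                      # illustrative values; any c > 0 works
k = math.sqrt(c / (2 * math.pi))      # Equation (41)

# Midpoint-rule integral of p(x) = k * exp(-c * (x - mu)^2 / 2) over a wide interval
n, lo, hi = 200_000, mu - 20.0, mu + 20.0
dx = (hi - lo) / n
total = sum(k * math.exp(-c * (lo + (i + 0.5) * dx - mu) ** 2 / 2) * dx for i in range(n))
```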

Putting (41) in (31), the probability density function, $p\left(x\right)$, becomes

$p\left(x\right)=\sqrt{\frac{c}{2\pi }}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}$ (42)

Again, the integral of a probability density function over its domain equals 1. Therefore, from (42)

${\int }_{-\infty }^{\infty }\text{ }p\left(x\right)\text{d}x={\int }_{-\infty }^{\infty }\sqrt{\frac{c}{2\pi }}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x=2{\int }_{\mu }^{\infty }\sqrt{\frac{c}{2\pi }}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x=1$ (43)

Further simplification of the preceding Equation (43) gives

${\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x=\sqrt{\frac{\pi }{2c}}$ (44)

One of the important goals in mathematical theory of statistics is to obtain the mean and variance of any probability function under study. The mean, $\mu$, is defined to be the value of the integral ${\int }_{-\infty }^{\infty }\text{ }xp\left(x\right)\text{d}x$. The variance, ${\sigma }^{2}$, is the value of the integral ${\int }_{-\infty }^{\infty }{\left(x-\mu \right)}^{2}p\left(x\right)\text{d}x$. Therefore, using Equation (42),

${\sigma }^{2}={\int }_{-\infty }^{\infty }{\left(x-\mu \right)}^{2}\sqrt{\frac{c}{2\pi }}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x=2{\int }_{\mu }^{\infty }{\left(x-\mu \right)}^{2}\sqrt{\frac{c}{2\pi }}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x$ (45)

or equivalently as

${\sigma }^{2}=\sqrt{\frac{2c}{\pi }}{\int }_{\mu }^{\infty }\left(x-\mu \right)\left[\left(x-\mu \right){\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\right]\text{d}x$ (46)

Consider Equation (46); using integration by parts ( $\int \text{ }u\text{d}v=uv-\int \text{ }v\text{d}u$ ) with $u=x-\mu ⇒\text{d}u=\text{d}x$ and $\text{d}v=\left(x-\mu \right){\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x⇒v=-\frac{1}{c}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}$, we have

$\begin{array}{c}{\sigma }^{2}=\sqrt{\frac{2c}{\pi }}\left[{-\frac{x-\mu }{c}{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}|}_{\mu }^{\infty }+\frac{1}{c}{\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x\right]\\ =\sqrt{\frac{2c}{\pi }}\left[-\frac{1}{c}\underset{n\to \infty }{\mathrm{lim}}\left(n-\mu \right){\text{e}}^{-\frac{c}{2}{\left(n-\mu \right)}^{2}}+\frac{1}{c}{\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x\right]\\ =\sqrt{\frac{2c}{\pi }}\left[0+\frac{1}{c}{\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x\right]=\frac{1}{c}\sqrt{\frac{2c}{\pi }}{\int }_{\mu }^{\infty }\text{ }{\text{e}}^{-\frac{c}{2}{\left(x-\mu \right)}^{2}}\text{d}x\end{array}$

where the boundary term vanishes at both limits, since $x-\mu =0$ at $x=\mu$.

Putting (44) into the preceding equation gives

${\sigma }^{2}=\frac{1}{c}×\sqrt{\frac{2c}{\pi }}×\sqrt{\frac{\pi }{2c}}\text{ }⇒c=\frac{1}{{\sigma }^{2}}$ (47)
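The relation $c=\frac{1}{{\sigma }^{2}}$ in (47) can likewise be checked by quadrature: the variance integral (45) evaluated for the density (42) should return $1/c$. The precision parameter below is hypothetical:

```python
import math

c, mu = 2.5, 0.0                      # illustrative precision parameter and mean
k = math.sqrt(c / (2 * math.pi))      # normalizing constant from Equation (41)

# Midpoint-rule estimate of the variance integral in Equation (45)
n, lo, hi = 200_000, mu - 20.0, mu + 20.0
dx = (hi - lo) / n
var = 0.0
for i in range(n):
    x = lo + (i + 0.5) * dx
    var += (x - mu) ** 2 * k * math.exp(-c * (x - mu) ** 2 / 2) * dx
```

The computed variance should agree with $1/c$, consistent with (47).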

Substituting (47) in (42), the derived probability density function has form

$p\left(x\right)=\sqrt{\frac{1}{2\pi {\sigma }^{2}}}{\text{e}}^{-\frac{1}{2{\sigma }^{2}}{\left(x-\mu \right)}^{2}}=\frac{1}{\sigma \sqrt{2\pi }}{\text{e}}^{-\frac{1}{2}{\left(\frac{x-\mu }{\sigma }\right)}^{2}},\text{ }\text{for}\text{ }-\infty <x<\infty$ (48)

Based on the three basic assumptions stated above, we have easily derived Equation (48), widely known as the normal or Gaussian distribution function, with mean $\mu$ and standard deviation $\sigma$.

To verify that Equation (48) is a proper probability density function with parameters $\mu$ and $\sigma$, we show that the integral

$I={\int }_{-\infty }^{\infty }\frac{1}{\sigma \sqrt{2\pi }}\mathrm{exp}\left[-\frac{1}{2}{\left(\frac{x-\mu }{\sigma }\right)}^{2}\right]\text{d}x$

is equal to 1.

Change variables of integration by letting $z=\frac{x-\mu }{\sigma }$, which implies that $\text{d}x=\sigma \text{d}z$. Then

$I={\int }_{-\infty }^{\infty }\frac{1}{\sigma \sqrt{2\pi }}{\text{e}}^{-\frac{{z}^{2}}{2}}\sigma \text{d}z=\frac{2}{\sqrt{2\pi }}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{z}^{2}}{2}}\text{d}z=\sqrt{\frac{2}{\pi }}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{z}^{2}}{2}}\text{d}z$

so that

${I}^{2}=\left[\sqrt{\frac{2}{\pi }}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{x}^{2}}{2}}\text{d}x\right]\left[\sqrt{\frac{2}{\pi }}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{y}^{2}}{2}}\text{d}y\right]=\frac{2}{\pi }{\int }_{0}^{\infty }{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{x}^{2}+{y}^{2}}{2}}\text{d}x\text{d}y$

Here $x,y$ are dummy variables. Switching to polar coordinate by making the substitutions $x=r\mathrm{cos}\theta$, $y=r\mathrm{sin}\theta$ produces r as the Jacobian of the transformation. So

${I}^{2}=\frac{2}{\pi }{\int }_{0}^{\frac{\pi }{2}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-\frac{{r}^{2}}{2}}r\text{d}r\text{d}\theta$

Put $a=\frac{{r}^{2}}{2}⇒\text{d}r=\frac{\text{d}a}{r}$. Therefore,

${I}^{2}=\frac{2}{\pi }{\int }_{0}^{\frac{\pi }{2}}{\int }_{0}^{\infty }\text{ }{\text{e}}^{-a}r\frac{\text{d}a}{r}\text{d}\theta =-\frac{2}{\pi }{\int }_{0}^{\frac{\pi }{2}}{\left[{\text{e}}^{-a}\right]}_{0}^{\infty }\text{d}\theta =\frac{2}{\pi }{\int }_{0}^{\frac{\pi }{2}}\text{ }\text{d}\theta =1$

Thus $I=1$, indicating that (48) is a proper probability density function. Other properties of the distribution, such as moments, the moment generating function, the cumulant generating function, the characteristic function, and parameter estimation, can be found in the references.

4. Conclusion

While working toward the outlined objective, we have established an approach that not only serves as an alternative derivation of the Gaussian probability density function but is also free from rigorous mathematical analysis and independent of auxiliary lemmas and theorems. This paper can be classified as a theoretical study of the Gaussian distribution, and it can serve as an excellent teaching reference in probability and statistics classes, where basic calculus, the skill to manipulate algebraic expressions, the Maclaurin series expansion, and the Euler integral of the second kind (the gamma function) are the only background requirements.

Acknowledgements

The authors are highly grateful to the editor and the anonymous referees for reading through the manuscript and for constructive comments and suggestions that helped improve the revised version of the paper.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

Van der Vaart, A.W. (1998) Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511802256

Blume, J.D. and Royall, R.M. (2003) Illustrating the Law of Large Numbers (and Confidence Intervals). The American Statistician, 57, 51-57. https://doi.org/10.1198/0003130031081

Lesigne, E. (2005) Heads or Tails: An Introduction to Limit Theorems in Probability. Volume 28 of Student Mathematical Library, American Mathematical Society, Providence. https://doi.org/10.1090/stml/028

Walck, C. (2007) Handbook on Statistical Distributions for Experimentalists. Particle Physics Group, Fysikum, University of Stockholm, Stockholm.

Proschan, M.A. (2008) The Normal Approximation to the Binomial. The American Statistician, 62, 62-63. https://doi.org/10.1198/000313008X267848

Shao, J. (1999) Mathematical Statistics. Springer Texts in Statistics, Springer-Verlag, New York.

Soong, T.T. (2004) Fundamentals of Probability and Statistics for Engineers. John Wiley & Sons Ltd., Chichester.

Feller, W. (1973) An Introduction to Probability Theory and Its Applications. Volume 1, Third Edition, John Wiley and Sons, Hoboken.

Adeniran, A.T., Ojo, J.F. and Olilima, J.O. (2018) A Note on the Asymptotic Convergence of Bernoulli Distribution. Research & Reviews: Journal of Statistics and Mathematical Sciences, 4, 19-32.

Inlow, M. (2010) A Moment Generating Function Proof of the Lindeberg-Lévy Central Limit Theorem. The American Statistician, 64, 228-230. https://doi.org/10.1198/tast.2010.09159

Bagui, S.C., Bhaumik, D.K. and Mehra, K.L. (2013) A Few Counter Examples Useful in Teaching Central Limit Theorem. The American Statistician, 67, 49-56. https://doi.org/10.1080/00031305.2012.755361

Bagui, S.C., Bagui, S.S. and Hemasinha, R. (2013) Non-Rigorous Proofs of Stirling's Formula. Mathematics and Computer Education, 47, 115-125.

Bagui, S.C. and Mehra, K.L. (2016) Convergence of Binomial, Poisson, Negative-Binomial, and Gamma to Normal Distribution: Moment Generating Functions Technique. American Journal of Mathematics and Statistics, 6, 115-121.

Casella, G. and Berger, R.L. (2002) Statistical Inference. Second Edition, Duxbury Thomson Learning, Integre Technical Publishing Co., Albuquerque.

Young, G.A. and Smith, R.L. (2005) Essentials of Statistical Inference. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511755392

Podgorski, K. (2009) Lecture Notes on Statistical Inference. Department of Mathematics and Statistics, University of Limerick, Limerick.