**Risk of Hearing Loss Caused by Multiple Acoustic Impulses in the Framework of Biovariability**

In our previous study, all subjects in the crowd were assumed to be statistically identical in injury susceptibility (*i.e.*, no biovariability), and we interpreted the observed injury risk caused by multiple impulses in terms of the immunity effects of preceding impulses on subsequent impulses. In this study, we focus on the case where all sound exposure events act independently in causing injury regardless of whether one is preceded by another (*i.e.*, no immunity effect). Instead, we explore the possibility of interpreting the observed logistic dose-response relation in the framework of biovariability of the crowd. Here biovariability means that subjects in the crowd have their own individual injury probabilities. That is, some subjects are biologically less or more susceptible to hearing loss injury than others. We derive analytically the distribution of individual injury probability that produces the observed logistic dose-response relation. For several parameter values, we prove that the derived distribution is mathematically a proper density function. We further study the asymptotic approximations for the density function and discuss their significance in practical numerical computation with finite precision arithmetic. Our mathematical analysis implies that the observed logistic dose-response relation can be theoretically explained in the framework of biovariability in the absence of immunity effect.

Share and Cite: *Health*, **10**, 604-628. doi: 10.4236/health.2018.105048.

1. Introduction

Hearing loss impacts about 360 million people worldwide (over 5 percent of the global population) [1]. One cause of hearing loss is exposure to loud sounds [2]. For example, impulse noise from explosive blasts and weapon firings can cause traumatic injuries to the auditory system [3] [4] [5].

In [4], hearing loss injury experiments were conducted at the University of California, San Diego, using chinchilla as the animal subjects. Each chinchilla was individually exposed to impulse noises generated by a small shock tube placed at various distances from the animal to cover a range of intensity levels; after the exposure, the injury status of the animal was measured and recorded; results from multiple individual animals were assembled into data sets to mimic the collective response of a crowd in responding to impulse noises. The main reason to choose chinchilla as a surrogate animal model for human exposures is that the hearing capabilities of chinchilla and humans are very similar. However, the injury susceptibility threshold of chinchilla was found to be lower than that of humans. The chinchilla data were therefore shifted by 28 dBA to scale from chinchilla to humans. Based on the scaled data, an empirical injury model was developed for humans exposed to multiple sound impulses of equal intensity.

In our recent work [6], we used the framework of immunity to interpret the empirical dose-response relation from [3] for exposure to multiple sound impulses. More specifically, from the observed dose-response relation, we derived the (negative) synergistic effect of a sequence of impulses in causing injury. We revealed that the phenomenological effect of a preceding sound exposure event on a subsequent sound exposure event is always immunity. The study of [6] was based on the assumption that all subjects in the crowd are statistically indistinguishable, and thereby it excluded the effect of biological variability. In this paper, we focus on the effect of biological variability instead. This approach provides an alternative view of the empirical dose-response relation from a different theoretical angle.

Biological variability (or “biovariability” in short) is defined as “the natural variability in a lab parameter due to physiologic differences among subjects and within the same subject over time” [7]. There are basically two types of biological variability, inter-individual and intra-individual. Inter-individual biovariability refers to “differences between subjects due to differences in diet, genetics or immune status” whereas intra-individual biovariability applies to “differences in the same subject over time due to diurnal cycles and other rhythms, biological repair mechanisms, dietary variables, aging, etc.” [7]. In this paper, we study the inter-individual biovariability. Prominent examples of inter-individual biological variability include diversities in biologically inherited traits such as height, weight and skin color. Due to biological variability there is always a range of responses displayed in measurements collected from a crowd of animal or human subjects. Therefore, it is reasonable to expect that some people are biologically less or more vulnerable to hearing loss injury than others. For example, one of the physiologic causes for variability is that body temperature is known to affect sensitivity to noise [3].

The presence of biovariability prompts us to wonder: what role does biological variability play when a crowd of subjects is exposed to multiple acoustic impulses? To address this question, we investigate the feasibility of explaining the observed logistic dose-response relation in the framework of biovariability. We mathematically derive the distribution density of individual injury probability in a crowd that will reproduce the observed dose-response relation. For several parameter values relevant for applications, we show that the derived distribution is a proper density function. Further, we study the asymptotic approximations of the density function and their usage in numerical computation of the density function in finite precision arithmetic. Our mathematical results indicate that it is theoretically possible to interpret the observed logistic dose-response relation in terms of biovariability. This study on biovariability, together with the previous study on immunity effects [6], offers two complementary views of the observed dose-response relation. These two very different views represent the two extreme theoretical boundaries for understanding the observed dose-response relation.

2. Mathematical Formulation

In experiments [3], the injury risk of a crowd caused by a sound exposure was measured by conducting separate tests on animal subjects individually and assembling individual results to mimic the collective response of a crowd. A sound exposure event is a complicated process, described by the acoustic waveform, intensity, and duration. In [8], several candidates were compared as an effective single metric for characterizing the overall injury-causing effect of a sound exposure event. It was found that the 8-hour Equivalent A-weighted Level ( ${L}_{\text{Aeq8hr}}$ ) is the best predictor of injury [8] [9]. In the injury model developed in [3], SELA (A-weighted Sound Exposure Level) was selected as the dose, which is mathematically equivalent to ${L}_{\text{Aeq8hr}}$ up to an additive constant. The observed injury risk follows the logistic dose-response relation:

$p=\frac{1}{1+exp\left(-\alpha \left(\text{SELA}-{D}_{50}\right)\right)}$ (1)

For a crowd, the injury risk p is the average fraction of injured subjects. In the logistic dose-response relation (1), α is the shape parameter controlling the steepness of the function, and ${D}_{50}$ is the median injury dose. For a crowd, the median injury dose is the dose level at which 50% of the crowd is expected to be injured. We consider the hearing loss injury risk for a crowd of subjects with a wide spectrum of individual injury probabilities due to biovariability. That is, some subjects are biologically less or more susceptible to hearing loss injury than others. At the apparent median injury dose for such a crowd, a particular subject's individual injury probability may be below or above 50% due to biovariability. For injury of permanent threshold shift (PTS) > 25 dB, the measured values of parameters α and ${D}_{50}$ are, respectively, $\alpha =0.1$ and ${D}_{50}=161$ dB [3]. It was observed that parameter $\alpha =0.1$ remains the same for PTS injuries of all cut-off levels while the median injury dose ${D}_{50}$ increases with the PTS cut-off level [3].
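As a quick numerical illustration of relation (1), the sketch below evaluates the injury risk in Python with the parameter values quoted above for PTS > 25 dB (the helper name `injury_risk` is ours, not part of the model in [3]):

```python
import math

ALPHA = 0.1   # shape parameter alpha for PTS > 25 dB
D50 = 161.0   # median injury dose D50 in dB

def injury_risk(sela):
    """Logistic dose-response relation (1)."""
    return 1.0 / (1.0 + math.exp(-ALPHA * (sela - D50)))

# At the median injury dose, half of the crowd is expected to be injured.
assert abs(injury_risk(161.0) - 0.5) < 1e-12
# The logistic curve is symmetric about D50 and increases with the dose.
assert abs(injury_risk(171.0) + injury_risk(151.0) - 1.0) < 1e-12
```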

For a sequence of N identical impulses with SELA value = S, the effective combined dose is given by the dose combination rule [3]:

${S}_{\text{comb}}=S+\lambda lo{g}_{10}N\mathrm{,}\text{\hspace{1em}}\lambda =3.44$ (2)

The injury risk of the crowd caused by the N impulses is

$\begin{array}{c}{p|}_{\left(N\text{impulses}\right)}=\frac{1}{1+exp\left(-\alpha \left({S}_{\text{comb}}-{D}_{50}\right)\right)}\\ =\frac{1}{1+exp\left(-\alpha \left(S-{D}_{50}+\lambda lo{g}_{10}N\right)\right)}\equiv \frac{1}{1+{\left(a{N}^{\eta}\right)}^{-1}}\end{array}$ (3)

where parameters a and η are related to other parameters as

$a\equiv exp\left(\alpha \left(S-{D}_{50}\right)\right)\mathrm{,}\text{\hspace{1em}}\eta \equiv \frac{\alpha \lambda}{ln10}=0.1494$ (4)
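The algebra connecting (1), (2), (3) and (4) can be checked mechanically: substituting the combined dose (2) into (1) must give the same value as the form $\frac{1}{1+{\left(a{N}^{\eta}\right)}^{-1}}$. A Python sketch (the helper names are ours):

```python
import math

ALPHA, D50, LAMBDA = 0.1, 161.0, 3.44   # parameters of (1) and (2)

def risk_via_combined_dose(S, N):
    """Risk of (3), first form: plug the combined dose (2) into (1)."""
    s_comb = S + LAMBDA * math.log10(N)
    return 1.0 / (1.0 + math.exp(-ALPHA * (s_comb - D50)))

def risk_via_a_eta(S, N):
    """Risk of (3), last form: 1 / (1 + (a N^eta)^(-1)) with a, eta from (4)."""
    a = math.exp(ALPHA * (S - D50))
    eta = ALPHA * LAMBDA / math.log(10)   # = 0.1494 to four decimal places
    return 1.0 / (1.0 + 1.0 / (a * N**eta))

# The two forms agree for any dose S and impulse count N.
for S in (150.0, 161.0, 170.0):
    for N in (1, 2, 10, 100):
        assert abs(risk_via_combined_dose(S, N) - risk_via_a_eta(S, N)) < 1e-12
```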

We express parameter a in terms of probability ${p|}_{\left(\text{1impulse}\right)}$ by solving ${p|}_{\left(\text{1impulse}\right)}=\frac{1}{1+{a}^{-1}}$ ,

$a=\frac{{p|}_{\left(\text{1impulse}\right)}}{1-{p|}_{\left(\text{1impulse}\right)}}$ (5)

This expression states that parameter a is the odds of injury of a hypothetical subject in the crowd with the average injury probability, which is the ratio of average injury fraction to average non-injury fraction of the crowd. In the subsequent discussion, we shall refer to parameter a simply as the odds of injury of an average subject. We shall avoid the term “average odds” since it is not the average of odds. Rather it is the odds for a subject with the average injury probability.

In this study, we assume that N impulses act independently from each other in causing injury, regardless of whether one is preceded by another (i.e., no immunity effect). Under the assumption of independence, we explore the possibility of interpreting the observed logistic dose-response relation for a crowd in the framework of biovariability.

For mathematical convenience, we consider non-injury probability. When all events are independent of each other, the overall non-injury probability is simply the product of non-injury probabilities of individual events. Let $q\left(\omega \right)$ denote the intrinsic non-injury probability of a random subject in the crowd, in responding to one impulse. Here $q\left(\omega \right)$ is a random variable and we include symbol ω in notation explicitly to indicate the presence of biovariability. Let $\rho \left(q\right)$ be the probability density of random variable $q\left(\omega \right)$ . For one impulse, the average non-injury fraction of the crowd has the expression

$\text{Non-injury}\text{\hspace{0.17em}}\text{fraction}=E\left[q\left(\omega \right)\right]={\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}q\rho \left(q\right)\text{d}q$

For N impulses, the non-injury probability of an individual subject is $q{\left(\omega \right)}^{N}$ and the average non-injury fraction of the crowd is the average of $q{\left(\omega \right)}^{N}$ :

${\left(\text{\hspace{0.05em}}\text{Non-injury}\text{\hspace{0.17em}}\text{fraction}\text{\hspace{0.05em}}\right)|}_{\left(N\text{impulses}\right)}=E\left[q{\left(\omega \right)}^{N}\right]={\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{q}^{N}\rho \left(q\right)\text{d}q$ (6)
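To make Equation (6) concrete, one can pick any trial density for the non-injury probability q and compare the quadrature of ${q}^{N}\rho \left(q\right)$ against a closed form. In the sketch below, a Beta density is purely an illustrative stand-in for biovariability, not the density derived later in this paper:

```python
import math

A, B = 2.0, 3.0   # shape parameters of an illustrative Beta density

def rho(q):
    """Beta(A, B) density on (0, 1), standing in for the biovariability density."""
    return q**(A - 1) * (1 - q)**(B - 1) * math.gamma(A + B) / (math.gamma(A) * math.gamma(B))

def noninjury_fraction(N, m=100_000):
    """Average non-injury fraction E[q^N] of (6), by midpoint quadrature."""
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        q = (i + 0.5) * h
        total += q**N * rho(q)
    return total * h

def noninjury_exact(N):
    """Closed form for the Beta stand-in: E[q^N] = prod_{i<N} (A+i)/(A+B+i)."""
    r = 1.0
    for i in range(N):
        r *= (A + i) / (A + B + i)
    return r

for N in (1, 2, 5):
    assert abs(noninjury_fraction(N) - noninjury_exact(N)) < 1e-6
```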

On the other hand, the injury risk follows the observed logistic dose-response relation (3), which relates to the average non-injury fraction as

$\begin{array}{l}{\left(\text{\hspace{0.05em}}\text{Non-injury}\text{\hspace{0.17em}}\text{fraction}\text{\hspace{0.05em}}\right)|}_{\left(N\text{impulses}\right)}\\ ={1-p|}_{\left(N\text{impulses}\right)}=1-\frac{1}{1+{\left(a{N}^{\eta}\right)}^{-1}}=\frac{1}{1+a{N}^{\eta}}\end{array}$ (7)

Combining (6) and (7), we arrive at an equation for $\rho \left(q\right)$ , the distribution density of non-injury probability of a random subject in the crowd.

${\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{q}^{N}\rho \left(q\right)\text{d}q=\frac{1}{1+a{N}^{\eta}}\mathrm{,}\text{\hspace{1em}}N=\mathrm{1,2,}\cdots $ (8)

In the next section, we derive an analytical expression for the distribution density $\rho \left(q\right)$ based on Equation (8).

3. Analytical Results

We solve for density $\rho \left(q\right)$ in Equation (8). To proceed, we make a change of variables $s=-ln\left(q\right)$ . Since the non-injury probability $q\left(\omega \right)$ is constrained to interval $\left(\mathrm{0,1}\right)$ , variable $s=-ln\left(q\right)$ has the domain $\left(\mathrm{0,}+\infty \right)$ . In terms of the new variable s, Equation (8) becomes

${\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-Ns}\left[\rho \left({\text{e}}^{-s}\right){\text{e}}^{-s}\right]\text{d}s=\frac{1}{1+a{N}^{\eta}}$ (9)

3.1. Power Series Solution

We expand the right-hand side (RHS) of Equation (9) as a power series of $\frac{1}{N}$ .

$\begin{array}{l}{\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-Ns}\left[\rho \left({\text{e}}^{-s}\right){\text{e}}^{-s}\right]\text{d}s\\ =\frac{1}{1+a{N}^{\eta}}=\frac{{\left(a{N}^{\eta}\right)}^{-1}}{1+{\left(a{N}^{\eta}\right)}^{-1}}={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{{a}^{k+1}}\frac{1}{{N}^{\eta \left(k+1\right)}}\end{array}$ (10)

Recall the mathematical identity

${\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-Ns}{s}^{\beta}\text{d}s=\frac{1}{{N}^{\beta +1}}{\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-u}{u}^{\beta}\text{d}u=\frac{\Gamma \left(\beta +1\right)}{{N}^{\beta +1}}$ (11)

where the gamma function is defined as

$\Gamma \left(z\right)\equiv {\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-u}{u}^{z-1}\text{d}u$

We use identity (11) to map each power of $\frac{1}{N}$ on the RHS of (10) back to a power of s.

${\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-Ns}\left[\frac{{\left(-1\right)}^{k}}{{a}^{k+1}}\frac{1}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}\right]\text{d}s=\frac{{\left(-1\right)}^{k}}{{a}^{k+1}}\frac{1}{{N}^{\eta \left(k+1\right)}}$ (12)

Using this mapping, we express $g\left(s\right)\equiv \rho \left({\text{e}}^{-s}\right){\text{e}}^{-s}$ on the left-hand side (LHS) of (10) as a power series.

$\begin{array}{c}\rho \left({\text{e}}^{-s}\right){\text{e}}^{-s}\equiv g\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{{a}^{k+1}}\frac{1}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}\\ =\frac{{s}^{\eta -1}}{a}{\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{\left(\frac{{s}^{\eta}}{a}\right)}^{k}\end{array}$ (13)

Function $g\left(s\right)\equiv \rho \left({\text{e}}^{-s}\right){\text{e}}^{-s}$ is the probability density of random variable $s\left(\omega \right)\equiv -ln\left(q\left(\omega \right)\right)$ . The derivation above tells us that the theoretical prediction matches the observed logistic dose-response relation if and only if the density of $s\left(\omega \right)$ in the theoretical model has the power series in (13). It is not obvious at all whether function $g\left(s\right)$ defined by power series (13) is a proper density function. Below we first show that function $g\left(s\right)$ is well defined in its domain ( $s\ge 0$ ). That is, power series (13) converges for all values of $s\ge 0$ .

For large values of index k, the gamma function $\Gamma \left(\eta \left(k+1\right)\right)$ is well approximated by the Stirling formula [10] [11] :

$\Gamma \left(z+1\right)\sim \sqrt{2\text{\pi}z}{\left(\frac{z}{\text{e}}\right)}^{z}$ (14)

With the help of the Stirling formula, we calculate the radius of convergence using the ratio test:

${R}_{\text{conv}}=\underset{k\to +\infty}{lim}\frac{\Gamma \left(\eta \left(k+2\right)\right)}{\Gamma \left(\eta \left(k+1\right)\right)}=\underset{k\to +\infty}{lim}{\left(\frac{\eta \left(k+2\right)-1}{\text{e}}\right)}^{\eta}{\left(\frac{\eta \left(k+2\right)-1}{\eta \left(k+1\right)-1}\right)}^{\eta \left(k+1\right)-\frac{1}{2}}=+\infty $ (15)

It follows that function $g\left(s\right)$ is well defined by power series (13) for all values of s. To be a proper density function, however, $g\left(s\right)$ must satisfy two more properties:

$g\left(s\right)\ge \mathrm{0,}\text{\hspace{0.05em}}$ (16)

and

${\displaystyle {\int}_{0}^{+\infty}}g\left(s\right)\text{d}s=1$ (17)

We will rigorously demonstrate properties (16) and (17) for the cases of $\eta =\frac{1}{2}$ and $\eta =\frac{1}{3}$ . In addition, we will derive asymptotic approximations of $g\left(s\right)$ for large s, which are essential for practical computation of $g\left(s\right)$ in finite precision arithmetic. As we will see in the analysis, the proof approach differs quite significantly between these two cases. Thus, it is unlikely that the proof approach of either case can be directly extended to other values of η. Nevertheless, we conjecture that properties (16) and (17) are valid for all values of η.

To facilitate the analysis, we consider the cumulative distribution function (CDF) of random variable $s\left(\omega \right)$ : $G\left(s\right)\equiv {\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}g\left(u\right)\text{d}u$ . Integrating both sides of (13), we express $G\left(s\right)$ as

$G\left(s\right)={\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}g\left(u\right)\text{d}u={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{\left(\frac{{s}^{\eta}}{a}\right)}^{\left(k+1\right)}$ (18)

We use scaling ${s}_{\text{scaled}}={a}^{\frac{-1}{\eta}}s$ to get rid of parameter a. For simplicity and conciseness of presentation, we still denote the scaled variable by s and set $a=1$ . After scaling, the cumulative distribution function $G\left(s\right)$ and the density function $g\left(s\right)$ have the expressions

$G\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)}$ (19)

$g\left(s\right)={G}^{\prime}\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}$ (20)
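The scaled series (19) and (20) can be evaluated directly for small and moderate s; writing each term through the log-gamma function keeps the arithmetic finite. A Python sketch for $\eta =\frac{1}{7}$ (the helper names are ours); the positivity and monotonicity checks are numerical probes of the conjectured properties (16) and (17), not proofs:

```python
import math

ETA = 1 / 7   # eta value close to the observed 0.1494

def G_series(s, kmax=800):
    """Scaled CDF (19): sum over k of (-1)^k s^{eta(k+1)} / Gamma(eta(k+1)+1)."""
    return sum((-1)**k * math.exp(ETA*(k + 1)*math.log(s) - math.lgamma(ETA*(k + 1) + 1))
               for k in range(kmax))

def g_series(s, kmax=800):
    """Scaled density (20): sum over k of (-1)^k s^{eta(k+1)-1} / Gamma(eta(k+1))."""
    return sum((-1)**k * math.exp((ETA*(k + 1) - 1)*math.log(s) - math.lgamma(ETA*(k + 1)))
               for k in range(kmax))

# Numerical probes (not proofs) of (16)-(17) on a moderate grid:
grid = [0.25 * j for j in range(1, 41)]             # s in (0, 10]
assert all(g_series(s) > 0 for s in grid)           # g(s) >= 0
vals = [G_series(s) for s in grid]
assert all(b > a for a, b in zip(vals, vals[1:]))   # G(s) increasing
assert all(0 < v < 1 for v in vals)                 # G stays below its limit 1
```

For larger s this direct summation breaks down in double precision, which is exactly the issue taken up in Subsection 3.5.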

Our general strategy is to derive a differential equation for $G\left(s\right)$ based on the power series around $s=0$ . From the differential equation, we solve for $G\left(s\right)$ and $g\left(s\right)={G}^{\prime}\left(s\right)$ , and then derive their asymptotic approximations at large s. Building on the theoretical insight gained in the cases of $\eta =\frac{1}{2}$ and $\eta =\frac{1}{3}$ , we then extend the asymptotic analysis to the case of $\eta =\frac{1}{7}$ to derive an essential numerical formula for practical computation of $g\left(s\right)$ in finite precision arithmetic.

3.2. The Case of $\eta =\frac{1}{2}$

Setting $\eta =\frac{1}{2}$ and differentiating (19) with respect to s, we have

$\begin{array}{c}\frac{\text{d}G\left(s\right)}{\text{d}s}=\frac{\text{d}}{\text{d}s}{\left({\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)}\right)|}_{\eta =\frac{1}{2}}\\ ={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{2}\left(k-1\right)+1\right)}{s}^{\frac{1}{2}\left(k-1\right)}\\ =\frac{1}{\Gamma \left(\frac{1}{2}\right)}{s}^{-\frac{1}{2}}-1+{\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{2}\left(k+1\right)+1\right)}{s}^{\frac{1}{2}\left(k+1\right)}\end{array}$ (21)

which leads to a differential equation for $G\left(s\right)$ :

$\frac{\text{d}}{\text{d}s}\left(G\left(s\right)-1\right)=\left(G\left(s\right)-1\right)+\frac{1}{\Gamma \left(\frac{1}{2}\right)}{s}^{\frac{-1}{2}}$ (22)

Using the integrating factor method and applying the condition $G\left(0\right)=0$ , we write the solution $G\left(s\right)$ as

$\left(G\left(s\right)-1\right)={\text{e}}^{s}\left[{\displaystyle {\int}_{0}^{s}}\frac{1}{\Gamma \left(\frac{1}{2}\right)}{\text{e}}^{-u}{u}^{\frac{-1}{2}}\text{d}u-1\right]=-\frac{1}{\Gamma \left(\frac{1}{2}\right)}{\text{e}}^{s}{\displaystyle {\int}_{s}^{+\infty}}{\text{e}}^{-u}{u}^{\frac{-1}{2}}\text{d}u$ (23)

It is straightforward to verify that

• $li{m}_{s\to +\infty}\left(G\left(s\right)-1\right)=0$ ,

• $\left(G\left(s\right)-1\right)$ increases monotonically with s because

$\frac{\text{d}\left(G\left(s\right)-1\right)}{\text{d}s}=\frac{1}{\Gamma \left(\frac{1}{2}\right)}{\text{e}}^{s}{\displaystyle {\int}_{s}^{+\infty}}{\text{e}}^{-u}\left({s}^{\frac{-1}{2}}-{u}^{\frac{-1}{2}}\right)\text{d}u>0$ (24)

These two results lead directly to ${\displaystyle {\int}_{0}^{+\infty}}g\left(s\right)\text{d}s=G\left(+\infty \right)=1$ and $g\left(s\right)={G}^{\prime}\left(s\right)\ge 0$ .
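Since $\frac{1}{\Gamma \left(\frac{1}{2}\right)}{\displaystyle {\int}_{s}^{+\infty}}{\text{e}}^{-u}{u}^{\frac{-1}{2}}\text{d}u=\text{erfc}\left(\sqrt{s}\right)$ (substitute $u={t}^{2}$ ), solution (23) has, in the scaled variable with $a=1$ , the closed form $G\left(s\right)=1-{\text{e}}^{s}\text{erfc}\left(\sqrt{s}\right)$ . A Python sketch cross-checking this closed form against the power series (19) (helper names ours):

```python
import math

def G_half_series(s, kmax=200):
    """Scaled CDF (19) with eta = 1/2."""
    return sum((-1)**k * s**((k + 1) / 2) / math.gamma((k + 1) / 2 + 1)
               for k in range(kmax))

def G_half_closed(s):
    """Closed form of (23): G(s) = 1 - e^s erfc(sqrt(s))."""
    return 1.0 - math.exp(s) * math.erfc(math.sqrt(s))

for s in (0.5, 1.0, 2.0, 5.0):
    assert abs(G_half_series(s) - G_half_closed(s)) < 1e-10
```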

Remark: It is fairly unusual for a solution of differential Equation (22) to converge to a constant level as s increases to $+\infty $ . In the homogeneous version of differential Equation (22), solution $G\left(s\right)$ exponentially diverges away from $G=1$ as s increases. It is the unique combination of the condition at $s=0$ and the RHS term in differential Equation (22) that makes $G\left(+\infty \right)=1$ . If we change either or both of these two, $G\left(s\right)$ will diverge to infinity.

To derive the asymptotic approximations of $G\left(s\right)$ for large s, we change the integration variable to z defined by $u=s\left(1+z\right)$ . We write $G\left(s\right)$ as

$\left(G\left(s\right)-1\right)=-\frac{1}{\Gamma \left(\frac{1}{2}\right)}{s}^{\frac{1}{2}}{\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-sz}{\left(1+z\right)}^{\frac{-1}{2}}\text{d}z$ (25)

For the integral in (25), at large s, the domain of dominant contribution is a small neighborhood near $z=0$ . We expand ${\left(1+z\right)}^{\frac{-1}{2}}$ around $z=0$ :

${\left(1+z\right)}^{\frac{-1}{2}}=1-\frac{1}{2}z+\frac{3}{8}{z}^{2}+\cdots $

Each term in the power series of ${\left(1+z\right)}^{\frac{-1}{2}}$ gives us

${\displaystyle {\int}_{0}^{+\infty}}{\text{e}}^{-sz}{z}^{k}\text{d}z=\Gamma \left(k+1\right){s}^{-\left(k+1\right)}$

Putting these results together, we write out the asymptotics of $G\left(s\right)$ for large s:

$\begin{array}{c}G\left(s\right)=1-\frac{1}{\Gamma \left(\frac{1}{2}\right)}{s}^{\frac{1}{2}}\left[\Gamma \left(1\right){s}^{-1}-\frac{\Gamma \left(2\right)}{2}{s}^{-2}+\frac{3\Gamma \left(3\right)}{8}{s}^{-3}+\cdots \right]\\ =1-\frac{1}{\sqrt{\text{\pi}}}{s}^{\frac{-1}{2}}+\frac{1}{2\sqrt{\text{\pi}}}{s}^{\frac{-3}{2}}-\frac{3}{4\sqrt{\text{\pi}}}{s}^{\frac{-5}{2}}+\cdots \end{array}$ (26)

Differentiating with respect to s yields the asymptotics of $g\left(s\right)$ for large s:

$g\left(s\right)=\frac{1}{2\sqrt{\text{\pi}}}{s}^{\frac{-3}{2}}-\frac{3}{4\sqrt{\text{\pi}}}{s}^{\frac{-5}{2}}+\frac{15}{8\sqrt{\text{\pi}}}{s}^{\frac{-7}{2}}+\cdots $ (27)
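For $\eta =\frac{1}{2}$ (scaled variable, $a=1$ ), solution (23) can be written via the complementary error function as $G\left(s\right)=1-{\text{e}}^{s}\text{erfc}\left(\sqrt{s}\right)$ , so $g\left(s\right)={G}^{\prime}\left(s\right)=\frac{1}{\sqrt{\text{\pi}s}}-{\text{e}}^{s}\text{erfc}\left(\sqrt{s}\right)$ . A Python sketch comparing this exact expression with the three asymptotic terms of (27) (helper names ours):

```python
import math

def g_half_exact(s):
    """g(s) = G'(s) = 1/sqrt(pi*s) - e^s erfc(sqrt(s)) for eta = 1/2, a = 1."""
    return 1.0 / math.sqrt(math.pi * s) - math.exp(s) * math.erfc(math.sqrt(s))

def g_half_asym(s):
    """Three-term large-s asymptotics (27)."""
    return (0.5 * s**-1.5 - 0.75 * s**-2.5 + 1.875 * s**-3.5) / math.sqrt(math.pi)

# The relative error of (27) is small for large s and shrinks as s grows.
for s in (20.0, 30.0, 50.0):
    assert abs(g_half_asym(s) / g_half_exact(s) - 1.0) < 1e-2
```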

Interestingly, after we concluded $li{m}_{s\to +\infty}G\left(s\right)=1$ , the asymptotics of $G\left(s\right)$ for large s can be derived in a simple way, directly from differential Equation (22), by assuming that $G\left(s\right)$ has an expansion of the form

$G\left(s\right)=1+{\displaystyle \underset{k=1}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{b}_{k}{s}^{-{\beta}_{k}},\text{\hspace{1em}}{\beta}_{1}<{\beta}_{2}<{\beta}_{3}<\cdots $ (28)

Substituting expansion (28) iteratively into differential Equation (22) gives us

${b}_{1}{s}^{-{\beta}_{1}}+\frac{1}{\Gamma \left(\frac{1}{2}\right)}{s}^{\frac{-1}{2}}=0$

${b}_{2}{s}^{-{\beta}_{2}}=-{\beta}_{1}{b}_{1}{s}^{-{\beta}_{1}-1}$

${b}_{3}{s}^{-{\beta}_{3}}=-{\beta}_{2}{b}_{2}{s}^{-{\beta}_{2}-1}$

$\cdots $

This set of equations yields the solution

${\beta}_{1}=\frac{1}{2},{b}_{1}=-\frac{1}{\Gamma \left(\frac{1}{2}\right)}=-\frac{1}{\sqrt{\text{\pi}}}$

${\beta}_{2}={\beta}_{1}+1=\frac{3}{2},{b}_{2}=-{\beta}_{1}{b}_{1}=\frac{1}{2\Gamma \left(\frac{1}{2}\right)}=\frac{1}{2\sqrt{\text{\pi}}}$

${\beta}_{3}={\beta}_{2}+1=\frac{5}{2},{b}_{3}=-{\beta}_{2}{b}_{2}=-\frac{1\times 3}{{2}^{2}\Gamma \left(\frac{1}{2}\right)}=-\frac{3}{4\sqrt{\text{\pi}}}$

$\cdots $

The result of this simple method agrees with (26), which is rigorously derived without any additional assumption.

Remark: This simple method is quite powerful. By assuming the expansion form, we are able to derive the asymptotics of $G\left(s\right)$ at large s, based upon the power series around $s=0$ .

Next, we study the case of $\eta =\frac{1}{3}$ . We will see that the differential equation for $G\left(s\right)$ in the case of $\eta =\frac{1}{3}$ behaves very differently from that in the case of $\eta =\frac{1}{2}$ although the main conclusions regarding $G\left(s\right)$ and $g\left(s\right)$ remain the same.

3.3. The Case of $\eta =\frac{1}{3}$

At $\eta =\frac{1}{3}$ , differentiating (19) with respect to s leads to

$\begin{array}{c}\frac{\text{d}G\left(s\right)}{\text{d}s}=\frac{\text{d}}{\text{d}s}{\left({\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)}\right)|}_{\eta =\frac{1}{3}}\\ ={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{3}\left(k-2\right)+1\right)}{s}^{\frac{1}{3}\left(k-2\right)}\\ =\frac{1}{\Gamma \left(\frac{1}{3}\right)}{s}^{-\frac{2}{3}}-\frac{1}{\Gamma \left(\frac{2}{3}\right)}{s}^{-\frac{1}{3}}+1-{\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{3}\left(k+1\right)+1\right)}{s}^{\frac{1}{3}\left(k+1\right)}\end{array}$ (29)

which gives us a differential equation for $G\left(s\right)$ .

$\frac{\text{d}}{\text{d}s}\left(G\left(s\right)-1\right)=-\left(G\left(s\right)-1\right)+\frac{1}{\Gamma \left(\frac{1}{3}\right)}{s}^{\frac{-2}{3}}-\frac{1}{\Gamma \left(\frac{2}{3}\right)}{s}^{\frac{-1}{3}}$ (30)

Using the integrating factor method and applying the condition $G\left(0\right)=0$ , we write the solution $G\left(s\right)$ as

$\left(G\left(s\right)-1\right)=-{\text{e}}^{-s}+{\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}{\text{e}}^{u-s}\left(\frac{1}{\Gamma \left(\frac{1}{3}\right)}{u}^{\frac{-2}{3}}-\frac{1}{\Gamma \left(\frac{2}{3}\right)}{u}^{\frac{-1}{3}}\right)\text{d}u$ (31)

In Appendix A, we prove that $G\left(s\right)$ given in (31) satisfies

1) $li{m}_{s\to +\infty}\left(G\left(s\right)-1\right)=0$ , and

2) $G\left(s\right)$ increases monotonically.

These two results imply that $g\left(s\right)={G}^{\prime}\left(s\right)$ is a mathematically proper density function.

To derive the asymptotics of $G\left(s\right)$ for large s, we use a change of variable $u=s\left(1-v\right)$ to rewrite solution (31) as

$\left(G\left(s\right)-1\right)=-{\text{e}}^{-s}+\frac{{s}^{\frac{1}{3}}}{\Gamma \left(\frac{1}{3}\right)}{\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{\text{e}}^{-sv}{\left(1-v\right)}^{\frac{-2}{3}}\text{d}v-\frac{{s}^{\frac{2}{3}}}{\Gamma \left(\frac{2}{3}\right)}{\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{\text{e}}^{-sv}{\left(1-v\right)}^{\frac{-1}{3}}\text{d}v$ (32)

We expand ${\left(1-v\right)}^{\frac{-2}{3}}$ and ${\left(1-v\right)}^{\frac{-1}{3}}$ around $v=0$ :

${\left(1-v\right)}^{\frac{-2}{3}}=1+\frac{2}{3}v+\frac{5}{9}{v}^{2}+\cdots $

${\left(1-v\right)}^{\frac{-1}{3}}=1+\frac{1}{3}v+\frac{2}{9}{v}^{2}+\cdots $

For large s, each term in the power series of ${\left(1-v\right)}^{c}$ gives us

${\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}{\text{e}}^{-sv}{v}^{k}\text{d}v=\Gamma \left(k+1\right){s}^{-\left(k+1\right)}+O\left({\text{e}}^{-s}\right)$

Substituting the power series into (32) and neglecting $O\left({\text{e}}^{-s}\right)$ terms, we write out the asymptotics of $G\left(s\right)$ for large s:

$\begin{array}{c}G\left(s\right)=1+\frac{{s}^{\frac{1}{3}}}{\Gamma \left(\frac{1}{3}\right)}\left[{s}^{-1}+\frac{2\Gamma \left(2\right)}{3}{s}^{-2}+\frac{5\Gamma \left(3\right)}{9}{s}^{-3}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{{s}^{\frac{2}{3}}}{\Gamma \left(\frac{2}{3}\right)}\left[{s}^{-1}+\frac{\Gamma \left(2\right)}{3}{s}^{-2}+\frac{2\Gamma \left(3\right)}{9}{s}^{-3}\right]\\ =1-\frac{{s}^{\frac{-1}{3}}}{\Gamma \left(\frac{2}{3}\right)}+\frac{{s}^{\frac{-2}{3}}}{\Gamma \left(\frac{1}{3}\right)}-\frac{{s}^{\frac{-4}{3}}}{3\Gamma \left(\frac{2}{3}\right)}+\frac{2{s}^{\frac{-5}{3}}}{3\Gamma \left(\frac{1}{3}\right)}-\frac{4{s}^{\frac{-7}{3}}}{9\Gamma \left(\frac{2}{3}\right)}+\frac{10{s}^{\frac{-8}{3}}}{9\Gamma \left(\frac{1}{3}\right)}\end{array}$ (33)

Differentiating with respect to s yields the asymptotics of $g\left(s\right)$ for large s:

$g\left(s\right)=\frac{{s}^{\frac{-4}{3}}}{3\Gamma \left(\frac{2}{3}\right)}-\frac{2{s}^{\frac{-5}{3}}}{3\Gamma \left(\frac{1}{3}\right)}+\frac{4{s}^{\frac{-7}{3}}}{9\Gamma \left(\frac{2}{3}\right)}-\frac{10{s}^{\frac{-8}{3}}}{9\Gamma \left(\frac{1}{3}\right)}+\frac{28{s}^{\frac{-10}{3}}}{27\Gamma \left(\frac{2}{3}\right)}-\frac{80{s}^{\frac{-11}{3}}}{27\Gamma \left(\frac{1}{3}\right)}$ (34)

Again, the asymptotics of $G\left(s\right)$ for large s can be derived directly from differential Equation (30) by assuming the expansion form (28). Below we use this approach to derive the asymptotics of $G\left(s\right)$ for large s in the case of $\eta =\frac{1}{7}$ . We select the parameter value $\eta =\frac{1}{7}$ to approximate the value $\eta =0.1494$ observed in experiments.

3.4. Asymptotics at Large s in the Case of $\eta =\frac{1}{7}$

At $\eta =\frac{1}{7}$ , differentiating (19) with respect to s leads to

$\begin{array}{c}\frac{\text{d}G\left(s\right)}{\text{d}s}=\frac{\text{d}}{\text{d}s}{\left({\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)}\right)|}_{\eta =\frac{1}{7}}\\ ={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{7}\left(k-6\right)+1\right)}{s}^{\frac{1}{7}\left(k-6\right)}\\ ={\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{7}\left(7-k\right)\right)}{s}^{\frac{-k}{7}}+1-{\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{1}{7}\left(k+1\right)+1\right)}{s}^{\frac{1}{7}\left(k+1\right)}\end{array}$ (35)

which gives us a differential equation for $G\left(s\right)$ :

$\frac{\text{d}}{\text{d}s}\left(G\left(s\right)-1\right)=-\left(G\left(s\right)-1\right)+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-k}{7}}$ (36)

We derive the asymptotics of $G\left(s\right)$ for large s by assuming

• $G\left(s\right)$ satisfies $G\left(+\infty \right)=1$ , and

• $G\left(s\right)$ has the expansion form (28) for large s.

Substituting expansion (28) iteratively into differential Equation (36), we obtain

$\begin{array}{c}G\left(s\right)=1+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-k}{7}}+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{\frac{k}{7}{\left(-1\right)}^{k}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-\left(k+7\right)}{7}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{\frac{k}{7}\left(\frac{k+7}{7}\right){\left(-1\right)}^{k}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-\left(k+14\right)}{7}}+\cdots \end{array}$ (37)

Differentiating with respect to s yields the asymptotics of $g\left(s\right)$ for large s:

$\begin{array}{c}g\left(s\right)={\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{\frac{k}{7}{\left(-1\right)}^{k+1}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-\left(k+7\right)}{7}}+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{\frac{k}{7}\left(\frac{k+7}{7}\right){\left(-1\right)}^{k+1}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-\left(k+14\right)}{7}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{\frac{k}{7}\left(\frac{k+7}{7}\right)\left(\frac{k+14}{7}\right){\left(-1\right)}^{k+1}}{\Gamma \left(\frac{7-k}{7}\right)}{s}^{\frac{-\left(k+21\right)}{7}}+\cdots \end{array}$ (38)
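The asymptotic expansion above can be cross-checked against the power series (20) at moderate s, where the series still converges reliably in double precision. A Python sketch (scaled variable, $a=1$ ; helper names ours):

```python
import math

ETA = 1 / 7

def g_seventh_series(s, kmax=900):
    """Power series (20) at eta = 1/7, reliable in double precision for moderate s."""
    return sum((-1)**k * math.exp((ETA*(k + 1) - 1)*math.log(s) - math.lgamma(ETA*(k + 1)))
               for k in range(kmax))

def g_seventh_asym(s):
    """The three sums of the large-s asymptotics of g(s) at eta = 1/7."""
    total = 0.0
    for k in range(1, 7):
        f = (-1)**(k + 1) / math.gamma((7 - k) / 7)
        c1 = k / 7                  # coefficient of s^{-(k+7)/7}
        c2 = c1 * (k + 7) / 7       # coefficient of s^{-(k+14)/7}
        c3 = c2 * (k + 14) / 7      # coefficient of s^{-(k+21)/7}
        total += f * (c1 * s**(-(k + 7) / 7)
                      + c2 * s**(-(k + 14) / 7)
                      + c3 * s**(-(k + 21) / 7))
    return total

# At s = 15 and s = 20 the two evaluations agree to well under 1%.
for s in (15.0, 20.0):
    assert abs(g_seventh_asym(s) / g_seventh_series(s) - 1.0) < 1e-2
```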

3.5. Significance of Asymptotics at Large s for Computing $g(s)$

In this subsection, we discuss the necessity of using the asymptotic approximations in calculating the density $g\left(s\right)$ for large s. The power series of $g\left(s\right)$ around $s=0$ , given in (20), has the nice property that it converges mathematically for any s. In practical computation with finite precision arithmetic, however, the numerical convergence of (20) is limited by the finite precision, and the limitation is fatal at large s. To see this limitation, we examine the net sum of the power series and compare it with the individual terms. When the largest term is 10^{16} times the net sum or more, implementing power series (20) with double precision arithmetic will lose all numerical accuracy in computing the net sum.

To simplify the discussion, we write power series (20) in a concise and abstract form

$g\left(s\right)={\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{T}_{k},\text{\hspace{1em}}{T}_{k}\equiv \frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}$ (39)

We study the case of $\eta =\frac{1}{7}$ as an example. The power series converges for any s and the exact net sum is $g\left(s\right)$ , which has the asymptotic approximation (38) at large s. We use the leading term in (38) to estimate the net sum for large s:

${\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{T}_{k}=g\left(s\right)\approx \frac{1}{7\Gamma \left(\frac{6}{7}\right)}{s}^{\frac{-8}{7}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{large}\text{\hspace{0.17em}}s$ (40)

This gives us the general magnitude and trend of $g\left(s\right)$ . In particular, $g\left(s\right)$ decreases to zero as s increases. In power series (39), we write the absolute value of the k-th term as

$\left|{T}_{k}\right|={\varphi \left(z\right)|}_{z=\eta \left(k+1\right)-1},\text{\hspace{1em}}\varphi \left(z\right)\equiv \frac{{s}^{z}}{\Gamma \left(z+1\right)}$ (41)

The largest term in power series (39) is found by maximizing function $\varphi \left(\eta \left(k+1\right)-1\right)$ with respect to the index k. For mathematical convenience, we approximate this discrete maximization by a continuous one: we maximize $\varphi \left(z\right)$ with respect to z as a continuous variable. Using the Stirling approximation (14), we write $\mathrm{log}\left(\varphi \left(z\right)\right)$ as

$\mathrm{log}\left(\varphi \left(z\right)\right)=\mathrm{log}\left(\frac{{s}^{z}}{\Gamma \left(z+1\right)}\right)\approx z\left(1+\mathrm{log}s\right)-\left(z+\frac{1}{2}\right)\mathrm{log}z-\frac{1}{2}\mathrm{log}\left(\text{2\pi}\right)\equiv f\left(z\right)$ (42)

We maximize $f\left(z\right)$ with respect to z as a continuous variable. The derivative of $f\left(z\right)$ is

$\frac{\text{d}}{\text{d}z}f\left(z\right)=\mathrm{log}s-\mathrm{log}z-\frac{1}{2z}$ (43)

The derivative indicates that for large s, the maximum of $f\left(z\right)$ is attained approximately at ${z}_{c}=s-\frac{1}{2}$ . Consequently, $f\left(s-\frac{1}{2}\right)$ is a fairly tight lower bound on $\underset{z}{max}f\left(z\right)$ . That is, the true maximum may be a bit larger but not too much larger than $f\left(s-\frac{1}{2}\right)$ :

$\underset{z}{max}f\left(z\right)\approx f\left(s-\frac{1}{2}\right)=s-\frac{1}{2}log\left(2\text{\pi}s\right)$ (44)
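As a quick numerical check of (42)-(44), one can scan $f\left(z\right)$ on a grid and compare the maximizer and the maximum with the closed-form estimates. Below is a small Python sketch for $s=40$ ; the grid range and step are arbitrary choices.

```python
import math

def f(z, s):
    # f(z) from (42): Stirling approximation of log(s^z / Gamma(z+1))
    return z * (1.0 + math.log(s)) - (z + 0.5) * math.log(z) - 0.5 * math.log(2.0 * math.pi)

s = 40.0
# scan a grid of z values and locate the maximizer of f
zs = [0.01 * i for i in range(100, 20000)]   # z from 1.00 to 199.99
z_star = max(zs, key=lambda z: f(z, s))
# z_star should be near z_c = s - 1/2, and f(z_star) near s - (1/2) log(2*pi*s)
```

The scan confirms that the maximizer sits near $z=39.5$ and the maximum is close to $s-\frac{1}{2}log\left(2\text{\pi}s\right)\approx 37.24$ .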

The largest term (in absolute value) in power series (39) is approximately

$\begin{array}{c}\underset{k}{max}\left|{T}_{k}\right|\approx \underset{z}{max}\varphi \left(z\right)\approx \mathrm{exp}\left(\underset{z}{\mathrm{max}}f\left(z\right)\right)\\ \approx \mathrm{exp}\left(s-\frac{1}{2}\mathrm{log}\left(2\text{\pi}s\right)\right)=\frac{{\text{e}}^{s}}{\sqrt{2\text{\pi}s}}\end{array}$ (45)

In summary, for large s, we estimated the net sum and the largest term in power series (39):

• Net sum: ${\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}{T}_{k}\approx \frac{1}{7\Gamma \left(\frac{6}{7}\right)}{s}^{\frac{-8}{7}}$ ,

• Largest term: $\underset{k}{max}\left|{T}_{k}\right|\approx \frac{{\text{e}}^{s}}{\sqrt{2\text{\pi}s}}$ .

The ratio of the largest term to the net sum is

$\frac{\underset{k}{\mathrm{max}}\left|{T}_{k}\right|}{{\displaystyle \underset{k=0}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}{T}_{k}}\approx \frac{7\text{\hspace{0.05em}}\Gamma \left(\frac{6}{7}\right)}{\sqrt{2\text{\pi}}}{s}^{\frac{9}{14}}{\text{e}}^{s}$ (46)

At $s=40$ , for example, the largest term in (39) is approximately $1.485\times {10}^{16}$ ; the net sum of (39) is $1.907\times {10}^{-3}$ ; and the ratio of the largest term to the net sum is $7.787\times {10}^{18}$ . Recall that the machine precision of IEEE double precision arithmetic is about $2.22\times {10}^{-16}$ . Clearly, at $s=40$ , computing function $g\left(s\right)$ by numerically summing the terms of power series (39) in double precision arithmetic yields no accuracy at all. In addition, as s increases, the ratio of the largest term to the net sum, given by Equation (46), grows exponentially with s. As a result, adopting a higher precision arithmetic only accommodates a slightly larger range of s. For example, at $s=76$ , the ratio of the largest term to the net sum is $5.072\times {10}^{34}$ , which completely wipes out the numerical accuracy of quadruple precision arithmetic. In conclusion, although power series (39) converges theoretically for any s, it is not a practical numerical method for computing function $g\left(s\right)$ at large s. For large values of s, the asymptotic approximation (38) is essential for providing a viable way of computing function $g\left(s\right)$ .
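The figures quoted above are easy to reproduce. Below is a minimal double-precision Python sketch of series (39) with $\eta =1/7$ ; the term count 1200 is an arbitrary choice comfortably beyond the cut-off index.

```python
import math

ETA = 1.0 / 7.0

def term(k, s):
    # k-th term T_k of power series (39), computed via logs to avoid overflow
    a = ETA * (k + 1)
    return (-1.0) ** k * math.exp((a - 1.0) * math.log(s) - math.lgamma(a))

s = 40.0
terms = [term(k, s) for k in range(1200)]
largest = max(abs(t) for t in terms)                          # ~ e^s / sqrt(2*pi*s), eq. (45)
leading = s ** (-8.0 / 7.0) / (7.0 * math.gamma(6.0 / 7.0))   # leading-term estimate (40)
# sum(terms) in double precision carries no meaningful digits here:
# largest/leading ~ 7.8e18, far beyond the reciprocal machine precision ~ 4.5e15
```

At $s=40$ this reproduces the largest term $\approx 1.485\times {10}^{16}$ and the net-sum estimate $\approx 1.907\times {10}^{-3}$ .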

Let us calculate ${k}_{c}$ , the index value at which the largest term occurs in power series (39). For large s, index ${k}_{c}$ satisfies approximately

$\left(\eta \left({k}_{c}+1\right)-1\right)={z}_{c}=\left(s-\frac{1}{2}\right)$ (47)

Solving for ${k}_{c}$ from this equation, we obtain

${k}_{c}=\frac{s+\frac{1}{2}}{\eta}-1$

For $s=40$ and $\eta =\frac{1}{7}$ , we have ${k}_{c}=282.5$ .

Figure 1 plots the k-th term of power series (39) vs index k for $s=40$ and $\eta =1/7$ . The largest term occurs at indices k = 282 and k = 283, which correspond very well to the predicted value of ${k}_{c}=282.5$ .

Figure 1. Plot of $log\left|{T}_{k}\right|$ vs k for $s=40$ and $\eta =1/7$ . The inset shows details of the plot near the location of maximum.

It is worthwhile to find ${k}_{f}$ , the cut-off index beyond which all terms in the power series are smaller than a threshold value specified by a relative error tolerance (ε). Specifically, we find index ${k}_{f}$ such that

$\left|{T}_{k}\right|\le \epsilon \cdot g\left(s\right)\mathrm{,}\text{\hspace{1em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\text{\hspace{0.05em}}k\ge {k}_{f}$ (48)

With the expression of ${T}_{k}$ given in (41), we cast condition (48) approximately into a continuous problem

$\mathrm{log}\left(\frac{{s}^{z}}{\Gamma \left(z+1\right)}\right)\le \mathrm{log}\epsilon +\mathrm{log}g\left(s\right),\text{\hspace{1em}}\text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{all}\text{\hspace{0.17em}}\text{\hspace{0.05em}}z\ge {z}_{f}=\eta \left({k}_{f}+1\right)-1$

Applying the Stirling approximation (42) on the LHS and the leading-term asymptotics (40) of $g\left(s\right)$ for large s on the RHS, the constraint on ${z}_{f}$ becomes

$-\left({z}_{f}+\frac{1}{2}\right)\left(\mathrm{log}\frac{{z}_{f}}{s}-1\right)-\frac{1}{2}\mathrm{log}\left(2\text{\pi e}s\right)\le \mathrm{log}\epsilon -\frac{8}{7}\mathrm{log}s-\mathrm{log}\left(7\text{\hspace{0.05em}}\Gamma \left(\frac{6}{7}\right)\right)$ (49)

For large s, the largest term of power series (39) occurs at ${z}_{c}=s-\frac{1}{2}$ . We expect the cut-off threshold ${z}_{f}$ to be a small multiple of ${z}_{c}$ , which suggests rewriting the inequality approximately in terms of $\frac{{z}_{f}}{s}$ :

$\frac{{z}_{f}}{s}\left(\mathrm{log}\frac{{z}_{f}}{s}-1\right)\ge \frac{1}{s}\left[-\mathrm{log}\epsilon +\frac{9}{14}\mathrm{log}s+\mathrm{log}\left(\frac{7\text{\hspace{0.05em}}\Gamma \left(\frac{6}{7}\right)}{\sqrt{2\text{\pi e}}}\right)\right]$ (50)

Function $u\left(logu-1\right)$ is well approximated by its quadratic expansion around $u=\text{e}$ .

$u\left(logu-1\right)\approx \frac{\text{e}}{2}\left[{\left(\frac{u}{\text{e}}\right)}^{2}-1\right]$

Using this quadratic approximation and taking equality in (50), we solve for ${z}_{f}$ .

${z}_{f}=\text{e}s\sqrt{\frac{2}{\text{e}s}\left[-\mathrm{log}\epsilon +\frac{9}{14}\mathrm{log}s+\mathrm{log}\left(\frac{7\text{\hspace{0.05em}}\Gamma \left(\frac{6}{7}\right)}{\sqrt{2\text{\pi e}}}\right)\right]+1}$ (51)

The corresponding index ${k}_{f}$ is

${k}_{f}=\frac{{z}_{f}+1}{\eta}-1$ (52)

For $\eta =\frac{1}{7}$ , $s=40$ , and $\epsilon ={10}^{-16}$ , we have ${k}_{f}=1008$ .
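The two index estimates can be packaged in a few lines. The sketch below (Python; $\eta =1/7$ and the constant inside the square root follow (47) and (51)-(52)) reproduces the values quoted above.

```python
import math

ETA = 1.0 / 7.0

def k_c(s, eta=ETA):
    # peak index from (47): eta*(k_c + 1) - 1 = s - 1/2
    return (s + 0.5) / eta - 1.0

def k_f(s, eps=1e-16, eta=ETA):
    # cut-off index from (51)-(52)
    c = math.log(7.0 * math.gamma(6.0 / 7.0) / math.sqrt(2.0 * math.pi * math.e))
    r = (2.0 / (math.e * s)) * (-math.log(eps) + (9.0 / 14.0) * math.log(s) + c) + 1.0
    z_f = math.e * s * math.sqrt(r)
    return (z_f + 1.0) / eta - 1.0
```

For $s=40$ and $\epsilon ={10}^{-16}$ this gives ${k}_{c}=282.5$ and ${k}_{f}\approx 1008$ , matching the values in the text.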

Note that summing the first ${k}_{f}$ terms in the power series (39) is only a necessary condition for achieving a relative accuracy of $\epsilon ={10}^{-16}$ in calculating $g\left(s\right)$ . It is not a sufficient condition. In the numerical computation, the result is polluted by the accumulation of round-off errors caused by finite precision arithmetic. Thus, the numerical accuracy is limited by the machine precision of the number representation system multiplied by the largest term in the power series. As we showed above, with double precision arithmetic, even at a moderate $s=40$ , the round-off errors completely wipe out the numerical accuracy.

3.6. Density of Individual Injury Probability in the Crowd

In this subsection, we write out $\rho \left(p\right)$ , the distribution density of individual injury probability, which shows the biological variability of the crowd in its susceptibility to hearing loss injury.

In all of the analysis above, function $g\left(s\right)$ is the density of ${s}_{\text{new}}$ after the scaling ${s}_{\text{new}}={a}^{\frac{-1}{\eta}}{s}_{\text{old}}$ . The density of ${s}_{\text{old}}$ before the scaling is given by

${g}_{\text{pre-scaling}}\left(s\right)={a}^{\frac{-1}{\eta}}g\left({a}^{\frac{-1}{\eta}}s\right)$ (53)

Going back to non-injury probability $q=exp\left(-{s}_{\text{old}}\right)$ , we write out the density of individual non-injury probability ${\rho}_{q}\left(q\right)$ in terms of $g\left(s\right)$ .

${\rho}_{q}\left(q\right)=\frac{1}{q}{g}_{\text{pre-scaling}}\left(ln\frac{1}{q}\right)={a}^{\frac{-1}{\eta}}\frac{1}{q}g\left({a}^{\frac{-1}{\eta}}ln\frac{1}{q}\right)$ (54)

The density of individual injury probability $p=1-q$ is

$\rho \left(p\right)={\rho}_{q}\left(1-p\right)={a}^{\frac{-1}{\eta}}\frac{1}{1-p}g\left({a}^{\frac{-1}{\eta}}\mathrm{ln}\frac{1}{1-p}\right)$ (55)

where $g\left(s\right)$ is the density of the post-scaling random variable $s\left(\omega \right)={a}^{\frac{-1}{\eta}}ln\frac{1}{q\left(\omega \right)}$ . Function $g\left(s\right)$ is calculated as follows.

• For $s\le 16$ , $g\left(s\right)$ is numerically computed using the partial sum of the first $N=max\left({k}_{f}\mathrm{,500}\right)$ terms in the power series around $s=0$ given in (20).

$g\left(s\right)\approx {\displaystyle \underset{k=0}{\overset{N}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)\right)}{s}^{\eta \left(k+1\right)-1}\mathrm{,}\text{\hspace{1em}}\text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\le 16$ (56)

Here ${k}_{f}$ is the number of terms to include in the summation so that the truncation error is smaller than 10^{−16}. ${k}_{f}$ is estimated using (51) and (52). Since the estimate of ${k}_{f}$ is not very reliable for small s, we use at least 500 terms in the summation.

• For $s\ge 16$ , $g\left(s\right)$ is evaluated using the first ${N}_{g}=12$ groups of terms in asymptotic approximation (38) for large s. In (38), only the first 3 groups are explicitly shown. Now let us write out a general formula for the partial sum of the first ${N}_{g}$ groups of terms in the asymptotic approximation. The general formula is in the form of a double summation and is suitable for numerical implementation.

$g\left(s\right)\approx {\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{{\left(-1\right)}^{k+1}}{\Gamma \left(1-\frac{k}{7}\right)\Gamma \left(\frac{k}{7}\right)}{s}^{\frac{-k}{7}}\left[{\displaystyle \underset{n=1}{\overset{{N}_{g}}{\sum}}}\text{\hspace{0.05em}}\Gamma \left(\frac{k}{7}+n\right){s}^{-n}\right]\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\ge 16$ (57)

The threshold ( $s=16$ ) for switching from the power series around $s=0$ to the asymptotic approximation at large s is discussed in the next section.

We need to point out that in asymptotic approximations (38) and (57), at a fixed value of s, if we include more and more groups of terms in the approximation, eventually the sum diverges. This is caused by the divergence of infinite series

${\displaystyle \underset{n=1}{\overset{+\infty}{\sum}}}\text{\hspace{0.05em}}\Gamma \left(\frac{k}{7}+n\right){s}^{-n}$

In this infinite series, the gamma function coefficient increases much faster than exponential growth. At any fixed value of s, as index n increases, the n-th term behaves like ${\left(\frac{n}{\text{e}s}\right)}^{n}$ , which diverges to infinity. At a fixed and moderate value of ${N}_{g}$ , however, asymptotic approximation (57) provides a vital numerical tool for computing function $g\left(s\right)$ at large s when the power series around $s=0$ loses numerical accuracy due to the accumulation of round-off errors.
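The two-regime computation of $g\left(s\right)$ in (56)-(57) can be sketched in a few lines. Below is a minimal double-precision Python implementation for $\eta =1/7$ ; the fixed term count of 600 stands in for $max\left({k}_{f}\mathrm{,500}\right)$ , and ${N}_{g}=12$ follows the text.

```python
import math

ETA = 1.0 / 7.0

def g_series(s, n_terms=600):
    # power series (56) around s = 0; reliable in double precision for s <= 16
    total = 0.0
    for k in range(n_terms):
        a = ETA * (k + 1)
        total += (-1.0) ** k * math.exp((a - 1.0) * math.log(s) - math.lgamma(a))
    return total

def g_asym(s, n_groups=12):
    # asymptotic approximation (57) with N_g groups of terms; for s >= 16
    total = 0.0
    for k in range(1, 7):
        pre = (-1.0) ** (k + 1) * s ** (-k / 7.0) / (math.gamma(1.0 - k / 7.0) * math.gamma(k / 7.0))
        inner = sum(math.gamma(k / 7.0 + n) * s ** (-n) for n in range(1, n_groups + 1))
        total += pre * inner
    return total

def g(s):
    # switching procedure: power series up to s = 16, asymptotics beyond
    return g_series(s) if s <= 16.0 else g_asym(s)
```

Near the switching point the two branches agree closely, and the resulting $g\left(s\right)$ is positive and decaying, as a density of this type should be.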

4. Numerical Results and Discussion

In this section, we study the numerical behaviors of the power series around $s=0$ and the asymptotic approximations at large s. Based on the numerical results, we develop a numerical procedure for computing $g\left(s\right)$ that is accurate for all values of s, small or large. The procedure is already mentioned in the previous section and is based on switching between power series (56) and asymptotics (57). Using this numerical procedure, we examine the distribution density $\rho \left(p\right)$ of the individual injury probability in the crowd. We also plot the cumulative distribution function (CDF) of the injury probability because the CDF illustrates the effect of parameter a on the distribution of individual injury probability. Here parameter a is the observed odds of injury of an average subject in the crowd.

All simulations in this section are for the case of $\eta =\frac{1}{7}$ , which is meant to approximate the observed value of $\eta =0.1494$ in experiments. We first study the errors in the numerical implementation of power series (56) in IEEE double precision. When a sufficiently large number of terms is included in the power series summation to keep the theoretical truncation error well below the round-off errors, the numerical errors are mainly caused by the accumulation of round-off errors. To calculate the numerical errors, we need a very accurate reference solution. For that purpose, we implemented all operations and functions (including $log(\cdot )$ , $exp(\cdot )$ and $\Gamma (\cdot )$ ) at very high numerical precision, representing each number by an array of 200 integers to achieve 1600 significant decimal digits. The numerical error is calculated as the difference between the double precision solution and the high precision solution. Figure 2 plots the absolute error and the relative error vs s when the power series around $s=0$ is implemented in IEEE double precision. In the previous section, we derived that the largest term in power series (56) is $ma{x}_{k}\left|{T}_{k}\right|\approx \frac{{\text{e}}^{s}}{\sqrt{2\text{\pi}s}}$ . Based on that, we expect the absolute numerical error to grow approximately in proportion to ${\text{e}}^{s}$ . In Figure 2, the gray dashed line is a fitting of the form $y=c\cdot {\text{e}}^{s}$ to the numerical absolute errors, confirming the theoretical prediction.

The exponential growth of the accumulated round-off errors with s implies that, for large s, numerical summation of power series (56) is not a good tool for computing $g\left(s\right)$ . For large s, we turn our attention to asymptotics (57) of $g\left(s\right)$ . The numerical errors in using asymptotics (57) to calculate $g\left(s\right)$ are mainly due to the approximation errors at finite s. In the previous section, we showed that at any fixed value of s, if we include more and more terms in the asymptotics, it eventually diverges. At a finite s, the approximation error of the asymptotics cannot be made arbitrarily small by including more terms, even if we work in exact arithmetic (i.e., no round-off error). The approximation errors can only be reduced by increasing s. Thus, asymptotics (57) provides a good approximation of $g\left(s\right)$ only when we use a fixed number of terms and when s is at least moderately large. Figure 3 displays the approximation errors of asymptotics (57) vs s when 6 groups and 12 groups of terms, respectively, are used in the approximation. For $s>10$ , the 12-group approximation is more accurate, while for $s<10$ , the 6-group approximation is more accurate. With a fixed number of groups of terms, the approximation becomes more accurate as s increases.

Figure 2. Numerical errors vs s when power series (56) of $g\left(s\right)$ is implemented in IEEE double precision. Here the numerical errors are caused by the accumulation of round-off errors. The errors grow exponentially with s because the largest term in summation grows exponentially. The gray dashed line is a fitting of the form $y=c\cdot {\text{e}}^{s}$ .

These numerical findings agree with the theoretical predictions in the previous section.

From the results of Figure 2 and Figure 3, we see that a good numerical strategy for calculating $g\left(s\right)$ is to use the power series around $s=0$ when s is not too large and to switch to the asymptotics when s is large enough. Figure 4 compares the errors of these two formulas: 1) the numerical error when implementing power series summation (56) in IEEE double precision, and 2) the approximation error in asymptotics (57). The accumulation of round-off errors in the power series grows with s while the approximation error in the asymptotics decays as s is increased. Figure 4 suggests $s=16$ as an optimal threshold for switching from power series (56) to asymptotics (57). When this switching strategy is implemented in IEEE double precision, it provides a numerical procedure for calculating $g\left(s\right)$ that is accurate to 10^{−8} for all values of s in $\left(\mathrm{0,}+\infty \right)$ .

With the switching mechanism developed above for calculating $g\left(s\right)$ , we study the distribution density and cumulative distribution function of the individual injury probability. In Figure 5, we plot density $\rho \left(p\right)$ of the individual injury probability p in the crowd. Density $\rho \left(p\right)$ is given in equation (55), expressed in terms of function $g\left(s\right)$ . Figure 5 indicates that density $\rho \left(p\right)$ diverges to infinity at $p=0$ and $p=1$ . Approximately, distribution $\rho \left(p\right)$ can be viewed as the mix of 3 component distributions:

1) a δ-function distribution at $p=0$ ,

2) a nearly uniform distribution in $\left(\mathrm{0,1}\right)$ , and

3) a δ-function distribution at $p=1$ .

Because of the divergence of density at $p=0$ and at $p=1$ , the weights of the two δ-function components cannot be readily read out from the plot of density

Figure 3. Approximation errors in asymptotics (57) vs s when respectively 6 groups of terms and 12 groups of terms are used in approximation. With each fixed number of groups of terms, the asymptotic approximation becomes more accurate at larger s.

Figure 4. Comparison of two errors: 1) the error in numerical summation of power series around $s=0$ , and 2) the approximation error in asymptotics for large s. The intersection of the two suggests a threshold for switching from one to the other.

$\rho \left(p\right)$ . To see the weights, we examine $F\left(p\right)$ , the cumulative distribution function (CDF) of p, which clearly illustrates the weight of a δ-function component as a jump in function value of the same magnitude.

The CDF, $F\left(p\right)$ , has the expression

$F\left(p\right)={\displaystyle {\int}_{0}^{p}}\rho \left(\stackrel{\u02dc}{p}\right)\text{d}\stackrel{\u02dc}{p}={{\displaystyle {\int}_{0}^{s}}g\left(\stackrel{\u02dc}{s}\right)\text{d}\stackrel{\u02dc}{s}|}_{s=-{a}^{-1/\eta}\mathrm{log}\left(1-p\right)}={G\left(s\right)|}_{s=-{a}^{-1/\eta}\mathrm{log}\left(1-p\right)}$ (58)

where $G\left(s\right)$ is CDF of the post-scaling random variable $s\left(\omega \right)=-{a}^{-1/\eta}log\left(1-p\left(\omega \right)\right)$ , expressed as a power series around $s=0$ in (19). While theoretically, power series (19) converges for any s, the practical numerical convergence is limited

Figure 5. Density $\rho \left(p\right)$ of the individual injury probability p for $a=1$ where parameter a is the observed odds of injury of an average subject in the crowd. The logarithmic scale is used in the vertical direction because $\rho \left(p\right)$ diverges to infinity at $p=0$ and $p=1$ .

by the precision of the number representation system. In IEEE double precision, we adopt a switching numerical procedure for calculating $G\left(s\right)$ , similar to that for calculating $g\left(s\right)$ .

• For $s\le 16$ , $G\left(s\right)$ is numerically computed using the partial sum of the first $N=max\left({k}_{f}\mathrm{,500}\right)$ terms in the power series around $s=0$ given in (19).

$G\left(s\right)\approx {\displaystyle \underset{k=0}{\overset{N}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(\eta \left(k+1\right)+1\right)}{s}^{\eta \left(k+1\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\le 16$ (59)

• For $s\ge 16$ , $G\left(s\right)$ is evaluated using the first ${N}_{g}=12$ groups of terms in the asymptotics for large s given in (37). We rewrite it as a general formula below.

$G\left(s\right)\approx 1+{\displaystyle \underset{k=1}{\overset{6}{\sum}}}\frac{{\left(-1\right)}^{k}}{\Gamma \left(1-\frac{k}{7}\right)\Gamma \left(\frac{k}{7}\right)}{s}^{\frac{-k}{7}}\left[{\displaystyle \underset{n=1}{\overset{{N}_{g}}{\sum}}}\text{\hspace{0.05em}}\Gamma \left(\frac{k}{7}+\left(n-1\right)\right){s}^{-\left(n-1\right)}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s\ge 16$ (60)
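The switching computation of $G\left(s\right)$ in (59)-(60), and the CDF $F\left(p\right)$ of (58) built on it, can be sketched as follows (a minimal double-precision Python sketch for $\eta =1/7$ ; the fixed term counts of 600 and 12 follow the text, and a defaults to 1):

```python
import math

ETA = 1.0 / 7.0

def G_series(s, n_terms=600):
    # power series (59) for the CDF G(s); for s <= 16
    total = 0.0
    for k in range(n_terms):
        a = ETA * (k + 1)
        total += (-1.0) ** k * math.exp(a * math.log(s) - math.lgamma(a + 1.0))
    return total

def G_asym(s, n_groups=12):
    # asymptotic approximation (60); for s >= 16
    total = 1.0
    for k in range(1, 7):
        pre = (-1.0) ** k * s ** (-k / 7.0) / (math.gamma(1.0 - k / 7.0) * math.gamma(k / 7.0))
        inner = sum(math.gamma(k / 7.0 + n - 1.0) * s ** (-(n - 1.0)) for n in range(1, n_groups + 1))
        total += pre * inner
    return total

def F(p, a=1.0, eta=ETA):
    # CDF of the individual injury probability, eq. (58)
    s = -a ** (-1.0 / eta) * math.log(1.0 - p)
    if s <= 0.0:
        return 0.0
    return G_series(s) if s <= 16.0 else G_asym(s)
```

The two branches agree closely at the switching point, and the computed $G\left(s\right)$ and $F\left(p\right)$ increase monotonically and stay in $\left(\mathrm{0,1}\right)$ , as a CDF must.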

Figure 6 compares plots of $F\left(p\right)$ respectively for $a=1/3$ , $a=1$ , and $a=3$ . Parameter a is the observed odds of injury of an average subject in the crowd. As parameter a increases from small to one to large, the δ-function component at $p=0$ decreases and the δ-function component at $p=1$ increases by the same amount while the distribution in between is approximately unchanged.

5. Conclusion

In this study, we examined the observed logistic dose-response relation for hearing loss injury caused by exposure to multiple acoustic impulses, in the mathematical framework of biovariability. Previously, we interpreted the

Figure 6. Plots of $F\left(p\right)$ , CDF of the individual injury probability, respectively for $a=1/3$ , $a=1$ , and $a=3$ . Parameter a is the observed odds of injury of an average subject in the crowd. As parameter a increases, its most prominent effect is to shift F(p) down nearly uniformly while keeping $F\left(0\right)=0$ and $F\left(1\right)=1$ .

observed dose-response relation based solely on the immunity effect under the assumption that all individual subjects are statistically identical (i.e., no biovariability). In the current study, we viewed the problem from a completely different angle: we explored the possibility of understanding the observed dose-response relation based solely on the biovariability of the crowd under the assumption that all sound exposure events act independently from each other in causing injury regardless of whether one event is preceded by another. We found that theoretically the biovariability alone is indeed capable of explaining the observed dose-response relation. We derived an analytical expression for the distribution density of individual injury probability in the crowd that produces the observed results. We also derived asymptotic approximations to the distribution density and the cumulative distribution function. The asymptotic approximations make it possible to compute the distribution density in finite precision arithmetic, in parameter regions where the analytical expression suffers catastrophically from loss of accuracy due to the accumulation of round-off errors. A robust numerical procedure for calculating the distribution density was developed based on switching between the analytical expression and the asymptotics. The resulting numerical procedure is accurate in all parameter regions, providing a practical tool for computing the distribution density. The mathematical framework constructed in the current study, along with the theoretical results and the numerical tools obtained, paves the way for further investigation of the presence and effects of biovariability.

6. Disclaimer

The authors would like to thank the Joint Non-Lethal Weapons Directorate of US Department of Defense for supporting this work. The views expressed in this document are those of the authors and do not reflect the official policy or position of the Department of Defense or the US Government.

Appendix A

We prove that $G\left(s\right)$ given in (31) satisfies

1) $\underset{s\to +\infty}{lim}\left(G\left(s\right)-1\right)=0$ , and

2) $G\left(s\right)$ increases monotonically.

Item 1) is obtained directly by applying L’Hospital’s rule.

$\begin{array}{c}\underset{s\to +\infty}{lim}\left(G\left(s\right)-1\right)=\underset{s\to +\infty}{lim}\frac{{\displaystyle {\int}_{0}^{s}}{\text{e}}^{u}\left(\frac{1}{\Gamma \left(\frac{1}{3}\right)}{u}^{\frac{-2}{3}}-\frac{1}{\Gamma \left(\frac{2}{3}\right)}{u}^{\frac{-1}{3}}\right)\text{d}u}{{\text{e}}^{s}}\\ =\underset{s\to +\infty}{lim}\frac{{\text{e}}^{s}\left(\frac{1}{\Gamma \left(\frac{1}{3}\right)}{s}^{\frac{-2}{3}}-\frac{1}{\Gamma \left(\frac{2}{3}\right)}{s}^{\frac{-1}{3}}\right)}{{\text{e}}^{s}}=0\end{array}$ (61)

To prove 2), we introduce function

${h}_{1}\left(s\right)\equiv \frac{1}{\Gamma \left(\frac{2}{3}\right)}{s}^{\frac{-1}{3}}-\frac{1}{\Gamma \left(\frac{1}{3}\right)}{s}^{\frac{-2}{3}}$

We write ${G}^{\prime}\left(s\right)$ in terms of ${h}_{1}\left(s\right)$ and define function ${w}_{1}\left(s\right)$ :

${G}^{\prime}\left(s\right)={\text{e}}^{-s}\left[1+{\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}{\text{e}}^{u}{h}_{1}\left(u\right)\text{d}u-{\text{e}}^{s}{h}_{1}\left(s\right)\right]\equiv {\text{e}}^{-s}{w}_{1}\left(s\right)$ (62)

We only need to show ${w}_{1}\left(s\right)\ge 0$ . The derivative of ${w}_{1}\left(s\right)$ has the expression

${{w}^{\prime}}_{1}\left(s\right)={\text{e}}^{s}{h}_{1}\left(s\right)-{\text{e}}^{s}{h}_{1}\left(s\right)-{\text{e}}^{s}{{h}^{\prime}}_{1}\left(s\right)=-{\text{e}}^{s}{{h}^{\prime}}_{1}\left(s\right)$ (63)

From the definition, ${h}_{1}\left(s\right)$ and ${{h}^{\prime}}_{1}\left(s\right)$ satisfy

${h}_{1}\left(s\right)=\frac{1}{\Gamma \left(\frac{2}{3}\right)}{s}^{\frac{-2}{3}}\left({s}^{\frac{1}{3}}-\frac{\Gamma \left(\frac{2}{3}\right)}{\Gamma \left(\frac{1}{3}\right)}\right)=\{\begin{array}{ll}<0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s<{s}_{0}\hfill \\ >0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s>{s}_{0}\hfill \end{array}$ (64)

${{h}^{\prime}}_{1}\left(s\right)=\frac{1}{3\Gamma \left(\frac{2}{3}\right)}{s}^{\frac{-5}{3}}\left(\frac{2\Gamma \left(\frac{2}{3}\right)}{\Gamma \left(\frac{1}{3}\right)}-{s}^{\frac{1}{3}}\right)=\{\begin{array}{ll}>0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s<8{s}_{0}\hfill \\ <0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s>8{s}_{0}\hfill \end{array}$ (65)

where

${s}_{0}={\left(\frac{\Gamma \left(\frac{2}{3}\right)}{\Gamma \left(\frac{1}{3}\right)}\right)}^{3}=0.12915$
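The value of ${s}_{0}$ and the sign pattern of ${h}_{1}$ can be verified numerically (a quick Python check):

```python
import math

g13 = math.gamma(1.0 / 3.0)   # Gamma(1/3), approximately 2.6789
g23 = math.gamma(2.0 / 3.0)   # Gamma(2/3), approximately 1.3541

def h1(s):
    # h1(s) = s^(-1/3)/Gamma(2/3) - s^(-2/3)/Gamma(1/3)
    return s ** (-1.0 / 3.0) / g23 - s ** (-2.0 / 3.0) / g13

s0 = (g23 / g13) ** 3   # zero of h1, from (64)
# h1 is negative below s0 and positive above; its derivative changes sign at 8*s0
```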

The sign of ${{h}^{\prime}}_{1}\left(s\right)$ tells us that

${{w}^{\prime}}_{1}\left(s\right)=-{\text{e}}^{s}{{h}^{\prime}}_{1}\left(s\right)=\{\begin{array}{ll}<0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s<8{s}_{0}\hfill \\ >0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}s>8{s}_{0}\hfill \end{array}$ (66)

In other words, ${w}_{1}\left(s\right)$ attains its minimum at $s=8{s}_{0}$ . Thus, to show ${w}_{1}\left(s\right)\ge 0$ , we only need to show ${w}_{1}\left(8{s}_{0}\right)\ge 0$ . We use integration by parts to rewrite ${w}_{1}\left(8{s}_{0}\right)$ as

$\begin{array}{c}{w}_{1}\left(8{s}_{0}\right)={\left[1+{\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}{\text{e}}^{u}{h}_{1}\left(u\right)\text{d}u-{\text{e}}^{s}{h}_{1}\left(s\right)\right]|}_{s=\left(8{s}_{0}\right)}\\ ={\left[1+{\text{e}}^{s}{\displaystyle {\int}_{0}^{s}}{h}_{1}\left(z\right)\text{d}z-{\displaystyle {\int}_{0}^{s}}\text{\hspace{0.05em}}{\text{e}}^{u}\left({\displaystyle {\int}_{0}^{u}}{h}_{1}\left(z\right)\text{d}z\right)\text{d}u-{\text{e}}^{s}{h}_{1}\left(s\right)\right]|}_{s=\left(8{s}_{0}\right)}\end{array}$ (67)

Function ${\displaystyle {\int}_{0}^{u}}{h}_{1}\left(z\right)\text{d}z$ satisfies

${\displaystyle {\int}_{0}^{u}}{h}_{1}\left(z\right)\text{d}z=\left(\frac{3{u}^{\frac{2}{3}}}{2\Gamma \left(\frac{2}{3}\right)}-\frac{3{u}^{\frac{1}{3}}}{\Gamma \left(\frac{1}{3}\right)}\right)=\{\begin{array}{ll}<0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}u<8{s}_{0}\hfill \\ =0,\hfill & \text{\hspace{0.05em}}\text{at}\text{\hspace{0.17em}}\text{\hspace{0.05em}}u=8{s}_{0}\hfill \\ >0,\hfill & \text{\hspace{0.05em}}\text{for}\text{\hspace{0.17em}}\text{\hspace{0.05em}}u>8{s}_{0}\hfill \end{array}$

Using the sign of ${\displaystyle {\int}_{0}^{u}}{h}_{1}\left(z\right)\text{d}z$ together with ${\text{e}}^{u}\ge 1$ on $\left(0,s\right)$ , we have

$\begin{array}{c}{w}_{1}\left(8{s}_{0}\right)\ge {\left[1-{\displaystyle {\int}_{0}^{s}}{\displaystyle {\int}_{0}^{u}}\text{\hspace{0.05em}}{h}_{1}\left(z\right)\text{d}z\text{d}u-{\text{e}}^{s}{h}_{1}\left(s\right)\right]|}_{s=\left(8{s}_{0}\right)}\\ ={\left[1-\left(\frac{9{s}^{\frac{5}{3}}}{10\Gamma \left(\frac{2}{3}\right)}-\frac{9{s}^{\frac{4}{3}}}{4\Gamma \left(\frac{1}{3}\right)}\right)-{\text{e}}^{s}\left(\frac{{s}^{\frac{-1}{3}}}{\Gamma \left(\frac{2}{3}\right)}-\frac{{s}^{\frac{-2}{3}}}{\Gamma \left(\frac{1}{3}\right)}\right)\right]|}_{s=\left(8{s}_{0}\right)}\\ =0.1491>0\end{array}$ (68)

Combining these results, we conclude that ${G}^{\prime}\left(s\right)={\text{e}}^{-s}{w}_{1}\left(s\right)\ge {\text{e}}^{-s}{w}_{1}\left(8{s}_{0}\right)>0$ and thus, function $G\left(s\right)$ increases monotonically.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] | Goldsmith, M. (2015) Sound: A Very Short Introduction. Oxford University Press, Oxford. https://doi.org/10.1093/actrade/9780198708445.001.0001 |

[2] | Kramer, S. and Brown, D.K. (2018) Audiology: Science to Practice. 3rd Edition, Plural Publishing, San Diego. |

[3] |
Chan, P., Ho, K. and Ryan, A.F. (2016) Impulse Noise Injury Model. Military Medicine, 181, 59-69. https://doi.org/10.7205/MILMED-D-15-00139 |

[4] | Hamernik, R.P., Patterson, J.H., Ahroon, W.A. and Stuhmiller, J.H. (1998) A Health Hazard Assessment for Blast Overpressure Exposures Subtitle-Use of Animal Test Data in the Development of a Human Auditory Hazard Criterion for Impulse Noise (Part 1). Auditory Research Laboratory, State University of New York at Plattsburgh, Fort Detrick. http://www.dtic.mil/dtic/tr/fulltext/u2/a393522.pdf |

[5] | Hamernik, R.P., Patterson, J.H., Ahroon, W.A. and Stuhmiller, J.H. (1998) A Health Hazard Assessment for Blast Overpressure Exposures Subtitle-Use of Animal Test Data in the Development of a Human Auditory Hazard Criterion for Impulse Noise (Part 2). Auditory Research Laboratory, State University of New York at Plattsburgh, Fort Detrick. http://www.dtic.mil/dtic/tr/fulltext/u2/a392410.pdf |

[6] | Wang, H., Burgei, W.A. and Zhou, H. (2017) Interpreting Dose-Response Relation for Exposure to Multiple Sound Impulses in the Framework of Immunity. Health, 9, 1817-1842. https://doi.org/10.4236/health.2017.913132 |

[7] | https://medical-dictionary.thefreedictionary.com/biological+variability |

[8] |
Murphy, W.J., Khan, A. and Shaw, P.B. (2011) Analysis of Chinchilla Temporary and Permanent Threshold Shifts Following Impulsive Noise Exposure. EPHB Report 338-05c, US Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, Cincinnati. http://www.cdc.gov/niosh/surveyreports/pdfs/338-05c.pdf |

[9] |
Wightman, F.L., Flamme, G., Campanella, A.J. and Luz, G.A. (2010) American Institute of Biological Sciences Peer Review of Injury Prevention and Reduction Research Task Area-Impulse Noise Injury Models. http://www.arl.army.mil/www/pages/343/AHAAH_AIBS_revew_Public_Release_11Aug14.pdf |

[10] | Bush, A.W. (1992) Perturbation Methods for Engineers and Scientists. CRC Press, Boca Raton. |

[11] | Hinch, E.J. (1991) Perturbation Methods. Cambridge University Press, New York. https://doi.org/10.1017/CBO9781139172189 |

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.