High Accuracy When Measuring Physical Constants: From the Perspective of the Information-Theoretic Approach

Boris Menin

Refrigeration Consultancy Ltd., Beersheba, Israel.

**DOI:** 10.4236/jamp.2020.85067


The practical value of high-precision models of the physical phenomena and technological processes under study is a decisive factor in science and technology. Numerous methods and criteria for optimizing models have been proposed. However, the classification of measurement uncertainty according to the number of variables taken into account and their qualitative choice still receives insufficient attention. Our goal is to develop a new criterion applicable to any group of experimental data obtained by various measurement methods. Using the “information-theoretic method”, we propose two procedures for analyzing experimental results using a quantitative indicator to calculate the relative uncertainty of the measurement model, which, in turn, determines the legitimacy of the declared value of a physical constant. The presented procedure is applied to the results of measurements of the Boltzmann constant, Planck constant, Hubble constant, and gravitational constant.

Keywords

Amount of Information, CODATA, Boltzmann Constant, Gravitational Constant, Hubble Constant, Planck Constant, International System of Units, Mathematical Modeling, Measurement, Relative Uncertainty

Share and Cite:

Menin, B. (2020) High Accuracy When Measuring Physical Constants: From the Perspective of the Information-Theoretic Approach. *Journal of Applied Mathematics and Physics*, **8**, 861-887. doi: 10.4236/jamp.2020.85067.

1. Introduction

Any modern scientific research is based on physical laws containing numerical constants with specific, universally used symbols. First, their exact values are necessary to make reliable, verifiable forecasts about the structure of the world around us. Second, checking their numerical values through complex experiments allows us to assess the consistency and acceptability of a particular physical theory. These quantities are called physical constants.

When scientists measure or calculate some physical constant from their data, they usually indicate the range of values within which this “true value” lies with a given probability. The result is therefore not simply a number, but a number accompanied by a measurement uncertainty [1] [2]. The analysis of experimental data then requires a careful selection of the mathematical apparatus for a balanced, objective assessment of the available results in terms of their consistency. For this, a metric is selected that can represent the quantitative value of this consistency. It is well known that with an incorrectly chosen metric for evaluating the available data on the measurement of physical constants, the expected efficiency of the analysis will be low, leading to erroneous conclusions. To increase the credibility of a suitable metric, various statistical methods are used.

As an example, consider existing statistical methods used to estimate the Hubble constant. As shown in [3], these methods only partially account for random measurement errors, which leads to estimates of the Hubble constant, *H*_{0}, that are statistically inconsistent and systematically too low. In particular, one of the methods used to calculate the Hubble constant is based on the study of type Ia supernovae using the distance ladder. Supernovae are known as “standard candles” because they produce constant peak brightness values. Because the observed brightness depends on the distance to the supernova, it can be used to measure the distance from the Earth. The process of measuring distance is very complicated. It is based on the calibration of the distance ladder, which introduces significant uncertainty, both systematic and random. The systematic error is the larger of the two, and depending on whether the error is positive or negative, the Hubble constant is underestimated or overestimated, respectively. The random part of the error makes some measured distances too large and others too small. Contrary to what one might think, these errors do not cancel on average but lead to a systematically low value of *H*_{0} [3].

More specifically, various statistical methods are used to estimate the Hubble constant, including weighted regression analysis and Bayesian analysis, which allow other available data sources to be included. These methods extend the usual least-squares model, in which velocity is regressed on distance and the estimate is found by minimizing the standard error. Common to the three methods—least-squares model, weighted regression, and Bayesian analysis—is that the error in the estimate does not disappear, or even decrease, when more velocity/distance measurements are added [3].

Another example, close to the application of statistical methods for verifying the magnitude of a physical constant, is the realization of the International System of Units (SI). One of the outstanding scientific achievements of the 21^{st} century is the approval of a new version of the SI [4]. Since 2019, this system includes seven base quantities, variables derived from them, and several physical constants with fixed values. This was made possible only thanks to unique methods of measuring physical constants, by which evaluation of the results and expert analysis of the obtained data are carried out.

In the CODATA (the Committee on Data for Science and Technology) procedure, the selected experimental results of measurements of the physical constants are combined with their individual relative and standard errors in the least-squares adjustment (LSA) procedure. However, when the data and the model are incompatible, considering the indicated uncertainties, this procedure does not give adequate results [5]. A detailed presentation and explanation of the CODATA methodology are presented in [6].

The main purpose of LSA is to fit models to measurements that are accompanied by quoted uncertainties; the weights are chosen according to these uncertainties. The advantage of LSA is that its estimate corresponds to the maximum-likelihood decision. This provides the usual maximum-likelihood guarantees (consistency, asymptotic normality), which in turn allow hypothesis tests to be built and confidence intervals to be obtained for the estimated regression coefficients.

A distinctive feature of the LSA method is that it is aimed at checking the consistency of the results, and for this, the initial experimental values are “adjusted,” that is, changed to optimize the final dispersion of the set. In the case of conflicting results, the associated uncertainties are increased in the CODATA analysis [7].

In this case, a biased statistical expertise may be present, motivated by personal convictions or preferences [8]; that is, the method involves an element of subjective judgment [9]. In other words, the CODATA concept is not without drawbacks: a statistically significant trend, the aggregate value of consensus, statistical control, underestimated uncertainties, or the weight given to expert judgment. Perhaps the CODATA values have not yet stabilized [5].

However, one should not underestimate the significant efforts of scientists to avoid the above effects. The fact is that the determination of each physical constant using a special CODATA adjustment usually includes the results of measurements of various independent research groups working on the problem of measuring the physical constant for decades. The goal of the coordinated efforts of scientists was to guarantee a situation where systematic effects were not missed.

To summarize the above, attention must be paid to one important feature inherent in all methods of analyzing experimental data and uncertainties in the measurement of physical constants. Systematic uncertainties arising from the idealization of modeling, and due to the philosophical and scientific preferences of researchers, are completely ignored. In other words, the choice of the model of the measuring process is subjective, depending on the researcher's preferences in selecting the quantitative and qualitative set of variables taken into account. This fact complicates the already complex process of checking the model by creating an uncertain target—a situation in which neither the simulated nor the observed behavior of the system is precisely known.

Therefore, when we talk about the level of accuracy of measuring physical constants, we must understand that modern measuring models, test benches, and calculation algorithms have become a very powerful and accurate tool since 2010 [10]. This is true with large reservations precisely because they are based on a large number of assumptions. As a result, to understand carefully obtained results, it is necessary to find a theoretically substantiated method that does not use “weighted estimates and coefficients”.

The fact is that some uncertainties in the experimental results are due to the philosophy of researchers. They either report unjustifiably large errors so that they are not blamed for a wrong approach, or underestimate errors, unconsciously wanting to present the best result (the author is far from suspecting research teams of a scientific adjustment of facts). That is life. Therefore, a method is needed that excludes the subjective component of the measurement process.

We show that with the help of concepts and the mathematical apparatus of information theory, it is possible, theoretically and *without any additional assumptions and simplifications*, to calculate the amount of information contained in the measurement model of the physical constant. This circumstance allows us to establish the value of relative uncertainty, which, in turn, determines the legitimacy of the declared value of the physical constant. We also present specific examples of the application of the described information approach. The presented procedure for calculating relative uncertainty is used to analyze the results of measurements of the Boltzmann constant, Planck constant, Hubble constant, and gravitational constant.

The analysis of publications and all necessary calculations were carried out at the office of Mechanical & Refrigeration Consultation Expert (Beer-Sheba, Israel).

2. Information Approach

2.1. Preliminary Notes

It may seem strange that before starting the experiment you need to list all the base quantities used and the total number of variables considered in the model. Moreover, this important point is completely ignored in the canonical CODATA method for calculating the target value of the physical constant and its relative uncertainty. The need for this requirement is explained as follows.

The fact is that any measurement of a variable by itself implies the presence of an already formulated model. As mentioned in [11], a measurement model constitutes a relationship between the output quantities, or measurands (the quantities intended to be measured), and the input quantities known to be involved in the measurement. In this case, the researcher, based on his own knowledge, experience, and intuition, uses, as a rule, dimensional and dimensionless variables from the International System of Units (SI). This means, on the one hand, that the inclusion of any particular variable is equally likely: to describe the phenomenon being studied, the scientist or engineer selects a qualitative and quantitative set of variables as he wishes. The most famous example of such a situation is the possibility of studying the electron as both a particle and a wave. Although two qualitatively different sets of variables are used to describe the motion of the electron, both, as it turned out, have a right to exist, which led to the concept of wave–particle duality of the electron. On the other hand, by choosing specific variables, the scientist thereby declares the type of process by which he or she intends to measure the physical constant, for example, mechanical, thermal, electromagnetic, combined heat–mass–electromagnetic, etc. Thus, the model contains variables with different dimensions determined by the seven (ξ = 7) base quantities of SI: *L* is length, *M* is mass, *T* is time, *I* is electric current, *θ* is thermodynamic temperature, *J* is luminous intensity, *F* is the amount of substance. These seven base quantities can be combined into different groups, the structure of which depends on the qualitative set of variables considered in the model. Each group is called a class of phenomena (CoP).
In other words, a CoP is a combination of physical phenomena and processes described by a finite number of base and derived quantities (defined as products of powers of the base units) that characterize the features of the process of measuring the physical constant in qualitative and quantitative terms [12]. For example, during gravitational constant measurements with a torsion balance, the following base quantities are typically used: *L*, *M*, *T* (CoP_{SI} ≡ *LMT*). Measurement of the Boltzmann constant is usually realized by CoP_{SI} ≡ *LMTθF* or CoP_{SI} ≡ *LMTθI*. It should be noted that SI is a product of human thinking and does not exist in nature. At the same time, SI is used in science and technology in accordance with the developed consensus [6] [13].

Refining the process of formulating the model from the perspective of choosing a specific CoP may offer a new interpretation of the results of measurements of physical constants, which will be discussed later in the article.

In addition, it should be noted that a researcher, by choosing a specific CoP, in practice discards potential hidden relationships between the variables considered in the model and the ignored variables. This, of course, can affect the accuracy of the proposed model and even increase its uncertainty. Although, in the opinion of a significant part of the scientific community, the model error can be reduced by using a large number of variables thanks to improved algorithms and supercomputers, each variable introduces its own uncertainty into the total integral error that affects the desired result. However, as the dimension of the model increases, only the reliability of the model results improves [14].

To assess the magnitude of the threshold mismatch [1] between the model and the measurement process under study, due to the choice of CoP, we will give the following reasoning and calculations.

2.2. The Amount of Information Contained in a Model

In science and technology, a wide variety of unit systems can be used that are most suitable for a particular application, for example, Imperial and US customary units or Natural units [15]. However, the most widely used system is the international standard metric system—the International System of Units (SI). Therefore, further reasoning and calculations are given as applied to SI, especially since SI units are also used in the CODATA methodology. However, since SI is an Abelian group [16] [17], like any other system of units, the final conclusions do not depend on the choice of a specific system of units.

It can be proved using the concepts and mathematical apparatus of the theory of similarity [12] that SI includes a large but finite number of dimensionless variables [16]:

${\mu}_{\text{SI}} = 38265$ . (1)

All *μ*_{SI} variables cannot be simultaneously taken into account in a model. Typically, a researcher uses 10, 20, or even 130 variables (with CoP_{SI} ≡ *LMTθF* [18]) to describe the process being studied.

For further reasoning, we indicate that information entropy [19] is manifested through the interaction of the studied physical system and the formulated model. This model is an information channel between the physical system and the observer. As a result, information entropy is subjective, depending on the consciousness of the researcher with his preferences in choosing a quantitative and qualitative set of variables taken into account.

We will use an analogy with the theory of signal transmission. Imagine that the observed measurement process has a huge number of properties (quantities, criteria) that characterize its content and its interaction with the environment. Then, we assume that each dimensionless complex represents an original readout (reading [20] [21]) through which the observer can obtain some information on the dimensionless researched field *u* (researched process). In other words, the researcher observing a physical phenomenon, analyzing a process, or designing a device selects—according to his experience, knowledge, and intuition—certain characteristics of the object. In this selection, some connections of the actual object with its surrounding environment are severed. In addition, the modeler takes into account far fewer quantities than the actual reality contains, due to constraints of time and of technical and financial resources. Therefore, the “image” of the object being studied appears in the model with a certain uncertainty, which depends primarily on the number of quantities taken into account. Moreover, the object can be addressed by different groups of researchers, who use different approaches to solving specific problems and, accordingly, different groups of variables, which differ from each other in quality and quantity. Thus, for any physical or technical problem, the occurrence of a particular variable in the model can be considered a random process.

Then, suppose a situation in which all *µ*_{SI} variables of SI can be taken into account, provided that the choice of these quantities is a priori considered equally probable. In this case, we are guided by Brillouin's idea connecting the amount of information obtained in the simulation (observation), without introducing any disturbance into the measurement process, with the uncertainty inherent in the selected model [20].

By comparing the number of variables in SI with the number of variables selected in a particular model, one can calculate the amount of information ΔA_{e} contained in the model [16]

$\Delta {A}_{\text{e}}=k\cdot \mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)\right]$ (2)

where ΔA_{e} is expressed in units of entropy; *μ*_{SI} includes the dimensionless criteria/variables that are considered equally probable when selected by the researcher for the model; ${z}^{\prime\prime}$ and ${\beta}^{\prime\prime}$ are the numbers of all quantities and of base quantities registered in the chosen model, respectively; $\gamma ={z}^{\prime\prime}-{\beta}^{\prime\prime}$; *k* is the Boltzmann constant.

Obviously, in practice, researchers can use dimensionless criteria that are not included in *μ*_{SI}. It is easy to show that the value of *μ*_{2} (the number of dimensionless criteria and numbers in an extended system of units, numbered “2”) does not dramatically influence the final result. Suppose that $2{\mu}_{\text{SI}}={\mu}_{2}$. Taking into account that $\mathrm{ln}{\mu}_{\text{SI}}\gg \mathrm{ln}{\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)}_{\text{SI}}$, $\mathrm{ln}{\mu}_{2}\gg \mathrm{ln}{\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)}_{2}$, and $\mathrm{ln}{\mu}_{\text{SI}}\gg \mathrm{ln}2$, we obtain the relation

$\begin{array}{c}\Delta {A}_{\text{eSI}}/\Delta {A}_{\text{e}2}=\left[\mathrm{ln}{\mu}_{\text{SI}}-\mathrm{ln}{\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)}_{\text{SI}}\right]/\left[\mathrm{ln}{\mu}_{2}-\mathrm{ln}{\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)}_{2}\right]\\ =\mathrm{ln}{\mu}_{\text{SI}}/\left[\mathrm{ln}2+\mathrm{ln}{\mu}_{\text{SI}}\right]\approx 1.\end{array}$ (3)
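The insensitivity claimed in Equation (3) is easy to check numerically. A minimal sketch (assuming *μ*_{SI} = 38265 from Equation (1) and an illustrative model with γ = *z*″ − *β*″ = 20 criteria; the variable names are ours), computing ΔA_{e} in units of *k*:

```python
import math

MU_SI = 38265  # total number of dimensionless criteria in SI, Equation (1)

def delta_a(mu, gamma):
    """Amount of information in a model, Equation (2), in units of the
    Boltzmann constant k (we divide out k and keep only the logarithm)."""
    return math.log(mu / gamma)

gamma = 20  # illustrative number of dimensionless criteria in a model
ratio = delta_a(MU_SI, gamma) / delta_a(2 * MU_SI, gamma)
print(round(ratio, 3))  # close to 1, as Equation (3) predicts
```

Doubling *μ*_{SI} shifts ΔA_{e} by only ln 2 ≈ 0.69 against ln *μ*_{SI} ≈ 10.6, which is why the ratio stays near unity.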

The physical content of Equation (2) is very important. For example, two research groups analyze the process of measuring a physical constant. The results are different from each other. Who presented the most respectable option? Obviously, the choice of the class of the phenomenon and the number of variables considered will affect the information content of the model and will cause a different amount of information contained in it [22]:

$\Delta {A}_{\text{b}1\gamma}=\left(\mathrm{ln}\left({\gamma}_{\text{CoP}}/{\gamma}_{1}\right)\right)/\mathrm{ln}2\text{\hspace{0.17em}}\left(\text{bits}\right)$ , (4)

$\Delta {A}_{\text{b}2\gamma}=\left(\mathrm{ln}\left({\gamma}_{\text{CoP}}/{\gamma}_{2}\right)\right)/\mathrm{ln}2\text{\hspace{0.17em}}\left(\text{bits}\right)$ , (5)

where ΔA_{b1γ} and ΔA_{b2γ} are the amounts of information in the models formulated by the first and second research teams, respectively, compared with the model that takes into account the optimal number of dimensionless criteria γ_{CoP} inherent to a particular CoP; γ_{1} and γ_{2} are the numbers of dimensionless criteria in the first and second models, respectively.

Let us suppose that ${\gamma}_{1}<{\gamma}_{\text{CoP}}<{\gamma}_{2}$ and $\left|{\gamma}_{1}-{\gamma}_{\text{CoP}}\right|<\left|{\gamma}_{\text{CoP}}-{\gamma}_{2}\right|$. By analyzing Equations (4) and (5), some readers may conclude that it is preferable to use a model with a large number of variables when modeling a physical process. However, this conclusion is wrong, and here is why. By comparing ΔA_{b1γ} and ΔA_{b2γ} in absolute terms, the researcher can “instantly” determine which one is smaller. A smaller value means that the number of dimensionless criteria considered is closer to the optimal one, γ_{CoP}, corresponding to the minimum comparative uncertainty [20] (for its detailed calculation, see below). Thus, a project with a lower absolute value of $\left|{\gamma}_{i}-{\gamma}_{\text{CoP}}\right|$ is more informative. Therefore, the information approach can significantly reduce the time researchers spend on the analysis of publications.
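A minimal sketch of this selection rule (γ_{CoP} = 471 is taken from Equation (12) below; the two competing criteria counts γ_{1} = 300 and γ_{2} = 900 are purely illustrative, not from any published dataset):

```python
import math

def delta_a_bits(gamma_cop, gamma):
    """Equations (4)-(5): information difference, in bits, between a model
    with gamma dimensionless criteria and the optimum gamma_cop."""
    return math.log(gamma_cop / gamma) / math.log(2)

GAMMA_COP = 471               # optimal count for CoP_SI = LMT(theta)I, Equation (12)
gamma_1, gamma_2 = 300, 900   # illustrative counts for two competing models

a1 = abs(delta_a_bits(GAMMA_COP, gamma_1))
a2 = abs(delta_a_bits(GAMMA_COP, gamma_2))

# The model with the smaller |Delta A| is closer to the optimum and hence
# preferable, even though the second model uses more variables.
preferred = 1 if a1 < a2 else 2
print(preferred)  # 1
```

Note that the second model loses despite its larger variable count: 900 criteria overshoot γ_{CoP} = 471 by more than 300 criteria undershoot it.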

It is also advisable to emphasize the importance of introducing the concept of “information content” of the model, ΔA, from the point of view of choosing a specific model of the measurement process.

First, information content can provide a natural explanation for the preferred choice of a particular measurement method. Until now, it was almost impossible to recommend that scientists focus their efforts on a specific method. However, with the introduction of the concept of information content (3)-(5), it is quite possible to state which of the models describing the same method of measuring a physical constant is most preferable.

Second, the information content may mean that some models for measuring a physical constant are less preferable. Specific examples and detailed explanations are presented in Section 3.

Third, the information content implies that many models are unsuitable for measuring a particular physical constant. In these models, the number of variables does not correspond to the recommended number inherent in the selected class of phenomena. The accuracy of the model, usually associated with the number of variables considered, is seen in a different light when implementing the information approach (Section 3).

Fourth, an accurate description of the experimental setup in terms of an information approach requires some knowledge of the future. We know very well that the experiment itself never allows the experimenter to look into the future, but if we try to interpret what is happening, some expectation of the future experiment seems necessary. We suspect that this approach may allow us to reflect a state where some hidden variables that can influence the result are not considered by the researcher’s conscious decision (Section 3).

2.3. Comparative Uncertainty

The amount of information contained in the model (Equation (2)) is only a sufficient condition for choosing the preferred option. In addition to this, we can formulate a necessary condition. Using Equation (2), we can get an expression for calculating the absolute uncertainty of the model Δ_{pmm} [16], due to the choice of CoP and the number of variables considered in the model:

${\Delta}_{\text{pmm}}/S=\left({z}^{\prime}-{\beta}^{\prime}\right)/{\mu}_{\text{SI}}+\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)/\left({z}^{\prime}-{\beta}^{\prime}\right)$ , (6)

where *S* is the interval in which the dimensionless quantity *u* is located, ${z}^{\prime}$ and ${\beta}^{\prime}$ are the total number of dimensional quantities and the number of base quantities in the CoP, respectively, and ε = Δ_{pmm}/*S* is the comparative uncertainty [20].

Four features of Equation (6), called the *µ*-rule, should be noted. First of all, this equation is applicable both to models with dimensional variables and with dimensionless variables, due to the following relations:

$\begin{array}{l}\Delta U/{S}^{*}=\left(\Delta U/a\right)/\left({S}^{*}/a\right)=\Delta u/S\\ r/R=\left(\Delta U/U\right)/\left(\Delta u/u\right)=\left(\Delta U/U\right)/\left[\left(\Delta U/a\right)/\left(U/a\right)\right]=1\end{array}$ (7)

where *S* and Δ*u* are the dimensionless quantities, respectively the range of variations and the total absolute uncertainty in determining the dimensionless quantity *u*; *S*^{*} and Δ*U* are the corresponding dimensional range of variations and total absolute uncertainty in determining the dimensional quantity *U*; *a* is a scale parameter with the same dimension as *U*; *r* and *R* are the relative uncertainties of the dimensional and dimensionless formulations, respectively.

Secondly, Equation (6) is a kind of correspondence principle for the model development process and can be related to the Heisenberg principle. During the measurement of a physical constant, the model must satisfy Equation (6). In other words, changing the level of detail of the description of the test bench by choosing the class of phenomena (${z}^{\prime}-{\beta}^{\prime}$) and the specific number of variables to be taken into account (${z}^{\prime\prime}-{\beta}^{\prime\prime}$) causes a change in the smallest value of the comparative uncertainty Δ_{pmm}/*S* of the main studied function (main variable). Thus, the correspondence principle uniquely determines the achievable accuracy limit (for a given class of phenomena), while simultaneously revealing a pair of quantities observed by a conscious researcher, in particular, the absolute uncertainty Δ_{pmm} in the measurement of the studied quantity and the interval of its change *S*.

Third, Equation (6) has the property of equivalence. This means that it is true for other measurement systems. Models formulated in other systems of units of measure, for example, in yards and pounds or centimeter-gram-second (CGS), will also have to comply with Equation (6) to maintain the basic relationships between physical variables. Equivalence ensures that physical models of reality remain consistent, regardless of units.

Fourth, the development of measuring equipment, improvements in the accuracy of measuring instruments, and the refinement of existing and newly created measurement methods together lead to an increase in knowledge about the object under study; therefore, the achievable relative uncertainty decreases. However, this process is not infinite: it is limited by Equation (6). The reader should keep in mind that this limit stems not from a deficiency of the measuring equipment or engineering device, but from the way the human brain works. In predicting the behavior of any physical process, physicists actually predict a tangible output of the instrumentation. Indeed, according to the *µ*-rule, observation is not a measurement but a process that creates a unique physical world in relation to each specific observer.

In addition, using Equation (6), one can find the necessary conditions for approaching the smallest relative uncertainty of each CoP, *r*_{CoP}, the fulfillment of which can *confirm the legitimacy of the declared measured value of the physical constant*. For this, it is necessary to take the derivative of Δ_{pmm}/*S* with respect to ${z}^{\prime}-{\beta}^{\prime}$ and equate it to zero:

$\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)={\left({z}^{\prime}-{\beta}^{\prime}\right)}^{2}/{\mu}_{\text{SI}}$ (8)
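The stationarity step can be written out explicitly (a short derivation, holding $z''-\beta''$ fixed while varying $x = z'-\beta'$):

```latex
% Equation (6) as a function of x = z' - \beta', with z'' - \beta'' held fixed:
\varepsilon(x) = \frac{\Delta_{\mathrm{pmm}}}{S}
              = \frac{x}{\mu_{\mathrm{SI}}} + \frac{z''-\beta''}{x},
\qquad
\frac{d\varepsilon}{dx} = \frac{1}{\mu_{\mathrm{SI}}} - \frac{z''-\beta''}{x^{2}} = 0
\;\Longrightarrow\;
z''-\beta'' = \frac{(z'-\beta')^{2}}{\mu_{\mathrm{SI}}}.
```

Substituting this optimum back into Equation (6) gives $\varepsilon_{\mathrm{min}} = 2\left(z'-\beta'\right)/\mu_{\text{SI}}$, since both terms become equal at the stationary point.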

For example, for the thermal–electromechanical process (CoP_{SI} ≡ *LMTθI*), which is used in measuring the Boltzmann constant, the following statement must be considered. The dimension of any derived quantity *q* can be expressed as a unique combination of the dimensions of the base quantities raised to various powers [23]:

$q\ni {L}^{l}\cdot {M}^{m}\cdot {T}^{t}\cdot {I}^{i}\cdot {\Theta}^{\theta}\cdot {J}^{j}\cdot {F}^{f}$ , (9)

$\begin{array}{l}-3\le l\le +3,\text{\hspace{0.17em}}-1\le m\le +1,\text{\hspace{0.17em}}-4\le t\le +4,\text{\hspace{0.17em}}-2\le i\le +2,\\ -4\le \theta \le +4,\text{\hspace{0.17em}}-1\le j\le +1,\text{\hspace{0.17em}}-1\le f\le +1.\end{array}$ (10)

${\left({z}^{\prime}-{\beta}^{\prime}\right)}_{LMT\theta I}=\left({e}_{l}\cdot {e}_{m}\cdot {e}_{t}\cdot {e}_{\theta}\cdot {e}_{i}-1\right)/2-5=4247,$ (11)

${\gamma}_{LMT\theta I}={\left({z}^{\prime\prime}-{\beta}^{\prime\prime}\right)}_{LMT\theta I}={\left({z}^{\prime}-{\beta}^{\prime}\right)}_{LMT\theta I}^{2}/{\mu}_{\text{SI}}={4247}^{2}/38265\approx 471,$ (12)

where $l,m,\cdots ,f$ are the exponents of the base quantities, which take only integer values varying within the intervals of Equation (10); ${e}_{l},{e}_{m},{e}_{t},{e}_{\theta},{e}_{i}$ are the numbers of integer values available to each exponent (for example, ${e}_{l}=7$ for $-3\le l\le +3$); γ*_{LMTθI}* is the optimal number of criteria in a model inherent in CoP_{SI} ≡ *LMTθI*.

Then, one can calculate the minimum achievable comparative uncertainty ε_{LMTθI}:

${\epsilon}_{LMT\theta I}={\left(\Delta u/S\right)}_{LMT\theta I}=4247/38265+471/4247=0.222.$ (13)

Using calculations similar to (10)-(13), it is possible to calculate achievable comparative uncertainties ε_{CoP} and the recommended number of quantities γ_{CoP} corresponding to different classes of phenomena (Table 1).
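The arithmetic of Equations (11)-(13) can be reproduced directly. In the sketch below, the counts e_l, ..., e_f follow from the intervals in Equation (10) (e.g., e_l = 7 for −3 ≤ l ≤ +3); applying the same counting rule of Equation (11) to all seven base quantities also recovers *μ*_{SI} from Equation (1):

```python
# Counts of integer values per exponent, from the intervals in Equation (10)
e = {"l": 7, "m": 3, "t": 9, "i": 5, "theta": 9, "j": 3, "f": 3}

# Equation (1): all seven base quantities (the 7 base quantities themselves
# are subtracted from the count of dimensionless combinations)
total = 1
for v in e.values():
    total *= v
mu_si = (total - 1) // 2 - 7
print(mu_si)  # 38265

# Equation (11): CoP_SI = LMT(theta)I uses five base quantities
prod = e["l"] * e["m"] * e["t"] * e["theta"] * e["i"]
z_minus_beta = (prod - 1) // 2 - 5
print(z_minus_beta)  # 4247

# Equation (12): optimal number of dimensionless criteria for this CoP
gamma = z_minus_beta ** 2 // mu_si
print(gamma)  # 471

# Equation (13): minimum achievable comparative uncertainty
eps = z_minus_beta / mu_si + gamma / z_minus_beta
print(round(eps, 3))  # 0.222
```

Note that at the optimum the two terms of Equation (6) are nearly equal (≈ 0.111 each), as the stationarity condition of Equation (8) requires.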

Thus, the information-based approach provides the opportunity to calculate *r*_{CoP} by two methodologies.

2.4. Two μ-Rule Methodologies

The first, dictated by the *μ*-rule, is *to analyze the data on the currently achievable relative uncertainty*, considering the *latest* measurement results. In this case, the possible interval of placement of the physical constant, *S*, is selected as the difference between its maximum and minimum values measured by various scientific groups over a certain period of time. Thus, using the achievable comparative uncertainty inherent in the selected class of phenomena when measuring the physical constant, we can calculate the recommended minimum relative uncertainty, which is compared with the relative uncertainty of each published study. Moreover, the apparent randomness of the choice of the interval value *S*, depending on the dataset, does not ultimately affect the final result: an extended range of variation of the value of *S* only indicates the imperfection of the measuring instruments, which leads to a significant increase in relative uncertainty. This can be illustrated by Equation (14), which indicates that the value of the relative uncertainty is finite and not equal to zero.

Table 1. Comparative uncertainties and recommended number of dimensionless criteria.

$\begin{array}{c}{r}_{1}/{r}_{2}=\left({\Delta}_{1}/{A}_{1}\right)/\left({\Delta}_{2}/{A}_{2}\right)=\left(\left({\epsilon}_{1}\cdot {S}_{1}\right)/{A}_{1}\right)/\left(\left({\epsilon}_{2}\cdot {S}_{2}\right)/{A}_{2}\right)\\ =\left({\epsilon}_{1}\cdot {S}_{1}\cdot {A}_{2}\right)/\left({\epsilon}_{2}\cdot {S}_{2}\cdot {A}_{1}\right)\\ =\left({\left({z}^{\prime}-{\beta}^{\prime}\right)}_{1}/\mu +{\gamma}_{\text{CoP}}/{\left({z}^{\prime}-{\beta}^{\prime}\right)}_{1}\right)\cdot {S}_{1}\cdot {A}_{2}/\left(\left({\left({z}^{\prime}-{\beta}^{\prime}\right)}_{2}/\mu +{\gamma}_{\text{CoP}}/{\left({z}^{\prime}-{\beta}^{\prime}\right)}_{2}\right)\cdot {S}_{2}\cdot {A}_{1}\right)\\ \equiv \text{finite value}\end{array}$ (14)

where *r*_{1}, *r*_{2}, Δ_{1}, Δ_{2}, ε_{1}, ε_{2}, *S*_{1}, *S*_{2}, A_{1}, A_{2} are the relative, absolute, and comparative uncertainties, intervals of placement of the physical constant, and magnitudes of physical constants, respectively; index 1 corresponds to a larger interval, index 2 corresponds to a shorter interval, *S*_{2} < *S*_{1}.

Assuming that ${\left({z}^{\prime}-{\beta}^{\prime}\right)}_{1}={\left({z}^{\prime}-{\beta}^{\prime}\right)}_{2}$ and ${\gamma}_{\text{CoP}}\cdot \mu \gg 1$ (see Table 1 and Equation (1)), we obtain

${r}_{1}/{r}_{2}\approx {S}_{1}\cdot {A}_{2}/\left({S}_{2}\cdot {A}_{1}\right)$ (15)

Equation (15) indicates that the ratio *r*_{1}/*r*_{2} tends neither to infinity nor to zero; its value is finite and reflects the increase in instrument accuracy when measuring a physical constant. An important advantage of this approach is its independence from the real instability of the experimental results.
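As a numerical sketch of Equations (14) and (15), the ratio *r*_{1}/*r*_{2} can be evaluated directly; the values of *μ*, γ_{CoP}, (*z*′ − *β*′), and the intervals below are illustrative placeholders, not data from any particular experiment.

```python
def relative_uncertainty_ratio(zb1, zb2, s1, s2, a1, a2, mu, gamma_cop):
    """Ratio r1/r2 from Equation (14), with eps_i = zb_i/mu + gamma_CoP/zb_i."""
    eps1 = zb1 / mu + gamma_cop / zb1
    eps2 = zb2 / mu + gamma_cop / zb2
    return (eps1 * s1 * a2) / (eps2 * s2 * a1)

# Equal (z' - beta') makes the eps factors cancel, reproducing Equation (15):
# r1/r2 ~ (S1 * A2) / (S2 * A1), a finite value.
ratio = relative_uncertainty_ratio(2546, 2546, s1=2.4e-29, s2=1.2e-29,
                                   a1=1.38e-23, a2=1.38e-23,
                                   mu=38265, gamma_cop=169.4)
print(ratio)  # -> 2.0: doubling the interval doubles the relative uncertainty
```

The sketch confirms the point of Equation (15): the ratio stays finite and is governed by the interval widths, not by the consistency of the individual results.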

Although the goal of our work is to obtain the main restriction on the accuracy of measuring physical constants, we may also ask whether this limit can be reached in a correctly formulated physical model. Because our estimate is obtained by optimization with respect to the achieved comparative uncertainty and the observation interval, it is clear that in practice the limit cannot be reached. This is because there is an *inevitable primordial uncertainty* of the model, which depends on the preferences of the researcher and rests on his or her intuition, knowledge, and experience. The magnitude of this uncertainty indicates how likely it is that the researcher's personal philosophical inclinations will influence the outcome of this process. When a person mentally builds a model, at each stage of its construction there is some probability that the model will not correspond to the phenomenon with a high degree of accuracy.

In what follows, this method is denoted as *IARU* and is represented by the below-mentioned procedure [24].

1) From the published data of each experiment, the value *α*, the relative uncertainty *r _{α}*, and the standard uncertainty *u _{α}* are extracted;

2) The experimental absolute uncertainty Δ* _{α}* is calculated by multiplying the physical constant value *α* by its relative uncertainty *r _{α}*, ${\Delta}_{\alpha}=\alpha \cdot {r}_{\alpha}$ ;

3) The maximum *α*_{max} and minimum *α*_{min} values of the measured physical constant are selected from the list of measured values *α*_{i} of the physical constant mentioned in different studies;

4) As a possible interval for placing the observed constant *S _{α}*, the difference between the maximum and minimum values is calculated,
${S}_{\alpha}={\alpha}_{\text{max}}-{\alpha}_{\text{min}}$ ;

5) The selected comparative uncertainty ε_{CoP} (Table 1) inherent in the model describing the measurement of the constant is multiplied by the possible interval of placement of the observed constant *S _{α}* to obtain the absolute uncertainty value ${\Delta}_{IARU}={\epsilon}_{\text{CoP}}\cdot {S}_{\alpha}$ ;

6) To calculate the relative uncertainty *r _{IARU}* in accordance with the IARU, this absolute uncertainty Δ_{IARU} is divided by the arithmetic mean of the maximum and minimum measured values, ${r}_{IARU}={\Delta}_{IARU}/\left(\left({\alpha}_{\text{max}}+{\alpha}_{\text{min}}\right)/2\right)$ ;

7) The relative uncertainty obtained, *r _{IARU}*, is compared with the experimental relative uncertainties *r _{αi}* declared in the published studies;

8) According to *IARU*, a comparative experimental uncertainty of each study ε_{IARUi} is calculated by dividing the experimental absolute uncertainty of each study Δ* _{αi}* by the difference between the maximum and minimum values of the measured constant, ${\epsilon}_{IARUi}={\Delta}_{\alpha i}/{S}_{\alpha}$ .
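The eight steps above can be collected into a short routine; this is only a sketch, the function name `iaru` and its arguments are our own, and `eps_cop` must be taken from Table 1 for the class of phenomena of the chosen model.

```python
def iaru(values, rel_uncertainties, eps_cop):
    """Sketch of the IARU procedure, steps 1-8.

    values            -- published values alpha_i of the constant (step 1)
    rel_uncertainties -- published relative uncertainties r_i (step 1)
    eps_cop           -- comparative uncertainty of the chosen CoP (Table 1)
    """
    # Step 2: experimental absolute uncertainties
    abs_u = [a * r for a, r in zip(values, rel_uncertainties)]
    # Steps 3-4: placement interval S = alpha_max - alpha_min
    s = max(values) - min(values)
    # Step 5: absolute uncertainty recommended by the chosen CoP
    delta = eps_cop * s
    # Step 6: recommended relative uncertainty (mean of the extreme values)
    r_iaru = delta / ((max(values) + min(values)) / 2)
    # Step 8: experimental comparative uncertainty of each study
    eps_exp = [d / s for d in abs_u]
    # Step 7 is the comparison of r_iaru with each published r_i.
    return r_iaru, eps_exp
```

For example, with the illustrative inputs `iaru([10.0, 12.0], [0.01, 0.01], 0.1)`, the routine returns *r _{IARU}* ≈ 0.018 together with the experimental comparative uncertainties [0.05, 0.06].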

As follows from the presented step-by-step procedure, the results do not depend on the complex, difficult-to-fulfill requirements inherent in statistical-expert methods (SEM), such as the CODATA method [6]. Moreover, the physical meaning of *IARU* is to assess the suitability of a method for measuring a specific physical constant. *IARU can also be used to compare the measurement accuracy achieved with various methods for different constants* (Section 3.2).

In the second technique, *S* *is determined by the limits of the measuring instruments used* [20] in *each* particular experiment. This choice is supported by the fact that in experimental physics, unlike other areas of technology (for example, the study of heat and mass transfer processes in refrigeration equipment [22]), researchers present measurement data with an obligatory indication of the *standard* uncertainty. At the same time, this uncertainty of a particular measurement is clearly subjective, because the observer is simply not able to consider all sources of uncertainty: the standard uncertainty is calculated from the uncertainties *observed by the experimenters*.

Then, the ratio between the absolute uncertainty achieved in the experiment and *the standard uncertainty, which acts as a possible interval for the placement of the physical constant*, is calculated. Thus, in the framework of the information approach, the comparative uncertainties achieved in the studies are calculated, which, in turn, are compared with the theoretically achievable comparative uncertainty inherent in the chosen class of phenomena. This method is hereinafter referred to as* IACU* and includes the following steps:

1) From the published data of each experiment, the value *α*, the relative uncertainty *r _{α}*, and the standard uncertainty *u _{α}* are extracted;

2) The experimental absolute uncertainty Δ* _{α}* is calculated by multiplying the physical constant value *α* by its relative uncertainty *r _{α}*, ${\Delta}_{\alpha}=\alpha \cdot {r}_{\alpha}$ ;

3) The achieved experimental comparative uncertainty of each published study ε_{IACUi} is calculated by dividing the experimental absolute uncertainty Δ* _{α}* by the standard uncertainty *u _{α}*, ${\epsilon}_{IACUi}={\Delta}_{\alpha}/{u}_{\alpha}$ ;

4) The experimental comparative uncertainty of each published study ε_{IACUi} is compared with the comparative uncertainty ε_{CoP} inherent in the model (Table 1) that describes the measurement of the physical constant.
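These four steps admit an equally compact sketch; as before, the name `iacu` and the argument layout are our own, and `eps_cop` comes from Table 1.

```python
def iacu(values, rel_uncertainties, std_uncertainties, eps_cop):
    """Sketch of the IACU procedure, steps 1-4.

    Returns, for each study, the experimental comparative uncertainty
    eps_IACUi = Delta_i / u_i and its ratio to the theoretical eps_cop.
    """
    result = []
    for a, r, u in zip(values, rel_uncertainties, std_uncertainties):
        delta = a * r                # step 2: absolute uncertainty
        eps_i = delta / u            # step 3: comparative uncertainty
        result.append((eps_i, eps_i / eps_cop))  # step 4: comparison
    return result
```

For instance, the illustrative call `iacu([10.0], [0.01], [0.5], 0.1)` yields `[(0.2, 2.0)]`: the study's comparative uncertainty is twice the theoretical value, hinting at uncertainty sources that were not identified.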

It should be noted that this methodology also does not require consistent experimental results. From the point of view of its physical content, *the IACU reflects the situation, how thoroughly all possible sources of uncertainties for a certain class of phenomena were identified and considered in calculations using different methods of measuring a specific physical constant* (Section 3.2).

In the next section, we present the results of applying the information approach to the analysis of measurement data for various physical constants obtained by different methods. Only those publications are considered that contain data on the value of a physical constant and its relative and standard uncertainties.

3. Results Obtained Using the Information Approach

3.1. Boltzmann Constant

As an example of the step-by-step application of the information approach, we consider the results of measuring the Boltzmann constant using the acoustic gas thermometer method (CoP_{SI} ≡ *LMTθF*). One of the many datasets can be found in [25]; it consists of measurements taken in seven laboratories (Table 2) from 2009 to 2017.

We will apply *IARU* and *IACU* to calculate the estimated observation interval of *k*, *S _{k}*, according to

${S}_{k}={k}_{\mathrm{max}}-{k}_{\mathrm{min}}=2.4\times {10}^{-29}{\text{m}}^{2}\cdot \text{kg}/\left({\text{s}}^{2}\cdot \text{K}\right).$ (16)

One can calculate the comparative uncertainty ε_{LMTθF} and the lowest relative uncertainty *r _{LMTθF}* taking into account Equations (1), (6), (8), (10), and (16):

$\begin{array}{c}{\left({z}^{\prime}-{\beta}^{\prime}\right)}_{LMT\theta F}=\left({e}_{l}\cdot {e}_{m}\cdot {e}_{t}\cdot {e}_{\theta}\cdot {e}_{f}-1\right)/2-5\\ =\left(7\times 3\times 9\times 9\times 3-1\right)/2-5=2546,\end{array}$ (17)

${\gamma}_{LMT\theta F}={\left({z}^{\u2033}-{\beta}^{\u2033}\right)}_{LMT\theta F}={\left({z}^{\prime}-{\beta}^{\prime}\right)}^{2}/{\mu}_{\text{SI}}={2546}^{2}/38265\approx 169,$ (18)

${\epsilon}_{LMT\theta F}={\left(\Delta /S\right)}_{LMT\theta F}=2546/38265+169.4/2546=0.1331.$ (19)

${\Delta}_{LMT\theta F}={\epsilon}_{LMT\theta F}\cdot {S}_{k}=0.1331\times 2.4\times {10}^{-29}=3.2\times {10}^{-30}{\text{m}}^{2}\cdot \text{kg}/\left({\text{s}}^{2}\cdot \text{K}\right).$ (20)

$\begin{array}{c}{r}_{LMT\theta F}={\Delta}_{LMT\theta F}/\left(\left({k}_{\mathrm{max}}+{k}_{\mathrm{min}}\right)/2\right)\\ =3.2\times {10}^{-30}/1.38064961\times {10}^{-23}\\ =2.3\times {10}^{-7}.\end{array}$ (21)

where “−1” corresponds to the case where all the exponents of the base quantities in Equation (9) are zero; “−5” corresponds to the five base quantities *L*, *M*, *T*, *θ*, and *F*; Δ* _{LMTθF}* is the absolute uncertainty.
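The arithmetic of Equations (16)-(21) can be checked in a few lines; the numbers are exactly those quoted above, and only the variable names are ours.

```python
mu_si = 38265                    # mu_SI, total number of SI dimensionless criteria
e_l, e_m, e_t, e_theta, e_f = 7, 3, 9, 9, 3   # exponent counts in Eq. (17)

z_beta = (e_l * e_m * e_t * e_theta * e_f - 1) // 2 - 5   # Eq. (17) -> 2546
gamma = z_beta**2 / mu_si                                 # Eq. (18) -> ~169.4
eps = z_beta / mu_si + gamma / z_beta                     # Eq. (19) -> ~0.1331

s_k = 2.4e-29                    # Eq. (16), placement interval, m^2*kg/(s^2*K)
delta = eps * s_k                # Eq. (20) -> ~3.2e-30
r = delta / 1.38064961e-23       # Eq. (21), mean of k_max and k_min -> ~2.3e-7

print(z_beta, round(eps, 4), f"{r:.1e}")  # 2546 0.1331 2.3e-07
```

This reproduces *r _{LMTθF}* = 2.3 × 10^{−7} from the published interval alone, without any statistical processing of the individual results.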

The value of *r _{LMTθF}* = 2.3 × 10^{−7} is close to the smallest experimental relative uncertainty achieved (3.7 × 10^{−7} [13]) and is consistent with the *μ*-rule, according to which the experimentally achieved relative uncertainty is always greater than that calculated by the information approach (Section 2.4).

Table 2. The Boltzmann constant and achieved relative and comparative uncertainties using an acoustic gas thermometer.

*Data are introduced in [6] [13] [33].

Furthermore, the data introduced in Table 2 allow the following conclusions to be formulated:

1) Although the authors of publications declared that they considered all the possible sources of uncertainty, the values of absolute and relative uncertainties can still differ by more than a factor of two. A similar situation exists in the spread of the values of comparative uncertainties (*IARU*). This reflects the existence of hidden uncertainties that have eluded the attention of researchers.

2) The results of applying *IACU* indicate relative agreement among the experimental comparative uncertainties, yet a significant discrepancy (by a factor of 3 - 4) from the recommended value (0.1331). This situation is explained by the fact that, on the one hand, research teams learn from each other in searching for and eliminating undetected or unaccounted-for uncertainties, thereby ensuring the relative uniformity of the experimental comparative uncertainty. On the other hand, it should be considered that the acoustic gas thermometer method is based on the concept of an ideal gas, although the interaction between gas particles is not well understood. An additional difficulty is associated with measuring the molar concentration of gas per unit volume, and the volume itself, with a comparable degree of accuracy. It should also be noted that the total volume includes the volume of the connecting pipes to the pressure gauges. Therefore, there may be significant unaccounted-for uncertainties due both to the formulation of the experimental model and to the achievable accuracy of the values considered in the calculation. Moreover, the proximity of the acoustic mode to the shell resonance leads to an unacceptably large perturbation of the data for this mode. In addition, experimenters consider a much smaller number of variables than recommended (see Table 1). These reasons lead to a large difference between the theoretically calculated comparative uncertainty and the experimental values of the comparative uncertainties achieved in measuring *k*.

3.2. Summarized Data

Because the step-by-step procedure for applying the information approach was described in detail in Section 3.1, generalized information on the data sets of measurements of the Planck constant, Boltzmann constant, Hubble constant, and gravitational constant is presented below (Table 3).

Looking closer at the data, we can make the following comments.

1) In measuring the Planck constant, *h*, when moving from a model (*LMTF*) to a CoP_{SI} with a larger number of dimensionless criteria (*LMTI*), the comparative uncertainty increases. This change is due to the potential effects of interaction between an increased number of variables that may or may not be considered by the researcher. At the same time, the *r*_{exp}/*r*_{SI} ratio for CoP_{SI} ≡ *LMTI* is much smaller, which indicates the advantage of the Kibble balance method for measuring the Planck constant. This is also confirmed by the significant difference in the value of the comparative uncertainty for CoP_{SI} ≡ *LMTF* (XRCD, 0.0146) compared with CoP_{SI} ≡ *LMTI* (KB, 0.0245) at almost equal achieved experimental relative uncertainties (1.3 × 10^{−8} ≈ 1.2 × 10^{−8}).

Table 3. Summary data on the measurement of physical constants by various methods.

^{1}KB—Kibble balance. Data include results of measurements taken in seven laboratories from 2014 to 2017. ^{2}XRCD—X-ray crystal density. Data include results of measurements taken in seven laboratories from 2011 to 2018. ^{3}AGT—acoustic gas thermometer. Data include results of measurements taken in seven laboratories from 2009 to 2017. ^{4}DCGT—dielectric constant gas thermometer. Data include results of measurements taken in six laboratories from 2012 to 2018. ^{5}JNT—Johnson noise thermometer. Data include results of measurements taken in six laboratories from 2011 to 2017. ^{6}DBT—Doppler broadening thermometer. Data include results of measurements taken in six laboratories from 2007 to 2015. ^{7}BDL—brightness of distance ladder. Data include results of measurements taken in seven laboratories from 2011 to 2019. ^{8}CMB—cosmic microwave background. Data include results of measurements taken in six laboratories from 2009 to 2018. ^{9}BAO—baryonic acoustic oscillations. Data include results of measurements taken in four laboratories from 2014 to 2018. ^{10}Data include results of measurements taken in seven laboratories from 2000 to 2014. ^{11}Data include results of measurements taken in five laboratories from 2001 to 2018.

As stated in [46], the implementation of the measurement of *h* using the Kibble balance or XRCD methods made it possible to achieve a consistent, reliable value in the latest results. In addition, the calculated relative uncertainty does not exceed the uncertainty due to the current realizations of primary and secondary units of mass. However, given the *r*_{exp}/*r*_{SI} ratio (*LMTI*: 2.9; *LMTF*: 9.1), there is an urgent need to reduce the influence of sources of uncertainty in XRCD.

It should be noted that, in the framework of the information approach, the statement that the Planck constant is now “an exact number with zero uncertainty...” [47] is unacceptable, because the relative uncertainty of the measurement of the Planck constant always varies depending on the CoP inherent in the selected model.

2) The data from Table 3 clearly show that the minimum achievable relative uncertainties, *r*_{SI}, calculated in accordance with the information approach, differ by two orders of magnitude for different methods of measuring the Boltzmann constant *k*! That is why, in the framework of the information approach, in contrast to the concept approved by CODATA,* it is not recommended to determine and declare only one value of relative uncertainty when measuring the Boltzmann constant (and other constants) by various methods*.

Using the information-oriented approach, both an established scientist and a practicing engineer can easily identify the advantages or disadvantages of a particular measurement method. Thus, analyzing the data of Table 3, it is obvious that the greatest success in achieving high accuracy in measuring *k* in recent years was achieved using JNT and DBT, which show the smallest values of the ratio *r*_{exp}/*r*_{SI} (1.9 and 1.1, respectively). At the same time, the least experimental relative uncertainty achieved, 3.7 × 10^{−7}, realized using DCGT, is doubtful. The *μ*-rule requires the theoretically calculated relative uncertainty to be less than the experimental one, whereas here the calculated value (4.3 × 10^{−7}) exceeds the declared experimental value (3.7 × 10^{−7}). Therefore, the researchers of [13] [38] should reanalyze all possible sources of uncertainty.

3) From Table 3, it is obvious that in measuring *H*_{0} using BDL and BAO (CoP_{SI} ≡ *LMT*), the experimental relative uncertainties (0.01 [41] and 0.01 [43]) are many times greater than the values recommended according to *IARU*, 0.00023 and 0.00018, respectively. This situation indicates that hidden variables are not considered, and CoP_{SI} ≡ *LMT* should not be used in the future. Therefore, the scientists' conviction that all possible sources of uncertainty have been accounted for is far from guaranteeing that these two methods reach the true value of *Н*_{0}.

Following the logic of the information approach, it must again be recognized that the method of measuring *H*_{0} using the cosmic microwave background is the most promising and theoretically justified, and yields the most reliable experimental data. This conclusion can be confirmed by calculating the ratio ε_{SI}/*r*_{exp} using the data in Table 3:

$\begin{array}{l}{\left({\epsilon}_{\text{SI}}/{r}_{\text{exp}}\right)}_{\text{BAO}}=0.0048/0.01=0.48,\\ {\left({\epsilon}_{\text{SI}}/{r}_{\text{exp}}\right)}_{\text{BDL}}=0.0048/0.01=0.48,\\ {\left({\epsilon}_{\text{SI}}/{r}_{\text{exp}}\right)}_{\text{CMB}}=0.0442/0.007=6.3,\\ {\left({\epsilon}_{\text{SI}}/{r}_{\text{exp}}\right)}_{\text{BAO}}={\left({\epsilon}_{\text{SI}}/{r}_{\text{exp}}\right)}_{\text{BDL}}<{\left({\epsilon}_{\text{SI}}/{r}_{\text{exp}}\right)}_{\text{CMB}}.\end{array}$ (22)

Relation (22) reflects the fact that the best accuracy in measuring the Hubble constant can be achieved for the class of phenomena with a large number of base quantities.
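The ordering asserted in Relation (22) can be checked directly from the quoted values:

```python
# eps_SI / r_exp for each method, using the values quoted from Table 3
ratios = {
    "BDL": 0.0048 / 0.01,
    "BAO": 0.0048 / 0.01,
    "CMB": 0.0442 / 0.007,
}
# BDL and BAO coincide and fall well below CMB, as Equation (22) states
assert ratios["BDL"] == ratios["BAO"] < ratios["CMB"]
```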

The data of Table 3 show that the experimental minimum relative uncertainties *r*_{exp} exceed the recommended *r*_{SI} by 43 and 56 times for BDL and BAO, whereas when measuring *H*_{0} with CMB, *r*_{exp}/*r*_{SI} = 2.4. Because consistency is one of the basic requirements for analyzing results, this situation needs to be explained. The information approach states that the inevitable primordial absolute uncertainty of the model already exists at the stage of developing a method for measuring a physical constant. That is why great caution should be exercised when predicting further increases in the accuracy of the Hubble constant. Most astronomers assume that, with an increase in the number of observed space objects, the various methods of calculating *H*_{0} approach absolute (ideal) statistical stability of the observed parameters and characteristics of physical phenomena (real events, processes, and fields). However, as was shown in [48], the non-ideal character of statistical stability (statistical predictability), which manifests itself in the absence of convergence of statistical estimates, plays a key role in limiting accuracy. At small temporal, spatial, or spatiotemporal intervals of observation, an increase in the amount of statistical data leads to a decrease in the level of fluctuations in statistical estimates, which creates the illusion of ideal statistical stability. However, starting with a certain critical amount of data, the decrease in the level of fluctuations stops. A further increase in the amount of data either has practically no effect on the level of fluctuations in the estimates, or even leads to their growth.

4) The huge difference between the experimental relative uncertainty achieved in measuring the gravitational constant by mechanical methods and the theoretically recommended one (*r*_{exp}/*r*_{SI} = 12.7) confirms the thesis of the information approach about the inappropriateness of using such methods to determine the true value of the gravitational constant. At the same time, a higher measurement accuracy of *G* was achieved using electromechanical methods: *r*_{exp}/*r*_{SI} = 1.9. From the point of view of the information approach, further refinement of the true value of the gravitational constant and a decrease in the experimental relative uncertainty are possible when using models and measurement methods with a larger number of base quantities, for example, CoP_{SI} ≡ *LMTθI*.

In addition to the comments made in 1) - 4), one can make the following comments and conclusions, which are sometimes not obvious and do not coincide with the provisions of the generally accepted CODATA methodology.

a) The values of the minimum attainable comparative and relative uncertainties calculated according to the information approach depend on the choice of the class of phenomena, and theory can predict their values. It is important to note that in the transition from the mechanistic model (*LMT*) to a CoP_{SI} with a larger number of base quantities, the uncertainty increases. This is explained by a change in the number of potential interaction effects between an increased number of quantities that may or may not be considered by the researcher.

b) Large differences can be seen in the level of consistency between ε_{SI} and ε_{exp} calculated according to *IACU*. This level can be called a “coefficient of consistency” for a physical constant measured by various methods. In particular, when measuring *Н*_{0}, the ratio ε_{exp}/ε_{SI} is 710 (BDL) and 104 (BAO), while using CMB this ratio is only 4.1. A similar situation exists for measuring the gravitational constant: ε_{exp}/ε_{SI} = 100 when implementing mechanical methods, and ε_{exp}/ε_{SI} = 7.9 using electromechanical methods. At the same time, when measuring the Planck constant with KB and XRCD, and when using AGT and DCGT to calculate the Boltzmann constant, the values of the ε_{exp}/ε_{SI} ratios are very close to each other. Within the information approach, this situation indicates that BDL, BAO, and the mechanical methods for *G* have limited use; it can even be argued that they are not recommended for use. Moreover, using simple ratios calculated in accordance with a theoretically sound approach, one can draw far-reaching conclusions. It is important to emphasize once again that, using the *IACU*, researchers can find out for which method of measuring a physical constant the search for all possible sources of uncertainty must continue. Thus, *the ratio ε _{exp}/ε_{SI} is an objective criterion for assessing the achieved accuracy when comparing different methods of measuring one specific physical constant.*

c) The introduction of comparative uncertainty, through the *IARU*, to evaluate the accuracy of measurements of physical constants allows the calculation of the *r*_{exp}/*r*_{SI} ratio. From the data in Table 3, an obvious trend emerges: models of measurements of physical constants with a small number of base quantities (*LMT* and *LMTF*) have clearly overestimated values of this ratio: 9.1, 12.7, 44, and 56. This is due to insufficient consideration of the effect of unaccounted-for base quantities and possible relationships between variables in calculating the value of the physical constant. At the same time, for models with a large number of base quantities, for example, *LMTI* or *LMTθF*, the *r*_{exp}/*r*_{SI} ratio varies from 0.9 to 2.9. Thus, in the framework of the information approach, we can consider the *r*_{exp}/*r*_{SI} ratio as a *universal indicator of the achievements of scientists in measuring any physical constant using a variety of methods*.

4. Discussion

By combining information theory, rigorously developed and equipped with an excellent mathematical apparatus, with a carefully selected and verified database of experimental physics, it became possible to calculate the accuracy limit for measuring physical constants. This approach is realized without any statistical methods, weighting coefficients, or consistency criteria.

Being unsatisfied with the statistical evaluation of measurements of physical constants, the author looked for an approach in which mathematical and logical difficulties are resolved by simple definitions and calculations that are easy to understand. The author suspects that the information approach may also shed new light on old difficulties. It is generally taken for granted that if there is already a method that has been tested and accepted by the scientific community, there is no need to look for anything better. Yet it is worth looking for descriptions in which the situation is simpler; such an approach need not be much more complicated than SEM.

One of the key concepts of the information approach is the application of the concept of complexity using the theory of information to the International System of Units, which is the result of the intellectual activity of scientists and does not exist in nature. We use the concept of complexity to measure the amount of information contained in the measurement model of a specific physical variable, and then use SI with seven base quantities to classify the classes of phenomena inherent in a particular measurement method. The proposed informational approach has the advantage that it takes into account both the physical nature of the experiment (a qualitative set of base quantities) and information content due to the specific number of variables taken into account in the model. In addition, the proposed measure of the proximity of the model to a real object (comparative uncertainty) can be used for any data set without requiring consistent results.

Comparative uncertainty remains applicable where traditional statistical methods used to process data sets of physical measurement results fail. Compared with the CODATA technique, the information approach has two obvious additional advantages. The first is that the information approach has the property of predictability (studying the extent to which events can be predicted [49]). Today, CODATA uses LSA as the most preferred measure of predictability in a dataset for measuring physical constants. However, the calculations performed using the LSA (the standard uncertainty of the predictive model) depend on the data set in which the results are presented [50], whereas the information approach can handle even conflicting results. Secondly, the information approach has the property of transparency [51], which is a key requirement for any information system, including the process of modeling the measurement act. Any method for calculating the accuracy of a model, with the necessary calculations, should be accessible to engineers and scientists. The presented procedure for implementing the information approach using two methodologies is easy to grasp and is readily implemented by a sufficiently qualified user.

The author notes that it is likely, at least philosophically, more acceptable that the value of the relative uncertainty of the measurement of a physical constant is clearly defined by a theoretically proven and simply implemented information method, as opposed to a statistical and expert assessment. It would be premature to argue that this contradicts, in everyday life, Occam's principle (entities should not be introduced except when strictly necessary [52]); in theory, it is very difficult to avoid losing valuable information when describing the results of measurements of a physical constant by statistical methods, and it is hard to see how such methods relate to the real world. Unfortunately, statistics resemble expert witnesses in court: they will testify in favor of either side. Supporters of SEM must solve one difficulty: the creation of a “correct” distribution of results. Currently, many SEMs have been proposed. Can the CODATA method be considered “true and impeccable” while refusing to consider an alternative? Of course, it would be wrong to deny that the CODATA method allowed the implementation of the new SI structure; it has at least as many parameters as necessary to determine the values of several fundamental physical constants. At the same time, the search for truth leaves room, on the one hand, for criticism, and on the other hand, for revealing new approaches.

From the above, it follows that comparative uncertainty is inherent in any data set used to analyze measurements of physical constants, which is an additional justification for clarifying standard practice. This uncertainty is always present and cannot be eliminated by standard data analysis, so measurements of physical constants may otherwise be misinterpreted in future, more accurate experiments.

When considering the mathematical modeling of the process of measuring a physical constant, the question is whether physics should obey mathematical SEM or adhere more closely to observations and data [53]. According to the author, the information approach, having a deep physical content, in particular *IARU*, allows us to calculate with high accuracy the relative uncertainty, which is in good agreement with the CODATA recommendations but *is obtained in a much shorter time*. The fundamental difference between the proposed method and the existing CODATA statistical-expert methodology (in fact, all statistical methods are unreliable, some more and some less [54]) is that the information approach is theoretically justified without using any assumptions. It does not include such concepts as a statistically significant trend, aggregate consensus values, or statistical control, which are characteristic of the statistical-expert tool adopted in CODATA. We sought to show how the mathematical and apparently rather arbitrary expert formalism can be replaced by a simple, theoretically substantiated postulate about the use of information in measurements.

Thus, it turns out that the problem that researchers face in the process of calculating relative uncertainty, which allows us to confirm the true value of a physical constant, ultimately boils down to the problem of choosing a model of the class of a phenomenon for the measurement process. With this formulation of the question, limitations arise due to the human mind, namely the knowledge, experience, and intuition of the researcher. The elimination of such limitations, as we have seen, can be successfully implemented using the information approach, which can be considered the main tool for assessing the accuracy of measuring a physical constant.

5. Conclusions

In this study, we presented the possibility of applying the concept of information to the problem of assessing the accuracy of measuring a physical constant. One of the important conclusions is that the amount of information in the model is the key to understanding the physical meaning of the threshold mismatch between the result of the experiment and the mathematical representation of the measurement process. This conclusion is consistent with the idea that the fundamental task of evaluating calculation accuracy is to select a channel for transmitting information through a model that developers choose in accordance with their experience, knowledge and intuition. The choice of the structure of the model and its class of phenomena leads to a situation where there is an inevitable measurement uncertainty. Researchers can no longer ignore or eliminate it, since future studies on the measurement of physical constants may incorrectly interpret the results.

A reliable, information-oriented, theoretically substantiated approach is proposed for calculating the relative uncertainty when measuring a physical constant. This approach uses the comparative uncertainty inherent in any model of the measurement process, whose value is determined by the qualitative set of base quantities and the total number of derived variables. The approach is not based on the assumption of a Gaussian distribution and is applicable to the analysis of results obtained over both long and short periods of time.

Calculated in accordance with the *IARU* for CoP = *LMTθF*, the relative uncertainty (2.3 × 10^{−7}) of the Boltzmann constant measurement using the acoustic gas thermometer method is close to the smallest achieved experimental uncertainty of 3.7 × 10^{−7} [13] recognized by CODATA. This confirms the *µ*-rule, according to which the experimentally achieved relative uncertainty is always greater than that calculated using the information approach (Section 2.4). It should be noted that the calculation of *r*_{SI} is carried out over a very short period of time, incomparably shorter than with the CODATA method.

The proposed approach was used to estimate the relative and comparative uncertainties in measurements of the Planck constant, Boltzmann constant, Hubble constant and gravitational constant, based on studies published between 2000 and 2019.

The ratio of the minimum achieved experimental comparative uncertainty to the theoretically calculated one revealed that models with a small number of base quantities (*LMT* and *LMTF*) are unsuitable for measuring the Planck constant, the Hubble constant and the gravitational constant. *The ratio ε_{exp}/ε_{SI} is an objective criterion for assessing the achieved accuracy when comparing various methods of measuring one specific physical constant*.

When using models with a large number of base quantities, for example *LMTI* or *LMTθF*, the ratio of the minimum achieved experimental relative uncertainty to the theoretically calculated one, *r*_{exp}/*r*_{SI}, varies from 0.9 to 2.9, which indicates that these methods are suitable for measuring physical constants. By contrast, *r*_{exp}/*r*_{SI} varies from 9 to 56 for models with a low number of base quantities, which is unacceptably high for practical use. Thus, within the framework of the information approach, *r*_{exp}/*r*_{SI} *can be considered a universal metric for assessing the practical level of accuracy when measuring any physical constant by various methods*.
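The criterion above can be expressed as a simple decision rule (a hypothetical helper; the function name and the cutoff of 3.0 are mine, chosen only to separate the 0.9-2.9 range from the 9-56 range quoted in the text, and are not prescribed by the paper):

```python
# Judge the suitability of a measurement method by the metric r_exp / r_SI.
# Per the text, models with many base quantities (LMTI, LMT-theta-F) yield
# ratios of 0.9-2.9 (suitable), while low-base-quantity models (LMT, LMTF)
# yield 9-56 (unacceptably high for practical use).

def method_suitability(r_exp: float, r_si: float, threshold: float = 3.0) -> str:
    """Classify a measurement method by the ratio r_exp / r_SI (illustrative cutoff)."""
    ratio = r_exp / r_si
    return "suitable" if ratio <= threshold else "unsuitable"

# Boundary values of the two ranges quoted in the text:
print(method_suitability(2.9, 1.0))   # prints "suitable"
print(method_suitability(56.0, 1.0))  # prints "unsuitable"
```

Any monotone cutoff between 2.9 and 9 would separate the two groups reported here; the specific value would need justification in each application.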

It should be noted that the information approach leads to a very non-trivial conclusion: *when measuring physical constants by various methods, it is not recommended to state only a single value of relative uncertainty*.

The author understands that these conclusions may be hard for part of the scientific community to accept, since they do not fit the generally accepted point of view. However, the author hopes that readers will find the time and inclination to identify possible contradictions or fundamental shortcomings of the proposed method. At the same time, the presented results in no way abolish the basic principles of measurement theory, which always remain valid but must be applied separately at later stages of model implementation.

A rigorous analysis of the data presented, confirmed by numerical results, shows that the proposed method is not only reliable and robust but also effective. The results of the study do not rule out applying an information-oriented approach to calculating the relative uncertainty in measurements of physical constants, and the new evidence continually obtained is exclusively in its favor.

In this time of uncertainty, it is important to understand the origins of the human "fuzzy" perception of the world around us. In the author's view, it is the information-theoretic approach that reveals the physical reasons why, whether we want it or not, we see the object under study through a "fog" of errors and doubts.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

[1] Rabinovich, S.G. (2005) Measurement Errors and Uncertainties: Theory and Practice. 3rd Edition, Springer Science+Business Media, Inc., New York.

[2] BIPM (2008) Guide to the Expression of Uncertainty in Measurement (the GUM). 1-134. https://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf

[3] Petersen, J.H., Holst, K.K. and Budtz-Jørgensen, E. (2010) Correcting a Statistical Artifact in the Estimation of the Hubble Constant Based on Type IA Supernovae Results in a Change in Estimate of 1.2%. The Astrophysical Journal, 723, 966-978. https://doi.org/10.1088/0004-637X/723/1/966

[4] Newell, D.B. and Tiesinga, E. (2019) The International System of Units (SI). NIST Special Publication 330, 1-138. https://doi.org/10.6028/NIST.SP.330-2019

[5] Wübbeler, G., Bodnar, O. and Elster, C. (2017) Robust Bayesian Linear Regression with Application to an Analysis of the CODATA Values for the Planck Constant. Metrologia, 55, 20-28. https://doi.org/10.1088/1681-7575/aa98aa

[6] Mohr, P.J., et al. (2018) Data and Analysis for the CODATA 2017 Special Fundamental Constants Adjustment. Metrologia, 55, 125-146. https://doi.org/10.1088/1681-7575/aa99bc

[7] Pavese, F. (2018) The New SI and the CODATA Recommended Values of the Fundamental Constants 2017 Compared with 2014, with a Comment to Possolo et al. Metrologia, 55, 1-11. https://arxiv.org/ftp/arxiv/papers/1512/1512.03668.pdf

[8] Dodson, D. (2013) Quantum Physics and the Nature of Reality (QPNR) Survey: 2011. https://goo.gl/z6HCRQ

[9] Henrion, M. and Fischhoff, B. (1986) Assessing Uncertainty in Physical Constants. American Journal of Physics, 54, 791-798. https://doi.org/10.1119/1.14447

[10] Karshenboim, S.G. (2005) Fundamental Physical Constants: Looking from Different Angles. Canadian Journal of Physics, 83, 767-811. https://doi.org/10.1139/p05-047

[11] JCGM (2018) Guide to the Expression of Uncertainty in Measurement—Developing and Using Measurement Models. JCGM 103 CD 2018-10-04, 1-79.

[12] Sedov, L.I. (1993) Similarity and Dimensional Methods in Mechanics. CRC Press, Florida.

[13] Newell, D.B., et al. (2018) The CODATA 2017 Values of h, e, k, and N_{A} for the Revision of the SI. Metrologia, 55, 13-16. https://doi.org/10.1088/1681-7575/aa950a

[14] Hensen, J.L.M. (2011) Building Performance Simulation for Sustainable Building Design and Operation. Proceedings of the 60th Anniversary Environmental Engineering Department, Czech Technical University, Prague, 1-8. http://goo.gl/yYYhLW

[15] System of Measurement. Wikipedia. https://en.wikipedia.org/wiki/System_of_measurement

[16] Menin, B. (2017) Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena. American Journal of Computational and Applied Mathematics, 7, 11-24. https://goo.gl/m3ukQi

[17] Menin, B. (2019) Fundamental Constants: Evaluating Measurement Uncertainty. Cambridge Scholars Publishing, UK.

[18] Bose, D., Wright, M.J. and Palmer, G.E. (2006) Uncertainty Analysis of Laminar Aeroheating Predictions for Mars Entries. Journal of Thermophysics and Heat Transfer, 20, 652-662. https://doi.org/10.2514/1.20993

[19] Shannon, C. (1949) Communication in the Presence of Noise. Proceedings of the IRE, 37, 10-21. https://doi.org/10.1109/JRPROC.1949.232969

[20] Brillouin, L. (1964) Scientific Uncertainty and Information. Academic Press, New York. https://goo.gl/tAewRu

[21] Kotelnikov, V.A. (1933) On the Transmission Capacity of "Ether" and Wire in Electro-Communications. First All-Union Conference on Questions of Communications, 1-23. https://goo.gl/wKvBBs

[22] Menin, B. (2020) Uncertainty Estimation of Refrigeration Equipment Using the Information Approach. Journal of Applied Mathematics and Physics, 8, 23-37. https://doi.org/10.4236/jamp.2020.81003

[23] Sonin, A.A. (2001) The Physical Basis of Dimensional Analysis. 2nd Edition, Department of Mechanical Engineering, MIT, Cambridge. http://web.mit.edu/2.25/www/pdf/DA_unified.pdf

[24] Menin, B. (2019) Precise Measurements of the Gravitational Constant: Revaluation by the Information Approach. Journal of Applied Mathematics and Physics, 7, 1272-1288. https://doi.org/10.4236/jamp.2019.76087

[25] Menin, B. (2019) The Boltzmann Constant: Evaluation of Measurement Relative Uncertainty Using the Information Approach. Journal of Applied Mathematics and Physics, 7, 486-504. https://doi.org/10.4236/jamp.2019.73035

[26] Pitre, L., et al. (2009) An Improved Acoustic Method for the Determination of the Boltzmann Constant at LNE-INM/CNAM. Comptes Rendus Physique, 10, 835-848. https://doi.org/10.1016/j.crhy.2009.11.001

[27] Sutton, G., Underwood, R., Pitre, L., de Podesta, M. and Valkiers, S. (2010) Acoustic Resonator Experiments at the Triple Point of Water: First Results for the Boltzmann Constant and Remaining Challenges. International Journal of Thermophysics, 31, 1310-1346. https://doi.org/10.1007/s10765-010-0722-z

[28] Pitre, L. (2015) Determination of the Boltzmann Constant k from the Speed of Sound in Helium Gas at the Triple Point of Water. Metrologia, 52, 263-273. https://doi.org/10.1088/0026-1394/52/5/S263

[29] Gavioso, R.M. (2015) A Determination of the Molar Gas Constant R by Acoustic Thermometry in Helium. Metrologia, 52, 274-304. https://doi.org/10.1088/0026-1394/52/5/S274

[30] Pitre, L., et al. (2017) New Measurement of the Boltzmann Constant k by Acoustic Thermometry of Helium-4 Gas. Metrologia, 54, 856-873. https://doi.org/10.1088/1681-7575/aa7bf5

[31] de Podesta, M., et al. (2017) Re-Estimation of Argon Isotope Ratios Leading to a Revised Estimate of the Boltzmann Constant. Metrologia, 54, 683-692. https://doi.org/10.1088/1681-7575/aa7880

[32] Feng, X.J., et al. (2017) Determination of the Boltzmann Constant with Cylindrical Acoustic Gas Thermometry: New and Previous Results Combined. Metrologia, 54, 748-762. https://doi.org/10.1088/1681-7575/aa7b4a

[33] Pitre, L., Plimmer, M.D., Sparasci, F. and Himbert, M.E. (2018) Determinations of the Boltzmann Constant. Comptes Rendus Physique, 1-11. https://doi.org/10.1016/j.crhy.2018.11.007

[34] Menin, B. (2019) Progress in Reducing the Uncertainty of Measurement of Planck's Constant in Terms of the Information Approach. Physical Science International Journal, 21, 1-11. https://doi.org/10.9734/psij/2019/v21i230104

[35] Menin, B. (2019) Hubble Constant Tension in Terms of Information Approach. Physical Science International Journal, 23, 1-15. https://doi.org/10.9734/psij/2019/v23i430165

[36] Haddad, D., et al. (2017) Measurement of the Planck Constant at the National Institute of Standards and Technology from 2015 to 2017. Metrologia, 54, 633-641. https://doi.org/10.1088/1681-7575/aa7bf2

[37] Wood, B.M., Sanchez, C.A., Green, R.G. and Liard, J.O. (2017) A Summary of the Planck Constant Determinations Using the NRC Kibble Balance. Metrologia, 54, 399-409. http://iopscience.iop.org/article/10.1088/1681-7575/aa70bf/pdf

[38] Fischer, J., et al. (2018) The Boltzmann Project. Metrologia, 55, 1-36. https://doi.org/10.1088/1681-7575/aaa790

[39] Qu, J. (2017) An Improved Electronic Determination of the Boltzmann Constant by Johnson Noise Thermometry. Metrologia, 54, 549-558. https://doi.org/10.1088/1681-7575/aa781e

[40] Fasci, E., et al. (2015) The Boltzmann Constant from the H_{2}^{18}O Vibration-Rotation Spectrum: Complementary Tests and Revised Uncertainty Budget. Metrologia, 52, 233-241. https://doi.org/10.1088/0026-1394/52/5/S233

[41] Riess, A.G., Casertano, S., Yuan, W., Macri, L.M. and Scolnic, D. (2019) Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics Beyond ΛCDM. 1-25. https://doi.org/10.3847/1538-4357/ab1422

[42] Planck Collaboration (2018) Planck 2018 Results. VI. Cosmological Parameters. Astronomy & Astrophysics, 1-71. https://www.cosmos.esa.int/documents/387566/387653/Planck_2018_results_L06.pdf/38659860-210c-ffac-3921-e5eac3ae4101

[43] Bennett, C.L., Larson, D., Weiland, J.L. and Hinshaw, G. (2014) The 1% Concordance Hubble Constant. The Astrophysical Journal, 794, 1-8. https://doi.org/10.1088/0004-637X/794/2/135

[44] Newman, R., Bantel, M., Berg, E. and Cross, W. (2014) A Measurement of G with a Cryogenic Torsion Pendulum. Philosophical Transactions of the Royal Society A, 372, 20140025, 1-24. https://doi.org/10.1098/rsta.2014.0025

[45] Tan, W.H., et al. (2018) Measurements of the Gravitational Constant Using Two Independent Methods. Nature, 560, 582-588. https://doi.org/10.1038/s41586-018-0431-5

[46] Possolo, A., Schlamminger, S., Stoudt, S., Pratt, J.R. and Williams, C.J. (2018) Evaluation of the Accuracy, Consistency, and Stability of Measurements of the Planck Constant Used in the Redefinition of the International System of Units. Metrologia, 55, 29-37. https://doi.org/10.1088/1681-7575/aa966c

[47] Shi-Song, L., et al. (2015) Progress on Accurate Measurement of the Planck Constant: Watt Balance and Counting Atoms. Chinese Physics B, 24, 1-15. https://doi.org/10.1088/1674-1056/24/1/010601

[48] Gorban, I.I. (2017) The Physical-Mathematical Theory of Hyper-Random Phenomena. Computer Science Journal of Moldova, 25, 145-194. http://www.math.md/files/csjm/v25-n2/v25-n2-(pp145-194).pdf

[49] DelSole, T. (2004) Predictability and Information Theory. Part I: Measures of Predictability. Journal of the Atmospheric Sciences, 61, 2425-2440. https://doi.org/10.1175/1520-0469(2004)061<2425:PAITPI>2.0.CO;2

[50] Schneider, T. and Griffies, S.M. (1999) A Conceptual Framework for Predictability Studies. Journal of Climate, 12, 3133-3155. https://doi.org/10.1175/1520-0442(1999)012<3133:ACFFPS>2.0.CO;2

[51] Hosseini, M., et al. (2018) Four Reference Models for Transparency Requirements in Information Systems. Requirements Engineering, 23, 251-275. https://doi.org/10.1007/s00766-017-0265-y

[52] Bais, F.A. and Farmer, J.D. (2008) The Physics of Information. 617-691. https://doi.org/10.1016/B978-0-444-51726-5.50020-0

[53] Jarvis, S.H. (2019) Solving the Cosmological Constant Problem. 1-33. https://www.researchgate.net/publication/338159068_Solving_the_Cosmological_Constant_Problem

[54] Burgin, M. (2003) Information Theory: A Multifaceted Model of Information. Entropy, 5, 146-160. https://doi.org/10.3390/e5020146

Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.