Unleashing the Power of Information Theory: Enhancing Accuracy in Modeling Physical Phenomena

Abstract

When building a model of a physical phenomenon or process, scientists face an inevitable compromise between the simplicity of the model (its qualitative-quantitative set of variables) and its accuracy. For hundreds of years, the visual simplicity of a law testified to the genius and depth of the physical thinking of the scientist who proposed it. Currently, the desire for a deeper physical understanding of the surrounding world and of newly discovered physical phenomena motivates researchers to increase the number of variables considered in a model, which in turn raises the probability of choosing an inaccurate or even erroneous model. This study describes a method for estimating the limit of measurement accuracy that takes into account the model-building stage in terms of the storage, transmission, processing and use of information by the observer. This limit, due to the finite amount of information stored in the model, makes it possible to select the optimal number of variables for the best reproduction of the observed object and to calculate the exact value of the threshold discrepancy between the model and the phenomenon under study in measurement theory. We consider two examples: measurement of the speed of sound and measurement of physical constants.

Menin, B. (2023) Unleashing the Power of Information Theory: Enhancing Accuracy in Modeling Physical Phenomena. Journal of Applied Mathematics and Physics, 11, 760-779. doi: 10.4236/jamp.2023.113051.

1. Introduction

For the last 400 years, since Newton, scientists have used their acquired knowledge, to the best of their ability and with limited resources, to represent idealized laws of nature as dependences on a small number of variables, such as Einstein's formula or Heisenberg's inequality. Although the laws of nature are important and useful, they are only models: they are based on assumptions, are sensitive to the chosen set of variables and data, and are valid only within the experimental accuracy achieved. That is why it is difficult to regard their simplicity as an unconditional guarantee of their inviolability and immutability for future generations of scientists.

In the modern scientific community, there is an opinion that the accuracy of a model increases with the number of variables it contains. A striking example of this position is [1], in which NASA engineers calculated the heating of a spacecraft's skin as it entered the Martian atmosphere using 130 variables. Numerous successful vehicle landings testify to the unconditional success of engineering thought in building the model. However, in most cases, an increase in the number of variables taken into account is accompanied by a complication of the model, an increase in the integral uncertainty of the variable under study, and a decrease in its accuracy. It can be assumed that the number of variables in the model and the accuracy achieved are antagonistic factors, which leads to the idea that an optimal set of variables exists for studying a given object. If this assumption is correct, it opens up the possibility of selecting the model most suited to the specific phenomenon under study. At present, no generally accepted criterion for constructing an optimal model of a physical phenomenon of interest or a technological process has been proposed.

2. Short Review of Methods to Optimize a Model

Numerous research papers are devoted to model optimization, including articles on model verification and validation (V & V) [2]. However, these methods are not without drawbacks:

1) Time-consuming: Validation and verification methods require a lot of time and effort to conduct, especially when using experimental data. This can delay the model optimization process and increase the overall project timeline.

2) Costly: V & V methods can be expensive, particularly when using experimental data or conducting simulations. The cost of equipment, materials, and software can add up quickly and may be prohibitive for some projects.

3) Limited scope: V & V methods are often limited in scope, as they rely on a finite amount of data or simulations. This can result in a lack of accuracy and reliability in the model, particularly when extrapolating beyond the tested conditions.

4) Uncertainty: V & V methods are subject to uncertainty and variability, particularly when using experimental data. This can lead to errors and inaccuracies in the model, particularly if the data is incomplete or inconsistent.

5) Complexity: V & V methods can be complex and difficult to implement, particularly when dealing with large and complex models. This can require specialized knowledge and expertise, which may be difficult to acquire or expensive to hire.

6) Subjectivity: V & V methods may be subjective, particularly when dealing with qualitative data or subjective opinions. This can introduce bias and inaccuracies into the model, particularly if the opinions or preferences of the modeler or stakeholders are not fully understood or accounted for.

7) Specific area of application: All these methods are focused on the analysis of data obtained as a result of experiments. The analysis of uncertainties associated with the structure of the model is outside their scope.

In recent decades, the application of information theory has become a promising direction in the search for the optimal structure of the model.

An information criterion was suggested [3] in order to select the most appropriate model describing the object under study. The model chosen according to the smallest value of the information criterion is "closest" to the unknown reality that generated the data among all the candidate models considered.

In [4] a framework for using information theory to compare models of physical phenomena based on their ability to fit the data and their complexity is presented. The authors introduce the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), which can be used to compare the goodness-of-fit of different models to the data.
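As an illustration of how such criteria rank models, the following Python sketch compares two candidate fits by AIC and BIC; the parameter counts and maximized log-likelihoods are hypothetical values chosen for the example.

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 ln(L_hat)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: BIC = k ln(n) - 2 ln(L_hat)."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical example: two models fitted to n = 50 observations.
# Model B fits slightly better but pays for four extra parameters.
n = 50
candidates = [("A", 3, -120.4), ("B", 7, -117.9)]
for name, k, ll in candidates:
    print(f"Model {name}: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, n):.1f}")
# Model A: AIC = 246.8, BIC = 252.5
# Model B: AIC = 249.8, BIC = 263.2 -> A is "closest to reality" by both criteria
```

As stated above, the model with the smallest criterion value is preferred; BIC penalizes extra parameters more heavily than AIC once n exceeds about eight observations.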

In classic research [5], Jaynes applies information theory to the problem of parameter estimation in physical models. He argues that the maximum entropy principle, which seeks the probability distribution that maximizes entropy subject to constraints, can be used to derive the maximum likelihood estimator.

An interesting approach was proposed in the study of quantum gates, which are, in essence, physical devices [6]. Therefore, they are subject to random errors. The reliability of quantum gates is considered from the perspective of information complexity. In turn, the complexity of gate operation is defined in terms of the difference between the entropy of variables associated with the initial and final states of computation. The approach explained that the gate operation can be associated with unbounded entropy, implying an impossibility of implementation under some conditions.

In research by [7], three criteria (robustness, fidelity and prediction-looseness) were used in order to assess the credibility of mathematical or numerical models. It is shown that these criteria are mutually antagonistic. The recommended main strategy is to explore the trade-offs between robustness and uncertainty, fidelity and data, and tightness of predictions.

Alternative model selection methods based on information criteria, multimodel inference, and relative variable importance are described in [8]. The authors demonstrate their application using an illustrative example and present results from a simulation study comparing the performance of the various model selection methods in identifying the true model across a wide variety of conditions. They also examine whether information-theoretic approaches can be used not only with maximum likelihood but also with restricted maximum likelihood estimation.

Information theory is not strictly necessary to understand physical phenomena, but it can be a useful tool for analyzing and modeling complex systems. Information theory provides a framework for quantifying the amount of information conveyed by signals or data, and for measuring the entropy or randomness of probability distributions. These concepts can be applied to a wide range of physical phenomena, from molecular dynamics simulations to astrophysics.

In many cases, information theory can provide insights that are difficult to obtain using other methods. For example, in statistical mechanics, information theory is used to derive thermodynamic relationships and to analyze the behavior of complex systems with many degrees of freedom. In machine learning and data analysis, information theory provides tools for feature selection, model selection, and regularization.
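As a reminder of the basic quantity behind these tools, the following sketch computes the Shannon entropy of a discrete distribution; it is a generic illustration, not part of the FIQ formalism discussed below.

```python
import math

def shannon_entropy(p, base=2):
    """H(p) = -sum p_i log p_i for a discrete distribution (bits by default)."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.9, 0.1]))   # ~0.469 bits: a biased coin is more predictable
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equiprobable outcomes
```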

To check the existence of an optimal set of base quantities and derived variables for each case, an information approach was proposed based on the use of variables carrying a finite amount of information, finite information quantities (FIQs) [9]. Its starting point seems very simple: it is the consciousness of the researcher, his philosophical perception of the world, that gives rise to uncertainty in the study of any phenomenon. The amount of information contained in the model depends on how familiar the researcher is with the observed object.

The two examples presented in the article complement each other and highlight aspects of the model optimization problem, emphasizing the possibility of using the information method in various scientific and technical applications.

3. Beginnings of Informational Approach

Below, we consider the physical perspective of the modelling process. For the purposes of this article, a model is a communication channel between the object under study and the researcher using a system of units to select variables [9]. Here it is appropriate to make one important remark. When we implement the measurement process in accordance with a pre-formulated model, the main role is assigned to the “observer”. The terminology is unfortunate, since most people have the illusion that the observer is passive and does not interact with the observed phenomenon. However, he/she introduces some perturbation into the system under study with the help of the measuring instruments used. At the same time, at the stage of building a model (a mental act is performed), the object of interest to us is not subjected to any energy disturbances. Therefore, it would be more correct to use the term “thinker”.

This channel has some properties that depend on the thinker’s will. The axiomatic structure of the communication channel when modelling a physical object includes the following statements [9] :

Axiom 1. The variables in the model are selected by the thinker from a system of units, such as Anglo-American units, Planck units, or centimeter-gram-second (CGS). The most commonly employed system is the International System of Units (SI) [10].

Axiom 2. According to [11], any model includes various finite information quantities (FIQs), which may be "a scalar parameter, universal constant, a one-dimensional component of position or momentum, or a dimensionless number that acquires values from the set of real numbers R."

Axiom 3. When reproducing the observed object, each researcher, based on his own opinion, selects certain variables in order to form an objective picture of the object under study. In this case, the base quantities taken from any system of units dictate the "group of processes" (GoP) to which the constructed model belongs. As stated in [12], a GoP is "a set of physical phenomena and processes described by a finite number of base quantities and derived variables that characterize the specific properties of the object under study from a qualitative and quantitative point of view." For the specific case of measuring the magnitude of a current according to Kirchhoff's law, variables are used that have the dimensions of length (L), mass (M), time (T) and current (I). The model then refers to GoPSI ≡ LMTI.

Axiom 4. Since each FIQ carries a limited amount of information [11] [13], a model consisting of a finite number of variables contains a limited quantity of information.

Axiom 5. The thinker selects variables in the model with equal probability. Although the researcher is convinced of the correctness of his judgement about the nature of the phenomenon, another observer, based on his philosophical views, can present a completely different construction of the model of the same object of observation.

These axioms may not be sufficient to represent all the features of the modelling process. Thus, the question of “accurate” reproduction of the observed phenomenon has not been fully addressed. In the following text, only aspects related to the magnitude of model uncertainty are considered.

Simultaneously, we believe that most readers are unfamiliar with this set of axioms. The first three axioms are typically used by researchers by default. The fourth axiom may attract the attention and interest of developers familiar with the achievements of information theory. The fifth axiom, however, is likely to provoke a negative reaction. If we recall the long-term discussion about whether the electron is a particle or a wave, the following becomes clear: scientists adhering to different philosophical views, each with a particular store of knowledge, accumulated experience and individual intuition, proved the validity of different models. As time has shown, both approaches were confirmed. This cannot be disregarded or simply discarded.

Considering a model as a communication channel provides a unique opportunity to consider the information content of the model through the physical variables chosen by the thinker in accordance with his perception of the observed phenomenon. There are then two discrete sets of equiprobable random variables, $X = \{x_1, \ldots, x_j\}$ and $Y = \{y_1, \ldots, y_p\}$, with $p \leq j$. X is the total number of FIQs in the physical system observed by the researcher. Y is the number of FIQs in the model, a "noisy" version of X, "compressed" by the will of the thinker. In this case (the formulation of the model), no energy interference is introduced into the real process, since only a thought experiment is carried out. However, some information is inevitably lost during modeling due to the subjective thinking of the thinker, caused by erroneous or inaccurate knowledge of the object, preconceived philosophical views, or lack of intuition. One cannot exclude the case in which the researcher selects variables completely different from X. Then Y carries no information about X, and the formulated model turns out to be completely noisy.

Model coding implies the use of a certain mapping process (GoP) from the initial set X (μSI) into a model structure with a given number of variables Y. Recall that within the framework of the FIQ-based method it is stated that the amount of information about a physical phenomenon is always finite due to the finite amount of information contained in the SI and the model. This situation leads to the existence of uncertainty, the magnitude of which is calculated using the concept of entropy in relation to the modeling process. To calculate the amount of information in the model and calculate the absolute uncertainty inherent in it, we first need to define a probability distribution over the possible outcomes of the model. The following inequality is taken into account: 0 < H(Y) < H(X), where H(Y) and H(X) are the entropies of the model and SI, respectively. To calculate H(X) and H(Y), the formalism presented in [14] [15] is used:

$H(X) = k_b \ln \mu_{SI}$ (1)

$H_{GoP}(Y) = k_b \ln(z' - \beta'), \quad H_{mod}(Y) = k_b \ln(z'' - \beta'')$ (2)

where H(Y) is realized in two stages: HGoP(Y) and Hmod(Y) are the entropies of the chosen GoP and of the model itself, respectively; z′ is the number of FIQs in the selected GoP; β′ is the number of base quantities in the selected GoP; z″ is the number of FIQs recorded in the model; and β″ is the number of base quantities recorded in the model. µSI is the number of FIQs calculated using the seven base SI quantities, µSI = 38,265 [9]. Intuitively, μSI seems to represent all the possible connections that exist in nature. At the same time, since µSI is finite, the reflection of the phenomenon under study in the model will always be inaccurate, and it is determined by the thinker's worldview.
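A minimal sketch of Equations (1) and (2), assuming kb is the Boltzmann constant and using µSI = 38,265 quoted above; the GoP counts correspond to GoP ≡ LMTI (z′ − β′ = 468, cited in Section 4.2), while the model counts z″ and β″ are hypothetical:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
MU_SI = 38_265       # number of FIQs derivable from the seven SI base quantities [9]

def h_x():
    """Eq. (1): H(X) = k_b * ln(mu_SI), entropy of the full SI 'alphabet'."""
    return K_B * math.log(MU_SI)

def h_gop(z1, b1):
    """Eq. (2), first stage: H_GoP(Y) = k_b * ln(z' - beta')."""
    return K_B * math.log(z1 - b1)

def h_mod(z2, b2):
    """Eq. (2), second stage: H_mod(Y) = k_b * ln(z'' - beta'')."""
    return K_B * math.log(z2 - b2)

# GoP ≡ LMTI: z' = 472, beta' = 4 (so z' - beta' = 468, the value quoted in
# Section 4.2); the model counts z'' = 14, beta'' = 4 are hypothetical.
print(f"H(X)     = {h_x():.3e} J/K")
print(f"H_GoP(Y) = {h_gop(472, 4):.3e} J/K")
print(f"H_mod(Y) = {h_mod(14, 4):.3e} J/K")
# The inequality 0 < H(Y) < H(X) holds: the model, a compressed 'noisy'
# version of X, carries strictly less information than the SI as a whole.
```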

Then, using the theoretical evidence given in [16] about the relationship between absolute uncertainty and entropy in modeling, it is possible to calculate the value of the minimum mismatch threshold between the model and the object of study. For this, the recommended criterion is the comparative uncertainty ε [17] :

$\varepsilon = \Delta/S = \left[ (z' - \beta')/\mu_{SI} + (z'' - \beta'')/(z' - \beta') \right]$ (3)

where ∆ is the absolute total uncertainty of the target FIQ due to the GoP and FIQs included in the model, and S is the target FIQ change interval, which is chosen by the researcher.

ε is one of the most important concepts of information theory [16] and can be considered as a universal limit of FIQ measurement accuracy. ε has a strictly defined value εopt for each GoP [17] :

$\varepsilon_{opt} = 2(z' - \beta')/\mu_{SI}$ (4)

It is noteworthy that ε is not used in scientific practice to assess the accuracy of models in the study of physical phenomena and technological processes. However, Equation (3) limits the achievable measurement accuracy both in experiments using the latest test benches and in numerical calculations using high-speed computers. This is because ε is not related to the act of measurement; it is due only to the design of the formulated model and the finite amount of information in the model prior to any experiment. It has been proven [18] [19] that Equation (3), the "ε-equation", can be applied to models that use dimensional and non-dimensional FIQs derived from any system of units, including different base quantities and derived variables.
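The following sketch evaluates Equations (3) and (4) for the same hypothetical model as above; note that the computed εopt reproduces the value 0.0245 quoted for the Kibble balance in Section 4.2:

```python
MU_SI = 38_265  # number of FIQs derivable from the seven SI base quantities [9]

def eps(z1, b1, z2, b2, mu=MU_SI):
    """Eq. (3): comparative uncertainty eps = Delta/S."""
    return (z1 - b1) / mu + (z2 - b2) / (z1 - b1)

def eps_opt(z1, b1, mu=MU_SI):
    """Eq. (4): minimum of Eq. (3) over the model term, 2(z' - beta')/mu_SI."""
    return 2 * (z1 - b1) / mu

# Hypothetical model in GoP ≡ LMTI: z' = 472, beta' = 4, z'' = 14, beta'' = 4.
print(f"eps     = {eps(472, 4, 14, 4):.4f}")  # ~0.0336
print(f"eps_opt = {eps_opt(472, 4):.4f}")     # 2*468/38265 ~ 0.0245 (KB, Table 3)
```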

The distortion (compression) of the object under study, implemented in the model, is determined by the will of the thinker with his specific philosophical views. Selecting a number of FIQs different from γmod (the optimal number of FIQs inherent in the model, shown in Table 1) results in distortion in the simulation. Using the above axioms, one can calculate the optimal values of εopt and the number of FIQs contained in a given GoPSI [17]. The results are summarized in Table 1. Refer to [20] [21] for more information on more complex GoPs.

Within the framework of the informational approach, in order to clarify the preference for one or another model in the study of a particular physical phenomenon, one should compare the achieved experimental comparative uncertainties ε1 and ε2 with εopt. By comparing |ε1 − εopt| with |εopt − ε2|, the researcher can "instantly" determine which is lower. To this end, let γ1 and γ2 be the numbers of FIQs in the first and second models, respectively. Both models refer to the same GoP, with γ1 < γGoP < γ2 and |γ1 − γGoP| < |γGoP − γ2|. By applying (3), the following equations are obtained:

$|\varepsilon_1 - \varepsilon_{opt}| = |2(\gamma_1 - \gamma_{GoP})/\mu_{SI}|, \quad |\varepsilon_2 - \varepsilon_{opt}| = |2(\gamma_2 - \gamma_{GoP})/\mu_{SI}|$ (5)

$|\varepsilon_1 - \varepsilon_{opt}| / |\varepsilon_2 - \varepsilon_{opt}| = |\gamma_1 - \gamma_{GoP}| / |\gamma_2 - \gamma_{GoP}| < 1$ (6)

Table 1. Data on the characteristic parameters of each Group of Processes (GoP).

where ε1 and ε2 are the comparative uncertainties of the first and the second models, respectively.

For ε1, the number of FIQs taken into account is closer to the optimal γGoP corresponding to the optimal comparative uncertainty εopt. Thus, the more informative model uses γ1, which is closer to γGoP. It follows that the FIQ-based approach makes it possible to identify the preferred model of the phenomenon under study in order to achieve higher measurement accuracy.
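A sketch of the selection rule of Equations (5) and (6); the FIQ counts γ1 and γ2 are hypothetical, and the optimum γGoP = 19 is borrowed from Table 1 as quoted in Section 4.1:

```python
MU_SI = 38_265

def preference_ratio(g1, g2, g_gop, mu=MU_SI):
    """Eqs. (5)-(6): |eps1 - eps_opt| / |eps2 - eps_opt| for two models of one GoP."""
    d1 = abs(2 * (g1 - g_gop) / mu)  # Eq. (5) for model 1
    d2 = abs(2 * (g2 - g_gop) / mu)  # Eq. (5) for model 2
    return d1 / d2

# Hypothetical FIQ counts around an assumed optimum gamma_GoP = 19:
r = preference_ratio(g1=14, g2=30, g_gop=19)
print(f"|eps1 - eps_opt| / |eps2 - eps_opt| = {r:.2f}")  # 5/11 ~ 0.45 < 1: model 1 preferred
```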

The practical significance of the information approach is determined by the fact that, for a better understanding of the methodology proposed by researchers, it is possible to reformat ε into a relative uncertainty r, which is widely used in scientific and technical research. The theoretical justification for this calculation is presented in detail in [18]. However, note that r is calculated already in the measurement process by the researcher, based on his subjective experience and knowledge. This situation leads to the idea that relative uncertainty includes an element of subjective judgement [22].

An important feature of the representation of modeling in the form of a process of information transfer from the object under study to the thinker occurs due to the unique physical content of the concept of “the amount of information contained in the model”. The FIQ-based approach makes it possible to calculate the initial and inevitable uncertainty of the model ε, which is due precisely to the worldview of the researcher. ε cannot be detected by any statistical methods using concepts such as weighted coefficients or consistency criteria. These tools are simply not intended for estimating ε. This is explained by the fact that statistical methods are focused on the analysis of the results of experiments and computer calculations according to a model that has already been built and implemented in the field.

The results of the theoretical conclusions of the informational method have been applied to several practical problems, including the measurement of a physical constant [13], the determination of the required simplicity of a physical law [9], and the evaluation of the efficiency of a technological process (thermal energy accumulation and ice maker performance) [23] [24] [25] [26].

4. Applications of the FIQ-Based Approach

4.1. Speed of Sound

There have been many studies on measuring the speed of sound. Below is an analysis of the experimental data of only three works, in which the speed of sound propagation is measured in hydrogen chloride [27], in 36 elementary solids [28], and in binary mixtures (N2 + H2) [29].

The generated sound speed measurement data are listed in Table 2. The results were compared in accordance with the information method [20]. Examining the data entered and using the information provided in the articles, the following comments are made.

Table 2. Comparison of research results.

To quantitatively certify the quality of numerical and experimental analyses of physical systems, the relative uncertainty r is used. However, it is difficult to compare the presented models in terms of the accuracy of measuring the speed of sound using r, for the following reason. In the three papers discussed, the researchers are convinced that the measurements were made correctly and the results are reliable, because the calculations based on the built models agree closely with the experimental data, as do the calculated values of the achieved total relative measurement uncertainty r. Notably, in publications on measuring the speed of sound propagation, the achieved total uncertainty ∆ (EU) was not compared with the discrepancy between theoretical calculations (TC) and experimental results (ER). For example, to calculate EU, concepts such as those in [30] [31] can be applied. In the case where |TC − ER| < |EU|, the practicality and legitimacy of using the formulated model are debatable [32], and it is risky to apply it to describe the propagation of sound in various media. For the results of the presented studies to still attract the attention of readers, and to confirm the feasibility and preference of a particular model, we will use ε as a universal criterion for choosing the preferred model.

According to Axiom 5, the use of ε implies equiprobable consideration of variables in the model. Researchers, using their knowledge, experience and intuition, choose those variables that, from their point of view, reflect the essence of sound propagation in various environments. As a rule, the number of variables is not large. Therefore, many phenomena that characterize the magnitude of the speed of sound may not be considered.

The closeness of the uncertainties ε1,2,3 obtained in the experiments to the justified uncertainties εopt1,2,3 indicates the preference of the chosen model [28] compared to the other two models [27] [29]: ε1/εopt1 = 0.53 < ε2/εopt2 = 0.69 < ε3/εopt3 = 0.8. This recommendation is also confirmed by the ratio of the number of FIQs taken into account in each model, γ1,2,3, to the optimal γmod1,2,3: γ1 = 1, γmod1 = 19 (Table 1), and γ1/γmod1 = 1/19 = 0.05 [27]; γ2 = 4, γmod2 = 19 (Table 1), and γ2/γmod2 = 4/19 = 0.21 [29]; γ3 = 18, γmod3 = 52 (Table 1), and γ3/γmod3 = 18/52 = 0.35 [28].
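The ratios quoted in this paragraph can be reproduced directly; the following check uses only the values already given above:

```python
# gamma / gamma_mod ratios quoted in this subsection for the three studies:
studies = [
    ("[27] HCl",         1, 19),
    ("[29] N2 + H2",     4, 19),
    ("[28] 36 solids",  18, 52),
]
for label, gamma, gamma_mod in studies:
    print(f"{label}: gamma/gamma_mod = {gamma}/{gamma_mod} = {gamma / gamma_mod:.2f}")
# 0.05 < 0.21 < 0.35: the count in [28] lies closest to its optimum, in line
# with the comparative-uncertainty ordering 0.53 < 0.69 < 0.80 quoted above.
```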

The results of [28] certainly expand the understanding of how taking into account a large number of variables (closest to optimal among the three models) deepens our knowledge of the true value of the speed of sound and opens up new frontiers in the study of a repeatedly analyzed phenomenon.

4.2. Physical Constants

In 2019, the ICSU Committee on Data for Science and Technology (CODATA) approved a new version of the International System of Units (SI) [33]. This became possible thanks to the efforts of thousands of scientists and engineers who developed advanced methods, built unique experimental stands and achieved unprecedented accuracy in measuring physical constants.

When measuring a physical constant, the CODATA methodology involves the careful construction of tables of data necessary to calculate r, using unique methods of statistical processing of experimental results and ultra-fast computers developed for this purpose. However, uncertainty budget calculations are limited by the knowledge of the researchers. A detailed calculation of the uncertainty budget for the experiment is intended to exclude any possible systematic effects and to ensure the consistency between the input data and the output values. A necessary element of the methodology is expert data analysis carried out by scientists, each with their own philosophical outlook [34]. In such a situation, one cannot exclude the possibility of a subjective expert opinion shaped by personal position and preference.

In the CODATA methodology, as in all other fields of science and technology, the experimental data obtained from measurements with an already built model, with fixed mathematical dependences among the selected variables, are subjected to rigorous and thorough verification. However, the modern scientific literature completely overlooks an important source of uncertainty that affects the accuracy of the model used to measure a physical variable: the GoP.

The purpose of the following presentation is to demonstrate the benefits of using the FIQ-based method to determine the preferred method for measuring fundamental physical constants through comparative uncertainty. This approach does not eliminate or diminish the need to present measurement results using relative uncertainty. In addition, a wide scientific audience should be reminded that the possibility of using comparative uncertainty has a rigorous theoretical justification within the framework of information theory, taking into account the five axioms above [9].

Generalized measurement data for the Planck constant h, Boltzmann constant kb, Hubble constant H0, and gravitational constant G, published by various research centers for 2000-2018, are analyzed in [19] [35] [36] [37] [38] and are briefly presented in Table 3.

Based on the calculation results given in Table 3, several remarks should be made:

Table 3. Comparison of methods for measuring physical constants by means of ε.

1KB—Kibble balance. Data include the results of measurements taken in seven laboratories from 2014 to 2017. 2XRCD—X-ray crystal density. Data include the results of measurements taken in seven laboratories from 2011 to 2018. 3AGT—acoustic gas thermometer. Data include the results of measurements taken in seven laboratories from 2009 to 2017. 4DCGT—dielectric constant gas thermometer. Data include the results of measurements taken in six laboratories from 2012 to 2018. 5DBT—Doppler broadening thermometer. Data include the results of measurements taken in six laboratories from 2007 to 2015. 6BDL—brightness of distance ladder. Data include the results of measurements taken in seven laboratories from 2011 to 2019. 7CMB—cosmic microwave background. Data include the results of measurements taken in six laboratories from 2009 to 2018. 8BAO—baryonic acoustic oscillations. Data include the results of measurements taken in four laboratories from 2014 to 2018. 9Data include the results of measurements taken in seven laboratories from 2000 to 2014. 10Data include the results of measurements taken in five laboratories from 2001 to 2018.

- Planck's constant, h: when implementing the KB method, the ratio εexp/εopt = 0.3976/0.0245 = 16.2, which is actually half that achieved using XRCD: εexp/εopt = 0.4733/0.0145 = 32.6. Obviously, when implementing the KB method (GoP ≡ LMTI, γGoP = z′ − β′ = 468), it is possible to take into account a much larger number of variables (hidden bonds) compared with XRCD (GoP ≡ LMTF, γGoP = z′ − β′ = 279). Therefore, in further studies aiming at more accurate measurements of h, according to the FIQ-based method, researchers are recommended to use KB.

- Constants kb, H0 and G: the situation is similar to that for h. When using a measurement model with a more complex GoP (potentially more variables taken into account), the experimentally achieved comparative uncertainty is closer to the recommended value; that is, the ratio εexp/εopt is smaller. Thus, within the framework of the informational approach, the preferred methods for measuring these constants are the dielectric constant gas thermometer (DCGT, for kb), the cosmic microwave background method (CMB, for H0) and electromechanical methods (for G).
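The same comparison can be scripted for Planck's constant using the εexp and εopt values quoted above; since both ratios exceed 1, the smaller ratio is the one closer to the optimum:

```python
# eps_exp / eps_opt for the two methods of measuring h (values quoted above):
methods = [
    ("KB   (GoP = LMTI, z' - beta' = 468)", 0.3976, 0.0245),
    ("XRCD (GoP = LMTF, z' - beta' = 279)", 0.4733, 0.0145),
]
for label, e_exp, e_opt in methods:
    print(f"{label}: eps_exp/eps_opt = {e_exp / e_opt:.1f}")
# 16.2 vs. 32.6: the Kibble-balance ratio is about half the XRCD one, so KB is
# the preferred method for measuring h within the FIQ-based approach.
```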

To remove any doubts about the stated conclusions, we present a summary in Table 4, in which the analysis of measurements of physical constants is carried out with the relative uncertainty reformatted from the comparative uncertainty.

One of the most significant problems in measuring a physical variable and assessing the achieved uncertainty (confirming the plausibility and reliability of the proposed model) is that all statistical methods, without exception, are focused on identifying and calculating the relative uncertainties of all FIQs taken into account in the constructed model and on elucidating the uniformity of the experimental data obtained. Without diminishing the importance and necessity of these steps, attention should be paid to the already mentioned situation in which the uncertainty associated with the GoP of the model is completely ignored. In the modern theory of measurements and metrology, such uncertainty is not considered.

From the results presented in Table 4, it is clear that for any measurement method based on a model with a significant number of base quantities, the ratio of the reformatted rexp to the justified ropt is smaller than for measurement methods that use a small number of base quantities in the model. A pronounced trend is observed: in those methods where εexp/εopt is smaller, the ratio rexp/ropt is also smaller.

Table 4. Comparison of methods for measuring physical constants by means of r.

1JNT—Johnson noise thermometer. Data include the results of measurements taken in six laboratories from 2011 to 2017.

In the framework of the FIQ-based method, this can be explained by the fact that for methods of measuring a physical constant based on models with a large number of base quantities, it is possible to take into account a larger number of hidden relationships between variables. Thus, when choosing the preferred method for measuring a particular physical constant, the ratio εexp/εopt can be recommended as a practically justified criterion.

5. Discussion

In this work, a new idea is presented for the reader's judgement: in addition to quantum uncertainty, which is practically not taken into account in everyday life due to the small value of Planck's constant, when modeling a physical object prior to any measurement, it is necessary to take into account a new limit. The physical meaning of this limit lies in the finite amount of information embedded in the model and transmitted over the communication channel to the recipient (the thinker).

This additional limit is connected not with our ignorance or the current lack of knowledge at our disposal, but with the tool (the communication channel) that we are forced to use when observing an object. During the modelling process, which precedes any experiment, physical energy is not introduced into the system under study, and no perturbation arises in it. This confirms that the uncertainty associated with the amount of information in the model is epistemological. At a deeper level, the question of uncertainty in quantum mechanics, whether reality is defined and concrete, remains open, since we also use models containing a finite, limited amount of information to describe phenomena in the microworld. Such an approach could have intriguing implications for the ability of scientists to push the limits of existing theories and laws, formulate new concepts, and possibly radically redefine Einstein's general theory of relativity and quantum physics [49]. Realizing that such a statement is highly controversial, one can hope that it is information theory and its mathematical apparatus that will serve as a bridge connecting these two pillars of physics.

Notably, if the axioms of the informational approach [9] are true and there is a limit to the accuracy of describing (modelling) physical objects, one that is coarse and much larger than the limit set by quantum mechanics on the accuracy of measurements, then the expediency of seeking knowledge at ever deeper (smaller-scale) levels of nature is called into question. This approach may prove useful for quantum metrology and quantum computing. On the other hand, the question of whether investing large amounts of money and intellectual resources in the development of particle colliders is worthwhile acquires particular relevance. The situation is aggravated by the fact that the term "measurement" is not defined in the axioms of quantum mechanics [50]. In turn, the act of measurement is preceded by a model, the result of the mental activity of a scientist, whose scientific position may differ from and even be directly opposite to the opinion of other researchers. Note that the model is chosen from a certain set of models at the will of the thinker. The researcher must be aware of the possible characteristics of the object under study, although his knowledge may be incomplete and inaccurate.

Following the logic of the informational approach, we can assume that the problem of measurement, after all, is epistemic in origin. For the tasks of the macro world, the information approach clarifies how to achieve the most acceptable level of measurement accuracy. Examples are presented in this work.

Measurement is meaningless beyond the context of the model and is the result of the modelling process. Moreover, during modelling (a mental act), there is no transfer of matter or energy, and no perturbation is introduced into the real physical system; in other words, modelling is an energy-free process. Thus, the foregoing leads to the need and expediency of linking the process of building a model and the measurement process via an information approach. It seems that the problem of the act of measurement in physics and technology, especially in quantum mechanics, cannot be solved without taking into account the form and configuration of the model by which the measurement is carried out. The solution of the measurement problem is thereby translated from a mere philosophical discussion into a concrete technological application that holds great promise in science and technology.

Modeling, and the formation of an optimal model, can be identified with the process of synchronizing the real object under study and its representation constructed by the observer. For various reasons, the two are mismatched. For example, when measuring a physical constant, the researcher presumably has sufficient knowledge about its nature and the methods for calculating it. However, because different researchers hold their own philosophical views on the nature of this constant, the randomness and equiprobability of the choice of a variable from the applied system of units must inevitably be assumed. As previously mentioned, however, the use of any particular system of units does not affect the results.

Presenting a model as an information channel does not imply that it is the ultimate truth and has no flaws. The proposed approach, which identifies the modelling process with the result of synchronization, does not introduce any new entities, is based only on the characteristics chosen by the thinker, and facilitates the use of a theoretically justified and pragmatic approach to choosing a plausible model that is close (less blurred) to the observed object.

The informational approach cannot establish the specific structure of the model and all the necessary variables that would allow us to assert the truth of the model. Instead, this method gives the researcher a tool (the comparative uncertainty) with which she/he can select the GoP and the number of derived variables so as to obtain an experimental εexp close to the theoretically justified uncertainty. The use of this uncertainty means that models built by different groups of researchers may not be identical, but they must satisfy Equation (3) and contain a qualitative-quantitative set of variables that is important for measurement.

The closeness of the experimental comparative uncertainties of different models does not require their physical and mathematical identity.

The presented principles of the informational approach stand far apart from both quantum mechanics (QM) and classical physics (CP). In QM, the actual act of observation, by means of some field (light, electromagnetic waves), interferes with what is being observed; uncertainty is built into the nature of quantum systems. In CP, a deterministic vision of the world is assumed, and the act of measurement is presented as something independent of the constructed model. For the FIQ-based method, the core idea is the information contained in a model, which depends on the will of the thinker. In QM and CP, the structure of the object model does not act as a source of uncertainty and is outside the scope of their study.

The problem is that, despite the deep knowledge, experience, talent and intuition of the researcher, as well as powerful computers, advanced statistical methods and unique measurement setups, the accuracy of calculating the values of physical variables can be refined only to a much coarser level than is dictated by the Heisenberg relation. In other words, knowability, understood as the ability to achieve unprecedented accuracy, is limited by an intangible tool in the hands of the researcher: a model that always contains a finite amount of information about the object of observation. This is true for QM and CP alike, as well as for the theory of relativity. If a model contains fewer or more variables than the optimum, the "fuzziness" of the observed object will in any case be significant, and its reproduction accuracy will be low.

6. Conclusions

This work reflects the trend of scientists' interest in the search for new possible sources of uncertainty in modeling physical phenomena and technological processes. We have identified an important aspect of measurement related to model building and presented a new perspective on the potential increase in accuracy in the study of a physical object: the need to synchronize the observed phenomenon and its representation (the model) using information theory and considering the model as a communication channel between the object and the observer. The motivation for this goal was the desire to realize the information representation of a complex physical phenomenon with high accuracy, which is determined by the philosophical view of the thinker. Unfortunately, the "information" component of model uncertainty is not taken into account in the current practice of probabilistic analysis of experimental results, which dominates the thinking of scientists and engineers. Researchers conduct a statistical analysis of the results after the formulation of the model preceding the experiment, and do not take into account the uncertainties, important in the author's opinion, caused by the GoP and the number of variables taken into account.

Presenting the model as an information channel makes it possible to calculate the amount of information contained in it. This, in turn, helps to calculate the value of the comparative uncertainty, which serves as a criterion for choosing the most preferred model (measurement method) for the selected object of observation. Moreover, the implementation of the FIQ approach does not require fulfillment of the requirements specific to existing statistical methods. This work can also be considered an additional contribution to the understanding of the systematics that influence experiments to measure the exact value of the gravitational constant [51].

The method outlined in this paper presents a conceptual decomposition of the modeling process into its components in terms of sources of uncertainty. It was developed by treating the model as a channel for transmitting information from the object of study to the thinker, and it is applicable to any physical phenomena and technological processes. The method reveals the relationship between the structure of the model and its uncertainty and makes it possible to identify research problems associated with choosing the most preferable model for a particular object.

The informational approach seeks the simplest solution that works. In essence, the FIQ-based method leads to the idea that, in order to achieve high measurement accuracy (in other words, to increase the plausibility of the model that precedes the act of measurement), researchers must be able to take into account a large number of possible interactions of variables. Therefore, when a significant number of base quantities are used in the model and the number of derived variables is close to the recommended γmod, the probability of selecting the optimal model is very high, even if the researchers are not sure of the sufficiency of their knowledge about the object under study.

The FIQ-based approach can be considered an effective tool for eliminating cumbersome or, conversely, oversimplified assumptions: the simulated physical phenomena and technological processes under investigation are always more complex than their models. The closer we get to their true complexity, the more accurate the models become.

In our difficult times, there is an urgent need to make sense of the "foggy" vision of the world around us, not forgetting that science is inherently uncertain, regardless of the unique, carefully calibrated and accurate research methods used. Moreover, the scientist himself, as a thinker, is the cause of the reliability limit of accuracy [52]. The information method can be one of the effective tools for identifying the causes of inaccurate reproduction of natural and technological processes.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Bose, D., Palmer, G.E. and Wright, M.J. (2006) Uncertainty Analysis of Laminar Aeroheating Predictions for Mars Entries. Journal of Thermophysics and Heat Transfer, 20, 652-662.
https://sci-hub.se/10.2514/1.20993
https://doi.org/10.2514/1.20993
[2] The American Society of Mechanical Engineers Standards (2006) Guide for Verification and Validation in Computational Solid Mechanics: ASME V & V 10-2006. The American Society of Mechanical Engineers, 28.
http://goo.gl/9gjVdA
[3] Akaike, H. (1973) Information Theory as an Extension of the Maximum Likelihood Principle. In: Petrov, B.N. and Csaki, F., Eds., Proceedings of the 2nd International Symposium on Information Theory, Akademiai Kiado, Budapest, 267-281.
[4] Burnham, K.P. and Anderson, D.R. (2002) Model Selection and Multimodel Inference. A Practical Information-Theoretic Approach. 2nd Edition, Springer-Verlag, New York.
https://caestuaries.opennrm.org/assets/06942155460a79991fdf1b57f641b1b4/application/pdf/burnham_anderson2002.pdf
[5] Jaynes, E.T. (2003) Probability Theory: The Logic of Science. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511790423
[6] Kak, S. (2006) Information Complexity of Quantum Gates. International Journal of Theoretical Physics, 45, 933-941.
https://doi.org/10.1007/s10773-006-9086-3
[7] Ben-Haim, Y. and Hemez, F.M. (2012) Robustness, Fidelity and Prediction-Looseness of Models. Proceedings of the Royal Society A, 468, 227-244.
https://doi.org/10.1098/rspa.2011.0050
[8] Cinar, O., Umbanhowar, J., Hoeksema, J.D. and Viechtbauer, W. (2021) Using Information-Theoretic Approaches for Model Selection in Meta-Analysis. Research Synthesis Methods, 12, 537-556.
[9] Menin, B. (2021) Construction of a Model as an Information Channel between the Physical Phenomenon and Observer. Journal of the Association for Information Science and Technology, 72, 1198-1210.
https://doi.org/10.1002/asi.24473
[10] Newell, D.B. and Tiesinga, E. (2019) The International System of Units (SI). NIST Special Publication, 330, 1-138.
https://doi.org/10.6028/NIST.SP.330-2019
[11] Del Santo, F. and Gisin, N. (2019) Physics without Determinism: Alternative Interpretations of Classical Physics. Physical Review A, 100, Article ID: 062107.
https://sci-hub.tw/10.1103/PhysRevA.100.062107
https://doi.org/10.1103/PhysRevA.100.062107
[12] Sedov, L.I. (1993) Similarity and Dimensional Methods in Mechanics. CRC Press, Boca Raton.
[13] Burgin, M. (2010) Theory of Information: Fundamentality, Diversity and Unification. In: Burgin, M., Ed., World Scientific Series in Information Studies, Vol. 1, World Scientific Publishing, Singapore, 688.
https://doi.org/10.1142/7048
[14] Landsberg, P.T. (1986) Entropy and Order. In: Kilmister, C.W., Ed., Disequilibrium and Self-Organization, Mathematics and Its Applications, Vol. 30, Springer, Dordrecht, 19-21.
https://doi.org/10.1007/978-94-009-4718-4_3
[15] Lloyd, S. (2000) Ultimate Physical Limits to Computation. Nature, 406, 1047-1054.
https://sci-hub.se/10.1038/35023282
https://doi.org/10.1038/35023282
[16] Brillouin, L. (1953) Science and Information Theory. Academic Press, New York.
[17] Menin, B.M. (2022) Simplicity of Physical Laws: Informational-Theoretical Limits. IEEE Access, 10, 56711-56719.
https://doi.org/10.1109/ACCESS.2022.3177274
[18] Menin, B. (2017) Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena. American Journal of Computational and Applied Mathematics, 7, 11-24.
https://goo.gl/m3ukQi
[19] Menin, B. (2019) Hubble Constant Tension in Terms of Information Approach. Physical Science International Journal, 23, 1-15.
https://doi.org/10.9734/psij/2019/v23i430165
[20] Menin, B. (2018) h, k, NA: Evaluating the Relative Uncertainty of Measurement. American Journal of Computational and Applied Mathematics, 8, 93-102.
http://article.sapub.org/10.5923.j.ajcam.20180805.02.html
[21] Menin, B. (2022) Can Information Theory Help to Formulate an Optimal Model of a Physical Phenomenon? Journal of Applied Mathematics and Physics, 10, 2940-2950.
https://www.scirp.org/journal/paperinformation.aspx?paperid=120380
https://doi.org/10.4236/jamp.2022.1010197
[22] Henrion, M. and Fischhoff, B. (1986) Assessing Uncertainty in Physical Constants. American Journal of Physics, 54, 791-798.
https://sci-hub.se/10.1119/1.14447
https://doi.org/10.1119/1.14447
[23] Menin, B.M. (2017) Preferred Physical-Mathematical Model of the Cold Energy Storage System. Applied Thermal Engineering, 112, 1020-1026.
https://doi.org/10.1016/j.applthermaleng.2016.10.128
[24] Menin, B. (2018) Information on the Service of Achieving High Accuracy of Models of Cold Energy Storage Systems. European Journal of Advances in Engineering and Technology, 5, 740-744.
https://ejaet.com/PDF/5-9/EJAET-5-9-740-744.pdf
[25] Menin, B. (2020) Uncertainty Estimation of Refrigeration Equipment Using the information Approach. Journal of Applied Mathematics and Physics, 8, 23-37.
https://www.scirp.org/journal/paperinformation.aspx?paperid=97483
https://doi.org/10.4236/jamp.2020.81003
[26] Menin, B. (2021) The Finite Information Quantity-Based Method Is a New Radical Approach to Validate the Accuracy of the Phenomenon and Technological Process Model. (Preprint)
[27] Thol, M., Dubberke, F.H., Baumhögger, E., Span, R. and Vrabec, J. (2018) Speed of Sound Measurements and a Fundamental Equation of State for Hydrogen Chloride. Journal of Chemical & Engineering Data, 63, 2533-2547.
https://sci-hub.se/10.1021/acs.jced.7b01031
https://doi.org/10.1021/acs.jced.7b01031
[28] Trachenko, K., Monserrat, B., Pickard, C.J. and Brazhkin, V.V. (2020) Speed of Sound from Fundamental Physical Constants. Science Advances, 6, eabc8662.
https://sci-hub.se/10.1126/sciadv.abc8662
https://doi.org/10.1126/sciadv.abc8662
[29] Segovia, J.J., Lozano-Martín, D., Tuma, D., Moreau, A., Martín, M. and Vega-Maza, D. (2022) Speed of Sound Data and Acoustic Virial Coefficients of Two Binary (N2+H2) Mixtures at Temperatures between (260 and 350) K and at Pressures between (0.5 and 20). The Journal of Chemical Thermodynamics, 171, Article ID: 106791.
https://doi.org/10.1016/j.jct.2022.106791
[30] Gourgoulias, K., Katsoulakis, M.A., Rey-Bellet, L. and Wang, J. (2020) How Biased Is Your Model? Concentration Inequalities, Information and Model Bias. IEEE Transactions on Information Theory, 66, 3079-3097.
https://sci-hub.se/10.1109/tit.2020.2977067
https://doi.org/10.1109/TIT.2020.2977067
[31] Patra, L.K., Kayal, S. and Kumar, S. (2020) Measuring Uncertainty under Prior Information. IEEE Transactions on Information Theory, 66, 2570-2580.
https://sci-hub.se/10.1109/TIT.2020.2970408
https://doi.org/10.1109/TIT.2020.2970408
[32] Cunha, A. (2017) Modeling and Quantification of Physical Systems Uncertainties in a Probabilistic Framework. In: Ekwaro-Osire, S., Gonçalves, A.C. and Alemayehu, F.M., Eds., Probabilistic Prognostics and Health Management of Energy Systems, Springer, Cham, 127-156.
https://hal.science/hal-01516295/document
https://doi.org/10.1007/978-3-319-55852-3_8
[33] Davis, R. (2019) An Introduction to the Revised International System of Units (SI). IEEE Instrumentation & Measurement Magazine, 22, 4-8.
https://sci-hub.se/10.1109/MIM.2019.8716268
https://doi.org/10.1109/MIM.2019.8716268
[34] Dodson, B. (2013) So You Think YOU’RE Confused about Quantum Mechanics?
https://newatlas.com/confusion-basic-nature-quantum-mechanics/26216/
[35] Menin, B. (2019) Progress in Reducing the Uncertainty of Measurement of Planck’s Constant in Terms of the Information Approach. Physical Science International Journal, 21, 1-11.
[36] Menin, B. (2020) High Accuracy When Measuring Physical Constants: From the Perspective of the Information-Theoretic Approach. Journal of Applied Mathematics and Physics, 8, 861-887.
[37] Menin, B. (2019) The Boltzmann Constant: Evaluation of Measurement Relative Uncertainty Using the Information Approach. Journal of Applied Mathematics and Physics, 7, 486-504.
https://www.scirp.org/journal/paperabs.aspx?paperid=91062
https://doi.org/10.4236/jamp.2019.73035
[38] Menin, B. (2019) Precise Measurements of the Gravitational Constant: Revaluation by the Information Approach. Journal of Applied Mathematics and Physics, 7, 1272-1288.
https://www.scirp.org/journal/paperinformation.aspx?paperid=100314
https://doi.org/10.4236/jamp.2019.76087
[39] Haddad, D., et al. (2017) Measurement of the Planck Constant at the National Institute of Standards and Technology from 2015 to 2017. Metrologia, 54, 633-641.
http://iopscience.iop.org/article/10.1088/1681-7575/aa7bf2/pdf
https://doi.org/10.1088/1681-7575/aa7bf2
[40] Wood, B.M., Sanchez, C.A., Green, R.G. and Liard, J.O. (2017) A Summary of the Planck Constant Determinations Using the NRC Kibble Balance. Metrologia, 54, 399-409.
https://sci-hub.se/10.1088/1681-7575/aa70bf
https://doi.org/10.1088/1681-7575/aa70bf
[41] Pitre, L., et al. (2017) New Measurement of the Boltzmann Constant k by Acoustic Thermometry of Helium-4 Gas. Metrologia, 54, 856-873.
https://sci-hub.se/10.1088/1681-7575/aa7bf5
https://doi.org/10.1088/1681-7575/aa7bf5
[42] Fischer, J., et al. (2018) The Boltzmann Project. Metrologia, 55, R1-R20.
https://sci-hub.se/10.1088/1681-7575/aaa790
https://doi.org/10.1088/1681-7575/aaa790
[43] Qu, J., Benz, S.P., Coakley, K., Rogalla, H., Tew, W.L., White, R., Zhou, K. and Zhou, Z. (2017) An Improved Electronic Determination of the Boltzmann Constant by Johnson Noise Thermometry. Metrologia, 54, 549-558.
https://sci-hub.se/10.1088/1681-7575/aa781e
https://doi.org/10.1088/1681-7575/aa781e
[44] Fasci, E., De Vizia, M.D., Merlone, A., Moretti, L., Castrillo, A. and Gianfrani, L. (2015) The Boltzmann Constant from the H2 18O Vibration-Rotation Spectrum: Complementary Tests and Revised Uncertainty Budget. Metrologia, 52, S233-S241.
https://sci-hub.se/10.1088/0026-1394/52/5/S233
https://doi.org/10.1088/0026-1394/52/5/S233
[45] Riess, A.G., Casertano, S., Yuan, W., Macri, L.M. and Scolnic, D. (2019) Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics Beyond ΛCDM. The Astrophysical Journal, 876, 85.
https://sci-hub.se/10.3847/1538-4357/ab1422
https://doi.org/10.3847/1538-4357/ab1422
[46] Planck Collaboration, Aghanim, N., et al. (2020) Planck 2018 Results. VI. Cosmological Parameters. Astronomy & Astrophysics Manuscript, 1-73.
https://arxiv.org/abs/1807.06209
[47] Newman, R., Bantel, M., Berg, E. and Cross, W. (2014) A Measurement of G with a Cryogenic Torsion Pendulum. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 372, Article ID: 20140025.
https://sci-hub.se/10.1098/rsta.2014.0025
https://doi.org/10.1098/rsta.2014.0025
[48] Li, Q., et al. (2018) Measurements of the Gravitational Constant Using Two Independent Methods. Nature, 560, 582-588.
https://sci-hub.se/10.1038/s41586-018-0431-5
https://doi.org/10.1038/s41586-018-0431-5
[49] Piran, T. and Jimenez, R. (2022) Black Holes as “Time Capsules”: A Cosmological Graviton Background and the Hubble Tension. Astronomische Nachrichten, e20230033. (Preprint)
https://doi.org/10.1002/asna.20230033
[50] Hance, J.R. and Hossenfelder, S. (2022) What Does It Take to Solve the Measurement Problem? Journal of Physics Communications, 6, Article ID: 102001.
https://iopscience.iop.org/article/10.1088/2399-6528/ac96cf/pdf
https://doi.org/10.1088/2399-6528/ac96cf
[51] Rinaldi, S., Middleton, H., Del Pozzo, W. and Gair, J. (2022) On the Determination of the Constant of Gravitation. 1-8.
https://arxiv.org/abs/2209.07416
[52] Aksentijevic, A., Mihailović, D.T., Kapor, D., Crvenković, S., Nikolić-Djorić, E. and Mihailović, A. (2020) Complementarity of Information Obtained by Kolmogorov and Aksentijevic-Gibson Complexities in the Analysis of Binary Time Series. Chaos, Solitons & Fractals, 130, Article ID: 109394.
https://doi.org/10.1016/j.chaos.2019.109394
