Bekenstein-Bound and Information-Based Approach

Boris Menin

Refrigeration Consultancy Ltd., Beer-Sheba, Israel.

**DOI:** 10.4236/jamp.2018.68143


Scientists have already undertaken experimental attempts to find a grain of space. In this article, the Bekenstein formula and the information-oriented approach are combined for the first time to theoretically calculate the smallest achievable grain length, as well as the smallest resolutions of energy and amount of information. This is possible because the information approach is based on calculating the amount of information contained in the model of a physical phenomenon. The results show very good agreement between theory and experiment, at least with respect to the length scale and the minimum energy resolution. This concept can be important for a reliable interpretation of forthcoming cosmological and quantum measurements.

Share and Cite:

Menin, B. (2018) Bekenstein-Bound and Information-Based Approach. *Journal of Applied Mathematics and Physics*, **6**, 1675-1685. doi: 10.4236/jamp.2018.68143.

1. Introduction

In the age of the Internet, Big Bang cosmology, the colonization of Mars and pervasive computerization, the concepts and methods of information theory are widely used across many areas of human activity, such as physics, chemistry, biology, physiology and technology. Of course, information theory plays a fundamental role in the modelling of various processes and phenomena. This is because modelling is an information process, wherein information about the state and behavior of an observed object is obtained from the developed model. During the modelling process, information is increased, and information entropy is reduced because of the increased knowledge about the object [1] .

In the 1980s, an elegant formula was derived that gives the upper limit (called the Bekenstein bound) on the amount of information [2] : the maximum amount of information that can be contained in a body of limited volume and that is needed to fully describe this physical system.

This means that the amount of information in a physical system must be finite if the space occupied by the object and its energy are finite. In informational terms, this bound is given by

$\Upsilon \le \left(2\cdot \text{\pi}\cdot R\cdot E\right)/\left(\hslash \cdot c\cdot \mathrm{ln}2\right),$ (1)

where Υ is the information expressed in the number of bits contained in the quantum states of the chosen object sphere. The ln2 factor comes from defining the information as the natural logarithm of the number of quantum states, R is the radius of an object sphere that can enclose the given system, E is the total mass-energy, including rest masses, ħ is the reduced Planck constant, and c is the speed of light.
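For readers who want to reproduce the numbers, Equation (1) is straightforward to evaluate numerically. The Python sketch below is illustrative only: the function name `bekenstein_bits` and the 1 kg / 1 m example body are assumptions, not values from the article.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s

def bekenstein_bits(radius_m, energy_j):
    """Upper bound on the information content of a sphere in bits, Equation (1)."""
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# Illustrative example: a 1 kg body (E = m*c^2) enclosed in a 1 m sphere
energy = 1.0 * C**2
print(f"{bekenstein_bits(1.0, energy):.3e} bits")  # on the order of 1e43 bits
```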

Almost 35 years after the publication of Bekenstein's theory, we proposed an information-oriented method [3] , according to which it is possible to calculate the amount of information inherent in a model. Using a fundamentally new concept, we introduced a metric called comparative uncertainty, which allows us to estimate the a priori discrepancy between the chosen model and the observed material object. The information-oriented approach has already been applied to the design of thermal energy storage systems, technological processes for the production of pumpable ice, climate models, spacecraft heating, and measurements of the Planck, Avogadro and Boltzmann constants.

The idea of this article is to combine the Bekenstein formula and the information-oriented method in a theoretically grounded approach for numerically calculating the lowest possible resolutions of energy, length and amount of information, without entering into theoretical debates and unproductive discussions.

Hints of graininess stem from attempts to unify the general theory of relativity, Einstein's theory of gravity, with quantum mechanics, which describes the workings of the other three forces: electromagnetism and the strong and weak nuclear interactions. The result would be a single framework, sometimes called quantum gravity, that explains all the particles and forces of the universe.

One of the main obstacles that may be standing in the way is that, although researchers treat space-time as continuous, there remain unresolved problems associated with the processes of observation and measurement. Since discrete and continuous features coexist in any natural phenomenon, depending on the scale of observation [1] , one can suppose a deeper level of reality that exhibits some kind of elementary discrete structure.

This article contains five chapters. In Chapter 1, basic explanations are given for the numerical calculation of the lowest possible resolutions of energy, length and amount of information. Chapter 2 contains the calculation of the amount of information inherent in a physical-mathematical model, and the formulation of the system of base dimensional quantities (SBQ), from which the designer selects the quantities used to describe the process under study. Such a system must satisfy a certain set of axioms that form an abelian group. This, in turn, allows the author to calculate the total number of dimensionless criteria in the existing International System of Units (SI). An exact mathematical expression is formulated for the amount of information contained in a model. Chapter 3 is devoted to combining the Bekenstein bound and the information-oriented method into a new approach to quantizing energy, length and amount of information. Chapters 4 and 5 discuss and summarize the use of the amount of information inherent in a model, together with Bekenstein's estimate, to test the values of the smallest blocks of space, energy and the information medium.

2. Methodology

Bekenstein proved [2] that a bounded finite region of space with a finite amount of energy contains the maximum finite amount of information required to perfectly describe a given physical system. In informational terms, this bound is given by Equation (1) or

$S\le \left(2\cdot \text{\pi}\cdot \kappa \cdot R\cdot E\right)/\left(\hslash \cdot c\right),$ (2)

where S is the entropy and $\kappa $ is the Boltzmann constant.

The results are purely theoretical in nature, although applications of the proposed formula may be found in medicine or biology. In fact, the very act of Bekenstein's modelling already implies the existence of a formulated physical-mathematical model describing the system under investigation. In such a model, the quantities are taken from the International System of Units (SI) [3] . SI is a set of dimensional quantities, base and derived, that are necessary and sufficient to describe the known laws of nature, both in physical content and quantitatively [4] . In turn, SI includes the base and derived quantities used for the description of different classes of phenomena (COP). For example, in mechanics SI uses the basis {length L, mass M, time T}, that is, COP_{SI} ≡ LMT. Electromagnetism adds the magnitude of electric current (I). Thermodynamics requires the inclusion of thermodynamic temperature (Θ). Photometry adds luminous intensity (J). The final base quantity of SI is the amount of substance (F).

From the analysis of the dimensions of the recorded quantities [2] , the model of the Bekenstein relation includes four base dimensional quantities of SI: length (L), mass (M), time (T) and temperature (Θ). One can therefore classify it as COP_{SI} ≡ LMTΘ, where ≡ means that this class includes the four above-mentioned base quantities.

In general, the dimension of any derived quantity (q) can only express a unique combination of the dimensions of the base quantities raised to different powers [5] :

$q\propto {L}^{l}\cdot {M}^{m}\cdot {T}^{t}\cdot {I}^{i}\cdot {\Theta}^{\theta}\cdot {J}^{j}\cdot {F}^{f},$ (3)

where $l,m,\cdots ,f$ are the exponents of the base quantities, taking only integer values; the range of each has a maximum and minimum value [6] :

$\begin{array}{l}-3\le l\le +3,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-1\le m\le +1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-4\le t\le +4,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-2\le i\le +2,\\ -4\le \theta \le +4,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-1\le j\le +1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}-1\le f\le +1.\end{array}$ (4)

So the number of choices of dimensions for each quantity ${e}_{l},{e}_{m},\cdots ,{e}_{f}$ , according to Equation (4) is the following:

${e}_{l}=7;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{e}_{m}=3;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{e}_{t}=9;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{e}_{i}=5;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{e}_{\theta}=9;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{e}_{j}=3;\text{\hspace{0.17em}}\text{\hspace{0.17em}}{e}_{f}=3.$ (5)

In this case, the number of possible dimensionless criteria µ_{SI} with ξ = 7 base quantities of SI equals [7]

$\begin{array}{c}{\mu}_{\text{SI}}=\left({e}_{l}\cdot {e}_{m}\cdot {e}_{t}\cdot {e}_{i}\cdot {e}_{\theta}\cdot {e}_{j}\cdot {e}_{f}-1\right)/2-7\\ =\left(7\times 3\times 9\times 5\times 9\times 3\times 3-1\right)/2-7=38265\end{array}$ (6)

where "−1" corresponds to the case in which all exponents of the base quantities in formula (3) have zero dimension; dividing by 2 reflects the fact that each quantity has an inverse, for example, the length (L^{1}) and the running length (L^{−1}): the object can be judged knowing only one of its symmetrical parts, while the others, structurally duplicating it, may be regarded as informationally empty; and 7 corresponds to the seven base quantities (L, M, T, I, Θ, J, F).
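Equation (6) can be checked in a few lines of Python; the dictionary of exponent counts mirrors Equations (4) and (5), and the helper name `mu_si` is an assumption chosen for the illustration:

```python
import math

# Number of admissible integer exponents per base quantity, Eqs. (4)-(5)
E_COUNTS = {"l": 7, "m": 3, "t": 9, "i": 5, "theta": 9, "j": 3, "f": 3}

def mu_si(counts, n_base):
    """Total number of dimensionless criteria, Equation (6)."""
    prod = math.prod(counts.values())
    # "-1" drops the all-zero-exponent case; "//2" merges each quantity
    # with its inverse; "- n_base" removes the base quantities themselves.
    return (prod - 1) // 2 - n_base

print(mu_si(E_COUNTS, 7))  # -> 38265
```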

It can be shown [3] that the amount of information ΔA_{e} about the observed modelled object is calculated as follows:

$\Delta {A}_{e}\le \kappa \cdot \mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]$ (7)

where ΔA_{e} is measured in units of entropy [8] , z” is the number of physical dimensional quantities recorded in the mathematical model, β” is the number of the base dimensional quantities recorded in a model.

In order to convert ΔA_{e} to bits, ΔA_{b}, one divides it by κln2 = 9.569926 × 10^{−24} kg⋅m^{2}⋅s^{−2}⋅K^{−1} [9] [10] . Then

$\Delta {A}_{b}=\mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]/\mathrm{ln}2.$ (8)
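As a worked illustration of Equation (8): the model values z″ = 5 and β″ = 3 below are hypothetical, chosen only for the example, and are not taken from the article.

```python
import math

MU_SI = 38265  # number of dimensionless criteria in SI, Eq. (6)

def delta_a_bits(z2, b2, mu=MU_SI):
    """Amount of information embedded in a model, in bits, Equation (8)."""
    return math.log2(mu / (z2 - b2))

# Hypothetical model with z'' = 5 recorded quantities, beta'' = 3 of them base
print(f"{delta_a_bits(5, 3):.2f} bits")  # about 14.22 bits
```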

At the same time, the mathematical theory of information does not capture all the richness of information content, because it abstracts away from the semantic side of the message. From the point of view of the information-based approach, a phrase of 100 words taken from a newspaper, from Shakespeare or from Einstein's theory contains about the same amount of information.

3. Applications of the μ_{SI} Hypothesis

3.1. Dose of Energy

In the case of the Bekenstein bound, the information quantity Υ contained in a sphere equals the information quantity ΔA_{b} obtained by the modelling process:

$\Upsilon =\Delta {A}_{b}$ (9)

or taking into account Equations (1) and (8)

$\left(2\text{\pi}\cdot R\cdot E\right)/\left(\hslash \cdot c\cdot \mathrm{ln}2\right)=\mathrm{ln}\left[{\mu}_{SI}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]/\mathrm{ln}2.$ (10)

So

$R\cdot E=\hslash \cdot c\cdot \mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]/2\text{\pi}=5.031726\times {10}^{-27}\cdot \mathrm{ln}\left[{\mu}_{SI}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right].$ (11)

According to the analysis of the dimensions of the recorded quantities, the Bekenstein model is classified by COP_{SI} ≡ LMTΘ.

To find z’’ - β’’, we use the definition of the comparative uncertainty [9] and its model expression ε_{LMTθ} [3] :

$\left({\epsilon}_{LMT\theta}\right)=\left[\left({z}^{\prime}-{\beta}^{\prime}\right)/{\mu}_{\text{SI}}+\left({z}^{\u2033}-{\beta}^{\u2033}\right)/\left({z}^{\prime}-{\beta}^{\prime}\right)\right],$ (12)

where ε_{LMTθ} = Δ_{pmm}/S; Δ_{pmm} is the absolute uncertainty in determining the dimensionless theoretical quantity u, “embedded” in the physical-mathematical model and caused only by its dimension; S is the dimensionless observation interval of the quantity u; z’ is the number of physical dimensional quantities in the selected COP_{SI}; and β’ is the number of base dimensional quantities in the selected COP_{SI}.

The condition for achieving the minimum comparative uncertainty ε_{LMTθ} of a model for COP_{SI} ≡ LMTΘ can be found by equating its partial derivative with respect to z’ - β’ to zero. Then we get:

$\begin{array}{c}{\left[{\epsilon}_{LMT\theta}\right]}^{\prime}{}_{{z}^{\prime}-{\beta}^{\prime}}={\left[\left({z}^{\prime}-{\beta}^{\prime}\right)/{\mu}_{SI}+\left({z}^{\u2033}-{\beta}^{\u2033}\right)/\left({z}^{\prime}-{\beta}^{\prime}\right)\right]}^{\prime}\\ =\left[1/{\mu}_{SI}-\left({z}^{\u2033}-{\beta}^{\u2033}\right)/{\left({z}^{\prime}-{\beta}^{\prime}\right)}^{2}\right]\end{array}$ (13)

$\left[1/{\mu}_{\text{SI}}-\left({z}^{\u2033}-{\beta}^{\u2033}\right)/{\left({z}^{\prime}-{\beta}^{\prime}\right)}^{2}\right]=0$ (14)

${\left({z}^{\prime}-{\beta}^{\prime}\right)}^{2}/{\mu}_{\text{SI}}=\left({z}^{\u2033}-{\beta}^{\u2033}\right)$ (15)

Taking into account Equation (5), let us calculate z’ - β’:

${z}^{\prime}-{\beta}^{\prime}=\left({\u0435}_{l}\cdot {\u0435}_{m}\cdot {\u0435}_{t}\cdot {\u0435}_{\theta}-1\right)/2-4=\left(7\times 3\times 9\times 9-1\right)/2-4=846$ (16)

where "−1" corresponds to the case in which all exponents of the base quantities in formula (3) have zero dimension; dividing by 2 reflects the fact that each quantity has an inverse, for example, the length (L^{1}) and the running length (L^{−1}): the object can be judged knowing only one of its symmetrical parts, while the others, structurally duplicating it, may be regarded as informationally empty; and 4 corresponds to the four base quantities (L, M, T, Θ).

The minimum comparative uncertainty of a model, (ε_{min})_{LMTθ}, is reached under condition (15). Then we get

$\left({z}^{\u2033}-{\beta}^{\u2033}\right)={\left({z}^{\prime}-{\beta}^{\prime}\right)}^{2}/{\mu}_{\text{SI}}={846}^{2}/38265=18.704194\approx 19$ (17)

Taking into account Equations (7), (11), (17), the achievable value of (R・E)_{min} equals

$\begin{array}{c}{\left(R\cdot E\right)}_{\mathrm{min}}=5.031726\times {10}^{-27}\cdot \mathrm{ln}\left[{\mu}_{SI}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]\\ =5.031726\times {10}^{-27}\cdot \mathrm{ln}\left[38265/18.704194\right]\\ =3.835958\times {10}^{-26}\left({\text{m}}^{3}\cdot \text{kg}\cdot {\text{s}}^{-2}\right).\end{array}$ (18)
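The chain of Equations (16)-(18) can be verified directly. This is a sketch: the variable names are assumptions, and the constants are CODATA values rounded as in the article.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s
MU_SI = 38265           # Eq. (6)

# Eq. (16): z' - beta' for the class of phenomena LMT(Theta)
z1_minus_b1 = (7 * 3 * 9 * 9 - 1) // 2 - 4
# Eq. (17): z'' - beta'' that minimizes the comparative uncertainty
z2_minus_b2 = z1_minus_b1**2 / MU_SI
# Eq. (18): minimum achievable product R*E
re_min = HBAR * C / (2 * math.pi) * math.log(MU_SI / z2_minus_b2)

print(z1_minus_b1, round(z2_minus_b2, 4), f"{re_min:.4e}")  # 846, 18.7042, ~3.836e-26
```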

(R・E)_{min} can be applied for verifying the lowest energy doze E_{min}, indicating that the universe itself cannot distinguish that energy levels lower a special limit [11] . Further, the age of universe T_{univ} is about 13.7 ± 0.13 billion years or 4.308595 × 10^{17} s [12] . Then, taking into account c = 299,792,458 m/s, a radius of universe is

${R}_{univ}={T}_{univ}\cdot c\text{}=1.291684\times {10}^{26}\left(\text{m}\right).$ (19)

So, taking into account (18) and (19), the minimum energy resolution E_{min} is the following

$\begin{array}{c}{E}_{\text{min}}=3.835958\times {10}^{-26}/1.291684\times {10}^{26}=2.969734\times {10}^{-52}\\ \approx 3\times {10}^{-52}\left({\text{m}}^{\text{2}}\cdot \text{kg}\cdot {\text{s}}^{-2}\right).\end{array}$ (20)
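Equations (19) and (20) follow in two lines (a sketch using the rounded values quoted in the article):

```python
T_UNIV = 4.308595e17   # age of the universe, s [12]
C = 299792458.0        # speed of light, m/s
RE_MIN = 3.835958e-26  # minimum R*E product from Eq. (18), m^3*kg*s^-2

r_univ = T_UNIV * C      # Eq. (19)
e_min = RE_MIN / r_univ  # Eq. (20)
print(f"R_univ = {r_univ:.6e} m, E_min = {e_min:.6e} J")  # ~1.2917e26 m, ~2.97e-52 J
```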

E_{min} is hard to imagine and, it is lower a value [11] : 10^{−50} m^{2} kg・s^{−2}. At the same time, this value (Equation (20)) is the same order on the other 10^{−45} ergs = 10^{−52} m^{2} kg・s^{−2} [13] . E_{min} can be used, along with µ_{SI}, and combining the thought experiment with field studies, for measurement of the uncertainty values of fundamental physical constants.

3.2. “Graininess” of Space

Until recently, scientists believed that the diameter of the grain of space, the minimum possible length in nature, is simply the Planck length (~1.6 × 10^{−35} m). Numerous concepts, approaches, methodologies and formulas have been proposed for identifying the boundary, or transition zone, R_min where space-time becomes granular or, in other words, the resolution limit of length in any experiment [14] .

In this connection, attention should be paid to the undeservedly forgotten fact that European scientists reported the results of the most notable attempt to date to detect the quantization of space [15] . To carry out their calculations, a group of physicists from France, Italy and Spain used data from the European space telescope Integral, namely its capture of the gamma-ray burst GRB 041219A, which occurred in 2004. According to the calculations, a grain of space, if it exists, must influence the polarization of transmitted rays, and the influence is more noticeable the more intense the radiation and the greater the distance it has travelled. GRB 041219A was among the brightest 1% of all gamma-ray bursts ever detected. In addition, the distance to the source was at least 300 million light years. It was a very fortunate case, allowing the existing predictions to be checked. It must be added that the degree of influence of the quantization of space on transmitted light also depends on the size of the grain itself, so the parameters of a distant burst could indicate this value, or at least its order of magnitude.

Scientists have already made attempts to find the grain of space by decoding the light of distant gamma-ray bursts. This observation was ten thousand times more accurate than all previous experiments of this kind. The analysis showed that if the granularity of space exists at all, it must be at a level of 10^{−48} m or less.

Following the ideas introduced in the Methodology chapter, we suppose that any measurement has a certain intrinsic length limit related to small-scale physics. We now calculate it. 't Hooft [16] introduced the holographic bound S_{HS}, expressed in terms of the entropy

${S}_{\text{HS}}\le \text{\pi}\cdot {c}^{3}\cdot {R}^{2}/\left(\hslash G\right)$ (21)

or

${\Upsilon}_{\text{HS}}\le \text{\pi}\cdot {c}^{3}\cdot {R}^{2}/\left(\hslash \cdot G\cdot \kappa \cdot \mathrm{ln}2\right)=\text{\pi}\cdot {c}^{3}\cdot {R}^{2}/\left(\hslash \cdot G\cdot 0.95699\times {10}^{-23}\right)$ (22)

where Υ_{HS} is the information quantity, expressed in bits, corresponding to S_{HS}; c is the speed of light; ħ is the reduced Planck constant, ħ = 1.054572 × 10^{−34} m² kg・s^{−1}; G is the gravitational constant, G = 6.67408 × 10^{−11} m^{3}・kg^{−1} s^{−2}; R is the radius of the object sphere expressed in meters; κln2 = 9.569926 × 10^{−24} kg⋅m^{2}⋅s^{−2}⋅K^{−1}; and π = 3.141592.

Equating ΔA_{b} in Equation (8) to Equation (22) and using the known values of physical quantities, we get

$\text{\pi}\cdot {c}^{3}\cdot {R}^{2}/\left(\hslash \cdot G\cdot 0.956993\times {10}^{-23}\right)=\mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]/\mathrm{ln}2$ (23)

$1.256712\times {10}^{93}\cdot {R}^{2}=\mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]/\mathrm{ln}2$ (24)

$R=3.388203\times {10}^{-47}{\left\{\mathrm{ln}\left[{\mu}_{\text{SI}}/\left({z}^{\u2033}-{\beta}^{\u2033}\right)\right]\right\}}^{1/2}$ (25)

Taking into account Equations (7), (17) and (25), the minimum achievable value of the length discretization or the universal, global standard of length equals

${R}_{\mathrm{min}}=3.388203\times {10}^{-47}\cdot {\left[\mathrm{ln}\left(38265/19\right)\right]}^{1/2}\approx 9\times {10}^{-47}\left(\text{m}\right)$ (26)
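The numerical route from Equation (23) to Equation (26) can be reproduced as follows (a sketch; the variable names are assumptions, and z″ − β″ is rounded up to 19 as in Equation (26)):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s
G = 6.67408e-11         # gravitational constant, m^3*kg^-1*s^-2
K_LN2 = 9.569926e-24    # Boltzmann constant times ln 2, J/K
MU_SI = 38265
Z2_MINUS_B2 = 19        # optimal z'' - beta'' from Eq. (17), rounded up

# Eqs. (23)-(24): coefficient of R^2 in the holographic bound, ~1.2567e93 m^-2
coeff = math.pi * C**3 / (HBAR * G * K_LN2)
# Eqs. (25)-(26): solve for the minimum length resolution
r_min = math.sqrt(math.log2(MU_SI / Z2_MINUS_B2) / coeff)
print(f"R_min = {r_min:.3e} m")  # ~9.3e-47 m
```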

This R_{min} = 9 × 10^{−47} is in excellent agreement with result of Laurent et al. [15] . It could be suggested that this metric of space is only a pure mathematical concept that measures a “degree of distinguish ability”. In addition, maybe, the minimal length scale is not necessarily the Planck length. The scale of distance, just like the duration of time, turns out to be a property not of the world but of the models we use to describe it [17] . With the help of these calculations, it is possible to identify a boundary or a transition zone, where space-time becomes nonlocal granular and physical.

3.3. “Grain” of Information

Taking into account Equations (20) and (26), let us calculate a possible minimum achievable amount of information Υ_{q}, in other words an information quantum bit, or “qubit” [18] , which can be viewed as the basic building block of quantum information systems [19] :

${\Upsilon}_{q}\le \left(2\cdot \text{\pi}\cdot {R}_{\mathrm{min}}\cdot {E}_{\mathrm{min}}\right)/\left(\hslash \cdot c\cdot \mathrm{ln}2\right)=0.79411\times {10}^{-71}\left(\text{bit}\right)$ (27)
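Equation (27) is the Bekenstein bound evaluated at the smallest R and E found above. In this sketch, R_min is kept at the unrounded value ~9.345 × 10^{−47} m (an assumption), so the result reproduces the article's 0.794 × 10^{−71} bit only approximately.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s
R_MIN = 9.345e-47       # minimum length resolution, m (Eq. (26), unrounded)
E_MIN = 2.969734e-52    # minimum energy resolution, J (Eq. (20))

# Eq. (27): smallest amount of information, in bits
upsilon_q = 2 * math.pi * R_MIN * E_MIN / (HBAR * C * math.log(2))
print(f"{upsilon_q:.3e} bits")  # on the order of 1e-71 bits
```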

To the strictly posed question “does information exist by itself?” a completely reasonable answer would be “yes and no”. “Yes”, because we cannot deny the availability of information, its storage, transfer, processing, and so on, which we encounter every day, even in our daily lives. We know that information is of great importance and can significantly affect the course of events. Information exists independently of people’s consciousness [20] . On the other hand, the answer “no” also has a rational grain. Is it possible to “touch” this notorious information? Most likely, information exists objectively, but is not material in itself.

If information can be stored in the position of a smallest particle, the activation energy for its motion can be made extremely low [21] . If information, like some substance, is granular beyond a certain scale, it means that there is a “base scale”, a fundamental unit that cannot be broken down into anything smaller. This hypothesis so far contradicts the generally accepted opinion of the scientific community.

It was noted [22] that information is a quantity that is both discrete and continuous, and that time and other physical phenomena might be reconceived as simultaneously discrete and continuous within an information-theoretic formulation. Perhaps Equation (27) will spur researchers to a further understanding of the concept of information. In addition, this value may find use in the definition of qubits for quantum computation.

3.4. Information Embedded in Photon

The radius of a particle is determined by the region in which it can produce some effect. According to Liu et al. [23] , the radius of a single photon r_{p} in the energy region of E_{p} = 2.1 GeV equals 2.8 × 10^{−15} m. In this case, taking into account Equation (1), the amount of information contained in one photon is

$\Upsilon \le \left(2\cdot \text{\pi}\cdot {r}_{p}\cdot {E}_{p}\right)/\left(\hslash \cdot c\cdot \mathrm{ln}2\right)=270\left(\text{bit}\right).$ (28)
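Equation (28) can be reproduced numerically (a sketch using the photon radius and energy quoted from Liu et al. [23]):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s
EV = 1.602177e-19       # joules per electron-volt

r_p = 2.8e-15           # single-photon radius, m [23]
e_p = 2.1e9 * EV        # 2.1 GeV converted to joules

# Eq. (28): information contained in one photon, in bits
bits = 2 * math.pi * r_p * e_p / (HBAR * C * math.log(2))
print(round(bits))  # -> 270
```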

Because many advanced algorithms require thousands of qubits to begin with, the total necessary capacity of a useful quantum machine, including the qubits involved in error correction, could easily run into the millions. Taking into account Equation (28) and the photon dimensions, the reader can easily estimate the possible required capacity of a future quantum machine.

In fact, the author does not offer anything definitive here. First, these are all assumptions; second, they are highly speculative and should not be taken on trust. However, if you are still reading and you like this unorthodox application of information theory, then the above data may stimulate your imagination.

4. Discussion

We have theoretically calculated, for the first time, the information-based scale of a possible granule of length. Our results are in excellent agreement with the data of Laurent et al. [15] . Obtaining this result by a completely different theoretical approach confirms its generality and, in particular, its applicability to practical problems in theoretical and experimental physics.

Although the values of the smallest block of energy, the information quantum bit, and the amount of information contained in a photon are open to doubt, our results demonstrate the potential of the information-based approach. The significance of this result is that today’s experimental test systems are far from the fundamental limit and that future improvement is possible, taking into account the class of phenomena and the number of quantities chosen in a model. Given that achievable accuracy is the key issue limiting in-depth understanding of the world around us, the result has profound implications for future modifications of existing physical theories.

Perhaps some readers of this article will consider the four examples presented a game of numbers. In defense, the author reminds them of the attempts of Heisenberg and many other scientists to find the “firstborn” building blocks of the universe. The calculated results are just routine calculations from formulas known in the scientific literature. The author does not set himself the task of interpreting the presented data within, for example, quantum electrodynamics or the theory of gravity, because only experts in these areas can “separate the wheat from the chaff”. However, if the Bekenstein bound and the µ_{SI} hypothesis have a physical explanation, then perhaps the discrete resolutions of energy, length and information can be used to study the universe.

We believe that somewhat similar phenomena remain to be found.

5. Conclusions

The information-oriented approach realizes two possibilities. First, it dictates the necessary number of quantities to be taken in order to achieve the best approximation of a model to the measured object. Second, it allows scientists and engineers to develop a perfect model based on their experience, knowledge and intuition in the development of a specific physical and mathematical model of the phenomenon being studied.

By combining Bekenstein’s bound and the µ_{SI} hypothesis, an attempt is made to quantize energy, length and the amount of information as a tool for building pictures and models of the world. On the other hand, like the uncertainty principle, the µ_{SI} hypothesis may be a fundamental limitation on our ability to cognize and predict the universe.

The obtained results indicate that, in all likelihood, the centuries-old philosophical opposition between the concepts of discreteness and continuity, when transferred to the ground of physical reality, is naturally resolved in favor of the discreteness of physical space. This does not mean that the concept of continuity cannot be used as a working concept in the theoretical constructions of physics and related sciences; only, in this case, it is necessary to take into account the fundamental limitations imposed by the µ_{SI} hypothesis on the concept of continuity.

Researchers can radically accelerate the design and delivery of new models to industry and science. Using the µ_{SI} hypothesis, development teams can orchestrate and optimize activities for modelling physical phenomena and engineering systems.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


[1] Lesne, A. (2007) Discrete vs Continuous Controversy in Physics. Mathematical Structures in Computer Science, 17, 185-223. https://doi.org/10.1017/S0960129507005944

[2] Bekenstein, J.D. (1981) A Universal Upper Bound on the Entropy to Energy Ratio for Bounded Systems. Physical Review D, 23, 287-298. https://doi.org/10.1103/PhysRevD.23.287

[3] Menin, B. (2017) Information Measure Approach for Calculating Model Uncertainty of Physical Phenomena. American Journal of Computational and Applied Mathematics, 7, 11-24. https://goo.gl/m3ukQi

[4] Sonin, A.A. (2001) The Physical Basis of Dimensional Analysis. 2nd Edition, Department of Mechanical Engineering, MIT, Cambridge. http://web.mit.edu/2.25/www/pdf/DA_unified.pdf

[5] Sedov, L.I. (1993) Similarity and Dimensional Methods in Mechanics. 10th Edition, CRC Press, Boca Raton. https://goo.gl/o7BeZL

[6] NIST Special Publication 330 (SP330) (2008) The International System of Units (SI). https://www.nist.gov/sites/default/files/documents/2016/12/07/sp330.pdf

[7] Menin, B.M. (2014) Comparative Error of the Phenomena Model. International Referred Journal of Engineering and Science, 3, 68-76. http://goo.gl/DwgYXY

[8] Menin, B.M. (2015) Possible Limits of Accuracy in Measurement of Fundamental Physical Constants. International Referred Journal of Engineering and Science, 4, 8-14. http://goo.gl/HjYBOs

[9] Brillouin, L. (2004) Science and Information Theory. Dover, New York.

[10] Volkenstein, M.V. (2009) Entropy and Information. Birkhäuser Verlag AG, Basel-Boston-Berlin. https://goo.gl/eHhRvd

[11] Schmitt, F.-J. (2009) The Lower Bound on the Energy for Bounded Systems Is Equivalent to the Bekenstein Upper Bound on the Entropy to Energy Ratio for Bounded Systems. Berlin Institute of Technology, Berlin, 1-4. https://arxiv.org/ftp/arxiv/papers/0901/0901.3686.pdf

[12] WMAP Science Team (2011) Cosmology: The Study of the Universe. NASA’s Wilkinson Microwave Anisotropy Probe. https://map.gsfc.nasa.gov/universe/WMAP_Universe.pdf

[13] Alfonso-Faus, A. (2013) Fundamental Principle of Information-to-Energy Conversion, 1-4. https://arxiv.org/ftp/arxiv/papers/1401/1401.6052.pdf

[14] Garay, L.J. (1995) Quantum Gravity and Minimum Length. International Journal of Modern Physics A, 10, 1-23. https://doi.org/10.1142/S0217751X95000085

[15] Laurent, P., et al. (2011) Constraints on Lorentz Invariance Violation Using Integral/IBIS Observations of GRB041219A. Physical Review D, 83, 121301(R). https://doi.org/10.1103/PhysRevD.83.121301

[16] ’t Hooft, G. (1993) Dimensional Reduction in Quantum Gravity. arXiv:gr-qc/9310026.

[17] Caticha, A. (2015) Geometry from Information Geometry. 35th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Potsdam, 1-11. arXiv:1512.09076v1

[18] Bais, F.A. and Farmer, J.D. (2007) The Physics of Information, 1-65. arXiv:0708.2837v2 [physics.class-ph].

[19] Braunstein, S.L. and van Loock, P. (2004) Quantum Information with Continuous Variables. http://arxiv.org/pdf/quant-ph/0410100.pdf

[20] Burgin, M. (2010) Theory of Information: Fundamentality, Diversity and Unification. World Scientific Publishing, Singapore.

[21] Landauer, R. (1961) Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development, 5, 183-191. https://goo.gl/LYQEU8

[22] Kempf, A. (2010) Space-Time Could Be Simultaneously Continuous and Discrete, in the Same Way That Information Can Be. New Journal of Physics, 12, Article ID: 115001. https://doi.org/10.1088/1367-2630/12/11/115001

[23] Liu, S.-L. (2017) Electromagnetic Fields, Size, and Copy of a Single Photon, 1-4. https://arxiv.org/ftp/arxiv/papers/1604/1604.03869.pdf


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.