Precision Limit for Observation: The Bridge for Quantum Classical Transitions

Abstract

Quantum mechanics (QM) is an extremely successful theory; however, there is still no consensus regarding its interpretation. Among the controversies, the quantum-classical transition is one of the outstanding questions. In this paper, starting from measurement theory, we discuss the role that the precision limit for observation plays in QM and attempt to elucidate the relationship between the precision limit and some of the unique characteristics of QM. By reviewing Bohmian mechanics, one of the nonlocal hidden variable theories, we discuss the possibility of restoring determinism in QM. We conclude that it is the existence of the precision limit that makes it impossible to restore determinism in QM, and that it is the root of the difference between QM and classical physics. Finally, the boundary between the so-called classical and quantum worlds is discussed. We hope these philosophical arguments can provide a kind of epistemic understanding of QM.

Share and Cite:

Li, S., Liang, C. and Jin, M. (2025) Precision Limit for Observation: The Bridge for Quantum Classical Transitions. Journal of Quantum Information Science, 15, 59-74. doi: 10.4236/jqis.2025.151004.

1. Introduction

Since the birth of quantum mechanics (QM), it has shown an amazing ability in the qualitative explanation of general problems and the quantitative calculation of specific problems, bringing about drastic changes in physics. Unlike classical physics, which was mainly established through the generalization and summarization of experimental phenomena and experience, the formalism of QM was completed prior to its interpretation. At the beginning of the establishment of QM, some non-logical terminologies, such as wave-function and operator, were introduced. These non-logical terminologies have no empirical meaning except that they carry physical content within the formalism. To endow this formalism with physical meaning by transforming it into an empirically stated hypothetico-deductive system, it is necessary to associate some of the non-logical terminologies, or some formulas containing them, with observable phenomena and empirical operations. Physicists at that time made painstaking efforts to find explanatory principles with which to interpret QM.

The Copenhagen interpretation eventually came to be regarded as the orthodox interpretation of QM. However, even the Copenhagen interpretation failed to reach agreement on some fundamental issues. For example, Heisenberg insisted that the uncertainty principle was an independent principle, while Bohr attempted to incorporate it into his complementarity principle. Although most of the principles of the Copenhagen interpretation are accepted by mainstream scientists, the probabilistic interpretation of the wave-function [1] is the most controversial. Einstein adhered to the statistical ensemble interpretation of quantum theory and refused to accept quantum theory as a complete description of physical reality [2], leading to decades of controversy between him and Bohr regarding the interpretation of quantum theory. Finally, in 1935, he proposed a thought experiment with Podolsky and Rosen (known as the EPR paradox), questioning the completeness of QM [3]. From then on, physicists largely adhered to the Copenhagen interpretation, and the controversies were regarded as a strictly philosophical quarrel by most physicists. Nearly thirty years later, Bell proposed an inequality to test the locality of QM [4], transforming the question about the completeness of QM raised by the EPR thought experiment into one about non-locality. Bell's inequality translates the intangible philosophical ideas involved in the EPR thought experiment into concrete quantitative mathematical descriptions, providing access to experimental tests. Bell's inequality does not judge the completeness of QM; it only illustrates whether QM or a local hidden variable theory should be chosen. Subsequent experimental tests of Bell's inequality and its improved forms have shown that these inequalities are violated [5]-[9], indicating that no local hidden variable theory can reproduce all quantum mechanical predictions. Though the local hidden variable theories have been ruled out, the non-local hidden variable theories have not been falsified so far. Leggett proposed a class of non-local realistic models and gave a new inequality [10], pointing out that QM can violate this inequality, which indicates that such non-local realistic models cannot fully describe QM. Whether the Leggett inequality is violated, and whether it can judge the correctness of QM and non-local hidden variable theories, are also widely debated [11]-[15].

QM is based on an explicit, rigorous, and solid mathematical formalism [16]-[18] and has greatly promoted modern scientific and technological development [19] [20]. However, there is still no consensus regarding its interpretation, because it seems weird and counterintuitive. In particular, the quantum-classical transition is one of the outstanding questions. In effect, most controversies and debates about the interpretation of QM concern the quantum-classical transition, including the Copenhagen interpretation, the EPR thought experiment and the hidden variable theories. After all, most of the laws of the physical world that we witness every day are based on classical physics, which is an idealization of our observations and provides a description of physical reality consistent with our common sense. There is no need to consider the uncertainty principle to discuss the observed phenomena. When it comes to microscopic objects, only probabilistic predictions for measurement outcomes can be provided by QM. In effect, QM and classical physics are essentially unified, apart from the fact that a higher precision is involved in QM. For example, QM was viewed as a generalization of classical physics by Bohr [21].

By reviewing the thoughts, discussions, and debates during the establishment and development of QM, we attempt to find the root that accounts for the difference between QM and classical physics. Inspired by the work concerning hidden variable theories [4]-[15] [17] [22] [23], we seem to have found the answer: the existence of a precision limit for observation. It has a clear classical picture and, at the same time, gives rise to the unique characteristics of QM. In this paper, from a philosophical point of view, we try to elucidate the relationship between the precision limit and superposition, quantization, the identity principle, the probabilistic interpretation, and the uncertainty principle in QM. By reviewing Bohmian mechanics, we discuss the possibility of restoring determinism in QM. In the end, we discuss the boundary between the so-called classical and quantum worlds.

2. Role of the Precision Limit for Observation in Physics

Physics is the science of measurement, as was pointed out by Campbell [24]. Measurement plays an extremely important role in both classical physics and QM. Here, we mainly focus on measurement in QM. As is known, QM describes the interaction between microscopic objects, and it can merely predict probabilities for measurement outcomes. Therefore, measurement theory influences quantum theory at the level of its underlying logic. Though von Neumann established the framework of measurement theory [17], measurement is still the most problematic and controversial part of quantum theory. The establishment of QM, especially matrix mechanics, depends on observable quantities, which are obtained by measuring apparatuses. In a measurement, the measuring apparatus interacts with the measured object, exchanging energy quanta and thereby changing the states of the measured object and the measuring apparatus; information about the measured object is thus obtained through the change of its state. Precision measurement is based on this principle: when microscopic systems such as electrons, photons, phonons, atoms and molecules are coupled with external factors such as electromagnetic fields, temperature and pressure, their states will be changed, and information about these external factors can be deduced from the changes in the microscopic systems' states [25] [26].

Even the most sophisticated measuring apparatuses require the use of media, such as light and electricity, to fulfill a measurement. Each measurement necessarily brings some disturbance to the observed objects. Here, we take measurement conducted by photons as an example to argue the role of measurement in physics, and attempt to find the root that accounts for the difference between QM and classical physics based on these argumentations. For macroscopic objects, the energy and momentum of photons are too small to cause a distinguishable change in the macroscopic objects' states, i.e., the disturbance brought by photons during measurement can be neglected; for microscopic objects, taking the electron as an example, the momentum and energy of the photons are close to those of the electrons, and the measurement inevitably brings a non-negligible disturbance of their states. To obtain the determinate location and momentum of the electrons, particles with size and momentum much smaller than those of the electron should be adopted; only by this means would the measurement conducted by such particles not change the states of the electrons. However, the cost is λ ≪ h/p, resulting in ΔrΔp ≪ ℏ/2 (where ℏ = h/2π is the reduced Planck constant). As a result, the uncertainty principle breaks down and QM no longer works. In other words, a deterministic description of a microscopic object can be made only if the relation λ = h/p is not valid. Obviously, such particles do not exist. If they did exist, a sub-QM theory would need to be developed. We can say there exists a precision limit for observation, just as Dirac pointed out: “we have to assume that there is a limit to the fineness of our powers of observation and the smallness of the accompanying disturbance—a limit which is inherent in the nature of things and can never be surpassed by improved technique or increased skill on the part of the observer.” [18]. This precision limit is determined not by the measuring apparatus, but by the action quantum h behind the objective physical law. Since h is the smallest unit of discrete variation, no measuring apparatus can overstep the precision limit.
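As a rough numerical illustration of this scale argument (the values below are illustrative assumptions, not taken from the text): a photon whose wavelength is short enough to resolve atomic-scale positions carries a momentum comparable to that of an electron bound in an atom, but negligible compared with that of even a gram-scale object.

```python
# Orders of magnitude behind the disturbance argument (illustrative values only).
h = 6.626e-34            # Planck constant, J*s
m_e = 9.109e-31          # electron mass, kg

p_photon = h / 1e-10             # photon with lambda = 0.1 nm, short enough to resolve atomic scales
p_electron = m_e * 2.2e6         # electron at ~2.2e6 m/s, a typical speed in an atom
p_grain = 1e-3 * 1.0             # a 1 g object moving at 1 m/s

print(f"photon   : {p_photon:.2e} kg m/s")    # ~6.6e-24, comparable to the electron's momentum
print(f"electron : {p_electron:.2e} kg m/s")  # ~2.0e-24
print(f"1 g mass : {p_grain:.2e} kg m/s")     # ~1.0e-03, about 20 orders of magnitude larger
```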

Due to the existence of the precision limit for observation, it is impossible to accurately obtain all details about the microscopic objects at the same time. This can be viewed as another expression of the uncertainty principle; that is, the uncertainty principle is inherent in the existence of a precision limit. As a result, to describe microscopic objects' states, we have to consider all possible cases and assign a probability to each case, describing their states in terms of wave-functions ψ(r,t):

ψ(r,t) = c_1 φ_1 + c_2 φ_2 + ... + c_n φ_n, (1)

where φ_i denotes the state of the i-th case, called a stationary state, and p_i = |c_i|² is the probability of the i-th case, satisfying ∑_{i=1}^{n} |c_i|² = 1. The wave-function in Equation (1) is a linear superposition of all possible stationary states. Superposition is a unique characteristic, essentially different from classical physics. For a superposition state, it is impossible to know with certainty what state the microscopic object is in, and only probabilities can be given. By this means, a description of physical phenomena that is as complete as possible can be provided. For example, it is impossible to predict the specific time at which a single radioactive atom decays, and its half-life can be obtained only by recording and counting the decay processes of a large number of radioactive atoms. As a matter of fact, the use of statistical methods for microscopic objects' states is a compromise for not knowing all the details, since it is impossible to obtain any information about the objects before measurement. What's more, there is fluctuation (also known as uncertainty or error) in these statistics, naturally leading to the uncertainty principle, which has an explicit mathematical proof [27].
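The probabilistic reading of Equation (1) can be made concrete with a short numerical sketch (the amplitudes below are arbitrary illustrative values): the coefficients are normalized so that the probabilities sum to one, and repeated simulated measurements reproduce p_i = |c_i|² only as statistical frequencies.

```python
import numpy as np

# A superposition c_1*phi_1 + ... + c_n*phi_n as in Equation (1): a measurement
# yields outcome i with probability p_i = |c_i|^2 (Born rule).
rng = np.random.default_rng(seed=0)

c = np.array([1.0, 1.0j, -0.5])              # arbitrary complex amplitudes (illustrative)
c = c / np.linalg.norm(c)                    # enforce sum |c_i|^2 = 1
p = np.abs(c) ** 2                           # probabilities of the stationary states

outcomes = rng.choice(len(c), size=100_000, p=p)          # many identically prepared systems
freq = np.bincount(outcomes, minlength=len(c)) / outcomes.size

print("p_i      :", np.round(p, 4))          # exact probabilities fixed by the wave-function
print("measured :", np.round(freq, 4))       # empirical frequencies approach p_i only statistically
```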

Microscopic objects whose states can be described by the same wave-function are viewed as identical particles. These particles possess identical intrinsic physical properties, such as rest mass, charge, spin, etc. The microscopic system's state remains unchanged when the states of any two particles are exchanged (such an exchange only makes the wave-function symmetric or anti-symmetric). For this reason, the identical particles' states are essentially indistinguishable. Identity is another unique characteristic, essentially different from classical physics. In classical physics, we never discuss how to distinguish two objects, for there is always a way to distinguish them, such as by size, shape, color, and so on. Though no measuring apparatus can distinguish the states of identical particles, it can count the number of identical particles and provide a statistical distribution. The conclusion that the microscopic objects' states are quantized can be reached simply through the statistical-physics derivation of the mathematical expressions of Bose-Einstein or Fermi-Dirac statistics. Planck concluded that energy is quantized through the statistical-physics derivation of his formula for the energy distribution law of the black-body radiation spectrum [28] [29]. As became known later, the light quanta involved in black-body radiation are bosons, which are bound to follow Bose-Einstein statistics.
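For reference, the standard textbook expressions behind this argument are the symmetric and anti-symmetric two-particle combinations, and the Bose-Einstein and Fermi-Dirac mean occupation numbers that follow from counting indistinguishable particles over discrete states:

```latex
% Exchanging two identical particles can at most change the sign of the wave-function:
\psi_{\pm}(r_1, r_2) = \tfrac{1}{\sqrt{2}}
  \left[ \varphi_a(r_1)\,\varphi_b(r_2) \pm \varphi_a(r_2)\,\varphi_b(r_1) \right]

% Counting indistinguishable particles over discrete states gives the two quantum statistics:
\bar{n}_{\mathrm{BE}}(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1},
\qquad
\bar{n}_{\mathrm{FD}}(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}
```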

The wave-function ψ(r,t) takes into account all possible cases of the microscopic system's state, and each case is assigned a determinate probability, as shown in Equation (1). Before measurement, we know nothing with certainty about the microscopic objects' state. Measurement is a process of changing the microscopic objects' states from indistinguishable to distinguishable. For example, if we start with two cats which are exactly the same (indistinguishable), and one of them dies through coupling to a radioactive atom, then the two cats' states become distinguishable. This example easily reminds us of the Schrödinger's cat thought experiment [30]. In some cases, the number of possible cases of the microscopic objects' states can be reduced to one, which is usually called a single state. For microscopic objects in the same single state, the quantum numbers are the same, making them a special kind of identical particles. There is no discernible difference between these particles, and even if a difference exists, it is below the precision limit and cannot be discerned by any means. If we conduct the measurement, only one outcome can be obtained, and thus a deterministic description can be provided.

Two stationary states are either exactly the same or completely different, with no continuous transition between them, resulting in the quantization of the microscopic objects' states. Therefore, quantization is related to two or more microscopic states with distinguishable differences. The physical properties of the microscopic objects are quantized; the energy, momentum, angular momentum, etc. are integer or half-integer multiples of the basic quantum, and thus the measurement outcomes are necessarily discrete. For example, for hydrogen atoms, the energy levels are discrete due to the quantization of orbital angular momentum; for a one-dimensional infinite square well potential, only the waves forming stable standing waves can exist stably. Transitions between two different stationary states may cause observable changes (limited by the transition selection rules), which are known as observable quantities. When a transition occurs between two stationary states, the transition probability and frequency correspond to the relative intensity and frequency of the observable radiation, respectively. In matrix mechanics, the transition matrix elements are given directly, corresponding to the observable quantities. Although the wave-function in wave mechanics is not an observable quantity, the difference between the initial state and the final state is.
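A minimal numerical sketch of this discreteness, using the two textbook cases just mentioned (the 1 nm well width is an illustrative assumption):

```python
import numpy as np

# Discrete spectra of two standard systems (SI constants, illustrative parameters).
hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_e  = 9.1093837015e-31     # electron mass, kg
eV   = 1.602176634e-19      # J per eV

n = np.arange(1, 5)

# Electron in a 1 nm infinite square well: E_n = n^2 * pi^2 * hbar^2 / (2 m L^2)
L = 1e-9
E_box = n**2 * np.pi**2 * hbar**2 / (2 * m_e * L**2) / eV

# Hydrogen atom (Bohr formula): E_n = -13.6 eV / n^2
E_H = -13.6 / n**2

print("square well (eV):", np.round(E_box, 3))   # ~0.376, 1.504, 3.385, 6.017
print("hydrogen    (eV):", np.round(E_H, 3))     # -13.6, -3.4, -1.511, -0.85
```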

It seems that all the problems of QM stem from the precision limit for observation. Figure 1 shows an overview of the relationship between the precision limit and the uncertainty principle, superposition, the probabilistic interpretation, the identity principle and quantization in QM. The physical and philosophical arguments in this paper are based on this sketch.

Figure 1. Schematic depiction of the logical implications in this paper.

3. Probabilistic Determinism

For any given wave-function ψ(r,t), its evolution can be accurately described by the Schrödinger equation, provided that there is a determinate Hamiltonian Ĥ:

iℏ ∂ψ/∂t = −(ℏ²/2m) ∇²ψ + Vψ = Ĥψ. (2)

This is somewhat similar to determinism in classical physics, where the evolution of an object's state depends on the initial state and the interaction process. There are some differences: in general, classical physics predicts a determinate value at a given point, while QM predicts the probability within a certain interval; however, that probability is deterministic. The determinism in QM is probabilistic, which can be called “probabilistic determinism”.
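The deterministic character of Equation (2) can be illustrated with a minimal numerical sketch (units with ℏ = m = 1 and a harmonic potential are assumed purely for simplicity): given ψ(x, 0) and Ĥ, the probability density at any later time is fixed, and the norm is preserved by the unitary evolution.

```python
import numpy as np

# Split-operator evolution of a Gaussian wave packet (hbar = m = 1, harmonic potential).
# Equation (2) fixes |psi(x, t)|^2 deterministically once psi(x, 0) and H are given.
N, L_box, dt, steps = 1024, 40.0, 0.005, 400
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * x**2                                        # illustrative choice of potential
psi = np.exp(-(x - 3.0) ** 2) * (2 / np.pi) ** 0.25   # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)         # normalize: sum |psi|^2 dx = 1

expV = np.exp(-0.5j * V * dt)                         # half-step potential propagator
expK = np.exp(-0.5j * k**2 * dt)                      # full-step kinetic propagator

for _ in range(steps):                                # unitary, fully deterministic evolution
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))

prob = np.abs(psi) ** 2                               # probability density at t = steps * dt = 2
print("norm preserved:", np.round(np.sum(prob) * dx, 6))
print("<x>(t=2) =", np.round(np.sum(x * prob) * dx, 3))   # ~3*cos(2) for this harmonic choice
```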

Figure 2. A single photon passing through beam splitter (50:50).

The Schrödinger equation is linear, making the evolution of the wave-function unitary. For a superposition state, after evolution, it can still only be described probabilistically, i.e., the indeterminacy is carried from the initial state to the final state. However, it makes a difference for a single state. Let us review a simple experiment, i.e., a single photon passing through a 50:50 beam splitter (BS), as shown in Figure 2(a). There is a probability of 1/2 that the photon is reflected (detected by detector D1), and a probability of 1/2 that the photon is transmitted through the BS (detected by detector D2). The quantum-mechanical depiction of this process is shown in Figure 2(b), and the evolution of the photon's state is given by [31]:

|0⟩_a |1⟩_b →_{BS} (1/√2)( |0⟩_c |1⟩_d + i |1⟩_c |0⟩_d ). (3)

It can be seen from Equation (3) that the initial state |0⟩_a |1⟩_b is determinate, while the final state becomes a superposition of |0⟩_c |1⟩_d and |1⟩_c |0⟩_d. A determinate initial state leads to indeterminate results, indicating that classical determinism and causality fail in this case. Obviously, this uncertainty comes from the measurement process. The interaction within the microsystem implies randomness. Just as Born pointed out, “These probabilities are thus dynamically determined. But what the system actually does is not determined, …” [32]. Therefore, both the initial states and the interactions contribute to quantum indeterminacy.
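A minimal sketch of Equation (3) in code (assuming, as in Figure 2, that output mode c feeds detector D1 and mode d feeds detector D2): the two click probabilities are fixed by the wave-function, while which detector fires in an individual run is irreducibly random.

```python
import numpy as np

# Single-photon subspace after the 50:50 BS, Eq. (3):
# basis order [ |1>_c|0>_d (reflected -> D1),  |0>_c|1>_d (transmitted -> D2) ].
rng = np.random.default_rng(seed=1)

psi_out = np.array([1j, 1.0]) / np.sqrt(2)     # amplitudes i/sqrt(2) and 1/sqrt(2) from Eq. (3)
p_click = np.abs(psi_out) ** 2                 # deterministic probabilities: [0.5, 0.5]

clicks = rng.choice(["D1 (reflected)", "D2 (transmitted)"], size=10, p=p_click)
print("P(D1), P(D2) =", p_click)               # fixed by the wave-function
print(list(clicks))                            # the individual outcomes are random
```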

Perhaps someone would argue that many similar phenomena exist in classical processes, such as flipping a coin. In effect, the coin-flipping experiment is a pseudorandom process. In classical physics, if we know the present precisely, we can calculate the future; even such a complicated phenomenon can be given a deterministic result. The initial spatial state, linear velocity and angular velocity of the coin can be determined; then, by adopting the laws of classical mechanics and taking into account the air resistance caused by the revolving coin, with the aid of a computer, we can reproduce the flight of the coin and deduce which side will face up after it lands on the desk. In contrast, the above single-photon experiment is a truly random process, and we have no way of knowing for certain whether a photon is transmitted or reflected.
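A simplified sketch of such a deterministic reconstruction (the initial data are illustrative assumptions, and air resistance and bouncing, which the text mentions, are neglected here for brevity): given the exact release height, vertical speed and spin rate, the face-up side is computable.

```python
import numpy as np

# Idealized coin toss: free flight under gravity with constant spin, no air drag, no bouncing.
g = 9.81                      # m/s^2

def coin_outcome(h0, v0, omega, theta0=0.0):
    """Side facing up when the coin's centre returns to the table height (heads up at release)."""
    t_flight = (v0 + np.sqrt(v0**2 + 2 * g * h0)) / g     # time until the centre reaches the table
    theta = (theta0 + omega * t_flight) % (2 * np.pi)     # total rotation angle at landing
    return "heads" if np.cos(theta) > 0 else "tails"      # which face points upward

print(coin_outcome(h0=0.3, v0=2.0, omega=40 * np.pi))        # same inputs -> same answer, always
print(coin_outcome(h0=0.3, v0=2.0, omega=40 * np.pi + 0.5))  # a small change in spin can flip it
```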

From a classical point of view, for a given BS, its interaction with each single photon is identical, and thus the measurement outcomes should be deterministic. It seems that the single-photon state contains hidden information. The only possibility is the phase information ϕ, for the initial phase of a single-photon state can differ. Though the phase contains space-time information, it cannot be extracted by any means. The space-time distribution of the probability density p(r,t) equals the squared modulus of the wave-function, i.e., p(r,t) = |ψ(r,t)|², while phases do not affect the statistical distribution of particles and are thereby generally considered physically meaningless. It seems that the indiscernibility of the phase can be traced back to the existence of a precision limit for observation (though we have no rigorous proof at present), suggesting that the wave-function contains some unknown information. Hidden variable theories are based on this consideration.
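A short numerical check of the global-phase case of this statement (the wave packet below is an arbitrary illustrative choice): multiplying ψ by an overall phase factor leaves |ψ|², and hence every statistical prediction, unchanged.

```python
import numpy as np

# |exp(i*phi) * psi|^2 = |psi|^2 for any overall phase phi.
x = np.linspace(-5, 5, 1001)
psi = np.exp(-x**2) * np.exp(2j * x)           # some wave packet (illustrative)

for phi in (0.0, 0.7, np.pi):                  # the same state carrying different global phases
    psi_phase = np.exp(1j * phi) * psi
    print(phi, np.allclose(np.abs(psi_phase) ** 2, np.abs(psi) ** 2))   # True for every phi
```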

4. Nonlocal Hidden Variable Theories—Hope to Restore Determinism?

If all the details of the microscopic objects at a certain moment were known, then their future development could be deterministically described. The precision limit prevents us from obtaining all the details of the microscopic objects at the same time, and meanwhile it hides the yet-to-be-discovered underlying physics. In other words, the wave-function alone provides an incomplete description of the microscopic system. The non-local hidden variable theories, especially Bohmian mechanics (BM), were proposed to solve this problem. BM is a causal interpretation of QM, initially conceived by de Broglie in 1927 [22] and refined by Bohm in 1952 [23]; it is also called the pilot-wave theory or the de Broglie-Bohm interpretation.

BM adopts the same formalism as standard QM and restates the wave-function ψ(r,t) in exponential form [23]:

ψ = R exp(iS/ℏ), (4)

where p(r) = |R(r)|² is the probability density, consistent with Born's probabilistic interpretation, and ∇S(r)/m is equal to the velocity vector v(r) of any particle passing through the point r. By a series of derivations, Bohm obtained the following equation:

m d²r/dt² = −∇( V + Q ). (5)

It is also called the Bohm-Newton equation, where V is the classical potential energy and Q = −(ℏ²/2m)(∇²R/R) is the quantum potential.

In BM, the wave-function is supplemented by an ensemble of Bohmian particles (BPs), i.e., the initial distribution of Bohmian particles is constructed from the wave-function, which endows the wave-function with a complementary microstructure. Then, by solving the Bohm-Newton equation and thus describing the evolution of each BP by its corresponding Bohmian trajectory, a deterministic description of the BPs is realized. Bohm's approach integrates this complementary microstructure into the formalism of standard QM, yielding a deterministic account of the quantum system's evolution. In this way, the archetype of classical mechanics is partially restored, which seems to bring hope of restoring determinism and causality in QM [33]. It has been shown that BM can fully reproduce the predictions of QM. For example, it successfully reproduced the graphic depiction of the well-known double-slit experiment [34]. Usually, BM serves as a comprehensive alternative for interpreting QM. We have applied BM to study the ionization, excitation and radiation of atoms in light fields [35]-[38], and found that the space-time evolution of the probability density obtained from the numerically solved Schrödinger equation and from the Bohmian trajectories are in good agreement with each other, as shown in Figure 3 [36]. By analyzing the space-time evolution of the Bohmian trajectories, clear physical pictures of these processes are given.

Figure 3. Space-time evolution of the probability density obtained by numerically solved Schrödinger equation (left) and Bohmian trajectories (right) in different electric fields: (a) E0 = 1 a.u.; (b) E0 = 1.8 a.u.; (c) E0 = 2.8 a.u. [36].
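A minimal numerical sketch of how Bohmian trajectories of the kind shown in Figure 3 can be generated (this is not the code of [35]-[38]; it assumes ℏ = m = 1 and a free 1D Gaussian packet): the guidance velocity ∇S/m is evaluated as Im(ψ′/ψ), the starting points are sampled from the initial |ψ|², and each trajectory is then integrated deterministically.

```python
import numpy as np

# Bohmian trajectories for a freely spreading Gaussian packet (hbar = m = 1).
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma0, k0 = 1.0, 2.0
psi0 = np.exp(-x**2 / (4 * sigma0**2) + 1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)
psi0_k = np.fft.fft(psi0)

def psi_at(t):
    """Exact free evolution in momentum space: psi(k, t) = psi(k, 0) * exp(-i k^2 t / 2)."""
    return np.fft.ifft(psi0_k * np.exp(-0.5j * k**2 * t))

def velocity_field(psi):
    """Guidance velocity v(x) = grad(S)/m = Im(psi'(x) / psi(x))."""
    dpsi = np.gradient(psi, dx)
    return np.imag(dpsi / (psi + 1e-300))      # tiny offset guards against exact zeros in the tails

rng = np.random.default_rng(seed=2)
p0 = np.abs(psi0) ** 2 * dx                    # sample starting points from the initial |psi|^2
starts = rng.choice(x, size=20, p=p0 / p0.sum())

dt, steps = 0.01, 500
traj = np.array(starts, dtype=float)
for n in range(steps):                         # Euler integration along the velocity field
    v = velocity_field(psi_at(n * dt))
    traj += np.interp(traj, x, v) * dt         # each trajectory is fully deterministic

print("positions of 20 Bohmian particles at t = 5:")
print(np.round(np.sort(traj), 2))              # they drift with k0 = 2 and spread like |psi|^2
```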

Let us return to the single-photon experiment in Section 3. All the single photons are characterized by the same wave-function. Each single photon has its own initial phase, i.e., different photons have different phases, making their quantum states different, though this does not affect their probability density distribution. We can assume that the difference between them is below the precision limit and impossible to distinguish. By characterizing the space-time distribution of single photons via BPs, and by assigning deterministic trajectories to the BPs interacting with the BS, we would be able to distinguish the BPs transmitted through the BS from those reflected, just as in the double-slit experiment. By this means, it seems that a deterministic description of the single-photon experiment can be provided.

However, BPs are virtual particles constructed through the mathematical treatment of wave-functions, and they cannot be verified experimentally. The description of microscopic objects' states can only be given through the statistics of the BPs, which in effect reconstructs the wave-functions. The measurement of a particle's location or momentum must still adhere to the uncertainty principle. As a result, BM can only provide probabilistic predictions of measurement outcomes. In addition, BM suffers from epistemological dilemmas. The quantum potential Q = −(ℏ²/2m)(∇²R/R) in BM is also constructed by mathematical treatment of wave-functions, preserving non-locality. It is problematic when dealing with many-body systems: in QM, a two-body system is described by a single wave-function, and when the state of one object is measured, the description of it changes, and that of the other object changes accordingly. According to the setting of BM, changing the state of an object will affect the state of the other object sharing a correlation with it, leading to the “spooky action at a distance”, which is the most controversial aspect of BM.

It may be frustrating that even the nonlocal hidden variable theories fail to restore determinism in QM. In hidden variable theories, some unknown physical quantities are introduced to ensure the objectivity of reality, attempting to restore a complete description of physical reality. This is inherently contradictory to the uncertainty principle. The uncertainty principle cannot be completely described using the terminology of classical physics, i.e., there is no counterpart of it in classical physics, as had been argued through the EPR thought experiment [3]. Consequently, there is no hidden variable model that can be experimentally verified. In effect, any attempt to interpret quantum theory using only the terminology of classical physics is bound to fail. Bohr initially refused to accept the concept of light quanta, and firmly believed that light appeared as quanta only when dealing with the exchange of energy and momentum between matter and radiation, and as waves in all other cases. He tried to bridge quantum theory with classical physics and eventually put forward the semi-classical BKS theory, but arrived at the conclusion that energy conservation is statistical [39]. The failure indicated that the methods and concepts of classical physics cannot simply be inherited by quantum theory. Einstein once tried to establish a causal connection between the wave and particle properties of microscopic objects, and to incorporate quantum theory into a field theory based on the causality and continuity principles, but he failed [40]. Though Bohr's complementarity principle successfully reconciled the dilemma of adopting completely different descriptions for the same object (for example, the wave-particle duality), it is the uncertainty principle that makes quantum theory logically self-consistent. Recently, Spegel-Lexne et al. experimentally demonstrated that quantum uncertainty and wave-particle duality are equivalent [41]. Since the uncertainty principle arises from the existence of a precision limit for observation, we can say that it is the precision limit that makes it impossible to restore causality and determinism in QM.

QM adheres to wholeness: in the measurement process, both the measured object and the measuring apparatus must be considered, and as a result, it is no longer possible to consider the state of the measured object independently [42]. Returning to the single-photon experiment, the single photon is viewed as an independent object before measurement because the conditions are determinate, and thus a single-valued description of the photon's state is given. As the single photon interacts with the BS, the BS must be considered so as to achieve a complete description. Since the BS provides two possible paths for the photon, two measurement outcomes are provided, and thus the final state is a superposition of two states. We are incapable of describing the specific process, and can only accept that the single photon passing through the BS is a random process. QM focuses only on predictions for measurement outcomes; it does not describe what exactly happens. As pointed out by Dirac, “A question about what will happen to a particular photon under certain conditions is not really very precise. To make it precise one must imagine some experiment performed having a bearing on the question, and enquire what will be the result of the experiment. Only questions about the results of experiments have a real significance and it is only such questions that theoretical physics has to consider” [18]. For each measurement, energy quanta are exchanged between the interacting objects and energy conservation is satisfied. It seems reasonable to provide a clear physical picture of the interaction process depending on whether energy quanta are exchanged and on the number of energy quanta exchanged. However, the randomness of the interaction prevents us from deterministically describing the exchange of energy quanta between the interacting objects during the interaction.

5. Boundary between the Classical and Quantum Worlds

It is impossible to reconstruct causality and determinism in QM, as opposed to classical physics. This naturally raises the question: where is the boundary between the so-called classical and quantum worlds? Before answering this question, we need to clarify what quantum is.

QM is a mathematical description that only makes sense when dealing with interactions between microscopic objects. For the concept of the quantum, only quantum properties are meaningful, and they are obtained after dividing the physical phenomena by specific rules made by mankind according to its own cognition. The photon, for example: its wave and particle properties are just the aspects shown in a certain way, and it is neither a wave nor a particle before observation. It is the measurement that endows it with a property. Besides, there are also observation-independent aspects (such as the spin, the rest mass and the charge) which endow the microscopic objects with reality. This does not mean that quantum mechanics applies only to describing microscopic systems; it can also explain some macroscopic phenomena, such as superconductivity [43]-[45], superfluidity [43] [46]-[48] and Bose-Einstein condensates [49]-[51].

It is not appropriate to describe the world in terms of “quantum” and “classical”. Tentatively, we name the worlds described in terms of QM and classical physics the quantum world and the classical world, respectively. Although they follow two entirely different sets of laws, the underlying logic is the same: physical laws are derived from observations, and both classical physics and QM are established on the basis of measurements. For example, observation requires the scattering of photons, but what differs is that the influence of photons on macroscopic objects is not significant, so the classical world can be considered unaffected by the measurement process; the photons have a great influence on the microscopic objects, however, changing the state of the system to be measured, so that the definition of the state of the measured system depends on the interaction. We can see that “quantity” accounts for this difference. It is the quantitative change that causes the qualitative change in the state of the measured system, which makes the research subjects different in the two cases: in classical physics, attention usually needs to be paid only to the measured objects, while in QM the entirety composed of the measured object and the measuring apparatus must be considered, i.e., there is wholeness in quantum measurement. The observation processes bring an uncontrollable influence on the interacting microscopic objects, showing randomness, and such stochastic phenomena can only be described by statistical laws.

In classical physics, it seems that once the initial conditions are determined, the location and velocity of an object at any moment can be predicted deterministically by solving the classical dynamical equations. Does this mean that the classical world must be deterministic? Obviously, this is only true in theory. When it comes to experiments, the errors in measurement outcomes for macroscopic objects can be ignored when the time scale is small, but they become visible when the time scales in question are large enough. Take, as an example, a meteorite with a diameter of 1 m flying in space, whose velocity is measured to be v = 20 km/s. Even if the measurement error in the velocity is only Δv = 1 μm/s, the deviation in its travel distance can reach ΔL ≈ 31.5 m after one year of flight, and the prediction of its position must take this deviation into account. If the measurement error in its velocity direction is also considered, the deviation in position will be much larger, and a deterministic description of its position is out of the question. Therefore, there is no absolute boundary between the so-called classical and quantum worlds. “Classical” and “quantum” are merely attributes imposed on the observed world, depending on the problem discussed. In the example of the flying meteorite above, even in the classical world, deterministic predictions cannot be given when it is discussed on a sufficiently large time scale.
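The deviation quoted above is simple arithmetic, ΔL = Δv·t:

```python
# Position error accumulated from the velocity error in the meteorite example.
dv = 1e-6                   # measurement error in speed, m/s (1 micrometre per second)
t_year = 365 * 24 * 3600    # one year of flight, s (~3.15e7 s)

dL = dv * t_year
print(f"position deviation after one year: {dL:.1f} m")   # ~31.5 m, as quoted above
```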

6. Conclusion

The existence of a precision limit for observation can account for unique characteristics of QM such as superposition, indeterminacy and identity, and it can be traced back to classical physics. This precision limit is determined not by the measuring apparatus, but by the action quantum h behind the objective physical law. From a philosophical point of view, we have tried to elucidate the relationship between the precision limit and some unique characteristics of QM. It is the root that accounts for the difference between QM and classical physics, and it is likely to bridge the gap between quantum mechanics and classical physics. By reviewing BM, which integrates complementary microstructures into the wave-functions, we discussed the possibility of restoring determinism in QM. However, the precision limit prevents us from distinguishing the microstructures, so BM can only serve as a comprehensive alternative for interpreting QM. Since QM enjoys great success in phenomenological predictions, the attempt to restore determinism and causality is not an urgent issue. After all, the most effective and persuasive approach to examining a theory is experimental testing, rather than philosophical discussion and logical deduction. In effect, QM and classical physics are essentially unified, apart from the fact that a higher precision is involved in QM, because QM is the theory that deals with the precision limit for observation. The scope of the discussion determines whether to adopt classical physics or QM. Even for the classical world, no deterministic predictions can be made when the time scale discussed is large enough, indicating that there is no absolute boundary between the so-called classical and quantum worlds.

Acknowledgements

The research was funded by the National Key Research and Development Program of China (Grant No. 2019YFA0307701), and the National Natural Science Foundation of China (Grant No. 11974138).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Born, M. (1926) Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 38, 803-827.
https://doi.org/10.1007/BF01397184
[2] Einstein, A. (1936) Physics and Reality. Journal of the Franklin Institute, 221, 349-382.
https://doi.org/10.1016/S0016-0032(36)91047-5
[3] Einstein, A., Podolsky, B. and Rosen, N. (1935) Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47, 777-780.
https://doi.org/10.1103/PhysRev.47.777
[4] Bell, J.S. (1964) On the Einstein Podolsky Rosen Paradox. Physics Physique Fizika, 1, 195-200.
https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195
[5] Clauser, J.F., Horne, M.A., Shimony, A. and Holt, R.A. (1969) Proposed Experiment to Test Local Hidden-Variable Theories. Physical Review Letters, 23, 880-884.
https://doi.org/10.1103/PhysRevLett.23.880
[6] Freedman, S.J. and Clauser, J.F. (1972) Experimental Test of Local Hidden-Variable Theories. Physical Review Letters, 28, 938-941.
https://doi.org/10.1103/PhysRevLett.28.938
[7] Aspect, A., Grangier, P. and Roger, G. (1981) Experimental Tests of Realistic Local Theories via Bell’s Theorem. Physical Review Letters, 47, 460-463.
https://doi.org/10.1103/PhysRevLett.47.460
[8] Brunner, N., Cavalcanti, D., Pironio, S., Scarani, V. and Wehner, S. (2014) Bell Nonlocality. Reviews of Modern Physics, 86, 419-478.
https://doi.org/10.1103/RevModPhys.86.419
[9] The BIG Bell Test Collaboration (2018) Challenging Local Realism with Human Choices. Nature, 557, 212-216.
https://doi.org/10.1038/s41586-018-0085-3
[10] Leggett, A.J. (2003) Nonlocal Hidden-Variable Theories and Quantum Mechanics: An Incompatibility Theorem. Foundations of Physics, 33, 1469-1493.
https://doi.org/10.1023/A:1026096313729
[11] Egg, M. (2013) The Foundational Significance of Leggett’s Non-Local Hidden-Variable Theories. Foundations of Physics, 43, 872-880.
https://doi.org/10.1007/s10701-013-9723-7
[12] Branciard, C. (2013) Bell’s Local Causality, Leggett’s Crypto-Nonlocality, and Quantum Separability Are Genuinely Different Concepts. Physical Review A, 88, Article 042113.
https://doi.org/10.1103/PhysRevA.88.042113
[13] Laudisa, F. (2014) On Leggett Theories: A Reply. Foundations of Physics, 44, 296-304.
https://doi.org/10.1007/s10701-014-9787-z
[14] Branciard, C., Brunner, N., Gisin, N., Lamas-Linares, A., Ling, A., Kurtsiefer, C. and Scarani, V. (2008) Testing Quantum Correlations versus Single-Particle Properties within Leggett’s Model and Beyond. Nature Physics, 4, 681-685.
https://doi.org/10.1038/nphys1020
[15] Gröblacher, S., Paterek, T., Kaltenbaek, R., Brukner, Č., Żukowski, M., Aspelmeyer, M. and Zeilinger, A. (2007) An Experimental Test of Non-Local Realism. Nature, 446, 871-875.
https://doi.org/10.1038/nature05677
[16] Weyl, H. (1950) The Theory of Groups and Quantum Mechanics. Courier Corporation.
[17] Von Neumann, J. (1955) Mathematical Foundations of Quantum Mechanics. Princeton University Press.
[18] Dirac, P.A.M. (1958) The Principles of Quantum Mechanics. 4th Edition, Oxford University Press.
[19] Feynman, R.P., Leighton, R.B. and Sands, M.L. (1965) The Feynman Lectures on Physics. Vol. 3, Quantum Mechanics, Addison-Wesley.
[20] Pauling, L. and Wilson, E.B. (2012) Introduction to Quantum Mechanics with Applications to Chemistry. Courier Corporation.
[21] Bohr, N. (1928) The Quantum Postulate and the Recent Development of Atomic Theory. Nature, 121, 580-591.
https://doi.org/10.1038/121580a0
[22] de Broglie, L. (1927) La mécanique ondulatoire et la structure de la matière et du rayonnement. Journal de Physique et le Radium, 8, 225-241.
https://doi.org/10.1051/jphysrad:0192700805022500
[23] Bohm, D. (1952) A Suggested Interpretation of the Quantum Theory in Terms of ‘Hidden’ Variable. I. Physical Review, 85, 166-179.
https://doi.org/10.1103/PhysRev.85.166
[24] Campbell, N.R. (1928) An Account of the Principles of Measurement and Calculation. Longmans, Green and Company, Limited.
[25] Braunstein, S.L. (1992) Quantum Limits on Precision Measurements of Phase. Physical Review Letters, 69, 3598-3601.
https://doi.org/10.1103/PhysRevLett.69.3598
[26] Basov, D.N., Averitt, R.D. and Hsieh, D. (2017) Towards Properties on Demand in Quantum Materials. Nature Materials, 16, 1077-1088.
https://doi.org/10.1038/nmat5017
[27] Tsao, Y.H. (1984) Uncertainty Principle in Frequency-Time Methods. The Journal of the Acoustical Society of America, 75, 1532-1540.
https://doi.org/10.1121/1.390824
[28] Planck, M. (1900) On an Improvement of Wien’s Equation for the Spectrum. Annalen der Physik, 1, 719-721.
https://doi.org/10.1002/andp.19003060410
[29] Planck, M. (1901) On the Law of Distribution of Energy in the Normal Spectrum. Annalen der Physik, 4, 553.
https://doi.org/10.1002/andp.19013090310
[30] Schrödinger, E. (1935) Die Gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften, 23, 844-849.
https://doi.org/10.1007/BF01491987
[31] Gerry, C. and Knight, P. (2005) Introductory Quantum Optics. Cambridge University Press.
[32] Born, M. (1927) Physical Aspects of Quantum Mechanics. Nature, 119, 354-357.
https://doi.org/10.1038/119354a0
[33] Bohm, D. (1953) Proof That Probability Density Approaches |ψ|² in Causal Interpretation of the Quantum Theory. Physical Review, 89, 458-466.
https://doi.org/10.1103/PhysRev.89.458
[34] Philippidis, C., Dewdney, C. and Hiley, B.J. (1979) Quantum Interference and the Quantum Potential. Nuovo Cimento B, 52, 15-28.
https://doi.org/10.1007/BF02743566
[35] Song, Y., Guo, F.M., Li, S.Y., Chen, J.G., Zeng, S.L. and Yang, Y.J. (2012) Investigation of the Generation of High-Order Harmonics through Bohmian Trajectories. Physical Review A, 86, Article 033424.
https://doi.org/10.1103/PhysRevA.86.033424
[36] Song, Y., Li, S.Y., Liu, X.S., Guo, F.M. and Yang, Y.J. (2013) Investigation of Atomic Radiative Recombination Process by Bohmian Mechanics Method. Physical Review A, 88, Article 053419.
https://doi.org/10.1103/PhysRevA.88.053419
[37] Wei, S.S., Li, S.Y., Guo, F.M. and Yang, Y.J. (2013) Dynamic Stabilization of Ionization for an Atom Irradiated by High-Frequency Laser Pulses Studied with the Bohmian-Trajectory Scheme. Physical Review A, 87, Article 063418.
https://doi.org/10.1103/PhysRevA.87.063418
[38] Song, Y., Yang, Y., Guo, F. and Li, S. (2017) Revisiting the Time-Dependent Ionization Process through the Bohmian-Mechanics Method. Journal of Physics B-Atomic Molecular and Optical Physics, 50, Article 095003.
https://doi.org/10.1088/1361-6455/aa630d
[39] Bohr, N., Kramers, H.A. and Slater, J.C. (1924) LXXVI. The Quantum Theory of Radiation. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 47, 785-802.
https://doi.org/10.1080/14786442408565262
[40] Einstein, A. (1923) Bietet die feldtheorie Möglichkeiten für die Lösung des Quantenproblems? In: Trageser, W., Ed., Sitzungsberichte der Preussischen Akademie der Wissenschaften, Springer Spektrum, 359-364.
[41] Spegel-Lexne, D., Gómez, S., Argillander, J., Pawłowski, M., Dieguez, P.R., Alarcón, A. and Xavier, G.B. (2024) Experimental Demonstration of the Equivalence of Entropic Uncertainty with Wave-Particle Duality. Science Advances, 10, eadr2007.
https://doi.org/10.1126/sciadv.adr2007
[42] Greenstein, G. and Zajonc, A. (2006) The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics. Jones and Bartlett Learning.
[43] Feynman, R.P. (1957) Superfluidity and Superconductivity. Reviews of Modern Physics, 29, 205-212.
https://doi.org/10.1103/RevModPhys.29.205
[44] Tisza, L. (1950) Theory of Superconductivity. Physical Review, 80, 717-726.
https://doi.org/10.1103/PhysRev.80.717
[45] Ginzburg, V.L. and Landau, L.D. (2009) On the Theory of Superconductivity. Springer, 113-137.
https://doi.org/10.1007/978-3-540-68008-6_4
[46] Landau, L. (1949) On the Theory of Superfluidity. Physical Review, 75, 884-885.
https://doi.org/10.1103/PhysRev.75.884
[47] Leggett, A.J. (1999) Superfluidity. Reviews of Modern Physics, 71, S318-S323.
https://doi.org/10.1103/RevModPhys.71.S318
[48] Packard, R. (2006) Berkeley Experiments on Superfluid Macroscopic Quantum Effects. AIP Conference Proceedings, 850, 3-17.
https://doi.org/10.1063/1.2354592
[49] Anderson, M.H., Ensher, J.R., Matthews, M.R., Wieman, C.E. and Cornell, E.A. (1995) Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor. Science, 269, 198-201.
https://doi.org/10.1126/science.269.5221.198
[50] Ruostekoski, J., Collett, M.J., Graham, R. and Walls, D.F. (1998) Macroscopic Superpositions of Bose-Einstein Condensates. Physical Review A, 57, 511-517.
https://doi.org/10.1103/PhysRevA.57.511
[51] Morsch, O. and Oberthaler, M. (2006) Dynamics of Bose-Einstein Condensates in Optical Lattices. Reviews of Modern Physics, 78, 179-215.
https://doi.org/10.1103/RevModPhys.78.179
