Description of the Nature Using the Models Developed in Euclidean Space

Abstract

In elucidating the laws of the motion of matter, it is necessary also to take into account the subjective human capability to think and construct models. This capability is restricted to the framework of Euclidean space. No problems arose during the development of the laws of classical science. However, it was established later that in some areas it is rather difficult to describe the motion of matter in terms of Euclidean models. In these cases, researchers either introduce a space of higher dimensionality, use complex numbers, or deform our habitual Euclidean space in some way. Those were exactly the cases for which the pseudo-Euclidean, Hilbert, reciprocal, micro-Euclidean and other spaces were proposed. Humans are able to think only in terms of Euclidean space, so, to provide a correct description of unusual motion of matter, the necessity arises to transform the information into the understandable Euclidean space. The operators suitable for these purposes are the Lorentz transformations, the Schrödinger equation, the integral transformations of Fourier and Weierstrass, etc. The features of the transformation of information between different spaces are illustrated with examples from the areas of X-ray structural analysis and quantum physics.


Stabnikov, P. (2022) Description of the Nature Using the Models Developed in Euclidean Space. Natural Science, 14, 78-93. doi: 10.4236/ns.2022.142009.

1. INTRODUCTION

In the present work, a new approach to the description of the properties of matter is proposed. This approach is based on our ability to think and develop models. A child becomes acquainted with the specific features of our world through falls, abrasions, injuries and so on. The generalized abstract vision of the real space in which we live is called Euclidean, and the rules of spatial movement in it are described by the laws of classical mechanics. For this reason, all people initially think in terms of Euclidean geometry in agreement with the laws of classical mechanics. These features of our way of thinking arise at the earliest age and persist throughout life. Moreover, the human brain has been adapting to solving problems in our Euclidean space for many thousands of years through natural selection. We are not adapted to thinking in the categories of other kinds of spaces. To think in terms of other spaces, special mental training in describing the motion of matter in those spaces would be required.

The Euclidean space is a three-dimensional space with the properties described by the axioms of Euclidean geometry [1]. These axioms may be modified, deformed, or supplemented with additional statements and rules; multidimensionality may also be introduced, which has allowed mathematicians to develop many different spaces. The first alteration of the axiomatics of Euclidean geometry was proposed by Lobachevsky, who modified the fifth Euclidean postulate in 1826. This postulate states that only one straight line parallel to a given straight line may be drawn through a point not lying on it [1]. The approach in which this postulate does not hold formed a new, consistent geometry, which was called Lobachevsky's or hyperbolic geometry. Independently of Lobachevsky, a similar modification of Euclidean geometry was proposed by J. Bolyai in 1832 [2]. While Euclidean geometry is a geometry of zero curvature, the curvature of Lobachevsky's geometry is negative. Riemann developed another geometry, one of positive curvature. A two-dimensional model of Riemannian geometry is the surface of a sphere: on a small scale Euclidean geometry is applicable to it, while on a larger scale Riemannian geometry applies because of the noticeable curvature [1]. All these works paved the way for the development of a vast area of non-Euclidean geometries and their generalizations, which found application in mechanics and other branches of science. Physicists succeeded in explaining relativistic effects with the help of four-dimensional Riemannian geometry. At the same time, topology, the study of the properties of figures preserved under deformations without tearing and gluing, began to develop; in the twentieth century it took shape as a self-sufficient branch of knowledge. This is how geometry turned into a branched set of mathematical theories studying different spaces (Euclidean, Lobachevsky's, Riemannian, Klein's, projective, etc.) and geometric objects in these spaces. The new approaches were also applied in analytical and differential geometry [2].

Another important modification of Euclidean geometry is connected with the additional introduction of imaginary numbers to write functions, scalar products of vectors, etc. Imaginary numbers were first developed for solving quadratic, cubic and higher-order equations. The solutions of differential equations and the results of integral transformations are also written in the form of complex numbers. The expansion of Euclidean geometry by admitting infinite dimensionality and introducing complex numbers is called Hilbert's geometry [1]. The simplest version of a Hilbert space is a one-dimensional space with functions written in complex numbers. Functions from two different Hilbert spaces may be interrelated with the help of integral transformations (Fourier, Weierstrass, Mellin, etc.).

Further expansion of Hilbert's geometry proceeded through the simultaneous introduction of complex numbers both for writing functions and for the coordinate axis (the real coordinate axis is extended into a complex plane). The investigation of the interconnections between a function taking complex values and the coordinate plane in such a space is the subject of the branch of mathematics called the theory of functions of a complex variable. The results of these studies are widely used in applied problems: expansion in series, evaluation of integrals with the help of residues, development of the methods of operational calculus, etc. Later on, the results of the investigation of Hilbert and other mathematical spaces found extensive application in solving various problems of probability theory, heat conduction, quantum physics, X-ray structural analysis, etc. It should be stressed that the complication and development of new kinds of spaces has always been based on Euclidean geometry, which is understandable for everybody.

The foundations of classical mechanics are based on Euclidean geometry and elementary mathematics. With advances in science and technology, it was established that in many areas the motion of matter is described excellently by models elaborated in Euclidean space. For example, the electric current in conductors is described by the model of an incompressible liquid; this is why the movement of charges has been called a current. In some areas, however, it is a problem to describe the motion of matter using classical notions. These areas include motion at speeds close to the speed of light, the structure of micro-objects, and the interactions between them. In these cases, different approaches are used to explain the features of the motion of matter. Four of them are listed below:

1) Deformation of the notions and terms of Euclidean space, classical mechanics, calorimetry and so on with the help of equations or models whose application leads to additional restrictions: for example, the impossibility for a body possessing mass to move faster than the speed of light, the impossibility of reaching a temperature below 0 K, the discrete structure of micro-objects, the appearance of additional attraction-repulsion forces (dispersion interaction), etc.

2) The use of several models developed in Euclidean space to explain the properties of objects under different conditions. For example, the models of electron and hole charge transport are used to describe the features of semiconductors. To explain the specific features of the motion and interactions of micro-objects, both corpuscular and wave models are used. This approach has been called wave-particle dualism.

3) Simultaneous application of Euclidean and quantum models: for example, superfluidity and superconductivity are described using the classical idea of a liquid together with the model of multi-particle quantum unification. This symbiosis has led to the idea of a quantum liquid.

4) The use of diverse mathematical procedures that allow the transformation or transfer of information from other spaces into values or functions suitable for modeling in the Euclidean space that is clear to us. Examples are the Schrödinger equation, Green's functions, and integral transformations such as those of Fourier and Weierstrass.

These approaches are considered in more detail below. To do this, it is necessary first to introduce the notions of 1) natural understanding, 2) assumption, and 3) acceptance. Natural understanding will be considered as solving a problem in our Euclidean space without additional assumptions and restrictions. An assumption will be understood as almost any kind of admission; however, some assumptions are impossible, for example, it is impossible for a sum of two velocities to be larger than infinity. Acceptance will be understood as the solution of a task with additional conditions that are not natural for Euclidean space. Since childhood, people understand the features of body motion at not very high velocities; but in order to understand the features of the motion of bodies at velocities close to the velocity of light, it is necessary to accept some additional corrections, such as the relativistic velocity addition law, time dilation, and the linear contraction of a moving body. These corrections are introduced mathematically through the replacement of the classical Galilean transformation by the Lorentz transformations [2, 3]. The motion with these corrections is described in a new space, which is called pseudo-Euclidean. Here we will make an attempt to explain time dilation with the help of the light clock model described in many publications [4-6] and shown in Figure 1.

Figure 1. Description of two light clocks moving with respect to each other with speed u. (a) A “Light clock” at rest in the S’ system. (b) The same clock, moving through the S system. (c) Illustration of the diagonal path taken by the light beam in a moving “light clock”. Copied from site https://www.feynmanlectures.caltech.edu/I_15.html.

Platforms (a) and (b) carry two identical clocks. They measure time by relying on the movement of a light pulse from a flashbulb to a mirror and back to a photocell. We will assume that these clocks are synchronized and show the same results if they do not move relative to each other. Then one clock is placed in an inertial reference frame S, moving at a speed u relative to the fixed frame S'. If we consider this problem within the framework of classical mechanics, then the course of time is the same in all inertial frames of reference, and the apparent speed of the light signal will be greater from the point of view of a stationary observer. However, according to Einstein, the maximum speed of movement in any direction cannot exceed the speed of light, and it is the journey of light that we consider in this problem. So, in order to remain within the framework of Einstein's relativistic theory, it is necessary to accept that the apparent motion of a light pulse in the S-frame must take a longer time interval than in the S'-frame. A longer time means that the passage of time must be slower in a moving inertial frame of reference. Due to the equivalence of all inertial frames, time in the inertial frame S will also go slower from the point of view of the observer in the inertial frame S'. To determine the time stretch factor, consider the right-angled triangle (c) in Figure 1. It follows from this triangle that the time dilation is proportional to $\sqrt{C^2 - V^2}$. Assuming that this time is equal to one second, we obtain the deceleration coefficient $K = \sqrt{C^2 - V^2}/C$; it is necessary to divide by the speed C so that K is a dimensionless quantity. Thus, we have obtained the time dilation coefficient $K = \sqrt{1 - V^2/C^2}$ in a moving inertial reference frame, which coincides with the value used in the Lorentz transformations. This is one version of the explanation of why time dilation occurs in inertial systems; there are other versions that also rely on the ideas of Euclidean space to explain this effect. However, it should be noted that all these considerations are not without drawbacks, since it is problematic to explain in this way the features of the motion of matter in other spaces.
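For completeness, the geometric step behind this coefficient can be written out explicitly. The following is a minimal sketch of the standard light-clock calculation, with $\Delta t'$ the tick of the clock in its own rest frame and $\Delta t$ the same tick as measured in the frame through which the clock moves at speed $u$:

$$(c\,\Delta t)^{2} = (c\,\Delta t')^{2} + (u\,\Delta t)^{2} \;\;\Longrightarrow\;\; \Delta t' = \Delta t\,\sqrt{1 - \frac{u^{2}}{c^{2}}},$$

which is precisely the dimensionless factor $K = \sqrt{1 - V^2/C^2}$ obtained above.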

In general, it may be concluded that it is hardly possible to understand relativistic mechanics in the same manner as we understand classical mechanics. Moreover, the march of time seems to accelerate for everybody with aging, as elderly people often say. This is a specific feature of our memory, which worsens with age. In other words, a human being is personally unable to evaluate the march of time correctly even under habitual conditions. Such an evaluation may be made only with the help of a high-precision clock, assuming that the march of time is always the same. We may also assume that in another inertial system moving with respect to ours, the geometry, the march of time, and the laws of classical mechanics would be the same as those in our inertial system. This approach is called Galileo's principle of relativity [2, 3]. However, we may equally assume that the march of time in another inertial system is either slower or faster than ours, but assumptions of this kind are unnatural. Still, to unite classical mechanics and electrodynamics, it is necessary to replace the Galilean transformations by the Lorentz transformations [2, 3]. This leads to time slowing down, a different velocity addition law, and a decrease in the linear dimensions of bodies in a moving inertial system.

However, the problems connected with the introduction of these additional corrections were outlined by Professor B. I. Peshchevitsky, whom I consider to be my teacher in science. Prof. Peshchevitsky demonstrated in his report at the Institute of Inorganic Chemistry SB RAS and in his work [7] that any velocity of movement may be taken as the limiting unattainable velocity. For this purpose, it is necessary to make the corresponding corrections to the Lorentz transformations. A different kind of non-contradictory mechanics, similar to Einstein's mechanics, may be built up on the basis of this assumption. It follows from the approach proposed by Prof. Peshchevitsky that the velocity of light need not be an absolute constant, so it may be assumed that in the distant past or in the future this value might be different, and this would not affect the description of the relative motion of matter. Developing this idea and relying on the global expansion of the Universe, an assumption has been made that the velocity of light is permanently increasing. This trend could be detected with modern precise devices by comparing results obtained at an interval of several decades [8].

Using the Lorentz transformations, we may estimate quantitatively (with the help of equations) the time dilation and size reduction for bodies moving at relativistic velocities, and this description is accepted. However, this description does not agree with the natural understanding inherent in us since childhood, first of all because space and time are qualitatively different and completely incompatible notions. This situation does not contain any contradiction, because the motion of matter in pseudo-Euclidean space need not be accurately described in the natural terms of Euclidean space and the laws of classical mechanics.

It should also be noted that attempts to explain the features of motion at relativistic velocities using only the terms of Euclidean space and classical mechanics are still in progress [6, 9-12]. Non-standard and original approaches are developed in these works. However, all these approaches resemble the situation described by H. C. Andersen in his famous fairy tale The Snow Queen. In the kingdom of the Snow Queen, Kay was trying to arrange letters into the word "eternity" but failed. Only Gerda, who had reached that wonderful kingdom from the usual space inhabited by people, destroyed the witchery of the Snow Queen. With her love, warmth and tears, Gerda turned Kay back into an ordinary human being; Kay assembled the sacred word, and they returned to the place where people live. In the same manner, all of us exist in a fairy kingdom of classical geometry and classical mechanics. This is why we can hardly explain relativistic effects relying only on the terms of our habitual kingdom, without additional unnatural limitations or conditions.

Another example: the results of the motion of micro-objects are not continuous but jump-like. The smaller the objects, the more difficult it is to explain their interactions in the terms that we have used since childhood. For instance, the radiation of the hydrogen atom consists of several series of spectral lines. These series were called Lyman, Balmer, Paschen, etc., after the researchers who proposed equations to calculate the wavelengths of these series. A unifying equation for calculating the wavelengths of hydrogen spectral lines was proposed by Rydberg: $1/\lambda = R\,(1/n^2 - 1/m^2)$, where $\lambda$ is the wavelength, R is the Rydberg constant, and n and m are integers (n < m) [3, 4]. This equation is convenient for calculation, but it does not explain anything. To explain the spectra, Bohr, relying on the ideas of Rutherford, Planck and Einstein, proposed a model in Euclidean space according to which the interaction of an electron with the nucleus follows Coulomb's law, but the electron may occupy only discrete stable positions (orbits) around the nucleus. When an electron passes from one orbit to another, energy is released or absorbed in the form of light quanta [4]. Bohr's model is somewhat strange for our classical space, in which both the space and the interactions are continuous, but this strange model made it possible to explain the spectrum of hydrogen, so it was accepted by physicists. In other words, the model proposed to explain atomic spectra was constructed in Euclidean space but supplemented with an additional requirement of discreteness. De Broglie made another supplement to this model by introducing the idea of standing waves laid along the electron orbit [4]. In this approach, an integer number of de Broglie wavelengths must fit on the orbit. This was a vivid explanation of the discreteness of electron orbits.
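As a small numerical illustration of Rydberg's formula, the sketch below (with the tabulated value of the Rydberg constant for hydrogen, R_H ≈ 1.0968 × 10^7 m^-1) reproduces the familiar visible Balmer lines near 656, 486, 434 and 410 nm:

```python
# Numerical illustration of the Rydberg formula 1/lambda = R * (1/n^2 - 1/m^2).

R_H = 1.0967758e7  # Rydberg constant for hydrogen, m^-1

def hydrogen_wavelength_nm(n: int, m: int) -> float:
    """Wavelength (nm) of the hydrogen line emitted in a transition m -> n (m > n)."""
    inv_lambda = R_H * (1.0 / n**2 - 1.0 / m**2)   # in m^-1
    return 1e9 / inv_lambda                        # convert metres to nanometres

# Balmer series (n = 2): the visible lines of the hydrogen spectrum
for m in range(3, 7):
    print(f"n=2, m={m}: {hydrogen_wavelength_nm(2, m):.1f} nm")
```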

One more example: two types of motion are well known in classical physics, corpuscular and wave. However, a particle and a wave are two incompatible notions [4]. If micro-objects move in a large space, the classical corpuscular model is suitable to describe their motion (geometric optics, the motion of electrons in television or microscope tubes, etc.). If the motion occurs in small regions (the size of the restricted space being smaller than or comparable with the wavelength of the moving body), the wave model is suitable. It should be noted here that no localized object corresponds to wave motion: a wave is a periodic motion of some medium. A particle and a wave are two qualitatively different notions in Euclidean space, and this is the difficulty in providing a symbiosis of these two notions [3, 4].

There are also other examples of deformation, stepwise gradation, and symbiosis of the models developed in Euclidean space to explain the features of the motion of material objects in areas where the classical models work poorly or do not work at all. Moreover, with further advances in studying nature, the problem of describing matter will only become more complicated. To simplify comprehension of the properties of objects, it will be necessary to develop new approaches, in particular methods to transform information into the understandable Euclidean space.

In the present work, we describe a method to transform information between the Reciprocal and Direct spaces with the help of the Fourier series transformation, which yields the electron density that is then used in modeling the crystal structure. It is proposed to consider Schrödinger's equation as a convenient method to transform information from the micro-space into the understandable Euclidean macro-space. The possibilities of the Fourier integral for coupling the corpuscular and wave representations are also considered. Two approaches to the calculation of the diffraction of light behind a screen with slits, based on differential equations and on integral transformations, are described. It is also demonstrated that the integral Weierstrass transformation may be used to provide a link between two Euclidean geometries with different metrics of infinitely small values [8, 13]. The Weierstrass transformation is written as $F(y) = \frac{1}{\sqrt{4\pi}} \int_{-\infty}^{+\infty} e^{-\frac{(x-y)^2}{4P}} f(x)\,dx$. The results of the numerical transformation of the bell-like function $e^{-x^2}$ for different values of P are shown in Figure 2.

It follows from Figure 2 that the larger the parameter P, the more smeared the image of the initial function f(x) appears. It also follows from this figure that the more smeared an object appears, the more indefinite its motion will appear. This allows a natural explanation of Heisenberg's uncertainty principle.
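The smearing seen in Figure 2 can be reproduced with a few lines of code. The sketch below is a hypothetical implementation that uses the normalization and integration limits quoted above; it evaluates the transform of e^(-x^2) on a simple grid and reports how the width of the image grows with P:

```python
import numpy as np

def weierstrass_transform(f, y_grid, P, x_min=-10.0, x_max=10.0, n=2001):
    """Numerical Weierstrass transform
    F(y) = (1/sqrt(4*pi)) * Integral[ exp(-(x-y)^2/(4P)) * f(x) dx ],
    with the integration limits used in Figure 2."""
    x = np.linspace(x_min, x_max, n)
    fx = f(x)
    F = np.empty_like(y_grid)
    for i, y in enumerate(y_grid):
        kernel = np.exp(-(x - y) ** 2 / (4.0 * P))
        F[i] = np.trapz(kernel * fx, x) / np.sqrt(4.0 * np.pi)
    return F

y = np.linspace(-5.0, 5.0, 401)
bell = lambda x: np.exp(-x ** 2)          # the original function e^(-x^2)
for P in (0.1, 1.0, 5.0):                 # the larger P, the more smeared the image
    F = weierstrass_transform(bell, y, P)
    half = F > F.max() / 2
    print(f"P={P}: full width at half maximum ~ {y[half][-1] - y[half][0]:.2f}")
```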

2. DEVELOPMENT OF X-RAY DIFFRACTION METHOD

Studies of crystals revealed the constancy of angles between faces and the symmetry of the external faceting, and the idea of crystals as packings of atoms was developing. The simplest model of an atom in Euclidean space is a ball. It was stressed in [14] that the astronomer and mathematician Kepler was the first, in 1611, to formulate the hypothesis that the highest density is provided by the pyramidal packing of identical balls, that is, the cubic closest packing. Such scientists as Hooke, Huygens and Lomonosov assumed that crystal shape is determined by the packing of spherical or ellipsoidal particles. Further progress in the understanding of crystal structure was due to the introduction of the ideas of the unit cell and translation. Relying on these ideas, in 1848 Bravais derived 14 types of translation lattices [14]. In 1883-1898, Barlow described the previously unknown hexagonal closest packing of balls and predicted the structures of ionic crystals: NaCl, CsCl, ZnS and others [15]. The development of the foundations of structural analysis is described in more detail in [14, 15]. Those were intuitive guesses based on the symmetry of the external crystal faceting in Euclidean space, the idea of the unit cell, and the idea of the spherical shapes of micro-objects.

Figure 2. Numerical Weierstrass transformation of the original function $e^{-x^2}$ for different P. Integration limits: −10 to +10.

The closed symmetry elements include axes and planes of symmetry. Combinations of these symmetry elements give the 32 point groups of crystals. Combining the 14 types of Bravais translation lattices with the 32 point groups and introducing two additional open symmetry elements, namely glide planes and screw axes, yields the 230 space groups. This result was deduced independently by Fedorov and Schoenflies in 1890-1891, and it still remains the theoretical foundation of structural crystallography [14-17]. Undoubtedly, the formation of the notion of crystal structure proceeded from two positions, relying both on the macro-characteristics of crystals and on the three-dimensional periodicity of the assumed spatial elements, called unit cells, and the possibility of accommodating micro-balls, ellipsoids or ions in these cells. It should be stressed once more that the foundations of structural analysis were built through modeling in Euclidean space.

Further progress in the methods of crystal structure determination was connected with the application of X-rays. The first experiments on the scattering of continuous X-ray radiation by crystals were carried out in 1912 by Laue, Friedrich and Knipping [14]. The data obtained in those experiments showed that the atoms located periodically in a crystal may act as an analogue of a diffraction grating for X-rays. The Braggs, father and son, used monochromatic X-ray radiation and applied the rotating (or oscillating) crystal method, which allowed them to confirm the hypothetical model of rock salt proposed previously by Barlow [15]. At first, these new data were additional information confirming the previously developed ideas of crystal symmetry and structure; only the geometric data of the X-ray diffraction patterns were used for this purpose, without taking the intensities of reflections into account. Later on, it became clear that diffraction methods are more informative in determining the symmetry than consideration of the external crystal faceting. For instance, the type of Bravais lattice (P, I, or F) of cubic crystals can be revealed by means of diffractometry [18], while the method of external faceting allows one only to determine that a crystal belongs to the cubic system.

It follows from the analysis of variables in the diffraction experiment that four variables may be involved in this method. To make this statement clearer, remember that two variables are necessary to determine the coordinates of a point on the globe: longitude and latitude. Similarly, two variables, for example α and β, are necessary to fix a reflection on an imaginary sphere in our three-dimensional Euclidean space, and two more variables, for example γ and δ, are necessary to describe the orientation of a crystal in space. In total, four variables α, β, γ and δ are necessary to provide a complete description of crystal orientation and reflections. However, because the radiation of an X-ray tube is unpolarized, one of the variables cannot carry information. This may be illustrated with a simple example. Let a reflection be observed at a definite crystal orientation. Now we turn the crystal around the axis coinciding with the direction of the primary beam. The observed reflection will draw a circle on the imaginary reflection sphere, and the intensity of the radiation will not change (provided that the intensity of the primary beam from the X-ray tube is constant), as shown in Figure 3. For this reason, an X-ray experiment may at present be described by only three independent variables. However, there is hope that it may become possible to use an X-ray laser in the investigation of crystals in the future. In that case, all the potentially possible variables provided by our three-dimensional Euclidean space might be involved in the diffraction investigation. At present, we have at our disposal partially polarized synchrotron radiation, which may also be used to solve the phase problem. It should be stressed that a laser beam is polarized, and two more parameters, Ψ and Δ, may be used to describe its interaction with a substance: they characterize the changes in the amplitude ratio and in the phase difference of the TE (s-polarized) and TM (p-polarized) waves. If the analysis of both the polarization and the phase of the reflected radiation also becomes possible for X-ray laser radiation, the number of variables in a laser X-ray experiment may be increased even further.

X-ray scattering by crystals produces separate reflections resembling the reflections of visible light from planes. This was the basis for elaborating the idea that there are sets of planes inside crystals that form separate reflections [16], as shown in Figure 4.

According to the condition formulated by W. L. Bragg and G. V. Wulff, reflection of X-rays from a family of planes appears only if all the reflected rays coincide in phase. Only in this case will these rays reinforce each other, Figure 5.

Figure 3. Rotation of a crystal around the axis coinciding with the primary beam. (I) Crystal orientation at which the implied plane forms the reflection. (II) Crystal rotation over 180 degrees with the conservation of reflection conditions.

Figure 4. Three versions of traces (straight lines) from a family of planes with different interplanar spacings crossing a flat network of nodes (atoms) in the crystal.

Figure 5. Conditions for the appearance of reflections from a family of planes with interplanar spacing d. (I) Condition: AB + BC = λ; (II) condition: AB + BC = 2λ, where λ is the wavelength of the radiation.

Relying on the idea of reflecting planes, W. L. Bragg and G. V. Wulff obtained the equation:

$$2d\sin(\theta) = n\lambda$$

This equation allows one to determine the interplanar spacing in a family of planes from the reflection angle and the radiation wavelength. This approach yields a three-dimensional array of plane families matched to the experimental reflections. The next stage of solving the structural problem is the determination of the unit cell parameters. In the general case, this task is ambiguous, as shown in Figure 6.
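As an illustration of how this equation is used in practice, the following sketch (with a Cu Kα wavelength of 1.5406 Å assumed purely as an example) converts a measured diffraction angle into an interplanar spacing:

```python
import math

def interplanar_spacing(two_theta_deg: float, wavelength: float, n: int = 1) -> float:
    """d = n * lambda / (2 * sin(theta)), from the Bragg-Wulff condition.
    two_theta_deg is the diffraction angle 2*theta in degrees;
    wavelength and the returned d share the same length unit."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# Example: a reflection at 2*theta = 31.7 degrees with Cu K-alpha radiation
# (lambda ~ 1.5406 Angstrom) gives d ~ 2.82 Angstrom, close to the (200)
# spacing of rock salt (a ~ 5.64 Angstrom).
print(round(interplanar_spacing(31.7, 1.5406), 3))
```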

Among all possible versions, the preferred cells are those in which the sides are equal, the angles between the axes are 90 degrees, and the volume is minimal. However, in some cases, for example for crystals of the triclinic system, the choice is not wide, because all possible cells have unequal sides and all angles between the axes differ from 90 degrees. The choice of unit cell parameters is equivalent to introducing a system of coordinates in the crystal. After the unit cell parameters have been chosen, one may assign Miller indices (HKL) to every family of planes and, similarly, to every reflection. Miller indices are connected with the segments cut off on the three crystallographic axes by the plane nearest to the origin of coordinates. Each index is the ratio of the unit cell parameter to the segment between the origin of coordinates and the point of intersection of the plane with the axis. These values are positive or negative integers (or zero), for example (10-1), (001), (203), etc. It should be noted that each family of planes may form two different reflections, as shown in Figure 7.
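A small sketch of the index assignment described above (a hypothetical helper, with intercepts given in units of the corresponding cell parameters and math.inf marking a plane parallel to an axis):

```python
import math
from fractions import Fraction

def miller_indices(intercepts):
    """Intercepts along a, b, c in units of the cell parameters
    (math.inf for a plane parallel to that axis).  Each raw index is the
    reciprocal of the intercept, and the triple is reduced to the smallest
    integers with the same ratios."""
    raw = [Fraction(0) if x == math.inf else 1 / Fraction(x) for x in intercepts]
    # bring the three fractions to a common integer scale
    lcm = 1
    for f in raw:
        lcm = lcm * f.denominator // math.gcd(lcm, f.denominator)
    ints = [int(f * lcm) for f in raw]
    divisor = 0
    for v in ints:
        divisor = math.gcd(divisor, abs(v))
    divisor = divisor or 1
    return tuple(v // divisor for v in ints)

# A plane cutting the a axis at a/2 and the b axis at b, parallel to c -> (210)
print(miller_indices([Fraction(1, 2), 1, math.inf]))
```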

This approach allows us to obtain the unit cell parameters and to assign Miller indices to each reflection relying on the array of experimental data. It should also be stressed that modern diffractometers do this work automatically; the user only has to confirm the results or make corrections, for example to propose another unit cell geometry or another crystal system.

It should be noted that, under definite conditions, scattering of neutrons and electrons on crystals is similar to the diffraction of X-rays. Because of this, electron diffraction and neutron diffraction analyses involve the same approaches as those used in X-ray diffraction studies to establish crystal structures.

The next stage in determining the structure of crystals is connected with the application of the Fourier series transformation to calculate the electron density in the unit cell. According to [15], the method based on Fourier series was first applied to link the Straight (direct) and Reciprocal spaces by the Braggs, father and son. Patterson used the absolute intensities of reflections to calculate, by means of Fourier series, a convolution of the electron density with itself [17].

Figure 6. Possible versions of choosing a unit cell, by the example of a two-dimensional network.

Figure 7. Two different possible reflections from each family of planes, to which Miller indices with different signs are assigned, for example (HKL) and (-H-K-L), (-HKL) and (H-K-L).

In the general case, the theory of Fourier series assumes that the dimensionality of the Straight and Reciprocal spaces should be the same. In addition, the information should be represented by complex values both in the Straight and in the Reciprocal space. This means that the Straight and Reciprocal spaces should be three-dimensional, taking into account the real and imaginary components. In fact, the Fourier series transformation binds two Hilbert spaces. One of them is continuous, and complex values in it are significant within one unit cell. The other Hilbert space is discrete, and complex values in it are significant only at the points of the reciprocal lattice marked with the indices H, K, L. However, we are more accustomed to working with models in three-dimensional space in which the electron density is only positive: density cannot be negative, and it is very difficult to imagine what an imaginary density might be. Fortunately, experimental diffraction data allow us to eliminate the imaginary component during the Fourier series transformation. According to Friedel's law, the reflections of a pair of diffraction rays with indices HKL and (-H-K-L) are always equal to each other in intensity [18]. The phases of such a pair may always be made opposite, which allows us to eliminate the imaginary component of the electron density during the Fourier series transformation. (The equality of intensities of symmetric reflections is sometimes violated in the case of anomalous scattering.) In the general case, for the electron density to be purely real in the Straight space, it is necessary for the sum of the phases of the pairs of reflections HKL and (-H-K-L) to be equal to zero or 2π, which is readily achieved by assigning appropriate phases to these values.
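A one-dimensional toy version of this synthesis may make the point concrete. In the sketch below (entirely hypothetical atom positions and weights, not experimental data), structure factors computed for a few point "atoms" automatically satisfy the Friedel relation F(-H) = F(H)*, and the Fourier series therefore returns a purely real density peaked at the atomic positions:

```python
import numpy as np

a = 1.0                                   # cell length (arbitrary units)
positions = [0.10, 0.35, 0.70]            # fractional atomic coordinates (made up)
weights = [8.0, 6.0, 6.0]                 # rough "electron counts" (made up)
H_max = 15                                # number of Fourier terms kept

def structure_factor(H):
    """F(H) for point-like atoms; F(-H) is automatically the complex conjugate."""
    return sum(w * np.exp(2j * np.pi * H * xj) for w, xj in zip(weights, positions))

x = np.linspace(0.0, 1.0, 500, endpoint=False)
rho = np.zeros_like(x, dtype=complex)
for H in range(-H_max, H_max + 1):
    rho += structure_factor(H) * np.exp(-2j * np.pi * H * x) / a

print("largest |imaginary part| of the density:", float(np.abs(rho.imag).max()))  # ~ 0
print("strongest density peak at x =", float(x[np.argmax(rho.real)]))             # ~ 0.10
```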

While the imaginary values of the electron density in the Fourier series transformation may be eliminated owing to the symmetric nature of the array of reflections (I(HKL) = I(-H-K-L)), the negative values of the electron density may be eliminated only by selecting definite phase values for all reflections. The point is that an X-ray experiment yields the intensities of reflections, which are proportional to the squared moduli of the structure factors. To use these data in the Fourier series transformation, it is necessary to take the square root of the reflection intensities and to assign an orientation (phase) in the Reciprocal space to each of them. However, the number of these reflections may range from several thousand to 150 thousand. This is where the difficulty of the phase problem of X-ray structural analysis resides. To solve this problem, many researchers have proposed various methods, in particular Patterson's method, direct methods, the heavy atom method, the method of nonlocal search, etc. [14-18]. At present, owing to the efforts of many engineers, mathematicians, physicists and crystallographers, unique instruments have been built, and unified programs and clear criteria for the evaluation of results have been developed. The final stage in establishing the structure is the refinement of atomic coordinates by the least-squares method. The reliability of the interpretation of the results is evaluated with the help of the R-factor, which characterizes the discrepancies between the experimental and calculated values over all reflections. This allows the reliable establishment of crystal structures, in particular for such complex objects as viruses, nucleic acids and proteins [17].

So, to determine the structure of crystals formed from such micro-particles as atoms, ions and molecules, some information may be obtained directly from the array of reflections (symmetry, space group), while the most important part of the information is obtained through the numerical Fourier series transformation. This transformation converts the information into our understandable Euclidean space in the form of the electron density, which allows us to determine the coordinates of the atoms in the unit cell. It should also be noted that all the models used in X-ray structural analysis are Euclidean. Thus, the models in the Straight space are Euclidean by definition. To understand the features of the structure of the Reciprocal space, the reciprocal lattice model and the Ewald sphere model were proposed [16-18]. These models were elaborated in the Straight space (not in the Reciprocal one), so they, too, are Euclidean. They are used to compare the metrics of the direct and reciprocal lattices and to explain the origin of reflections (the Ewald sphere).

It should be noted that the final results of X-ray structural analysis are presented as information only in the Euclidean (Straight) space: the space group, the unit cell parameters, the atomic coordinates, and the R-factor. This is because the majority of users are simply unable to think and carry out modeling in terms of the Reciprocal space. If information on the Reciprocal space is needed, it may always be calculated from the final data. For example, to calculate theoretical diffraction patterns, it is quite sufficient to use the data reported in structural papers or deposited in the Cambridge Structural Database.

3. FEATURES OF THE DESCRIPTION OF MICRO-OBJECT MOTION

About a hundred years ago, the necessity arose to describe stepwise changes of the states of micro-objects. However, among the entire set of mathematical tools, there were only two that could potentially yield results in the form of discrete values or states: matrix algebra and differential equations. A matrix is a discrete table, and the transition from one matrix to another is discrete; this was the basis of Heisenberg's matrix mechanics. Schrödinger proposed to use differential equations as a tool, so that the solution is expressed as a set of discrete functions. According to Heisenberg, the results of matrix transformations are readily linked to observable values, for example light frequency and amplitude, while such non-observable values as coordinates and the electron density distribution are thus excluded [19]. However, it is difficult to describe the observable values in the terms of Euclidean space, and this is the major problem of the matrix-based approach. By contrast, solving Schrödinger's equation, one obtains wave functions whose squared moduli may be interpreted as a probable value of the electron (or some other) density, and this density is a quite clear notion of Euclidean space. Because of this, the approach proposed by Schrödinger won broad application for describing the properties of micro-objects. Later on, it was demonstrated that the approaches proposed by Heisenberg and Schrödinger are equivalent, as these theories may be deduced from each other [4, 19].

In Schrödinger's method, a model of the interacting particles to be described is first constructed in Cartesian coordinates [4, 20-22]. It is assumed that their interaction obeys Coulomb's law. However, Coulomb established his law in Euclidean space, so, in order to apply it, one has to assume that micro-particles interact with each other in the micro-space in exactly the same manner as macro-particles interact in the macro-space. This means that it is admissible to apply Euclidean geometry and Galileo's relativity principle at the micro-level. These steps allow us to write down an analytical expression for the potential energy of the interactions between charged micro-particles [4, 20-22]. In some cases, a transition from Cartesian coordinates to spherical or cylindrical ones is made [4, 20-25]; this transition uses the equations developed for Euclidean space. The expression for the potential energy is then inserted into the differential Schrödinger equation. Solving the equation, one obtains a set of wave functions, the square of each of which is interpreted as a probable electron density, now in our Cartesian space. Solving Schrödinger's equation is a complicated mathematical task: an analytical solution has been obtained only for hydrogen-like atoms. For atoms containing more electrons, results are sought using numerical methods with additional simplifications, which are described in detail in textbooks and special publications [4, 20-25].
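The discreteness of the solutions can already be seen in a toy numerical setting. The sketch below is a minimal finite-difference illustration (an assumed one-dimensional harmonic potential with ħ = m = 1, not the procedure used for real atoms): diagonalizing the Hamiltonian yields a discrete ladder of energies, and |ψ|² can be read as a probability density on the ordinary coordinate axis:

```python
import numpy as np

n = 1000
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
V = 0.5 * x**2                                   # harmonic potential (omega = 1)

# Hamiltonian H = -(1/2) d^2/dx^2 + V(x), with the second derivative
# approximated by central differences on the grid.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)
print("lowest energy levels:", np.round(energies[:4], 3))    # ~ 0.5, 1.5, 2.5, 3.5

psi0 = states[:, 0] / np.sqrt(dx)                # normalise so that sum(|psi|^2) * dx = 1
density = np.abs(psi0) ** 2                      # probability density in ordinary coordinates
print("norm check:", round(float(np.sum(density) * dx), 3))  # 1.0
```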

According to the approach proposed by the author, Schrödinger's equation allows us to transform the features of micro-particle interactions into the understandable Macro-space. In other words, Schrödinger's equation plays the same part as the Fourier series transformation does in X-ray structural analysis, where it transforms information between the Straight and Reciprocal spaces. An essential feature of the solution of Schrödinger's equation is a discrete set of wave functions. In the general case, wave functions may take negative and complex values; because of this, to obtain only real density values, it is necessary to use the product of the obtained wave function and its complex conjugate [4, 20-25]. On the basis of the electron density in Euclidean space, one may calculate the charges on atoms, the electrostatic energy of the interactions between particles, etc. Since the solution of Schrödinger's equation is represented as a set of different wave functions, it is possible to evaluate the energy change accompanying the transition from one quantum state of the initial set of particles to another. It is also possible to estimate the energy of a set of free atoms and then the energy of the molecule formed through the chemical interaction of these atoms; the difference between these energies may be interpreted as the chemical bonding energy of the atoms [23-25].

Further development of quantum physics is connected with the description of macro-ensembles of quantum particles. However, the wave nature of particles in quantum mechanics does not allow one to distinguish between identical particles. Moreover, within the quantum approach, a principal issue in describing the behavior of many-particle systems is what particles they are composed of: the particles may be bosons or fermions, depending on whether their spin is integer or half-integer. This has a strong effect on the statistical description of the particles [4, 22]. Fermions obey the Pauli principle, which underlies Mendeleev's Periodic Table and defines the properties of electrons in metals and semiconductors. The properties of bosons manifest themselves in light emission, which is the basis of quantum generators, i.e., lasers.

Difficulties also arise in describing the behavior of micro-particles when it is necessary to take their wave and corpuscular properties into account at the same time. An example is the work by Bohm [26, 27], who described the calculation of the pattern of light diffraction behind a screen with slits. Having solved Schrödinger's equation and written the wave function in polar form, he divided it into two parts, corpuscular and wave [18]. This allowed him to describe the diffraction pattern at any distance from a screen with two slits: the closer to the screen, the larger the contribution of the corpuscular component. The result obtained is in good agreement with experiment ([27], Figure 6.12). Here one must recognize Bohm's brilliance in having succeeded in finding a corpuscular component in the wave function; Schrödinger did not assume the presence of any corpuscular component in the wave function when he was creating his equation. So, according to Bohm, to describe light interference at any distance behind a screen with slits, it is necessary to take into account both the corpuscular and wave components of light.

By now, other mathematical procedures allowing one to obtain a set of discrete states of micro-particles have been developed. These include Green's functions and integral transformations. The integral Fourier transformation should be specially mentioned (not to be confused with the Fourier series transformation), as well as the integral Weierstrass transformation. For the one-dimensional case, the forward and inverse Fourier transformations are written as

$$F(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} f(x)\, e^{-iyx}\, dx \quad\text{and}\quad f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} F(y)\, e^{ixy}\, dy .$$

The integral Fourier transformation is one-to-one; that is, the result of two consecutive numerical transformations (forward and then inverse) is the same initial function, with slight distortions connected with the features of numerical methods. This transformation allows linking the corpuscular and wave presentations. Thus, the image of the Dirac delta function in the Fourier space is a continuous wave (a sum of a sine and a cosine). If we multiply this image by $e^{-Ay^2}$, where A is some positive value, we obtain a function close to a wave packet. If we then subject this function to the inverse Fourier transformation, we obtain a bell-like function with its maximum located exactly where the initial Dirac function was located, and its area is equal to the value of the Dirac spike multiplied by one step along the abscissa axis. This means that a wave packet may be represented by a bell-like function, and the Fourier image of a particle may be represented as a wave packet.
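These statements are easy to check numerically. The sketch below uses an illustrative grid and damping parameter A (note that numpy's FFT uses a different normalization than the symmetric 1/√(2π) convention written above): it builds a discrete spike, damps its Fourier image with exp(-A y²), and transforms back.

```python
import numpy as np

n = 4096
x = np.linspace(-50.0, 50.0, n, endpoint=False)
dx = x[1] - x[0]

spike = np.zeros(n)
spike[np.argmin(np.abs(x - 5.0))] = 1.0           # discrete stand-in for a Dirac spike at x = 5

image = np.fft.fftshift(np.fft.fft(spike))        # Fourier image: a pure oscillation of unit magnitude
y = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))

A = 0.5                                           # damping parameter (illustrative)
packet_image = image * np.exp(-A * y**2)          # the oscillation becomes a wave packet

bell = np.fft.ifft(np.fft.ifftshift(packet_image)).real
print("bell-shaped maximum at x =", round(float(x[np.argmax(bell)]), 2))               # ~ 5.0
print("area / (spike value * dx) =", round(float(np.sum(bell) * dx / (1.0 * dx)), 3))  # ~ 1.0
```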

It has been demonstrated recently that for the qualitative description of light diffraction behind a screen with slits one may use not only the approach proposed by Bohm but also such integral transformations as those of Fourier and Weierstrass [8, 28]. The idea of the new approach to the quantitative description of the behavior of micro-particles is geometric. According to it, the motion of macro-objects takes place in our habitual Euclidean geometry. The motion of micro-particles also occurs in Euclidean space, but this space is distinguished by an increased value of the infinitesimal. We will call the geometry of this space micro-Euclidean. The Euclidean definition (a point is that which has no parts) also holds in this geometry, but the sizes of a point in macro-Euclidean and in micro-Euclidean geometry are different: from the viewpoint of macro-Euclidean geometry, a point of micro-Euclidean geometry is enlarged to some finite size. This unusual inflation of one of the foundations of Euclidean geometry, namely the infinitesimal, is similar to Lobachevsky's modification of the fifth Euclidean postulate. This nonlinear representation of spaces is also similar to the nonlinear deformation of Euclidean space on passing to pseudo-Euclidean space, in which an infinitely high speed is decreased to the speed of light while the attribute of unattainability is conserved.

An infinitely small point of micro-Euclidean geometry appears before us in the Macro-space as a diffuse finite element in which we may accommodate an array of our infinitely small points. Hence, for us, the continuous motion of micro-objects in micro-Euclidean geometry will appear as step-wise transitions between two diffuse points [28].

Let us consider a task. Two pupils are standing before the blackboard. Each pupil holds several circles of the same size, but the circles in the hands of the first pupil are larger than those in the hands of the second pupil. Let every circle be the infinitely small point specified for its pupil. How will these pupils measure the increasing size of a segment? If the segment is shorter than the diameter of the smaller circles, both pupils will say that the length of the segment does not exceed the infinitely small value (a point). If the segment is longer than the smaller circles but shorter than the larger ones, one pupil will say that the segment is small but its length may be estimated, while the other pupil will say that the segment still does not exceed the infinitesimal. When the segment becomes longer than both kinds of circles, both pupils will be able to estimate its length relying on the sizes of the circles identified as infinitely small points. They may also estimate the length of longer segments by packing their circles along the segment so as to cover it completely, Figure 8(Ia) and Figure 8(Ib). Of course, their results will differ from each other, but both will be represented by jagged lines, Figure 8(IIa).

It follows from Figure 8(IIa) that the resulting jagged lines have different slopes, which causes inconsistency at long distances. To make the estimation results close to each other at long distances, it is necessary to multiply them by correction coefficients. These coefficients are equal to the true segment length divided by the estimation result for the first value different from zero. Plots taking these corrections into account are shown in Figure 8(IIb).
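The two-pupil estimate is easy to model numerically. The sketch below (with arbitrary circle diameters chosen for illustration) counts how many whole circles fit along a growing segment, which gives the jagged, stepwise estimates of Figure 8, and then applies the correction coefficient (here idealized as the circle diameter itself) so that both estimates track the true length at long distances:

```python
import numpy as np

true_lengths = np.linspace(0.0, 10.0, 1001)
for diameter in (1.5, 0.6):                        # the "infinitely small points" of the two pupils
    counted = np.floor(true_lengths / diameter)    # number of whole circles that fit: a step function
    corrected = counted * diameter                 # after multiplying by the correction coefficient
    max_err = float(np.max(np.abs(corrected - true_lengths)))
    # the corrected estimate never deviates from the true length by more than one circle
    print(f"circle diameter {diameter}: stepwise estimate, "
          f"maximum deviation after correction = {max_err:.2f}")
```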

This approach, based on the assumption of the inflation of the infinitesimal, has the potential to explain the discreteness of estimates as distance increases. It should also be noted that images in the geometry with the larger value of the infinitesimal will be more blurred or fuzzy. This approach was developed as an alternative with the help of which one might explain Heisenberg's uncertainty by geometric statements. The most important point is that this geometric approach is more fundamental, because it is based on clear geometric statements, unlike wave-particle dualism, which relies on two antagonistic notions: a wave and a particle.

Figure 8. Estimations of segment length based on different ideas of infinitesimal ((Ia), (Ib)) and taking into account the corrections (IIb).

This addition is an attempt at a topological extension of the geometric principles concerning the infinitely small value. According to this approach, the geometry of the microworld does not differ from the macro-geometry except in the size of an infinitely small point. In the author's opinion, this approach allows us to explain the discreteness of the microworld. In addition, it allows us to extend Galileo's relativity principle to the micro-level.

Interconnection between these two geometries is possible with the help of the integral Weierstrass transformation. Representations of objects in the two geometries will differ from each other in the sharpness of the patterns.

It should also be noted that mathematical and topological approaches for comparing two geometries with different metrics of infinitely small values have not yet been developed. However, there is hope that the assumption concerning the so-called inflation of infinitesimals will provide a better explanation of the features of micro-object motion.

4. CONCLUSIONS

The proposed approach has been developed relying on the analysis and generalization of successful approaches to describing nature. Although the development of classical physics did not require any transformation of information, because it relied on models in Euclidean space, the further development of physics was complicated by the fact that it became difficult to describe the motion of material objects within the framework of the classical space. Because of this, various operators transforming information into the space understandable for us began to be applied.

The approach developed in this work does not correct the laws of relativistic, structural or quantum physics. The goal of the work was to demonstrate that, in describing nature, one should take into account our subjective capabilities of thinking and modeling, which work well only within the framework of Euclidean space. For this reason, various methods of transforming information from other spaces at present play an essential part in the description of the motion of matter. The author hopes that the proposed new view of the progress of physics, taking into account the transformation of information into our understandable Euclidean space, will make the description of nature more comprehensible.

ACKNOWLEDGEMENTS

The author expresses gratitude to many researchers from the Laboratory of Crystal Chemistry and the Laboratory of Physical Chemistry of Nanomaterials at the Institute of Inorganic Chemistry, SB RAS, for valuable remarks and advice.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Prokhorov, A.M. (1971) Big Soviet Encyclopedia (Bolshaya Sovetskaya Entsiklopedia). Sovetskaya Entsiklopedia, Moscow. (In Russian)
[2] Osipova, J.S. (2011) Big Russian Encyclopedia (Bolshaya Russkaya Entsiklopedia). Bolschay Rossiyskaya Enciklopedia, Moscow. (In Russian)
[3] Prokhorov, A.M. (1988) Physical Encyclopedia (Fizicheskaya Entsiklopedia). Sovetskaya Entsiklopedia, Moscow. (In Russian)
[4] Tipler, P.A. and Llewellyn, R.A. (2007) Modern Physics. Vol. 1. Transl. E.M. Leikina, Mir, Moscow, 494.
[5] Feynman, R., Leighton, R. and Sands, M. (1978) Feynman Lectures on Physics. Vol. 2. Transl. J.A. Smorodinsky, Mir, Moscow.
[6] Machotka, R. (2018) Euclidean Model of Space and Time. Journal of Modern Physics, 9, 1215-1249.
http://www.scirp.org/journal/jmp
https://doi.org/10.4236/jmp.2018.96073
[7] Peschevitsky, B.I. (1989) Relativistic Attraction. Chemistry and Life, No. 11, 87-89. (In Russian)
https://doi.org/10.2307/3378615
[8] Stabnikov, P.A. (2018) The Framework in Which Matter Develops. (Ramki v kotorykh razvivaetsia material). Palmarium Academic Publishing, Saarbrücken, Germany. (In Russian)
[9] Tavokol, R. (2009) Geometry of Spacetime and Finsler Geometry. International Journal of Modern Physics A, 24, 1678-1685.
https://doi.org/10.1142/S0217751X09045224
[10] Hohmann, M. (2013) Extensions of Lorentzian Spacetime Geometry: From Finsler to Cartan and Vice Versa. Physical Review D, 87, Article ID: 124034.
https://doi.org/10.1103/PhysRevD.87.124034
[11] Brill, D. and Jacobson, T. (2006) Spacetime and Euclidean Geometry. General Relativity and Gravitation, 38, 643-651.
https://doi.org/10.1007/s10714-006-0254-9
[12] Jonsson, R. (2001) Embedding Spacetime via a Geodesically Equivalent Metric of Euclidean Signature. General Relativity and Gravitation, 33, 1207-1235.
https://doi.org/10.1023/A:1012037418513
[13] Stabnikov, P.A. (2019) A New Geometric Approach to Explain the Features of the Micro World. Natural Science, SCIRP, 11, 246-351.
https://doi.org/10.4236/ns.2019.117024
[14] Solodovnikov, S.F. (2014) The Universality of Crystallography. Journal of Structural Chemistry, 55, S5-S13.
https://doi.org/10.1134/S0022476614070014
[15] Smolegovsky, A.M. (2009) W. L. Bragg and His Role in the Creation of Structural Crystal Chemistry. Institut Istorii Estestvoznaniya i Tekhniki, RAN, Moscow, 199. (In Russian)
[16] Milburn, G. (1975) X-Ray Crystallography. Butterworths (1972), London, Transl. N.S. Andreeva, Mir, Moscow, 256. (In Russian)
[17] Vainshtein, B.K. (1979) Modern Crystallography (Sovremennaya Kristallografiya). Vol. 1. Nauka, Moscow, 383. (In Russian)
[18] Guinier, A. (1961) Theorie et Technique de la Radiocristallographie. Transl. from French N.V. Belov, Fismatlit, Moscow, 604.
[19] Davydov, A.S. (1982) Prospects of Quantum Physics (Perspektivy Kvantovoy Physiki). Naukova Dumka, Kiev, 551. (In Russian)
[20] Landau, L.D. and Lifshitz, E.M. (2001) Quantum Mechanics (Kvantovaya Mekhanika). Vol. 3. Fizmatlit, Moscow, 803. (In Russian)
[21] Martinson, L.K. and Smirnov, E.V. (2009) Quantum Physics (Kvantovaya Fisika). MGU, Moscow, 527. (In Russian)
[22] Tsipenyuk, J.M. (2006) Quantum Micro- and Macrophisics (Kvantovaya mikro-i makrofisika). Fismatkniga, Moscow, 638. (In Russian)
[23] Mayer, I. (2006) Selected Chapters of Quantum Chemistry (Izbrannye glavy kvantovoy khimii). BINOM, Moscow, 384. (In Russian)
[24] Gelman, G. (2012) Quantum Chemistry (Kvantovaya khimiya). BINOM, Moscow, 533. (In Russian)
[25] Tsirelson, V.G. (2010) Quantum Chemistry (Kvantovaya khimiya). BINOM, Moscow, 495. (In Russian)
[26] Bohm, D. (1952) A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables. Parts I and II. Physical Review, 85, 166-193.
https://doi.org/10.1103/PhysRev.85.180
[27] Greenstein, G. and Zajonc, A. (2012) The Quantum Challenge. Modern Research on the Foundations of Quantum Mechanics. Transl. V.V. Aristova, A.V. Nikulova. Intellekt, Moscow, 431.
[28] Stabnikov, P.A. (2019) Discussion of the Book “The Quantum Challenge”. Natural Science, SCIRP, 11, 301-306.
https://doi.org/10.4236/ns.2019.1111032
