Entropy and Cosmological Constant of a Universe Calculated by Means of Dimensional Analysis, Margolus-Levitin Theorem and Landauer’s Principle
1. Introduction
The Standard Model of elementary particles and the theory of General Relativity have contributed to a deep understanding of natural phenomena.
As is well known, these theories contain constants, more or less fundamental, whose values cannot be derived from the fundamental principles of the theories themselves and must be entered by hand.
Whatever the origin of our universe, the values of these constants must reflect the nature of fundamental interactions, their intensities, their mutual relations and the structure of our universe.
Combinations of these constants can provide dimensionless quantities, whose values can suggest energy scales or some other key variable, characterizing different behaviors of the elementary constituents of the universe, and can suggest different mathematical approaches for their study. An example is the fine structure constant $\alpha = e^2/\hbar c$ (in Gaussian units), introduced by Arnold Sommerfeld in 1916.
We do not have a physics that describes the universe before the Planck era; what we know is that, in correspondence with the differentiation of the fundamental forces through symmetry breakings, their respective constants remained fixed.
Moving from the microcosm to the large scale structure of the universe, current observations of Supernovae Ia indicate accelerated expansion of the Universe. This accelerated expansion is interpreted as due to the presence of a vacuum energy, or dark energy, generated by something called quintessence or by the presence, in the General Relativity equations, of a positive Λ cosmological constant, corresponding to a negative pressure.
The idea of interpreting the cosmological term as a vacuum energy density belongs to Zeldovich [1]. However, direct application of this idea leads to puzzling results. Quantization introduces zero-point vacuum energies for quantum fields and therefore, in principle, can affect the geometry through Einstein’s equations. Unfortunately, the theoretical value of the cosmological constant, predicted by quantum field theories, exceeds the observed value by factors ranging from $10^{60}$ to $10^{120}$.
The cosmological constant connects the large scale structure of the universe with the subatomic vacuum. Why is the net vacuum energy density finite, positive and so very small? These questions, with the huge difference between theory and experiment, represent what is named the cosmological constant problem [2] .
In this paper, by means of dimensional analysis, we consider a spherically symmetric universe with a mass $M = c^3/2GH$ and radius equal to $c/H$, where H is the Hubble constant, c the speed of light and G the Newton gravitational constant. Assuming H proportional to 1/t, where t is the time, this universe evolves with continuous creation of matter at a rate such as to maintain, during the expansion, a density always equal to the critical density $\rho_c = 3H^2/8\pi G$.
This scenario recalls the one proposed in 1948 by F. Hoyle [3], H. Bondi and T. Gold [4] as an alternative to the Big Bang theory.
In this scenario, known as the continuous creation or steady state theory, the universe expands and a continuous creation of matter keeps its density constant.
The second step is to consider this universe as a computer [5] [6] which, in its temporal evolution, processes and stores information and whose maximum speed of dynamical evolution must satisfy the Margolus-Levitin theorem [7] [8] [9] . This theorem imposes a fundamental limit on quantum computing and affects all possible means by which a calculation can be performed.
It is then possible to calculate the maximum number of events that occur in the universe and to introduce an entropy using Landauer’s principle [10] [11] [12] , according to which logical operations in a computer necessarily require energy dissipation and an increase in entropy.
Finally, a hypothesis on the source that feeds the energy of the universe, together with a dimensional analysis, allows us to obtain an expression for the cosmological constant as a function of the Hubble constant and the speed of light.
2. The Dimensional Analysis
In the search for dimensionless quantities or, in any case, for dimensional quantities that can serve as scale quantities in elementary physical phenomena, dimensional analysis can be very useful. Although dimensional considerations can possibly produce only a result, without explaining its physical origin, they may be heuristically very effective in suggesting links otherwise difficult to imagine.
An example is provided by the Planck scale. By combining the speed of light, Newton’s gravitational constant and Planck’s constant, one gets the so-called Planck units. Since these quantities are expressed through Newton’s constant, which concerns the gravitational interaction, and the Planck constant, a fundamental quantity of Quantum Mechanics, their orders of magnitude suggest the scale of energies at which the Standard Model and General Relativity should be merged into a single coherent theory.
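As a quick numerical illustration of the scale quantities mentioned above, the Planck units can be computed directly from c, G and $\hbar$. A minimal Python sketch, using rounded CODATA values for the constants:

```python
import math

# Rounded CODATA values (SI units)
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

l_P = math.sqrt(hbar * G / c**3)   # Planck length, m
t_P = math.sqrt(hbar * G / c**5)   # Planck time, s
m_P = math.sqrt(hbar * c / G)      # Planck mass, kg
E_P = m_P * c**2                   # Planck energy, J

print(f"l_P = {l_P:.3e} m")   # ~1.6e-35 m
print(f"t_P = {t_P:.3e} s")   # ~5.4e-44 s
print(f"m_P = {m_P:.3e} kg")  # ~2.2e-8 kg
print(f"E_P = {E_P:.3e} J")   # ~2.0e9 J
```

These values are used repeatedly in the estimates of the following sections.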
Let us consider a physical system described by the variables $q_1, q_2, \ldots, q_n$ through relations of the type

$f(q_1, q_2, \ldots, q_n) = 0$ (1)

which in general are not known a priori. As is well known, dimensional analysis attempts to arrange these variables in dimensionless groups $\pi_1, \pi_2, \ldots, \pi_l$, with $l < n$, where the $\pi_i$ are combinations of the $q_j$. In the context of dimensional analysis, the $\pi$ theorem, or Buckingham theorem [13] [14], is of considerable importance.

The theorem states that an equation of n arguments as Equation (1), dimensionally homogeneous with respect to m fundamental units (as, for example, length, time and mass in mechanics), may be expressed as a relationship between $l = n - m$ dimensionless variables having the form:

$\phi(\pi_1, \pi_2, \ldots, \pi_l) = 0$ (2)

The $\pi_i$ can be chosen by putting generically

$\pi = q_1^{a_1} q_2^{a_2} \cdots q_n^{a_n}$ (3)

Expressing the $q_j$ dimensionally as products of the fundamental units, one obtains the values of the m exponents $a_j$ that make $\pi$ dimensionless, while the remaining $l = n - m$ values of $a_j$ are arbitrary. In this way one obtains l independent dimensionless quantities.
For example, let us consider four fundamental constants of physics: the light velocity c, the Planck constant $\hbar$, the Newton gravitational constant G and the elementary charge e. Taking as fundamental units mass, length and time, we try to determine the dimensionless quantity $\pi$ given by:

$\pi = c^{a_1} \hbar^{a_2} G^{a_3} e^{a_4}$ (4)

In this case the Buckingham theorem provides only one dimensionless quantity:

$\pi = e^2/\hbar c$ (5)

that is, the fine structure constant $\alpha$ (in Gaussian units). If $\pi$ were equal to 1, e would be equal to the Planck charge $q_P = (\hbar c)^{1/2}$.
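The exponent-finding procedure behind this example can be automated: the admissible exponent vectors form the null space of the dimension matrix whose columns are the variables and whose rows are the fundamental units. A self-contained sketch in exact rational arithmetic, using the Gaussian-units dimension $[e] = M^{1/2}L^{3/2}T^{-1}$, which recovers the exponents of $e^2/\hbar c$:

```python
from fractions import Fraction

# Dimension matrix: rows = (M, L, T), columns = (c, hbar, G, e)
cols = ["c", "hbar", "G", "e"]
A = [[Fraction(0),  Fraction(1),  Fraction(-1), Fraction(1, 2)],
     [Fraction(1),  Fraction(2),  Fraction(3),  Fraction(3, 2)],
     [Fraction(-1), Fraction(-1), Fraction(-2), Fraction(-1)]]

def null_space(rows, n):
    """Kernel basis of a small rational matrix via Gauss-Jordan elimination."""
    m = [row[:] for row in rows]
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                fac = m[i][col]
                m[i] = [a - fac * b for a, b in zip(m[i], m[r])]
        pivots.append(col)
        r += 1
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -m[i][f]
        basis.append(v)
    return basis

kernel = null_space(A, 4)
assert len(kernel) == 1              # n - m = 4 - 3 = 1 dimensionless group
exponents = [2 * x for x in kernel[0]]   # normalise so the exponent of e is 2
print(dict(zip(cols, exponents)))    # exponents of c, hbar, G, e -> e^2/(hbar c)
```

The single kernel vector reproduces Equation (5): exponent 2 for e, -1 for $\hbar$ and c, 0 for G.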
3. A Cosmological Mass
Let us now look for dimensionless quantities that can be derived from the product:

$\pi = M^{a_1} H^{a_2} c^{a_3} \hbar^{a_4} G^{a_5}$ (6)

where M is a mass and H is a quantity having the dimension of the inverse of a time. By imposing that $\pi$ be dimensionless and expressing M, H, c, $\hbar$ and G as products of powers of mass, length and time, one obtains the system:

$a_1 + a_4 - a_5 = 0, \quad a_3 + 2a_4 + 3a_5 = 0, \quad a_2 + a_3 + a_4 + 2a_5 = 0$ (7)

There are two independent dimensionless quantities. The first, $\pi_1$, obtained, for example, for $a_1 = 0$ and $a_2 = 1$, is the product of H by the Planck time $t_P = (\hbar G/c^5)^{1/2}$:

$\pi_1 = H t_P = H (\hbar G/c^5)^{1/2}$ (8)

For $H = 1/t_P$, $\pi_1 = 1$, while, if we interpret H as the Hubble constant, whose current value is $H_0 \simeq 2.2 \times 10^{-18}\ \mathrm{s^{-1}}$, the current value of $\pi_1$ is about $10^{-61}$.
The second dimensionless quantity, obtained for $a_4 = 0$ and $a_1 = 1$, is:

$\pi_2 = 2GMH/c^3$ (9)

(the numerical factor, which dimensional analysis leaves arbitrary, is chosen so that $c/H$ coincides with the Schwarzschild radius of M). Relation (9), for $\pi_2 = 1$, identifies the mass:

$M = c^3/2GH$ (10)

If we put $H = 1/t_P$, we get $M = m_P/2$, of the order of the Planck mass $m_P = (\hbar c/G)^{1/2}$. This leads us to consider H as a cosmological quantity. Then M must also have a cosmological character.

Hereafter we identify H as the Hubble constant and we denote the values of the various quantities in the current epoch with the subscript 0. For $H = H_0$ one gets:

$M_0 = c^3/2GH_0 \simeq 9 \times 10^{52}\ \mathrm{kg}$ (11)

a mass equivalent to about $10^{80}$ baryons, the current estimated mass of the universe.
If we introduce the Hubble time $t_H = 1/H$, whose current value is $t_{H_0} \simeq 4.5 \times 10^{17}\ \mathrm{s}$, Equation (10) can be written as

$M = (c^3/2G)\, t_H$ (12)

or, in terms of the Planck power $P_P = c^5/G = E_P/t_P$:

$Mc^2 = \tfrac{1}{2} P_P\, t_H$ (13)

Equation (13) can be interpreted by saying that, in a universe having an age $t_H$, the total energy is obtained assuming that in every quantum of time $t_P$ an energy $E_P/2$ is generated, where $E_P$ is the Planck energy.
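A quick numerical check of the figures quoted in this section, assuming the mass formula $M = c^3/2GH$ and a round value $H_0 \simeq 2.2 \times 10^{-18}\ \mathrm{s^{-1}}$:

```python
# Rounded constants (SI); H0 is an assumed round value of the Hubble constant
c = 2.998e8          # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2
H0 = 2.2e-18         # s^-1 (~68 km/s/Mpc)
m_baryon = 1.67e-27  # proton mass, kg

M0 = c**3 / (2 * G * H0)   # mass of the model universe today, kg
t_H = 1 / H0               # Hubble time, s
P_P = c**5 / G             # Planck power, W

print(f"M0 = {M0:.2e} kg")                         # ~9e52 kg
print(f"baryon equivalent = {M0/m_baryon:.1e}")    # ~5e79, of order 10^80
print(f"M0*c^2 / (P_P*t_H) = {M0*c**2/(P_P*t_H):.2f}")  # 0.50, as in Eq. (13)
```

The last line verifies that the total energy equals half the Planck power times the Hubble time, consistently with Equation (13).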
4. The Cosmological Mass and an Entropy of Evolution
Consider an isolated physical system and the problem of counting the maximum number of distinct states through which this system can evolve in a given time, i.e. its maximum speed of dynamical evolution. This process can be considered a kind of computation and thus it is constrained by the Margolus-Levitin theorem [7] [8] [9].
This theorem sets a limit on the speed with which a physical system can evolve from an initial state to a final state orthogonal to it.
Assume that the system has a discrete energy spectrum and states numbered so that the energy eigenvalues $E_n$, associated with the states $|E_n\rangle$, are non-decreasing. Choose the zero of energy so that $E_0 = 0$ and let E be the average energy of the system. Let $\tau_\perp$ be the time it takes for an arbitrary quantum state $|\psi_0\rangle$, at time $t = 0$, to evolve into an orthogonal state. Then, Margolus and Levitin showed that

$\tau_\perp \ge h/4E$ (14)

They also addressed the question of how quickly a quantum system could evolve through a long sequence of N mutually orthogonal states and they showed that in this case one has

$\tau_\perp \ge \dfrac{N-1}{N}\,\dfrac{h}{2E}$ (15)

Thus, if one considers a long evolution of the system ($N \to \infty$) and fixes the zero of the energy at the fundamental state, the minimum time $\tau_\perp$, required for a transition between two orthogonal states, must satisfy the relationship:

$\tau_\perp \ge h/2E$ (16)
According to their theorem, the number of operations carried out in a given time interval, using a given amount of energy, by any device or any process that can perform a calculation, cannot be greater than $2/h \simeq 3 \times 10^{33}$ operations per second, per joule.
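The bound per joule quoted above follows from the long-sequence form of the theorem, Equation (16); a one-line numerical check:

```python
h = 6.626e-34   # Planck constant, J s

def max_ops_per_second(E):
    """Margolus-Levitin bound in the long-sequence form of Eq. (16):
    at most 2E/h orthogonal transitions per second for average energy E (J)."""
    return 2 * E / h

# One joule of energy supports at most ~3e33 operations per second
print(f"{max_ops_per_second(1.0):.2e}")   # ~3.0e33
```

(Using the single-transition bound (14) instead would double this number; the text works with the long-sequence limit.)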
As a physical system we now consider a spherically symmetric universe made up of ordinary matter, dark matter, radiation and dark energy, whose total energy is $E = Mc^2$ and whose radius R is equal to its Schwarzschild radius $R_S = 2GM/c^2 = c/H$. Then its mass $M = c^3/2GH$ is distributed with an average density $\rho$ equal to the critical density $\rho_c$ and one has:

$\rho = \dfrac{3M}{4\pi R^3} = \dfrac{3H^2}{8\pi G} = \rho_c$ (17)
The dependence of the total mass M and of the radius R of this universe on time t occurs through the Hubble constant H. Since $H \propto 1/t$, the mass increases linearly with t, while the density, which is always critical, decreases as $1/t^2$. If a cutoff is imposed at the Planck time, for the total energy we have, for $t = t_P$:

$E(t_P) = \dfrac{c^5 t_P}{2G} = \dfrac{E_P}{2}$ (18)

where $E_P = (\hbar c^5/G)^{1/2}$ is the Planck energy and, for the critical density:

$\rho_c(t_P) = \dfrac{3}{8\pi G t_P^2} = \dfrac{3}{8\pi}\,\dfrac{c^5}{\hbar G^2}$ (19)
The evolution over time of this universe, having currently the mass $M_0$ and the radius $R_0 = c/H_0$, can be understood as a real computation, a continuous processing of data. This universe stores and processes information. It is a giant quantum computer in which the hardware is the universe itself and the laws of physics are the software. Its computational activity must therefore remain within the limits imposed by the laws of physics and by the initial conditions. Then, treating the whole universe as a computer [5] [6], one can apply to it the Margolus-Levitin theorem.
This universe always expands at the same speed and $t_H = 1/H$ is a measure of its age. Putting $t = t_H$, assuming that the energy E appearing in formula (16) is, instant by instant, equal to $Mc^2$ and that, excluding the period of a possible inflationary expansion, the evolution of this universe occurs slowly through states of equilibrium, we have, according to (10) and (16):

$\tau_\perp \ge \dfrac{h}{2Mc^2} = \dfrac{hG}{c^5 t} = 2\pi\,\dfrac{t_P^2}{t}$ (20)

Then the maximum number of transitions of the system between $t_P$ and $t_H$ is given by

$N = \displaystyle\int_{t_P}^{t_H} \dfrac{c^5 t}{hG}\, dt \simeq \dfrac{c^5 t_H^2}{2hG} = \dfrac{1}{4\pi}\left(\dfrac{R}{l_P}\right)^2$ (21)

where $l_P = (\hbar G/c^3)^{1/2}$ is the Planck length and $R = c\,t_H$.

The maximum number of elementary operations or events that can occur in the spacetime volume is then bounded by the surface area $A = 4\pi R^2$ of the spatial volume:

$N = \dfrac{A}{16\pi^2 l_P^2}$ (22)
The result given by (22) agrees with the holographic principle [15] , according to which the maximum amount of information, stored in a region of space, scales as the area of its two-dimensional surface, like a hologram.
The microscopic information stored in this universe, which results from its continuous processing activity, is inaccessible to an observer. Now, in Thermodynamics, one meets a similar situation: many internal microstates of a system are all compatible with the single observed macrostate. Thermodynamic entropy is a measure of missing information and quantifies this correspondence. So we attribute to our universe an entropy connected with its continuous data processing activity.
Now, when an isolated quantum system evolves, it always does so reversibly; we assume instead that the evolution of this universe, as a whole, takes place irreversibly as in the real universe.
In 1961 Rolf Landauer discovered [10] [11] [12] that logical operations that get rid of information, such as erasures, necessarily require dissipation of energy in a computer. Erasures transform information from an accessible to an inaccessible form, with a rise in entropy, whereas logical operations that can be reversed do not lead to a rise in entropy.
The link between computational irreversibility and information loss is given by Landauer’s principle. According to this principle, for each bit of information eliminated, the entropy of the environment grows by at least $k_B \ln 2$, where $k_B$ is the Boltzmann constant, while the energy dissipated is at least equal to $k_B T \ln 2$, where T is the temperature of the environment in which the computer is located.
Then, for example, in the real primordial universe at the Planck temperature $T_P = E_P/k_B$, considered as a computer, the deletion of one bit of information would result in a dissipation of energy, within the universe itself, equal to $k_B T_P \ln 2 = E_P \ln 2 \simeq 1.4 \times 10^{9}\ \mathrm{J}$.
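Landauer’s bound is easy to evaluate numerically; the sketch below compares an ordinary computer at room temperature with the Planck-temperature case just mentioned (rounded constants):

```python
import math

k_B = 1.381e-23   # Boltzmann constant, J/K
T_room = 300.0    # room temperature, K
T_P = 1.417e32    # Planck temperature, K

def landauer_cost(T):
    """Minimum energy dissipated when one bit is erased at temperature T (K)."""
    return k_B * T * math.log(2)

print(f"room temperature:   {landauer_cost(T_room):.2e} J")  # ~2.9e-21 J
print(f"Planck temperature: {landauer_cost(T_P):.2e} J")     # ~1.4e9 J
```

The ratio between the two cases is simply the ratio of the temperatures, about $5 \times 10^{29}$.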
We now introduce, for our hypothesized universe, an evolution entropy $S_{ev}$ that counts the number of its internal transitions. Thus, invoking a sort of Landauer principle, we set:

$S_{ev} = k_B \ln 2 \cdot N$ (23)

where the $k_B \ln 2$ factor is the average entropy increase per transition. We can then write:

$S_{ev} = \dfrac{k_B \ln 2}{16\pi^2}\,\dfrac{A}{l_P^2}$ (24)

Apart from the factor $\ln 2/4\pi^2$, formula (24) coincides with the entropy formula of Bekenstein-Hawking, $S_{BH} = k_B A/4 l_P^2$, for a stable Schwarzschild black hole with mass M and radius $R = 2GM/c^2$.
Both the Planck constant $\hbar$ and the gravitational constant G appear in the entropy formula. The presence of $\hbar$ derives from the application of the Margolus-Levitin theorem, that of G from the dependence of M on G, given by Equation (10). Furthermore, $S_{ev}$ diverges in the limit $\hbar \to 0$. This fact suggests that it is purely a quantum effect.
The current numerical value of this form of entropy is:

$S_{ev}(t_0) = \dfrac{k_B \ln 2}{4\pi}\left(\dfrac{t_{H_0}}{t_P}\right)^2 \simeq 4 \times 10^{120}\, k_B \simeq 5 \times 10^{97}\ \mathrm{J\,K^{-1}}$ (25)

At the Planck time, assuming a single transition ($N = 1$), one has:

$S_{ev}(t_P) = k_B \ln 2$ (26)
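Assuming $N \simeq (1/4\pi)(t_H/t_P)^2$ for the number of transitions and $S_{ev} = k_B \ln 2 \cdot N$, as used in this section, the present-day figures can be estimated as follows (the value of $H_0$ is an assumed round figure):

```python
import math

# Rounded constants (SI); H0 is an assumed round value
hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8
k_B = 1.381e-23
H0 = 2.2e-18

t_P = math.sqrt(hbar * G / c**5)      # Planck time, s
t_H = 1 / H0                          # Hubble time, s

N0 = (t_H / t_P)**2 / (4 * math.pi)   # number of transitions since t_P
S_ev = k_B * math.log(2) * N0         # evolution entropy, J/K

print(f"N0   ~ {N0:.1e}")             # of order 10^120 transitions
print(f"S_ev ~ {S_ev:.1e} J/K")       # ~5e97 J/K, i.e. ~4e120 k_B
```

Both numbers scale as $(t_H/t_P)^2 \sim 10^{121}$, the square of the dimensionless ratio introduced in Section 3.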
The entropies $S_{ev}$ and $S_{BH}$ behave very differently from thermodynamic entropy. The entropy of ordinary matter increases with the volume and is proportional to the mass. The proportionality of $S_{BH}$ to the square of the Schwarzschild radius and the proportionality of $S_{ev}$ to the square of the radius of the universe show that $S_{BH}$ and $S_{ev}$ are proportional to the square of the mass.
The analyzed universe has an energy that is initially zero and increases linearly with time. What is the source of this energy?
Suppose that a scalar field creates an expanding spherical vacuum bubble. The gravitational energy density associated with the vacuum is then given by:

$\varepsilon_\Lambda = \dfrac{\Lambda c^4}{8\pi G}$ (27)

To a variation dr of the bubble radius corresponds, in this simple topology, an increase in the vacuum energy given by:

$dE_\Lambda = \varepsilon_\Lambda\, 4\pi r^2\, dr = \dfrac{\Lambda c^4}{2G}\, r^2\, dr$ (28)

The vacuum energy of a bubble of radius R is therefore:

$E_\Lambda = \dfrac{\Lambda c^4 R^3}{6G}$ (29)

This energy must tend to zero as $R \to 0$; since the only geometric quantity that appears in the formula is the radius of the sphere, the simplest hypothesis is to consider Λ proportional to the curvature of the spherical bubble and set $\Lambda = k/R^2$, with k constant. We then have:

$E_\Lambda = \dfrac{k c^4 R}{6G}$ (30)

The mass equivalent to this energy is:

$M_\Lambda = \dfrac{E_\Lambda}{c^2} = \dfrac{k c^2 R}{6G}$ (31)

and, for $R = c/H$ and $k = 3$:

$M_\Lambda = \dfrac{c^3}{2GH}$ (32)

where $M_\Lambda$ is just the mass given by formula (10) and found by means of dimensional analysis.
Finally, for the cosmological constant, for $R = c/H$, one gets:

$\Lambda = \dfrac{3H^2}{c^2}$ (33)

At the Planck time, with $H = 1/t_P$, one has:

$\Lambda(t_P) = \dfrac{3}{c^2 t_P^2} = \dfrac{3}{l_P^2} \simeq 10^{70}\ \mathrm{m^{-2}}$ (34)
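Assuming the de Sitter-like relation $\Lambda = 3H^2/c^2$ discussed in this section, the present-day value can be evaluated at once; it is of the same order as the observed value $\Lambda \simeq 1.1 \times 10^{-52}\ \mathrm{m^{-2}}$:

```python
c = 2.998e8     # speed of light, m/s
H0 = 2.2e-18    # assumed round value of the Hubble constant, s^-1

Lam0 = 3 * H0**2 / c**2   # Lambda = 3 H^2 / c^2 evaluated today, m^-2
print(f"Lambda_0 ~ {Lam0:.1e} m^-2")   # ~1.6e-52 m^-2
```

The agreement in order of magnitude with observation is the point of the comparison; the exact numerical coefficient depends on the identification made above.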
If it were possible to connect the hypothetical scalar inflaton field with the cosmological constant, the monstrously large value of the cosmological constant at the time of the Big Bang would be a great engine to trigger an inflation process.
By means of dimensional analysis, we now show how the dependence of Λ on H and c, having the form

$\Lambda \propto \dfrac{H^2}{c^2}$ (35)

is plausible.
5. A Cosmological Constant through Dimensional Analysis
Let us consider the product

$\pi = E^{a_1} \Lambda^{a_2} c^{a_3} G^{a_4}$ (36)

where E is an energy and Λ is the cosmological constant. Through the Buckingham theorem one gets the dimensionless quantity:

$\pi = \dfrac{E G \sqrt{\Lambda}}{c^4}$ (37)

The quantity

$E = \dfrac{c^4}{G\sqrt{\Lambda}}$ (38)
is then dimensionally an energy. If we assume that this energy is distributed within a sphere of radius
, we get the energy density:
(39)
If we identify
with the vacuum energy density
(40)
we obtain the cosmological constant Λ as a function of the Hubble constant and the light velocity:
(41)
For
, one would have
(42)
and, from Equation (40) and Equation (41), the energy density and the equivalent mass density would be respectively:
(43)
and
(44)
from which:
(45)
and
(46)
Now
(47)
where
is the contribution of ordinary matter, dark matter and radiation; since
, we would get
(48)
Then, the vacuum energy would make up 56 percent of the total energy, while the remaining 44 percent would be the contribution of ordinary matter, dark matter and radiation.
According to current observations, dark energy accounts for about 68 percent of all the energy of the real universe.
We remark that the causal set theory of quantum gravity [16], which assumes spacetime to be discrete at the Planck scale, predicts that the cosmological constant varies stochastically at all epochs, with an amplitude depending on $H^2$, as in Equation (35).
A formula for Λ substantially equal to Equation (41) was derived by Gurzadyan and Xu [17], starting from an alternative view of the cosmological and vacuum energy. Their formula reads:
(49)
where a is the scale factor. Putting
, one obtains:
(50)
from which:
(51)
to compare with formula (41).
From Equation (46) we have that this quantity also remains constant during the expansion of the universe. At the Planck era, for example, Equation (38) becomes:
(52)
If we assume that this energy is distributed within a sphere of radius equal to (
), we obtain an expression of the vacuum energy density at Planck time:
(53)
from which:
(54)
By considering the density of mass corresponding to vacuum energy, we have, for
:
(55)
6. Conclusions
Using dimensional analysis, we considered a universe having mass $M = c^3/2GH$ and radius $R = c/H$, that evolves according to a Bondi-Gold-Hoyle scenario, with continuous creation of matter at a rate such as to maintain, during the expansion, a density always equal to the critical density. By means of the Margolus-Levitin theorem and Landauer’s principle, we have assigned to this universe an entropy associated with its evolution over time.
The density is always critical and the corresponding density parameters do not depend on the time.
In the calculation of entropy, the result expressed by formula (23) and consistent with the holographic principle strictly depends on the hypothesis that the radius grows linearly with time. This hypothesis involves an expansion of the hypothesized universe at constant speed, with a radius R given by $R = c/H$ and, therefore, characterized by a deceleration parameter $q = 0$ and by an expansion without acceleration. However, this hypothesis was not explicitly used in the other calculations, so an estimate of the value of q can be obtained from the relation:
(56)
which gives the negative value
(57)
that results in an expansion that accelerates.
Considering the link between R and the deceleration parameter q:
(58)
and assuming
, the current value of this acceleration is:
(59)
a value of the same order of magnitude as the critical acceleration, equal to $a_0 \simeq 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}$, of Milgrom’s MOND theory.
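The acceleration scale $cH_0$ underlying this comparison is easy to check (Equation (59) itself is not reconstructed here; $cH_0$ only sets the order of magnitude):

```python
c = 2.998e8        # speed of light, m/s
H0 = 2.2e-18       # assumed round value of the Hubble constant, s^-1
a_MOND = 1.2e-10   # Milgrom's critical acceleration, m/s^2

a_H = c * H0       # characteristic cosmological acceleration, m/s^2
print(f"c*H0 = {a_H:.1e} m/s^2")              # ~6.6e-10 m/s^2
print(f"ratio to MOND a0: {a_H/a_MOND:.1f}")  # same order of magnitude
```

Any acceleration of the form $|q| c H_0$ with $|q|$ of order unity therefore falls within an order of magnitude of $a_0$.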
At the Planck time, the non-zero value of q, given by formula (57), would involve the acceleration
(60)
The analogous structure of the formulas for the entropy of a black hole and for $S_{ev}$
is related to the fact that even a universe can have an event horizon. An accelerating universe traps light as a black hole does, in the sense that it leaves in the dark everything beyond a certain distance.
The crucial difference between a cosmological event horizon and the event horizon of a black hole is that in a black hole spacetime collapses towards a singularity, while in an accelerating universe all space expands and each observer has his own event horizon. Any radiation emitted beyond a certain distance will never reach the observer, while any radiation emitted inside a black hole horizon falls towards its interior. Black holes evaporate; universes don’t.