1. Introduction: The Origin of the Big Bang and Its Evolution
The present cosmological description is, to say the least, unsatisfactory. Essential ingredients appear to be ad hoc recipes: inflation, missing mass and dark energy. The main ingredient is General Relativity (GR) [1], which however was founded before Hubble's [2] discovery, so that a theory essentially meant to describe a static Universe was forced to incorporate this revolutionary effect via the notorious cosmological constant. This artifact, of no physical origin, then entails in the first Friedman [3] equation (see below) the appearance of the even more mysterious dark energy, which has so far eluded any experimental evidence. In addition a great part of the dynamics, up to superclusters, has been described in Newtonian terms, which has thus legitimated the existence of missing mass. Let us also recall that two essential hypotheses are at the foundations of our theoretical framework: isotropy and homogeneity. Whereas the first is backed up by evidence, the second is a reasonable but disputable ingredient, although it has the great advantage of allowing simpler calculations and is more in line with a Copernican way of thinking. However it implies the absence of pressure gradients, and this seems to represent another formidable problem. As a matter of fact, whereas in Newtonian mechanics the repulsive role of pressure is paramount in the description of stellar formation, no such role is played in GR. Quite the opposite: pressure adds to the ordinary matter density and increases attraction. But in the end fits demand it to be negative, and hence curiously reminiscent of the "old" (Newtonian) repulsive pressure!
For our purposes the most relevant point is the connection between the self energy and the mass, which implies (following Feynman's conjecture [4]) that the energy needed to assemble many masses from infinity is zero when their self energy provides a deep enough potential well,

Mc^2 − GM^2/R = 0, (1)

and the corresponding strong field parameter is GM/(Rc^2) = 1.
The above requirement is easily understood. A bound gravitational system of whichever content (photons or matter) increases its energy when the interaction distance increases. Since energy must be conserved, this can happen only by allowing for matter (energy) creation to restore the energy balance, as is actually the case. And this is reflected in the strong field parameter. Of course this approach is highly speculative, but "we must remember that here we are not dealing with an ordinary problem but with a cosmological problem" [4]. It is obviously based on the form of the Newtonian potential, probed in familiar contexts, implemented by the b.h. condition. This is, so to say, the opposite of the GR approach, which is in a sense the extension of the weak field Newtonian limit.
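As an order-of-magnitude illustration of the b.h. condition, the strong field parameter can be evaluated with fiducial textbook values for the mass and radius of the observable Universe; these inputs are our assumptions for the sketch, not figures quoted in the text.

```python
# Order-of-magnitude check of the zero-energy ("black hole") condition
# E = M c^2 - G M^2 / R = 0, i.e. s = G M / (R c^2) ~ 1.
# M ~ 1e53 kg and R ~ 1e26 m are fiducial textbook estimates (assumed).
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 1e53        # kg, fiducial mass of the observable Universe
R = 1e26        # m, fiducial radius
s = G * M / (R * c**2)
print(f"strong field parameter s = {s:.2f}")  # of order unity
```

The point of the sketch is only that s comes out of order unity, not its precise value, which depends on the fiducial inputs.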
Our aim will be to see whether this very simple, Newton based model accounts for experimental facts without additional parameters. Indeed, to make a long story short, GR, at least in the form of the Friedman [3] equations, essentially reproduces Newtonian physics apart from the cosmological term.
In addition the strong field parameter intervenes in the problem of inertial forces. Indeed the inertial and gravitational masses are equal if it is unity, i.e. if GM/(Rc^2) = 1. This is only a qualitative argument, which has been detailed for the different situations where inertial forces enter in Refs. [5] [6] [7] [8] [9].
There is an additional relation which deserves attention, at least heuristically. Namely, from the b.h. condition one can derive

(dM/dt) c^2 = c^5/G,

i.e. that the power to create matter in the Universe is constant and equal to that at the Planck time to create radiation. Interesting is also the fact that Planck's constant has disappeared from the expression for the Planck power c^5/G. Of course this is balanced by an equal and opposite contribution from the self energy.
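The constancy of the creation power can be checked numerically against the Planck power c^5/G; the mass and age used below are fiducial values assumed for this sketch, not numbers taken from the text.

```python
# Comparison of the Planck power c^5/G with the average power needed to
# "create" the present mass of the Universe over its age.
# Fiducial inputs (assumed): M ~ 1e53 kg, t0 ~ 13.8 Gyr.
G = 6.674e-11             # m^3 kg^-1 s^-2
c = 2.998e8               # m/s
M = 1e53                  # kg
t0 = 13.8e9 * 3.156e7     # s, ~ age of the Universe
P_planck = c**5 / G       # note: hbar does not appear
P_creation = M * c**2 / t0
print(f"c^5/G      = {P_planck:.2e} W")
print(f"M c^2 / t0 = {P_creation:.2e} W")
```

Both powers come out of order 10^52 W, consistent with the heuristic claim.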
For the present Universe GM_0/(R_0 c^2) ≈ 1, where M_0 and R_0 are the total mass and radius, the subscript 0 standing for the present value of these quantities; when there is no possibility of misunderstanding we write simply M and R.
It is obvious that in a Universe which has expanded, had the mass remained constant as in the standard treatment, the earlier value of the strong field parameter would have been

GM/(Rc^2) > 1, (2)

which implies in turn a negative total energy! To cure this we must admit that the total mass has varied, necessarily increasing by the same amount.1
Such an approach to the varying mass problem has already been proposed in [11], which we follow here in an improved version.
It has been strongly emphasized [12] how close to the critical one the density must have remained in the course of the expansion, otherwise we would not be here. This is reflected in the constancy of the strong field parameter. An additional argument why it cannot depend on time is that the only reasonable possibility would be a dependence on the Hubble parameter; now since H is positive and, as will be shown later, decreasing, this eventuality is discarded.
Clearly the constancy of the strong field parameter implies the cancellation of the acceleration. Indeed, differentiating the energy balance,

−G (dM/dt)/R + (GM/R^2)(dR/dt) + (dR/dt)(d^2R/dt^2) = 0, (3)

where the last term is proportional to the acceleration d^2R/dt^2 and is cancelled by the mass variation dM/dt = M (dR/dt)/R. Thus the matter creation mechanism entering the black hole condition also implies a steady state expansion.
The present treatment will hence be based on Equations (1) and (3).
How the b.h. condition, treated as an equation, compares to the Einstein ones will also be elucidated in the following. The essential point seems to be its implementation of the strong field limit, which appears to be realized in our Universe. How that happens will be shown explicitly for radiation.
2. The Planck Scenario
Vacuum fluctuations, i.e. the appearance of virtual particles, are an essential ingredient of our theoretical armory. For instance in QED a vacuum fluctuation of an e+-e− pair lives, because of the uncertainty principle

ΔE Δt ≈ ħ, (4)

essentially for times Δt ≈ ħ/(2 m_e c^2), and the potential modifies the previous argument only negligibly.2
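A minimal numerical sketch of this standard QED estimate, with the 0.511 MeV electron rest energy as the only input:

```python
# Lifetime of an e+e- vacuum fluctuation from the uncertainty principle,
# dt ~ hbar / (2 m_e c^2): a standard order-of-magnitude estimate.
hbar = 1.0546e-34   # J s
me_c2 = 8.187e-14   # J (electron rest energy, 0.511 MeV)
dt = hbar / (2 * me_c2)
print(f"dt ~ {dt:.1e} s")  # ~ 6e-22 s
```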
According to the prevailing picture the Planck fluctuation should last for times of the order of Planck's time t_P. That this is not so can be seen by considering that the total Planck energy is zero,

M_P c^2 − G M_P^2 / l_P = 0, (5)

i.e. the same condition which applies to our Universe.3 The previous relation can also be easily proven with the explicit form of the Planck quantities.
This backs up one way of deriving the Planck quantities: requiring the Compton wavelength of a particle to coincide with its b.h. radius. Thus the Planck mass corresponds to the energy contained in the minimum quantum radius (it cannot be of smaller dimensions without violating the uncertainty principle), i.e. to the smallest quantum black hole.
Such a configuration is not stable and can thus evolve.
Starting from the Planck scale, where R cannot decrease, this bubble must necessarily have expanded. However an expanding gravitational system of given mass gains energy because of the interaction term, as mentioned above. Thus in order to conserve the total energy, and because of the different role of the mass in the two terms, such an expansion produces a mass increase, i.e. mass must be created.
Notice also (by using ħc ≈ 200 MeV fm = 1) that E_P l_P ≈ 200 MeV fm, which indicates a "strong interaction" relation between gravitational Planck quantities arising just from first principles.
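Both statements — the Planck quantities from the Compton/b.h.-radius coincidence and the "200 MeV fm" product — can be verified directly; this is our sketch using CODATA-like constants.

```python
# Planck quantities from the requirement that the Compton wavelength
# hbar/(m c) coincide with the gravitational radius G m / c^2, plus the
# "strong interaction" product E_P * l_P ~ 200 MeV fm (= hbar c).
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
m_P = (hbar * c / G) ** 0.5   # Planck mass, ~2.2e-8 kg
l_P = hbar / (m_P * c)        # Planck length, ~1.6e-35 m
t_P = l_P / c                 # Planck time, ~5.4e-44 s
E_P_MeV = m_P * c**2 / 1.602e-13   # Planck energy in MeV
l_P_fm = l_P / 1e-15               # Planck length in fm
print(f"m_P = {m_P:.2e} kg, l_P = {l_P:.2e} m, t_P = {t_P:.2e} s")
print(f"E_P * l_P = {E_P_MeV * l_P_fm:.0f} MeV fm")  # ~197
```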
Moreover, interestingly, the black body total energy expression E ∝ T^4 R^3 yields effectively an additional consistency relation, substantiating the previous relation (to within the above numerical approximations). The photon number is thus N_γ ≈ E/(kT). The previous relation should be corrected by including the effective number of constituents which behave like photons above their respective thresholds [14] and hence contribute a sizable amount of energy (e.g. hadrons). However the above connection between energy and photon number can be used as an effective result. In conclusion the strong field parameter is equal to 1 and Feynman's [4] conjecture receives also a microscopic support.
3. The Time of Radiation. The Role of Pressure Gradient
The balance between gravitational attraction of the photon cloud and its pressure has also been considered by Weinberg [14] (“it is the balance between the gravitational field and the outward momentum of the contents of the universe that governs the rate of expansion of the universe”).
Let us start by considering early enough times, i.e. when "photons" are in thermal equilibrium with hadrons through creation and destruction of (typically) nucleon-antinucleon pairs (short for hadrons, quarks, etc. and other degrees of freedom), which add to the photons; the latter however determine the typical orders of magnitude, particle masses being negligible. The photon energy density being that of a black body, ε_γ ∝ T^4, the radiation mass is M = (4π/3) R^3 ε_γ / c^2.
Notice that these relations are based only on the Special Relativity treatment of the black body. But its gravitational self energy in these extreme conditions cannot be neglected. We demand again the whole energy to be zero. The form of the self energy is dictated by the Newton potential.4
In sum, energy conservation can be formulated for primordial "photons" as

Mc^2 − GM^2/R = 0. (6)

The overall total "bare" energy factors out like the mass in the energy conserved Newtonian approach. We then have dR/dt = c, i.e. H = c/R, which is a sort of relativistic Hubble-like relation for the dimensions of the Universe, whose outer radius expands at velocity c. This provides a tentative alternative description where the critical density is halved and no dark energy enters (see below, Friedman equations). Since the Hubble parameter is seen to be determined by highly relativistic galaxies, it seems also from an experimental point of view more adequate to resort to a relativistic description.
Indeed one has for the present epoch a value of H in line with the traditional one, and also the well known relation between temperature and the age of the Universe,

T ∝ 1/√t. (7)
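The scaling behind Equation (7) can be sketched from relations already quoted in the text (black body radiation mass, b.h. condition, outer radius expanding at c); the algebra below is our reconstruction, not the paper's explicit derivation:

```latex
M \propto T^4 R^3 \ \text{(black body)}, \qquad M \propto R \ \text{(b.h. condition)}
\;\Rightarrow\; T^4 R^3 \propto R \;\Rightarrow\; T \propto R^{-1/2},
\qquad R = ct \;\Rightarrow\; T \propto t^{-1/2}.
```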
We can thus summarize our results:
1) the total energy remains equal to zero and the temperature decreases;
2) the bare mass increases, M ∝ R;
3) the photon number increases, N_γ ∝ T^3 R^3 ∝ R^{3/2}.
This provides repulsion. A huge generation of less energetic photons has taken place (because of the deep potential well) and a comparable number of nucleon-antinucleon pairs has been created, which are in thermal equilibrium with them and which annihilate.
From the previous expressions it is immediate to get the constancy of the strong field parameter; as mentioned in the introduction, the cancellation of the acceleration also results. Now

dM/M = 3 dR/R + 4 dT/T,

which corresponds to Equation (3). Therefore the matter creation mechanism entering the black hole condition also implies a steady state expansion. The previous equation also clarifies the role of the expansion (+) and of the temperature (−) in the varying mass. Also with a generic energy density, without an explicit T dependence, the result holds true.
Let us underline once more that this behavior of the photon number is the opposite of the classical black body spectrum where, at constant R, a decreasing temperature implies a decreasing number of photons. The fundamental difference is however that the temperature decrease is due here to an expansion, i.e. to an increase of the "box", and this results in an increase of the photon number.
Here the black body treatment of primordial photons realizes and justifies the previous general results, at variance with GR.
We can conclude that pressure counterbalancing attraction seems to be proven for radiation (so that we do not think it justified to follow GR in adding pressure to the attraction); for later times, when hadrons are formed, the dilution due to the Universe expansion may be compensated by the increase of their number.
This approach differs from the steady state proposal by Hoyle [15], since for us the density variation is fundamental and we give a microscopic justification of particle creation. As mentioned, the constancy of the strong field parameter guarantees the constancy of the inertial mass and presumably disposes of the retardation problem.
4. The Universe Evolution. Baryogenesis
As repeatedly underlined, the relevant point is that the self energy dependence on R is not the same as that of the energy, so that this correction does not factorize. Self energy can then act as a gauge in the reconstruction of the history of the Universe. As a consequence its evolution will be dictated by the black hole condition and will therefore be different from the traditional one determined by the GR equations. In particular this will affect the CMB treatment and its horizon determination. Also the ratio of nucleon to photon number will be shown not to be constant.
Indeed, whereas in going backward in time the black body constraint on matter remains valid (apart from very early times, where it has to be interpreted as "radiation"), for photons it begins to play a role when the photon energy increases, reaching that limit at recombination and matching the treatment of radiation of primordial times.
We try here to outline the behavior of radiation and matter in the course of time (recalling some of the results obtained in [11]). Of course both of them contribute to the gravitational field; due to their different behavior it is however simpler to treat them separately. At present the matter contribution completely realizes the strong field limit, whereas radiation, in spite of the numerical preponderance of photons, yields a negligible contribution. Indeed the total matter energy in the Universe coming from Hubble's law is
(8)
depending on the nucleon energy density and the particle density, m standing here for the nucleon mass and the subscript m for matter.
The present energy density of radiation, coming from the CBR, is
(9)
This yields a total energy of radiation for the Universe at present,
(10)
Thus matter dominates in energy over radiation:
(11)
However the total number of photons is given by the black body formula, and the ratio of photons to nucleons is thus very large.
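For orientation, the conventional black body numbers can be sketched as follows. These are the standard-cosmology estimates (with an assumed fiducial baryon density), not the varying-nucleon-number reconstruction advocated in the text.

```python
# Conventional estimate of the photon and nucleon number densities.
# Fiducial inputs (assumed): T0 = 2.725 K, rho_b ~ 4.2e-28 kg/m^3.
import math
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
k = 1.381e-23       # J/K
T0 = 2.725          # K, CMB temperature
# black body photon number density: n = (2 zeta(3)/pi^2) (kT/hbar c)^3
n_gamma = (2 * 1.20206 / math.pi**2) * (k * T0 / (hbar * c)) ** 3
rho_b = 4.2e-28     # kg/m^3, fiducial baryon mass density
m_p = 1.673e-27     # kg, proton mass
n_b = rho_b / m_p
print(f"n_gamma ~ {n_gamma:.2e} m^-3, n_b ~ {n_b:.2e} m^-3")
print(f"ratio ~ {n_gamma / n_b:.1e}")
```

In the standard picture this ratio comes out of order 10^9 and is taken to be constant in time; the present model disputes precisely that constancy.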
As regards matter, its present dominance is used to reconstruct its importance back to the temperature where photons materialize into it. Therefore:
1) at the Planck time the Universe begins;
2) at the nucleon-antinucleon threshold the number of photons is roughly 10^57, which is also, pleasantly, the number of nucleons obtained in going from the present 10^80 to 10^57 at that time (the radii being in the corresponding ratio).
With these numbers we can give a more direct justification of baryogenesis. Indeed, given the equality of the number of photons and of nucleon-antinucleon pairs at this temperature, a tiny imbalance would be enough to justify the present nucleon dominance. So the explanation of baryogenesis is rather insensitive to the model, provided it reproduces the photon dominance.
3) The next landmark is the electron-positron threshold. Below it electrons and positrons annihilate, leaving a cold Universe where nuclei can start forming. Customarily the photon density is regarded as a reliable quantity; its ratio to the nucleon density is then used to determine the latter and finally the present percentage of the critical density. Of course, since the reconstructed history is different in the present model, that number will also differ. Indeed, when the photon mean free path becomes larger than the Hubble radius, photons decouple from the electrons and the Universe becomes transparent; this decoupling happens roughly at recombination. Moreover baryogenesis (and D production) can be accounted for only if the nucleon density is sufficiently low. In a standard context with constant nucleon number the density at that time would have turned out too big, whereas in the present approach, where the nucleon number varies with time, this finds a natural explanation, in contrast with the smallness of the present nucleonic percentage.
4) Recombination, at about 3000 K (the corresponding time being roughly one order of magnitude bigger than the traditional estimate).5 The remarkable thing is that at recombination the photon and the matter energies equalize, the latter obtained by enforcing the strong field limit for matter. Thus the remark by Weinberg [14], "It is striking that the transition from a radiation to a matter dominated universe occurred at just about the same time that the contents of the universe were becoming transparent to radiation, at about 3000 K. No one really knows why this should be so ...", receives in this approach a natural explanation.
Note also that the photon density is of the order of 10^9 at recombination and of 10^18 at earlier times. The photon to nucleon ratio there, compared with the present estimate 10^7, shows that this ratio is not constant in time, as usually said.
To summarize: from recombination onward we have conservation of the photon number but a decrease of the temperature, back until the different energies equalize. At still earlier times, but after the Planck time, both energy densities increase in the same way because of the self energy constraint, while their number densities vary as
(12)
In conclusion one has a linear connection between matter creation and the dimensions of the Universe throughout, and a linear dependence of the radiation temperature on the radius R back to recombination time and a quadratic one before, Equation (7).
A totally different scenario from the prevailing one.6
5. On the Friedman Equations. Elementary Considerations
The Friedman equations with the cosmological term for a flat Universe read

((da/dt)/a)^2 = (8πG/3)ρ + Λc^2/3 (13)

and

(d^2a/dt^2)/a = −(4πG/3)(ρ + 3p/c^2) + Λc^2/3, (14)

where ρ is the density and a is the scale factor. They are based on the FLRW metric, which will be extensively treated in Section 7, and are the Einstein equations relevant to determine the velocity and acceleration of the Universe.
Let us first underline an obvious fact. Thanks to the Lemaître-Hubble relation, the first two terms of the first equation are forced to have the same R dependence: the dimensions of the sphere around the origin factorize. This means that the relation is independent of scale and may only depend on time. Thus the fact that one obtains the same condition which is regarded as a property of an infinite universe implies that one can use the metric and the equations locally, also for the interior of the finite b.h. bubble.
Thus the fact that the previous equation seems to get support from the Newtonian limit is utterly misleading. Indeed in that case a real constant can be added to the sum of the kinetic and potential energy determining the escape velocity. However no similar role can be attributed to the cosmological term (it is worth recalling that it was indeed introduced in order to provide a stable non expanding Universe, that the solution was found by Friedman not to be stable, and that the cosmological constant was "forced", after the Lemaître-Hubble discovery, to somehow reproduce a repulsive agent). Thus Equation (15) and Equation (16) are substantially different:
(dR/dt)^2/2 − GM/R = E = const (15)

((da/dt)/a)^2 − (8πG/3)ρ = Λc^2/3 (16)
Put another way, one cannot add a constant term to a homogeneous equation without contradicting it.
Let us also remark that pressure enters GR7 so as to increase the effect of the matter density; but then, whether in the combination (ρ + 3p/c^2) or in the cosmological term, it turns out to be necessarily negative in order to account for the experimental data. In this phenomenological approach it is attributed a repulsive role along the Newtonian picture: its gradient balances gravitational attraction, and just from inspection of the previous equations this is consistent with the relativistic Hubble-like equation without any dark energy. It has already been underlined that the b.h. condition halves the potential contribution to the first equation. Indeed, as is more transparent in Newtonian terms, in GR the coefficient of the density comes essentially from a non-relativistic escape velocity v^2 = 2GM/R, where of course v can never attain c, thus doubling the role of the density. This in spite of the fact that the Hubble parameter is essentially of relativistic character. From the b.h. condition GM/(Rc^2) = 1 one immediately obtains a halved role of the density.
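The halving can be made explicit using only relations already quoted in the text (the algebra is our sketch):

```latex
\frac{GM}{Rc^2}=1,\qquad M=\frac{4\pi}{3}\rho R^3
\;\Rightarrow\; \frac{4\pi G}{3}\,\rho R^2 = c^2 ,
\qquad \dot R = c \;\Rightarrow\; H^2=\frac{c^2}{R^2}=\frac{4\pi G}{3}\,\rho ,
```

to be compared with the Friedmann value H^2 = (8πG/3)ρ: the density enters with half the coefficient.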
So dark energy exists only within the standard theoretical framework, and its existence ironically recalls the ether. As a matter of fact the necessity of introducing the cosmological term in the acceleration equation implies an additional contribution of its potential energy. This does not happen in the b.h. model, where the counter-acceleration follows from the varying mass in the energy equation.8
The role of the density in the b.h. model is different from that in the Friedman equations. Indeed, according to our model the density is fixed by the equation, and there is no critical density determining the fate of the Universe, which will expand forever.
6. The Cosmological Term and Vacuum Energy. The Problem of Flatness
Let us consider the first (the energy) Friedman equation, which can be rewritten as

1 = (8πG/3H^2)ρ + Λc^2/(3H^2)

and is said to be valid for all times (see e.g. [17]). It should constrain the amount of matter density and of the elusive dark energy associated to the cosmological constant. In the present matter dominated regime the two terms have the same time dependent behavior; the same holds for the radiation era, although with a different meaning of R and different consequences. This proves that the relation gives only the gross features of our expansion (in the sense that the two previous different solutions are both compatible) and that finer details can only be got from the behavior of the density.
Therefore, if energy conservation (to which the previous condition essentially corresponds) has a meaning at all, i.e. must be valid at any time and not by chance just at the present one, the term with constant Λ would increase in the future due to the factor 1/H^2 and decrease in the past, thus unmistakably violating the above equality.
In fact the previous relation is regarded as a test of no curvature, and at the same time it raises the problem of why space-time, which is the strongest quantity in the Universe development [18], would have so dramatically changed in the course of time due to the time dependence of the second term. This has been overcome (see e.g. [19]) by showing that the effect coming from the cosmological term can be cancelled only by a curvature effect, in turn reexpressed through the second Friedman equation. Its deviation from 1 is then shown to depend on the parameter of the corresponding pressure, which has again been obliged to be negative. This leads to the stability of the solution. Thus the space curvature peculiar to GR can be reconciled with flatness simply because self energy provides the appropriate counterterm. In other words the popular picture of space deformation by gravity at a local level is completely discarded at universal scales.
It can also be easily recognized that the previous condition is equivalent to the time derivative of the b.h. condition.
Let us also underline that a void Universe [20] would also produce the same flatness result. That can be reconciled with our preceding result in the sense that the total zero energy requirement seems to be somewhat equivalent to the no matter case.
Finally, Perlmutter's [21] worry about the fact that "it seems a remarkable and implausible coincidence that the mass density, just in the present epoch, is within a factor of 2 of the vacuum energy density" finds a natural explanation: the two things are just the same. Indeed the (non existing) cosmological term can be related at most to (a fraction of) the present matter density,
(17)
which, when compared to the primordial quantities,
(18)
would yield
(19)
and, identifying Λ with a fraction of the matter density,
(20)
Thus what is presented (if one assumes the constancy of Λ) as the most disastrous prediction of physics ever, unless various bosons, fermions etc. conspire to cancel these 120 orders of magnitude, seems to find here a natural explanation. Λ could be interpreted at best as part of the rate of particle creation from the vacuum which "accompanies" a varying matter density of the Universe. In other words we have to admit that the "Universe vacuum" may differ from the textbook one.
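The size of the mismatch can be reproduced in a few lines; H_0 = 70 km/s/Mpc is our assumed input for the sketch.

```python
# The famous ~120-orders-of-magnitude mismatch: Planck energy density
# versus the present critical energy density (H0 = 70 km/s/Mpc assumed).
import math
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
l_P = math.sqrt(hbar * G / c**3)        # Planck length
E_P = math.sqrt(hbar * c**5 / G)        # Planck energy
u_planck = E_P / l_P**3                 # Planck energy density, J/m^3
H0 = 70 * 1e3 / 3.086e22                # s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)   # critical mass density, kg/m^3
u_crit = rho_c * c**2                   # critical energy density, J/m^3
print(f"u_planck / u_crit ~ {u_planck / u_crit:.1e}")
```

The ratio lands at roughly 10^122-10^123, in line with the "~120 orders of magnitude" quoted in the literature.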
7. Different Metrics and the Horizon Problem. Inflation? Acceleration?
We now pass to see which conclusions can be reached in a more formal way. The local invariant interval of a homogeneous isotropic expanding Universe reads

ds^2 = c^2 dt^2 − a^2(t) dx^2, (21)

where a(t) is the dimensionless scale factor, which is supposed to convey all of the time dependence of the expansion, x is the comoving coordinate, and the angular dependence has been left out because of isotropy.
We will examine two different implementations:
1) the time dependence of a(t) is reabsorbed in the time term via a rescaling of the invariant interval;
2) a "Painlevé-Gullstrand like" one (or Lemaître-Hubble-Painlevé-Gullstrand), where the same approach used in the centrally symmetric static case [22] is extended to the "Hubble frame". In other words, just as the free falling frame is used to dispose of gravity, allowing the local use of SR, so it happens here for the (infinity of) frames which expand at the Hubble velocity.
Both approaches have advantages for different aspects of the problem and will help to elucidate which features are common to both, and hence physical, and which statements on the contrary have a significance only relative to a given metric.
1) Rescaled Minkowski interval, or the conformally flat coordinates and causality. The problem of the horizon. Inflation?
The previous expression Equation (21) can be rewritten as

ds^2 = a^2(η) (c^2 dη^2 − dx^2),

where dη = dt/a(t). In terms of η the light velocity is always c, but of course the flow of time is altered with respect to "ours".
Notice that ∫ dt/a(t) for a ∝ t (non accelerated expansion) is divergent for early times, unlike for the a ∝ t^{2/3} (decelerated expansion) of the GR treatment. If 1/a is integrable, conformal time has had a beginning and there are regions not causally connected to a common one in the past; if not, this time is infinite in the past and any two finite regions have a common region in the past to which they are causally connected. This coordinate system is hence particularly suited for the discussion of causality, since it is of the Minkowski form, and it puts strong bounds on the behavior of the scaling factor a(t).
Indeed the light velocity is obtained as usual by putting to zero the previous invariant interval, dx/dη = ±c, and in the present model dη = dt/t, i.e. η ∝ ln t.
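The integrability criterion can be illustrated numerically for the two power laws; units with t_0 = 1 are assumed in this sketch.

```python
# Conformal time eta(eps) = integral of dt/a(t) from eps to 1:
# for a = t the integral diverges logarithmically as eps -> 0
# (no particle horizon), while for the standard a = t^(2/3) it stays
# finite (a horizon exists).
import math

def eta_linear(eps):       # a(t) = t       ->  eta = ln(1/eps)
    return math.log(1.0 / eps)

def eta_standard(eps):     # a(t) = t^(2/3) ->  eta = 3(1 - eps^(1/3))
    return 3.0 * (1.0 - eps ** (1.0 / 3.0))

for eps in (1e-3, 1e-6, 1e-9):
    print(f"eps={eps:.0e}: linear {eta_linear(eps):6.2f}, "
          f"standard {eta_standard(eps):5.2f}")
# the 'linear' column grows without bound; the 'standard' one tends to 3
```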
The interpretation of η, the conformal time, is important. It represents the comoving distance traveled by light up to time t. Since two points can communicate at most at light velocity, it therefore represents the dimensions of the region causally connected at time t, thus defining the causal horizon. This 1/t behavior, which "stretches" early times with respect to the present ones, is enough to solve the problem of causality and the connected horizon problem. Indeed it naturally reproduces the inflationary explosion.
As a matter of fact, in the present model the dimensions of the region causally connected, and hence thermalized, at decoupling are much bigger than the comoving Hubble radius which determines (for us) the observable region (see Figure 1).
Figure 1. Starting at the Planck time from a region of Planck dimensions whose world lines expand at velocity c, all subsequent events are causally connected, since they were at the Planck scale. η is the comoving time. The world we experience has always been in causal contact.
Moreover it gives us a measure of the temperature fluctuations at decoupling time, which appear at an angle of about one degree. That the previous relation yields the right order of magnitude should not be considered a failure but, on the contrary, a spectacular semiquantitative confirmation of the present approach over 60 orders of magnitude.
Let us now turn to the problem of the reported acceleration of supernovae. The comoving distance in the standard approach, where a ∝ t^{2/3}, is given by [23]

D = 2 (c/H_0) (1 − 1/√(1+z)),

whereas for "our" a ∝ t by

D = (c/H_0) ln(1+z).
As can easily be seen from Figure 2, they are equal for small z, whereas at intermediate z they are respectively 0.52 [23] in the traditional approach and 0.60 in the present one (in units of c/H_0), and for higher z the latter is always bigger. This is easily explained. The Universe was decelerating in the standard description, and hence light took less time to reach us from distant stars; therefore distant objects would look brighter. Thus in order to justify their apparent faintness one had to invoke an acceleration. On the contrary, in the present approach, since the expansion was a steady one, high z objects are actually farther away than in the traditional scenario and hence fainter. We then see that the time evolution predicted by GR in a standard treatment can only be maintained at the price of introducing extra parameters (particularly dark energy), which are not necessary in the present description. One might object that one parameter has been replaced with another one. This is however not completely true, in the sense that our "creation" mechanism has some microscopic justification, particularly in the radiation era, and is predictive without further adjustments, in addition to accounting for causality, whereas dark energy and inflation seem just questionable recipes.
Figure 2. Comoving distance in the standard approach, where a ∝ t^{2/3} (lower curve), and in the present model, where a ∝ t (upper curve), vs. z. For given z the comoving distances in the present approach are larger, typically by some tens of percent, which corresponds to the differences between traditional predictions and measured luminosities.
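The two comoving distance formulas can be compared directly. The closed forms below are the standard pure-power-law results in units of c/H_0, assumed here to correspond to the curves of Figure 2 (the paper may use slightly different conventions):

```python
# Comoving distance vs. redshift, in units of c/H0, for the two power
# laws discussed in the text.
import math

def D_standard(z):   # a ~ t^(2/3), matter dominated: D = 2(1 - 1/sqrt(1+z))
    return 2.0 * (1.0 - 1.0 / math.sqrt(1.0 + z))

def D_linear(z):     # a ~ t, steady expansion: D = ln(1+z)
    return math.log(1.0 + z)

for z in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"z={z:3}: standard {D_standard(z):.3f}, linear {D_linear(z):.3f}")
# for every z > 0 the linear-expansion distance is the larger one
```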
The red shift is easily accounted for along standard lines. Indeed light emitted at a former time t is affected, with respect to the present time t_0, by the factor a(t)/a(t_0), i.e. time runs slower. Hence the present time frequency is much lower than that at the time of emission.
To conclude, the present 1/t evolution from the Planck fluctuation disposes of inflation. The standard picture of an infinite Universe at the Planck time which, because of the unexpected causal connections, necessarily has to shrink to the inflation "point", which then expands very rapidly for a very short time, is thus overridden.
2) The Lemaître-Hubble-Painlevé-Gullstrand (LHPG) metric
Let us come to another relevant coordinate system: the Lemaître-Hubble-Painlevé-Gullstrand (LHPG) one. This is reproduced by introducing in the invariant interval Equation (21) the proper radial coordinate

y = a(t) x (22)

and the Hubble parameter H = (da/dt)/a.
Thus dy = a dx + H y dt, or

ds^2 = c^2 dt^2 − (dy − H y dt)^2. (23)

So the original space part of the invariant interval has been transformed into a velocity dependent one, in contrast to what has been done in the case of the rescaled Minkowski interval.
Here v = H y represents the velocity of expansion of the point y at the time t.
We can [22] keep the invariant interval in the genuine Painlevé-Gullstrand form. At equal times (dt = 0) the radial coordinate y measures proper distances.
We then have, by putting ds = c dτ, the proper time dτ = dt √(1 − H^2 y^2/c^2). Thus we get a well known but nevertheless relevant result: the more distant the celestial objects under consideration, the higher their velocity and consequently the smaller their time intervals. Thus far away objects live longer than naively expected with respect to our time.
For transverse light propagation an analogous vector composition holds. Radial light propagation is obtained by setting to zero the previous invariant interval, which gives in the (y, t) plane

dy/dt = H y − c,

where the case of backward propagation is considered in order to see objects in the past.
The first relevant result is that the velocity of light, always c in the local frame, changes in space-time as the vector composition with the Hubble expansion velocity. Thus light was more and more deviated in the past because of the increasing role of the frame velocity in the radial and transverse light propagation. This makes clear how the velocity of stars is composed of the recession velocity of the Hubble frame plus the intrinsic subluminal velocity. This explains the so called "superluminal" behavior of galaxies with high z.
The similarity with the P.G. metric ([22] and refs. therein) used in the static spherically symmetric case is manifest. There the free falling frame carrying the absolute time represented the inertial frame with the SR Minkowski interval, locally eliminating gravity. Here the same happens for the outward Hubble velocity. Therefore the LHPG metric represents an infinity of inertial frames and provides a dynamical extension of the Minkowski metric more in the Einstein spirit, this time not "eliminating" gravity but expansion.
The connection between the rescaled and LHPG coordinate systems is immediate. If we rewrite the basic equation in terms of the comoving coordinate x = y/a, we get a dx/dt = −c (where the proper sign of c has been chosen), just reproducing the light velocity of the previous paragraph.
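Backward light propagation in this frame can be sketched for the model's H = 1/t. The analytic solution and the crude ODE integration below are our illustration, in units with c = t_0 = 1:

```python
# Backward light propagation in the LHPG frame for H = 1/t (a ~ t):
# dy/dt = H y - c with y(t0) = 0 at the observer. The analytic solution
# y(t) = c t ln(t0/t) is checked against a simple Euler integration.
import math

c = 1.0
t0 = 1.0

def y_exact(t):
    return c * t * math.log(t0 / t)

# integrate backward in time from t0 down to t1 with small Euler steps
t, y, dt = t0, 0.0, -1e-6
t1 = 0.2
while t > t1:
    y += (y / t - c) * dt   # dy = (H y - c) dt, with H = 1/t
    t += dt
print(f"numeric y({t1}) = {y:.4f}, exact = {y_exact(t1):.4f}")
```

The photon's global coordinate y first recedes and then approaches the observer, which is the bending of the geodesic illustrated in Figure 3.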
Recently the most distant galaxies observed, GN-z11 and EGSY8p7, with $z = 11.1$ and 8.68 at distances of 13.4 and 13.2 billion light years respectively, have caused particular concern because of their closeness to the very age of the Universe. This however depends again on the reconstruction of the history. The present treatment, as compared with the usual one, leaves us unworried, since the time span between $t'$ and the present is larger in this approach than in the standard one.
This metric manifestly has the advantage, already clear in the static symmetric case, of making evident the connection between local and global coordinates in the propagation of light. The mirage effect in space-time, much bigger than in light deflection and in lensing, would alter our view of the past. This is illustrated in Figure 3 and represents a simple realization of the photon geodesics, which near us can be well approximated by $dy/dt \simeq -c$; thus after decoupling, which represents for us the frontier of visibility, the path is almost a straight line. This metric has the further advantage of explicitly showing that world lines originated in the primeval black hole.
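The backward photon path sketched in Figure 3 can be traced numerically. The following sketch integrates the radial null line $dy/dt = Hy - c$ in lookback time, under the simplifying (purely illustrative) assumption of a constant Hubble parameter of 70 km/s/Mpc:

```python
import math

C = 2.998e8
MPC = 3.086e22
H = 70e3 / MPC                      # assumed constant Hubble parameter, 1/s
YEAR = 3.156e7                      # seconds per year

def emission_distance(lookback_years, steps=100_000):
    """Euler-integrate dy/ds = c - H*y in lookback time s from the observer (y = 0 now).

    This is the radial null line dy/dt = H*y - c run backwards; H is held
    constant here, which is an illustrative simplification only."""
    ds = lookback_years * YEAR / steps
    y = 0.0
    for _ in range(steps):
        y += (C - H * y) * ds
    return y / MPC                  # emission coordinate distance, in Mpc

# the coordinate distance saturates towards the Hubble radius c/H (~4283 Mpc here)
print(round(emission_distance(1e9)), round(emission_distance(1e11)))
```

The saturation towards $c/H$ is the coordinate-space counterpart of the increasing deviation of the geodesic at early times.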
Finally, application of the Euler-Lagrange equations of motion (which can also be used for galaxies) in the N.R. limit yields Equation (24), in which the expression for the acceleration of the Universe enters. Thus the equivalence principle holds true if the expansion of the Universe is unaccelerated.
Figure 3. Light propagation in the $(y, t)$ plane. Because of the vector composition of the local invariant light velocity $c$ with the frame velocity determined by the varying Hubble parameter, light observed at a given place (on Earth at present, for example) in global coordinates deviates more and more the earlier it was emitted (an effect analogous to light deviation in a static gravitational field). Not to scale.
8. On Olbers’s Paradox
We want to briefly reconsider the reasons why the night sky is not brilliant. The first qualitative argument is that the night sky is indeed bright, but at the wavelength of the CMB photons and not at the wavelengths of visible light.
Take a single star at a distance $r$ from the earth of radius $R$. If it emits $W$ photons per unit time, a fraction $\pi R^2 / 4\pi r^2$ is received by the earth. Consider then the whole Universe as composed of spherical layers of width $dr$ with $N$ stars per unit volume. The total contribution of each layer is then $dI = 4\pi r^2 N W (R^2 / 4 r^2)\, dr = \pi N W R^2\, dr$, which, when integrated over all space, appears to yield an infinite contribution. However, since distant stars have increasing velocity, all of them beyond $c/H$ do not contribute, making the sum finite. Thus the same black hole condition enters again.
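Because the shell integrand $\pi N W R^2$ is independent of $r$, the received rate is simply linear in the cutoff, and finite once the cutoff is $c/H$. A sketch with assumed (round, illustrative) stellar parameters:

```python
import math

C = 2.998e8
MPC = 3.086e22
H = 70e3 / MPC                       # assumed Hubble parameter, 1/s

# Assumed illustrative values, not from the text:
N = 1e9 / MPC**3                     # star number density, ~1e9 stars per Mpc^3
W = 1e45                             # photons emitted per star per unit time
R_EARTH = 6.371e6                    # receiving radius (the earth), m

def received_rate(cutoff):
    """Each shell of width dr holds N*4*pi*r^2*dr stars, each delivering a
    fraction pi*R^2/(4*pi*r^2) of its W photons, so dI = pi*N*W*R^2*dr:
    the integrand is r-independent and the total is linear in the cutoff."""
    return math.pi * N * W * R_EARTH**2 * cutoff

hubble_radius = C / H
print(received_rate(hubble_radius))   # finite once stars beyond c/H are excluded
print(received_rate(2 * hubble_radius) / received_rate(hubble_radius))  # -> 2.0
```

Without the cutoff the linear growth reproduces the classical divergence; the black hole condition supplies the horizon that removes it.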
9. Hubble’s Law, Angular Momentum and Missing Mass
An immediate consequence of Hubble’s law is that, since all points are equivalent, the same expansion law should hold for all of them. Although this may appear trivial, expansion with respect to a privileged point, i.e. the center of a system (a galaxy), implies that the relative distance of an orbiting object varies, apart from the moving away of the system as a whole.
In Newtonian mechanics the angular momentum for central forces is conserved, i.e. $\dot{L} = 0$ with $L = m v r$. However, if Hubble’s law is valid, $\dot{r} = H r$. This implies non-conservation of angular momentum, $\dot{L} \neq 0$.
Thus in addition to particle number conservation another cherished belief cannot be extrapolated from our limited space time experience to other scales proper to the Universe creation process.
One could as well re-express the previous relation as $\Delta L / L \approx H T$, i.e. the violation of angular momentum conservation for a central force is greater the bigger the dimensions, and therefore the characteristic time $T$, of the system, in line with the previous result.
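The size of the effect is easy to estimate: over one orbit the fractional change is of order $H T$. A sketch with assumed values (a present Hubble parameter of 70 km/s/Mpc, and periods of the order quoted later in the text):

```python
MPC = 3.086e22
H0 = 70e3 / MPC            # assumed Hubble parameter, 1/s
YEAR = 3.156e7             # seconds per year

def fractional_violation(period_years, hubble=H0):
    """Relative change Delta L / L ~ H*T accumulated over one orbital period T."""
    return hubble * period_years * YEAR

# Sun's galactic orbit (~2.3e8 yr, assumed) vs. a cluster-scale orbit (~1e9 yr)
print(round(fractional_violation(2.3e8), 3), round(fractional_violation(1e9), 3))
```

The violation is at the percent level for a galactic orbit and grows with the characteristic time of the system, as stated.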
This analysis has used Newtonian absolute time. It is therefore not correct, but it was just aimed at showing the limits of a Keplerian treatment.
Let us now turn to the missing mass problem. If one has a mass $M$ with spherical symmetry, orbiting bodies obey Newton’s law, which determines their acceleration and therefore $v = \sqrt{GM/r}$.
Thus the velocity should fall off as $1/\sqrt{r}$. This is not what one observes, since the orbital velocity is greater or, when a curve is measured, it flattens out at large distances. Well known examples are the velocity of the Sun, the external Hydrogen lines orbiting the galaxies M33 and M11, and the Coma cluster, whose parameters are reported in the accompanying table. The Keplerian approach is apparently justified since the velocities involved are indeed non-relativistic. To start with, given the quoted values for the velocities, the respective periods are of the order of $10^8$ y, $10^8$ y and $10^9$ y. To trust our theoretical treatment over such periods, when the star formation mechanism is not yet established, is probably a bit presumptuous. This has led, among other alternatives [24], to postulating the existence of missing mass. Its features, apart from peculiar gravitational properties, include a relative increase with the dimensions of the system (Table 1).
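Both the $1/\sqrt{r}$ fall-off and the quoted $\sim 10^8$ y period can be checked with a Keplerian sketch. The values below (an interior mass of $\sim 10^{11}$ solar masses and the Sun's $\sim 8$ kpc galactic orbit) are assumed round numbers, not taken from the table:

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
YEAR = 3.156e7             # seconds per year
KPC = 3.086e19             # one kiloparsec in metres

def kepler_velocity(mass_interior, r):
    """Circular orbital velocity around a spherically symmetric mass: v = sqrt(GM/r)."""
    return math.sqrt(G * mass_interior / r)

def orbital_period_years(r, v):
    return 2 * math.pi * r / v / YEAR

# Assumed: ~1e11 solar masses interior to the Sun's ~8 kpc galactic orbit
M_GAL = 1e11 * 1.989e30
r_sun = 8 * KPC
v = kepler_velocity(M_GAL, r_sun)
print(round(v / 1e3))                            # km/s, of the order of the observed ~220
print(f"{orbital_period_years(r_sun, v):.1e}")   # of order 1e8 years, as quoted

# Keplerian fall-off: doubling r reduces v by sqrt(2), unlike the observed flat curves
print(round(v / kepler_velocity(M_GAL, 2 * r_sun), 3))
```

The discrepancy between this $1/\sqrt{r}$ prediction and the measured flat curves is precisely what the missing mass hypothesis was invented to cure.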
Table 1. Astronomical parameters. Distances in meters. Velocities in km/s. In the final column the Hubble velocity $H y$, calculated not at the present time but at $t'$, the time of structure formation, is reported.
Thus the quantities for our Galaxy and M33 are similar, but the distance of the latter is larger, whereas for Coma all of them are bigger. For what has been said before, we take $H$ at the time of formation. The “Hubble” velocity, which should add to that coming from the virial theorem and attributed to the visible mass, is then almost adequate for the Sun, scanty for M33 and again adequate for Coma. Moreover, rotating external layers might influence the velocity of orbiting masses. This has been considered by Mizony [25], who showed that this is indeed the case and that the usual treatment based on a symmetrical central mass is inadequate, thus disproving a missing mass halo. One further comment concerns the usual statement that dark matter is necessary to provide the gravitational force needed to bind these systems, which otherwise would have disappeared [26]. To start with, superclusters are not seen nearer, and therefore at later times, where they have evolved into smaller structures (galaxies) with higher symmetry. Therefore they have decayed. Second, we have from the LHPG invariant interval the connection between distant objects and their proper time: the farther the stars, the slower their proper time. For instance, for the Coma cluster the factor $\sqrt{1 - H^2 y^2 / c^2}$ entering the proper time is close to zero with the present parameters. The ensuing picture is that of a competition between the Hubble effect, which tends to disrupt, and the slowing down of time, which temporarily assures the stability of the gravitating system. This completely overturns the naive and peaceful picture of Newtonian systems and alters our view of the past. We might therefore conclude that the existence of missing mass is at least questionable.
10. Conclusions
As summarized in the introduction, the present theoretical situation is commonly and dramatically presented as follows: some 90 percent of our world is in the form of unknown entities (dark matter and dark energy) with, to say the least, “peculiar” properties. This naturally leads one to question the validity of the GR description, which, because of its success in the post-Newtonian regime (whose results can however be obtained simply from the Equivalence Principle and Special Relativity [22]), seems hardly questionable. In the present work the theoretical treatment has therefore been reconsidered, and a model of a black hole Universe has been presented. It can successfully account for inflation, the horizon problem, flatness and dark energy. It also questions the reported acceleration and, partially, the need for dark matter. The extrapolation to cosmogonical scales of some of our most cherished and successful beliefs (at our space-time scales) has been shown to be incorrect:
namely, particle number and Newtonian angular momentum are necessarily not conserved.
Acknowledgements
This work originates from continuous discussions with G. Morchio whom I want to thank particularly for the paragraphs on the metrics. I wish also to thank Dr. E. Cataldo for continuous encouragement.
NOTES
1Such a relation, which realizes Mach’s principle, has also been used by [10] in considering the possibility of a variation of G. This has however been experimentally disproved.
2In this connection let us recall speculations [13] according to which in the QED strong field case also the previous relation might not hold true thus leading to pair creation.
3One should therefore probably re-express the uncertainty relation accordingly.
4Also Weinberg considered the balance between pressure and potential energy to determine the Jeans mass, which at recombination resulted in too big globular clusters. That would roughly correspond, in the present approach, to the total Universe mass.
5However, at recombination the strong field limit is overcome by the previous value of the Universe radius, so that in order to be consistent R must be determined from the corresponding strong field condition, which slightly alters the previous estimate.
6This does not exhaust the treatment of photons. Indeed the CXB [16] shows the presence of an additional sizable energetic photon background due to the
photon emission from the core in the formation process of stars. Whether this can contribute to the baryonic black hole limit is an open question. In any case this more energetic photon component coming mainly from AGNs pertains to a later stage of the evolution.
7This does not question how pressure enters the energy momentum tensor nor GR, the problem simply being whether that theory accounts for reality or not.
8We give here an additional heuristic argument to show how one can reconcile the present approach with Friedmann-like equations. As a matter of fact, if we give the cosmological term its correct dimensions in the second equation, then, by neglecting pressure and imposing zero acceleration with a unit value for the relevant coefficient, we recover the black hole condition. Thus a non-constant cosmological term could realize the black hole condition. As mentioned, in going from the acceleration equation to the energy one, the b.h. condition fixes the “integration constant”.