Thermodynamic Equilibrium of the Atmosphere

Abstract

One of the issues of Thermodynamics is the question of what exactly thermodynamic equilibrium means. It is often interpreted as thermal equilibrium. The question is whether this is correct. This is especially relevant for the case of the atmosphere, where gravitational energy also plays a role, which might allow for temperature gradients in equilibrium. In order to answer this question, this work goes back to Boltzmann’s original ideas. As will be shown here, thermodynamic equilibrium also means thermal equilibrium in this case. Moreover, it will also be shown why a lapse rate (a linear drop of temperature with altitude) is observed in a mechanically stable atmosphere. The implications for climate research are discussed.


Stallinga, P. (2025) Thermodynamic Equilibrium of the Atmosphere. Atmospheric and Climate Sciences, 15, 591-614. doi: 10.4236/acs.2025.153030.

1. Introduction

As a typical gaseous system, the atmosphere can be taken as a prototypical thermodynamic system. It was on the basis of such systems that Thermodynamics was developed, most specifically during the advent of the James Watt steam engine. Ludwig Boltzmann developed his famous statistical analysis around this time. What is more, Thermodynamics and Statistical Physics are the basis for all other physical laws of nature. For example, entropy is the most fundamental concept in modern physics and is unquestionable. Or, as Eddington wrote in his book The Nature of the Physical World (p. 74) [1]:

“The law that entropy always increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of Nature. ... [I]f your theory is found to be against the second law of thermodynamics, I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”

Swenson states it in another, yet similar, way [2],

“(T)he laws of thermodynamics are special laws that sit above the other laws of physics as laws about laws or laws on which the other laws depend.”

All processes in our universe are based on the laws of Thermodynamics. It is therefore essential that we understand these laws well and can apply them with rigor to systems in nature. This work contributes to a better understanding of thermodynamics, particularly where confusion exists about the subject. The exact issue that is addressed here is: what does thermodynamic equilibrium in the atmosphere mean? From textbooks, we learned that it means that temperature is the same everywhere. This is also what Maxwell and Boltzmann claimed. Yet, the thermodynamic theory was derived for “laboratory systems,” which are special in the sense that they are closed and have constant parameters, with, for instance, gravity absent from the theory. The atmosphere, though, is an open-ended system (bound only on one side by the planetary surface) and, moreover, so large that gravity might play an additional role. Others, therefore, claimed that equilibrium in Earth’s atmosphere involves a temperature gradient. The reasoning behind it makes sense: molecules, when going from high altitude to low altitude, lose gravitational-potential energy, which is necessarily converted into kinetic energy. Temperature, being a measure of average kinetic energy, then implies that the molecules of air closer to the surface must have higher average kinetic energy, and the air thus has a higher temperature. We are not talking here about quantum-mechanical effects such as those proposed for stellar atmospheres [3] [4], but about our good old planet Earth described by classical physics.

Noteworthy is that this would then enable a perpetuum mobile: we could, for instance, stick a Seebeck generator into the atmosphere and generate free, eternal electric power. One of the people who claimed this is Loschmidt, about whom Trupp writes [5],

“In 1868, J.C. Maxwell proved that a perpetual motion machine of the second kind would become possible if the equilibrium temperature in a vertical column of gas subject to gravity were a function of height. However, Maxwell claimed that the temperature had to be the same at all points of the column. So did Boltzmann. Their opponent was Loschmidt, who died more than 100 years ago, in 1895. He claimed that the equilibrium temperature declined with height, and that a perpetual motion machine of the second kind operating by means of such a column was compatible with the second law of thermodynamics.”

This would sound absurd were it not for the fact that a negative gradient is actually observed in the atmosphere, the so-called lapse rate. However absurd it may be, the idea has reality to back it up. Moreover, gradients were also observed in controlled laboratory experiments, all attributed to gravitational effects. Worth mentioning are the works of Graeff [6] [7] and, recently, Jeong and Park [8]. Theoretical work was also done on this subject. For instance, Herrmann writes [9],

“Thus, a linear decrease of the temperature with the vertical coordinate corresponds to a state of equilibrium.”

Whereas Björnbom writes [10],

“Calculations confirm the classical equilibrium condition by Gibbs that an isothermal temperature profile gives a maximum in entropy constrained by a constant mass and a constant sum of internal and potential energy. However, it was also found that an isentropic profile gives a maximum in entropy constrained by a constant mass and a constant internal energy of the fluid column.”

There are thus some serious doubts about the concept of thermodynamic equilibrium. This document attempts to clear up some doubts by presenting the ideas of Maxwell and Boltzmann applied to the atmosphere.

First of all, let us repeat here the laws of Thermodynamics, presenting their textbook versions:

First Law: Energy is not created or destroyed. It only changes form. This is the classical law of conservation of energy. However, energy likes to flow preferentially to a certain form; for that, we use the next law.

Second Law: Entropy (“disorder” or “lack of information”) of the universe cannot decrease. Thermodynamic equilibrium is reached when the entropy is at its maximum and can no longer increase.

Third Law: A perfect crystal at absolute zero temperature has an entropy of zero.

To which conventionally, a fourth law is added, traditionally called the Zeroth Law:

Zeroth Law: If two systems are in thermodynamic equilibrium with a third system, then those two are also in thermodynamic equilibrium with each other.

And it is exactly here that the problem lies. When thermodynamic equilibrium is interpreted as thermal equilibrium, this law is often translated into “If A has the same temperature as C and B has the same temperature as C, then A and B have the same temperature”. Is this correct? We will find out. Not relevant to the current work, a fifth law was discovered which states that the universe always tries to produce entropy as fast as possible, which is called the Law of Maximum Entropy Production (LMEP) [11], based on the work of Ilya Prigogine [12]. Yet, before the advent of Thermodynamics, another very relevant law was discovered, namely the law of Lavoisier. Because of its relevance, we put it before all the others and will call this the Minus-First Law of Thermodynamics [13]:

Minus-First Law: (Lavoisier). Nothing is created. Nothing is lost. Everything is transformed.

In practical terms this law means that, while chemical reactions take place, no atoms are lost or created. If no chemical reactions take place, even the molecules are invariable. The system described here is of this type; a system with a constant number of invariable molecules. For the current work, only the Minus-First Law through the First Law are necessary. Namely: the conservation of mass (number of molecules) and energy and the concept of equilibrium. We will see that equilibrium entails constant temperature, even in the presence of a gravitational field.

In what follows, first, the method is explained, namely the mathematical method of Statistical Physics. This is then applied in the classical way to derive the Maxwell-Boltzmann distribution of velocities in a laboratory-environment gas. For the thermodynamic analysis, we will also describe the concepts of temperature and heat later on. This method is then applied to the atmosphere, which is different from a laboratory system in that it is a one-sided open system and, moreover, is subject to a gravity field. Having reached in that section the conclusion that even in a gravity field the temperature is constant in equilibrium, the task remains to explain the reality of the observed lapse rate (linear gradient of temperature) in the atmosphere. We will see how this can be explained by the non-equilibrium situation of adiabatic processes, which is the realm of meteorology. In the last section, conclusions relevant to climatology are made about the atmospheric system.

2. Method: Statistical Physics

Let us now take a look at the original ideas of Boltzmann. See how we can use Statistical Physics to come up with the distribution functions of molecules. Note that for this purpose we do not need the concept of probabilities as we are wont to do in our Statistical Physics lectures. These, as argued by us before, do not exist [14]. Probabilities are the parametrization of ignorance, and as Ellis wrote in 1842 [15],

“Mere ignorance is no ground for any inference whatever. Ex nihilo nihil.”

Even though this ignorance has an equivalent in Shannon entropy—defined as the probability-weighted average of information, the latter being the logarithm of probability—probability remains in the realm of abstract mathematics and is thus not an adequate tool for the analysis of nature. Any analysis based on the concept of probabilities is intrinsically fallacious. Fortunately, the calculus of thermodynamics can be presented without this dubious concept. It is called Statistical Physics and not Probabilistic Physics. What we are going to do is imagine an infinite number of realizations of the same system and then tally the number of certain configurations. Frequencies of occurrence of configurations then translate into (pseudo)probabilities, and we can then apply our math.

Imagine a system divided into boxes representing states, for instance real space, as shown in Figure 1. This means that logically the system is discrete¹. Only (positive) integer numbers are related to the real world. Space, energy, and everything else, including matter (i.e., particles) and energy (i.e., photons), are quantized. We could, however, also make this analysis in a continuous form, but the easy-to-understand logic would be lost.

Figure 1. A system with $J$ boxes—“states”—accommodating a constant number $N$ of distinguishable particles, box $i$ containing $n_i$ particles (a box can contain more than one particle), each particle in box $i$ having energy $u_i$. A condition of constant total energy $U$ can also be included.

Based on this, we can fully describe the concept of entropy, specifically all distribution functions, and reproduce Boltzmann’s narrative. We are going to place particles in these boxes and count the number of particles in each box. The number of particles is assumed to be much larger than the number of boxes. The inverse analysis of the number of boxes per particle is expected to give similar results.

The total number of particles, $N$, is constant, as per our Minus-First Law (Lavoisier). They are distributed over $J$ boxes; box $i$ contains $n_i$ particles, with $i$ running from 1 to $J$. The sum of these numbers must then be equal to $N$, of course, which gives our first boundary condition:

$$\sum_{i=1}^{J} n_i = N. \qquad (1)$$

We will reason on the basis that the number of particles is much larger than the number of boxes, $N \gg J$. We also assume here that the particles are distinguishable. This classic-mechanics point of view—the only thing known in the times of Boltzmann—is relevant for the current work. The number of microstates (possible realizations) of a situation is then given by

$$\Omega = \frac{N!}{n_1!\, n_2! \cdots n_J!}. \qquad (2)$$

The maximum randomness/disorder/chaos/lack-of-information is achieved when this $\Omega$ is at its maximum, and that is thus the state of equilibrium according to the Second Law. It is quite difficult to calculate, but we can use a trick. We use a monotonically increasing function of $\Omega$ and maximize that instead. If this function is at its maximum, then $\Omega$ is at its maximum. We could use any such function, but the natural logarithm seems adequate. Let’s call this function $S$. (Note: for entropy, it would have to be multiplied by the Boltzmann constant, but for laziness and for clarity we’ll omit it here temporarily and add it only at the end).

$$S \equiv \ln(\Omega). \qquad (3)$$

This function is at its maximum when all derivatives with respect to $n_k$ are zero:

$$\forall k = 1, \ldots, J: \quad \frac{\mathrm{d}S(\bar{n})}{\mathrm{d}n_k} \overset{!}{=} 0. \qquad (4)$$

($\bar{n}$ is the vector of numbers $n_1 \ldots n_J$). The exclamation mark above the equal sign denotes “must be”. For this calculation, we can use Stirling’s approximation for the logarithm of a factorial:

$$\ln x! = x\ln x - x. \qquad (5)$$

We’ll start by substituting the expression of Ω into the definition of S :

$$\begin{aligned}
S &= \ln\!\left(\frac{N!}{n_1!\, n_2! \cdots n_J!}\right) = \ln(N!) - \sum_i \ln(n_i!) \\
  &= (N\ln N - N) - \sum_i (n_i \ln n_i - n_i) = N\ln N - \sum_i n_i \ln n_i, \qquad (6)
\end{aligned}$$

where the normalization of the vector $\bar{n}$ of Equation (1) was used.

Now, we have to solve this for all derivatives $\mathrm{d}S/\mathrm{d}n_k$, which is quite a task, especially because we have to keep the normalization condition (constant $N$) in mind. Fortunately, Lagrange developed a technique for that, his Lagrange method of undetermined multipliers [16]. We add the condition, multiplied by a constant (here $\alpha$), and solve the total equation:

$$\frac{\mathrm{d}S}{\mathrm{d}n_k} = \frac{\mathrm{d}}{\mathrm{d}n_k}\left( N\ln N - \sum_i n_i \ln n_i - \alpha \sum_i n_i \right) \overset{!}{=} 0,$$

$$\left( \frac{\mathrm{d}N}{\mathrm{d}n_k}\ln N + N\,\frac{1}{N}\,\frac{\mathrm{d}N}{\mathrm{d}n_k} \right) - \left( \frac{\mathrm{d}n_k}{\mathrm{d}n_k}\ln n_k + \frac{n_k}{n_k}\,\frac{\mathrm{d}n_k}{\mathrm{d}n_k} \right) - \alpha \overset{!}{=} 0,$$

$$\frac{\mathrm{d}N}{\mathrm{d}n_k}\left(\ln N + 1\right) - \left(\ln n_k + 1\right) - \alpha \overset{!}{=} 0. \qquad (7)$$

Note that by the definition of $N$ as the sum of all $n_i$, its derivative with respect to $n_k$ is 1. We then get

$$\ln N - \ln n_k - \alpha = 0. \qquad (8)$$

This can be solved and gives

$$n_k = N e^{-\alpha}. \qquad (9)$$

In other words, all $n_k$ are equal, a value independent of $k$. That is what we expected. For large $N$, all boxes contain an equal number of particles. That has maximum “entropy”, the maximum number of microstates, also represented in the parameter $S$.
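As a quick numerical sanity check of this counting argument (a sketch added here for illustration, not part of the original derivation), one can evaluate $\ln\Omega$ of Equations (2)-(3) for a uniform and for a slightly skewed occupation of the boxes; the uniform occupation indeed gives the larger value:

```python
# Sanity check (illustration only): the uniform occupation maximizes ln(Omega) of Eq. (2)-(3).
import numpy as np
from scipy.special import gammaln   # ln(x!) = gammaln(x + 1), avoids overflow for large N

def ln_omega(n):
    """ln of the number of microstates for an occupation vector n, Equation (2)."""
    N = n.sum()
    return gammaln(N + 1) - gammaln(n + 1).sum()

J, N = 10, 10_000
uniform = np.full(J, N // J)            # every box equally occupied
skewed = uniform.copy()
skewed[0] += 100
skewed[1] -= 100                        # same total N, less even distribution

print(ln_omega(uniform), ln_omega(skewed))   # the uniform case gives the larger ln(Omega)
```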

The above calculation assumes that all boxes are equal. If, on the other hand, each particle in box $k$ has an energy $u_k$, and we use the First Law of Thermodynamics (constant total energy $U$), then we have another boundary condition in our equation,

$$\sum_{i=1}^{J} n_i u_i = U. \qquad (10)$$

(In fact, as will be shown, we can now let J go to infinity, since the sum no longer diverges). In a similar way, we can use Lagrange’s method to solve it by adding this with a factor β to our equation2 (Equation (7)), which will result in

$$\ln N - \ln n_k - \alpha - \beta u_k = 0, \qquad (11)$$

which has a solution

$$n_k = N_0 e^{-\beta u_k}, \qquad (12)$$

with $N_0 = N e^{-\alpha}$.

We have forgotten the units here. That is the nice thing about mathematics; we can simplify our work, but we must not forget that it has to be applied to reality. For that, we have to reintroduce our Boltzmann constant in the definition of S , and give the proper dimension to β , namely reciprocal energy. It turns out that this implies that β can be converted into a temperature-like variable through the Boltzmann constant k :

$$\beta = \frac{1}{kT}. \qquad (13)$$

In this, $T$ is a parameter, like $\beta$, describing the energy-distribution function $n(u)$, which we call “temperature”, though its definition is different from that given through the average kinetic energy $\langle u_{\rm kin}\rangle = \frac{3}{2}kT$ (to be shown in a moment). Now, if we also go from this discrete system to a continuous Leibniz-Newtonian system of calculus, we get the famous Maxwell-Boltzmann density distribution:

$$n(u) = n_0 e^{-\beta u}. \qquad (14)$$

The density prefactor $n_0$ (unit: 1/J) depends on the lower limit of our function. For instance, if the function is defined from $u = 0$ to $u = \infty$, $n_0$ will be equal to $N/kT$.

In fact, we can write the above equation even more elegantly, and universally, as

$$g(u) = e^{-\beta u},$$
$$n(u) = f(u)\, g(u), \qquad (15)$$

with f( u ) the density-of-states (DoS) function and g( u ) the occupancy function (shown in Figure 2).

Figure 2. Boltzmann occupancy function $g(u)$ given by Equation (15).

The exact value of $\beta$ depends on the total energy $U$; all values of $T$ and $\beta$ are solutions to the equation, and the correct one is given by the total energy $U$ and the limits of the boxes, or, to be more exact, by the density-of-states (DoS) function $f(u)$, which was assumed here to be the uniform step function of Heaviside. The boundary conditions—Lavoisier for the number of particles, and the First Law of Thermodynamics for total energy—in a continuous system are

$$\int f(u)\,g(u)\,\mathrm{d}u = N, \qquad \int u\,f(u)\,g(u)\,\mathrm{d}u = U. \qquad (16)$$

Velocity and Kinetic-Energy Distributions

In case the density of states $f(u)$ is not uniform, we can get different distributions. Imagine, for example, the situation where the energy is only kinetic energy $\frac{1}{2}mv^2$ (i.e., classic “temperature”). The DoS function $f(u)$ in three-dimensional space can then be calculated. The slight complication with this is that the states are (axiomatically assumed to be) homogeneously distributed in speed space, and not energy space; time and space are linear (not curved). Therefore, we have to do our analysis in speed space.

In three dimensions, the speed is the magnitude of the vector of the three Cartesian velocity components, $v = \sqrt{v_x^2 + v_y^2 + v_z^2}$. Therefore, the density of states with kinetic energy $u(v) = \frac{1}{2}mv^2$ is proportional to the square of the velocity, as Figure 3 shows. Substituting the energy in $g(u)$, and this density of states in $f(u)$, we get

$$n(v) = N_0 v^2 e^{-mv^2/2kT}, \qquad (17)$$

which, by normalization (Equation (16)) to N=1 , will result in the famous Maxwell-Boltzmann frequency distribution of speed of molecules in a three-dimensional gas.

$$n(v) = 4\pi \left(\frac{m}{2\pi kT}\right)^{3/2} v^2 \exp\left(-\frac{mv^2}{2kT}\right). \qquad (18)$$

This function is shown in Figure 4. We can also plot this as a function of kinetic energy $u$. Because we are dealing with a density function, we have to take the change of coordinate system into account. This means that if we integrate $\mathrm{d}v$ over a certain interval, this integral should not change when doing the integration over the same states in the new coordinate system, $\mathrm{d}u$. This implies that $n(v)\,\mathrm{d}v = n(u)\,\mathrm{d}u$. With $u$ equal to $\frac{1}{2}mv^2$, we have to substitute $v^2$ with $2u/m$ and $\mathrm{d}v$ with $\mathrm{d}u/\sqrt{2mu}$. This results in

$$n(u) = \frac{2}{\sqrt{\pi}} \left(\frac{1}{kT}\right)^{3/2} \sqrt{u}\; e^{-u/kT}. \qquad (19)$$

This function is shown in Figure 5.
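As a numerical cross-check (my own sketch, assuming nitrogen molecules at an arbitrary 290 K), Equation (18) can be verified to integrate to unity and to give the mean kinetic energy $\frac{3}{2}kT$ of Equation (20):

```python
# Numerical check (illustration, assumed gas: N2 at 290 K) of Equations (18) and (20).
import numpy as np
from scipy.integrate import quad

k = 1.380649e-23             # Boltzmann constant, J/K
m = 28e-3 / 6.02214076e23    # mass of one N2 molecule, kg
T = 290.0                    # assumed temperature, K

def n_v(v):
    """Maxwell-Boltzmann speed distribution, Equation (18)."""
    return 4*np.pi * (m/(2*np.pi*k*T))**1.5 * v**2 * np.exp(-m*v**2/(2*k*T))

norm, _ = quad(n_v, 0, np.inf)                                 # should be 1
u_mean, _ = quad(lambda v: 0.5*m*v**2 * n_v(v), 0, np.inf)     # average kinetic energy
print(norm, u_mean / (1.5*k*T))                                # both close to 1
```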

Figure 3. The density in Cartesian velocity space ($v_x$, $v_y$, $v_z$) is considered uniform. The number of states with absolute velocity $v$ is then $4\pi v^2 \mathrm{d}v$, and the density of states is thus proportional to $v^2$.

Figure 4. Maxwell-Boltzmann distribution for a three-dimensional gas; n( v ) : density of frequency as a function of speed v , given by Equation (18), at various temperatures, with the molecular mass m=28 g/mol, which is the case for nitrogen molecules N2 (see Table 1).

Figure 5. Maxwell-Boltzmann distribution; n( u ) : density of frequency as a function of kinetic energy u at various temperatures, given by Equation (19).

Based on this, we can find the average (kinetic) energy of a particle, and that is given by

$$\langle u_{\rm kin} \rangle = \int_0^{\infty} u\, n(u)\,\mathrm{d}u = \frac{3}{2}kT. \qquad (20)$$

This is the classic Boltzmann definition of temperature, defined as

$$T \equiv \frac{2}{3k} \langle u_{\rm kin} \rangle. \qquad (21)$$

The average kinetic energy is independent of the mass of the molecules in the gas. It only depends on temperature. Or saying it the other way around, the temperature is a measure of kinetic energy. When we also include vibrations inside the molecules, we can say that generally speaking, temperature is a measure of energy of motion.

We can also define some relevant thermodynamic quantities that will be useful later on. An example is the total (kinetic) energy—“heat” Q —of the gas:

$$Q = N\langle u \rangle = \frac{3}{2}NkT, \qquad (22)$$

and we can also define a specific heat capacity which is the heat per mole of molecules taking only linear motion into account,

$$c_{v,m} \equiv \frac{1}{N}\frac{\mathrm{d}Q}{\mathrm{d}T} = \frac{3}{2}R \quad [\mathrm{J/(K{\cdot}mol)}]. \qquad (23)$$

This defines a conversion constant $R \equiv N_A k = 8.31446$ J/(K·mol), called the universal gas constant. If we add other types of motion, like rotations and vibrations existing in molecules, $R/2$ is added to it for each such degree of freedom. The subscript “v” means “at constant volume”. At constant pressure, another $R$ is added to it. Chemists prefer to talk about moles, whereas physicists talk about kilos. Therefore, we can also define a Physics-jargon specific heat capacity, defined as the heat per mass, at constant pressure, by:

$$c_p = \frac{n+2}{2}\frac{k}{m} \quad [\mathrm{J/(kg{\cdot}K)}], \qquad (24)$$

with n the degrees of freedom of motion. Table 1 gives some examples.
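For illustration, the “ideal” values of Table 1 can be reproduced with Equation (24); the sketch below is my own, and the degrees of freedom assigned to each gas (3 translational for a monatomic gas, plus 2 rotational for a linear molecule) are assumptions rather than values given in this work:

```python
# Illustration (my own sketch): ideal specific heats from Equation (24), c_p = (n+2)/2 * k/m.
k, N_A = 1.380649e-23, 6.02214076e23     # J/K and 1/mol

def cp_ideal(molar_mass_g_per_mol, dof):
    """Specific heat at constant pressure in kJ/(kg K) for `dof` degrees of freedom."""
    m = molar_mass_g_per_mol / 1000 / N_A        # mass of a single molecule in kg
    return (dof + 2) / 2 * k / m / 1000

print(cp_ideal(39.948, 3))    # Ar, monatomic (3 translational dof)        -> ~0.520
print(cp_ideal(28.0134, 5))   # N2, diatomic (3 transl. + 2 rotational)    -> ~1.039
print(cp_ideal(44.01, 5))     # CO2, linear (3 transl. + 2 rotational)     -> ~0.661
```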

Here, we have made the link between statistical physics properties, analyzing molecules separately, and thermodynamic properties relevant to macroscopic quantities. More specifically, the thermodynamic property of heat capacity was derived, which will be needed later on when describing the observed properties of the atmosphere.

3. Results: Thermodynamic Equilibrium in the Atmosphere

In the case of particles in the atmosphere, kinetic energy is not the only relevant energy. We now have a two-dimensional DoS, stretching along velocity space ( v ) and conventional space ( z ), schematically presented in Figure 6. Both velocity and space now enter into the energy equation, and things change. To avoid confusion,

Table 1. Properties of some gases. Source: Refs. [17] and [18]. (The specific heat of standard air was reverse-engineered from the lapse rate). The ideal values are based on Equation (24) with rotations included.

| Gas | Molecular mass m (g/mol)* | Specific heat c_p, ideal (kJ/kg·K) | Specific heat c_p, real (kJ/kg·K) |
|---|---|---|---|
| Air (dry) | 28.966 | 1.005 | 1.006 |
| Air (standard) | 29.03 | — | 1.51 |
| CO2 | 44.01 | 0.661 | 0.844 |
| H2O | 18.02 | 1.846 | 1.93 |
| Ar | 39.948 | 0.520 | 0.520 |
| He | 4.002602 | 5.193 | 5.19 |
| C2H6 | 30.07 | 1.106 | 1.75 |
| H2 | 2.016 | 14.435 | 14.32 |
| Ne | 20.179 | 1.030 | 1.03 |
| N2 | 28.0134 | 1.039 | 1.04 |
| O2 | 31.9988 | 0.909 | 0.919 |

*: to convert from g/mol to kg, divide by 1000 $N_A$.

Figure 6. A two-dimensional system with $J \times L$ boxes—“density-of-states” (DoS) in velocity $v$ and space $z$—accommodating $N$ distinguishable particles, each box $i,j$ containing $n_{i,j}$ particles (a box can contain more than one particle), each particle having energy $u_{i,j}$, which is the sum of kinetic and gravitational potential energy. A condition of constant total energy $U$ can also be included.

we retain $\beta$ and for $u$ will substitute $u(v,z) = mgz + \frac{1}{2}mv^2$, then solve again the equation to find $\beta$. In the following, we will make use of the following relations (Gradshteyn and Ryzhik [19]).

$$\int_0^{\infty} x^2 \exp(-ax^2)\,\mathrm{d}x = \frac{\sqrt{\pi}}{4}\, a^{-3/2}. \qquad (25)$$

$$\int_0^{\infty} x^4 \exp(-ax^2)\,\mathrm{d}x = \frac{3\sqrt{\pi}}{8}\, a^{-5/2}. \qquad (26)$$

$$\int_0^{\infty} \exp(-ax)\,\mathrm{d}x = \frac{1}{a}. \qquad (27)$$

Firstly, we will ignore the $z$ dependence and once again calculate the velocity distribution function $n(v)$ using $f(u(v))$ and $g(u(v))$ in Equation (15). As was shown, the DoS $f(u(v))$ is proportional to $v^2$, and $g(u) = \exp(-\beta u)$, with $u = mv^2/2$, therefore

$$n(v) = n(u(v)) = f(u(v)) \times g(u(v)) = n_0 v^2 \exp(-\beta m v^2/2). \qquad (28)$$

The constant $n_0$ (unit: s³/m³) can be found by normalization

$$\int_0^{\infty} n(v)\,\mathrm{d}v = 1. \qquad (29)$$

By Equation (25) ( x=v , a= βm/2 ) we find that

$$n_0 = \sqrt{\frac{2}{\pi}}\,(m\beta)^{3/2}, \qquad (30)$$

and

$$n(v) = \sqrt{\frac{2}{\pi}}\,(m\beta)^{3/2}\, v^2 \exp\left(-\frac{\beta m v^2}{2}\right). \qquad (31)$$

The average kinetic energy can be found by

$$\langle u_{\rm kin} \rangle = \int_0^{\infty} \frac{mv^2}{2}\, n(v)\,\mathrm{d}v = \frac{m}{2}\sqrt{\frac{2}{\pi}}\,(m\beta)^{3/2} \int_0^{\infty} v^4 \exp\left(-\frac{\beta m v^2}{2}\right)\mathrm{d}v. \qquad (32)$$

Using Equation (26) ( x=v , a= βm/2 ) we find

$$\langle u_{\rm kin} \rangle = \frac{3}{2\beta}, \qquad (33)$$

which is equivalent to Equation (20) with $\beta = 1/kT$. Note also that the distribution of Equation (31) has a maximum, where $\mathrm{d}n(v)/\mathrm{d}v = 0$, at $v = \sqrt{2/\beta m}$.

Now let’s apply it to the total system, the one in which the energy also depends on height $z$: $u = \frac{1}{2}mv^2 + mgz$. In that case, we have a two-dimensional system of velocity space $v$ and classic space $z$, the latter of which we consider linear,

$$n(v,z) = f(u(v,z)) \times g(u(v,z)) = n_0 v^2 \exp\left(-\beta\left[mv^2/2 + mgz\right]\right). \qquad (34)$$

Once again, the constant $n_0$ (unit: s³/m⁴) can be found by stating that the integral should be unity:

$$n_0 \int_0^{\infty} \left[\int_0^{\infty} v^2 \exp(-\beta m v^2/2)\,\mathrm{d}v\right] \exp(-\beta mgz)\,\mathrm{d}z = 1. \qquad (35)$$

Using Equation (25) and Equation (27) yields

$$n_0 = \sqrt{\frac{2}{\pi}}\,(\beta m)^{5/2}\, g, \qquad (36)$$

and our distribution

$$n(v,z) = \sqrt{\frac{2}{\pi}}\,(\beta m)^{5/2}\, g\, v^2 \exp\left(-\beta\left[mv^2/2 + mgz\right]\right). \qquad (37)$$

This frequency distribution function is shown in Figure 7.

Figure 7. Frequency distribution function n( v,z ) of Equation (37). The shape of the velocity distribution, a cross-section at any z , is independent of altitude, signifying that the average kinetic energy, and thus temperature, is constant in the atmosphere.

We can now calculate the total kinetic energy of a layer of thickness δz at altitude z :

$$\begin{aligned}
\delta U_{\rm kin}(z) &= \int_0^{\infty} \frac{mv^2}{2}\, n(v,z)\,\mathrm{d}v\;\delta z \\
 &= \frac{m}{2}\sqrt{\frac{2}{\pi}}\,(\beta m)^{5/2}\, g \left[\int_0^{\infty} v^4 \exp(-\beta m v^2/2)\,\mathrm{d}v\right] \exp(-\beta mgz)\,\delta z \\
 &= \frac{m}{2}\sqrt{\frac{2}{\pi}}\,(\beta m)^{5/2}\, g\, \frac{3\sqrt{\pi}}{8}\left(\frac{\beta m}{2}\right)^{-5/2} \exp(-\beta mgz)\,\delta z \\
 &= \frac{3}{2}\, mg \exp(-\beta mgz)\,\delta z. \qquad (38)
\end{aligned}$$

Likewise, we can calculate the amount of mass therein,

$$\begin{aligned}
\delta M(z) &= \int_0^{\infty} m\, n(v,z)\,\mathrm{d}v\;\delta z \\
 &= m\sqrt{\frac{2}{\pi}}\,(\beta m)^{5/2}\, g \left[\int_0^{\infty} v^2 \exp(-\beta m v^2/2)\,\mathrm{d}v\right] \exp(-\beta mgz)\,\delta z \\
 &= 2^{1/2}\pi^{-1/2}\beta^{5/2} m^{7/2} g\, \frac{\sqrt{\pi}}{4}\left(\frac{\beta m}{2}\right)^{-3/2} \exp(-\beta mgz)\,\delta z \\
 &= \beta m^2 g \exp(-\beta mgz)\,\delta z. \qquad (39)
\end{aligned}$$

If we define the temperature by Equation (20), $\langle u_{\rm kin}\rangle = 3kT/2$, or

$$U_{\rm kin} = N\langle u_{\rm kin}\rangle = \frac{M}{m}\,\frac{3}{2}kT, \qquad (40)$$

then

$$T(z) = \frac{2}{3}\,\frac{m}{k}\,\frac{\delta U_{\rm kin}(z)}{\delta M(z)} = \frac{1}{k\beta}. \qquad (41)$$

This means the temperature is constant all over the atmosphere! Note also that the distribution of Equation (37) has a maximum, where $\mathrm{d}n(v,z)/\mathrm{d}v = 0$, remaining at $v = \sqrt{2/\beta m}$, independent of $z$. The shape of the distribution is independent of height, but the total intensity and density of molecules do depend on it. This is also clearly visible in Figure 7. In equilibrium, the kinetic energy distribution—“temperature”—is equal everywhere. Imagine such a system in equilibrium. Now take $N$ random molecules from part A of the atmosphere and exchange them with $N$ random molecules of part B of the atmosphere. Nothing has changed. If there were a change, the system would not yet be in equilibrium.
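This constancy can also be checked numerically. The sketch below (my own check, assuming nitrogen and $\beta = 1/(k \cdot 290\,\mathrm{K})$) evaluates Equation (41) from the distribution of Equation (37) at several altitudes and finds the same temperature everywhere:

```python
# Numerical check (illustration; assumed gas N2, assumed beta = 1/(k*290 K)) of Eq. (37) and (41).
import numpy as np
from scipy.integrate import quad

k, g = 1.380649e-23, 9.81           # J/K, m/s^2
m = 28e-3 / 6.02214076e23           # mass of one N2 molecule, kg
beta = 1 / (k * 290.0)              # assumed value of beta

def n_vz(v, z):
    """Distribution of Equation (37)."""
    return np.sqrt(2/np.pi) * (beta*m)**2.5 * g * v**2 * np.exp(-beta*(0.5*m*v**2 + m*g*z))

for z in (0.0, 5_000.0, 10_000.0):                              # altitudes in metres
    dU, _ = quad(lambda v: 0.5*m*v**2 * n_vz(v, z), 0, np.inf)  # kinetic energy of the layer, Eq. (38)
    dM, _ = quad(lambda v: m * n_vz(v, z), 0, np.inf)           # mass of the layer, Eq. (39)
    print(z, (2/3) * (m/k) * dU/dM)                             # Eq. (41): ~290 K at every z
```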

The above implies that people like Loschmidt, Graeff and Jeong and Park mentioned in the introduction were wrong. Their laboratory systems must not have been in equilibrium when they observed a temperature gradient.

4. Discussion

Yet, the real atmosphere apparently does not behave like this. As we know, the observed temperature in the atmosphere is not constant, and thus the conclusion is that the atmosphere is not in thermodynamic equilibrium. We will now see how the observed temperature gradient can be explained, and then explain the importance of this for meteorology, finishing with an important observation for climatology.

4.1. The Real Atmosphere

Equilibrium, as shown above, is one thing; stability is quite another. We are talking about mechanical stability here. The atmosphere is heated mostly from the bottom, close to the surface. There is, thus, a gradient in the atmosphere caused by incoming solar radiation that heats the atmosphere from below. (In stars, this gradient comes from nuclear processes inside the core of the star). The atmosphere can diffuse this heat, trying to thermalize and achieve equilibrium, in three ways: by radiation, by conduction, and by convection.

In adiabatic situations, no heat is exchanged (by radiation or conduction) between packages, and we only have convection, the movement of packages, remaining as a process. In this situation, packages of heated-up air close to the surface, compared to their colder neighboring packages, expand and are less dense and thus try to rise. By the expansion, work is done according to $W = p\Delta V$, and this comes at the expense of heat; an expanding package will cool down. The question is whether the cooling down is rapid enough (in terms of degrees per meter) to stop the process of rising air packages. This can be determined on the basis of thermodynamics. We must analyze packages of air that do not exchange heat in any way, i.e., adiabatic ones. To see what the situation is in such “equilibrium” (mechanical stability) situations of adiabatic systems, consider Figure 8, adapted from the book of Jacob, Introduction to Atmospheric Chemistry [20].

Figure 8. In a stable atmosphere, a cycle that raises a package to a new height, with the corresponding change in temperature and pressure, and then brings it back to its original situation must involve no net change in enthalpy. This then results in the lapse rate Γ.

For this calculation, we will let a package at altitude $z$ with temperature $T$ and pressure $p$ adiabatically rise in the atmosphere to the new local temperature and pressure at $z + \mathrm{d}z$, then isothermally compress it back to its original pressure, and let it heat up to its original temperature at constant pressure. These processes are indicated by 1, 2 and 3, respectively, in the figure. The cycle returns the air package to its original thermodynamic state and must, therefore, involve no change in any thermodynamic function.

Starting with an ideal gas,

$$pV = nkT, \qquad (42)$$

with the enthalpy defined as

$$H = U + pV, \qquad (43)$$

with U the internal energy of the package, then any change in enthalpy is given by

$$\mathrm{d}H = \mathrm{d}U + \mathrm{d}(pV) = \mathrm{d}U + V\mathrm{d}p + p\mathrm{d}V. \qquad (44)$$

Furthermore, the change in internal energy U is given by the heat added to the system dQ and the work done to the outside world, dW=pdV , so for any thermodynamic process, we have

$$\mathrm{d}H = \mathrm{d}Q + V\mathrm{d}p. \qquad (45)$$

We can now apply this to the cycle of Figure 8 and sum the changes in enthalpy of the three processes to zero. For the adiabatic process (1), no heat is added to the package ( dQ=0 ), so that

$$\mathrm{d}H_1 = V\mathrm{d}p. \qquad (46)$$

For the isothermal process (2) dT=0 and thus (by the ideal gas law) d( pV )=0 . Since also dU=0 (the internal energy of an ideal gas is a function of temperature only), we have

$$\mathrm{d}H_2 = 0. \qquad (47)$$

For the isobaric process (3), dp=0 , we have

$$\mathrm{d}H_3 = \mathrm{d}Q = c_p M\,\mathrm{d}T. \qquad (48)$$

Then setting the sum to zero, d H 1 +d H 2 +d H 3 =0 , we get

$$V\mathrm{d}p = c_p M\,\mathrm{d}T. \qquad (49)$$

If we also realize that the pressure gradient is given by

$$\frac{\mathrm{d}p}{\mathrm{d}z} = -\rho g, \qquad (50)$$

as can be seen in Figure 9, and

$$M = \rho V, \qquad (51)$$

then

$$\Gamma \equiv \frac{\mathrm{d}T}{\mathrm{d}z} = -\frac{g}{c_p}. \qquad (52)$$

Figure 9. A layer in the atmosphere with area $A$, thickness $\mathrm{d}z$ and density $\rho$ feels two forces: one gravitational force downward, equal to the product of its total mass and the gravitational acceleration $g$, and one force equal to the product of the area and the pressure difference $\mathrm{d}p$. When they balance, $\mathrm{d}p/\mathrm{d}z = -\rho g$.

We could have arrived at this expression much more rapidly when we see that the heat energy of a package can be well described by saying that the specific—per mass—thermal energy of a package is $c_p T$, which includes all forms of motion. The gravitational specific energy is $gz$, so that the total specific energy of a package is $u = c_p T + gz$. If we say that all packages must have the same specific energy, it implies that

$$\frac{\mathrm{d}u(z)}{\mathrm{d}z} = 0, \qquad (53)$$

which results in the above equation (Equation (52)) as well. It is a mechanical equilibrium—stability—with all packages having the same energy when no heat is exchanged between packages, so not yet a thermodynamic equilibrium.
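As a quick numerical illustration (a sketch added here, not the author's code), Equation (52) evaluated with the specific heats of Table 1 gives the lapse rates quoted in the next paragraph:

```python
# Illustration: the adiabatic lapse rate of Equation (52), Gamma = -g/c_p, for Table 1 values.
g = 9.81   # m/s^2

def lapse_rate(cp_kJ_per_kgK):
    """Lapse rate in K/km for a given specific heat c_p."""
    return -g / (cp_kJ_per_kgK * 1000) * 1000    # K/m converted to K/km

print(lapse_rate(1.006))   # dry air       -> about -9.75 K/km
print(lapse_rate(1.51))    # standard air  -> about -6.5  K/km
```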

This lapse rate can easily be determined based on the specific heat of the gas (see Table 1). For dry air, $c_p$ is 1.006 kJ/(kg·K), which results in a lapse rate of Γ = −9.75 K/km. Standard (humid) air has a specific heat of 1.51 kJ/(kg·K), resulting in Γ = −6.49 K/km. See Figure 10, taken from the author’s treatise on the greenhouse effect in Ref. [21]. Three observations have to be made:

Figure 10. Calculated lapse rate Γ compared to experimental atmospheric data according to the Engineering Toolbox (+) and US Standard (˚). From Stallinga [21].

First of all, the atmosphere in thermodynamic equilibrium means the temperature is everywhere equal. When it is not in equilibrium, when only adiabatic processes take place, we are able to perfectly reproduce the atmosphere, at least the troposphere. Of course, the assumption of ideal gas laws has its limits. Considering the fact that the upper atmosphere no longer follows this simple linear lapse rate, we conclude that the ideal gas laws no longer apply, or that the upper atmosphere is not in thermal equilibrium, nor stable. In any case, this goes beyond the objective of this work.

Second, some people claim that planetary surface temperatures would be the result of atmospheric pressure [22]. This is a misinterpretation of the gas laws. Gas laws merely state the link between pressure and temperature—as in $pV = NkT$—and do not attribute causality. With the density of the gas defined as $\rho = Nm/V$, this becomes $\rho = pm/kT$. Pressure at the surface, $p(z=0)$, is simply the total mass of the atmosphere divided by the area, multiplied by the gravitational acceleration. While $T$ and $p$ are related through this gas law, there is no causality in them.
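As an order-of-magnitude illustration of this last statement (my own check, using approximate textbook values for the total mass of the atmosphere and the Earth's radius, which are not given in this work):

```python
# Rough check (illustration; approximate textbook values): surface pressure as weight per area.
import math

M_atm = 5.15e18      # kg, approximate total mass of Earth's atmosphere (assumed value)
R = 6.371e6          # m, mean Earth radius (assumed value)
g = 9.81             # m/s^2

p0 = M_atm * g / (4 * math.pi * R**2)
print(p0)            # roughly 1.0e5 Pa, i.e. about 1 atm
```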

Third, and probably most astounding, there is the idea that heat can spontaneously be transported by convection from cold places to warm places in the atmosphere. If somewhere in the upper atmosphere heat is added, raising the temperature locally, this perturbation is spread out all over the atmosphere, resulting eventually in the same lapse rate, but with a higher offset temperature at the warm surface. Excess heat from the cold upper atmosphere eventually heats up the warmer surface. This is what the Connollies call “pervection” [23] and Douglas Cotton “heat creep” [24]. See Figure 11. This will be discussed below.

Figure 11. The idea of “heat creep” or “pervection”. Heat added to a cold place in the atmosphere that was in equilibrium will be redistributed along the atmosphere, heating it up everywhere. According to this idea, heat would be able to flow from cold to warm places. Also, the surface heats up from $T_0$ to $T_0'$.

4.2. The Weather; Meteorology

We have seen in the previous section that a stable atmosphere has a linear temperature gradient given by the lapse rate Γ. Now, if the real atmosphere has a temperature profile that is not the lapse rate, convection might take place.

$$\left|\frac{\mathrm{d}T(z)}{\mathrm{d}z}\right| > |\Gamma|: \text{unstable}, \qquad \left|\frac{\mathrm{d}T(z)}{\mathrm{d}z}\right| = |\Gamma|: \text{neutral}, \qquad \left|\frac{\mathrm{d}T(z)}{\mathrm{d}z}\right| < |\Gamma|: \text{stable}. \qquad (54)$$

The latter situation is called “inversion” in meteorology. The derivative of the temperature profile is too small. A too-warm layer on top of a too-cold one. Not necessarily warmer, but too-warm compared to the lapse rate Γ. Vertical convection is inhibited and often air-pollution stays close to the surface since the atmosphere is not mixed sufficiently. Smoggy situations can occur. It is a very stable atmosphere. Not much going on.

In the first situation, of a large gradient, the atmosphere is unstable and prone to convection. It does not mean that convection will take place, but it might. Imagine a hot summer’s day where the surface has warmed up a lot and the air above it has not. The derivative $\mathrm{d}T/\mathrm{d}z$ is high and the atmosphere is unstable. We now have a situation in which hot air wants to go up and cold air above it wants to go down. Yet, it cannot do so because of symmetry. It is like two people meeting on a sidewalk, trying to pass each other. They both try on the left, then both try on the right, constantly blocking each other’s way. The same happens in the air. “Pressure” builds up in the air, with heat accumulating at the lower layer, without a cloud in the sky. Suddenly, somewhere, the symmetry is broken, and a way is found for the warm air to rise and the cold air to sink. Once this channel is opened, things go very fast. Violent convection can take place, rising air that had all day to accumulate evaporated water cools down, the moisture condenses, forming clouds, and heavy rain might ensue. However, if the vertical velocity is high enough, the raindrops cannot fall down, continue to rise, and eventually freeze. At the same time, the convection causes a build-up of electrical polarization by the Van de Graaff generator effect of ice particles rubbing against each other. If the resulting electric field is big enough, a thunderstorm will result.
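The criteria of Equation (54) can be condensed into a toy classifier; the sketch below is only an illustration, with the dry-air lapse rate as the assumed default:

```python
# Toy classifier (illustration only) for the stability criteria of Equation (54).
def stability(dT_dz, gamma=-9.75e-3):    # temperature gradients in K/m; default: dry-air lapse rate
    if abs(dT_dz) > abs(gamma):
        return "unstable"                # convection possible
    if abs(dT_dz) < abs(gamma):
        return "stable"                  # inversion, vertical mixing inhibited
    return "neutral"

print(stability(-12e-3))   # steep gradient              -> unstable
print(stability(-3e-3))    # shallow gradient, inversion -> stable
```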

We can now analyze again the idea of Douglas Cotton and the Connollies, which they call “heat creep” and “pervection”, respectively. Figure 12 shows a situation in which, suddenly, on top of a stable atmosphere, a perturbation of excess heat is added. As can be seen, in the top part of this perturbation, the gradient is larger than Γ, and convection will take place at higher, colder altitudes. The disturbance spreads toward colder places, not to warmer ones as was claimed by Cotton and the Connollies, an idea that was also used in the author’s own publication about the greenhouse effect [21].

Figure 12. When part of the atmosphere is suddenly heated up, the upper part of this disturbance is unstable and convection will take place, moving some heat to higher, colder altitudes. No convection of heat takes place to lower, warmer places.

Of course, the bottom of the atmosphere continues to heat up by incoming radiation and this heat can no longer move to the upper atmosphere by convection, because this movement is blocked by the stable region of low gradient caused by the perturbation. Eventually, the upper atmosphere is heated up by convection from the excess heat of the perturbation and the lower atmosphere will heat up from the heat of solar radiation that cannot move naturally to the middle atmosphere. In the end, the situation of Figure 11 results, with a temporarily increased surface temperature $T_0'$ until the excess heat of the perturbation is radiated away into space. This, however, is not heat creep or pervection.

4.3. The Climate; Climatology

We now come to another relevant subject, namely that of the climate. As we know, carbon dioxide allegedly contributes to the greenhouse effect, and the burning of fossil fuels increases the amount of carbon dioxide CO2 in the atmosphere, and it is thus concluded that our industrial-economic activity is increasing the surface temperature of the planet. Taking the above into consideration, we can draw some very important conclusions.

The most important thing to note is that the atmospheric system is not in equilibrium, and thus any sort of calculation is as good as impossible, since scientists always consider the system in equilibrium for their reasoning. As in, “What would the surface temperature be in equilibrium if we double the amount of CO2 in the atmosphere?” Then, they base their calculations on radiation balances, etc. Because the atmosphere is not in equilibrium, such calculations are as good as impossible, for we do not even know how far it is off equilibrium, and what the obstacles are hindering it from reaching equilibrium. The latter is possibly more important than the former, and calculating equilibria is rather fruitless.

The author himself has undertaken such an extensive analysis in the work titled Comprehensive Analytical Study of the Greenhouse Effect of the Atmosphere [21]. That study was based on the assumption—erroneous, as has now become clear and will be shown here—that the adiabatic lapse rate was, in fact, equilibrium, and that adding any optically active components will make the atmosphere opaque and shift the radiative-equilibrium layer—the layer in radiative equilibrium with the universe at 254.0 K—to a higher altitude in the atmosphere; the lapse rate thus has more distance over which to do its work, and the surface temperature will rise. See Figure 13.

It works like this: we start by finding out what the equilibrium temperature of the planet is by setting up the radiation balance. The solar radiation density in space at the Earth’s orbit—called the solar constant—is $W = 1361$ W/m². The Earth, with radius $R$, receives a total of $\pi R^2 W$ of radiation; because of its albedo $a$—“whiteness”—only a power equal to $P_{\rm in} = (1-a)\pi R^2 W$ is absorbed and converted into heat. The rest is reflected back into space. In this analysis, it does not matter where the radiation is absorbed. It does not necessarily have to be at the surface, as is commonly pictured. It only matters that it is absorbed by the system and not directly reflected back into space.

Figure 13. The (oversimplified) idea of the effect of increasing the opacity of the atmosphere by adding greenhouse gases. The universe is radiatively linked to a layer $z_0$. Radiation from above this threshold layer (emissive layer) can escape. This is at $T_{\rm rad} = 254.0$ K in adiabatic stability. From that point down, the lapse rate actuates down to the surface, which is at a temperature $T_0$ given by Equation (56). When the opacity is increased, the threshold layer is higher up in the atmosphere and thus the lapse rate has a longer distance to actuate, so that the surface temperature increases to $T_0'$.

According to Stefan-Boltzmann, a black body at temperature $T_{\rm rad}$ radiates a power density equal to $\sigma T_{\rm rad}^4$ per square meter, with $\sigma$ the Stefan-Boltzmann constant ($5.670374419 \times 10^{-8}$ W/(m²·K⁴)). The Earth has a surface area of $4\pi R^2$ and, assuming it is a black body, thus emits a total outward radiation of $P_{\rm out} = 4\pi\sigma R^2 T_{\rm rad}^4$. In radiative balance the two must be equal, $P_{\rm out} = P_{\rm in}$, and with $a = 0.306$ we get a radiation balance (steady state) temperature of

$$T_{\rm rad} = \sqrt[4]{\frac{(1-a)W}{4\sigma}} = 254.0~[\mathrm{K}]. \qquad (55)$$

The question is, where in the system is this temperature? On an atmosphereless Earth, or one in which the atmosphere is optically inactive, a.k.a. transparent, this would be the steady-state temperature of the surface. If we add an optically active atmosphere, some of the outward radiation is absorbed by this atmosphere and partially sent back to the surface. And here comes the standard reasoning: this is added to $P_{\rm in}$ and we thus get a new steady-state temperature according to the above equation. All fancy models are now modeling how much is radiated back. In the author’s own work mentioned above, the idea was presented in a form in which the opacity of the atmosphere puts the equilibrium layer higher up in the atmosphere, at a higher altitude $z_{\rm rad}$ (radiation above this threshold altitude, the emissive layer, can escape to the universe), and from there the lapse rate has a longer distance to actuate, and the steady-state (not equilibrium) surface temperature is given by

$$T_0 \equiv T(z=0) = (254.0~\mathrm{K}) + z_{\rm rad} \times |\Gamma|. \qquad (56)$$
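A short sketch (my own check; the emissive-layer altitude $z_{\rm rad}$ used below is an assumed example value, not a number given in this work) evaluates Equations (55) and (56):

```python
# Check (illustration) of Equations (55) and (56); z_rad below is an assumed example value.
W, a, sigma = 1361.0, 0.306, 5.670374419e-8   # solar constant W/m^2, albedo, Stefan-Boltzmann constant

T_rad = ((1 - a) * W / (4 * sigma)) ** 0.25
print(T_rad)                                  # about 254.0 K, Equation (55)

Gamma = -6.5e-3                               # K/m, standard-air lapse rate
z_rad = 5_000.0                               # m, assumed emissive-layer altitude
T_0 = T_rad + z_rad * abs(Gamma)
print(T_0)                                    # about 286.5 K for this assumed z_rad, Equation (56)
```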

Now, $z_{\rm rad}$ depends on the absorption coefficients of the gas molecules in the atmosphere. If the atmosphere becomes more opaque, for instance by adding the optically active CO2 to it, $z_{\rm rad}$ increases and the surface temperature $T_0$ with it, as shown in Figure 13. Most traditional models, however, state that this effect is well saturated, as was already observed by Knut Ångström [25].

The total absorption [of Earth’s radiation] is very little dependent on the changes in the atmospheric carbon dioxide content, as long as it is not smaller than 0.2 of the existing value.

In other words, the greenhouse effect—if caused by radiation balance—is saturated for CO2 and only starts dropping if the CO2 concentration drops below 20% of the value at the times of Ångström, which would possibly be 60 ppm. Radiation has other optical windows to leave the atmosphere. Moreover, different correlation coefficients between $T_0$ and [CO2] have been observed at different time scales [21], which makes us scratch our heads and reminds us of the statistics motto “correlation is not causation”.

Yet, in this analysis, we made the logical fallacy of thinking that the adiabatic lapse rate is the realization of equilibrium. In reality, in equilibrium the lapse rate is zero (constant temperature), and the above equation is incorrect. It represents some radiative steady state. With that, the entire reasoning crumbles. In reality, we have to make the important observation that the greenhouse effect in (local) thermodynamic equilibrium is zero:

1) In thermodynamic equilibrium, the temperature is constant everywhere: $\Gamma \equiv \mathrm{d}T/\mathrm{d}z = 0$.

2) In radiation balance (steady state), this uniform temperature is $T_{\rm rad} = 254.0$ K.

That would then be the surface temperature if the atmosphere system is itself in thermodynamic equilibrium, but not in thermodynamic equilibrium with the universe, only in steady state with it. The surface temperature would then be 254.0 K. This, irrespective of the constituents of the atmosphere or their optical activity. Adding carbon dioxide, or any other molecules, to the atmosphere in thermodynamic equilibrium would not change the surface temperature. The temperature of the surface is 254.0 K without an atmosphere or with an optically inactive atmosphere, and the surface would also be at 254.0 K with any atmosphere, as long as it is locally in thermodynamic equilibrium.

5. Conclusions

This work addressed an important question in the science of the atmosphere—both meteorology and climatology—namely, what does thermodynamic equilibrium entail? It was shown by a classic analysis of Statistical Physics that thermodynamic equilibrium actually means constant temperature, even in the presence of a gravitational field. The important conclusion is then that the observed linear temperature gradient implies that the atmosphere is not in equilibrium. We have seen that this gradient—the lapse rate—can be explained when limiting the system to adiabatic processes. Yet, the important conclusion for climatology is that an atmosphere in local thermodynamic equilibrium and in radiation balance with the rest of the universe would have zero greenhouse effect, and the surface temperature would be 254.0 K, irrespective of the composition of the atmosphere.

The questions for climatology are then

  • How far are we from equilibrium?

  • What processes enable/block reaching equilibrium?

We have to point out here that reasoning in terms of a system of equilibrium is much easier than reasoning in one that is merely in steady state. Equilibrium can relatively easily be calculated, steady state not so much. This might be the reason that no conclusive value for the greenhouse effect has been determined after centuries of study.

And there are still many enigmas. An example is Mars, which, despite having an atmosphere with much more carbon dioxide (both in density and in absolute terms of mol/m², the latter being 3.77 kmol/m² for Mars and 0.58 kmol/m² for Earth), has no greenhouse effect whatsoever. It is well possible that the atmosphere of Mars, a system of much lower density and pressure, is in thermodynamic equilibrium. For Earth, it is clear that a non-equilibrium (non-zero) lapse rate is observed, so we can expect a greenhouse effect as described in the above-mentioned work, yet it is not clear how big it is and how much the contribution of carbon dioxide to it is. It remains an important, open question.

Declaration

No AI tools were used in any way except a spelling checker (en_US.dic).

NOTES

¹Although for all logical purposes, we can consider space continuous, since the box size is at the Planck scale. This refers to quantities of space, time, energy and other units, with lengths around 10⁻³⁵ m.

2Have you ever wondered why Statistical Physics textbooks all use β ? Now you know! It is simply the second boundary condition in the Lagrange method of undetermined multipliers, and β is the second letter in the Greek alphabet.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Eddington, A.S. (1928) The Nature of the Physical World: Gifford Lectures 1927. Cambridge University Press.
[2] Swenson, R. (2000) Spontaneous Order, Autocatakinetic Closure, and the Development of Space-Time. Annals of the New York Academy of Sciences, 901, 311-319.
https://doi.org/10.1111/j.1749-6632.2000.tb06290.x
[3] Tolman, R.C. and Ehrenfest, P. (1930) Temperature Equilibrium in a Static Gravitational Field. Physical Review, 36, 1791-1798.
https://doi.org/10.1103/physrev.36.1791
[4] Balazs, N.L. and Dawson, J.M. (1965) On Thermodynamic Equilibrium in a Gravitational Field. Physica, 31, 222-232.
https://doi.org/10.1016/0031-8914(65)90089-3
[5] Trupp, A. (1999) Energy, Entropy: On the Occasion of the 100th Anniversary of Josef Loschmidt’s Death in 1895: Is Loschmidt’s Greatest Discovery Still Waiting for Its Discovery? Physics Essays, 12, 614-628.
https://doi.org/10.4006/1.3028792
[6] Graeff, R.W. (2002) Measuring the Temperature Distribution in Gas Columns. Quantum Limits to the Second Law: First International Conference on Quantum Limits to the Second Law, San Diego, 29-31 July 2002, 225-230.
https://doi.org/10.1063/1.1523808
[7] Graeff, R.W. (2007) Viewing the Controversy Loschmidt—Boltzmann/Maxwell through Macroscopic Measurements of the Temperature Gradients in Vertical Columns of Water. Version: Descript 372_dec6 December 9, 2007.
https://tallbloke.wordpress.com/wp-content/uploads/2012/01/graeff1.pdf
[8] Jeong, H.M. and Park, S. (2022) Temperature Gradient of Vertical Air Column in Gravitational Field. Scientific Reports, 12, Article No. 6756.
https://doi.org/10.1038/s41598-022-10525-0
[9] Herrmann, F. (2008) Equilibria in the Troposphere. arXiv:0810.3375.
[10] Björnbom, P. (2015) Temperature Lapse Rates at Restricted Thermodynamic Equilibrium in the Earth System. Dynamics of Atmospheres and Oceans, 69, 26-36.
https://doi.org/10.1016/j.dynatmoce.2014.10.001
[11] Martínez-Kahn, M. and Martínez-Castilla, L. (2010) The Fourth Law of Thermodynamics: The Law of Maximum Entropy Production (LMEP). Ecological Psychology, 22, 69-87.
https://doi.org/10.1080/10407410903493160
[12] Prigogine, I. (1978) Time, Structure, and Fluctuations. Science, 201, 777-785.
https://doi.org/10.1126/science.201.4358.777
[13] Lavoisier, A. (1789) Traité élémentaire de chimie (Elementary Treatise on Chemistry). Chez Cuchet.
[14] Stallinga, P. and Khmelinskii, I. (2017) Perils and Pitfalls of Empirical Forecasting. European Scientific Journal, 13, 18-46.
https://doi.org/10.19044/esj.2017.v13n18p18
[15] Rowlinson, J.S. (1970) Probability, Information and Entropy. Nature, 225, 1196-1198.
https://doi.org/10.1038/2251196a0
[16] Wikipedia. Lagrange Multiplier.
https://en.wikipedia.org/wiki/Lagrange_multiplier
[17] The Engineering Toolbox. Specific Heat and Individual Gas Constant of Gases.
https://www.engineeringtoolbox.com/specific-heat-capacity-gases-d_159.html
[18] The Engineering Toolbox. Molecular Weight of Substances.
https://www.engineeringtoolbox.com/molecular-weight-gas-vapor-d_1156.html
[19] Gradshteyn, I.S., Ryzhik, I.M., Jeffrey, A. and Zwillinger, D. (2000) Table of Integrals, Series and Products. 6th Edition, Academic Press.
[20] Jacob, D.J. (1999) Introduction to Atmospheric Chemistry. Princeton University Press.
[21] Stallinga, P. (2020) Comprehensive Analytical Study of the Greenhouse Effect of the Atmosphere. Atmospheric and Climate Sciences, 10, 40-80.
https://doi.org/10.4236/acs.2020.101003
[22] Volokin, D. and ReLlez, L. (2014) On the Average Temperature of Airless Spherical Bodies and the Magnitude of Earth’s Atmospheric Thermal Effect. SpringerPlus, 3, Article No. 723.
https://doi.org/10.1186/2193-1801-3-723
[23] Connolly, M. and Connolly, R. (2014) The Physics of the Earth’s Atmosphere III. Pervective Power. Open Peer Review Journal, 25, 1-18.
http://oprj.net/articles/atmospheric-science/25
[24] Cotton, D.J. (2013) Planetary Core and Surface Temperatures. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.2876905
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2876905
[25] Ångström, K. (1900) Ueber die Bedeutung des Wasserdampfes und der Kohlensäure bei der Absorption der Erdatmosphäre. Annalen der Physik, 308, 720-732.
https://doi.org/10.1002/andp.19003081208
