Riemann Hypothesis, Catholic Information and Potential of Events with New Techniques for Financial and Other Applications

Abstract

In this research we are going to define two new concepts: a) “The Potential of Events” (EP) and b) “The Catholic Information” (CI). The term CI derives from the ancient Greek language and denotes all the Catholic (general) Logical Propositions which hold true for every element of a set A. We will study the Riemann Hypothesis in two stages: a) by using the EP we will prove that the distribution of the events e (even) and o (odd) of the Square Free Numbers (SFNs) on the axis Ax(N) of the naturals is of Heads-Tails (H-T) type; b) by using the CI we will explain the way in which the distribution of the prime numbers can be correlated with the non-trivial zeros of Riemann’s function ζ(s). The Introduction and Chapter 2 are necessary for understanding the solution. In Chapter 3 we present a simple method of forecasting for many very useful applications (e.g. financial, technological, medical, social, etc.), developing a generalization of the new theory proven here, which we finally apply to the solution of RH. The following Introduction, as well as the Results and the Discussion at the end, shed light on the possibility of the proof of all the above. The article consists of 9 chapters, numbered 1, 2, …, 9.

Share and Cite:

Papadopoulos, P. (2021) Riemann Hypothesis, Catholic Information and Potential of Events with New Techniques for Financial and Other Applications. Advances in Pure Mathematics, 11, 524-572. doi: 10.4236/apm.2021.115036.

1. Introduction

We will denote the prime natural numbers by q_1 = 2, q_2 = 3, q_3 = 5, …, by e every SFN which is a product of an even number of primes (e.g. 6, 10, 14, 15, …, 210, …), and by o every SFN which is a product of an odd number of primes (e.g. 2, 3, 5, …, 30, …, 154, …).
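
For readers who want to reproduce this classification, the following Python sketch (an illustration of ours, not part of the original derivation) labels the natural numbers up to a small bound as e-SFN, o-SFN, or not square-free, by trial-division factoring; the function name sfn_type and the bound 30 are arbitrary choices.

# Minimal sketch: classify n as an e-SFN (even number of prime factors),
# an o-SFN (odd number of prime factors), or None if n is not square-free.
def sfn_type(n: int):
    count, d, m = 0, 2, n
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:        # repeated prime factor -> not square-free
                return None
            count += 1
        d += 1
    if m > 1:
        count += 1                # the remaining factor m is itself prime
    return 'e' if count % 2 == 0 else 'o'

if __name__ == "__main__":
    labels = {n: sfn_type(n) for n in range(2, 31)}
    print([n for n, t in labels.items() if t == 'e'])  # 6, 10, 14, 15, 21, 22, 26
    print([n for n, t in labels.items() if t == 'o'])  # 2, 3, 5, ..., 29, 30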

We point out that it is well known from the bibliography [1] that if the distribution of the events e, o of the SFNs (which we also referred to in the Abstract) is of H-T type, then the Riemann Hypothesis (RH) is valid. Let us symbolize [see relation (54) in part 6.7 of Chapter 6] this sufficient proposition for RH as below:

Type[Distr(G/N_SF, f)] = H-T (1)

But, independently of this knowledge from the bibliography, in the Appendix at the end of this article we will give a proof of the sufficiency of (1) for the validity of RH.

Suppose a common box (e.g. in the shape of a cube) is divided into a finite number of small sub-cubes ω_i, i = 1, 2, 3, …, equal to one another, inside each of which (ω_i), after an experiment EX, only two types of events H, T can occur; e.g. we throw inside some of these sub-cubes a non-ideal special coin and suppose that each time this special coin interacts in a different way depending on the positions of the sub-cubes. We also give this experiment the name EX. Then we want to calculate the probabilities of H, T in a chosen sub-cube ω_m = (x_m, y_m, z_m) in which we have not yet thrown our specific coin, or, more generally, in which we may have thrown the coin but do not know the specific result. If, in a 1st case, we have as information only the numbers μ, ν of the two types of results H, T in the total box, then at the position ω_m the two probabilities for the events H, T will be, respectively, p(H) = ν/(ν + μ) and p(T) = μ/(ν + μ), where ν, μ are the multitudes of H, T produced by EX. The question which now arises is how these two probabilities change if, in a 2nd case, we additionally take into account all the information coming from the positions in which the experiment EX took place, relative to the particular position of the specific sub-cube ω_m = (x_m, y_m, z_m) under forecast; in other words, how could we then calculate them? For example, if the reference sub-cube ω_m lies near sub-cubes with more results of type H than of type T, then we expect a greater probability for type H than for type T in the sub-cube ω_m. This is a three-dimensional problem because the volume |Ω| of the box Ω has three dimensions. Below, generalizing, we will examine this problem in spaces of an arbitrary number of dimensions and finally we will work on the simplest, one-dimensional problem concerning the distribution of the events e, o of the SFNs on the one-dimensional axis Ax(N) of the natural numbers.
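
The following toy Python sketch illustrates, with invented data, the difference between the two cases just described: the 1st case uses only the global counts ν, μ of H, T in the box, while the 2nd case weights each recorded result by the inverse of the volume (4/3)πR³ of the sphere of radius R around the forecast sub-cube ω_m, anticipating the “self-area” of part 2.2.3 and relation (14) of Chapter 3. The coordinates, results, and grid are hypothetical.

import math

# recorded EX results: (x, y, z, 'H' or 'T') at a few sub-cube centers (invented data)
results = [(0, 0, 0, 'H'), (1, 0, 0, 'H'), (4, 4, 4, 'T'), (5, 4, 4, 'T'), (2, 1, 1, 'H')]
m = (1, 1, 1)                       # the sub-cube omega_m under forecast

# 1st case: only the multitudes nu (H) and mu (T) are used
nu = sum(1 for *_, t in results if t == 'H')
mu = sum(1 for *_, t in results if t == 'T')
print("1st case:", nu / (nu + mu), mu / (nu + mu))

# 2nd case: each recorded result contributes 1 / |Omega_{m,e(j)}| ~ 1 / ((4/3)*pi*R^3)
def weight(p):
    R = math.dist(m, p[:3])
    return 1.0 / ((4.0 / 3.0) * math.pi * R ** 3)

V_H = sum(weight(p) for p in results if p[3] == 'H')
V_T = sum(weight(p) for p in results if p[3] == 'T')
print("2nd case:", V_H / (V_H + V_T), V_T / (V_H + V_T))

Because the H results of this toy data set lie much closer to ω_m than the T results, the 2nd case gives a markedly higher probability for H than the simple ratio 3/5 of the 1st case.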

Below, in Chapter 2, we will see a new idea showing how we can accumulate information at some particular position m through the counting of the frequency of fractional events [of some type e (or o, etc.)] which appear at the position m, always in accordance with the axioms of the classical theory of probability. At the end of part 2.3.2 we give a prototype (different) example of this.

In the following, in part 2.3, we will assume that under each natural number there is one of two different types of magnets E, O. We hypothesize that these E, O interact differently with a special coin C. Then, mapping the e, o of the SFNs to the H, T of C, we will assume that, throwing the coin C once only over each SFN position, we paradoxically record results that form the real, known distribution of e, o on the SFNs. In part 2.3 and in Chapters 4, 5, on the basis of a singular and precise definition of probability, we will conclude that if, on the basis of the e, o distribution exclusively on the SFNs, we have calculated the probabilities of the possible results e, o of C at the position of any natural number m [accumulating every time at m the information from all the e, o on the SFNs], then, on the basis of the definitions (or the rigorous axioms) of probability theory, we find that these two probabilities are equal, so by definition the distribution of the events e, o on the SFNs is H-T. Moreover, the same H-T type of distribution extends (now obviously only hypothetically) over the total axis Ax(N) of natural numbers.

But what about the distribution I of even (2, 4, 6, 8, …) and odd (1, 3, 5, 7, 9, …) natural numbers on the Ax(N) axis? The answer is that it is also an H-T distribution that obeys relation (70) of the Appendix at the end of this paper (which is also used at the end of Chapter 7), but now with the first member of relation (70) replaced by the function that measures the difference between the two sums of even and of odd natural numbers. The inequality of this new relation will be valid again, but it will obviously be stronger, because now the first member is exactly zero, which means that the distribution is H-T. But what will be the difference between the H-T distribution I and the H-T distribution II (of e, o on the SFNs), if we prove below that the distribution II of the e, o of the SFNs is also H-T? The answer is that the distribution I encloses finite information up to infinity, as opposed to II, which, as we can show, encloses infinite information up to infinity. Indeed: first we observe that every natural number written in the binary system (0, 1) demands at least one additional bit of information compared with any other natural number among the infinite natural numbers, so that its binary code can be differentiated from that of every other natural number separately. Because the binary codes of natural numbers, taken in pairs, represent independent codes (i.e. combinations of 0, 1), we conclude that infinite bits of information accumulate. However, we will prove in Chapter 7 (on the basis of what we prove in Chapter 5) that the same is true of the distribution of the prime numbers. That is, we will prove that the prime numbers have an impartial distribution which, as an impartial series extending to infinity, encodes infinite information. This proof depends on my initial and basic proof in Chapter 5 that the distribution II of e, o on the SFNs includes infinite information, because it is an infinite and, at the same time, an H-T distribution. And here the above properties of I, II imply the following difference: if we know that an unknown natural number is even, we can be sure that the next natural number is odd, but we cannot do the same for an unknown e-SFN, i.e. we cannot say with certainty (on the basis exclusively of the knowledge of the type of the previous one) that the next SFN will be an o-SFN.

At this critical point we can define an infinite H-T distribution with infinite information as a Random Distribution relative to e, o. Therefore the above distribution II will be Random, but the same need not hold for the simpler distribution I. The two parts (components), the even and the odd distributions, compared with each other, will be equivalent in I as well as in II. It should be noted that it is known in the bibliography that in order to prove RH it suffices simply to prove that the distribution II of e, o on the SFNs is H-T, and not that this distribution II has infinite information. This is also evident from the use of relation (70) in the Appendix, where the infinite information of distribution II is not required for proving RH on the basis of (70). On the contrary, in this research we first prove in Chapter 5 that the distribution II of the events e, o on the SFNs is H-T (independently of whether the information of II is infinite or not) and then we conclude that this distribution II includes infinite information in Ax(N), as an H-T infinite series.

2. The Theory of the Problem and Definitions

2.1) In what follows we will denote by p(e_i) the probability (or frequency) of appearance of an event e_i in a set A. Also, by p(e_i || e_j) we will denote the probability of appearance of e_i provided that, during the appearance of e_i, the other event e_j also appears.

2.2) General Definitions

2.2.1) From the classical theory of probability we know: a) The probability p(e_λ) of an event e_λ is a function of mathematical propositions produced as information by an experiment, which in the following we will call the EX experiment, and which counts the events of all types, so that we can find the relative multitude of events of type e_λ with respect to the multitude of all the other types of events which are competitive to e_λ. b) The probability p(e_λ) changes if, and only if, the above additive information from the counting of the events changes, that is, if, and only if, the information of the counting changes. Thus Shannon expressed the information I [2] in units of bits, as a function of the corresponding probability, equal to the reduction ΔS of the system entropy after a measurement:

I = log_2[1/p(e_λ)] = ΔS (2)

2.2.2) Let G = {e_1, e_2, …, e_k} be a set in which e_1, e_2, …, e_k represent a number k of repeatable phenomena or events defined by k pairwise distinct mathematical propositions. Let Ω = {ω_1, ω_2, …, ω_M} be another set whose components represent other events, defined by a multitude of M pairwise distinct propositions which are also distinct from all the elements of the previous set G. We define here the events of G as competitive (in pairs) with respect to the set Ω if, and only if, in every element ω_i of Ω exactly one element of G can be mapped (or take place). In other words, the element e_j is a function of the element ω_i: e_j = F(ω_i). For example, ω_i could represent the sub-cubes of the box that we described at the beginning of the Introduction, and then e_j will represent one of the two (k = 2) events H, T; that is, in this example G = {H, T} and Ω is the above box.

Generally, for our case of natural numbers, we will define as a space Ω with n dimensions a Euclidean manifold given by a set with a multitude of elements or points, where every one of them, “i”, is an arrangement of n natural numbers (x_1, …, x_n)_i that defines an elementary area ω_i around it in connection with the neighboring points. Next, defining the function F(ω_i) on some of these elements of the space Ω, and according to the definition of probability, we can define as the “Density of Frequency of Appearance of an event e_j in this space Ω”, or AFD for short, the density of the frequency of its appearance with respect to an observer Ob(m) sitting inside some ω_m at the position (x_1, …, x_n)_m, or “m” for short. This AFD, in relation to the AFDs of all the other competitive events of the set G, easily leads to the probability of the event e_j after normalization. We will name this definition of the AFD:

Proposition of AFD (3)

2.2.3) Consider the special case where Ω is a manifold of some number n (multitude) of dimensions. Let Ω be divided into a number M of areas ω_i without common points (in pairs), in each of which only one of the events e_j of the previous set G can take place. a) If Ω is Euclidean over all its area and is divided into a multitude M of n-cubes ω_i [which do not cross one another (in pairs)], then we define Ω as a “space of events” and its n-cubes ω_i as “positions of events”, or positions ω_i for short. Below we will refer only to such Euclidean spaces. b) In such a space Ω, every position ω_m at which we ignore what event took place can be chosen as the position of an observer, whom we will symbolize as Ob(ω_m) or Ob(m). Thus the observer, together with every other position ω_j, will define a self-area Ω_{m,e(j)} which includes all the positions of events (e.g. all the n-cubes) which either are crossed by, or are included inside, a spherical (n−1)-surface [or n-sphere] that has as center the position ω_j of some event that happened and as radius R the distance between the centers of the positions ω_m and ω_j. Let |Ω_{m,e(j)}| be the volume of this self-area Ω_{m,e(j)} of the event at ω_j. Therefore, for a 3-dimensional space, if |ω_j| = 1 × 1 × 1 is elementary (small) for every j, we will approximately have |Ω_{m,e(j)}| ≈ (4/3)πR³.

2.2.4) Main proof: We will first refer to the one-dimensional problem of the e, o of the SFNs on Ax(N), where the proof is simple and is the only one we are interested in below for RH. So we will denote the one-dimensional region by L_{m,e(j)} instead of Ω_{m,e(j)} and we will put Δ_i instead of ω_i, because here the space of events Ω = L has only one dimension and so ω_i = Δ_i. This position Δ_i = [i, i+1), with length 1, referring to the natural number i, we will below call simply the position i for short. The i, i+1, i+2, … represent successive natural numbers on their axis Ax(N). Let us choose the symbol e_λ(j) for the set of n coordinates coding the event e_λ; therefore, for Ax(N) with n = 1, in all that follows we agree that the symbol e_λ(j) is a natural number (the code of a number-event) which carries, rigorously, only one specific and temporary piece of information, telling us simply that an event of type e_λ has been counted (by the EX experiment) exclusively and only at the position x = e_λ(j). The j is the serial number of the events of type e_λ in an interval [a, b] ⊂ Ax(N). For example, if [a, b] = [4, 10] then the e-SFNs in it are e(1) = 6, e(2) = 10 and the o-SFNs in it are o(1) = 5, o(2) = 7. In the following we will keep this symbolism. And, without harm to generality, let e_λ = e. Based on the previous proposition (2.2.3, b), the change of the information +1 that would initially be added to Ob(m) [who is sitting just on the natural number m] from the code position e_λ(j), for its existence (as a particular event e), will come in this 2nd case only from the new extra information of his new knowledge of the distance R = |m − e_λ(j)|.

In order to now calculate this new quantity that he will add to his position m inclusively from the code e λ ( j ) alone, instead of the quantity +1, the observer will work as follows: He will start from the position e λ ( j ) where he will be symbolized as Ob [ e λ ( j ) ] and will move to its initial position m. He knows that only one event from the k = 2 competitive e, o can occur within each intermediate position of interval Δ i = [ i , i + 1 ) (or simply i) of its unique path as he moves on a straight line along R. We know that the probability by its very definition express a relative frequency of iterations with respect to the competitive events. So as the observer passing through the next point e λ ( j ) 1 he is forced axiomatically by these 2 basic previous propositions to estimate that for a possible EX (of his special coin) in future at e λ ( j ) 1 will have as information [which as we said is coming exclusively and only from the code e λ ( j ) ] the probability: 1 / | e λ ( j ) ( e λ ( j ) 1 ) | = 1 / 1 = 1 for this specific type e λ = e only at x = e λ ( j ) . Because the e λ happened once in the area [ e λ ( j ) 1 , e λ ( j ) ] which refers to two positions (of events) from which only in one he ignores the result of Ex experiment, because in his specific course of connection m and e λ ( j ) he continuously refers (exclusively and only) to one given event of EX which is the code e λ ( j ) . Similarly he passing from the next natural number e λ ( j ) 2 the information coming (exclusively and only) from the known e λ ( j ) will be the probability: 1 / | e λ ( j ) ( e λ ( j ) 2 ) | = 1 / 2 for the type e λ = e , because e λ happened once but now in an area of three positions which refer to the new region [ e λ ( j ) 2 , e λ ( j ) ] and in two of them he ignore the results of Ex experiment, because he refer again exclusively and only to the same event which is this code e λ ( j ) . Therefore, reaching its initial position m, the observer based exclusively on the classical probability definition and according to previous part 2.2.2 he will estimate a self-AFD: ρ m , e λ ( j ) = 1 / | e λ ( j ) m | that refers exclusively and only to the type e λ event and on the given position x = e λ ( j ) , i.e. to the particular code or number-event e λ ( j ) and so this AD is the self probability of number-event e λ ( j ) projected on m or simpler the Appearance Density (AD) on m. We will name this definition as:

Proposition of AD (4)

Therefore the quantity ρ_{m,e_λ(j)} represents the AD as a self-density of the code event e_λ(j) that appears at the position Ob(m). In other words, depending on the basic axioms of information theory and on the definition of probability, we proved that in the 2nd case every code e_λ(j) of the event e_λ appears at the position m as a fractional event ρ_{m,e_λ(j)} smaller than +1. We observe that in the special 1st case of a space Ω with zero dimensions, and with e.g. two events H, T, the observer (Ob), standing at the singular point of the space, with “multitude of events ν + μ”, adds one by one the equal ADs of the events of the same type e_λ = H: 1/(ν + μ) + 1/(ν + μ) + …, so as to find for H its AFD: (1 + 1 + 1 + … + 1)/(ν + μ) = ν/(ν + μ). In the 2nd case, where Ω has 1 dimension, there is no reason for this additive property, which characterized the densities in the 1st case, to change, because the AD ρ_{m,e_λ(j)} of the 2nd case is the same kind of magnitude as the AD equal to 1/(ν + μ) of the 1st case. In other words, at the position x = m the observer Ob(m) simply perceives the events e, o as fractional events, in contrast to the case where he had no information about their positions and so added the events e, o as units, that is, 1 + 1 + 1 + … Therefore this difference in the 2nd case changes only the quantities, and so, from all the definitions and axioms of information theory, it does not imply any modification of the additive property between fractional and non-fractional events. So the observer Ob(m) adds the fractional events ρ_{m,e_λ(j)} to find the ratio between the two populations (the two multitudes) of the types e, o, knowing that this ratio, exclusively alone, defines the probabilities of e, o. The normalization is given every time exclusively and only from this ratio and nothing more [3]. Therefore the observer Ob(m) maintains the same rule (the same statistical law) of summing fractional events e, o as was also used by the classical observer in the 1st case of non-fractional events H, T. So Ob(m) in the 2nd case adds the fractional events e, o just as in the 1st case of non-fractional events, having again the same right, as we proved; and so, finding the new ratio that now depends also on the positions of the axis Ax(N) relative to him [where the events e, o have been coded by the EX experiment], Ob(m), finally, after a classical normalization, easily arrives at Equation (23) of Chapter 4 below.

2.2.5) From basic probability theory we know that the three dimensions x_i (with i = 1, 2, 3) of a cube Ω (which we referred to before as a box) can correspond to three random variables representing any magnitudes; but here these may refer not just to one event of every type that can be measured by the EX experiment in the total Ω, as happens in the spaces of random variables of probability theory, since the space Ω now has the unusual property of referring to a number N_λ of events of type e_λ, all measured by EX in Ω. Evidently, the space Ω can be made of an arbitrary number n of dimensions and not only of three. Generalizing the above, we can define the density ρ_{m,e_λ(j)} of the specific event e_λ measured by the EX experiment at the position ω_j = [Δx_1(j), Δx_2(j), Δx_3(j)] of the previous box (e.g.) relative to the observer Ob(ω_m); that is, we now define at the position ω_m of the space Ω: ρ_{m,e_λ(j)} = 1/|Ω_{m,e(j)}| = Δp_λ. The self-area Ω_{m,e(j)} (which was defined in section 2.2.3 before) has a volume |Ω_{m,e(j)}| which obviously is defined as a function of these random variables x_i. If we divide the space Ω of the box into a number of elementary (small) cubes and this number tends to infinity, then the quantity ρ_{m,e_λ(j)} tends to the elementary probability dp_λ = f_λ dΩ of probability theory, where the function f_λ = F_λ(x_1, …, x_n), referring to the position ω_m = (x_1, …, x_n)_m, corresponds to the well-known probability density. The only difference here (with the infinitely small parts, in each of which only one event occurs), in contrast to the previous case, is that the normalization may need to handle infinite quantities. It is also known that the elementary quantity dp_λ, as a function of the coordinates x_1, …, x_n, remains unchanged when we transform the coordinates x_1, …, x_n of the event space Ω into some new x′_1, …, x′_n: dp_λ = f_λ dΩ = f′_λ dΩ′, using the known Jacobian.

2.3) Special analysis for SFNs

2.3.1) It has been shown that if the distribution of the events e, o of the SFNs on the axis Ax(N) of the set N of natural numbers is of Heads-Tails (H-T) type, then the Riemann Hypothesis is valid. In this article we will prove that this distribution of the events e, o on the SFNs is indeed of H-T type.

Let us imagine that we have a specific special coin C with a homogeneous distribution of the density of its metal, but endowed with the property that its two sides interact magnetically in different ways with two types of magnets E, O. Suppose that on the axis Ax(N), below the position of each SFN, we have placed one of the two magnets E, O. Additionally, we make the two results H, T correspond to the events e, o of the SFNs. Suppose that, starting from the first SFN, which is the natural number 2, we proceed to infinity by dropping this coin C on the position of each SFN, under which, as we said, someone has placed one of the two magnets E, O. It is obvious that the distribution of results will be of H-T type if, and only if, the placement of the magnets E, O was done in a completely random way. In other words, the distribution of the results will be of H-T type if, and only if, the result of the spins of the specific coin C is statistically unaffected by the respective positions of the SFNs where we toss this special coin. And that is exactly what we will prove. Suppose that in the above experiment, throwing this special coin up to infinity, there results (paradoxically) the real arrangement of e, o of the SFNs (known from number theory on the axis of N), which is dictated by the property e (even) or o (odd) of each respective SFN. Let us call this distribution e-o-SFN. So how could we know whether this distribution of e, o on the SFN positions is random, that is, whether it is of H-T type? It is a fundamental question.

We will go deeper into the definition of probability. If an observer Ob(m) at a specific position m within an area [α, β] of Ax(N), with α < m < β, notices that within this interval [α, β] there is a number (multitude) ν of events of type e and also a number μ of events of type o, then the information that this (finite, or tending to infinity) area gives to the observer Ob(m) is that the probabilities that each of the events e, o occurs again at his position m (after throwing the special coin on m) are, respectively, p(e) = ν/(ν + μ) and p(o) = μ/(ν + μ). The position m in [α, β] may be an SFN position where, e.g., the observer ignores the result (on m) and calculates it as a function of the results of the EX experiment in the area [α, β] to which the position m also belongs; but m in [α, β] can obviously also be, in general, the position of any natural number, which may not correspond to some SFN, assuming that the magnets E, O are placed below every natural number and that every magnet affects each time only the special coin C that is tossed over that number. In this 1st case we suppose that Ob(m) at his position m has no information about the distribution of e, o in the range [α, β] and therefore expresses the probability p(e) simply as a relative frequency of repetition of e with respect to the total repetitions of e, o in the range [α, β]. As we know, the AFD (that we defined in 2.2.2 before for an event, e.g. e, at the position m), as the additive part of the probability, changes when the information about the existing iterations of the event e in the range [α, β] changes; for example when, in addition, in a general 2nd case, we also take into account the distances of all the events of type e (in [α, β]) from the position m of the observer. Of course, the equivalence between the distribution of the magnets E, O below all the natural numbers and the distribution of the magnets E, O below the SFN subset is exactly the generalization of the distribution defined by the specific total magnet distribution of the SFNs alone. In other words, the hypothetical equivalence between the results e = H, o = T of the special coin C of the EX experiment on the SFNs and on the subset of all the other natural numbers implies, by definition, that the magnets E, O below all the natural numbers must follow the same distribution as they have over the subset of SFNs alone, e.g. an H-T distribution or not. This is because we are interested in finding what exactly is the distribution of these specific two types from the relative perception of Ob(m), who can be sitting at the position of any natural number m = κ ∈ N. Obviously, if we choose as m = κ some SFN, then we must neglect from the EX data the knowledge of the specific type (e or o) of this particular SFN. As we said, the EX experiment is the set of all data (results) from the previous tosses realized by the specific coin C only over all the SFNs. We must point out that any κ = m which is not an SFN has no type e, o; but here we are interested in the relative perception from Ob(m) of the events e, o exclusively and only of all the SFNs, because if this perception of the e-SFN distribution is the same as the perception of the o-SFN distribution from the position Ob(m), and from every natural number m, then we conclude that the perception of the e-o-SFN distribution from the position Ob(m), and from every natural number m, is finally H-T; and then we conclude that the distribution of e, o of the SFNs alone, investigated from every position m ∈ N [i.e. the distribution e-o-SFN on the axis Ax(N)], will by definition also be H-T, because this is the definition of an H-T distribution taking into account all the distances of the events e, o from each position κ = m of any natural number.

We discussed the previous points because here we are obliged to take into account all the existing information of the infinite SFNs (resulting from the specific coin C of EX, previously described, only on the SFNs) for the calculation of the probabilities p(e), p(o) at the position m where EX has not yet been executed, supposing also that the distribution of e, o on the SFNs is extended in some hypothetical way (useful only for our calculations) over all of Ax(N), so as to know how the results of the coin C affect the forecast of e, o (or H, T) at the position m. That is, we ask for their probabilities at the point m, where we have not yet tossed the coin C, supposing that the magnets E, O under the set of all natural numbers (including m) have the same distribution as the subset of magnets E, O of the SFNs, which, as we said, paradoxically gave the absolutely same arrangement of H, T as the existing (real) arrangement of the e, o of the SFNs. Therefore we must additionally take into account the positions of e, o in N (generalizing the [α, β]) with reference to m. However, the positions of e, o with respect to m are by definition the distances of each event e, o on Ax(N) from this position m of the observer. How, then, must the previous simple definitions of the probabilities p(e), p(o) now change, as relative repetition frequencies in the infinite region N, taking into account the positions (that is, taking into account all the existing information in N), so that we finally answer the initial question of whether the distribution of e, o appears to be of the same type from the observer position m or not? We will typically call it “H-T type” from m (enclosing the phrase in quotation marks), because the H-T distribution refers to all the positions m and not to one m. In other words, utilizing all the available information in the infinite range N for the distribution of the events e, o, we must define their probabilities at the observer’s position m. If we achieve this, then it suffices to show that the two probabilities of e, o are equal at each point m of N, because this is by itself an exact definition of the H-T type distribution of the events e, o on the SFNs. Thus we arrive at the next Main Proposition:

2.3.2) Main Proposition: “According to section 2.2 and part 2.3.1 before, taking any interval Ω = [α, β] of Ax(N) with a multitude M of natural numbers, all accompanied by hypothetical magnets E, O under them, and under the presupposition that the experiment EX took place on some of them, of multitude N < M, we conclude that the distribution of the results H, T on these M numbers will be of H-T type if, and only if, the reception of all the information about all the results (data) of the experiment EX, from any place m ∈ [α, β], gives to its observer Ob(m) the information of equal probabilities. That is, if, and only if, this distribution of the N results of EX, relative to their distances from Ob(m), appears as ‘H-T type’, i.e. if, and only if, p_H(m) = p_T(m) for every m ∈ [α, β]. These two probabilities are defined below [i.e. the relations (23), (24) in Chapter 4], for every m in the event space Ω = [α, β]”.

In order now to generalize the definition of the two probabilities p(e), p(o), using at the same time all the spatial information of the distribution of the events e, o within a range [α, β] and with respect to a selected position m ∈ [α, β] of the observer Ob(m), we first observe that in the previous definitions of the 1st case, p(e) = ν/(ν + μ), p(o) = μ/(ν + μ), the repetitions of the events e, o that took place in [α, β] are simply added: ν = 1 + 1 + 1 + … + 1 and μ = 1 + 1 + 1 + … + 1. Then we observe that each of the events e takes place at a special position (with respect to the position m of the observer) which, as we said, we will initially code here by e(i), so that its distance from m is |m − e(i)|, where i is an index that runs over the events e of [α, β]: the serial number i = 1 gives us the first event e of [α, β], the serial number i = 2 gives the second event e of [α, β], etc. Therefore, in this symbolism, the positions of all natural numbers in general, regardless of whether they are SFN or not, are numbered inside [α, β] in order as κ_1, κ_2, κ_3, …, which represent consecutive natural numbers, and so the serial number i in the code e(i) or o(i) simply locates the events e or o at κ_j = e(i) or κ_j = o(i) respectively. For example, [3, 9] contains 7 natural numbers and only four of them are of SFN type; in order they are of types o, o, e, o with values 3, 5, 6, 7, therefore we conclude that o(1) = κ_1 = 3, o(2) = κ_3 = 5, e(1) = κ_4 = 6, o(3) = κ_5 = 7. Also, for the populations of the events e, o in [α, β] we define the symbols N_e, N_o respectively. So in this example N_e = 1 and N_o = 3 for [3, 9]. The position m can be any natural number from these seven natural numbers of [α, β]. In the case where the position m is an SFN position, e.g. the 5th number (κ_5 = m = 7), then in this special case this position should obviously be excluded from the EX data, and we put N_o = 2 in the sums that we will mention immediately below.

In the 2nd case of the generalization we proved in (2.2.4) that in the previous sums ν = 1 + 1 + 1 + … + 1 and μ = 1 + 1 + 1 + … + 1 each such added unit (+1) must be replaced by the corresponding self-AFD ρ_{m,τ(i)} of the event τ(i) [which is the e(j) for the sum of ν and the o(i) for the sum of μ], which obviously transports to the observer at location m the positional information that the occurrence of the event τ(i) happens at a distance |m − τ(i)| from him. We clarify the Main Proof of (2.2.4). Suppose that [α, β] = [4, 10], with only two events, 6 and 10, of type e, and that the observer is at the position m = 8; as we said, we will symbolize him at this location as Ob(8). A specific event now, e.g. the e type of the special e-SFN with code e(1) = 6, is displayed at a distance |8 − 6| = 2 from him. If the observer Ob(8) at m = 8 also takes the distance into account, he can no longer claim for e(1) that it is simply an event of type e that occurred at an unknown position in [4, 10] and thus add it as a new unit to the calculation of ν. He realizes instead that e(1) at position 6 gives him a probability smaller than 1 that an event of type e may occur at his position m when he throws the specific coin C there. In this sense the event e(1) now behaves as a fractional event at m, where Ob(m) is standing, i.e. an event smaller than 1, or, in other words, a new term in the sum of ν which is smaller than a unit.

The reason is that the observer Ob(8) is forced by the classical definition of probability itself to work as follows: “If I start from my initial position Ob(8) and go to position 6 of e(1), I will pass successively through intermediate positions, which here is obviously only one, my intermediate position 7.” He therefore claims that, taking only one step toward e(1), he does not meet e(1). But continuing in the same direction, he finally finds that he meets this specific code e(1) after 2 consecutive steps. That is, he establishes that he reaches the number-event e(1) in two consecutive attempts of a measurement of the classical theory of probability. Every such measurement gathers additive information; e.g. here, additive to other measurements will be the information on the number-event of the same type, e(2) = 10, in the same interval [4, 10].

Therefore the observer Ob(8) calculates that the code e(1), by itself, appears to him (at m = 8) as a virtual fractional event of the same type, equal to its corresponding probability at m: 1/|8 − 6| = 1/2. The obvious reason is that exclusively, and only, the code number-event e(1) contributes (by itself) to the observer Ob(8) this probability 1/2 as a component of information, which he can next use to calculate the sum [including the ρ for e(2) = 10 of [4, 10]] at m (a little below) of the final probability p_e(m) that the result e appears at m [after throwing the specific coin C of EX at his place m = 8, e.g. in the future]. According to all the above symbolizations of m ∈ [α, β], for the numbered observer’s position m, and for this fractional (or virtual) event as a component probability corresponding to only one code e(j) of [α, β], for the AD he must generally put:

ρ_{m,e(j)} = 1/|e(j) − m|, and similarly for the event o: ρ_{m,o(i)} = 1/|o(i) − m| (5)

Now, for Ob(8), and similarly to the calculation of p(e) in the 1st case of partial information, we can prove that the generalized probability p_e(m) of an event of type e occurring at the position m = 8 (based now on the total information of all the events e(j) that occurred in the space [4, 10]) will be proportional to the sum of all the fractional events ρ_{m,e(j)} (of type e), which together transfer their respective real number-events e(j) to Ob(8). Indeed, each fractional virtual event ρ_{m,e(j)} is the event density of the corresponding real e(j) in the one-dimensional self-space L_{m,e(j)} = |m − e(j)|, because, as we said, according to the definition itself, the probability is always proportional to the appearance density AD [here 1/L_{m,e(j)}] of an event γ, referred to a space of its potential repetitions, in which the density ρ of γ is also defined. And this is well understood from the axioms of the theory of probability.

On the other hand, the equivalent problem with the special coin C that we have described clearly shows that the actions of the virtual magnets below the positions of the SFNs are independent of each other, because each coin tossed over any particular SFN is affected solely by the magnet located below that SFN and not by the infinitely many others. The reason is that this modeling with magnets, including the condition of this independent action of the magnets among themselves, leaves the total action of the magnets to be determined by the distribution of the magnets, i.e. by their positions relative to the observer and not by hidden interactions between them, because by definition this is exactly what the problem we are examining requires, that is, the problem of checking whether the distribution is of “H-T type” with respect to the arbitrary position m of the observer.

Considering in the previous the AD [of (4) in 2.2.4] as a part of the AFD [of relation (3) mentioned in 2.2.2] for any event, e.g. e, and for some position m of Ob(m), we essentially used the definition of probability as the process of counting these events e as an expected partial probability at m, that is, as a fractional event at the position m. And then, based on the axiom of the additive property of the appearances of e (from all the space Ω of events) at m, we arrive at the formation of the AFD of e at m as a sum (Σ) of all its ADs for e at Ob(m), i.e. ρ_e(m) = AFD(e, m) = Σ[AD(e, m)], and similarly for o: ρ_o(m) = AFD(o, m) = Σ[AD(o, m)]. Therefore, similarly to the previous, we will have the next relations (23), (24) of the following Chapter 4, setting:

ρ_e(m) = Σ_j ρ_{m,e(j)} = Σ_j [1/L_{m,e(j)}] = V_e(m) (6)

ρ_o(m) = Σ_i ρ_{m,o(i)} = Σ_i [1/L_{m,o(i)}] = V_o(m) (7)

And then, for the normalization of probabilities according to the previous analysis we get:

p_e(m) = ρ_e(m)/[ρ_e(m) + ρ_o(m)] and p_o(m) = ρ_o(m)/[ρ_e(m) + ρ_o(m)]

In the following (and by analogy with the known potentials of fields in Physics) we agree to name the quantities ρ_e(m), ρ_o(m) of the two competitive AFDs (which we defined at the end of part 2.2.2) the Potential of the Events e, o at the position m, respectively.

Example: To show the additive property of the density of events from a different point of view we will give an example. Suppose we throw a coin in three experiments 5, 10 and 21 times respectively, with corresponding results for H, T: (2, 3), (7, 3), (9, 12). We observe that the densities of events in the three experiments are proportional to the numbers 2 + 3 = 5, 7 + 3 = 10 and 9 + 12 = 21. This means that the corresponding 18 + 18 = 36 ADs for H, T will be: ρ_{m,H(j)} = 5 for j = 1 to 2, = 10 for j = 3 to 9, = 21 for j = 10 to 18 for H, and ρ_{m,T(i)} = 5 for i = 1 to 3, = 10 for i = 4 to 6, = 21 for i = 7 to 18 for T, in the three experiments correspondingly. So if we place these 18 + 18 ADs of H, T on an axis with the observer at a position m, then obviously the 18 + 18 distances from the position m will be, respectively, L_{m,H(j)} = 1/ρ_{m,H(j)}, L_{m,T(i)} = 1/ρ_{m,T(i)}. And applying the relations (6), (7) and the normalization that we quote after them, we find the correct probabilities at Ob(m): p_H(m) = 269/566 ≈ 0.475 ≈ 0.5 and p_T(m) = 297/566 ≈ 0.524 ≈ 0.5.

We observe that the distribution of the H, T events here differs from the H-T distribution, where p_H(m) = p_T(m) = 0.5 at the position of Ob(m). This verifies the previous analysis, because according to it we expected the two inequalities above, since we made the choices of the three distances 1/5, 1/10, 1/21 without any special care directed toward the special target H-T. And indeed that is how the data of our Example were chosen: randomly. But we also see that we have an almost H-T distribution, due to the random but finite number of selections. The small differences (about 0.025) show the different effects of the distances themselves with respect to Ob(m). This is seen more clearly if we choose equal distances, e.g. 5, 5, 5, and therefore equal effects, so that the distribution now becomes the ideal H-T one; the small differences will now be 0. Such an H-T distribution [with respect to Ob(m)] is also what we expect if we randomly choose a very large number of distance lengths (instead of the 3 random ones here), so that their multitude tends to infinity, while the equality between the multitudes of the results H, T of the tosses is ensured by the “ideality of the coin”, as with the coin of the above Example, something that here by definition is born exclusively from the equality between the multitudes of events (18 + 18). This special definition (through the equality of multitudes between only two competitive events) of an “ideal coin” is unique for any case of finite data, as here (18 + 18).
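
The numbers quoted in the Example can be reproduced with a few lines of Python; the following is only a verification sketch of relations (6), (7) and the normalization for the data (2, 3), (7, 3), (9, 12).

# Three experiments: (total tosses, #H, #T); every toss of an experiment gets
# an AD equal to that experiment's total number of tosses, per the Example.
experiments = [(5, 2, 3), (10, 7, 3), (21, 9, 12)]

V_H = sum(tosses * h for tosses, h, _ in experiments)   # sum of the 18 ADs of H
V_T = sum(tosses * t for tosses, _, t in experiments)   # sum of the 18 ADs of T
p_H, p_T = V_H / (V_H + V_T), V_T / (V_H + V_T)

print(V_H, V_T)                        # 269 297
print(round(p_H, 3), round(p_T, 3))    # 0.475 0.525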

2.3.3) Application: And so finally we can calculate, and then analyze, these two probabilities for the case of the observer Ob(8) at the position m = 8 of the range [α, β] = [4, 10], with codes e(1) = 6, e(2) = 10, o(1) = 5, o(2) = 7. Thus, applying the above, we easily find:

ρ_e(8) = ρ_{8,e(1)} + ρ_{8,e(2)}, or ρ_e(8) = 1/|8 − 6| + 1/|8 − 10| = 1 (8)

And similarly ρ_o(8) = ρ_{8,o(1)} + ρ_{8,o(2)}

And therefore

ρ_o(8) = 1/|8 − 5| + 1/|8 − 7| = 1/3 + 1/1 = 4/3 (9)

From these we conclude:

p_e(8) = ρ_e(8)/[ρ_e(8) + ρ_o(8)] = 1/[1 + 4/3] = 3/7 (10)

p_o(8) = ρ_o(8)/[ρ_e(8) + ρ_o(8)] = 4/7 (11)

This is an application of the above and of the Main Proposition, useful below. First we verify that p_e(8) + p_o(8) = 1. We earlier called the two quantities V_e(κ_5) = V_e(8) = ρ_e(8) and V_o(8) = ρ_o(8) the Potentials of the Events e, o at the observer’s position m = 8. Additionally we observe that in this simple example they are not equal, and for this reason p_e(8) ≠ p_o(8). Also, for this reason, if in the interval [4, 10] we substitute the event e by the electric charge +1 nC and the event o by the electric charge −1 nC, we conclude that the total electrical potential at the position m = 8 is not zero, as would happen e.g. in the special case where p_e(m) = p_o(m). Another observation here is that, although the average distances of the number 8 from the points of the events e and o respectively are both equal to 2 [because (2 + 2)/2 = (3 + 1)/2], in the 1st situation p_e(8) is smaller than p_o(8) of the 2nd situation. The reason must be sought in the entropy: the 1st situation, with the “2, 2” distances of the distribution, is more symmetrical than the 2nd situation with the “3, 1” distances, that is, the entropy of the 1st, more symmetrical situation is smaller than the entropy of the 2nd situation, i.e. ΔS_1 < ΔS_2. But it is known [relation (2) of 2.2.1] that the entropy is the hidden (not yet measured) average information. So, after the measurement of the entropy, the receiver Ob(8) gains a smaller average information, or entropy (for the e distribution relative to his position m = 8), in the 1st situation than the entropy of the 2nd situation. Therefore the observer Ob(8), by gathering these two values ΔS_1, ΔS_2 of information, causes the known collapse of this entropy (in his system, making them zero, as in quantum mechanics and elsewhere) just after his two pairs of measurements on e(1) = 6, e(2) = 10 and o(1) = 5, o(2) = 7, concluding that the event e of the 1st situation is less probable than the other event o of the 2nd situation, p_e(8) < p_o(8), because these two values of the information which he has just gained at m = 8 will be:

p_e(8) = ΔS_1 = (3/7) log_2(7/3) Bits (12)

p_o(8) = ΔS_2 = (4/7) log_2(7/4) Bits (13)
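
A minimal Python check (a sketch of ours, not part of the proof) reproduces the potentials, the probabilities (8)-(11) and the two information terms of (12), (13) for the Application 2.3.3:

from math import log2

m, e_codes, o_codes = 8, [6, 10], [5, 7]

V_e = sum(1 / abs(m - x) for x in e_codes)   # 1/2 + 1/2 = 1
V_o = sum(1 / abs(m - x) for x in o_codes)   # 1/3 + 1   = 4/3
p_e = V_e / (V_e + V_o)                      # 3/7
p_o = V_o / (V_e + V_o)                      # 4/7

print(V_e, V_o, p_e, p_o)
print(p_e * log2(1 / p_e), "bits")           # (3/7)*log2(7/3) ~ 0.524
print(p_o * log2(1 / p_o), "bits")           # (4/7)*log2(7/4) ~ 0.461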

3. Financial and Other Applications

In this expansion we will suggest applying the method only to a finite number of data, with a partial (not absolute) action of the values of these n dimensions of the space Ω; therefore this application will be a new, approximate method of forecasting, in contrast to the application in the main article on the infinite SFNs, which, according to the theory of probability, includes the absolute action of the distances of the SFNs from Ob(m) up to infinity and therefore gives an absolutely sure result with probability 1 (i.e. 100%). (Contact: starsee@outlook.com.gr.)

Similarly to the box we mentioned in the Introduction, we will refer here to a multi-dimensional event space (with Euclidean properties, as we defined it before). Suppose a space of events Ω = {ω_1, ω_2, …, ω_M} with a number n of dimensions and volume |Ω| is divided into a number M of distinct, elementary, equal regions of n-cubes ω_i, within each of which only one of the competitive events of the set G = {e_1, e_2, …, e_k} can occur, as we said before in the “General Definitions” (part 2.2.2). The multitude n of dimensions of this space corresponds to n economic or other physical quantities which, let us suppose, have been statistically proven to affect the distribution of G inside the space Ω. E.g. the set G could include 10 regions of possible percentage changes of the value of a product P1 relative to some previous value at a defined time distance Δt in the stock market, and Ω could have three dimensions corresponding a) to the value and b) to the volume of sales of P1, both referring to the past at a distance Δt before, and c) to some Moving Average of this value of the product P1 related to this time distance Δt. Generalizing all the above, the AFD ρ_{e_λ} of the probability (under forecast) of any event e_λ of the set G at some observer Ob(m), sitting at the position ω_m of Ω (according to all the above in the General Definitions), will now be generalized as a function ρ_{m,e_λ} of ω_m as follows:

ρ_{e_λ}(m) = Σ_{j=1}^{N_λ} ρ_{m,e_λ(j)} = Σ_{j=1}^{N_λ} 1/|Ω_{m,e(j)}| = V_{e_λ}(m) (14)

where |Ω_{m,e(j)}| is the volume of the self-area Ω_{m,e(j)}, with a number N_{mj} of n-cubes ω_j, defined before in part 2.2.3 of the General Definitions. So, similarly to the one-dimensional problem of relations (23), (24) in Chapter 4 below, for the forecast at the position ω_m, where obviously we do not know what financial or other type of event e_λ is going to happen, we arrive at the probability:

p_{e_λ}(m) = V_{e_λ}(m) / Σ_{i=1}^{k} V_{e_i}(m) (15)
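
A hedged Python sketch of the forecasting rule (14), (15) follows; all the data, dimension names and event labels are hypothetical, and the volume of the n-ball of radius R = |ω_m − ω_j| is used as the self-area |Ω_{m,e(j)}| of part 2.2.3.

import math

def ball_volume(r: float, n: int) -> float:
    """Volume of an n-ball of radius r."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

def forecast(history, m, n_dims):
    """history: list of (coordinates, event_label); m: the forecast position omega_m."""
    potentials = {}
    for point, label in history:
        r = math.dist(m, point)
        if r == 0:                      # a result already recorded exactly at m is excluded
            continue
        potentials[label] = potentials.get(label, 0.0) + 1.0 / ball_volume(r, n_dims)
    total = sum(potentials.values())    # normalization as in relation (15)
    return {label: v / total for label, v in potentials.items()}

# hypothetical 3-D example: (price, sales volume, moving average) -> observed move
history = [((1.0, 2.0, 1.5), "up"), ((1.1, 2.1, 1.6), "up"),
           ((4.0, 0.5, 3.0), "down"), ((3.8, 0.4, 2.9), "down")]
print(forecast(history, m=(1.2, 1.9, 1.4), n_dims=3))   # "up" should dominate here

In this toy data set the forecast position lies near the two “up” observations, so the potential V_up(m) dominates after normalization; with real market data the history list and the choice of the n quantities would of course have to be established statistically, as noted above.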

4. Potential of Events

An observer counts the glowing meteors in an area of the celestial sphere from a position m on Earth for the duration of one hour and finds that on average 4 meteors fall per minute. If this is due to a specific phenomenon that forces this rate to depend on time and place, then it is absolutely logical for the observer to presume that in the next minute the most probable number he is going to count is 4. But if over the last 15 minutes the average was 8 per minute, and there is a dependence of the rate on time and place (on the observer’s position), then it is also logical for him to presume that in the next minute he should expect, on average, more than 4 and fewer than 8 events. So this observer is tempted to look for a mathematical function expressing the dependence of this rate on time for this celestial phenomenon at this specific spacetime position.

We recall here the “potential of events” that was defined at the end of 2.3.2. We will start from a case where the events are two, e = +1, o = −1, and are realized at the positions of SFNs. The SFNs (as, generally, every number) have various properties, such as a) the e, o and b) their numerical values, which from now on we will symbolize by μ_ν. We can imagine next two distinct events τ_n ∈ {e, o} (or something more general) which were repeated over a finite multitude N of positions of a straight line x (e.g. over the SFNs) and so are distributed on the distinct positions n = n_1, n_2, …, n_N (which are not now defined as successive) of an EX experiment (that counts by observation), and which are all enclosed in an area AB = Δx of x. These positions n are not required to be successive. We will also symbolize any two successive positions as κ, κ + 1, with κ ∈ N. Let m be a specific position on the line x for which we do not know which of the events e, o corresponds to it. The m could also be one of the additional positions n = n_s, e.g. n = n_{N+1}, where the event τ_n has already been realized but happens to be unknown to us, or, in the general case, it is simply another position of a natural number at which the event τ_n has not yet been measured by the EX experiment.

Let us consider that the origin x = 0 is at the position A. An observer Ob traces all the positions of the area AB = Δx and tries to gather information from the events, of multitude N, in order to calculate, based on this information, the probabilities p_e(m), p_o(m) at the position m, as we have proved through the relations (6), (7) in part 2.3.2 of Chapter 2. How will this observer Ob now work? His initial thought could be to estimate these values as p_e(m) = N_e/N, p_o(m) = N_o/N, where the values N_e, N_o are the multitudes of appearances of e, o respectively on Δx. But in that way Ob ignores the position of appearance of every event, which obviously constitutes very important information. Let M be the midpoint of AB = Δx and let the position m belong, e.g., to the area MB = Δx/2, where, e.g., e happens to come up more times than the event o in comparison with the other area AM. Then it is reasonable for the observer Ob(m) to think of correcting the previous values with new ones as follows:

p_e(x_m) = [ N_e(MB)/|m − 3Δx/4| + N_e(MA)/|m − Δx/4| ] / D (16)

p_o(x_m) = [ N_o(MB)/|m − 3Δx/4| + N_o(MA)/|m − Δx/4| ] / D (17)

where

D = N_e(MB)/|m − 3Δx/4| + N_e(MA)/|m − Δx/4| + N_o(MB)/|m − 3Δx/4| + N_o(MA)/|m − Δx/4| (18)

The observer Ob considered that the events e, o of the areas AM, MB are concentrated over the centers of AM, MB respectively. The value D serves the known requirement of normalization: p_e(m) + p_o(m) = 1. And N_e(MB), N_e(MA) are obviously the populations of the event e in the intervals MB and MA, with respective centers 3Δx/4, Δx/4, and similarly for the symbols of the event o. But again Ob does not make complete use of the total information of the distribution. So how far should he continue subdividing the interval AB = Δx? Into 4 parts instead of 2, into 8, or more? The method of utilizing all of the information demands going to infinity. But because in our specific case the positions at which the events are realized are discrete, this can happen more simply. Continuing this process we can see that we arrive at the following relations (23), (24), which were also referred to in Chapter 2 before.
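
The convergence of this refinement toward the relations (23), (24) can be illustrated numerically. In the Python sketch below (our illustration, reusing the [4, 10] example of 2.3.3 with m = 8), the interval is split into 2, 4, 16, … equal parts, the events of each part are lumped at its center as in (16)-(18), and the resulting p_e(m) approaches, not monotonically, the limit 3/7 given by (23).

def p_e_lumped(e_pos, o_pos, lo, hi, m, parts):
    width = (hi - lo) / parts
    centers = [lo + (k + 0.5) * width for k in range(parts)]
    def potential(positions):
        v = 0.0
        for x in positions:
            k = min(int((x - lo) / width), parts - 1)   # index of the part containing x
            v += 1.0 / abs(m - centers[k])              # the part's events act from its center
        return v
    V_e, V_o = potential(e_pos), potential(o_pos)
    return V_e / (V_e + V_o)

e_pos, o_pos, m = [6, 10], [5, 7], 8
for parts in (2, 4, 16, 256, 4096):
    print(parts, round(p_e_lumped(e_pos, o_pos, 4, 10, m, parts), 4))
print("limit of (23):", 3 / 7)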

Based on the definition of the AD of an event in (4) of 2.2.4 of Chapter 2 we have ρ_{m,e(j)} = 1/Δx, because it refers to the enumeration of only one event at distance Δx from Ob(m), who is sitting at the position m. In part 2.3.2 of Chapter 2 we defined Δx = L_{m,e(j)}, and according to the definition of the AD of the event e we will have ρ_{m,e(j)} = 1/L_{m,e(j)} with L_{m,e(j)} = |m − e(j)|. However, because (as we have already said) this density is proportional to the partial (component) probability p_{m,e(j)}, we must write:

p_{m,e(j)} ~ ρ_{m,e(j)} = 1/L_{m,e(j)} (19)

where “~” is the known symbol of proportionality. And we now find ourselves at a crucial point: “When a value v of a magnitude A is proportional to each value of a set of values of the same magnitude A, then the value v will also be proportional to the sum of the values of that set.” This proposition [as we said, with the relations (6), (7)] also necessarily defines the AFD of the event e at the position m due to all the occurred events of type e in an interval [α, β], that is:

ρ_e(m) = Σ_{j=1}^{N_e} ρ_{m,e(j)} = Σ_{j=1}^{N_e} 1/L_{m,e(j)} = V_e(m) (20)

Let us consider as an example the distribution of only two events e on the axis Ax(N), selecting m = 0, e(1) = 2, e(2) = 3. Based on the relations (19), (20) it comes out that ρ_e(0) = 5/6. Let us continue by considering also a second example, of a different distribution for e with 2 new positions: m = 0, e(1) = 1, e(2) = 3, where the result is ρ_e(0) = 4/3. If now we use their average density over the common area of these two examples we will find ρ_e(0) = N_e/Δx = 2/3. We observe that the definitions (19), (20) give us the specific total information of the distribution, because they utilize the density in relation to the distance, as they should. I.e. it is 5/6 < 4/3 because in the second case we moved one of the two events e closer to the fixed receiver (observer) Ob(0) at m = 0. However, this distinction would be lost in the definition based on the average density in the area [0, 3], where we saw that it results in ρ_e(0) = 2/[3 − 0] = 2/3, which represents these two cases together (corresponding to the results 5/6 and 4/3) and which does not include the total information of the distribution of the event e relative to Ob(0).
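
The three densities of this example can be checked directly; the short Python sketch below only re-evaluates relation (20) for the two placements and the average density.

def V(m, positions):
    return sum(1 / abs(m - x) for x in positions)

print(V(0, [2, 3]))        # 1/2 + 1/3 = 0.8333... (= 5/6)
print(V(0, [1, 3]))        # 1/1 + 1/3 = 1.3333... (= 4/3)
print(2 / (3 - 0))         # average density N_e / Delta_x = 2/3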

Also, the transport of the information to Ob(m) is equivalent to the transport of a messenger from the source, at the position e(j), to the receiver Ob(m) at the position m. Due to the importance of this subject we are going to analyze it a little further. Based on the definition of probability, in Chapter 2 we proved that the AD of an event e occurring at some position on the Ax(N) axis, solely due to the measurement by EX of the same event e at another position n = e(j) on Ax(N), will be inversely proportional to the distance Δx = L_{m,e(j)} between them. We will analyze this basic proposition here again, but a little differently and briefly.

Let the messenger begin from some initial position n = n_s = e(j) where, e.g., the event e has already occurred, and then let him move toward the position m > n_s. Based on the definition of the probability of appearance of an event, he is obliged to claim that “at his new successive position κ = 1 + n_s the event of type e has probability 1/1, an information which (from the above supposition) comes exclusively from the selected source n_s”. The reason, as we said in 2.2.4 of Chapter 2, is that the event e occurred only one time in the step [n_s, 1 + n_s] with length |(1 + n_s) − n_s| = 1, in regard (as we have said many times) exclusively to the position n_s. Obviously, the (κ + i) need not be identified every time with some of the values of n, because, in contrast to (κ + i), i ∈ N, where we consider that for i = 1, 2, 3, … it takes successive values of natural numbers, the natural number n concerns only the positions where the events {e, o} have already occurred, and, as we said, this n is not obliged to take successive values. Continuing, the messenger, as he passes over the next distinct position κ = 2 + n_s, will claim that “at his now new position, relative to e (which again is due exclusively to the executed event e at n_s), the probability of appearance is equal to 1/2, because e appeared again 1 time (at the source e_{n_s}), but now inside this new interval [n_s, 2 + n_s] with length |(2 + n_s) − n_s| = 2”, and so on. So, finally reaching the final position of the receiver Ob(m), the messenger [of the event (n_s, e)] will claim that “the probability of appearance of e at this special position m of the observer, which is due only to the event e of the source n_s, will obviously be ρ_{m,e(j)} = 1/|m − n_s|, because the event e that is due to what happened exclusively at the position n_s occurred again one time, but now inside the special length |m − n_s|”. Assuming that every event τ_n can depend exclusively on the corresponding position of its occurrence, that is to say on n, this calculation will take into consideration only the positions n, as we have already started to do. Therefore, summing all these component probabilities p_{m,e(j)} = ρ_{m,e(j)} for all the positions n = e(j) of events where e appeared, it follows that all the messengers together, who are equivalent to an observer Ob(m), find the probability of e at the position m due to all the positions n = e(j), provided of course that this total result is also normalized. Similarly the observer Ob(m), locked at the position m, will again use his virtual messengers for the values n = o(i) that concern the other event o. So the AD is:

$$\rho_{m,e(j)} = 1/|m - e(j)| \qquad (21)$$

And similarly for the other event o we get:

$$\rho_{m,o(i)} = 1/|m - o(i)| \qquad (22)$$

Considering finally also the normalization, as we described in Chapter 2, and then summing over all values of j, i, we easily end up with the relations:

$$p_e(m) = \frac{1}{D}\sum_{j=1}^{N_e}\frac{1}{|m-e(j)|},\qquad p_o(m) = \frac{1}{D}\sum_{i=1}^{N_o}\frac{1}{|m-o(i)|} \qquad (23)$$

$$D = \sum_{j=1}^{N_e}\frac{1}{|m-e(j)|} + \sum_{i=1}^{N_o}\frac{1}{|m-o(i)|}$$

As we said in Chapter 2, D = |Ω| is defined for the normalization, and the natural numbers e(j), o(i) correspond to the events e, o with serial numbers j, i in [α, β]. We define the two quantities:

$$V_e(m) = \sum_{j=1}^{N_e}\frac{1}{|m-e(j)|},\qquad V_o(m) = \sum_{i=1}^{N_o}\frac{1}{|m-o(i)|} \qquad (24)$$

These two quantities we named in Chapter 2, at the end of 2.3.2 (and used in Application 2.3.3), respectively the "Potential of Events e, o" at the position m.
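A small numerical sketch of the relations (23), (24) may help; the function names are ours and the event positions are merely illustrative data:

```python
# Sketch of relations (23), (24): potentials as sums of reciprocal distances,
# probabilities as the potentials normalized by D = V_e(m) + V_o(m).
def potential(m, positions):
    return sum(1.0 / abs(m - x) for x in positions if x != m)

def probabilities(m, e_positions, o_positions):
    V_e = potential(m, e_positions)          # relation (24)
    V_o = potential(m, o_positions)
    D = V_e + V_o                            # normalization constant of relation (23)
    return V_e / D, V_o / D

# Illustrative data: the first e-SFNs and o-SFNs, observer at the non-SFN position m = 4.
print(probabilities(4, [6, 10, 14, 15], [2, 3, 5, 7, 11, 13]))
```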

If the relation (23) gives equal probabilities at a random position of the straight line x, then the distribution of the events e, o will by definition be Heads-Tails (H-T). The general definition of the probability in relation (23) at a random position is a representative property of the distribution, and according to our definitions (in part 6.3 of Chapter 6 below) this property will belong to the CI of the set of this distribution. In the case where V(m) = V_e(m) − V_o(m) = 0 we see that the «informational trace on the position m» (which we defined to stem from the projection of the total information of the distribution on the position m) will be "of H-T type", that is, we will have "equality of the two probabilities at position m". This is the equating of the two probabilities at some position m that is referred to in the Main Proposition in part 2.3.2 of Chapter 2. We work here by definition with Euclidean spaces [4] [5] (as in the classical theory of statistical variables of probability, with the known probability density in these Euclidean spaces). The reason is that the axis Ax(N) has at every position, as its basis of vectors, only a constant vector with measure 1 and constant direction. [We point out that if we defined the length of the self-area of m, e(j) as L_{m,e(j)}/2 instead of the previous L_{m,e(j)}, i.e. considering that the "information source" e(j) "acts" from the middle of the area of its two arguments m, e(j), then, because the division will be made by a new normalization constant D′, the relation (23) will remain the same, i.e. unchanged.]

(*) In the next Chapter 5 we will use an observer at a position μ ∈ N that corresponds to the position m of Ob(m), using all of the above in the normal way.

5. The Proof of H-T Distribution of e, o on SFNs

We will prove here the proposition (1) stated in the Introduction, which (as we said there) is a sufficient proposition for the validity of RH. We start by recalling the "Main Proposition" in part 2.3.2 and the "Main Proof" in part 2.2.4 of Chapter 2, as well as the main relations (23), (24) of Chapter 4, pointing out that the position m of the observer here can be the position of any natural number μ (and, as we will see at the end of this Chapter, it can also be the position of an SFN) on its axis Ax(N), in an effort to prove (according to our method here) that the distribution of all the infinitely many SFNs appears to Ob(m) as "of H-T type" (as we called it in 2.3.1) from every μ = m in N. That is, the two probabilities of e, o are equal at every position μ ∈ N. So the distribution of e, o will by definition be of H-T type on the whole Ax(N).

From the theory of Riemann’s ζ function we know the proven relation:

$$\sum_{\nu=1}^{\infty}\frac{\mu(\nu)}{\nu} = 0 \qquad (25)$$

where μ(ν) is the known Möbius function [1]:

μ(1) = 1, μ(2) = −1, μ(3) = −1, μ(4) = 0, …

and where ν ∈ N.

Here, ignoring the non-SFN natural numbers, we will modify the symbols and from now on agree to represent the above relation as:

$$\sum_{\nu=1}^{\infty}\frac{\lambda'_{\nu}}{\mu'_{\nu}} + \sum_{\nu=1}^{\infty}\frac{\lambda''_{\nu}}{\mu''_{\nu}} = -1 \qquad (26)$$

where μ'_ν represent the numerical values of the successive terms of the sequence SFN(e): μ'_1 = 6, μ'_2 = 10, …, with λ'_ν = +1, ν ∈ N*, and where μ''_ν are the successive terms of the sequence SFN(o): μ''_1 = 2, μ''_2 = 3, …, with λ''_ν = −1, ν ∈ N*.
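A small numerical sketch of the relations (25), (26) (the sieve below is a standard linear Möbius sieve, supplied by us; it is not part of the paper):

```python
# Classify the naturals up to N into e-SFN (μ(n) = +1), o-SFN (μ(n) = -1) and
# non-SFN (μ(n) = 0) with a linear Möbius sieve, and watch the partial sum of
# relation (26) drift toward -1 (the series converges very slowly, so this is indicative only).
def mobius_sieve(N):
    mu = [0, 1] + [1] * (N - 1)          # mu[0] unused, mu[1] = 1
    is_composite = [False] * (N + 1)
    primes = []
    for i in range(2, N + 1):
        if not is_composite[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_composite[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 200000
mu = mobius_sieve(N)
print(sum(mu[n] / n for n in range(2, N + 1)))   # drifts toward -1, as relation (26) states
```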

However, according to the definitions in Chapter 4 [relation (24)], we can observe that if we include in the relation (26) the pseudo-SFN μ_0 = 1 with λ_0 = +1, then the new relation that comes up tells us specifically that the sum of the potentials of the events e = +1, o = −1 at the position μ = m = 0 of the axis Ax(SFN) is zero: V_e*(0) + V_o*(0) = 0.

The total information that the observer gathers at zero, from the distribution of all the events e, o over the positions μ_ν of Ax(SFN), shows that at μ = m = 0 the two events e, o are equally probable, that is: p_e(0) = |V_e*(0)| / [|V_e*(0)| + |V_o*(0)|] = 1/2 and p_o(0) = 1/2. That is to say, the point 0 is a "position of H-T type". The observer at the position μ = 0 (as at any other position μ) is indifferent to the way in which the positions μ_ν were selected on Ax(N), because this way of selection does not itself enter the calculation of p_e(0), p_o(0); only the positions μ_ν enter, which follows here from applying Chapter 4 to (25), (26). What enters directly into these calculations is the information of the distribution of e, o on Ax(N) relative to the position μ, so that p_e(0), p_o(0) are calculated on the basis of the typical definition of probability. The way of selection of the μ_ν rests exclusively on the fact that the events e, o are defined only at the positions μ_ν. In other words, this way of calculation is indifferent to the method of selection of the events, precisely because in this way it can calculate exactly the requested probabilities for any other event we might define on whatever other subset A of the set N. For the two absolute potentials |V_e*(0)|, |V_o*(0)| (each of which is formed by the sum of all the corresponding probability components) we observe that, on the basis of (25), (26), each one of them takes an infinite value; however, their difference is exactly zero. That is to say, the infinite number of counts of the observer eventually gives equal probabilities, exactly as the classical theory of probabilities demands for the values of probabilities to be absolutely accurate when the counts (trials) are infinite. This is not a coincidence; it is a remarkable phenomenon of numbers.

Continuing, we are interested in examining whether, due to the total information that the observer collects from the total distribution Distr({e, o}/N_SF, f_1) at every position μ of the axis Ax(N), the position μ is "of H-T type". In other words we are interested in the relation (1) of the Introduction, that is, whether e, o are equally probable at every position μ ∈ N. If the answer to this question is positive, then according to the analysis we made in Chapter 4 we will have:

Type[Distr({e, o}/N_SF, f_1)] = H-T, that is, we will have proven what is required for the validity of the Riemann Hypothesis, because as we said this is the relation (1). For this, we will try to compare the potentials of e, o at an arbitrary position of a natural number μ (initially excluding the SFNs; at the end of this Chapter we will examine the case μ ∈ SFNs), due to the total information which is accumulated at this position μ from the total distribution of events τ ∈ {e, o} over all the infinitely many μ_ν of Ax(N):

$$V_e(\mu) = \sum_{\nu=1}^{\infty}\frac{\lambda'_{\nu}}{|\mu'_{\nu}-\mu|},\qquad V_o(\mu) = \sum_{\nu=1}^{\infty}\frac{\lambda''_{\nu}}{|\mu''_{\nu}-\mu|}$$

Obviously the μ in the sums above is selected each time so that the denominators are not zero. Finally, for these potentials and their related probabilities etc., we will agree to denote by $\bar\mu'_\nu$, $\bar\mu''_\nu$ respectively the positions of the e-, o-SFNs that are located to the left of the position μ on Ax(N), that is to say $\bar\mu'_\nu < \mu$, $\bar\mu''_\nu < \mu$; and likewise by $\bar{\bar\mu}'_\nu$, $\bar{\bar\mu}''_\nu$ respectively the positions of the e-, o-SFNs located to the right of the position μ, that is to say $\bar{\bar\mu}'_\nu > \mu$, $\bar{\bar\mu}''_\nu > \mu$. Based on these we will have, for the uses of (26), that $|\bar\mu'_\nu-\mu| = -(\bar\mu'_\nu-\mu)$, $|\bar\mu''_\nu-\mu| = -(\bar\mu''_\nu-\mu)$, and similarly $|\bar{\bar\mu}'_\nu-\mu| = +(\bar{\bar\mu}'_\nu-\mu)$, $|\bar{\bar\mu}''_\nu-\mu| = +(\bar{\bar\mu}''_\nu-\mu)$.
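The following sketch (our own illustration, not the paper's computation) evaluates truncated versions of these two potentials at a fixed non-SFN position μ; the two sums grow without bound while their ratio slowly drifts toward 1, which is the behaviour the proof below establishes in the limit:

```python
# Truncated potentials V_e(μ), V_o(μ) over the SFNs up to a bound; illustration only.
def sfn_sign(n):
    """+1 for an e-SFN, -1 for an o-SFN, 0 if n is not square-free (trial division)."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0                 # a squared prime factor: not square-free
            count += 1
        d += 1
    if n > 1:
        count += 1
    return 1 if count % 2 == 0 else -1

mu_pos = 4                               # a non-square-free observer position
V_e = V_o = 0.0
for n in range(2, 200001):
    s = sfn_sign(n)
    if s == 0:
        continue
    term = 1.0 / abs(n - mu_pos)
    if s == 1:
        V_e += term
    else:
        V_o += term
print(V_e, V_o, V_o / V_e)               # both grow like log N; the ratio slowly approaches 1
```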

That is, we adopt the use of single and double accents, as well as single and double bars over the symbols, to refer correspondingly to the e-, o-SFNs that lie to the left or to the right of the position μ, respectively. With the help of the Taylor expansion we can easily get these four results, noting however that (at first glance) only those for which the series converges are valid, that is, if |x| < 1:

$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + x^4 + \cdots \qquad (27)$$

For the rest of the cases there will be an error, but as we will see this error will be a finite amount for a finite position μ, so this finite amount of error can be ignored in relation to the other, infinite quantities. Let us first consider that μ is a finite position of a natural number (including also the natural μ = 0) on the axis of natural numbers Ax(N). We will also set λ' = λ'_ν = 1, λ'' = λ''_ν = −1, ν ∈ N*.

Using (27) with $\bar{\bar y}'_\nu = \mu/\bar{\bar\mu}'_\nu < 1$ for the events e at the positions $\bar{\bar\mu}'_\nu$, which are to the right of μ, we will have:

$$\lambda'\Sigma'_1 = \sum_{\nu=N_{eR}}^{\infty}\frac{\lambda'_\nu}{|\bar{\bar\mu}'_\nu-\mu|} = \lambda'\sum_{\nu=N_{eR}}^{\infty}\frac{1}{\bar{\bar\mu}'_\nu}\,\frac{1}{1-\bar{\bar y}'_\nu} = \lambda'\sum_{\nu=N_{eR}}^{\infty}\frac{1}{\bar{\bar\mu}'_\nu}\sum_{s=0}^{\infty}(\bar{\bar y}'_\nu)^s = \lambda'\sum_{\nu=N_{eR}}^{\infty}\frac{1}{\bar{\bar\mu}'_\nu}\sum_{s=0}^{\infty}\Big(\frac{\mu}{\bar{\bar\mu}'_\nu}\Big)^s = \lambda'\sum_{s=0}^{\infty}\sum_{\nu=N_{eR}}^{\infty}\frac{\mu^s}{(\bar{\bar\mu}'_\nu)^{s+1}} \qquad (28)$$

The relation for λ''Σ''_1 with double accents, for the events of type o which are again to the right of μ, will otherwise be the same, putting N_{oR} instead of N_{eR} and λ'' instead of λ'. The natural numbers N_{eR}, N_{oR} correspond to the e-, o-SFNs which are respectively the immediately larger such numbers than μ. However, if we also try to take the other piece Σ'_0 in the first equation of (28), which refers to the left of μ, then, because for Σ'_0 we have $\bar y'_\nu = \mu/\bar\mu'_\nu > 1$, it follows that we cannot expand Σ'_0 according to (27); but, denoting by N_{eL} the immediately smaller e-SFN than μ, we can simply write Σ'_0 in its closed form:

$$\lambda'\Sigma'_0 = \sum_{\nu=1}^{N_{eL}}\frac{\lambda'_\nu}{|\bar\mu'_\nu-\mu|} = -\lambda'\sum_{\nu=1}^{N_{eL}}\frac{1}{\bar\mu'_\nu}\,\frac{1}{1-\bar y'_\nu} \qquad (29)$$

However, if we bypass this restriction and expand the series as in the second member of the last equality of (28) [of the five equalities of (28)], then we will introduce an error. That is to say, instead of λ'Σ'_0 we would expand, as in (28), the corresponding wrong expression:

$$\lambda'\Sigma'_2 = \lambda'\sum_{s=0}^{\infty}\sum_{\nu=1}^{N_{eL}}\frac{\mu^s}{(\bar\mu'_\nu)^{s+1}} \qquad (30)$$

Obviously, if there were no such error, Σ'_0 and Σ'_2 would be equal. The important thing here is that, because the position μ is a finite natural number, the two results Σ'_0, Σ'_2 of the relations (29), (30) respectively will also be finite numbers, and being finite they introduce only a finite error derived from the relations (29), (30). Also, obviously: Σ'_1, Σ'_2, Σ'_0, Σ''_1, Σ''_2, Σ''_0 > 0. Therefore for the potentials at the position μ, which correlate with the first two terms of the first member of (26), we will have:

$$V_e(\mu) = \lambda'(\Sigma'_1 + \Sigma'_0) = \lambda'(\Sigma'_1 + \Sigma'_2) - \lambda'(\Sigma'_2 - \Sigma'_0),$$

and

$$V_o(\mu) = \lambda''(\Sigma''_1 + \Sigma''_0) = \lambda''(\Sigma''_1 + \Sigma''_2) - \lambda''(\Sigma''_2 - \Sigma''_0).$$

As we previously said, due to the finite position of μ, the two quantities Σ'_2 − Σ'_0, Σ''_2 − Σ''_0 will be finite. Therefore, whether the other two quantities Σ' = Σ'_1 + Σ'_2, Σ'' = Σ''_1 + Σ''_2 are infinite or finite [as we shall see by proving relation (35) below, they are infinite], the two absolute potentials can be written:

$$|V_e(\mu)| = \Sigma' + C_e,\qquad |V_o(\mu)| = \Sigma'' + C_o.$$

where C_e, C_o are two finite quantities that stem from the finite terms Σ'_2 − Σ'_0, Σ''_2 − Σ''_0. Therefore the absolute potentials, which as we said accumulate the exact statistical information from the distinct actions, without exception, of all of the events e, o of the infinite distribution of SFNs on the position μ, can be written:

$$|V_e(\mu)| = C_e + \sum_{s=0}^{\infty}\sum_{\nu=1}^{\infty}\frac{\mu^s}{(\mu'_\nu)^{s+1}} \qquad (31)$$

$$|V_o(\mu)| = C_o + \sum_{s=0}^{\infty}\sum_{\nu=1}^{\infty}\frac{\mu^s}{(\mu''_\nu)^{s+1}} \qquad (32)$$

In the relations (31), (32) we dropped the bars over the μ'_ν, μ''_ν, because we have now extended the interval over the whole range of the axis of the natural numbers: [0, +∞). Next, if we look at the two absolute potentials of events of the relations (31), (32), we find that each one consists of one "principal part" of zero-order power of μ, that is, with exponent s = 0, together with a series of successive terms for s = 1, 2, …, which correlate with the function ζ(s) of Riemann. That is to say:

$$A(\infty) = |V_e(\mu)| = C_e + \sum_{\nu=1}^{\infty}\frac{1}{\mu'_\nu} + \mu\sum_{\nu=1}^{\infty}\Big(\frac{1}{\mu'_\nu}\Big)^2 + \mu^2\sum_{\nu=1}^{\infty}\Big(\frac{1}{\mu'_\nu}\Big)^3 + \cdots$$

$$B(\infty) = |V_o(\mu)| = C_o + \sum_{\nu=1}^{\infty}\frac{1}{\mu''_\nu} + \mu\sum_{\nu=1}^{\infty}\Big(\frac{1}{\mu''_\nu}\Big)^2 + \mu^2\sum_{\nu=1}^{\infty}\Big(\frac{1}{\mu''_\nu}\Big)^3 + \cdots$$

And by abbreviating the terms of the sums above as μ^s M_τ^{(s)}(N), τ ∈ {e, o}, and replacing the ∞ (symbolically here) with N, we get:

$$A(N) = C_e + M_e^{(0)}(N) + \mu M_e^{(1)}(N) + \mu^2 M_e^{(2)}(N) + \cdots \qquad (33)$$

$$B(N) = C_o + M_o^{(0)}(N) + \mu M_o^{(1)}(N) + \mu^2 M_o^{(2)}(N) + \cdots \qquad (34)$$

The key question now is whether the absolute potentials of (33), (34) tend to become equal as N → ∞. Continuing, let us first recall the known property:

$$\prod_{\nu=1}^{\infty}\big[1 + (q_\nu)^{-1}\big] = \prod_{\nu=1}^{\infty}\frac{1-(q_\nu)^{-2}}{1-(q_\nu)^{-1}} = \prod_{\nu=1}^{\infty}\big[1-(q_\nu)^{-2}\big]\Big/\prod_{\nu=1}^{\infty}\big[1-(q_\nu)^{-1}\big] = \zeta(1)/\zeta(2) = \infty\big/(\pi^2/6) = \infty$$

From this result and the relation (26) we will now prove that the "principal parts" (of zero order with respect to μ) satisfy the relations:

$$\lim_{N\to\infty} M_e^{(0)}(N) = \lim_{N\to\infty} M_o^{(0)}(N) = \infty \qquad (35)$$

For the proof of (35) we observe first that the two terms χ > 0, ψ < 0 in the first member of (26) have λ'_ν = +1, λ''_ν = −1, ν ∈ N*, and they define two new positive terms α = χ > 0, β = −ψ > 0 which correspond to the first two members of the relation (35); obviously, in (26) the terms α, β are equivalently subtracted. If, however, instead of subtracting them we add them, then, based on (25), (26) and the analysis just made, they give the result:

$$\prod_{\nu=1}^{\infty}\big[1+(q_\nu)^{-1}\big] - 1 = \big[\zeta(1)/\zeta(2)\big] - 1,$$

which we saw is infinite. The proof that the first member of this last relation coincides with α + β is obtained immediately by expanding the product $\prod_{\nu=1}^{\infty}[1+(q_\nu)^{-1}]$, whereby all the numerical values μ_ν of the SFNs, together with the unit "1", are produced. On the other hand, these two terms α, β differ by "−1". Therefore, because they are positive and have infinite sum and finite difference "−1", they tend to become two equal and infinitely large positive numbers as the multitude N of the terms of the series α, as well as of β, tends to infinity.

A more detailed proof is the following. Writing the terms α, β as functions of N, based on the previous result [ζ(1)/ζ(2)] − 1 for their sum and on (26) for their difference, we have, for N → ∞, I(N) = α(N) + β(N) → ∞ and J(N) = α(N) − β(N) → −1. Therefore, for every ε > 0, however large, there exist two natural numbers ν_01, ν_02 such that I(N) > 1 + 2ε + (1/ε) for every ν > ν_01 and simultaneously −1 − (1/ε) < J(N) < −1 + (1/ε) for every ν > ν_02. These two relations, however, hold simultaneously for every ν > ν_0 = max(ν_01, ν_02). Therefore, from these inequalities of I(N), J(N), by simply adding member by member the two inequalities with the same direction, we easily get α(N) > ε for every ν > ν_0. Repeating this procedure for I(N), but selecting I(N) > 1 + 2ε + (1/ε) for ν > ν_01 together with −1 + (1/ε) > J(N) > −1 − (1/ε) for ν > ν_02, i.e. the equivalent relation −J(N) → 1, we get β(N) > ε for every ν > ν_0. Therefore, according to the definition of limits, α(∞) = ∞ and β(∞) = ∞. However, we said before that for N → ∞ we have α(N) − β(N) → −1, so finally, for N → ∞, [α(N) − β(N)]/α(N) behaves as −1/α(N), and because we have just shown that α(∞) = ∞, we arrive at the relation 1 − [β(N)/α(N)] → 0. That is to say, α(∞) = β(∞).

Now, based on the definition of limits, the relation α(∞) = β(∞) = ∞ that we just showed states that for every ε, however small a positive real number, there will always be three natural numbers ν_1, ν_2, ν_3 such that for N > ν_1, N > ν_2, N > ν_3 the following hold respectively:

$$M_e^{(0)}(N) > 1/\varepsilon,\qquad M_o^{(0)}(N) > 1/\varepsilon,\qquad \big|\,|M_o^{(0)}(N)/M_e^{(0)}(N)| - 1\,\big| < \varepsilon$$

And so for ν_0 = max(ν_1, ν_2, ν_3) these three relations will hold simultaneously. This composite proposition means that, as N tends to infinity, the percentage difference (and this is important) of the two "principal parts" M_e^{(0)}(N), M_o^{(0)}(N) of the relation (35) becomes smaller and smaller, while simultaneously these two "principal parts" grow without bound. Based on all this, we want to check whether the same holds also for the absolute potentials |V_e(μ)|, |V_o(μ)|.
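A rough numerical illustration of this behaviour (ours, not the paper's; convergence is only logarithmic, so the figures are merely indicative):

```python
# Partial "principal parts" M_e^(0)(x) = Σ 1/μ'_ν and M_o^(0)(x) = Σ 1/μ''_ν over the
# SFNs up to x: both grow without bound, their difference stays near 1 (relation (26)),
# and their percentage difference shrinks, as relation (35) and the argument above state.
def omega_if_squarefree(n):
    """Number of prime factors if n is square-free, else None (trial division)."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return None
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

M_e = M_o = 0.0
for n in range(2, 300001):
    k = omega_if_squarefree(n)
    if k is None:
        continue
    if k % 2 == 0:
        M_e += 1.0 / n
    else:
        M_o += 1.0 / n
print(M_e, M_o, M_o - M_e, M_o / M_e)   # difference near 1, ratio slowly drifting toward 1
```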

What enters the calculation of the probabilities at the finite position μ, in order to check whether this position is "of H-T type", is (as we said in Chapter 4) the limit lim_{N→∞}[|V_o(μ)|/|V_e(μ)|]. So, based on (33), (34), denoting the absolute potentials by A(N) = |V_e(μ)| and B(N) = |V_o(μ)|, we will check whether, for N → ∞, the following holds:

$$B(N)/A(N) \to 1 \qquad (36)$$

For checking (36) it is enough to prove that for N → ∞ the following holds:

$$\frac{|V_o(\mu)|}{|V_e(\mu)|} \to 1 \qquad\text{or}\qquad \frac{|V_o(\mu)|/M_e^{(0)}(N)}{|V_e(\mu)|/M_e^{(0)}(N)} \to \frac{1}{1}$$

The analytical expression 1/1 above, and not simply 1, has the obvious meaning here that the numerator and the denominator must each separately tend to 1. But because we have shown before that for N → ∞ we have M_o^{(0)}(N)/M_e^{(0)}(N) → 1 and M_e^{(0)}(N) → ∞, and on the other hand we said that C_e, C_o are two finite numbers, we conclude that the requested sufficient condition of (36) evolves into the relation:

$$\frac{1 + \big[\Sigma_o/M_e^{(0)}(N)\big]}{1 + \big[\Sigma_e/M_e^{(0)}(N)\big]} \to \frac{1+1}{1+1}$$

where here the analytical expression (1 + 1)/(1 + 1), and not simply 1, has the obvious meaning we explained just before, and where additionally we denote Σ_o = μM_o^{(1)}(N) + μ²M_o^{(2)}(N) + ⋯ and Σ_e = μM_e^{(1)}(N) + μ²M_e^{(2)}(N) + ⋯, using also (33), (34). Therefore, the equivalent sufficient condition for (36) ends up being the relation:

$$\frac{\Sigma_o - \Sigma_e}{M_e^{(0)}(N)} \to 0$$

or in more detail the relation:

$$\frac{\mu\big[M_o^{(1)}(N) - M_e^{(1)}(N)\big] + \mu^2\big[M_o^{(2)}(N) - M_e^{(2)}(N)\big] + \cdots}{M_e^{(0)}(N)} \to 0 \qquad (37)$$

The "→" becomes "=" when N = ∞, and this is made precise through the known method of choosing ε > 0 that we used previously. But there is a known property of the SFNs, which comes directly from the Euler product expansion

$$\zeta(s) = 1\Big/\prod_{\nu=1}^{\infty}\big[1-(q_\nu)^{-s}\big]:$$

$$M_o^{(s)}(\infty) - M_e^{(s)}(\infty) = 1 - \big[1/\zeta(s+1)\big],\qquad s = 1, 2, 3, 4, \ldots \qquad (38)$$

So, setting ε_s(N) = M_o^{(s)}(N) − M_e^{(s)}(N) = 1 − [1/Z(N, s+1)], for N → ∞ the (37) is equivalent to the relation:

$$G(N) = \sum_{s=1}^{\infty}\frac{\mu^s}{M_e^{(0)}(N)}\,\varepsilon_s(N) \to 0 \qquad (39)$$

The function G(N) expands into the infinite terms of (39). Here is the crucial point. In order for the limit of G(N) to be born, the index N must have run to infinity for all the terms of G(N), without exception. But, based on the properties of the function ζ(s) and of (38), the sequence ε_s(∞) is bounded with respect to s, 0 < ε_s(∞) < 1, and furthermore has the limit zero as s → ∞. So the M_e^{(0)}(N) will be complete with respect to N, in the sense that N has arrived at infinity before the index s began to run. We could justify this more analytically as follows: "For the index s to run, the relations that the index s describes must have their required completed structure. And this specific completed structure is achieved with the necessary precondition that all the SFNs of the axis Ax(N) participate in this structure. That is to say, the structure of these relations is complete when the other index N encloses all its infinite values in these relations of reference of the index s." It is a deterministic, therefore logical, order of running of the indexes. Therefore, in the next calculations we must take into consideration the observation that (39) is completed by first running the index N → ∞ and then running the index s → ∞. That is to say, we consider, for every s ∈ N, as is obviously needed, that M_τ^{(s)}(N) → M_τ^{(s)}(∞), τ ∈ {e, o}, and likewise, for the same reason, with N → ∞ we will have ε_s(N) = 1 − [1/Z(N, s+1)] → 1 − [1/ζ(s+1)] = ε_s(∞), because, based on the relation (38), in (39) it holds that lim_{N→∞} Z(N, s+1) = ζ(s+1).

Let us first examine (39) by inspection. From the properties of the function ζ(s) we conclude that the factors ε_s(∞) are positive numbers smaller than 1, all of which tend to 0 as s → ∞. These factors ε_s(∞), which are bounded by 1, are multiplied by the corresponding factors μ^s/M_e^{(0)}(∞) to form the expansion of (39). But the factors μ^s/M_e^{(0)}(∞) are all zero for every finite s. The reason is that N has already completed its course to infinity in the denominator M_e^{(0)}(N) (as we explained before), and only then does the index s run to infinity. Therefore, as the index s → ∞, the finite terms of the sequence μ^s, while developing and tending to an infinite quantity, are divided at each position of their development (where they have finite value) by the already infinite quantity M_e^{(0)}(∞), because this last quantity is already complete with respect to N; simultaneously they are multiplied by the positive factors ε_s(∞), which have also been completed with respect to N and are therefore bounded by 1. So all the terms (at every finite position) of the expansion of the limit of the relation (39) will be of the form (μ^s/∞) × ε_s(∞), with 0 < ε_s(∞) < 1, and therefore all these terms of the limit of the relation (39) will be equal to zero.

We checked the terms of the first member of (39) for finite values of s. Let us now see what happens in the limit L of these terms of the relation (39) as s tends to infinity. We will first examine this by inspection. We observe that [μ^s/M_e^{(0)}(∞)] ≤ 1, because in this quotient the denominator, due to the "action of all the SFNs together", will be constantly larger than the numerator μ^s. That is, the numerator runs to reach the result of the denominator but cannot overtake it, because by the definition of the limits nothing is greater than this infinite result. Also, as we said, ε_s(∞) → 0. Therefore the action of the total distribution of all the SFNs gives us L = 0. We worked here with the quotients because these are exactly the ones we will need for the calculation of the probabilities below. Let us now see the same limit in detail, based on the definition of limits:

α) Because we have shown that lim_{N→∞} M_e^{(0)}(N) = ∞, we conclude that for every u = μ^s > 0, and therefore also for s = ln(u)/ln(μ), there is N_s ∈ N such that M_e^{(0)}(N) > μ^s for all N > N_s. β) But because we said that ε_s(∞) is a null sequence of the index s, we conclude that for every ξ = 1/ν, ν ∈ N, there will be indexes s_{0ν} = s(ν), N_{0ν} ∈ N, where s(ν) is a function of ν, such that ε_s(N) < ξ for all s > s_{0ν}, N > N_{0ν}. We define, for every ν ∈ N, the indexes N*_{0ν} = max(N_{0ν}, N_{s(ν)}), s*_{0ν} = s(ν). From all the above we see that the u in (α) is an increasing function of the index s, and on the other hand in (β) we observe that the index s > s_{0ν} = s(ν) is an increasing function of ν. Therefore the u will be an increasing function of the index ν as well. Based on all this, and setting finally D(N, s) = [μ^s ε_s(N)]/M_e^{(0)}(N), from the two last cases (α) and (β) we conclude that D(N, s) < ξ = 1/ν for all N > N*_{0ν} and s > s*_{0ν}, and for every ν ∈ N, with ν any large natural number. Taking into account that M_e^{(0)}(∞) = lim_{N→∞} M_e^{(0)}(N) and ε_s(∞) = lim_{N→∞} ε_s(N), we conclude that:

$$\lim_{s\to\infty}\Big[\lim_{N\to\infty}\frac{\mu^s}{M_e^{(0)}(N)}\,\varepsilon_s(N)\Big] = \lim_{s\to\infty}\frac{\mu^s}{M_e^{(0)}(\infty)}\,\varepsilon_s(\infty) = 0 \qquad (40)$$

In this way we proved that all the terms of the first member of (39), up to infinity, will be zero. We conclude that the relation (39) is valid, and because it is equivalent, for every N, to the relation (37), which is a sufficient condition for the validity of the relation (36), we reach the conclusion that (36) is valid.

The fact that first the index N runs to infinity, and only then the s runs to infinity, expresses here the introduction of the total distribution of SFNs into the logical propositions with which we operate here, because all these propositions are precisely consequences of the distribution of all SFNs together.

After all this, we can show in detail that the series in the first member of (39) becomes null for ν = N → ∞. Indeed, for this to happen it suffices to show that: for every ε > 0 there is an s_0 ∈ N such that, because the ν has run through all values up to infinity, the following proposition is true: for every k > s_0 there can always be found an index ν_0 ∈ N* such that for every ν > ν_0 the following is valid:

$$\frac{1}{M_e^{(0)}(\nu)}\,\Big|\sum_{s=1}^{k}\mu^s\,\varepsilon_s(\nu)\Big| < \varepsilon \qquad (41)$$

The fact that the index ν has run through all values up to infinity means that the whole distribution of the SFNs will contribute to the formation of the following propositions. The proof of (41) is as follows: "The sequence x_ν = 1/M_e^{(0)}(ν) is, as we said, a null sequence; therefore for every E > 0 there exists n_0 ∈ N* such that x_ν < E for all ν > n_0. Also, the sequences |ε_s(ν)| have as limits L_s = |1 − [1/ζ(s+1)]|, which, as we already said, are all bounded by 1. Therefore, by selecting ξ_s with 1 > L_s + ξ_s > 0, there exists a corresponding n_s ∈ N* such that |ε_s(ν)| ≤ L_s + ξ_s ≤ 1 for all ν > n_s. Therefore for every ν > n = max(n_1, n_2, …, n_k) the following is valid:

$$\Big|\sum_{s=1}^{k}\mu^s\,\varepsilon_s(\nu)\Big| \le \sum_{s=1}^{k}\big|\mu^s\,\varepsilon_s(\nu)\big| \le \sum_{s=1}^{k}\mu^s = F(k).$$

Continuing, setting ν_0 = max(n, n_0), we have that for every ε > 0 and every k > 0 = s_0 there will be E = ε/F(k) such that: "The inequality (41), which comes from the member-by-member multiplication of the two previously mentioned inequalities, will be true for every ν > ν_0 = f(k, ε)". The F(k) is a function of the index k, because, based on the above, F(k) is defined by the running k. And ν_0 is a function of k, ε, because n_0 is a function of E and therefore also of k, ε, due to the definition E = ε/F(k), and because n is a function of k (since it is a function of the limit L_k of the L_s) and of the ξ_s, selected to be smaller than 1 − L_s for s ≤ k. We conclude that for every ε > 0, however small a real number, and simultaneously for every k > 0, however large a natural number independent of ε, there will be the index s_0 = 0 as well as the index ν_0 = f(k, ε) such that the relation (41) is valid."

Based on what we mentioned for (39), (41) and on the properties of the limits of the series M_o^{(0)}(N), M_e^{(0)}(N) that we showed before, altogether we will have: for every k > 0, which defines a random front of the μ^s from any multitude of terms s = 1, 2, 3, …, k, and for every ε > 0, however small, there will be three natural numbers N_1, N_2, N_3 = f(k, ε), where f(k, ε) is a function of k, ε, such that for N > N_1, N > N_2, N > N_3 the following relations hold respectively:

$$\frac{\max(|C_e|,|C_o|)}{M_e^{(0)}(N)} < \varepsilon,\qquad \big|\,|M_o^{(0)}(N)/M_e^{(0)}(N)| - 1\,\big| < \varepsilon,\qquad \sum_{s=1}^{k}\Big[\frac{\mu^s\,\varepsilon_s(N)}{M_e^{(0)}(N)}\Big] < \varepsilon$$

Therefore for every k > 0 there will also be an N_0 = max(N_1, N_2, N_3) for which these last three relations hold simultaneously for every N > N_0. This states directly that the relation (39) will not only be verified for however large a multitude k of its terms, and with any desired accuracy, but will simultaneously be a sufficient condition for the relation (36). Therefore the relation (39) really is true for the running of the indexes N → ∞, s → ∞. We also saw before, in the relation (40), that all the terms of the series in the relation (39) are null for k → ∞, which is compatible with the convergence of this series.

The previous cumulative formulation, using N_1, N_2, N_3 generally as functions of ε, k, obviously also holds for every logical connection of implication among (36), (37), (38), (39), formulated as relations with arbitrarily large N; because the indexes N_1, N_2, N_3, as well as the ε, k, always refer to arbitrarily small or large numerical quantities, but, based on the standard logical properties of infinitesimal calculus, they always refer to finite quantities. Therefore the crucial percentage differences of the two absolute potentials A(N), B(N) will finally be null for N → ∞, exactly as we agreed to state through the relation (36). Numerically this can be expressed as: "Whatever multitude of terms we select for s = 1, 2, …, k, and however small a percentage difference Δx of A(N), B(N) we select as a limit of their 'equality', we can always find an N_0 beyond which all these selections will be satisfied by the relation (36), formulated of course with functions of N (instead of ∞)".

Finally, due to the importance of this point, let us study the validity of the relation (36) also from another point of view. Let ω_ν = ε_s(ν)/M_e^{(0)}(ν) be the sequence of positive terms. Because, as we said, ε_s(ν) is bounded by 0, 1 and M_e^{(0)}(ν) becomes infinite for ν → ∞, we conclude that ω_ν is a null sequence. From the two previous levels of analysis, and replacing N with ν in the absolute potentials of events of (33), (34), we easily find that the following is valid:

$$\lim_{\nu\to\infty}\frac{|V_o(\mu)| - |V_e(\mu)|}{M_e^{(0)}(\nu)} = \lim_{k\to\infty}\Big[\lim_{\nu\to\infty}\sum_{s=1}^{k}\mu^s\,\omega_\nu\Big] \qquad (42)$$

Because ω_ν is null and μ is finite, it follows that for every k ∈ N* and every ε with 0 < ε < 1, there will each time be a corresponding ν_k ∈ N* such that μ(ω_ν)^{1/k} < ε < 1 for all ν > ν_k. Therefore, for every ν > ν_k ∈ N* we will have:

$$\sum_{s=1}^{k}\mu^s\,\omega_\nu = \sum_{s=1}^{k}\big[\mu(\omega_\nu)^{1/s}\big]^s \le \sum_{s=1}^{k}\big[\mu(\omega_\nu)^{1/k}\big]^s = \frac{\big[\mu(\omega_\nu)^{1/k}\big]^{k+1} - 1}{\mu(\omega_\nu)^{1/k} - 1} - 1 \qquad (43)$$

The relation (43) tells us that the first member of (42) is also zero, because the last member of (43), for every value of k, is a null sequence of ν. From the relation (33) we observe that with ν = N we have |V_e(μ)| = C_e + M_e^{(0)}(ν) + Σ_e. And because, e.g., for μ > 6 we have μ/μ'_1 = μ/6 > 1, we will have in (33) that, at least for μ > 6, the sequences μ^s/(μ'_1)^{s+1} are increasing and therefore Σ_e tends to infinity. Therefore now, because C_e, as we previously said, is finite, and for ν → ∞ the Σ_e tends to infinity, we conclude that, at least for the infinitely many positions μ > 6, the following holds: for every suitable positive number δ (with δ + C_e > 0) there will be an index ν_1 ∈ N such that Σ_e + C_e > δ > 0 for all ν > ν_1, while obviously at the same time M_e^{(0)}(ν) > 0, ν > ν_1. From these last two inequalities we now get that |V_e(μ)| = Σ_e + C_e + M_e^{(0)}(ν) ≥ M_e^{(0)}(ν), ν > ν_1, regardless of whether the quantity C_e is a positive or a negative number. Finally, from this last relation the following inequality (44) is produced, with the stipulation that the comparison of its two pure fractions (that is to say, without the two operators "lim" in front of them) takes place for ν > max(ν_1, ν_ε); because these two pure fractions are thus led simultaneously to the definition of their limits in the next relation (44). That is to say, it happens for every ν > ν_ε ∈ N that these two pure fractions are smaller than any given (however small) ε > 0, and furthermore our last inequality |V_e(μ)| ≥ M_e^{(0)}(ν), ν > ν_1, holds for them simultaneously. Thus, finally, the following inequality will be valid:

$$\lim_{\nu\to\infty}\frac{\big|\,|V_o(\mu)| - |V_e(\mu)|\,\big|}{M_e^{(0)}(\nu)} \ \ge\ \lim_{\nu\to\infty}\frac{\big|\,|V_o(\mu)| - |V_e(\mu)|\,\big|}{|V_e(\mu)|} \qquad (44)$$

Therefore we conclude that, because the first member of (42) is zero, the second member of (44) will also be zero, which implies (36). Moreover, and completely independently of all that, if the second member of (44) were not zero then, because the denominator in the second member of (44) becomes infinite for ν → ∞, we would arrive at the paradoxical conclusion that the limit lim_{ν→∞}[|V_o(μ)| − |V_e(μ)|] is necessarily not finite but diverges to +∞ or to −∞, which is obviously invalid; because in such a case (based also on our analysis in Chapter 7 below) we would soon come to the illogical conclusion that either p_e(μ) = 0, p_o(μ) = 1 or p_e(μ) = 1, p_o(μ) = 0, which would imply that finally either the e-SFNs or the o-SFNs have disappeared from the axis Ax(N) and that, in their infinite multitude, their proportion would not be 50:50. This last observation leads us to the conclusion that necessarily the relation (36) is valid. At this point this last observation also tells us that lim_{ν→∞}||V_o(μ)| − |V_e(μ)|| will be finite even though we subtract two infinite quantities, exactly as happens with the limit lim_{ν→∞}[M_e^{(0)}(ν) − M_o^{(0)}(ν)] = −1, based on the relations (26), (35). The importance of this observation comes from the definition of the absolute potentials of Chapter 4, which exploited the properties of the distribution of the numerical values of all the SFNs on Ax(N) in such a way that the typical definition of the probabilities of e, o, through the divergence of the limit lim_{ν→∞}|V_e(μ)|, forces every logically consistent result to lead us to a distribution of e, o of "H-T type". We concluded that the only possible case is the one we showed before in another way, namely that the first member of (42) is zero because the second member of (42) becomes zero, which, as we said, implies through (44) that the relation (36) is valid. So now, for the requested probabilities at any finite position of a natural number μ, based on the normalization described in Chapter 4 and on the relation (36), we will finally have:

$$p_e(\mu) = \lim_{N\to\infty}\frac{A(N)}{A(N)+B(N)} = \lim_{N\to\infty}\frac{A(N)/A(N)}{[A(N)/A(N)] + [B(N)/A(N)]} = \frac{1}{1+1} = 1/2$$

Similarly we find p_o(μ) = 1/2. Therefore the distribution, at every finite position μ ∈ N of Ax(N), due to the total information of e, o of all the SFNs of Ax(N), will be of "H-T type". Therefore the basic relation (1) of the Introduction, which we wanted to show, has been proven for all finite positions μ.

The question that remains to be answered is whether the movable finite position μ of "H-T type" can also be considered to define a position at infinity, μ → ∞. If yes, then the previous proof covers all cases. If not, we must check the case μ → ∞ based on the sole definition that remains: μ > μ_ν for every ν ∈ N*. So we will now examine this last case.

Based on this last definition of μ → ∞, the complex potential V(μ) at the "infinitely distant" position μ from zero can be written:

$$V(\mu) = V_e(\mu) + V_o(\mu) = \sum_{\nu=1}^{\infty}\frac{\lambda'_\nu}{|\mu'_\nu-\mu|} + \sum_{\nu=1}^{\infty}\frac{\lambda''_\nu}{|\mu''_\nu-\mu|} = \sum_{\nu=1}^{\infty}\frac{1}{\mu}\,\frac{\lambda_\nu}{1-y_\nu} \qquad (45)$$

with λ_ν = 1 for the types e and λ_ν = −1 for the types o of μ_ν = 2, 3, 5, 6, …, with ν = 1, 2, 3, 4, … and y_ν = μ_ν/μ < 1, ν ∈ N*. Making again a Taylor expansion of (45), from s = 0 to infinity as before, we easily arrive at the relation:

$$\sum_{\nu=1}^{\infty}\frac{1}{\mu}\,\frac{\lambda_\nu}{1-y_\nu} = \frac{1}{\mu}\sum_{\nu=1}^{\infty}\lambda_\nu + \frac{1}{\mu^2}\sum_{\nu=1}^{\infty}\lambda_\nu\mu_\nu + \frac{1}{\mu^3}\sum_{\nu=1}^{\infty}\lambda_\nu(\mu_\nu)^2 + \cdots \qquad (46)$$

Putting again N instead of ∞, as before, we want to check whether the first member is infinite as N → ∞:

$$\sum_{\nu=1}^{N}\frac{1}{\mu}\,\frac{\lambda_\nu}{1-y_\nu} = \frac{1}{\mu}\sum_{\nu=1}^{N}\lambda_\nu + \frac{1}{\mu^2}\sum_{\nu=1}^{N}\lambda_\nu(\mu_\nu)^1 + \frac{1}{\mu^3}\sum_{\nu=1}^{N}\lambda_\nu(\mu_\nu)^2 + \cdots \qquad (47)$$

So, based on our assertion that the μ represents infinity, we assumed before that μ will be larger than all the values μ_ν, that is to say, larger than μ_ν for every ν ∈ N*. In other words, μ will be a natural number larger than the front μ_N of every interval (1, μ_N] in every use of this interval. But, on the other hand, because |λ_ν| = 1, for the relation (47) we conclude that for every N ∈ N* the following is valid:

$$\Big|\sum_{\nu=1}^{N}\lambda_\nu(\mu_\nu)^s\Big| < \sum_{\nu=1}^{N}(\mu_\nu)^s < N(\mu_N)^s \qquad (48)$$

According to the previous definition of μ, the μ can be considered a variable position larger than every value μ_ν. Therefore, by also defining the null sequence of positive terms ξ(ν) = 1/μ_ν, on the basis of the previous definition of μ we can conclude that 1/μ = ξ(∞) = 0, or equivalently:

$$1/\mu < \xi(\nu),\qquad \forall\nu\in N^* \qquad (49)$$

That is to say, just as before, the index ν of μ_ν has completed for the 1/μ [through the sequence ξ(ν) of the SFNs] its path to infinity for every value of the index N of the relation (48). Putting ε = 1/μ_N, then on the basis of the relations (48), (49) we conclude that there exists s_0 ∈ N, which as before defines a front in the terms of (47), such that for every s < s_0 and every N ∈ N*:

$$\frac{1}{\mu^{s+1}} < \big(\xi(N)\big)^{s+1} = \varepsilon^{s+1}$$

because the sequence ξ(ν) contains all the terms 1/μ_ν up to infinity, which, as we said, have by definition been run through by the indeterminate μ. Therefore, due to the relations (48), (49), we will finally have:

$$\forall s < s_0,\ \forall N\in N^*:\quad \frac{1}{\mu^{s+1}}\Big|\sum_{\nu=1}^{N}\lambda_\nu(\mu_\nu)^s\Big| < \varepsilon^{s+1}\,N\,(\xi(N))^{-s} = \varepsilon^{s+1}N\varepsilon^{-s} = \varepsilon N = N/\mu_N \qquad (50)$$

The relation (50) implies that, as N → ∞, we will have [N/μ_N] → 0, and therefore (1/μ^{s+1})Σ_{ν=1}^{N}[λ_ν(μ_ν)^s] → 0 for every s ∈ N. That is to say, all the terms of the relation (47) will tend to zero as N → ∞, and therefore all of them will be exactly equal to zero. Therefore at μ = ∞ the potential of events V(μ) of the relation (45) will be zero, which implies that at μ = ∞ the two probabilities for e, o will be equal to each other, each equal to 1/2. The proof that "μ = ∞ is 'of H-T type'" is a result of the annihilation of the terms of the relation (45), based on the sole definition that remained for the extreme case μ = ∞. But we must not ignore that the inequality (48) hides a very favorable inequality for this particular proof, because in (48) we ignored the decrease of the sum that the total statistics enforce on it through the λ_ν with opposite signs.

We have shown that, whether μ is finite or infinite (or zero), the information of the observer from the distribution of all e, o projected on his position μ implies mathematically that the μ will always be a position of "H-T type". In other words, we have shown that the informational trace at every position μ of Ax(N) is of "H-T type". Therefore the relation (1), which we wanted to show, has been proven.

We can also note that, because in the proof of the distribution of "H-T type" we did not use the fact that μ is a natural number, we conclude that this distribution will be of "H-T type" also for every rational position μ. This last point, though, is not necessary for our problem, where we work with the set N of natural numbers; we simply observe that the points (positions) of rational numbers of Ax(N) will also be of "H-T type". Also, if we remove from the axis Ax(N) some SFN with numerical value μ_κ, then the following will happen: α) In the relation (39) the two limits of the sequences M_e^{(0)}(N), M_o^{(0)}(N) for N → ∞ will be subjected to small and finite alterations and therefore will continue to be infinite with zero percentage difference. β) The limit of the square bracket for N → ∞ will also be subjected to just one finite alteration and therefore will continue to be finite and bounded, but this time by some other bound defined by the value μ_κ. Therefore the proving process leads again to the same conclusion, namely that the relation (36) will continue to be valid, as much for the position μ = μ_κ as for every other position of a natural number μ ≠ μ_ν, ν ∈ N − {κ}. This specific SFN μ = μ_κ was obviously previously evacuated by its SFN counterpart [as we said before and also in (2.2.3, b) of Chapter 2, etc.], so that the potential of events can be calculated on it, and so the two probabilities of e, o (on this emptied position μ of an SFN), because of the action of all the other SFNs, will again be equal to each other. We considered here the Ax(N) as a Euclidean axis, because the observer studies the distribution of e, o as a classical traveler of the axis Ax(N), where the events pre-exist along it together with the Euclidean properties of all its distances. Therefore the total information of the distribution projected on the position μ_κ will make this position, as well as the positions μ of all the non-SFNs that we mentioned, be of "H-T type". We finally conclude that the observer who runs along Ax(N) (as sole and exclusive representative of the system of information), using the typical definition of probability at any position μ, finds that the distribution will be of "H-T type". Therefore the total distribution of SFNs will be H-T, and what we wanted to prove has been proved.

6. Catholic Information

6.1) In this Chapter we will give the main definitions concerning the introduction of the term "Catholic Information", which is useful first of all for Chapter 7 below on the connection between RH and the distribution of primes. But this definition of Catholic Information might prove useful in a variety of other mathematical themes as well.

6.2) We will symbolize with F(N, N) every function F: N → N. In the symbols of all the maps that we define here, the N inside the parentheses of the symbols F(N, N), etc., denotes maps of N or of a subset of N. So F selects a multitude of the elements of N that are its images, e.g. the function that selects the Prime Numbers from N by using the "Sieve of Eratosthenes". But we can also define functions, symbolized here as being of type F: N → S_a(n), where the set S_a(n) consists of a multitude K = f(n) of subsets (each of which consists of elements a_i ∈ A) of a set A that has a multitude of n elements, e.g. K = 2^n. We will symbolize these functions as F(N, S(A)). The counting of the elements of A defines a counting of the "magnitude" A by an observer, with a unit M equal to a multitude of ν elements of A, provided that the multitude of all the elements of A was selected to be a multiple of ν. Last, we will symbolize with F(S(A), S(B)) every function of the form F: S_a(n) → S_b(k), that is to say, maps of subsets of A to subsets of B. Thus, generalizing these definitions to even more composite ones, we can construct complex sets X, which will generally be of the form S_x(n), using exclusively the N or, generally, the set Z of the Integer Numbers.

For that reason, we will from now on consider that every element a_m of A = {a_1, a_2, …, a_n} = {a_m : m = 1, 2, …, n} will itself also be a set, which in the final analysis will have as elements an arrangement of a multitude of true logical propositions (special properties) $\overline{LP}_{mi}(A)$, m = 1, 2, 3, …, n, which were born, and therefore proven, generally from the previous maps that intervened up to the last formation (that is to say, definition) of A. Here i = 1, 2, …, m_0 is the index of enumeration of a multitude m_0 of $\overline{LP}$ for every element a_m. The m_0 will generally be different for different a_m.

The dash over the symbols LP, CLP of a logical proposition will, we agree, state that it is a true logical proposition, while LP, CLP without a dash over them will state that a logical proposition may be true or false. LP, CLP will be constructed, as is well known, with the logical connectives OR, NOT, AND, IF, THEN, etc., which connect final elements of sets through suitable maps. If the sub-elements of every element of the set S_x(n) are ordered in relation to the properties of definition, then we can define spaces (manifolds) of multiple dimensions, considering an order of κ sub-elements inside every element of S_x(n), which will define a multitude of κ coordinates. Every good definition must after all be based on the general properties of the set Z and its subsets, and not on perceptions of the senses. We know that every measurable magnitude has as its fundamental definition that it can be divided, as well as expanded into other parts with the same properties, as happens with the sets we refer to here.

6.3) By saying below that the logical proposition LP2 derives "exclusively" from another logical proposition LP1, we will strictly mean a) that the truth of LP1 entails the truth of LP2 using Mathematical Logic, and furthermore b) that the truth of LP2 is impossible to derive (be proven) by using Mathematical Logic alone, without using the truth of the proposition LP1. Therefore, for this definition the necessary precondition is that neither of LP1, LP2 can be proven by the use of Mathematical Logic alone, so the system of LP1 and LP2 encloses new information that is not included in the rules of Mathematical Logic itself.

6.4) We said that every element a_m of a set A will be considered as a set of true propositions $\overline{LP}_{mi}(A)$. The propositions that are common to all the elements of A will be named Catholic true Logical Propositions $\overline{CLP}_j(A)$, j = 1, 2, …. These logical propositions will therefore be valid for the random element of A, which will also be called a catholically chosen element (CC) of A. The minimum necessary multitude of the common $\overline{CLP}_j(A)$ that are needed for the definition of A will constitute a particular basic set Q(A) = {$\overline{CLP}_q(A)$ : q = 1, 2, 3, …, N}, and obviously these propositions will derive from the general propositions that were used by an observer Ob for the progressive formation of A, e.g. the previous method of using some of the three types of maps F. The Q(A) will be named the quantum of A. But every element a_m of A will include, among the $\overline{LP}_{mi}(A)$, also true propositions which do not belong to the quantum of A and which differentiate the corresponding element a_m from all the other elements of A. E.g. two molecules of water belong to a set A of molecules of a drop of water; each of them is differentiated by the propositions of its unique position in space, because each one has a different position in space from all the other molecules of this drop, but all the molecules have the same set Q(A) of the specific properties of water. We will also agree that the elements of Q(A) that define the A will be named properties of A. It is logical for the observer Ob to gather all the elements of A by using the defining set Q(A), while simultaneously he differentiates these elements from one another (e.g. during their counting) based, this time, on their non-common properties, symbolized as $\overline{DLP}_{mj}(A)$. Therefore we conclude the obvious relationships: Q(A) ⊂ a_m and $\overline{DLP}_{mj}(A)$ ∈ a_m. But any such set A, because it has measurable elements, can also define a measurable magnitude via the map F: N → S_ā(n) that we previously mentioned, with unit of measurement equal to a multitude ν of its elements, selecting the n to be a multiple of the multitude ν, and where S_ā(n) generally includes all the possible subsets of A with a multitude of ν different elements each. The set of all of the true propositions $\overline{CLP}_j(A)$ that derive (are proven) exclusively from the properties (that is to say, the true propositions) of the quantum Q(A) will be named the Catholic Information (CI) of A and will be symbolized CI(A), [6] PDF pages 794, 797, 815, 816, etc., [7] PDF pages 562, 563 etc., [8]. Based on this previous definition of the Quantum of a set, the following two properties will evidently be valid: a) Q(A ∪ B) = Q(A) ∩ Q(B) and b) Q(A ∩ B) = Q(A) ∪ Q(B).

6.5) a) Let A be a set with a finite number n of elements. As a "random selection of an element" from the set A we define a set of CLP whose truth constructs a function from the set of natural numbers N to A which each time (for ν = 1, 2, 3, 4, …) selects only one element of A in an impartial way. An impartial way will be one such that, by definition, in a multitude of M selections tending to infinity, every element of A tends to be selected as many times as any other element of A has meanwhile been selected by this map of selections. If A has infinitely many elements, then taking a part of the set A with n finite, letting n tend progressively to infinity and repeating the definition for every value of n, this definition is generalized also to a set A with infinitely many elements. It is understood that each selection does not remove the selected element from A; it is put back in its position, so that it is present also in the next selection.

b) Let A be a set, generally with an infinite number of elements, and let f: A → B be a function, where B = {β_1, β_2, …, β_n} is a set with a finite multitude of n elements. As a "random or impartial distribution of B over A" we define the distribution function f if, and only if, the function f was defined in such a way that, in every one of its maps, it maps every selected element of A onto a randomly selected element of B. In the special case where the set B contains only two elements β_1, β_2, this "random distribution of the set B over the set A" will be named a distribution of Heads-Tails (H-T) type, symbolized:

Type[Distr(B/A, f)] = H-T.

Let also $\overline{CLP}(A, B)$ be the set of true catholic propositions of the sets Q(A), Q(B) that are absolutely necessary for the definition of the function f. We will name the catholic information CI(A, B, f) of f the set of all true catholic propositions, which we will symbolize $\overline{CLP}_i(A, B, f)$, that result "exclusively" from the set of propositions $\overline{CLP}(A, B)$ ∪ W, where W is the set that contains only all the absolutely necessary propositions of the definition of f. Therefore, every true proposition $\overline{CLP}_i(A, B, f)$ will result from the map of a randomly selected element of the domain of f. The term "result exclusively" was defined previously in 6.3.
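As a concrete (and entirely illustrative) picture of definition 6.5 b), the sketch below distributes a two-element set B over a large index set A by independent fair choices and checks that the empirical frequencies approach 1/2, which is the behaviour an H-T type distribution must show; all names are ours:

```python
# A toy "random distribution of B over A" in the sense of 6.5 b): each element of A
# is mapped to a randomly selected element of B = {'H', 'T'}; for an impartial f the
# two empirical frequencies tend to 1/2 as |A| grows.
import random

random.seed(0)
A = range(1, 100001)                       # a finite part of an (in general infinite) set A
f = {a: random.choice(('H', 'T')) for a in A}

counts = {'H': 0, 'T': 0}
for a in A:
    counts[f[a]] += 1
print(counts['H'] / len(A), counts['T'] / len(A))   # both close to 0.5
```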

6.6) Because "in Mathematics nothing happens without a cause", we will formulate a proposition that is a direct consequence (paraphrase) of this quote. We will name it the Proposition of Mathematical Consequence (PCM). Let f_1: A → B and f_2: A → Γ = {γ_1, γ_2, …, γ_n} be two functions, where A, B have an infinite multitude of measurable elements, while Γ has a finite number of elements, and f_1 is a 1-1 function. Onto every element of B only one element of the set Γ is mapped through f = (f_1)^{-1} ∘ f_2: B → Γ. Now, if we prove that the set of Catholic Information CI(B, Γ, f) = CI(A, B, f_1) ∪ CI(A, Γ, f_2) of f_1, f_2 does not contain any $\overline{CLP}$ proposition that dictates, through the function f, one specific distribution (that is to say, by definition, one "non-random" distribution) of the set Γ on the set B (i.e. of the set Γ on the set of images of f_1 that belong to B), then the distribution of Γ on B will be by definition "a random distribution with respect to the map of the function f", and then f_1, f_2 will be called independent of each other, symbolized f_1 <> f_2.

6.7) As an application we will make an introductory reference to the two fundamental maps of the main article, where the meaning of this reference will be better shown. Let S, S_N be the two sets of all the subsets of the set of the Prime numbers N_q = {2, 3, 5, 7, …} and of all the subsets of the set N of the natural numbers, respectively. We will use from now on the symbols below, and we will also symbolize the prime natural numbers as:

q 1 = 2 , q 2 = 3 , q 3 = 5 , .

Now, according to the definitions in 6.2 part before, we define the functions:

$$F_1 = F_1(N, N): N \to N_q \subset N \quad\text{and}\quad F_2 = F_2(N, S(N)): N \to S_N \qquad (51)$$

$$F = F(N, S(N)) = [F_1(N, N)]^{-1} \circ F_2(N, S(N)): N_q \to S_N \qquad (52)$$

$$f_1: S \to N_{SF} \subset N \quad\text{and}\quad f_2: S \to G = \{e, o\} \qquad (53)$$

For F_1(N, N) and all the other similar functions the symbols were explained in 6.2. The first map of (51) is the known sieve of Eratosthenes, which selects from the set of natural numbers the prime numbers, while the second map constructs the set S_N of all the subsets of N. The composition (52) constructs the set S of all the subsets of the set of prime numbers N_q, replacing every natural number ν of the elements of S_N with the corresponding prime number q_ν. The relations (51), (52) can be presented with an association diagram of the sets N, N_q, S_N, connected by the 3 arrows of these 3 maps, forming the triangle N – N_q – S_N: N →(F_1)→ N_q →(F)→ S_N →(F_2)→ N. The other triangle S – N_SF – G of the relations (53) will similarly be: S →(f_1)→ N_SF →(f)→ G →(f_2)→ S, and it can be placed in the same diagram with the first triangle through the connecting map S_N →(F)→ S, by which S_N is transformed into S by the map F. The first function of the relations (53) constructs the set N_SF of the Square Free Numbers (SFNs), and the composition f = (f_1)^{-1} ∘ f_2 distributes over the N_SF the two events even (e) and odd (o) of the two subsets e-SFN = SFN(e) = {6, 10, 14, …}, o-SFN = SFN(o) = {2, 3, 5, …, 30, …}.

Previously, in Chapter 5, we proved that the distribution of e, o on the N_SF is of Heads-Tails type relative to the function f, that is, the relation referred to in the Introduction as (1), of the form:

Type[Distr(G/N_SF, f)] = H-T (54)

Evidently, the (54) will be equivalent to the Catholic Proposition (CLP):

p[μ_ν(e) > μ_ν(o)] = p[μ_ν(e) < μ_ν(o)] (55)

The μ_ν(e) and μ_ν(o) denote numerical values of SFNs (of types e, o) selected, nevertheless, randomly from the sets SFN(e), SFN(o) of N_SF = SFN(e) ∪ SFN(o), respectively. To clarify the meaning of this impartial distribution of e, o onto the arithmetical values μ_ν of the SFNs (i.e. independently of whether the values μ_ν are large or small), we will give an inverse example, i.e. one where (contrary to the previous distribution of e, o on the values μ_ν) the corresponding Catholic Proposition p[α_ν(E) > α_ν(O)] = p[α_ν(E) < α_ν(O)] is not satisfied, i.e. the counterpart of (54) is not valid. By the symbol E we denote the classically even (2κ) natural numbers α_ν(E) ∈ {4, 80, 100, 112} and by O the classically odd (2κ + 1) numbers α_ν(O) ∈ {5, 9, 15, 21}, with A(E) = {4, 80, 100, 112}, A(O) = {5, 9, 15, 21}. I.e. with α_ν(E), α_ν(O) we symbolize two elements of types E, O which we suppose were selected randomly from the set A = A(E) ∪ A(O), with the presupposition that we put back the first element before we selected (randomly) the second from the set A = {4, 5, 9, 15, 21, 80, 100, 112}.

We observe that for this selection of the set A it holds that p[α_ν(E) > α_ν(O)] = 3/4 > p[α_ν(E) < α_ν(O)] = 1/4. That is to say, the classically even numbers here have the tendency to be distributed among the large numerical values of the set A, and therefore the distribution of the two events O, E over the elements of A is not numerically impartial, although (as we said) our selection of O, E (which we suppose came out randomly as one E and one O, and not E, E or O, O) with replacement was impartial as a random one. But, according to (55), the same does not happen with the e, o distribution over the μ_ν values of the SFNs in a finite but randomly selected interval of natural numbers of the axis Ax(N).
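The counter-example above can be checked by direct enumeration of all ordered pairs (the tiny script below is ours):

```python
# Enumerate all pairs (one element of A(E), one of A(O)), selection with replacement,
# and compute the two probabilities compared in the text.
from itertools import product

A_E = [4, 80, 100, 112]      # classically even elements of A
A_O = [5, 9, 15, 21]         # classically odd elements of A

pairs = list(product(A_E, A_O))
p_greater = sum(e > o for e, o in pairs) / len(pairs)
p_smaller = sum(e < o for e, o in pairs) / len(pairs)
print(p_greater, p_smaller)   # 0.75 and 0.25: the distribution of E, O over A is not impartial
```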

We will define as an "impartial logarithmic distribution" of the prime numbers in the set N one of those distributions of prime numbers in which the set of prime numbers is dictated by a mental roulette that selects them in such a way that, in a random interval δ_n = [(q_n)², (q_{n+1})²), the prime numbers have the statistical property [1] [2] [3] [6] of being neither concentrated nor diluted, but simply follow the distribution dictated by the relation p(ν ∈ δ_n ∧ ν = q_k) ≃ Π_{ν=1}^{n}[1 − (q_ν)^{-1}], together with all the other true CLP of its correction in this randomly selected (CC) interval δ_n. The known theorem of the logarithmic distribution of prime numbers is a necessary consequence of the 1st theorem of prime numbers but not a sufficient proposition for it, and so not equivalent to it. This equivalence, as we prove in the next Chapter 7, demands the validity of RH, which, according to the conclusion of the previous Chapter 5, holds. Based on the above probability we can conclude that only at the transition from one interval δ_n to the next δ_{n+1} does the value of the probability of appearance of prime numbers change, according to the above relation:

$$p(\nu\in\delta_n \wedge \nu = q_k) \simeq \prod_{\nu=1}^{n}\big[1-(q_\nu)^{-1}\big] \qquad (56)$$

The proof of the catholic relation (56) is simple [6], PDF Chapter 2, pages 798-801. The "almost equal" sign "≃", and not "=", is due here to the fact that the frontiers (limits) of all the multiples of the prime numbers 2, 3, …, q_n inside δ_n do not coincide with the limits of the interval δ_n, as the perfect calculations would demand for exact equality, etc. [1] [2] [3]. The impartiality mentioned previously demands that no other information be introduced except that which is introduced by completely general CLP (e.g. inequalities) which correct the "≃" and transform it into an equality, and which hold in a random δ_n of N. The theorem of the logarithmic dilution of prime numbers is therefore expected to be valid statistically over a large multitude of intervals δ_n for n → ∞. As we will examine in the next Chapter 7, the impartial distribution of prime numbers presupposes that the Riemann hypothesis is valid. In that case the question arises whether we can have special relations forecasting prime numbers, e.g. whether a relation of the form ν = 2^κ − 1 could forecast with 100% accuracy the positions ν of a group of infinitely many prime numbers (∀κ ∈ N). For, just as in an impartial roulette, where no such particular relation can keep its absolute validity indefinitely within an infinite distribution, so here too we ask whether the same happens; and according to the proof of RH in Chapter 5 the answer is no.
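The heuristic (56) can be compared numerically with the actual count of primes in an interval δ_n = [(q_n)², (q_{n+1})²); the small script below (ours, purely illustrative) does this for a few values of n, and the agreement is only rough, exactly as the "≃" of the text indicates:

```python
# Compare the observed density of primes in δ_n = [q_n^2, q_{n+1}^2) with the
# product Π_{ν<=n} (1 - 1/q_ν) of relation (56).
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, flag in enumerate(sieve) if flag]

qs = primes_up_to(200)                      # enough primes for the first intervals δ_n
prime_set = set(primes_up_to(qs[-1] ** 2))

for n in range(3, 10):
    q_n, q_next = qs[n - 1], qs[n]          # q_n is the n-th prime (1-indexed)
    lo, hi = q_n * q_n, q_next * q_next
    observed = sum(1 for v in range(lo, hi) if v in prime_set) / (hi - lo)
    predicted = 1.0
    for q in qs[:n]:
        predicted *= (1 - 1.0 / q)
    print(n, round(observed, 4), round(predicted, 4))
```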

7. Connection between RH and the Distribution of the Primes

In this Chapter, using the results of Chapters 5 and 6, we will prove that the "logarithmic distribution" dictated by the prime number theorem is additionally "impartial", a problem that was discussed in Chapter 6.

We will present here a method of progressive construction of the set of SFNs (Square Free Numbers). We define the set S_q(n) whose elements are sets s_ν, 2^n in multitude:

S_q(1) = {s_1, s_2} = {{(1)}, {(1), 2}}

S_q(n) = S_q(n−1) ∪ [⋃_{ν=1}^{2^{n−1}} {s_ν ∪ {q_n}}],  n, ν ∈ N, n ≥ 2   (57)

s_{2^n − ν} = s_{2^{n−1} − ν} ∪ {q_n},  n, ν ∈ N, n ≥ 2, ν = 0, 1, 2, …, 2^{n−1} − 1   (58)

Based on the above we define the set S = lim_{n→∞} [S_q(n)].

Every collection S_q(n) defines a topological structure. In this way Table 1, showing the creation of S_q(n), is formed.

The S that comes from Table 1 is therefore the set:

S = {{(1)}, {(1), 2}, {(1), 3}, {(1), 2, 3}, …}   (59)
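A minimal sketch (ours, for illustration only; the names are not from the article) of the recursive construction (57), (58): each phase doubles the previous one by adjoining the next prime q_n to every existing element s_ν.

```python
from sympy import prime

def S_q(n):
    """Build S_q(n) of relation (57): all subsets of {q_1, ..., q_n}, each written with the neutral element (1)."""
    collection = [frozenset([1])]                # the single element {(1)}
    for k in range(1, n + 1):
        q_k = prime(k)                           # q_1 = 2, q_2 = 3, q_3 = 5, ...
        # relation (58): the new half of the phase is the old half with q_k adjoined
        collection = collection + [s | {q_k} for s in collection]
    return collection

print(S_q(2))
# [{1}, {1, 2}, {1, 3}, {1, 2, 3}], i.e. the first four elements of S listed in (59)
```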

We note that if we construct another set S1 consisting of all the subsets of SFNs and map its elements onto N [again with a corresponding function f_1 of the multiplications of relation (53) of 6.7 in the previous Chapter 6], we should cover the entire N, that is, we expect the set N to emerge; but this last hypothesis needs a proof which is not required for our present research.

Every element of S is a set and will be symbolized, as we said, by s_ν, ν = 1, 2, 3, …. The products of the elements of the sets s_ν progressively produce the set N_SF of the SFNs. Because the number 1 is the neutral element of multiplication, the set {(1)} corresponds to the null set with zero elements, so {(1)} will be considered of "even" multitude of elements (regarding 0 as even, 2κ with κ = 0), while {(1), 2}, which has one element (the number 2), will be considered of "odd" multitude of elements, and so on. In every step n the elements of S_q(n) are duplicated. The set S_q(n) will be called from here on the phase n of S. In part 6.7 the function f_1: S → N_SF ⊂ N of relation (53) defines, by the act of multiplication, the numerical values μ_ν of the SFNs. In a random phase n, if we place the images μ_ν = f_1(s_ν) on an axis Ax(SFN) of SFNs, we observe that there are SFNs of phase n with numerical values μ_κ such that, inside the interval (1, μ_κ] (which each one of them defines), other SFNs with smaller numerical values are missing; these will appear in a later phase N > n, as Table 1 is enriched with more and more "new" SFNs. Every SFN of phase n with numerical value μ_ν on Ax(SFN) that has this property will be called a non-completed SFN of phase n, and conversely every one which does not have this property will be called a completed SFN of phase n. The subsequent phase N(n, μ_κ) > n in which the SFN with value μ_κ (found non-completed in phase n) is completed for the first time will be called the Front N(n, μ_κ) of this specific μ_κ of phase n. We now define two kinds of distribution of prime numbers in Table 1.
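The sketch below (our own illustration under the definitions just given; all names are ours) computes the numerical values μ_ν = f_1(s_ν) of a phase n and marks which of them are completed, i.e. for which every square-free number of the interval (1, μ_ν] already appears among the values of the phase.

```python
from sympy import prime, factorint

def phase_values(n):
    """Numerical values mu_nu = f_1(s_nu) of phase n: products of subsets of the first n primes."""
    values = [1]
    for k in range(1, n + 1):
        q_k = prime(k)
        values = values + [v * q_k for v in values]
    return values

def is_squarefree(m):
    return all(e == 1 for e in factorint(m).values())

def completed(mu, phase):
    """mu is 'completed' in this phase if every square-free number <= mu already appears in the phase."""
    present = set(phase)
    return all((not is_squarefree(m)) or (m in present) for m in range(1, mu + 1))

phase = phase_values(3)          # subsets of {2, 3, 5}: values {1, 2, 3, 5, 6, 10, 15, 30}
for mu in sorted(phase):
    print(mu, "completed" if completed(mu, phase) else "non-completed")
# 7, 11, 13, 14, ... are still missing, so e.g. 10, 15 and 30 are non-completed in phase 3
```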

A) The distribution in which the events "q_{m_i}" behave, in their distribution over the elements s_ν = {(1), q_{m_1}, …, q_{m_{ρ−1}}, q_{m_ρ}} of S, as "codes of discrete events m_i, i = 1, 2, …, ρ ∈ N, defining here by the term 'code of a discrete event' an event without hidden numerical properties", in the sense of the function (52) of 6.7 in the previous Chapter 6.

B) The distribution of the "numerical events" q_{m_i}× that come from replacing the q_{m_i} of the s_ν of S with their corresponding q_{m_i}×. The symbols q_{m_i}× express not only their correlation with respect to their distribution in S (that is, whether they are independent as codes of events with respect to e, o or to each other), but simultaneously also whether they are independent with respect to the numerical results μ_ν which arise from them in N_SF.

Table 1. Creation of the set S.

So with p("q_{m_n}" || e) we will represent the probability p for the prime q_{m_n} to belong, as a code of a discrete event (which we defined at A above), to a randomly selected element s_ν ∈ S, obviously with the precondition that this set s_ν is of type e. And p("q_{m_n}" || "q_{m_j}") will be the probability p that the prime number q_{m_n} is enclosed, as a code of a discrete event, in a randomly selected s_ν with the precondition that q_{m_j} is also enclosed, again as a code of a discrete event, in this same s_ν.

Next, the two events e, o will correspond to ρ = 2κ and ρ = 2κ + 1 (with κ ∈ N) in the formula s_ν = {(1), q_{m_1}, …, q_{m_{ρ−1}}, q_{m_ρ}} of S.

We can now easily find that in S the four relations (60), (61), (62), (63) hold:

p("q_{m_n}" || e) = p("q_{m_n}" || o) = 1/2   (60)

p("q_{m_n}" || "q_{m_j}") = p("q_{m_n}" || "q_{m_k}") = 1/4,  n ≠ j, j ≠ k, n ≠ k   (61)

Also, naming ρ(e), ρ(o) the populations of prime numbers in two randomly selected s_ν from the subsets SFN(e), SFN(o) [or e-SFN, o-SFN] of the set N_SF of SFNs respectively, the following will hold:

p[ρ(e) > ρ(o)] = p[ρ(e) < ρ(o)]   (62)

Finally, we will also include the fact that, because no prime number is a product having another prime number as a factor, it follows that:

"By erasing all the quotation marks in relations (60), (61), that is, writing p(q_{m_n} || e) = p(q_{m_n} || o) = 1/2 etc., we see that the arithmetical values of the corresponding primes (considered as possible products of primes) are directly independent (i.e. considered as codes of discrete events) of any other prime number, but not necessarily also indirectly independent (i.e. considered not as codes of discrete events but as carriers of possible special hidden relations acting in subsequent multiplications with each other in the formation of the values μ_ν ∈ N_SF) through other hidden arithmetical properties between the prime numbers." Proposition: (63).

Relation (60) states that the events e, o are independent of the codes of discrete events "q_{m_i}" in the set S. Relation (61) states the independence of the codes of discrete events "q_{m_i}" from each other, considering them as quanta, i.e. without hidden arithmetical relations between them, in the set S. Based on the propositions (60), (61), (62), (63) we will now prove that relation (54), or (1): Type[Distr(G/N_SF, f)] = H-T, is equivalent to the "impartial logarithmic distribution" of the primes in N, as we defined the latter in part 6.7.
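Purely as an illustrative sketch (ours; it assumes the codes-of-events model described in case A above, in which each prime of a fixed finite pool belongs or does not belong to a randomly selected s_ν independently and with probability 1/2), one can estimate the probabilities appearing in (60) and (62) by sampling random elements s_ν:

```python
import random

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]     # a finite pool standing in for q_1, q_2, ...

def random_s(parity=None):
    """A random s_nu under the codes-of-events model; optionally resample until its prime
    multitude has the requested parity ('e' for even, 'o' for odd)."""
    while True:
        s = {q for q in PRIMES if random.random() < 0.5}
        if parity is None or (len(s) % 2 == 0) == (parity == "e"):
            return s

trials = 100_000

# Relation (60): the code "3" appears in a random e-type (resp. o-type) element with probability 1/2.
p60_e = sum(3 in random_s("e") for _ in range(trials)) / trials
p60_o = sum(3 in random_s("o") for _ in range(trials)) / trials

# Relation (62): the prime populations rho(e), rho(o) of an e-type and an o-type element
# exceed one another with (approximately) equal probability.
gt = lt = 0
for _ in range(trials):
    rho_e, rho_o = len(random_s("e")), len(random_s("o"))
    gt += rho_e > rho_o
    lt += rho_e < rho_o

print(round(p60_e, 3), round(p60_o, 3))                # both close to 0.5
print(round(gt / trials, 3), round(lt / trials, 3))    # approximately equal, as in (62)
```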

Proof: "According to what we said in parts 6.5, 6.6, let us symbolize by Ω = {CLPΔ̄_i(N_SF, G, f): i = 1, 2, …, N} the set of all the supposed "secret (hidden) numerical propositions" of the prime numbers (if they really exist) which would force the distribution of primes in the set N (of natural numbers) not to be an "impartial logarithmic distribution" of the primes. Corresponding now to the two cases A, B that we mentioned above, the events e, o will be mapped into N_SF in two ways:

G = {e, o} →A1 S →A2 N_SF, supposing that in the step A2 it holds: Ω = ∅   (64)

G = {e, o} →B1 S →B2 N_SF, supposing that in the step B2 it holds: Ω ≠ ∅   (65)

In these two maps (64), (65) the first steps A1, B1 include only the propositions (60), (61), where the events e, o are distributed in S "randomly", according to the definition of the "random distribution" in 6.5. In relation (64) we assume at the step A2 that Ω = ∅ and that additionally (62), (63) hold, which together with the relations (60), (61) of A1 before, apart from the statements of independence that they define, also "determine" the numerical values μ_ν. In this case of relation (64) these are the only four conditions determining the numerical values μ_ν, because we consider the distribution of the prime numbers in N impartial (since we supposed Ω = ∅), and so the numerical acts of multiplication in the determination of the values μ_ν will be impartial as well. Therefore, in this case of (64), due precisely to this impartial "determination" of the arithmetical values μ_ν, the proposition Type[Distr(G/N_SF, f)] = H-T of relation (1) will necessarily hold, because according to the PMC that we mentioned in 6.6 there is absolutely no reason for the H-T distribution not to hold. However, the case (65) differs from (64) exclusively and only between the second steps A2, B2, where Ω ≠ ∅ holds only for the step B2 of (65). In this case Ω can define any non-"random distribution", that is to say a "non impartial logarithmic distribution" of the prime numbers q_{m_i} in N. E.g. it could dictate, in the random interval δ_n (defined in part 6.7), that the prime numbers condense around the center of the random δ_n. But this kind of Catholic Statistical Property of the prime numbers, again based on the PMC, would deterministically be imprinted also on the distribution of the values μ_ν as products of non-impartially distributed prime numbers. This result would cause a unique differentiation between the two steps of (64), (65). Based again on the proposition of causality PMC, this change would deterministically upset the H-T distribution of e, o of the case (64), because this change is the only differentiation of the case (64) from the numerical version of (65). It would therefore necessarily (that is to say deterministically) be imprinted as a "mathematical x-ray" of relation (65) on the distribution of the μ_ν. This is an important observation.

Therefore our question is whether the e, o are independent of each other not only in S, but also in their distribution over the set N_SF of SFNs. However, the map of S into N_SF comes via the f_1 of relation (53), by multiplications of the prime numbers q_{m_i} with each other, a thing which requires extra propositions, beyond the independences of (60), (61), in order to extend their validity also to the numerical acts of multiplication. This is because we do not know whether the independences stated by (60), (61) continue to hold in N_SF, that is when, as we defined in cases (A), (B) above, the distinct events "q_{m_i}" of the prime numbers are replaced by their corresponding numerical events q_{m_i}×, which progressively map S into N_SF. E.g. there is a chance that after these replacements the resulting events q_{m_i}× in (60) have "hidden" numerical relations between them, which also directly concern the distribution of prime numbers in the set N, so that the initially hypothetical independence relation p(5× || e) = p(5× || o) = 1/2 for the prime number 5 encloses "hidden" numerical information for another relation, e.g. for the relation p(3× || e) = p(3× || o) = 1/2 of 3, if for example the primes 3, 5 have special relations such that, when they are multiplied, each of the above two relations introduces some additional information into the other, via new true CLPs born by the multiplications. In that case 3×, 5× are not independent, and the new (60), which comes from the addition of "×", will no longer hold for the numerical results of the events of multiplication. But in this way the not yet proven independence between e, o in their distribution over the numerical events μ_ν of the SFNs would eventually be influenced, in an unknown way, and therefore the propositions (60), (61), (62), (63) by themselves can no longer guarantee the H-T distribution of the distinct events e, o on the μ_ν. That is, we must prove that this distribution is H-T in another way, a thing we did successfully in Chapter 5.

Also relation (63), e.g. for the prime numbers q_{m_n} = 3, q_{m_j} = 7, q_{m_k} = 101, states that the truth of (61) gives no "direct" information about other prime numbers, because other primes are not directly hidden, in the form of factors of products of primes, inside these three prime numbers. But evidently this does not exclude the existence of "hidden indirect numerical relations", e.g. between 3, 7, 101 and other prime numbers, which could dictate, as we explained, an indirect dependence via multiplications. If this last question has a negative answer for all the prime numbers, then the distribution of the e, o of the μ_ν on the axis Ax(SFN) of SFNs will be of H-T type:

Type[Distr(G/N_SF, f)] = H-T, which is the relation (1) of the Introduction that we proved in Chapter 5.

Therefore, because in Chapter 5 we proved relation (1), we conclude that the properties (60), (61), (62), (63), which (as we said) initially constitute the impartial distribution of e, o in S, also extend their validity to the semantic proposition of the "impartial logarithmic distribution" of the prime numbers, because, as we showed above, in this case only this one is missing for the validity of relation (1). On the other hand, because the same relation (1), which implies the impartial distribution of e, o, is also equivalent to the Riemann Hypothesis (see the Appendix at the end), we verify in this way that Riemann was indeed right in his claim that the hypothesis (RH), on the real part of the non-trivial zeros of the function ζ, leads to the "impartial logarithmic distribution" of the prime numbers on the axis Ax(N) of the natural numbers. But because we proved the crucial relation (1) by using the definition of the potential of events that we gave in Chapters 2, 4, we will briefly make some useful notes. We observe that in every phase n, as we previously defined it, the distribution of e, o on the corresponding μ_ν of S_q(n) is of Heads or Tails type. The proof of this is the following:

"Let us calculate, only for the values μ_ν of the phase n of the set S_q(n), the function M_q(n) which corresponds to (but does not coincide with) the known Mertens function: M_q(n) = ∑_{κ=0}^{2^n − 1} λ_κ, where λ_κ takes the values +1, −1 for e, o respectively, that is, (μ_0 = 1, λ_0 = +1), (μ_1 = 2, λ_1 = −1), …. This function will have absolute value:

|M_q(n)| = |0| = 0 < O((2^n)^{1/2}) = O(2^{n/2}), ∀ n ∈ N, where O is Landau's symbol".

It is known in the bibliography (e.g. [1]) that a sufficient condition for the validity of the Riemann Hypothesis is also the relation lim_{ν→∞}[M(ν) < O(ν^{1/2})], under the exclusive precondition that this relation refers to a randomly selected interval (1, μ_ν] which is necessarily defined by a completed SFN (as we defined it before) with the value μ_ν. But we know that the largest μ_ν of the phase n of Table 1 will generally not be a completed SFN, and therefore we do not know whether the previous relation of "the sufficient condition" is valid for the front N(n, μ_ν), as we defined it before. We also observe that the distribution of e, o of every phase n, let it be symbolized Distr(e, o: n), comes from mixing the corresponding distribution of the previous phase, Distr(e, o: n−1), with the transitional distribution {q_n} × Distr(e, o: n−1), which comes from the numerical values μ_ν of the SFNs of Distr(e, o: n−1) multiplied by q_n. In this mixing the events e, o are reversed in {q_n} × Distr(e, o: n−1) relative to Distr(e, o: n−1), a thing which tends to correct, towards infinity, the deviations of the immediately previous distribution from the ideal H-T form, by mixing it with its immediate "reverse", so that it tends infinitely to H-T, as if it tries to mimic the tosses of an ideal coin. This procedure of infinite tendency to correct is compatible with the previous ascertainment that every random phase n with numerical value μ_ν has a distribution of e, o of H-T type. But we do not know from this whether the same is valid also for (1, μ_ν] in the front N(n, μ_ν) in which, as we previously defined, the μ_ν would be completed for the first time. So, if we select a random non-completed SFN of a random phase n with numerical value μ_ν, we can ask whether the interval (1, μ_ν] inside the front N(n, μ_ν) has an H-T distribution up to μ_ν. If we prove this directly and independently of (1), then, as we said before, not only relation (1) will be valid, but additionally, as its consequence, the "impartial logarithmic distribution" of the prime numbers in N will be true as well, a thing that is equivalent to RH. We point out that the "logarithmic distribution" given by the prime number theorem is not equivalent to RH; RH demands in addition the proof of the term "impartial", as we analyzed here.
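A small sketch of our own making the mixing just described concrete: each phase consists of the previous phase together with a copy of it whose values are multiplied by q_n and whose e/o labels (λ = ±1) are reversed, so the Mertens-like sum M_q(n) over a full phase is always 0.

```python
from sympy import prime

def phase_with_signs(n):
    """Pairs (mu_nu, lambda) of phase n: lambda = +1 for e (even multitude of primes), -1 for o."""
    dist = [(1, +1)]                                            # mu_0 = 1, lambda_0 = +1
    for k in range(1, n + 1):
        q_k = prime(k)
        # the transitional distribution {q_k} x Distr(e,o: k-1): values times q_k, labels reversed
        dist = dist + [(mu * q_k, -lam) for mu, lam in dist]
    return dist

for n in range(1, 8):
    M_q = sum(lam for _, lam in phase_with_signs(n))
    print(n, M_q)     # always 0: every full phase is exactly balanced between e and o
```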

However, the torturous question that arises again is: "Are there hidden numerical relations of prime numbers that make the distribution of SFNs of the front N(n, μ_ν) such that the e, o do not have an H-T distribution in (1, μ_ν]?" As much as our intuition wants a negative answer, nothing can guarantee that the special phase of the front N(n, μ_ν) is a random phase, and hence that it does not correlate the e, o between them in their distribution in (1, μ_ν] on the basis of hidden properties of the prime numbers. It may be, for instance, that (1, μ_ν] is not random because it has the property "this is an interval (1, μ_ν] that was just completed with all the SFNs inside it". Our problem at this point becomes too difficult, because (as we said in Chapter 6) in order to answer these dark questions we would need all the useful necessary information which is hidden deep inside the set CI(N, N_SF, G, f_1, f_2).

As we saw in the previous Chapter 5, we bypassed this hurdle by proving the key relation (1) using the new idea of the "potential of events" that we defined. As we already explained in Chapters 2, 4, this potential of events draws all the hidden "information of positions" from a distribution of events e, o, just as it is projected on any position m (of a natural number) on the axis Ax(N). And since we proved in Chapter 5 that the "infinite distribution of e, o appears as of 'H-T type' from any position m = μ", then by definition this distribution is H-T, because this proposition (which we proved for every natural number μ) is the definition of H-T, that is, the definition of relation (1).

We proved that the distribution of primes in N is really impartial.

8. Results

Α) We assumed initially that the infinite distribution of the events e, o on the μ_ν depends on the positions μ_ν of the SFNs and, continuing by using the potential of events, we proved that this does not happen; in other words we showed that this distribution is one of the infinitely many possible ones of H-T type. The case that remains is that this dependence on the positions does not exist; but in that case the distribution is by definition of H-T type. Therefore the distribution of e, o on the SFNs is in any case of H-T type. This last conclusion, as is known, means that the Riemann Hypothesis is true, that is to say that RH passes from a hypothesis to a theorem [1]. Moreover, this conclusion establishes, as we proved above in Chapter 7, that the distribution of the prime numbers on the axis N is an "impartial logarithmic dilution" [1] [9], a thing that secures the solution of the "twins' problem" of the articles [6] [7], which was initially based there on another concept: [7], PDF Chapter 3, pages 548, 562-568, 578, 579 etc.

Β) In Chapter 3 we also gave applications of probabilities with the potential of events, of general interest (such as in financial and technical problems). In such applications, which have a finite number of events, experimental checks of verification can be performed. In Chapters 2, 4 we proved for our problem the useful proposition: "this method of the 'Potential of Events' constitutes the definition of the probabilities of a set of competing events at any position m of the distribution, in such a way that this definition computationally transports the information from the whole distribution of every event onto m".

9. Discussion

Quantum computers are perhaps today the best proof that an event can co-exist, before its realization (quantum measurement), in a hyperspace, the Multiverse, along with all its other competing events in the vector ψ [10]. In this sense the Multiverse is defined in the future of the observer. On the other hand, even though we do not yet possess experimental proof, it is difficult to assume that if another competing event had occurred, instead of the one that has already occurred, the known material universe would have "collapsed" or disappeared.

It looks as if the Multiverse exists also in the past of the observer, as a mental pool of information. How large, though, is "large"? Gödel's theorem makes it very plausible that all this information is infinite. It is an important question that we mentioned at the end of the Introduction.

But in this way we have already defined the Multiverse as the set of all information that exists, let us say, as a kind of perception of felt but also mental entities: e.g. possible histories of connected events, possible physical laws, independent mathematical theorems, ideas, etc.

In the study of this article (as we said in the Introduction) we showed, among other things, that the total distribution of e, o over the μ_ν will be H-T with infinite information up to infinity. But this also means that any other finite distribution of H-T type will exist inside this infinite H-T distribution, a thing that can be proven by using the basic law of probabilities. That is to say, this distribution encloses infinite information (in bits) and includes any finite code translated into the binary system. Therefore inside this infinite H-T distribution is coded infinite information from an ideal world that can compose infinite forms, one of which is also our observed universe of matter and energy. The proofs in this paper show that this is the case. We also do not know whether the physical laws of the universe allow an ideal coin to produce the above infinite H-T distribution after infinitely many theoretical tosses; because if the information of the universe is finite, then, based on this finite information, this infinite distribution could repeat its parts finitely, so that it would finally not include infinite information in these infinite tosses. But the proposition that we have shown, that "the distribution of the events e, o on Ax(N) is infinite and also H-T", certainly leads to infinite information of the distribution, because it includes the set of all the infinite ideal H-T results of tosses of an ideal coin. On the other hand, the natural numbers seem to be a map of objective reality in the human mind, a paradoxical mirror in which the Multiverse is mirrored, and not just a human conception independent of the objective reality of deep nature. Therefore we have valid indications that the infinite H-T distribution reflects the Multiverse, an infinite land inhabited by mental entities of infinitely many independent logical propositions or theorems [11], with infinite information (in bits), a tremendous memory of mental numbers that encloses the infinite relations of their differentiation and of their coexistence, Pythagoras' dream.

Appendix

For a complete picture of the above study we will give a brief description [1] of the proof that the relation (1) mentioned in the Introduction implies, as we said, the Riemann Hypothesis (RH). On this occasion we will abstractly correlate the function ζ with the probability of finding its zeros in every region of the set N. The approximate relation of 6.7 for the distribution of prime numbers in the intervals δ_ν can be assumed to be corrected by the introduction of a trigonometric factor of their statistical variance inside δ_ν, which equivalently will be written in complex form k_n = x_n + i y_n. Putting also z_n = A_n − i B_n = F(k_n), with F a function of the complex number k_n, the relation of this corrected probability in δ_n will be written equivalently:

p(n) = ∏_{ν=1}^{n} [1 − k_n/q_ν] = ∏_{ν=1}^{n} [1 − 1/(q_ν)^{z_n}] = ∏_{ν=1}^{n} [1 − 1/(q_ν)^{(A_n − i B_n)}]

And symbolizing s = z_∞, a = A_∞, b = B_∞, we get in the limit n → ∞:

p(∞) = ∏_{ν=1}^{∞} [1 − e^{i b ln(q_ν)} / (q_ν)^a] = 1/ζ(s) = ∑_{ν=1}^{∞} e^{i b ln(ν)} μ(ν) / ν^a   (66)

where μ(ν) is the well-known Möbius function [12].
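The identity 1/ζ(s) = ∑ μ(ν)/ν^s in (66) converges absolutely for Re(s) > 1 and can be checked numerically there. The sketch below (our own, with a simple sieve for μ and illustrative parameters) is only a finite-range illustration; on the critical strip the series no longer converges absolutely, so this check is limited to Re(s) > 1.

```python
def mobius_sieve(N):
    """mu(1..N): 0 if n has a squared prime factor, otherwise (-1)^(number of prime factors)."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(p, N + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 100_000
s = 2.0 + 3.0j                    # any point with Re(s) > 1 works for this finite check
mu = mobius_sieve(N)

dirichlet = sum(mu[n] / n ** s for n in range(1, N + 1))   # partial sum of (66)
zeta = sum(1 / n ** s for n in range(1, N + 1))            # partial sum of zeta(s)

print(dirichlet)
print(1 / zeta)                   # the two complex numbers agree to several decimal places
```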

Let us also recall a theorem referring to the pair of the integral of a decreasing function f(x) of positive values and of the corresponding series f(ν) of positive terms [13]:

∫_{x=1}^{∞} f(x) dx = ∞  is equivalent to  ∑_{ν=1}^{∞} f(ν) = ∞   (67)

And the well known relation for complex numbers:

|∑_{ν=1}^{N} z_ν| ≤ ∑_{ν=1}^{N} |z_ν|   (68)

But for the Mertens function it is known that:

M(N) = ∑_{ν=1}^{N} μ(ν) = O(N^{1/2})

{(1): Type[Distr(G/N_SF, f)] = H-T} ⟹ RH   (69)

The relation M(N) = O(N^{1/2}) [1] will be valid when there exists N_0 ∈ R such that:

|∑_{ν=1}^{N} μ(ν)| < √N,  ∀ N > N_0   (70)
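Purely as a finite-range illustration of the quantities appearing in (70) (a sketch of ours; a computation over a finite range of course proves nothing about the behaviour as N → ∞), one can tabulate M(N) against √N:

```python
from math import isqrt
from sympy import factorint

def mobius(n):
    """Moebius function via factorization: 0 if n is not square-free, else (-1)^(number of prime factors)."""
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

def mertens_table(limit, step):
    """Running values of M(N) = sum_{nu <= N} mu(nu), reported every 'step' integers."""
    M, rows = 0, []
    for n in range(1, limit + 1):
        M += mobius(n)
        if n % step == 0:
            rows.append((n, M, isqrt(n)))
    return rows

for N, M, root in mertens_table(10_000, 1_000):
    print(f"N = {N:6d}   M(N) = {M:5d}   floor(sqrt(N)) = {root}")
# In this small range |M(N)| stays far below sqrt(N); what (70) asserts is the behaviour for all large N.
```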

Symbolizing the average of μ(x) by ⟨μ(x)⟩, appropriate for the continuous spectrum of natural numbers that represents a very large multitude of natural numbers, we will have for the elementary step of the transformation:

μ(ν) → ⟨μ(x)⟩ = d[M(x)],

and thus (70) will be transformed as:

There exists x_0 ∈ R such that:

|d[M(x)]| < d[√x] = dx / (2x^{1/2}),  ∀ x > x_0 > 1   (71)

On the basis of the above, the condition ζ(s) ≠ 0 in (66) will be equivalent to:

1/ζ(s) ≠ ∞   (72)

The symbol "≠ ∞" here states simply that the modulus of the function converges (remains finite).

Now, for checking the zeros of the function ζ, from (66) and using the theorem (67) for a continuous spectrum of numbers, we conclude that (72) has as a sufficient condition the composite proposition (73) below.

For the composition of the proposition (73) below we put Δ[M(x)] = x/j_0, j_0 ∈ N, x_{j+1} = x_j + Δ[M(x)], x_0 = 1, j = 1, 2, 3, …, and in that way we transform the first integral into a series, so that for the inequalities we can later use relation (68) together with the relation |e^{i b ln(x)}| = 1.

Then we transform the series back into an integral, so that we can use (71). This composite proposition (73), which as we said is a sufficient condition for (72) [i.e. (73) ⟹ (72)], is the following:

"There exists x_0 ∈ R such that, for every x > x_0 > 1:

|∫_{1}^{∞} (e^{i b ln(x)} / x^a) d[M(x)]| = |lim_{x→∞} (lim_{j_0→∞} ∑_{x_j=1}^{x} (e^{i b ln(x_j)} / x_j^a) Δ[M(x_j)])| ≤ lim_{x→∞} (lim_{j_0→∞} ∑_{x_j=1}^{x} |(e^{i b ln(x_j)} / x_j^a) Δ[M(x_j)]|) = lim_{x→∞} (lim_{j_0→∞} ∑_{x_j=1}^{x} |(1/x_j^a) Δ[M(x_j)]|) = lim_{x→∞} ∫_{1}^{x} |d[M(x)]| / x^a < |lim_{x→∞} ∫_{1}^{x} dx / (2 x^{1/2} x^a)| = |lim_{x→∞} ∫_{1}^{x} dx / (2 x^{a + 1/2})|   (73)

Calculating the last integral of (73), it follows that (73), and therefore also (72), has as a sufficient condition the relation:

lim_{x→∞} [x^{(1/2) − a} / |(1/2) − a|] ≠ ∞   (74)
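For completeness we note (this is our own filling-in of the step between (73) and (74)) that the last integral of (73) evaluates, for a ≠ 1/2, to

∫_{1}^{x} dt / (2 t^{a + 1/2}) = [t^{1/2 − a} / (2(1/2 − a))]_{1}^{x} = (x^{1/2 − a} − 1) / (2(1/2 − a)),

so its boundedness as x → ∞ reduces exactly to the finiteness of the quantity appearing in (74).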

Because x > 1, the relation (74) obviously holds for (1/2) − a < 0. Therefore there are no non-trivial zeros of the function ζ with real part greater than 1/2. However, due to the known functional equation

ζ(1 − s) = (s − 1)! ζ(s) 2^{1−s} π^{−s} sin[(1 − s)π/2],

we easily conclude that if the function ζ of Riemann does not have as a zero the value s_1 = a + i b, then ζ will not have as a (non-trivial) zero the other complex value s_2 = 1 − s_1 = (1 − a) − i b = a_2 − i b either. We observe that, because ζ has no zero s_1 with a > 1/2, ζ cannot have as a zero any s_2 with a_2 = 1 − a < 1/2. Therefore there exist no non-trivial zeros of the function ζ with real part smaller than, or greater than, 1/2. On the other hand, however, we know that there are infinitely many non-trivial zeros of the function ζ of Riemann. Therefore all the non-trivial zeros of the function ζ necessarily have real part equal to 1/2, which is the assertion of RH. We conclude that relation (1), which we have proven and which expresses the H-T distribution of the events e, o on N_SF, implies that ultimately RH is true.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Derbyshire, J. (2003) Prime Obsession. Plume, New York, 448 p.
[2] Wiener, N. (1948) Cybernetics. MIT Press, Cambridge, Massachusetts 212 p.
[3] Spiegel, M.R. (1975) Probability and Statistics. McGraw Hill, New York, 372 p.
[4] Schutz, B. (2009) A First Course in General Relativity. Cambridge University Press, Cambridge, England.
[5] Einstein, A. (1922) Four Lectures on the Theory of Relativity. Princeton University, Vieweg, Braunschweig.
[6] Papadopoulos, P. (2019) A Solution to the Famous “Twin’s Problem”. Advances in Pure Mathematics, 9, 794-826.
https://doi.org/10.4236/apm.2019.99038
[7] Papadopoulos, P. (2020) Clarifications of the Published Article “A Solution to the Famous Twin’s Problem” in the APM of SCIRP at 24 September of 2019. Advances in Pure Mathematics, 10, 547-587.
https://doi.org/10.4236/apm.2020.109035
[8] Papadopoulos, P. (2015) The Twin of Infinity and the Riemann Conjecture. Ziti Edition, Thessaloniki, 661 p.
[9] du Sautoy, M. (2003) The Music of the Primes. HarperCollins Publishers, New York, 368 p.
[10] Trachanas, S. (2007) Quantum Mechanics II. University Editions of Crete, Heraklion, Crete, 727 p.
[11] Dunham, W. (1990) Journey through Genius. The General Theorems of Mathematics. John Wiley and Sons, Inc., New York, 287 p.
[12] Pickover, C. (2006) The Mobius Strip. Thunder’s Mouth Press, New York, 244 p.
[13] Themistocles, R. (2018) Mathematics I. Tsotras Editions, Athens, 936 p.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.