Zero Divided by Zero Equals One
Ilija Barukčić
Horandstrasse, Jever, Germany.

Abstract

Objective: Accumulating evidence indicates that zero divided by zero is equal to one. Still it is not clear what number theory or algebra is saying about this. Methods: To explore the relationship between the problem of the division of zero by zero and number theory, a systematic approach is used while analyzing the relationship between number theory and independence. Result: The theorems developed in this publication support the thesis that zero divided by zero is equal to one. Furthermore, it was possible to define the law of independence under conditions of number theory and algebra. Conclusion: The findings of this study suggest that zero divided by zero equals one.

Citation:

Barukčić, I. (2018) Zero Divided by Zero Equals One. Journal of Applied Mathematics and Physics, 6, 836-853. doi: 10.4236/jamp.2018.64072.

1. Introduction

The question of the nature of independence, and of the plausibility of scientific methods and results with respect to theoretical or experimental investigations of objective reality, is often so controversial that no brief account of it will satisfy all those with a stake in the debates concerning the nature of truth and its role in accounts of classical logic and mathematics. Whatever the relationship between objective reality and a theory of objective reality may be, the scientific conclusions of investigations should at least be truly independent of anyone’s beliefs, anyone’s ideological position or mind. Yet scientific conclusions often rest on mathematics, which is itself not free of assumptions.

There are several distinct ways in which the long-standing debate about the relationship between mathematics and objective reality can be analyzed. Mathematics as such may enjoy a special esteem within the scientific community, standing more or less above all other sciences, due to the common belief that the laws of mathematics are absolutely indisputable and certain. Viewed slightly differently, however, mathematics is first and foremost a product of human thought and mere human imagination, and as such it belongs to a world of human thought and mere human imagination. The human thought and imagination which produce the laws of mathematics are able to produce erroneous or incorrect results, with the principal consequence that even mathematics, or mathematical results regarded as valid for thousands of years, is in constant danger of being overthrown by newly discovered facts. In addition, acquiring general scientific knowledge by deduction from basic principles does not guarantee correct results if the basic principles are not compatible with objective reality or with classical logic as such. In other words, if mathematics is to be regarded as a science, and not as a religion formulated by numbers, definitions, equations, functions et cetera, the same mathematics must be open to potential revision. In general, and from a theoretical point of view, a mathematics or a mathematical theorem characterized by denial(ism) and resistance to facts, one which does not offer itself to potential refutation, would not allow us to distinguish scientific knowledge from its look-alikes. From a practical point of view, it is not enough to define (mathematically) how objective reality has to be; mathematics itself must discover how nature really is. Due to the high status of science in present-day society, even mathematics itself must pass the test of reality and does not stand above and outside of reality. The principles of mathematics should be logically compatible and should receive experimental confirmation as strong as possible. In this context, objective reality, or practical and theoretical experiments as such, is a demarcation line between science and fantastical pseudo-science. The conflict between science and pseudoscience is best understood with respect to the notion of independence. What is objective reality? What are human perception, human mind and human consciousness? What is independence?

The concept of independence is of fundamental importance in philosophy, in mathematics and in science as such. In fact, it is insightful to recall Kolmogorov’s theoretical approaches to the concept of independence.

“In consequence, one of the most important problems in the philosophy of the natural sciences is in addition to the well-known one regarding the essence of the concept of probability itself to make precise the premises which would make it possible to regard any given real events as independent.” [1]

According to Kolmogorov, the concept of independence is still of strategic and central importance in science as such.

“The concept of mutual independence of two or more experiments holds, in a certain sense, a central position in the theory of probability.” [2]

Historically, one of the first documented mathematical approaches to the concept of independence was provided to us by De Moivre.

“Two Events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other.” [3]

In defining the independence of events, De Moivre refers one event to another event. These general considerations of De Moivre about the nature of independence [4] are derived from the position of the ancient Greeks, which demanded that the motion of a body be described by referring to another body. For his part, Einstein’s position concerning the concept of independence is very clear.

“Ohne die Annahme einer … Unabhängigkeit der … Dinge voneinander … wäre physikalisches Denken … nicht möglich.” [5]

Einstein’s position translated into English:

“Without the assumption of ... independence of ... things from each other ... physical thinking ... wouldn’t be possible.” [Author]

Einstein is elaborating on the principle of independence as follows:

“Für die relative Unabhängigkeit räumlich distanter Dinge (A und B) ist die Idee characteristisch: äussere Beeinflussung von A hat keinen unmittelbaren Einfluss auf B; dies ist als‚ Prinzip der Nahewirkung’ bekannt, das nur in der Feld-Theorie konsequent angewendet ist. Völlige Aufhebung dieses Grundsatzes würde die Idee von der Existenz (quasi-) abgeschlossener Systeme und damit die Aufstellung empirisch prüfbarer Gesetze in dem uns geläufigen Sinne unmöglich machen.” [5]

Einstein’s position in English:

“For the relative independence of spatially distant things (A and B) the following idea is characteristic: an external influence on A has no direct influence on B; this is known as the ‘principle of locality’, which is applied consistently only in field theory. A complete abolition of this principle would make the idea of the existence of (quasi-)closed systems, and thereby the establishment of empirically verifiable laws in the sense familiar to us, impossible.” [Author]

A further position of Einstein’s is the following:

“But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of the system S2 is independent of what is done with the system S1, which is spatially separated from the former … the real situation of S2 must be independent of what happens to S1 … One can escape from this conclusion only by either assuming that the measurement of S1 ((telepathically)) changes the real situation of S2 or by denying independent real situations as such to things which are spatially separated from each other. Both alternatives appear to me entirely unacceptable.” [6]

However, over recent years, attempts to meet the difficulties associated with the concept of independence (i.e. non-locality in quantum mechanics) in quite different ways have met with little success. One way to meet at least some of these challenges is by begging number theory and algebra for some wisdom in order to revise our understanding of independence as such. In particular, one of the central concepts in number theory is divisibility; yet, in an impressive act of enlightened “do nothing”, number theory and algebra have bypassed severe historical mathematical and scientific problems altogether and are still quite silent about a generally valid concept of independence. This analysis of independence is an attempt to articulate, from the standpoint of number theory and algebra, what exactly the interior logic of independence consists in, and it aims to give a generally valid and systematic account of independence.

2. Material and Methods

If not otherwise stated, the standard notation for the various sets of numbers, mathematical operations et cetera is used. Z is the set (or sample space) of integers = {..., −2, −1, 0, 1, 2, ...}, Q is the set (or sample space) of rational numbers, R is the set (or sample space) of real numbers, C is the set (or sample space) of complex numbers et cetera. We write log_b(x) for the logarithm of x to the base b. We write b^x for the usual power function with base b and exponent x. We write p(X, S) or f(X, S) to indicate that p or f is a function (also called a map) from a set S to a set X. This is of value especially under conditions where X is a subset of the set S, while the set S can denote something like the sample space.

2.1. Definitions

Definition 0. (Number +0).

Let c denote the speed of light in vacuum, let ε0 denote the electric constant and let μ0 denote the magnetic constant. Let i denote the imaginary unit. Let “+” denote addition. Let “−” denote subtraction, an arithmetic operation which represents a (natural) process of removing a (mathematical) object, or a part of a (mathematical) object, from a collection of objects or from an object itself. Let “/” denote division. Let “×” or “*” denote multiplication. The number +0 is defined as the expression

$+0 \equiv \left( c \times \sqrt[2]{\varepsilon_0 \times \mu_0} \right) - \left( c \times \sqrt[2]{\varepsilon_0 \times \mu_0} \right) \equiv +1 - 1 \equiv -i^2 + i^2$ (1)

Until otherwise cleared, for N an element of the set of all numbers, it is [7]

${}_{N}0 \equiv 0_{N} \equiv (0 + 0 + \dots + 0) \equiv (0 \times 1 + 0 \times 1 + \dots + 0 \times 1) \equiv ((+1 + 1 + \dots + 1) \times 0) \equiv (N \times 0)$ (2)

or

$(0^{N}) \equiv (0 \times 0 \times 0 \times \dots) \equiv ((1 \times 0) \times (1 \times 0) \times (1 \times 0) \times \dots) \equiv ((1 \times 1 \times 1 \times \dots) \times 0^{N}) \equiv ((1^{N}) \times 0^{N})$ (3)

and

$\log_{0}(0 \times 0 \times 0 \times \dots) \equiv \log_{0}(0^{N})$ (4)

Scholium.

Historically, it was the Chinese mathematician Qin Jiu-shao (also known as Ch’in Chiu-Shao) who introduced the symbol 0 for zero in the year 1247 in his mathematical text “Mathematical treatise in nine sections” [8] . Justifying such a methodological definition of the number zero requires answering at least one pragmatic question. Why does it matter whether the foundation of number theory is grounded on physical constants, on nature and objective reality itself? There are a variety of important issues surrounding such an approach to the definition of the number zero. One must bear in mind that number theory does not deal only with particular cases, but is concerned with one of the most generally valid forms of reasoning or inference too. In this sense, such a mind-independent definition of the number zero applies no matter what one is thinking or reasoning about and, in particular, has the potential to serve as the foundation of (classical) logic, with the consequence that classical logic can serve as the foundation of number theory too and that both can be unified. Clearly, classical physics describes light by Maxwell’s equations as a type of electromagnetic wave and demands that the speed c with which such electromagnetic waves (i.e. light) propagate through the vacuum is determined by the electric constant ε0 and the magnetic constant μ0, and is a mind-independent process. With these clarifications in place, we are now ready to ask in general: what does remain if all which is constituting myself is taken away from myself? In nature, the process of annihilation is related to the operation of subtraction. Thus far, if an antiproton collides with, or is subtracted from, a proton, both will annihilate.
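The numerical content of this definition can be illustrated directly. The following minimal sketch, a mere illustration and not part of the definition itself, evaluates the expression of Equation (1) with CODATA values; the use of Python, of the scipy.constants module and of floating-point arithmetic is an assumption made here for illustration only.

# Minimal numerical sketch of Equation (1), assuming CODATA values from scipy.constants.
from math import sqrt
from scipy.constants import c, epsilon_0, mu_0

one = c * sqrt(epsilon_0 * mu_0)   # Maxwell: c = 1/sqrt(eps0*mu0), so this is ~ +1
zero = one - one                   # Equation (1): the difference defines +0
i = 1j                             # the imaginary unit

print(one)           # ~ 1.0 (up to the rounding of the tabulated constants)
print(zero)          # 0.0
print(-i**2 + i**2)  # 0j, the right-hand construction of +0 in Equation (1)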

Definition 1. (Number +1).

Let c denote the speed of light in vacuum, let ε0 denote the electric constant and let μ0 denote the magnetic constant. Let i denote the imaginary unit. The number +1 is defined as the expression

$+1 \equiv -i^2 \equiv c \times \sqrt[2]{\varepsilon_0 \times \mu_0}$ (5)

In point of fact, until otherwise cleared, it is

$+1 \equiv \left( \dfrac{+1}{+1} \right)$ (6)

or

$(1^{N}) \equiv (1 \times 1 \times 1 \times \dots)$ (7)

and

$\log_{1}(1 \times 1 \times 1 \times \dots) \equiv \log_{1}(1^{N})$ (8)

Scholium.

Number systems related to binary numbers appeared in multiple cultures (Egypt, India (Pingala) and China (I Ching, Shao Yong)). Western predecessors like Thomas Harriot, Bishop Juan Caramuel y Lobkowitz (1606-1682), Blaise Pascal and other authors provided some important groundwork for Leibniz’s binary number system too. Finally, a self-consistent and modern binary number system, representing all numeric values while typically using 0 (zero) and 1 (one), was devised by Leibniz [9] himself in 1703. In the following, George Boole (1815-1864), an English mathematician, developed within a short time an impressive algebra of logic [10] and revolutionized traditional (Aristotelian) logic by applying methods from algebra to logic. We briefly indicate other features of a definition of the numbers 0 and 1 based on natural constants. After all, classical or bivalent logic, as one of our main tools in the formal study of reasoning, prefers to be concerned with absolutely certain truths and inferences based on the numbers +0 and +1, or on the categories either true or false. A definition of the numbers +0 and +1 as provided before determines nature, or objective reality as such, as the foundation of logic, of number theory and of scientific knowledge in general.

Definition 2. (Infinity).

Let +∞ denote positive infinity. Let −∞ denote negative infinity. In order to avoid certain major errors of definition, let us just talk about infinity. In general, it is

$0 \equiv (+\infty - \infty)$ (9)

Thus far, until cleared otherwise, it is

${}_{N}\infty \equiv (\infty + \infty + \dots + \infty) \equiv ((+1 + 1 + 1 + \dots) \times \infty) \equiv (N \times \infty) \equiv \infty_{N}$ (10)

or

$(\infty^{N}) \equiv (\infty \times \infty \times \infty \times \dots)$ (11)

and

$\log_{\infty}(\infty \times \infty \times \infty \times \dots) \equiv \log_{\infty}(\infty^{N})$ (12)

Scholium.

What is zero, what is infinity? Is zero something relative or is zero something absolute? Is infinity itself something relative or is infinity something absolute? What are the consequences if there is something infinite within a finite, and vice versa? Can there exist something finite within an infinite? What is the relationship, or the interior logic, between a finite and an infinite? According to the definition above, within zero (the natural state of symmetry, “the black hole of mathematics” [11] ) there is even a lot of space for infinity too. Thus far, can we escape from zero? Under which conditions can we escape from zero? Clearly, zero is something relative too. Firstly, it is +1 − 1 = 0. Secondly, it is +10 − 10 = 0. But the number 1 is different from the number 10 and vice versa. Thus far, even if the zero as related to 1 is different from the 0 as related to 10, it is equally the same zero. In other words, it is 110 − 109 = +1 and 3 − 2 = +1. The number one is determined by different constituents but is equally identical with itself.

In particular, Wallis himself claimed in 1656 “1/∞ ... habenda erit pro nihilo” [12] . Isaac Newton supported the position of Wallis in his book Opuscula. Due to Isaac Newton, and to Euler too, it is “1/0 = Infinitae” [13] . Unlike most of his contemporaries, Euler provided us in his ground-breaking work with both an extraordinary amount of mathematical wisdom and, equally, a fascinating new look into indeterminate forms, with some deep and far-reaching theoretical consequences. We will not delve deeper into Euler’s position on indeterminate forms in what follows. Still, a rough description of Euler’s very impressive historic position is of further use. Euler’s original position in German:

“Dieser Begriff von dem Unendlichen ist desto sorgfältiger zu bemerken, weil derselbe aus den ersten Gründen unserer Erkenntniß ist hergeleitet worden, und in dem folgenden von der größten Wichtigkeit seyn wird. Es lassen sich schon hier daraus schöne Folgen ziehen, welche unsere Aufmerksamkeit verdienen, da dieser Bruch 1/∞ den Quotus anzeigt, wann man das Dividend 1 durch den Divisor ∞ dividiret. Nun wissen wir schon, daß, wann man das Dividend 1 durch den Quotus, welcher ist 1/∞, oder 0 wie wir gesehen haben, dividiret, alsdann der Divisor nämlich ∞ heraus komme; daher erhalten wir einen neuen Begriff von dem Unendlichen, nämlich daß dasselbe herauskomme wann man 1 durch 0 dividiret; folglich kann man mit Grund sagen, daß 1 durch 0 dividiret eine unendlich große Zahl oder ∞ anzeige. … Hier ist nöthig noch einen ziemlich gemeinen Irrthum aus dem Wege zu räumen, indem viele behaupten, ein unendlich großes könne weiter nicht vermehret werden. Dieses aber kann mit obigen richtigen Gründen nicht bestehen. Dann da 1/0 eine unendlich große Zahl andeutet, und 2/0 ohnstreitig zweymal so groß ist; so ist klar, daß auch so gar eine unendlich große Zahl noch 2 mal größer werden könne.” [14]

Euler’s position stated in German can be translated [15] into English as follows:

“It is the more necessary to pay attention to this understanding of infinity, as it is derived from the first elements of our knowledge, and as it will be of the greatest importance in the following part of this treatise. We may here deduce from it a few consequences that are extremely nice and worthy of attention. The fraction 1/∞ represents the quotient resulting from the division of the dividend 1 by the divisor ∞. Now, we know, that if we divide the dividend 1 by the quotient 1/∞, which is equal to 0 [i.e. zero, author], we obtain again the divisor ∞: hence we acquire a new understanding of infinity; and learn that it arises from the division of 1 by 0; so that we are thence authorized in saying, that 1 divided by 0 expresses a number infinitely great, or ∞. ... It may be necessary also, in this place, to correct the mistake of those who assert, that a number infinitely great is not susceptible of increase. This position is inconsistent with the principles which we have just laid down; for 1/0 signifying a number infinitely great, and 2/0 being incontestably the double of 1/0, it is evident that a number, though infinitely great, may still become twice, thrice, or any number of times greater.”

Definition 3. (Bernoulli Trial).

Let t denote a Bernoulli trial such that

$t = +1, \dots, +N$ (13)

2.2. Methods

In the spring of 1953, a graduate student of history, J. S. Switzer, wrote Einstein a letter and requested Einstein’s opinion on non-science and science. Einstein replied to Switzer in a letter of 23 April 1953 as follows:

“Development of Western science is based on two great achievements: the invention of the formal logical system (in Euclidean geometry) by the Greek philosophers, and the discovery of the possibility to find out causal relationships by systematic experiment (during the Renaissance). In my opinion, one has not to be astonished that the Chinese sages have not made these steps. The astonishing thing is that these discoveries were made at all.” [16]

Classical logic and systematic experiments can help us to demarcate science from non-science, not only in physics but in mathematics as such too.

2.2.1. Thought Experiments

Thought experiments [17] play a central role both in the natural sciences and in philosophy and are valid devices of scientific [18] investigation. One of the most common features of thought experiments is that they can be taken to provide evidence in favor of or against a theorem, a theory et cetera. Although there have been attempts to define a “thought experiment”, there is still no standard definition, and the term is only loosely characterized. General acceptance of the importance of thought experiments can be found in almost all disciplines of scientific inquiry; they go back at least two and a half millennia and have been practiced since the time of the Pre-Socratics [19] . A surprisingly large number of impressive examples of thought experiments can be found in physics among some of its most brilliant practitioners like Galileo, Descartes, Newton and Leibniz. Many famous physical publications have been characterized as thought experiments, including Maxwell’s demon, Einstein’s elevator (and train, and stationary light wave), Heisenberg’s microscope, Schrödinger’s cat et cetera. Thought experiments are conducted for diverse reasons in a variety of areas and are equally common in pure, applied and experimental mathematics.

2.2.2. Counter Examples

A system of axioms or basic laws, together with the conclusions derived in a purely logically deductive manner from such axioms, forms what is called a theory. The relationship between an axiom and a conclusion derived in a technically correct way from such an axiom determines the validity of such a conclusion. In point of fact, conclusions derived from the basic laws can then be compared with experience, which may provide support for the justification of the assumed basic law. In particular, it is impossible for an axiom to be true and a conclusion derived in a technically correct way from the same axiom to be false. A conclusion derived in a technically correct way must follow with strict necessity from an axiom and must be free of contradictions. In point of fact, a logical contradiction is not allowed in this context. It is necessary to point out that one single real or theoretical experiment can provide a logical contradiction and prove a theory wrong. In Einstein’s words:

“Eine Theorie kann also wohl als unrichtig erkannt werden, wenn in ihren Deduktionen ein logischer Fehler ist, oder als unzutreffend, wenn eine Tatsache mit einer ihrer Folgerungen nicht im Einklang ist. Niemals aber kann die W a h r h e i t einer Theorie erwiesen werden. Denn niemals weiß man, daß auch in Zukunft keine Erfahrung bekannt werden wird, die ihren Folgerungen widerspricht;” [20]

Einstein’s position translated into English:

“Thus, a theory can very well be found to be incorrect if there is a logical error in its deduction, or found to be off the mark if a fact is not in consonance with one of its conclusions. But the truth of a theory can never be proven. For one never knows if future experience will contradict its conclusion;”

In other words, according to Einstein, no amount of experimentation can ever prove a theory right, while a single experiment or a single counterexample can prove a theory wrong.

A counterexample [21] is a simple and valid proof technique which philosophers and mathematicians use extensively to disprove a certain philosophical or mathematical [22] position or theorem as wrong and as not generally valid, by showing that it does not apply in a certain single case. By using counterexamples, researchers may avoid going down blind alleys and stop losing time, money and effort.

2.3. Axioms

There have been many attempts to define the foundations of logic and science as such in a generally accepted manner. However, despite an extensive discussion in the literature, it is far from clear whether truth as such is a definable notion. In this context, if different persons with different ideologies and beliefs are to arrive at the same logical conclusions with regard to a difficult topic investigated, they will have to agree at least upon some few fundamental laws (axioms) as well as upon the methods by which other laws can be deduced therefrom. As is generally known, the axioms and rules of a publication have to be chosen carefully, especially in order to avoid paradoxes and inconsistency. At this point, clarifying some fundamental axioms or starting points of an investigation is therefore an essential part of every scientific method and of any scientific progress. Thus far, in our everyday hunt for progress in science it is helpful if any attempt to build a scientific picture of complex phenomena out of some relatively simple propositions is based on principles which the scientific community can accept without any hesitation or critique. Clearly, such axioms or principles are rare. Thus far, for the sake of definiteness, and in order to avoid paradoxes, the theorems of this publication are based on the following axiom.

Axiom I (Lex Identitatis. Principium Identitatis. Identity Law)

In general, it is

$+1 = +1$ (14)

Lex identitatis, the identity law or principium identitatis, is expressed mathematically in the very simple form +1 = +1. In the following it is useful to point to other attempts at mathematizing the identity law. The identity law was used in Plato’s dialogue Theaetetus, in Aristotle’s Metaphysics (Book IV, Part 4) and by many other authors too. In particular, multiplying the axiom above by A we obtain A = A or “A est A”. Multiplying the axiom above by B, it is B = B or “B est B”. Especially Gottfried Wilhelm Leibniz (1646-1716) expressed the law of identity as: everything is what it is. According to Leibniz,

“Chaque chose est ce qu’elle est. Et dans autant d’exemples qu’on voudra A est A, B est B.” [23] .

Several mathematical formulas [24] - [38] are derived from the identity law, while a more detailed history of the identity law [30] , [34] can be found in the secondary literature. Axiom I (principium identitatis) is the most general, the simplest and the most far-reaching axiom we have today.

3. Results

3.1. Theorem (Number Theory and Independence I)

Let +1 denote the number 1 at a certain Bernoulli trial t. Let +0 denote the number +0 at a certain Bernoulli trial t.

Claim.

In general, it is

$\dfrac{+0}{+0} = +1$ (15)

Direct Proof.

Given axiom I (principium identitatis, lex identitatis, the identity law) as generally valid it is

$+1 = +1$ (16)

What makes axiom I a special candidate for a theoretical consideration of a mathematics without any exception is its general validity. In different terms, we may ask, on behalf of (classical) logic, under which conditions we are authorized to treat the number +1 algebraically as being independent of any other number. Moreover, if the number +1 is independent of any other number (including infinity), then the number +1 is independent of any other number (including infinity). Within this framework, and taking axiom I into account, the number +1 stays what it is, the number +1, independently of any relation or mathematical operation to any other number. In this context, there is at least one algebraic operation which assures the identity of something with itself, of the number +1 with its own self. We obtain

$+1 \times (1) = +1$ (17)

The first trial.

In particular, the first trial or run of an experiment provides evidence that the statement above holds for the first time. The value we obtained at the first trial t = +1 may be random. We obtained the value +1 at the first Bernoulli trial t. Thus far, it is

$+1 \times \left( \dfrac{+1_{t=+1}}{+1_{t=+1}} \right) = +1$ (18)

The second trial.

In other words, the theorem is true at the Bernoulli trial t = +1. In the following, we perform a second (real-world or thought) experiment and obtain the value +4. In point of fact, it is again $1 \times (4_{t=2} / 4_{t=2}) = 1$.

The n-th trial

Finally, we decide to increase the number of experiments. To get evidence, we perform a lot of (real-world or thought) experiments and obtain each time different random numbers, like $1 \times (6_{t=3} / 6_{t=3}) = 1, \dots, 1 \times (X_{t=n} / X_{t=n}) = 1$. Clearly, we have proved that the equation above is valid even after t = n runs of an experiment, while every time a random value is obtained. By a straightforward combination of established facts (axiom I), and without making any further assumptions, we have proved that the theorem is true for any given number too. To prove that the theorem above is valid in general, we perform another, last (real-world or thought) experiment.
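The trial-by-trial reasoning just described can be mimicked numerically. The sketch below is only an illustration of the first n trials; the sample size, the fixed seed and the restriction to nonzero integer outcomes are assumptions made for the illustration and are not part of the proof.

# Illustration of the first n trials: for each random nonzero outcome X_t,
# the identity 1 * (X_t / X_t) == 1 holds.
import random

random.seed(1)                    # fixed seed, for reproducibility of the illustration
for t in range(1, 11):            # trials t = 1, ..., n with n = 10
    x = random.randint(1, 1000)   # a random nonzero outcome X_t
    assert 1 * (x / x) == 1.0     # the statement checked at trial t
print("1 * (X_t / X_t) = 1 held at every sampled trial")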

The n + 1 trial

At the last experiment, i.e. at the trial t = n + 1, the value of the outcome we obtained is equal to 0. In other words, it is

$+1 \times \left( \dfrac{+0_{t=n+1}}{+0_{t=n+1}} \right) = +1$ (19)

Thus far, if axiom I is generally valid and is thus the foundation of a mathematics without any exception, the same is valid even if 0 is divided by 0. In this case, a division of 0 by 0 cannot have any influence on the validity of axiom I. The number +1 has to stay what it is, the number +1, and we must accept that

$\left( \dfrac{+0}{+0} \right) = +1$ (20)

Quod Erat Demonstrandum.

Assuming that axiom I is generally valid, we must accept that 0/0 = 1. Though a number of claims are made about the topic of zero divided by zero, according to number theory it is 0/0 = 1.
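It should be stressed that standard machine arithmetic does not adopt this convention: IEEE-754 defines 0.0/0.0 as nan, and Python raises a ZeroDivisionError instead. A program that wants to follow Equation (20) therefore has to state the rule explicitly, as in the following sketch; the helper name and its behaviour outside the 0/0 case are illustrative assumptions, not part of the theorem.

# Sketch of a division that adopts the convention of Theorem 3.1, namely 0/0 = 1.
def divide(a: float, b: float) -> float:
    if a == 0 and b == 0:
        return 1.0        # the convention derived above: +0/+0 = +1
    return a / b          # ordinary division otherwise (x/0 with x != 0 still raises)

print(divide(0.0, 0.0))   # 1.0 under the convention of Equation (20)
print(divide(6.0, 3.0))   # 2.0, unchanged ordinary arithmetic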

3.2. Theorem (Number Theory and Independence II)

Let +1 denote the number 1 at a certain Bernoulli trial t. Let +∞ denote positive infinity at a certain Bernoulli trial t. Let +0 denote the number +0 at a certain Bernoulli trial t.

Claim.

In general, it is

$+\infty \times 0 = +1$ (21)

Direct Proof.

Given axiom I (principium identitatis, lex identitatis, the identity law) as generally valid, valid without any exception, it is

$+1 = +1$ (22)

If the number +1 stays what it is, the number +1, independently of its relation to any other number, there is at least one operation which assures such an identity. We obtain

$+1 \times (1) = +1$ (23)

The base case.

In point of fact, the statement above holds for the first natural number +1 at the first Bernoulli trial t. In general it is

$+1 \times \left( \dfrac{+1_{t=+1}}{+1_{t=+1}} \right) = +1$ (24)

The inductive step.

Again, a lot of (real-world or thought) experiments are performed and the following data are obtained: $1 \times (10_{t=2} / 10_{t=2}) = 1$, $1 \times (1000_{t=3} / 1000_{t=3}) = 1$, ..., $1 \times (X_{t=n} / X_{t=n}) = 1$. In other words, the above equation is valid even after t = n runs of an experiment, every time with a different number. In this context, if axiom I is generally valid, then the same axiom I is valid even for the relationship between infinity and the number +1. In general, we obtain

$+1 \times \left( \dfrac{+\infty}{+\infty} \right) = +1 \times \left( \dfrac{+\infty}{+1} \times \dfrac{+1}{+\infty} \right) = +1 \times \left( +\infty \times \dfrac{+1}{+\infty} \right) = +1$ (25)

Rearranging this equation, we obtain

$+\infty \times \left( \dfrac{+1}{+\infty} \right) = +1$ (26)

Following Wallis [12] , Newton [13] , Euler [14] , Barukčić et al. [35] and others, there are reasons to accept that (1/∞) = 0. In general, until proved otherwise, it is

$+\infty \times 0 = +1$ (27)

Quod Erat Demonstrandum.
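As with Theorem 3.1, this is a stated convention rather than standard floating-point behaviour: under IEEE-754, inf * 0.0 evaluates to nan, whereas 1.0/inf is already 0.0. The following sketch of a multiplication rule follows Equation (27); the names and the scope of the rule are chosen here only for illustration.

# Sketch of a multiplication that adopts the convention of Theorem 3.2, namely inf * 0 = 1.
import math

def multiply(a: float, b: float) -> float:
    if (math.isinf(a) and b == 0) or (a == 0 and math.isinf(b)):
        return 1.0        # the convention derived above: +inf * 0 = +1
    return a * b          # ordinary multiplication in every other case

print(multiply(math.inf, 0.0))  # 1.0 under the convention of Equation (27)
print(1.0 / math.inf)           # 0.0, i.e. (1/inf) = 0 already holds in IEEE-754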

3.3. Theorem (Probability Theory and Independence)

Let p(0At) denote the probability that an event 0At will occur or has occurred at the Bernoulli trial t. Let p(RBt) denote the probability that an event RBt will occur or has occurred at the Bernoulli trial t. Let p(0At ∩ RBt) denote the joint probability of 0At ∩ RBt at a certain Bernoulli trial t.

Claim.

In general, according to probability theory and logic, it is

$\dfrac{+0}{+0} = +1$ (28)

Direct Proof.

Given axiom I (principium identitatis, lex identitatis, the identity law) it is

$+1 = +1$ (29)

Multiplying this equation by p(0At), the probability that an event 0At will occur or has occurred, we obtain

$1 \times p({}_{0}A_{t}) = 1 \times p({}_{0}A_{t})$ (30)

or equally

$p({}_{0}A_{t}) = p({}_{0}A_{t})$ (31)

The probability that an event 0At will occur or has occurred is equal to p(0At). Let us assume that the probability that an event 0At will occur or has occurred at the Bernoulli trial t is independent of any other event, no matter what the probability of the event 0At or of another event RBt may be. Mathematically, there is at least one mathematical operation which assures such an assumption. We obtain

$p({}_{0}A_{t}) \times (1) = p({}_{0}A_{t})$ (32)

Under these conditions the probability of an event 0At will and must stay what it is, i.e. p(0At), and the occurrence of an event 0At is independent of anything else, of any other event RBt which itself occurs with the probability p(RBt). This need not mean that the probability p(0At) as associated with an event 0At is and must be constant. A probability p(0At) as associated with an event 0At stays only what it is; a third party has no influence on the probability p(0At). In other words, if the probability p(0At) as associated with an event 0At is multiplied by +1, the probability p(0At) as associated with an event 0At stays what it is, the probability p(0At). Thus far, an event RBt, with its own probability of occurrence p(RBt), can but need not have any influence on the probability p(0At). Under conditions of independence of the event 0At and the event RBt, the equation before is respected only under circumstances where we accept that (p(RBt)/p(RBt)) = 1. Only under these conditions does an event RBt, with its own probability of occurrence p(RBt), have no influence on the occurrence of the event 0At. The equation before changes to

$p({}_{0}A_{t}) \times \left( \dfrac{p({}_{R}B_{t})}{p({}_{R}B_{t})} \right) = p({}_{0}A_{t})$ (33)

In other words, and as is generally known, especially under conditions of independence and according to probability theory, it is

$\dfrac{p({}_{0}A_{t} \cap {}_{R}B_{t})}{p({}_{R}B_{t})} = \dfrac{p({}_{0}A_{t}) \times p({}_{R}B_{t})}{p({}_{R}B_{t})} = p({}_{0}A_{t})$ (34)

According to probability theory, every single event can possess a probability between 0.0 and 1.0, including 0.0 and including 1.0. In other words, even if the probability of the occurrence of an event RBt is equal to p(RBt) = 0, the probability p(0At) as associated with an event 0At is independent of this fact; the same probability stays what it is, p(0At), and should not change at all, since the same is independent of p(RBt). The equation before is and must be valid for any probability value, even in the case where p(RBt) = 0, since the same is derived from axiom I. Thus far, let p(RBt) = 0; we obtain

$p({}_{0}A_{t}) \times \dfrac{0}{0} = p({}_{0}A_{t})$ (35)

Whatever the result of the operation (0/0) may be, under conditions of independence the same operation must ensure that p(0At) = p(0At). Thus far, if an event 0At is independent of any other event RBt, then this is the case even under conditions where p(0At) = 1. In other words, even if the probability p(0At) as associated with an event 0At takes the value p(0At) = 1, this has no influence on the independence of the events. Under conditions where p(0At) = 1 we obtain

$1 \times \dfrac{0}{0} = 1$ (36)

Probably the best way of understanding the law of independence of probability theory is to accept as generally valid that

$\dfrac{+0}{+0} = +1$ (37)

Quod Erat Demonstrandum.
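For probabilities strictly between 0 and 1, the independence identity of Equation (34) can be checked by simulation. The following Monte-Carlo sketch is an illustration only; the sample size, the seed and the two probability values are assumptions not taken from the text. It estimates p(0At ∩ RBt)/p(RBt) for two independently drawn events and compares the result with p(0At); the case p(RBt) = 0 is exactly the 0/0 situation addressed by the theorem above.

# Monte-Carlo sketch of Equation (34): for independent events A and B,
# the ratio p(A and B)/p(B) is approximately p(A).
import random

random.seed(7)
p_a, p_b, n = 0.3, 0.6, 200_000                # illustrative choices
a = [random.random() < p_a for _ in range(n)]  # event A at each trial t
b = [random.random() < p_b for _ in range(n)]  # event B, drawn independently of A
p_ab = sum(x and y for x, y in zip(a, b)) / n  # estimate of p(A and B)
p_b_hat = sum(b) / n                           # estimate of p(B)
print(p_ab / p_b_hat)                          # ~ 0.3, i.e. approximately p(A)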

4. Discussion

Today, the division of zero by zero is commonly not used, which is completely misleading. Does a possible solution of the division of zero by zero exist? Of course, yes [35] . The aforementioned view is associated with the demand for a realistic approach to the solution of the problems associated with indeterminate forms. In this context, it is worth mentioning some points in detail. What is the result of $10^{(0 \times \infty)}$; is it $10^{(0 \times \infty)} = 1$? A superficial and preliminary analysis can lead to the conclusion that $(10^{0})^{\infty} = (1)^{\infty} = 1$. In this context, a more detailed view is necessary. Operations within brackets should be performed before other operations, or the term $10^{(0 \times \infty)}$ should be rearranged in such a way that there is either no infinity or no zero within the term mentioned. In other words, we obtain $10^{(0 \times \infty)} = 10^{(1)} = 10$, because (0 × ∞) = 1. Another way to cope with this equation is to consider that (1/0) = ∞. We substitute infinity within the term $10^{(0 \times \infty)}$ by (1/0) and obtain $10^{(0 \times (1/0))} = 10^{((0/0) \times 1)} = 10$, because (0/0) = 1. Viewed from the standpoint of infinity, we obtain $10^{(0 \times \infty)} = 10^{((1/\infty) \times \infty)}$ or, in other words, $10^{((\infty/\infty) \times 1)} = 10^{((1) \times 1)} = 10^{1} = 10$. Working with zero can lead to another problem too. Some theoretical claims can exist independently of the needs of any logic and mathematics and may end up with the demand that +2 = +3, which is of course a fallacy and incorrect. An attempt to prove such a fallacy correct, and to disprove the theorem that 0/0 = 1, could be to multiply the equation +2 = +3 by 0. We obtain 2 × 0 = 3 × 0 or, according to our present-day understanding of multiplication by 0, 0 = 0. Dividing by zero we obtain (0/0) = (0/0) or +1 = +1. Thus far, we started with something obviously incorrect, i.e. the claim that +2 = +3, and obtained something correct, i.e. +1 = +1, which is a contradiction. A straightforward conclusion could be to claim that the division of 0 by 0 is responsible for this contradiction and is as such not allowed. Such a conclusion is inappropriate. The multiplication by zero must be differentiated in more detail. Multiplying the equation +2 = +3 by 0, we obtain 2 × 0 = 3 × 0 or ${}_{2}0 = {}_{3}0$ and not 0 = 0. Dividing the result $+{}_{2}0 = +{}_{3}0$ by zero, it is $+{}_{2}0/0 = +{}_{3}0/0$ or +2 = +3, the starting point we started from. Consequently, the division by zero is logically consistent and does not lead to any contradictions.
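The bookkeeping described in the last part of this paragraph, where 2 × 0 is recorded as a product of 2 and 0 instead of being collapsed to a bare 0, can be sketched as follows; the pair representation and the function names are illustrative assumptions, not notation introduced by the paper.

# Sketch of the bookkeeping described above: keep n * 0 as the pair (n, 0)
# instead of collapsing it to 0, so that a later division by zero can recover n.
from typing import Tuple

def times_zero(n: float) -> Tuple[float, float]:
    return (n, 0.0)              # record the product n * 0 without collapsing it

def divided_by_zero(z: Tuple[float, float]) -> float:
    n, _ = z
    return n * 1.0               # uses the convention 0/0 = 1, so (n * 0)/0 = n

two_zero = times_zero(2)         # "2 times 0"
three_zero = times_zero(3)       # "3 times 0"
print(two_zero == three_zero)    # False: the two products stay distinguishable
print(divided_by_zero(two_zero)) # 2.0: the original factor is recovered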

It may be true that the demonstration that these reasons concerning the division of 0 by 0 are false does not customarily lead to the abandonment or withdrawal of the prejudicial attitude. Nonetheless, the phenomenon of the division of 0 by 0 suggests that, over the long run, the sustaining of even prejudicial attitudes requires a kind of logical justification. Thus far, let us recall that (0 × ∞) = 1. Taking the logarithm on both sides of this equation, we obtain $\log(0) + \log(\infty) = \log(1) = 0$. In point of fact, for lack of better terms, it is $\log(0) + \log(\infty) = 0$. This thesis can be understood with richer nuance when we approach it as it is. In other words, we have to accept that $\log(0) = -\log(\infty)$. One possible consequence is that $\log(0) = -\log(\infty) = \log(1/\infty)$. It should be noted that the use of indeterminate forms in the literature often involves terms like $0^{0}$ and $\infty^{0}$ too. Following our rules above, we obtain that

$0^{0} = (1/\infty)^{(1/\infty)} = \dfrac{(1/\infty)^{(1)}}{(1/\infty)^{(\infty)}} = \dfrac{(1^{1}/\infty^{1})}{(1^{\infty}/\infty^{\infty})}$. In other words, the term $0^{0}$ equals $0^{0} = (1^{1}/\infty^{1}) \times (\infty^{\infty}/1^{\infty}) = (1^{1} \times \infty^{\infty})/(\infty^{1} \times 1^{\infty}) = (\infty^{\infty})/(1^{\infty} \times \infty)$. From this it follows that $0^{0} \times 1^{\infty} \times \infty = (\infty^{\infty}) = (1/0)^{\infty} = 1^{\infty}/0^{\infty}$. In this context it is $0^{0} \times \infty = 1/0^{\infty}$. Furthermore, we obtain $0^{0} \times 0^{\infty} \times \infty = 1$ or $0^{0} \times 0^{\infty} = 0$, because it must be that (0 × ∞) = 1. This leads to the conclusion that $0^{0} = 0/0$. Recall otherwise that $0^{\infty} = 0^{1/0} = 0^{1/0^{0}}$. Approaching the term $0^{0} = (1/\infty)^{(1/\infty)}$ from another point of view, we obtain $\log(0^{0}) = \log\left((1/\infty)^{(1/\infty)}\right)$ or $0 \times \log(0) = (1/\infty) \times \log(1/\infty)$, which is equal to $0 \times \log(0) = (1/\infty) \times \log(0)$ and correct. The term $\infty^{0}$ is of special interest. Rearranging the same, it is $\infty^{0} = \infty^{(1/\infty)} = (\infty^{1}/\infty^{\infty}) = (1/(0^{0} \times 1^{\infty}))$, which appears to allow the conclusion that $\infty^{0} \times (0^{0} \times 1^{\infty}) = 1$. In other words, it is equally true that $\infty^{0} = (1/0)^{0} = 1^{0}/0^{0}$, with the consequence that $\infty^{0} \times 0^{0} = (\infty \times 0)^{0} = 1^{0}$.
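It may be worth noting, without treating it as evidence for the derivation above, that common programming-language conventions already return 1 for the power 0 to the 0; the snippet below shows Python's behaviour, which is a language convention rather than a mathematical argument.

# Python's power operator follows the widespread convention that 0**0 evaluates to 1,
# which coincides with the conclusion 0^0 = 0/0 = 1 drawn in the text above.
print(0 ** 0)      # 1   (integer arithmetic)
print(0.0 ** 0.0)  # 1.0 (floating-point pow convention)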

More recently, work on indeterminate forms has been an integral part of the development of modern mathematics and has become a subject of extensive research in its own right. Whether this line of thought and elaboration on indeterminate forms is strong and powerful enough to withstand the theoretical challenges and to put an end to the endless and ongoing battle over indeterminate forms may remain an open question. The need for a generally valid and logically self-consistent concept of independence in number theory and algebra is great. In particular, it is easy to recognize that the above line of thought could be extended to a general and more complex treatment of indeterminate forms and can make a contradiction-free connection to classical logic. By relying on axiom I as the starting point of further deduction, it is assured that the results are logically consistent from the beginning. What are we to make of this? Against this, there is a long tradition of defining the result of the division of 0 by 0 and similar operations. It is uncontroversial (though remarkable) that this approach has not led to a solution of the problem of indeterminate forms through the centuries. In general, it will be helpful to begin any theorem with regard to indeterminate forms with axiom I. In its simplest formulation, this should help us to achieve the desired goals.

5. Conclusion

Today’s number theory is missing a generally valid concept of independence. In this publication, it was demonstrated that the concept of independence under conditions of number theory can be derived from axiom I. Furthermore, evidence was provided that axiom I has the potential to serve as the foundation of the solution of the problems associated with indeterminate forms. Finally, using axiom I, the problem of the division of zero by zero was solved in a logically consistent form. In summary, +0/+0 = +1. Further and more detailed research is possible and necessary in order to solve the problems of indeterminate forms and to enable a generally valid mathematics without any exception. While relying on axiom I, this goal appears to be achievable.

Conflict of Interest Declaration

I have no conflict of interest to declare.


References

[1] Kolmogorov, A.N. (1950) Foundations of the Theory of Probability (Morrison, N., Translator). Chelsea Pub. Co., New York, 9.
[2] Kolmogorov, A.N. (1950) Foundations of the Theory of Probability (Morrison, N., Translator). Chelsea Pub. Co., New York, 8.
[3] de Moivre, A. (1756) The Doctrine of Chances or a Method of Calculating the Probabilities of Events in Play. 3rd Edition, A. Millar, London, 6.
[4] Kac, M. (1959) Statistical Independence in Probability, Analysis and Number Theory. The Carus Mathematical Monographs, No 12. The Mathematical Association of America Inc., Rahway.
[5] Einstein, A. (1948) Quanten-Mechanik und Wirklichkeit. Dialectica, 2, 320-324.
https://doi.org/10.1111/j.1746-8361.1948.tb00704.x
[6] Schilpp, P.A. (1949) Albert Einstein. Philosopher-Scientist. In: Schilpp, P.A., Ed., The Library of Living Philosophers, Vol. VII, Evanston, Illinois, 85.
[7] Barukcic, I. (2017) Anti Bohr—Quantum Theory and Causality. International Journal of Applied Physics and Mathematics, 7, 93-111.
https://doi.org/10.17706/ijapm.2017.7.2.93-111
[8] Libbrecht, U. (2005) Chinese Mathematics in the Thirteenth Century (The Shu-Shu-Chiu-Chang of Chin Chiu Shao). Dover Publication, Mineola, NY.
[9] Leibniz, G.W.F.V. (1703) Explication de l’arithmétique binaire, qui se sert des seuls caractères O et I avec des remarques sur son utilité et sur ce qu’elle donne le sens des anciennes figures chinoises de Fohy. Mémoires de mathématique et de physique de l’Académie royale des sciences, 85-89.
[10] Boole, G. (1854) An Investigation of the Laws of Thought, on Which Are Founded Mathematical Theories of Logic and Probabilities. MacMillan and Co., London, 441.
https://archive.org/details/bub_gb_DqwAAAAAcAAJ
[11] Barukcic, I. (2008) Causality II. A Theory of Energy, Time and Space. Lulu, Morrisville, 139.
[12] Wallis, J. (1656) Arithmetica Infinitorum, Sive Nova Methodus Inquirendi in Curvilineorum Quadraturam, Aliaque Difficiliora Matheseos Problemata. Leon Lichfield Academiae Typographi, Oxonii, 152.
[13] Newton, I. (1744) Opuscula Mathematica, Philosophica et Philologica. In Tres Tomos Distributa. Tomus Primus. Joh. Castillioneus (Ed.), Lausannae et Genevae, 4.
https://doi.org/10.3931/e-rara-8608
[14] Euler, L. (1771) Vollständige Anleitung zur Algebra. Erster Theil. Bei der kayserlichen Akademie der Wissenschaften, St. Petersburg (Russia), 34.
https://doi.org/10.3931/e-rara-9093
[15] Barukcic, I. (2017) Theoriae causalitatis principia mathematica. Books on Demand, Hamburg-Norderstedt, 19.
https://www.bod.de/buchshop/theoriae-causalitatis-principia-mathematica-ilija-barukcic-9783744815932
[16] Hu, D. (2005) China and Albert Einstein. The Reception of the Physicist and His Theory in China, 1917-1979. Harvard University Press, Cambridge, 5.
https://doi.org/10.4159/9780674038882
[17] Sorensen, R.A. (1992) Thought Experiments. Oxford University Press, Inc., New York.
[18] Horowitz, T. and Massey, G.J. (1993) Thought Experiments in Science and Philosophy. Ratio, 6, 82-86.
[19] Rescher, N. (2005) What If?: Thought Experimentation in Philosophy. Transaction Publishers, New Brunswick, NJ.
[20] Einstein, A. (25 Dec 1919) Induktion und Deduktion in der Physik. Berliner Tageblatt, Morgen-Ausgabe, Supplement 4, 1.
[21] Romano, J.P. and Andrew, F.S. (1986) Counterexamples in Probability and Statistics. Chapman & Hall, New York, London.
[22] Stoyanov, J.M. (1997) Counterexamples in Probability. 2nd Edition, Wiley, Chichester.
[23] Leibniz, G.W. (1765) Oeuvres Philosophiques Latines & Francoises de feu Mr. de Leibniz. Chez Jean Schreuder, Amsterdam, 327.
[24] Barukcic, I. (1989) Die Kausalität. Wissenschaftsverlag, Hamburg, 218.
[25] Barukcic, I. (1997) Die Kausalität. Scientia, Wilhelmshaven, 374.
[26] Barukcic, I. (2005) Causality. New Statistical Methods. Books on Demand, Hamburg, Norderstedt, 488.
[27] Barukcic, I. (2006) Causality. New Statistical Methods. 2nd English Edition, Books on Demand, Hamburg, Norderstedt, 488.
[28] Barukcic, I. (2006) New Method for Calculating Causal Relationships. Proceeding of XXIIIrd International Biometric Conference, McGill University, Montréal, Québec, Canada, 16-21 July 2006, 49.
[29] Barukcic, I. (2011) Causality I. A Theory of Energy, Time and Space. Lulu, Morrisville, 648.
[30] Barukcic, I. (2011) The Equivalence of Time and Gravitational Field. Physics Procedia, 22, 56-62.
https://doi.org/10.1016/j.phpro.2011.11.008
[31] Barukcic, I. (2012) The Deterministic Relationship between Cause and Effect. International Biometric Conference, Kobe, Japan, 26-31 August 2012.
https://www.biometricsociety.org/conference-abstracts/2012/programme/p1-5/P-1/249-P-1-30.pdf
[32] Barukcic, I. (2016) The Mathematical Formula of the Causal Relationship k. International Journal of Applied Physics and Mathematics, 6, 45-65.
https://doi.org/10.17706/ijapm.2016.6.2.45-65
[33] Barukcic, K. and Barukcic, I. (2016) Epstein Barr Virus—The Cause of Multiple Sclerosis. Journal of Applied Mathematics and Physics, 4, 1042-1053.
https://doi.org/10.4236/jamp.2016.46109
[34] Barukcic, I. (2016) Unified Field Theory. Journal of Applied Mathematics and Physics, 4, 1379-1438.
https://doi.org/10.4236/jamp.2016.48147
[35] Barukcic, I. (2017) Helicobacter Pylori—The Cause of Human Gastric Cancer. Journal of Biosciences and Medicines, 5, 1-19.
https://doi.org/10.4236/jbm.2017.52001
[36] Barukcic, J.P. and Barukcic, I. (2016) Anti Aristotle—The Division of Zero by Zero. Journal of Applied Mathematics and Physics, 4, 749-776.
https://doi.org/10.4236/jamp.2016.44085
[37] Barukcic, I. (2018) Epstein Barr Virus—The Cause of Hodgkin’s Lymphoma. Journal of Biosciences and Medicines, 6, 75-100.
https://doi.org/10.4236/jbm.2018.61008
[38] Barukcic, I. (2018) Fusobacterium nucleatum—The Cause of Human Colorectal Cancer. Journal of Biosciences and Medicines, 6, 31-69.
https://doi.org/10.4236/jbm.2018.63004
