Can Engineers Build AI/ML Systems? Analysis and an Alternative Approach

Abstract

Although Artificial Intelligence (AI) and Machine Learning (ML) are attracting a great deal of scientific and engineering attention nowadays, nothing achieved so far reaches the level of building machines that possess human-like intelligence. Nevertheless, the engineering community continuously claims that several engineering problems are solved using AI or ML. Here, it is argued that engineers are not able to build intelligent machines, implying that the systems claimed to have AI/ML belong to different engineering domains. The basis of the syllogism is the existence of four main obstacles, which are elucidated extensively. In addition, an attempt is made to clear up the confusion that has developed, and the misuse and abuse of the phrases “Artificial Intelligence” and “Machine Learning” by scientists and engineers. Furthermore, mathematical and philosophical approaches that strengthen the argument against AI implementability are also mentioned, as part of the whole syllogism. Finally, an alternative approach (not a unique one) is suggested and discussed for performing research on AI and ML by engineers. It is based on complexity theory and non-linear adaptive systems, and it provides the benefit of eliminating the aforementioned pragmatic and philosophical obstacles that engineers are facing and ignoring, without creating confusion in this scientific endeavor.

Panagiotopoulos, N. (2023) Can Engineers Build AI/ML Systems? Analysis and an Alternative Approach. Open Journal of Philosophy, 13, 504-530. doi: 10.4236/ojpp.2023.133034.

1. Introduction

It is evident that Artificial Intelligence (AI for short) has attracted a lot of attention from the scientific and engineering community in the last decade. A lot of effort, time, resources, and money have been invested in order to implement versions of AI in different applications. This observation is reflected by Zhang et al. in their annual report (Zhang et al., 2022). Although AI research has been performed since the 1950s, it never reached this level of high interest from the scientific and engineering community until recent years. Since about 2010, three major factors have created the infrastructure for boosting AI research: the development of microelectronics, especially Graphics Processing Units (GPUs); the enormous amount of data generated and stored on the internet (and the growth of the internet itself); and the advancement of mathematical algorithms related to deep learning. Together, these created the necessary tools for resuscitating AI research and development. Furthermore, old ideas were reinvigorated, supporting an extra boost in AI R&D. GPUs were transformed into AI accelerators, and new neuromorphic devices with different designs appeared on the market. In addition, software development tools such as CUDA and TensorFlow (to name a few) were built, and new AI/ML libraries for high-level programming languages have been introduced (e.g., Python AI) to assist engineers in utilizing concepts that exist in AI, mainly Artificial Neural Networks (Russel & Norvig, 2021; MSV, 2018; Trends, 2020).

But even with this enormous shift of scientists, and mostly of engineers, towards AI, humans have not yet achieved the ability to build a machine that possesses intelligence.

This study analyzes the reasons behind the inability of engineers to create AI systems and argues that engineers are not able to make it happen. Four main points are presented to prove that engineers cannot build intelligent machines with the current research and development (R&D) approach. In addition, due to the puffery surrounding AI products in recent years, an attempt is made to clear up the confusion and the misuse and abuse of the phrases “Artificial Intelligence” and “Machine Learning” (ML). Furthermore, the influence of this puffery on society is mentioned. So, part of this study is to raise public awareness of this phenomenon and to bring to the public's attention several consequences that appear at the sociopolitical level and in the quality of scientific outcomes. Finally, as a non-unique solution for avoiding all the previous issues, this study provides an alternative approach for conducting AI R&D, which engineers can also perform.

It is important to note that the proof, the clarification, and the description of AI and ML will be performed from the engineering perspective. The reason is that the implementation of AI, for assembling a final AI/ML product, necessitates the utilization of engineering.

The first part of the paper introduces the fundamental problems that exist in AI from the engineering perspective; in other words, the obstacles that prevent an engineer from being able to build intelligent machines. In addition, some clarification of the terms AI and ML as used in the literature, and of their connection with real-life applications, is performed and criticized. The second part discusses the syllogism behind the argument, and some additional explications and remarks are noted. The last part proposes and discusses an idea of how to progress in the R&D of AI. The focus will be on human intelligence and on how machines could approach, as much as possible, the level of intelligence that eventuates in humans. All main parts of the paper include some philosophical aspects of this matter as well. The reason is to strengthen the syllogism that is used for the proof of the main objective.

Before proceeding to the rest of the paper, it is important to clarify some issues about “intelligence” and “Artificial Intelligence”, in order not to create confusion and to develop a common ground of understanding. In this paper, the word “intelligence” relates to human intelligence, with all the vague, abstract, and probably still unknown characteristics that exist in the human brain and mind. It is not only collecting information, processing it, and acting based on the result of the processed information, as happens in data handlers or in robots, for example. The word “intelligence” in this paper embraces concepts from a larger field than the one used in engineering and science at the current time; some of these concepts are “understanding”, “learning”, “adaptation”, “causation”, and “cognition”, to name a few. The phrase “Artificial Intelligence” refers to the technology for building machines that possess human intelligence. It does not refer to the technological advances that have occurred under the umbrella of Artificial Intelligence (AI) research and development over the last almost 60 years. This paper does not criticize the technological advances due to AI, which the author acknowledges and respects. But it does criticize the way that research and development is performed by a significant number of engineers and scientists in AI R&D, and the claims, in which fallacy is present, that specific problems have been solved utilizing AI technology. It does not accept that the commonly used term “ML” in published works is related to the actual “learning” that occurs in human brains, or that “training” is related to the actual educational system utilized for tutoring or instructing human brains, connecting them with technology and eventually with machines; this is part of the criticism as well. Furthermore, the paper reveals the magnitude of confusion and misinterpretation one can find in policy makers or decision mechanisms when new laws or rules are formulated for the regulation of technological products. For if we are aiming to build intelligent machines, then more work at a fundamental level must be performed, and we are still far from reaching this objective. The criticism is from an engineering (hence pragmatic) perspective, although some philosophical aspects are mentioned. In addition, the author has no wish or intention to criticize specific people or groups of people. The target is not the human, but the procedures or methodologies used and the deployed way of thinking. For this reason, reference to specific papers when criticism happens is avoided as much as possible, although if requested the author can provide them. Furthermore, AI and ML have already started to be utilized in space applications as well. This paper therefore also targets space scientists and space engineers, to provide a realistic perspective on AI and ML and to build the basis on which research and development on AI and ML would provide reasonable, practical, and pragmatic results for space projects.

2. The Four Obstacles

2.1. Obstacle 1: What Is AI? The Main Problem in AI Research for Engineers

In science and engineering activities, the starting point is usually the definition of the words used in the scientific questions and hypotheses during research and engineering pursuits, especially when describing or introducing new notions or ideas in Research and Development. Regarding AI, there are several definitions, depending on the branch of science or engineering. Below are some definitions found in dictionaries, on the internet, in the AI literature, and from official scientific/engineering bodies. Note that in the AI literature there is no specific definition; what is usually offered, as is sometimes admitted, is an attempt to define AI. A famous official attempt happened in 1994, when 52 psychologists signed a document with an agreed definition of the term intelligence. The published definition is “Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” Note that about 100 scientists participated in this workshop but only 52 signed the document (almost 50%; no consensus occurred) (Gottfredson, 1997). Another example is from Russel and Norvig (Russel & Norvig, 2021). There, providing an explicit definition of AI is avoided. An indirect method seems to be followed, in which a pair of dipoles is presented (human vs rational and thought vs behavior). Though in the preface it is explicitly mentioned that “We define AI as the study of agents that receive percepts from the environment and perform actions”. The two approaches are neither compatible nor relevant to each other. Especially for the last statement in the preface, a PID controller (a machine widely used in automation, e.g., in food production, propulsion systems, electrical motors, etc.) can be considered an (intelligent) agent, since it perceives the environment through its sensors and performs actions on the system under control through its actuators; which is exactly what the definition of “agent” claims. Consequently, either simple controllers (such as the PID) are in the domain of AI, meaning that AI is already utilized in real-world applications, or AI is something more than just an input-output system, and more research at a fundamental level is necessary.
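To make the point concrete, a minimal sketch follows (the author's toy illustration, not taken from any cited source): a bare PID loop already matches the quoted “agent” definition, perceiving through a sensor reading and acting through an actuator command, with no intelligence anywhere in it.

```python
# A minimal PID controller as a "sense-act" loop. The percept is the
# sensor measurement; the action is the actuator command. Everything
# here is plain feedback control, yet it fits the "agent" definition.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measurement, dt):
        # "Percept": the sensor measurement of the controlled variable.
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # "Action": the actuator command sent back to the plant.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant: a temperature pulled toward the heater output.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=20.0)
temperature, dt = 5.0, 0.1
for _ in range(200):
    command = pid.step(temperature, dt)
    temperature += (command - 0.5 * temperature) * dt  # plant dynamics
print(round(temperature, 2))  # should settle near the 20.0 setpoint
```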

As mentioned before, some definitions of AI are:

Artificial intelligence (AI), also known as machine intelligence, is a branch of computer science that focuses on building and managing technology that can learn to autonomously make decisions and carry out actions on behalf of a human being. (Artificial Intelligence (AI), 2021)

The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. Abbreviated AI. (Artificial intelligence, 2022)

the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognize pictures, solve problems, and learn (Artificial Intelligence, 2022)

or

computer technology that allows something to be done in a way that is similar to the way a human would do it (Artificial Intelligence, 2022)

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals including humans. Leading AI textbooks define the field as the study of “intelligent agents”: any system that perceives its environment and takes actions that maximize its chance of achieving its goals. (Wikipedia, 2022)

Some definitions from official scientific and technological bodies follow as well:

artificial intelligence (AI): 1) A branch of computer science devoted to developing data processing systems that performs functions normally associated with human intelligence, such as reasoning, learning, and self-improvement. 2) The capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement. (Information Technology, NIST, n.d.)

artificial intelligence: “capability of a system to acquire, process, and apply knowledge. Note 1 to entry: knowledge are facts, information, and skills acquired through experience or education”. (ISO/IEC 22989:2022, 07/2022)

Of course, this list of definitions for AI is not an exhaustive one. More definitions exist, depending on the group of scientists or engineers. The above quotes are a representative subset that reflects the status of our capability to define AI. No matter how many more definitions there are, all of them have a flaw: they are improper or ill-conditioned. This means that they contribute nothing to the idea of, and effort toward, building machines with AI/ML. And this is a major obstacle for engineers, for the following reason.

The flaw or inappropriateness of the definitions lies in the fact that they are either circular (e.g., “…exhibit or simulate intelligent behaviour”, or “…associated with human intelligence”, …) or have a high level of abstraction through the use of other abstract words (e.g., “…to understand language, recognize pictures, solve problems, and learn”, or “…apply knowledge”, or “…such as reasoning, learning, and self-improvement”). This means that all the definitions related to AI and ML break one or more of the rules used for defining words in science and engineering. Specifically, the two rules that are not followed are the “avoidance of circularity” rule (the definiendum, in general, is defined in terms of itself) and the “avoidance of figurative, obscure, vague, or ambiguous language” rule (when metaphors, hidden meanings, lack of precision, or multiple interpretations are present) (Hurley & Watson, 2018). Furthermore, as demonstrated by Wilkins (Wilkins, 1928), the lack of a precise definition and the presence of abstraction mentioned above significantly influence the syllogistic reasoning of a researcher (i.e., scientist or engineer). As a result, inferential problems occur, and false conclusions follow.

Just for completeness, and to support the previous statements, based on the literature in logic (Hurley & Watson, 2018) there are five types of definitions: stipulative definitions, lexical definitions, precising definitions, theoretical definitions, and persuasive definitions. All of them are used in our lives and help humans to understand each other, in writing or verbally. But the one that is used in science, engineering, mathematics, medicine, and law is the “precising definition”. The main virtue of this type of definition is that it lacks vagueness. In that way, it conveys accurate information to the audience, providing a full understanding of the definiendum. In addition, through the precising definition, quantification is possible. This means that it is possible to link numbers to the definition, something vital and very efficacious for scientists and engineers. Ryan et al. (Ryan, Wheatcraft, Dick, & Zinni, 2015) have explained and clarified very nicely the issue of the connection between definitions and quantitative requirements.

Based on the above it is valid and sound to conclude that the following proposition is true:

P1: There is no proper definition of “Artificial Intelligence”.

An equivalent statement can also be:

P1: The term “Artificial Intelligence” is ill-defined.

The truth value of the proposition is based on the facts derived from common engineering practice, the above-mentioned literature, engineering standards, and logic.

2.2. Obstacle 2: No Requirements, Hence No Verification & Validation

Without a proper definition, engineers cannot produce requirements and specifications for ideas related to AI and ML. It is impossible for an engineer to generate requirements that will be used for the design, simulation, and analysis of a machine that possesses intelligence. Of course, the requirements and specifications will also relate to the rules and constraints that must be followed by the engineer for building a reliable and functional apparatus with intelligence. Consequently, without requirements it is impossible for the engineer to produce the verification and validation (V & V) requirements, plans, procedures, and tests. The V & V phase is responsible for proving and demonstrating that the fabricated device does satisfy the requirements and does have intelligence which can be used in the relevant application. Again, Ryan et al. (Ryan, Wheatcraft, Dick, & Zinni, 2015) explicitly describe the link between the proper form of requirements, in connection with definitions, and the V & V phase of a project or R&D activity. It is also mentioned how important the lack of vagueness in the requirements is, after they have been translated from the phase of “need” expressions. It is characteristic how unequivocally the paper describes requirements as “… unambiguous, testable and measurable”, among other things. For those who desire a deeper look into this issue, a comprehensive guide to writing requirements is the INCOSE guide (Dick et al., 2012).

To sum up the previous two arguments: AI and ML do not have proper definitions, which means that no requirements can be generated, as is necessary in engineering. Therefore, no V & V requirements, plans, procedures, or tests can be performed. In conclusion, there is no logical and scientific connection between the concept of AI/ML in a machine and the final product in a project.

These concepts and principles are also reflected in the various standards that exist in different industry sectors. In Europe, for the space sector, there is the European Cooperation for Space Standardization (ECSS), whose standards reflect all the above in their text. Specifically, ECSS-M-ST-10C (European Cooperation for Space Standardization (ECSS), 2009) and ECSS-E-ST-10-06C (European Cooperation for Space Standardization (ECSS), 2017) explicitly mention:

• “The successive states of a product are characterised by initially a “high level” (e.g., rather of functional type) definition of needs/requirements (e.g., at Phase 0), evolving progressively to a more precise (e.g., at phase B) or frozen (e.g., Phase C, or procurement of an equipment) definition of all requirements.” (ECSS-E-ST-10-06C, par. 5.1, page 15).

• “Each technical requirement shall be described in quantifiable terms.” (ECSS-E-ST-10-06C, par. 8.2.1.a, page 24).

• “The technical requirement shall be unambiguous.” (ECSS-E-ST-10-06C, par. 8.2.4.a, page 25).

Please note that specific words have deliberately been underlined by the author in order to make the point of the argument obvious. In addition, it is worth noticing how the concepts provided in (Ryan, Wheatcraft, Dick, & Zinni, 2015) are formally implemented in the ECSS. More information related to requirements and definitions for engineers has been provided by Koelsch (Koelsch, 2016) and Robertson & Robertson (Robertson & Robertson, 2012); of course, the literature on this subject is vast.

From this section the following true statements are deduced. These are:

P2: No requirements can be produced.

P3: P1 ⇒ P2. Which means “If there is no proper definition of ‘Artificial Intelligence’, then no requirements can be produced.”

P4: No Validation and Verification plans and procedures can be produced.

P5: P2 ⇒ P4. Which means “If there are no requirements, then no Verification and Validation plans and procedures can be produced.”

From P3 and P5, using Hypothetical Syllogism (or the chain rule), the following can be deduced:

P6: P1 ⇒ P4. Which means “If there is no proper definition of ‘Artificial Intelligence’, then no verification and validation plans and procedures can be produced.”

The validity of P3 comes from the facts derived from common engineering practice, the above-mentioned literature, and engineering standards. Then we have:

P1 ⇒ P2

P1 (which is true from the previous paragraph)

Using Modus Ponens (MP) on the two previous propositions it is concluded that:

P2 is true.

Similarly, the validity of P5 comes from the same engineering practice, and applying Modus Ponens to P5 and P2 yields that P4 is true. Considering all the above, P6 is true (as explained and proved previously).
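For readability, the whole derivation can be restated compactly in standard inference notation (⊢ denotes derivability; this restatement is added here for clarity and is not part of the cited standards):

```latex
\begin{align*}
P_1,\; P_1 \Rightarrow P_2 \;&\vdash\; P_2 && \text{(Modus Ponens)}\\
P_2,\; P_2 \Rightarrow P_4 \;&\vdash\; P_4 && \text{(Modus Ponens)}\\
P_1 \Rightarrow P_2,\; P_2 \Rightarrow P_4 \;&\vdash\; P_1 \Rightarrow P_4 && \text{(Hypothetical Syllogism, i.e., } P_6\text{)}
\end{align*}
```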

It is important to note that the implication statements also include the element of causality. For example, the implication P3 (P1 ⇒ P2), besides its material truth-value character in formal propositional logic, involves a causal relation between the definition and the requirements. This is also informally logical, since an engineer cannot write requirements for something when it is not known exactly what is to be built. In addition, the engineer cannot run tests (the V & V phase) if there are no requirements.

2.3. Obstacle 3: No Consensus

The third reason is related to the lack of consensus among scientists and engineers about what AI and ML are. Hence the previous reasons are highly linked with this one. There are differences in how “intelligence” is studied, understood, analyzed, and perceived among scientists; for example, between psychologists (Sternberg & Kaufman, 2002; Piaget, 2001) and neurologists (Haier, 2016). Of course, there are other disciplines, such as neuropsychology, biology, and many others, that have their own methods for studying and understanding “intelligence”, with slightly different definitions. The same also applies to engineering disciplines, such as computer science and aerospace engineering, to name a few. In addition, within the same scientific fields mentioned above, there are different schools of thought regarding intelligence. More information is provided by Sternberg (Sternberg, 2018).

Consequently, there is a missing link between the true concept of AI/ML and the solution of a specific engineering problem. In that respect, there is no justification and proof that the solution to a specific application is due to AI. But still, several engineering publications claim that AI/ML has been utilized for solving a particular problem. This is a common logical fallacy called affirming the consequent. In simple terms, it describes the following reasoning: if P is true, then Q occurs; Q has occurred, therefore P must be true. The fallacy is obvious, since nothing makes P true merely because Q has occurred, even if Q always follows P. Just for clarity, P is the statement “the machine does have intelligence” and Q is “the problem is solved” (Barnes, 1985; Kern, Mirels, & Hinshaw, 1983; Popper, 1959). Therefore, the sentence becomes: if the machine has intelligence, then the problem is solved. This is a source of confusion in the scientific community, and it would be beneficial if it were acknowledged and the way scientific claims are made were improved. Doing so would increase research quality and enlarge our understanding of the nature and dynamics of intelligence, bringing more insightful outcomes during research on AI.
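The contrast between the valid schema and the fallacious one can be written out formally (a brief aside in standard notation, where ⊬ marks what cannot be derived):

```latex
\begin{align*}
\text{Modus Ponens (valid):} \quad & P \Rightarrow Q,\; P \;\vdash\; Q\\
\text{Affirming the consequent (invalid):} \quad & P \Rightarrow Q,\; Q \;\nvdash\; P
\end{align*}
```

With P = “the machine has intelligence” and Q = “the problem is solved”, observing Q licenses no conclusion about P: the problem may equally have been solved by conventional engineering means.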

From this paragraph a valid statement can be generated. This is:

P7: There is no agreed definition of “Artificial Intelligence” among the scientists of different scientific domains.

The validity of this statement comes from the relevant references mentioned above, which address this issue from different fields of science related to “intelligence”.

2.4. Obstacle 4: Philosophical Open Issues

Finally, the last impediment is related to the fact that “intelligence” is not an object or phenomenon that can be externally observed and studied by scientists. It is a man-made mental entity (like many other abstract words, e.g., justice, freedom) in our brain that requires the same organ that “hosts” it to be used for its observation, analysis, and explication. Moreover, this must happen in an unbiased and objective way. There is no other “intelligence” in nature or in our known universe that can be used as a reference or even for comparison. In case other animals (such as dolphins, or the octopus, which some scientists apparently consider the most “intelligent” non-mammal in the sea) are considered to possess some “intelligence”, it is again still the human brain that does the observation, the analysis, and, most importantly, the interpretation. There is no feedback from the other species about our intelligence for further analysis. Furthermore, only a few branches of science consider that some animals possess “intelligence”. Others claim that “intelligence”, as we perceive it, is a human attribute only, as explained by Colombo & Scarf (Colombo & Scarf, 2020) and Gerhard & Ursula (Gerhard & Ursula, 2005).

Furthermore, this can also relate to the self-reference, semantics, and formal-system issues addressed in mathematical logic and computational theory. Gödel's ideas (Feferman, Dawson, Goldfarb, Parsons, & Solovay, 1995) and the equivalent halting problem by Turing (Turing, 1938) provide a set of arguments that computers (considered as a specific kind of machine) cannot reason about themselves. Another source of AI denial is Dreyfus, in his paper (Dreyfus, 1974). There, Dreyfus claims that human intelligence has aspects that cannot be formalised in order to be transferred to a machine and consequently build AI. Moreover, in his famous books Alchemy and Artificial Intelligence (Dreyfus, 1965), Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (Dreyfus & Dreyfus, 1986), and What Computers Still Can't Do (Dreyfus, 1992), Dreyfus also claims, with evidence, that machines cannot reach the level of human intelligence and that, although they are better than humans at many things, they are still tools that aid humans in their activities. Just for clarity, Dreyfus was not against AI in general, but against the fact that research on AI was focused only on symbol-manipulating machines (SM). More on this issue is mentioned later in the discussion section.

The repercussion of this fourth hindrance on engineers is that the subject of AI/ML is still immature. More research at a fundamental level is required. Inevitably, the lack of fundamental research creates confusion and misinterpretation of “intelligence” and “learning” within the engineering community, and in the public as well. As a result, some engineers or researchers enjoy an arbitrary “right” to “baptise” their products as implementations of AI/ML. The author acknowledges that this is against the ethics and principles of engineering and brings no benefit to AI/ML research.

The produced valid statement from this paragraph is the following:

P8: The notions of “intelligence” and “Artificial Intelligence” are not well established in humans’ minds.

2.5. Deduction from Previous Propositions

It is evident from the above that research on artificial intelligence for engineering applications, such as space projects, faces major obstacles. They are mainly related to the definition of the word “intelligence”. Ironically, the field of Artificial Intelligence cannot define itself. What is “intelligence”? How does it emerge or get created? How can it be detected? These are some of the many questions that scientists are called to answer. More information on the fundamental questions can be found in (Luger & Stubbirfield, 1999). Still, there are no concrete answers, and there is no consensus on this matter. Consequently, in engineering and science this is a major issue, for the reasons explained above.

As a result, based on the previous paragraphs and using formal propositional logic, the following syllogism emanates:

The true statements P1, P7, and P8 can be grouped together, providing the fact that intelligence and Artificial Intelligence (as an extension, for engineers) are not well formed and understood (lacking consensus, having open philosophical points, and being ill-defined). Therefore:

P9 = (P1 ∧ P7 ∧ P8)

Using the same proof method used in 2.2 it can be concluded:

(P9 ⇒ P2)

(P2 ⇒ P4)

The final statement that needs to be proven is the following:

P10: The engineers cannot build AI/ML systems.

But this statement is part of the true implication:

P11: ((P9 ∧ P2 ∧ P4) ⇒ P10)

This means that if engineers cannot define the idea, cannot create requirements, and cannot produce V & V tests, then they cannot materialize (or build) it; which, for this paper, is the Artificial Intelligence system. This is a direct consequence of the common engineering practices reflected in the standards and the corresponding literature.

Here, simply using Modus Ponens (MP), it can be concluded that P10 is true. Therefore:

P9 ⇒ P2 ⇒ P4 ⇒ P10

In an informal way the following can be written:

Informal proof: due to the four main facts, which are:

1. Lack of proper definition for “intelligence” and “learning”.

2. Inability for engineers to produce requirements and V & V tests for AI/ML projects.

3. No consensus in the scientific and engineering communities about what intelligence is.

4. The man-made concept of intelligence (if it really exists) is embedded in the human brain only. The brain is asked to understand and analyse its own existence. It is as if the stomach were trying to digest itself. This creates a deadlock.

It can be deduced that engineers cannot build AI/ML systems.

Furthermore, besides the current practical inability of humans, from an engineering (and practical) perspective, to design, build, verify, and validate an intelligent machine, there is also a fundamental obstacle that comes from mathematical logic and philosophy. It was initially mentioned by Gödel in (Feferman, Dawson, Goldfarb, Parsons, & Solovay, 1995), with a disjunctive conclusion which can be stated in simple words: either the human mind (which is assumed to have intelligence) is not a Turing machine, or there are problems in mathematics (Gödel actually refers to them as absolutely unsolvable Diophantine problems) that cannot be solved by a Turing machine. Meaning that machines and mind are not, and cannot be, the same. Note that there is a distinction between “mind” and “brain”; they are not considered the same. Later, in a more concise approach, Lucas (Lucas, 1961) and then Penrose (Penrose, 1994), utilizing Gödel's Incompleteness Theorem (Gödel, 1931), argued (though this is still debatable) that the human mind is not a computer. Consequently, it is impossible to build a Turing machine that can replicate the human mind, hence intelligence. As mentioned earlier, this is still debated in the scientific and philosophical communities. To be fair, some references that object to this idea can be found in (Coder, 1969) and (MacDermott, 1995); of course, the bibliography on this matter is far more extensive. Nevertheless, this relates to the four obstacles mentioned before. The connection is that since there is no equivalence between the human mind and a Turing machine, practical obstacles in engineering appear, such as the ones mentioned before. In addition, since current technology utilizes Turing machines, the consequence is that it is not possible to build intelligence inside a Turing machine, as very nicely expressed by Searl (Searl, 1990).

3. Discussion

Analysis and thoughts on what has been written in this document are presented in this section. Starting from the points established in Section 2, some significant remarks can be made.

3.1. Prove the Truth of Your Claims

Regarding the lack of a precise definition of intelligence and its consequences mentioned above, there is reason to ask questions whenever someone claims that a specific machine or service includes AI. The questions concern the set of requirements and the relevant test procedures for verification and validation. Formally, two main questions ought to be asked of the claimer when more information about a project using AI is needed. These are:

1) Which set of requirements and specifications have been used by the engineer(s) to design and build a machine that embodies AI and/or ML capabilities?

2) Which verification and validation plans, procedures, and tests have been performed in order: i) to demonstrate that the requirements are fulfilled, ii) to prove that AI/ML (as an outcome of the requirements mentioned previously) does exist in the machine, and iii) to show that AI/ML is responsible for solving a problem or is part of the operation?

It is important to clarify that these questions refer to the system (or machine) that possibly hosts intelligence, not to the problem or application that uses the machine for solving the problem or accommodating a service. Research and development in AI is not about solving complex problems or utilizing “intelligent” machines in specific applications. The target of AI R&D is to acquire the knowledge and know-how, formulating them into a technology that will enable humans to design and build machines with intelligence, which in turn can be used to solve complex problems or perform complex services. Therefore, solving a problem is not the target; building the machine that can be used to solve many complex problems is what is needed. Hence the above-mentioned questions target the engineers' capability for building intelligent systems, and not whether a specific problem was solved or can be solved. A problem may be solved in many ways (if it is solvable). R&D in AI is focused on how to solve complex problems using intelligent machines. But first, we need to be able to build such machines.

For the first question, the engineer is forced to have precise definitions of the words “intelligence” and “learning” (for the ML case) in order to produce requirements. And this is not yet possible. For the second question, the engineer needs to have verification and validation procedures and tests, such as the Turing test; although it has been argued by Churchland & Churchland (Churchland & Churchland, 1990) and by Luger & Stubbirfield (Luger & Stubbirfield, 1999) that the Turing test is not a good test for AI. Even though there are companies and research groups that advertise their products as “smart” devices or systems with AI, still: 1) they do not have any evidence or information with which to answer the questions adequately, and 2) “AI” and “ML” have become phrases that lure customers, and in some cases funding (i.e., for universities), and increase the probability of higher profits, sales, and prestige. Nevertheless, at the current time most companies are focused on the “narrow AI” implementation. This means that several problems are solved using conventional or advanced technology, which provides to the customer a feeling of “smartness” or “intelligence” due to the client's ignorance.

3.2. Tacit Assumptions Due to Lack of Objectiveness, and Confirmation Bias

Another important remark needs to be addressed. In case someone uses a subjective definition (since there is no agreement yet in the scientific community) that can be stated precisely, then this should be explicitly mentioned and not used as a hidden assumption or hypothesis. For example, someone might define intelligence as the capability of a machine to identify human faces in a certain area through a camera. A precise definition can be provided, and relevant requirements can be generated. But once this definition has been explicitly mentioned, it is up to the receivers of the information to accept the presented information or not, knowing that it is the outcome of a personal definition or belief, and not of an accepted scientific definition. In that way, it is also understood and perceived that subjective definitions are adapted to the capabilities of the engineer to prove whatever he or she wants, and not the other way around; whereas with objective definitions, the proofs are worked out in a way that respects them. Equally important is the fact that with subjective definitions of AI and ML, anyone can claim anything; hence the scientific value and outcome of the work performed is questionable. In the previous example, someone can argue that there are people with prosopagnosia (who cannot recognize faces or people) and are still intelligent, as explained by Zhu et al. (Zhu, et al., 2010); hence the subjective definition given above is invalid or irrelevant, and the scientific outcome is barely accepted, which in turn raises suspicions about how the product is working. This means that the product probably solves a problem (or performs a service) not due to machine intelligence but due to other reasons, such as using a complex or advanced mathematical algorithm or a simple image-processing procedure. And this is another important point that has to be addressed; it is connected with confirmation bias (Nickerson, 1998) and experimenter bias (Rosenthal & Frode, 1963). In simple terms, confirmation bias is when the scientist/engineer has the tendency to include only information that supports the objective of the work (or the hypothesis), rather than to include (or consider) all the information that could possibly falsify it. It is important to note that this is a human characteristic, and in most cases it is due to unintentional behavior and not deceptive thinking. Nevertheless, the outcome of this type of research activity contributes to the noise in the scientific community and negatively impacts the progress of R&D on AI/ML. Of course, more confusion is added to the public's understanding as well.

3.3. Side Products of AI R&D Are Not AI

Products that utilize AI in order to solve a problem use complex, though conventional, mathematical tools to produce the desired results. For example, for image processing, convolutional neural networks may be used. This does not mean that the machine has AI or has learning capabilities. Another example is when hardware electronic cards with the name “AI accelerators” are utilized in some applications. This type of hardware simply contains microelectronic devices that can perform an enormous number of parallel calculations in a short time. Again, this has nothing to do with AI (although many companies claim that their application has AI). It is conventional technology used by engineers to solve specific problems.

There is a further issue to be discussed. It is related to the several approaches or technologies used in AI research. Here the author would like to make an important remark, to elucidate some implementations of AI in the real world. The existing approaches or technologies for AI (i.e., cellular automata, genetic algorithms, artificial neural networks, nonmonotonic logic, natural language processing, heuristic systems, Bayesian logic, and many others) present a number of tools that are used by AI scientists for research purposes. It is obvious that AI is not a specific field of science but encompasses a lot of disciplines. As a result, AI has borrowed many tools from these disciplines. This means that whenever an engineer uses these tools in a project, it does not follow that the final product has intelligence. Furthermore, research in AI produces additional tools with better attributes than earlier ones (e.g., heuristic systems, and neuromorphic integrated circuits). Again, when an engineer uses these side products of AI, it does not mean that the final product possesses intelligence. Of course, the use of the side products might improve the performance of a system (i.e., processing power, power consumption, speed of calculations, and many more), but the utilization of these advanced tools does not mean that the final system has intelligence; though some insight into, or improvement of our understanding of, intelligence can be achieved. Still, it is the intelligence of the engineer (and of the user or operator) that makes the system operate or function accordingly. Finally, the same applies when bio-inspired concepts, such as artificial neural networks, neuromorphic engineering, and many others, are introduced in the field of AI/ML. Although these notions assist researchers in introducing new ways of R&D, their utilization in applications does not mean that AI has been implemented.

3.4. Learning Is Not Tuning Parameters

Another remark concerns “learning” and, consequently, “Machine Learning”. “Learning”, in the current state of technology and as the term is commonly used nowadays, mostly through the utilization of Artificial Neural Networks (although other tools exist, such as tensors, information geometry, and many others, all of them using the same principle or strategy), is related to the tuning of the parameters of a multiparameter system in order to obtain the optimum outcome, based on the data used for the tuning and some cost function. Of course, learning is more than this, and the mechanisms by which it is achieved at the biological level differ a lot from the ones implemented in machines. A convincing and admirable critique of pure learning and ANNs has been made by Zador (Zador, 2019), and it provides better insight into learning in brains (human and animal).
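A minimal sketch follows (the author's illustration, with toy data assumed, not code from any cited work) of what this kind of “learning” amounts to in practice: adjusting the parameters w and b of a model until a cost function over the data is minimized.

```python
# "Learning" as parameter tuning: gradient descent on a mean-squared-
# error cost. The model never "understands" the data-generating rule;
# it only adjusts w and b to reduce the cost.
import random

# Toy data generated by y = 3x + 1 plus noise; this rule is hidden
# from the "learner".
data = [(x, 3 * x + 1 + random.gauss(0, 0.1))
        for x in [i / 10 for i in range(50)]]

w, b, lr = 0.0, 0.0, 0.05  # parameters and learning rate

for epoch in range(500):
    # Gradient of the cost with respect to each parameter.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # approaches 3 and 1: tuned, not "understood"
```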

Furthermore, learning is also highly related to connecting already structured knowledge through causation (causal relation or etiology) and not through correlation, among other aspects as well. Data represent the effect, or state what the status is at a particular moment in time and place. Positive correlation between data sets does not mean that there is a causal relation. Proving causality is a very difficult task. In logic (Hurley & Watson, 2018), “cause” is treated as a state that requires one of three conditions: 1) the sufficient condition, 2) the necessary condition, 3) the sufficient and necessary condition. Also, Mill (Mill, 1874) provides five methods for identifying causal connections between events. A thorough study on causality has been performed by Pearl (Pearl, 2009), with very promising results in the AI field. However, the ability to extract causality from a set of facts and rules in the same way that humans do is not yet possible. Current machine learning involves statistical patterns and correlations from large amounts of data, but it lacks intuition or the ability to reason about causality in the way that humans do. There is ongoing research in the field of AI to develop models that can reason about causality and extract causal relationships from data, but this is still an active area of research, and much work remains before AI systems can fully emulate human-level causal reasoning. As can be seen, causation is something that requires effort and has not yet been mechanized over already structured data. It is not merely an optimization that depends on a specific cost function, as happens in most ML projects.
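A small illustration of the point (toy data invented for this sketch): two variables driven by a hidden common cause correlate strongly, yet neither causes the other, so no causal knowledge can be read off the correlation alone.

```python
# Correlation without causation: a hidden confounder z drives both
# observed variables, producing a strong correlation between them.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hidden confounder z (e.g., hot weather) drives both ice-cream sales
# and sunburn cases; neither causes the other.
z = [random.uniform(0, 30) for _ in range(1000)]
ice_cream = [2.0 * v + random.gauss(0, 3) for v in z]
sunburn = [0.5 * v + random.gauss(0, 2) for v in z]

print(round(pearson(ice_cream, sunburn), 2))  # ~0.9, yet no causation
```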

In addition, it has been demonstrated by Jonas & Korting (Jonas & Korting, 2017) that even if an enormous amount of data is available on every possible aspect of a system, together with methods for analyzing it, it can still be impossible to gain insight into the system under investigation; hence it is impossible to acquire knowledge and detect meaningful causality. This means that all these data-driven methods used in AI projects promote no intelligence at all and produce no knowledge. New data might be produced, but knowledge still rests on the analyst's or engineer's judgement, with levels of uncertainty. This is another outcome of the four main points described in the previous paragraphs. Finally, what most companies and R&D entities do is use tools (most of them in software form) that are utilized in AI research, such as heuristic systems, Artificial Neural Networks, and others, in order to connect the project or product with AI. But this creates confusion and abuse of this beautiful field of research.

3.5. Claiming It Does Not Make It Real: The Green Unicorn

Linked with the previous paragraphs, there are companies claiming that their products (either hardware or software) utilize “cutting edge AI”. This is understandable and accepted. This kind of phraseology is common in the marketing business, since metaphors and other strategies, such as effective emphasis and motivation, as explained by Nijs (Nijs, 2017) and Lucas & Britt (Lucas & Britt, 1950), promote products or services and lure more customers or investments.

In other words, the existence of an object or the validity of a sentence does not depend on the written text. If one claims that “Green unicorns are very big horses”, it does not mean that green unicorns exist. In philosophical logic there are different perspectives for avoiding such issues. For example, in the logic literature (Hurley & Watson, 2018) there is reference to the Aristotelian standpoint and the Boolean standpoint. The Aristotelian standpoint recognizes that there is existential import in the statement, which means that if “green unicorns” do not exist then the statement is false. The Boolean standpoint does not recognize any existence; there is no existential import. It keeps a neutral position, and the truth of the statement must be proved by a different method. This issue has also been considered by other philosophers, such as Marcus and Quine, as mentioned by MacFarlane (MacFarlane, 2021).

3.6. Confusion in the Society

For policy makers, decision mechanisms, research institutes, and governmental bodies, the lack of a proper definition and the absence of any logical and scientific connection between the idea of AI/ML and the final product promotes pseudoscience, adds noise to scientific publications, and wastes resources (i.e., manpower, money, time).

In addition, decision mechanisms and policy makers draw conclusions based on false information, introducing needless rules (maybe useless prohibitions?) and promoting (through funding) projects that bring no benefit or progress to AI research as a whole. As explained by Jeans (Jeans, 2020), billions of dollars are spent on AI research with very small return value, due to the growing rate of AI failures; although this can also be explained by the sunk-cost effect (Arkes & Blumer, 1985). Consequently, money that could be directed to proper research on AI (or other vital research areas of science, e.g., cancer research) is redirected to making products that are “baptized” as AI or ML without any valid and sound reasoning or scientific ground. An example is the study made by the European Parliamentary Research Service (EPRS) (Bird et al., 2020), which addresses ethical issues and frameworks related to jobs, responsibilities, and human relationships influenced by AI/ML. All of these are based on abstract definitions (see chapter one of the study) and on no scientific and logical connection between “intelligence” and the technology. Another example is from the Consumer Product Safety Commission (CPSC) (Taylor, 2021) in the USA. The concept of AI described in the document is so general that even machines with very basic control or processing power are considered to have AI. Later, the document explicitly mentions as examples technologies that have a very wide field of applications. In other words, the already established consumer product safety regulations are forced to be updated (through a proposed framework) with new rules which are the same as the previous ones; in reality, they are rephrased to include the words AI and ML. This occurs only because of the belief that AI is present in the consumer products.

In other words, actions, regulations, and many other administrative functions are created and put in force for products for which the existing safety rules are adequate. This is evidence of the confusion that exists in several sectors of society about AI/ML, which activates social and governmental reflexes. The outcome is a doubling of the work, since rules are already there, and a spending of time and resources that could be redirected to vital and more essential human activities.

Finally, it is not yet clear how to detect AI/ML products or how to distinguish AI/ML products from other consumer products, and machine learning is not the same as what happens in humans. Nonetheless, the CPSC document produces a list of questions as a set of criteria to identify products with AI/ML.

3.7. Sugaring the Pill

Very briefly, two terms that appear in the AI literature should be mentioned. These are “strong AI” or “general AI”, and “soft” or “weak AI”. There might be more, but they have the same meanings and objectives. Just for completeness, “strong AI” is what scientists had been expecting to achieve for decades and is equivalent to human intelligence, whilst “soft AI” is related to more specific tasks or objectives. It is obvious that this is just a game with words; not necessarily intentionally. Simply, “strong AI” is what was always understood when scientists mentioned “AI”, and “soft AI” is the conventional technology of the time being utilized to solve current problems. The opinion of the author is that after more than 60 years of research, funding, manpower, and time, the “invention” of new terms, without the corresponding progress, is further evidence of the human inability to build AI machines; although, to the rest of the people, it looks as if some progress has been achieved. Technological progress, yes (i.e., problem solvers, automated theorem provers, faster heuristic algorithms, CNNs, and many more); it has been achieved in hardware and software. But in AI, understood as human intelligence as clarified in the introduction of this paper, the outcome is not as expected, and more fundamental research work is required. A thorough explanation of “strong AI” and its implementation through a formal computer program is given by Searl (Searl, 1990).

3.8. Philosophy’s Perspective

Worthy of discussion is the philosophical aspect of AI, as mentioned previously through Gödel, Lucas, and Penrose. The author favors the conclusions of Lucas and Penrose, for the following reasons:

• Based on Gödel's Incompleteness Theorem and the disjunctive conclusion (either the human mind is not a Turing machine, or there are problems in mathematics that cannot be solved), it can be claimed that “intelligent Turing machines” cannot be built. This is supported by the four main points explained before, which provide additional evidence and proof that human intelligence is not able to build Turing machines with intelligence. From a different perspective, Turing machines cannot approach intelligence, due to their formalism and the strict design constraints mandated by human capabilities. And there is also the option that AI is a problem that cannot be solved.

• Another reason is that the arguments of Lucas and Penrose are similar to the author's opinion, which can be interpreted as follows: intelligence, as a language (similar to a formal system), needs a metalanguage (similar to the formal system plus the Gödelian formula) for its analysis. This can be translated as a meta-intelligence. Therefore, since intelligence (the language) is what a human mind hosts, a higher level of intelligence (the metalanguage) is required in order to analyze, understand, and consequently design and build one; probably a meta-human or a transcendent human. A simple, relevant example from the engineering domain is instrumentation. When an engineer needs to analyze an instrument (i.e., its uncertainty budget) or calibrate it, then a better instrument (i.e., one with higher accuracy, higher precision, and much lower uncertainty than the instrument under analysis) should be used to derive sound and safe results. Another example, not related to engineering, can be animal instinct. From a different perspective, it can also be claimed (using a similar analogy) that human intelligence is the meta-instinct of animals. Clearly, if animals wished to analyze and comprehend their instincts, then intelligence (or meta-instinct) would have to be used. So, if humans want to build AI, then they need to be ultra-intelligent. The fourth reason mentioned at the beginning of the document was related to the “deadlock”. An ultra-intelligent entity would not have this problem, meaning that the ultra-intelligent entity could comprehend and analyze an intelligent entity, or intelligence in general. Of course, as before, the ultra-intelligent entity would still not be able to analyze ultra-intelligence (a kind of ultra-deadlock). Just for completeness, the German-born logical positivist Rudolf Carnap and the Polish-born mathematician Alfred Tarski are a few of those who worked in this field of philosophy, as seen in Tarski (Tarski, 1969).

Besides the philosophical part of AI, an alternative proposal for R&D on AI is recommended next. Since it is impossible to build a machine that possesses AI and ML capabilities, for the reasons explained before, an alternative way is necessary (if AI is something that is really needed). Fortunately, complexity theory can be utilized for performing proper research with scientific rigor and engineering robustness. Specifically, complex adaptive systems can be considered a starting point for this research endeavor.

4. The Proposal for AI Research and Development

Complexity (Mitchell, 2009) is when the behavior of a system depends on the multiple possible interactions of its nonlinear components, interactions that are not part of the initial design. The whole is greater than the sum of its constituents, and a high order of emergence culminates. Intelligence is also considered an emergent property. Therefore, instead of targeting AI and ML directly, an indirect way can be chosen through complexity theory: AI (including ML) as an emergent property of a complex adaptive system.

Complex adaptive systems are non-linear systems with memory or feedback; they are a subset of nonlinear dynamical systems. Some good candidates for a complex adaptive system are artificial neural networks, cellular automata, and genetic algorithms, to name a few. By using complexity theory as the basis of research and increasing the complexity of the system (including the environment), the following proposition can be formed.

Proposition: Artificial Intelligence with learning capabilities can be achieved as an emergent property of a designed complex non-linear adaptive system.
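As a minimal illustration of emergence in one of the candidate substrates named above (a sketch for intuition only; no claim of intelligence is made for it), an elementary cellular automaton such as Rule 110 updates each cell from a tiny local lookup table, yet the global pattern develops structures that were never part of the design:

```python
# Elementary cellular automaton, Rule 110: emergence of global
# structure from a purely local, 8-entry update rule.

RULE = 110
# Map each 3-cell neighborhood (encoded as a number 0..7) to the
# next state of the center cell.
TABLE = [(RULE >> i) & 1 for i in range(8)]

width, steps = 64, 32
row = [0] * width
row[-1] = 1  # single seed cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    # Each cell looks only at its left, own, and right values (with
    # wrap-around), yet interacting "gliders" appear globally.
    row = [TABLE[(row[(i - 1) % width] << 2)
                 | (row[i] << 1)
                 | row[(i + 1) % width]]
           for i in range(width)]
```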

For building this proposition, the author is inspired by (or follows) Luger & Stubbirfield (Luger & Stubbirfield, 1999), where emergent systems are described, and Lucas (Lucas, 1961), where it is mentioned that when the complexity of a machine increases beyond some critical size, emergence happens. Furthermore, Ramus (Ramus, 2017) has shown that general intelligence is an emergent property. The proposition above also expresses the strategy and the method that a research team can follow for performing R&D on AI related to human intelligence or general AI. The method is that, by gradually increasing the complexity of some machines, a threshold will be reached beyond which new, non-predefined or non-predesigned properties will emerge. Intelligence, among others, is expected to be one of them. The benefits of this (indirect) method are the following:

• It is not necessary to define AI and ML, and consequently to produce requirements and V & V for them, as part of the final product or the desired machine.

• There is a rigorous scientific background (nonlinear dynamical systems, and complex modelling) that can be used as a basis for building complex machines. Therefore, logical consequence and validity, without abstractions, vague terms, or subjective definitions, will be present in the research work. In addition, proper requirements (based on engineering principles and standards) and V & V plans and procedures can be produced.

• It provides the advantage of building a machine that satisfies specific criteria and objectives. At the same time, through the increment of complexity, several new attributes of the system that might emanate (without being part of any initial plan or design) can be observed or detected. Hence research on complexity is performed as well, and possible AI manifestation can be achieved.

• Deadlock is avoided, since the focus of interest and main objective is not “intelligence” per se but increasing the complexity of machines.

• Finally, one very important and fundamental objective that can be achieved through this method is the duplication of intelligence, rather than its simulation or emulation. This means that the system will have physical and causal properties that play an important part in the manifestation of intelligence through complexity, as explained in a very insightful way by Searle (Searle, 1990). Note, though, that the goal of AI research is not necessarily to duplicate or replace human intelligence, but rather to create intelligent systems that can perform specific tasks and improve human lives in various ways.
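The method itself can be summarized by a short, fully hypothetical sketch (every name below, upgrade, complexity_of, probe, is an illustrative placeholder, not an API or design prescribed here): keep upgrading the machine, quantify that complexity indeed grows, and probe for behaviors that were never designed in.

```python
import random

# Hypothetical sketch of the indirect strategy: all names are placeholders
# for illustration, not a design proposed by this paper.

random.seed(0)  # reproducible toy run

def upgrade(machine):
    """Stand-in for an engineering upgrade: add one more interacting component."""
    return machine + [random.random()]

def complexity_of(machine):
    """Stand-in complexity measure: here, simply the number of components."""
    return len(machine)

def probe(machine):
    """Stand-in detector for a non-designed property (a deliberately weak proxy)."""
    return complexity_of(machine) > 10 and sum(machine) > 6.0

machine = []  # the "primitive" initial system
for step in range(20):
    machine = upgrade(machine)
    if probe(machine):
        print(f"step {step}: candidate emergent behavior at complexity {complexity_of(machine)}")
```

The point of the sketch is the shape of the loop, not its content: nothing in it defines “intelligence”, yet the loop still yields testable engineering artifacts (upgrades, measurements, probes).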

Please note that there is a difference between complex and complicated systems or problems. Both consist of parts, but complex systems are nonlinear, whilst complicated systems can be linear. There are non-complex (i.e., linear) systems that are complicated (e.g., a spacecraft), and complex systems that are simple (e.g., a chaotic oscillator). The following points should also be considered:

• Methods for detecting AI/ML should be developed and established. This is necessary in order to prove the validity of the hypothesis. Long-term use of the complex machine under different circumstances can probably provide evidence or indications of the manifestation of intelligence.

• The proposed strategy for AI research makes no claim about an AI product until the machine proves to humans that it possesses intelligence (rather than humans proving to humans that the machine is intelligent). For this reason, the previous bullet is necessary in order to avoid circumstances similar to the current state of AI, in which anyone can claim anything. Further work is needed here, though.

• It is important to note that increasing complexity does not always promote AI/ML. There should be some specific attributes inherent in the system in order to usher it toward states in which the probability of AI/ML manifestation is higher. These may be autonomy, the ability of the system to reprogram itself, perception of the environment and of its own states, structured information and knowledge, adaptation, and a kaleidoscopic environment with unpredictable elements, to name a few. Further work is necessary.

• The words “machine” and “system”, which are used interchangeably here, need to be specified: what kind of machine should the research focus on first? For example, it might be a humanoid robot whose complexity the engineers keep increasing. Another example could be an integrated data-handling framework, with instruments, actuators, or robotic assemblies, for space exploration. No matter what kind of machine is chosen, it should satisfy some criteria: 1) it aids humans in a complex problem or application; 2) it can be upgraded, either by the engineers or by itself, with increased complexity in software and hardware; 3) it interacts with a complex and varying environment. The main point here is that there should be a complex entity in a rich environment. Nevertheless, some work on this issue is still needed.

• Measurement of the complexity of the system. This is what quantifies the complexity of the chosen system and confirms that an upgrade of the system does increase the complexity, and by how much. A non-exhaustive and informal list of different measures is provided by Lloyd (Lloyd, n.d.), and a formal one by Clark & Jacques (Clark & Jacques, 2012); a crude example of one such measure is sketched right after this list. Despite these references, more work on the specific chosen system might be required.

• Another point is the environment the system is in. A simple world (e.g., a room with a few objects in it) does not offer a rich enough ecosystem for bringing out all the attributes that a complex system might possess. Therefore, a complex environment is necessary as well. This implies that more time might be needed to detect any emergent property. It is also connected with the first bullet.
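For the complexity-measurement bullet above, the following hedged example shows one crude proxy taken from the spirit of Lloyd’s informal list: approximating algorithmic (Kolmogorov) complexity by compressed size. It inspects only an output stream of the system, and, tellingly, it scores pure randomness highest, which is precisely why a single measure is insufficient and why more work on the chosen system is required.

```python
import os
import zlib

# One crude complexity proxy: compressed size per byte of the system's output.
# Roughly 0 for trivial output, near (or slightly above) 1 for random output,
# because zlib adds a small constant overhead.

def compression_complexity(output: bytes) -> float:
    """Approximate algorithmic complexity of an output stream via compression."""
    if not output:
        return 0.0
    return len(zlib.compress(output, 9)) / len(output)

print(compression_complexity(b"ab" * 500))            # highly ordered -> low
print(compression_complexity(bytes(range(256)) * 4))  # structured     -> intermediate
print(compression_complexity(os.urandom(1000)))       # random         -> high
```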

It is also necessary to clarify the difference between complexity in carbon-based entities (i.e., humans) and complexity in machines. The increase of complexity in biological matter has taken place through numerous biochemical reactions over the course of time on Earth. This process, slow relative to human timescales, made biological matter change. One of the outcomes was that, for a specific biological substrate under a specific set of conditions, the complexity allowed the emergence of what we call “intelligence”.

Similarly, machines need to be changed by adding complexity. This increase of complexity cannot be done by nature, as happened with humans; it will come from the engineers (at least in the beginning; later, perhaps, the machines will be able to upgrade themselves). As the engineers increase the complexity of machines, the critical size (in terms of complexity) could be reached, from which new attributes will emanate in the machines. This continuous “machine evolution” increases the probability of emergent intelligence. Note that, based on the above, humans do not have full control of the outcome. The engineers only provide the necessary conditions to support the “machine evolution” and thereby, probably, achieve the desired outcome; they do not control the outcome.

Finally, two studies that can use this alternative method are described. The first is an anthropomorphic automaton machine (ANAM); informally, an android. As mentioned earlier in this section, this type of machine is a good candidate for the alternative method. One reason for making it anthropomorphic rests on several studies on embodied cognition and robotics (Pfeifer & Bongard, 2007; Anderson, 2003; Vernon et al., 2010; Metta, Sandini, Vernon, Natale, & Nori, 2008). In that way, the system has a precondition that increases the probability of reaching a future state in which intelligence may emerge. Starting with this “primitive” system and increasing its complexity (in software, in hardware, or both), it is expected to reach a level where new non-predefined (or non-predesigned) properties emerge, among which intelligence might be present. Of course, the environment, and the interaction with it, is an important factor influencing the android’s evolution, driven by the engineers in the beginning and possibly by the android itself later.
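As a purely illustrative, hedged sketch of this embodiment idea (every name below is a placeholder, not a design of the ANAM), the “primitive” starting system can be as small as a sense-act loop coupling the agent to its environment; it is this loop whose software and hardware complexity the engineers would then keep increasing.

```python
# Minimal sense-act loop: a hypothetical "primitive" embodied starting point.
# Upgrades would enrich the sensors, the policy, and the environment itself.

def sense(position, world):
    """Read the local state of the environment at the agent's position."""
    return world[position % len(world)]

def act(position, percept):
    """A deliberately trivial policy; upgrades would replace it with richer ones."""
    return position + (1 if percept == "free" else -1)

world = ["free", "free", "obstacle", "free", "free", "obstacle"]
position = 0
for t in range(10):
    percept = sense(position, world)
    position = act(position, percept)
    print(f"t={t}: percept={percept}, position={position}")
```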

The second study in which the proposed method may be used has a different architecture from the previous one and a bigger scale. It bears no resemblance to humans, but its complexity can nevertheless give rise to intelligence-like attributes. Its core infrastructure is a digital network connecting different machines: machines that generate information in digital form (e.g., instruments, sensors, stock prices from an exchange database, signals from satellites, cameras, human operators or users, etc.), machines that consume information (e.g., drones, switches, actuators, motors, satellites, alarms, human operators or users, etc.), machines that store information (e.g., mass memories, databases, etc.), and machines that process information. All these machines, irrespective of their topology (or geographical position), can be connected digitally and exchange data. The machines responsible for processing the diverse types of data transform “raw” information into higher-level information, such as structured data and metadata, either on demand (by operators or users) or by following some objective (e.g., data from thermal cameras being processed to detect fire). Furthermore, even higher levels of information may be produced by combining the structured data through a set of rules (e.g., logic, probabilities), increasing the “knowledge” of the system and hence achieving learning capabilities. This type of system constitutes a digital ecosystem in which intelligence may emerge by increasing its complexity in hardware and software. That intelligence might differ from the human kind, owing to the different environment and interactions within it, but equivalent attributes may nevertheless emerge through the increment of complexity.

Note that this study is based on already established notions in engineering (specifically in digital communications) that are highly related to the Internet of Things (IoT) or, in more general terminology, machine-to-machine (M2M) interfaces. Since 2000, numerous activities have been carried out across industrial sectors, such as the pharmaceutical industry (Brecht, 2012), NASA (Bluck, 2006), and others (Sharma, 2023). The European Space Agency (ESA) reviewed the concept and initially supported the idea by funding, through GSTP, a project in 2019 under the title “An Intelligent Machine-to-Machine Framework for Services Based on Satellite Planetary and Earth Observation, and Exploration Data” (ANIMATED). The main objectives were to conceptualize and propose a basic architecture and to provide interfaces (hardware and software) for machine-to-machine and man-to-machine interactions. In addition, the potential use of such a system as a novel infrastructure for space applications, such as planetary science and exploration, but also for Earth applications utilizing Earth observation satellites, was envisaged. Scientists from different disciplines, such as geologists, astronomers, biologists, physicists, engineers, and operations personnel, will benefit from such an intelligent and complex system.
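The data-flow idea behind this second study can be sketched, again in a hedged and hypothetical way, as a tiny topic-based publish/subscribe network: sources emit raw data, processors raise it to structured information, and sinks act on it. None of the names below (Message, Network, the topics) come from the ANIMATED project; they are illustrative only.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hedged sketch of the M2M data-flow idea: sources publish raw data,
# processors raise it to higher-level information, sinks react to it.

@dataclass
class Message:
    topic: str
    payload: object

@dataclass
class Network:
    """A toy digital network: topic-based publish/subscribe between machines."""
    subscribers: dict = field(default_factory=dict)

    def subscribe(self, topic: str, handler: Callable[[Message], None]):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, msg: Message):
        for handler in self.subscribers.get(msg.topic, []):
            handler(msg)

net = Network()

# Processor machine: turns "raw" thermal readings into structured information.
def fire_detector(msg: Message):
    if msg.payload > 60.0:  # degrees Celsius; an arbitrary illustrative rule
        net.publish(Message("alert/fire", {"source": "thermal-cam-1", "reading": msg.payload}))

# Sink machine: an actuator/alarm reacting to the structured information.
def alarm(msg: Message):
    print("ALARM:", msg.payload)

net.subscribe("raw/thermal", fire_detector)
net.subscribe("alert/fire", alarm)

# Source machine: a thermal camera publishing raw data.
for reading in (21.5, 35.0, 72.3):
    net.publish(Message("raw/thermal", reading))
```

In a real M2M framework the transport would be a satellite link or the Internet rather than in-process calls, but the architectural point, raising “raw” information to higher-level “knowledge” through chained machines, is the same.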

5. Conclusion

In this paper, AI R&D has been analyzed from different perspectives. The focus of the analysis was on the capability of humans, and especially of engineers, to build intelligent machines. It is claimed and justified that it is impossible to directly build AI at the present time. The first two points are the lack of a definition of the word “intelligence” and the incapability of the human brain to understand its own existence. The main consequences of the first point are that no requirements and no V & V plans can be created; therefore, engineers have no way to build systems with “intelligence”. In addition, a third point, the lack of scientific consensus on the definition of “intelligence”, compounds the problem for engineers. The main consequence of the fourth point is that there are valid and sound syllogisms, from mathematical logic and philosophy, to the effect that the human brain cannot analyze “intelligence” precisely enough to provide the foundation for building it. Furthermore, the usage of the terms AI and ML in the scientific and engineering community has created confusion, owing to the above-mentioned points. For this reason, it is beneficial not to make claims without proof and not to leave assumptions or criteria tacit in published work. Moreover, confusion at the public level impacts political decisions and decision-making mechanisms in general. As Ludwig Wittgenstein famously put it in his book (Wittgenstein, 1922):

What we cannot speak about [clearly], we must pass over in silence.

Furthermore, an alternative approach to performing research on AI and ML has been presented. The main idea behind it is not to target building machines with AI directly, but to increase complexity and introduce nonlinear adaptive systems in machines. This makes it possible to achieve intelligence as an emergent property of a complex machine that can be used for complex applications. Finally, two studies are mentioned that are good candidates for utilizing this alternative method.

Acknowledgements

I would like to thank Dr. Alen Turnwald (e: fs TechHub GmbH) and Dr. Ovidiu Ratiu (Control Data Systems CDS) for the fruitful conversations and comments on my ideas. Furthermore, I extend my acknowledgment to Dr. Dietsje Joles and Dr. Laura Steenbergen (both at the Leiden Institute for Brain and Cognition) for the interview we had on intelligence and for the prolific ideas and concepts they conveyed to me, giving me better insight into the subject.

NOTES

1. Clearly, the author here assumes that, formally, intelligence is equivalent to a language; hence the parallelism or analogy drawn.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Anderson, M. L. (2003). Embodied Cognition: A Field Guide. Artificial Intelligence, 149, 91-130.
https://doi.org/10.1016/S0004-3702(03)00054-7
[2] Arkes, H., & Blumer, C. (1985). The Psychology of Sunk Cost. Organizational Behavior and Human Decision Processes, 35, 124-140.
https://doi.org/10.1016/0749-5978(85)90049-4
[3] Artificial Intelligence (2022). Cambridge Dictionary.
https://dictionary.cambridge.org/dictionary/english/artificial-intelligence
[4] Artificial Intelligence (AI) (2021). Techopedia.
https://www.techopedia.com/definition/190/artificial-intelligence-ai#:~:text=Artificial%20intelligence%20(AI)%2C%20also,is%20not%20a%20single%20technology
[5] Artificial Intelligence, n. (2022). Oxford English Dictionary OED.
https://www.oed.com/view/Entry/271625?redirectedFrom=artificial+intelligence#eid
[6] Barnes, B. (1985). About Science. Basil Blackwell Inc.
[7] Bird, E., Fox-Skelly, J., Jenner, N., Larbey, R., Weitkamp, E., & Winfield, A. (2020). The Ethics of Artificial Intelligence: Issues and Initiatives. Panel for the Future of Science and Technology, Scientific Foresight Unit. European Parliament.
https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf
[8] Bluck, J. (2006). NASA Ames Research Center. NASA and M2Mi Corp. to Develop “Automated M2M Intelligence”.
https://www.nasa.gov/centers/ames/news/releases/2006/06_72AR.html
[9] Brecht, D. (2012). Smart M2M Pharma: A Free Mobile App and Service for Med Reps. IoTEvolution.
https://www.iotevolutionworld.com/m2m/articles/315686-smart-m2m-pharma-free-mobile-app-service-med.htm
[10] Churchland, P., & Churchland, P. (1990). Could a Machine Think? Scientific American, 262, 32-37.
https://doi.org/10.1038/scientificamerican0190-32
[11] Clark, J. B., & Jacques, D. R. (2012). Practical Measurement of Complexity in Dynamic Systems. Procedia Computer Science, 8, 14-21.
https://www.sciencedirect.com/science/article/pii/S1877050912000099
https://doi.org/10.1016/j.procs.2012.01.008
[12] Coder, D. (1969). Gödel’s Theorem and Mechanism. Philosophy, 44, 234-237.
https://doi.org/10.1017/S0031819100024608
[13] Colombo, M., & Scarf, D. (2020). Are There Differences in “Intelligence” between Nonhuman Species? The Role of Contextual Variables. Frontiers in Psychology, 11, Article 2072.
https://doi.org/10.3389/fpsyg.2020.02072
[14] Dick, J., Ryan, M., Wheatcraft, L., Zinni, R., Baska, K., Fernandez, L. J. et al. (2012). Guide for Writing Requirements. International Council on Systems Engineering (INCOSE).
[15] Dreyfus, H. (1965). Alchemy and Artificial Intelligence. RAND Corporation.
[16] Dreyfus, H. (1992). What Computers Still Can’t Do. MIT Press.
[17] Dreyfus, H., & Dreyfus, S. (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Blackwell.
[18] Dreyfus, H. L. (1974). Artificial Intelligence. The ANNALS of the American Academy of Political and Social Science, 412, 21-33.
https://doi.org/10.1177/000271627441200104
[19] European Cooperation for Space Standardization (ECSS) (2009). Space Project Management Planning and Implementation. ESA Requirements and Standards Division.
[20] European Cooperation for Space Standardization (ECSS) (2017). System Engineering General Requirements. ESA Requirements and Standards Division.
[21] Feferman, S., Dawson, J. W., Goldfarb, W., Parsons, C., & Solovay, R. M. (1995). Kurt Gödel: Collected Works, Volume III: Unpublished Essays and Lectures. Oxford University Press.
[22] Roth, G., & Dicke, U. (2005). Evolution of the Brain and Intelligence. Trends in Cognitive Sciences, 9, 250-257.
https://doi.org/10.1016/j.tics.2005.03.005
[23] Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173-198.
https://doi.org/10.1007/BF01700692
[24] Gottfredson, L. S. (1997). Mainstream Science on Intelligence: An Editorial with 52 Signatories, History and Bibliography. Intelligence, 24, 13-23.
https://doi.org/10.1016/S0160-2896(97)90011-8
[25] Haier, R. J. (2016). The Neuroscience of Intelligence. Cambridge University Press.
https://doi.org/10.1017/9781316105771
[26] Hurley, P. J., & Watson, L. (2018). A Concise Introduction to Logic. Wadsworth Cengage Learning.
[27] Information Technology—NIST (n.d.). ANSI INCITS 172-2002 (R2007). American National Standard Dictionary of Information Technology (ANSDIT) (Revision and Redesignation Of ANSI X3.172-1996).
[28] ISO/IEC 22989:2022 (07/2022). Information Technology—Artificial Intelligence—Artificial Intelligence Concepts and Terminology.
[29] Jeans, D. (2020). Editor’s Pick. Forbes.
https://www.forbes.com/sites/davidjeans/2020/10/20/bcg-mit-report-shows-companies-will-spend-50-billion-on-artificial-intelligence-with-few-results/?sh=254112417c87
[30] Jonas, E., & Kording, K. P. (2017). Could a Neuroscientist Understand a Microprocessor? PLOS Computational Biology, 13, e1005268.
https://doi.org/10.1371/journal.pcbi.1005268
[31] Kern, L. H., Mirels, H. L., & Hinshaw, V. G. (1983). Scientist’s Understanding of Propositional Logic: An Experimental Investigation. Social Studies of Science, 13, 131-146.
https://doi.org/10.1177/030631283013001007
[32] Koelsch, G. (2016). Requirements Writing for System Engineering. Apress.
https://doi.org/10.1007/978-1-4842-2099-3
[33] Lloyd, S. (n.d.). Measures of Complexity a Non-Exhaustive List.
https://web.mit.edu/esd.83/www/notebook/Complexity.PDF
[34] Lucas, D. B., & Britt, S. H. (1950). Advertising Psychology and Research: An Introductory Book. McGraw-Hill Book Company.
https://doi.org/10.1037/13239-000
[35] Lucas, J. R. (1961). Minds, Machines and Gödel. Philosophy, 36, 112-127.
http://www.jstor.org/stable/3749270
https://doi.org/10.1017/S0031819100057983
[36] Luger, G. F., & Stubblefield, W. A. (1999). Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Addison-Wesley.
[37] McDermott, D. (1995). Penrose Is Wrong. PSYCHE: An Interdisciplinary Journal of Research on Consciousness, 2, 66-82.
[38] MacFarlane, J. (2021). Philosophical Logic. Routledge Taylor & Francis Group.
[39] Metta, G., Sandini, G., Vernon, D., Natale, L., & Nori, F. (2008). The iCub Humanoid Robot: An Open Platform for Research in Embodied Cognition. In Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems (pp. 50-56). Association for Computing Machinery.
https://doi.org/10.1145/1774674.1774683
[40] Mill, J. S. (1874). System of Logic. Harper & Brothers Publishers.
[41] Mitchell, M. (2009). Complexity: A Guided Tour. Oxford University Press Inc.
[42] MSV, J. (2018). Here Are Three Factors That Accelerate the Rise of Artificial Intelligence. Forbes.
https://www.forbes.com/sites/janakirammsv/2018/05/27/here-are-three-factors-that-accelerate-the-rise-of-artificial-intelligence/?sh=52cbdccaadd9
[43] Nickerson, R. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2, 175-220.
https://doi.org/10.1037/1089-2680.2.2.175
[44] Nijs, L. (2017). Visual and Verbal Metaphors in Advertisements. Tilburg University, Communication and Information Sciences Business Communication and Digital Media.
[45] Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Cambridge University Press.
https://doi.org/10.1017/CBO9780511803161
[46] Penrose, R. (1994). Shadows of the Mind. Oxford University Press.
[47] Pfeifer, R., & Bongard, J. (2007). How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press.
[48] Piaget, J. (2001). The Psychology of Intelligence. Routledge.
[49] Popper, K. (1959). The Logic of Scientific Discovery. Psychology Press.
https://doi.org/10.1063/1.3060577
[50] Ramus, F. (2017). General Intelligence Is an Emerging Property, Not an Evolutionary Puzzle. Behavioral and Brain Sciences, 40, E217.
https://doi.org/10.1017/S0140525X1600176X
[51] Robertson, S., & Robertson, J. (2012). Mastering the Requirements Process: Getting Requirements Right. Addison-Wesley Professional.
[52] Rosenthal, R., & Fode, K. L. (1963). The Effect of Experimenter Bias on the Performance of the Albino Rat. Behavioral Science, 8, 183-189.
https://doi.org/10.1002/bs.3830080302
[53] Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach. Pearson Education Limited.
[54] Ryan, M. J., Wheatcraft, L. S., Dick, J., & Zinni, R. (2015). On the Definitions of Terms in Requirements Expressions. INCOSE International Symposium, 25, 169-181.
https://doi.org/10.1002/j.2334-5837.2015.00055.x
[55] Searle, J. R. (1990). Is the Brain’s Mind a Computer Program? Scientific American, 262, 26-31.
https://doi.org/10.1038/scientificamerican0190-26
[56] Sharma, R. (2023). IoT/M2M Technology Trends in the Logistics Industry. The Fast Mode.
https://www.thefastmode.com/quick-take/12266-iot-m2m-technology-trends-in-the-logistics-industry
[57] Sternberg, R. J. (2018). Theories of Intelligence. In S. I. Pfeiffer, E. Shaunessy-Dedrick, & M. Foley-Nicpon (Eds.), APA Handbook of Giftedness and Talent (pp. 145-161). American Psychological Association.
https://doi.org/10.1037/0000038-010
[58] Sternberg, R. J., & Kaufman, J. C. (2002). The Evolution of Intelligence. Psychology Press.
[59] Tarski, A. (1969). Truth and Proof. Scientific American, 220, 63-77.
https://doi.org/10.1038/scientificamerican0669-63
[60] Taylor, J. N. (2021). Artificial Intelligence and Machine Learning in Consumer Products. Office of Hazard Identification and Reduction. Consumer Product Safety Commission (CPSC).
[61] Trends, M. (2020). What Are the Important Factors That Drive Artificial Intelligence? Analytics Insight.
https://www.analyticsinsight.net/what-are-the-important-factors-that-drive-artificial-intelligence/
[62] Turing, A. M. (1938). On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction. Proceedings of the London Mathematical Society, s2-43, 544-546.
https://doi.org/10.1112/plms/s2-43.6.544
[63] Vernon, D., Metta, G., & Sandini, G. (2010). Embodiment in Cognitive Systems: On the Mutual Dependence of Cognition and Robotics. In J. Gray, & S. Nefti-Meziani (Eds.), Embodied Cognitive Systems (pp. 1-12). Institution of Engineering and Technology (IET).
https://doi.org/10.1049/PBCE071E_ch1
[64] Wikipedia (2022). Artificial Intelligence. Wikipedia, The Free Encyclopedia.
https://en.wikipedia.org/wiki/Artificial_intelligence
[65] Wilkins, M. (1928). The Effect of Changed Material on Ability to Do Formal Syllogistic Reasoning. Archives of Psychology, 102, 5-77.
[66] Wittgenstein, L. (1922). Tractatus Logico-Philosophicus. Harcourt, Brace & Company.
[67] Zador, A. M. (2019). A Critique of Pure Learning and What Artificial Neural Networks Can Learn from Animal Brains. Nature Communications, 10, Article No. 3770.
https://doi.org/10.1038/s41467-019-11786-6
[68] Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J. et al. (2022). The AI Index 2022 Annual Report. AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University.
[69] Zhu, Q., Song, Y., Hu, S., Li, X., Tian, M., Zhen, Z. et al. (2010). Heritability of the Specific Cognitive Ability of Face Perception. Current Biology, 20, 137-142.
https://www.sciencedirect.com/science/article/pii/S096098220902123X
https://doi.org/10.1016/j.cub.2009.11.067
