Artful Deception, Languaging, and Learning—The Brain on Seeing Itself

Abstract

Despite having named ourselves Homo sapiens—a designation contingent on word/reason (logos) as our chosen identifier—recent evidence suggests language is only a small fraction of the story. Human beings would be more aptly named Homo videns—seeing man—if percentage of cortex area per modality determined the labeling of an organism. Instead, the sentential ontology of language philosophers and linguists persists in spite of the growing body of cognitive research challenging the language instinct as our most defining characteristic. What is becoming clearer is that language is palimpsestic. It is like a marked transparency over visuospatial maps, which are wired to sensorimotor maps. The left lateralized interpreter uses language to communicably narrativize an apparent unity, but people are not the only fictionalizing animals. This examination looks to cognitive and psychological studies to suggest that a prelinguistic instinct to make sense of unrelated information is a biological consequence of intersections among pattern matching, symbolic thinking, aesthetics, and emotive tagging, which is accessible by language, but not a product thereof. Language, rather, is just an outer surface. Rather than thinking man, playing man, or tool-making man, we would be better described as storytelling animals (narrativism). Like other social mammals, we run simulation heuristics to predict causal chains, object/event frequency, value association, and problem solving. The post hoc product is episodic fiction. Language merely serves to magnify what Friedrich Nietzsche rightfully identified as an art of dissimulation—lying. In short, the moral of the story is that we are making it all up as we go along.


1. Introduction

When I started studying literature as an undergrad, it never occurred to me that I should know anything about the systems, architecture, or anatomy of the organ doing all the work. I went to class and read assigned texts like everyone else. I imitated the analytical style common to my discipline, formatted according to the right style guide’s most recent edition, and submitted essays modeling what published authors in the field wrote. This mimicry and practice is how skill sets are learned in higher education: by trial and error, often half-blind, half-informed, and mostly imitative.

Students come to school knowing little to nothing―why else would they be there? They often have some naïve interest in the subject, like visions of a future occupation, admission to graduate school, or attaining a specific tax bracket post-graduation. They read published voices in the field, copy the style and lingo, plug and play frequently used jargon, drop names worth name dropping, and then gradually acquire enough subconscious skill to develop their own expression of that craft. That is what I did because it was what everyone else was doing; that is also what my students do.

Most education apparently happens like that―without any requisite knowledge of the mind or brain on the part of the educator or learner. Children, for example, learn to speak and walk without any notion of how the brain works. Most instructors also know very little about the seat of learning, even though it is what they are catering to, molding. They know the effects, not the causes, and historically that has been enough to invent the trivium, quadrivium, rule of law, and categorical imperative. The creation of Greek democracy and drama required no deep, structured, scientific knowledge of electrophysiology or neural networks.

2. A Brief History: Education without a Brain

Going back more than 2500 years to Confucius―one of the first teachers paid for academic instruction (Yao, 2000)―it seems pretty obvious that the brain can do its thing without having to actually look at itself under a microscope. Neither Confucius nor Socrates, archetypal sages, knew anything about neurons or action potentials but shaped the minds of their pupils nonetheless. Apparently they didn’t need to know. Behavior, observation, and deduction were enough to give rise to theories of mind and how to condition the psyche.

Going back even further, to the agricultural revolution, the species has invented an enormous wealth of technology and concept models to understand, manipulate, and exploit our environment, all the while sort of feeling our way through the forest without a map. We have constructed alphabets, architecture, democracy, plumbing, vaccinations, microwaves, cell phones, satellites, nuclear reactors, global geography, coinage, credit, calculus, bioengineering, steel manufacturing, maps of galaxy clusters, and a myriad of other cultural, scientific, and political constructs. The list goes on. Few great historical inventions required an intimate understanding of cognitive modalities, functional neuroscience, or cognitive psychology, all latecomers to the game of knowledge and culture.

Most of philosophy, likewise, existed long before we knew much of anything about the brain and its operations. In fact, Aristotle (1994) thought the brain was an unimportant part of anatomy, placing the seat of consciousness in the heart, like the Egyptians (Shields, 2010). He observed that the living brain, if a hole were bored through the skull, is not sensitive to touch―it has no nerve endings and thus does not feel pain if poked. He noted that there is no blood in the brain, and that parts of the brain could even be damaged and removed without killing the subject, while the heart, by contrast, if damaged or removed, resulted in death. There were, however, other more scientifically minded thinkers who recognized the importance of the brain―Hippocrates, for example―even in ancient Greece, but an intimate understanding of the brain, its systems, regional modalities, and their functions only began concretizing as a legitimate science in the late nineteenth century. By all accounts, knowing the brain was never a prerequisite for knowing the effects of its operations―observable behavior. Inference and speculation, deductive reasoning, and observation were enough to develop systems of education: techniques for enhancing learning, skill acquisition, and craft mastery.

3. The Mind-Body Divide

However, when psychology earned legitimacy in the academy in the 19th century, how we thought about the brain and its relationship to the body began to change. Nietzsche (1961), like Hippocrates, postulated that there could be no mind without body, that “There is more wisdom in your body than in your deepest philosophy”. Still, a long-running dualist account of the mind-body divide persisted up to the birth of brain science, and arguably remains even now among various schools of thought, something Nietzsche argues is thanks to (read: can be blamed on) Plato’s Socratic dialogues.

The notion that there is a difference between mind and brain, too, reflects Platonic dualism, implying a kind of metaphysical quality to cogitation. This is easily discoverable in most scientific texts up through the fin de siècle in Europe and the Americas, a product of the Great Chain of Being and religious dogma (Lovejoy, 1964). Mind was not a consequence, function, or aesthetic of the physical features or functions of the brain; it was the soul―an essence of the divine contained in man’s physical packaging―his corpus. It was not the brain that distinguished mankind from the rest of the living things on earth, nor the body; plenty of other mobile creatures had both. It was psyche, then, that set human beings above all other animals, mind, not brain.

Descartes struggled with this dualist model, trying to understand the relationship between soul stuff and physical stuff. He tried to find a way for the ephemeral mind to interact with the automaton-body, finally coming to the conclusion that vibrations of the pineal gland had to be how mind-spirit communicated with the brain to control body actions (Lokhorst, 2013). Animals, on the other hand, he asserted, were mere automata, soulless and thus not alive the way human beings were; they only existed in this world, while our souls were really alive elsewhere. Our bodies were part of that Platonic vision of a lower, corrupt physical world, which is really an illusion, a place where souls are tested, or as Dante (1995) calls Earth after looking back at it from one of the higher planes of the celestial spheres, “the little threshing floor/that so incites our savagery”―where the wheat is separated from the chaff. Here was an impossible situation for a scientist, seeing as the ephemeral cannot be tested, repeated, measured.

In just such a way, time and again in historical scientific and scholarly discourse, mind has been relegated to the world beyond; the counter to that axiomatic presupposition pointed to something more dangerous―the possibility that there is no metaphysical realm. This is where existentialism would eventually lead and where Darwinian evolution would begin to threaten the Great Chain of Being, but until that time, placing the mind into the physical body would present too much conflict with the West’s dominant doctrine. Rather than sever ties with the beyond, Western thought waited for William James’ pragmatism and Hermann Ebbinghaus’ (1913) pioneering memory studies; until then, mind and brain remained estranged as part of the mystery of being.

4. Predictive Narrativizing: An Evolutionary Developmental Function

The modern sciences have the luxury of greater separation from belief-based paradigms, and take to the problem with measurement, statistical and gradient sets of valuations, and without the yoke of dichotomous thinking. Eventually, brain and behavioral specialists took up evolutionary theory and began looking at consciousness, thought, and mind-brain as an emergent property of embodied cognition, something that most complex organisms with central nervous systems have at least in some form. Rats, dogs, pigs, horses, cats, baboons, dolphins all think, feel, have personalities and taste preferences. To some degree, they can imagine hypothetical fictions about the behavior of other animals and the environment, which allows them to hunt or alter behavior to adapt to shifting environmental patterns. Predators, at the very least, have to have this kind of simulative cognition to survive―their mental representations partner up with their emotional and homeostatic systems to spin salient stories. It may not be a consciously constructed narrative, but a narrative it is nonetheless.

Anecdotally, let’s take my cat, for example. She is clever enough to fabricate brief future-fictions based on her observations of my roommate’s cat. His predictable behavior allows her to realize that when he goes into one end of a hallway, she can sneak to the other end, wait for him to pass by the corner overhang where she lies in wait, and then pounce to get a good, hair-raising fright out of him. She even seems to find delight in this, a reward for guessing his pathway and probable behavior correctly. Sometimes she tricks me into playing this non-threatening game in the same way, manipulating the “higher” ape for her own amusement.

While this may seem a bit quaint as an illustration, there is more to this example than convenience and a fondness for my own pet’s intellect. My cat has the same basic system of forethought as a wild predator who has evolved predictive narrativizing to survive. Both the domestic and the wild feline can also appropriate the same survival instinct to aesthetic or arbitrary actions as well―to play. Most complex social animals, in fact, learn by playing, modeling, imitating. Children play house. Dogs play fight. Social animals make believe, pretend; they engage in dramatic performances as an in-built developmental strategy. That, alone, would be better grounds for calling human beings Homo ludens instead of Homo sapiens, as Johan Huizinga (1950) suggests.

From this perspective, it seems humans are not the only members of the animal kingdom who tell stories, but there is more to it than just explaining animal behavior. Mimicry suggests something about how the brain works and how we learn. While we haven’t needed to measure the length of axon processes or explain how sodium, potassium, and calcium affect potentiation in order to learn, as most of living history can attest, other members of the animal kingdom likewise do fairly well without culture or language. All we needed to develop was the ability to play act, not even schools or concepts. What all this is getting at is that knowing about neural mechanisms, anatomical structures, and neurological systems is perhaps, at least in some very fundamental ways, not as useful in understanding mental space or how it works as the 21st century cognitive neuroscience revolution would have us believe.

What much of the literature does suggest, however, is that we have a kind of preconscious awareness of some of the operations of mind that higher social animals both genetically and culturally inherit, something Heidegger (1977) made a very convincing case for. The system exists below the level of conscious access most of the time, like when a group of lions work together to catch their next meal. Human beings harness and tap into that narrative mode of thinking when we convince a friend to share a taxi fare with us, teach a child how to tie his shoe, train a pet to do a new trick, or even coax a bamboo plant to grow in a coil around a small wooden dowel. We get it, somehow, that things adapt to external pressures and can thus apply those external pressures to modify internal models, values, and systems of knowledge―even the very structural integrity of a growing body and mind. Supposing, then, that this mode of cognition is not just another -ism, but a useful lens for peering through the cracks in consciousness, a question arises: where does this innate form of learning come from?

5. Understanding the Multimodal Network

Frederick Turner (1992) explained to me once years ago that human beings were the only animals who had domesticated themselves. This point, like the reveal in a murder mystery, seems so obvious in retrospect, like so many hard-to-see truths. While many higher organisms can fabricate preconscious fictions to hunt other animals, play with each other, and develop social hierarchies, we are the only species we know of right now who have trained ourselves. We change our own behaviors and beliefs, test and confect new ways to modify how we act and think, preserve our discoveries in storehouses of knowledge, and try to measure, understand, and harness every part of ourselves and our environment. How? How have we been able to do this, where no other animal has been able to, at least not to the extent that we have? No other animal, after all, has managed to get off this little planet, much less write a symphony or a sonnet.

5.1. Pattern Recognition

Some theories posit that the rise of human consciousness is a gradual complexification of pattern recognition. This is the same kind of trap Descartes suffered from in explaining the mind-body problem―oversimplifying to a simple cause-effect theory. Behaviorists, too, spearheaded years of psychological research on this premise―animal behavior, human behavior included, was little more than programmed stimulus response calibrated through conditioning. While they are not entirely wrong, there is plenty of reason why that psychological movement has gone out of vogue. Human beings, as well as other complex organisms, are more than just pattern readers―bottom-up thinking. However, this capacity is, I would argue, a key ingredient in the progression of cognition toward more metacognitive functions of mind. It is just not the whole story, merely a part.

5.2. The Languaging Instinct

Other theorists argue that it is language which gave us the great leap forward. In this Sentential Paradigm, as Patricia Churchland (1989) calls it, the development of a complex combinatorial sign system made it possible for the mind-brain to think in more abstract ways, where a symbol can mean more than one thing depending on its context. The same word activates different semantic values depending on the other words it is placed with in a sentence, shifting its meaning depending on where in the sentence it sits, what Ezra Pound (1987) would have called logopoeia―charging a word with meaning through its syntactical relationship within a larger sentence. The tone and situation also offer other contextual influences that dampen certain denotative or connotative values of a word, so that saying, “That looks great on you” can literally mean what it says or can mean the exact opposite when stated sarcastically, ironically. Sometimes, as the saying goes, a cigar is just a cigar, but sometimes it isn’t. Sometimes a pipe is not a pipe if it is a painting of a pipe, as in René Magritte’s (1929) “The Treachery of Images”. All of this, crudely and in short, reflects the late 19th through the 20th century trends in language philosophy that claim human cognition is language-like, born of our language instinct (Pinker, 1994), and/or too enframed in language-like representation processing to be parsed from our linguistic, symbolic enculturation.

5.3. Conceptual Metaphor & Representation

What the language-as-cause arguments stem from is largely a focus on symbolic thinking―representations. It is perhaps best expressed and examined in George Lakoff and Mark Johnson’s Metaphors We Live By (Lakoff, 1989). That work gave rise to more contemporary conceptual metaphor theory (CMT), which holds that cognition is basically metaphoric in nature, meaning that we think in terms of mappings where we understand “one conceptual domain (e.g., love) in terms of another conceptual domain (e.g., journeys)” (Katz, 1998). Evidence for this cognitive model is largely behavioral, but there is a growing amount of support to suggest there is at least some measure of significance to it.

5.4. Associative Palimpsests

The principles underlying CMT, however, go back further than verbal-linguistic forms of expression, which are relegated to social communication. Newer revisions to CMT suggest that metaphorical thinking is prelinguistic and preconscious, where the behavior is an external expression of aesthetic system operations. Evolutionary anthropologists, for example, look to early tools discovered as sacred objects buried with the dead, clearly not to be used for the function that the tool was originally designed for (Carbonell et al., 2001). At the point that hand axes became non-functional objects, they represented symbols of status or of artistry, the way one canvas painting may only be worth $20 while another is worth $200,000. Buried with the dead, these ceremonial objects take on some value other than cutting meat or breaking open nuts. They turn into something sacred―set aside―which changes their appreciable quality; the object becomes rich in symbolic connotation, a figurative meaning, not a literal one.

Any token imbued with a set-aside-ness alters the value of that object, indicating an abstraction of it beyond its original purpose or function. The problem with the language argument, in part, however, is that it holds verbal-linguistic language above other forms of symbolic expression. Before language can even really begin to acquire a figurative level of meaning, the capacity to think symbolically must already exist. Tools, gestures, images, objects, even utterances all have the latent potential to be representative of more than just what they appear to be―something any person who collects things subconsciously knows. We’re attracted to the object aesthetically, perhaps even emotionally, through some association or projection of value that the thing represents to us, but the thing itself has none of that thrown (given) meaning. Most spoons, for example, are just tools for scooping liquid or soft foods in a way that is easy to deliver to the mouth. People who collect little novelty spoons in every city they visit, however, are clearly not using those spoons to eat ice cream or stir coffee with; the tool becomes a trinket that has an extra quality, which to someone else may not be apparent or given.

5.5. Pleasure Principle

We could argue that this kind of symbolic, associative thinking motivated early human beings to ornament themselves. If some object, say a shell, has a beautiful iridescent surface, the beauty of that object may make the object itself come to represent that taste preference. The sensory response its beholder experiences―a feeling of pleasure―would become associated with the object if experienced every time that person sees the shell. Since what fires together wires together (Hebb, 1949), it seems no large stretch to say that the concurrent perception and sensation would become bound together. This kind of conditioning would simultaneously activate populations of neurons that signal a kind of pair bonding of the aesthetic (sensory) and the emotive (pleasure reward). That association turns the shell into a symbol for the pleasing experience of beauty, which might have motivated the person who saw beauty in the object to bore a hole through it, string it on a strand of leather, and tie it around a neck, braid it into the hair, or bind it around a wrist.
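To make the pairing mechanism concrete, here is a minimal sketch of Hebbian co-activation, assuming a toy two-unit model rather than anything from the cited studies; the unit names, learning rate, and co-occurrence odds are illustrative assumptions only.

```python
import random

# A toy Hebbian pairing ("what fires together wires together"): the connection
# between a shell-percept unit and a pleasure unit strengthens whenever the two
# are active at the same time. All values here are hypothetical, for illustration.

random.seed(1)
weight = 0.0          # association strength between "shell seen" and "pleasure felt"
learning_rate = 0.1

for encounter in range(20):
    shell_seen = 1.0                                   # the shell is perceived
    pleasure = 1.0 if random.random() < 0.8 else 0.0   # pleasure usually co-occurs
    weight += learning_rate * shell_seen * pleasure    # Hebbian co-activation update

print(f"Association strength after 20 encounters: {weight:.2f}")
```

On this sketch, repeated co-occurrence alone is enough to bind the percept to the reward, which is all the argument above requires of the mechanism.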

5.6. Aesthetic Bias

If nothing else, we know, at least, that this aesthetic sensibility exists, yet another component of the consciousness concoction. Anthropologists have collections of stones and shells turned into beads. Pink granite hand axes never used as axes have been found in ceremonial burial sites. Symbolic thinking paired with aesthetics and associative thinking seems to have motivated early man to artistically reproduce the form-patterns of animals on cave walls, reproducing or copying those vital parts of life that would have been the focus of so much mental imagery through hunting, fishing, foraging, communing around the fire. Except, at the point that those things became symbolic, they began to possess the power of duplicity―literal and figurative meaning, not an either/or. More than that, in the emergence of a parallel layer of meaning, the objects could take on a bit of life of their own, where the meaning they now carry becomes something like the moral of a story. The platitude, “what if these walls could talk?” changes its tenor in the Chauvet Cave in France because there, they do.

5.7. Abstract Modeling

Abstract symbolic thinking is clearly another feature required for complex cognition, and one that many organisms do not possess. While my cat may have taste preferences for sleeping on freshly laundered clothes or in a window seat instead of on the floor, she does not create paintings of those places or wax poetical about their loveliness. She does not try to recreate representations of them that are special, sacred depictions meant to resurrect specific memories, historical experiences, or pivotal moments in her life upon which she thinks fondly. She does, however, steal hair bands and hoard them under the bed because of an aesthetic sense of ownership―she likes them and so they seem to belong to her. We also see this in young children playing with someone else’s toys; ones they like they try to take home. Like the male satin bowerbird that collects all things the lapis color of his eye, arranging them into an elaborate array around his purely aesthetic bower, my cat does have some rudimentary symbolic thinking going on. Those items that please become symbols of pleasure. The bowerbird does this to attract a mate, while my cat does it likely because she is bored in an apartment most of the day, but the point remains. Each is displaying a kind of associative, symbolic aesthetic that pairs experience with an object. If they were able to think more abstractly, perhaps they might make models or artistic miniatures of their affective objects, the way that human children often do when they draw pictures of things that they think fondly of.

6. Toward a New Ontological Model

This notion of figurative association between an object and a feeling, which then becomes a kind of conditioning and a kind of metaphor, seems to exist before language, even in the absence of language. This is important in refuting the sentential cognition stance as ontological. Most animals that have forms of communication do not tend to have connotative variability for their utterances, but they can think or acquire certain conditioned associations and make gestures that often operate at dual levels. While the shriek a chimpanzee makes indicating a predator is in the area will always mean that one thing, danger, the ability to look at a twig and strip its leaves, turning it into a tool, suggests there is some degree of symbolic thinking taking place in the mind.

In human beings the degree of complexity in symbolic thinking seems to have increased exponentially. Take the word |play|, for example. This four-letter word has at least 20 different meanings depending on the context of the sentence it is used in. It can be a noun or a verb; a drama, a trick, a ruse, a game, a sport, an amusement, instrumentation, cooperation, being uncooperative, the slack in a rope, a written tragedy, a performance, a fiction, the act of playing an audio track, turning on a radio, composing or making music, make-believe, a kind of freedom, a joke, acting, putting on an act, being foolish, making someone else look a fool. It’s one word, one arbitrary collection of phonemes that can bear the burden of so many nuanced variations. The symbol itself means nothing, but comes to mean through the collective effort of social convention, sentential context, authorial intention, historical and cultural influences, and performative speech acts, not a single mode of thought, not just one of these parts, but an active, dynamic gestalt. Language itself cannot take all the credit for producing so much variability, even if once it emerged it further complicated and complexified consciousness.

Language arises out of a combination of certain cognitive capacities, which would have to have been there already for language to manifest. This is arguably why it takes time for children to learn to speak―their brains are still developing, for one: the motor skills that control their lips, tongue, jaw, and throat; how to decode pulses and frequencies transduced through the eardrum and cochlea; how to see and recognize objects and discern between all the features of things light reflects off of. On top of all that, they have to learn the symbols, syntax, prosody, and gestures of communication. Language is a latecomer in the evolutionary and developmental mental toolkit and takes a lifetime to master, if ever.

Language does require symbolic representation, but also pattern recognition, socialization, an ability to think associatively, an ability (perhaps deriving from empathy?) to interpret what a sender means to convey versus what is shown/said, an ability to chunk and parse information, adaptability (context and code shifting), and an ability to model or form internal maps of external phenomena. Language alone cannot produce what so many want to claim is a unique form of human cognition, but it may have been the emergent property that then produced the fertile ground upon which complex acts of thinking about thinking have arisen. The folding and thus in-forming of these various capacities of mind perhaps emerges from a wider distribution of these functions and modalities synchronizing into chains of systemic activation. Consciousness and self cannot arise from language alone, which is but one system expression of a more primitive collection of evolutionary adaptations, the intersection of many different innate cognitive modalities, like those listed here, plus other sensorimotor, visuospatial, and phonologic aspects of speaking and hearing.

7. Mental Cartography: Setting & Characterization

Antonio Damasio (1994) is of the mind that human consciousness arises in the emergence of self, and that self is an experiential effect of the numerous mappings of mind that occur when the brain records and creates internal representative models of the embodied mind interacting with the world (ibid.). It learns to read the world, writes its own code, which translates what it reads and revises it based on trial and error while engaged in physical activities. It authors an entire world and produces elaborate maps, with itself as the compass, ink, parchment, and navigator. The language of the brain is, thus, not words or a universal grammar, but interactive maps that are constantly being updated, improved, and redrawn (ibid.).

The brain has many internal maps, not only of the body―like the topographical map of all the innervated limbs, torso, genitalia, face, mouth, hands, and digits, the sensorimotor homunculus―but also in the motor cortex, the cerebellum, and the brain stem, to name a few. There are also maps of maps, since the role of the brain is distributed communication with each of the parts of the body and itself, something that allows a body to move through space and time with improved refinements (Damasio, 1994). It conveys information from distal limbs to the head, from the head to the heart, from the spinal column to the fingers and hips, sharing interoception, exteroception, and proprioception with the whole corpus. It is much easier to manage homeostasis if there is an infrastructure for information disbursal, receipt, and processing. Trees have no need for brains because they do not need to navigate their environment; for organisms that do, the brain’s mapping instinct, Damasio suggests, is one of the first widespread multimodal network operations of complex lifeforms. From this, then, perhaps it is not language that is the house of being, but something closer to journeying, and the self then is an active construct in that narrative.

What is particularly interesting about all of this, as said earlier, is that most of this information has been unknown throughout the whole history of human knowledge. There was a kind of intuition about much of it among many great thinkers, but without a lot of evidence (if any) and often without consideration for the possibility that so much sentience could arise out of something that is material, biological, mortal. The self has long been regarded as something unique, exceptional, belonging only to the human, but born of something transcendent. Disrupting the anthropocentric world view with a Copernican shift so that the human soul is no longer the focus might do a lot of good for global politics, the environment, and the future of all living things on this planet.

Wishful thinking aside, what matters is not the absolute truth of the existence of a unified and singular self―one’s personal identity as an unchanging thing―because a fixed capital-T truth is not something that really exists outside of imagination. That abstract idea is as much a noble lie as the ones that parents tell their children to keep them out of trouble or to make sure they brush their teeth. Truth is the elusive horizon, an illusion of perspective and perception. What matters is getting a better and closer approximation of the active narrativization of a self-unity and to accept that we have through this process learned how to experience ourselves existing―that is the MacGuffin, the Not-so-holy Grail that answers those perpetual questions about being. How, not why, are we consciously aware of our experiences? What benefit does that afford us or rather what does it cost us? And how are we able to be aware of ourselves being aware of other people’s awareness of our awareness―to what end?

8. The Dissimulating Brain: In an Amoral Sense

This discourse is all about the working hypothesis that the middle ground where consciousness exists―between the bottom-up systems and top-down systems we have no conscious access to―is essentially narrativization, not as a metaphor but as a representative system that maps the mapping of itself mapping itself, like a metatextual simulator doing comparative translations to produce a self-learning, self-authoring, self-editing story. Research into what Michael Gazzaniga (2011) labeled the left-hemisphere interpreter offers interesting insights into the possible seat of this story of self, which language is implicated in, but likely not the sole progenitor of. They are very probably correlates; ergo the emergence of a metacognitive self may very well be comorbid with language development as an exaptation of the left lateralized representation system. We’ll let Gazzaniga’s work unpack that statement. Briefly, the reason that the left hemisphere is called the interpreter is best demonstrated by Gazzaniga, who did the original studies and coined the term. I will give a rough approximation here of his findings purely for illustrative purposes, but reading the actual studies obviously provides a much clearer demonstration of what I’m getting at.

In this simplified example, split brain patients were tested on their ability to select items with the hand contralateral to the hemisphere shown an image, since the side of the brain that controls the left hand, for example, is the right motor cortex, and vice versa. Motor functions cross over in the spinal column or brain stem, so the right motor cortex is responsible for clenching the left hand, for example. So too with visual perception: information presented in the right visual hemifield crosses over, in part at the optic chiasm, so that the whole right hemifield is processed in the left visual cortex. Because of this, and because language is largely left lateralized, in patients who have had the corpus callosum severed, information processed by the right hemisphere could not be articulated verbally; the right hemisphere wouldn’t have any way to communicate the information to the speech-language areas in the left brain.

In Gazzaniga’s well-known split-brain experiment, the subjects were shown an image of a chicken claw so that it would only be seen by the left side of the brain. At the same time, the right side of the brain was shown an image of a snow shovel. The patient was then asked what he saw and responded that the image was a chicken claw. While both sides of the brain saw their respective images, only the side that saw the chicken claw could verbalize it. When the patient was instructed to select from several items in a bag, however, his left hand―despite the subject’s certainty that he had seen only the chicken claw and nothing else―correctly picked out the snow shovel. The right brain identified what it saw, but could not do so with words; it had to rely on a means of communication that it had―the body. The fact that the right brain was able to identify the correct object, however, clearly indicates that it could see, understand, and communicate what it knew, even without language.

Where the experiment gets interesting is when the patient is asked why his left hand picked a snow shovel. Rather than stating what would be true for the left brain, that it did not know why, the subject comes up with an ad hoc reason for making his choice. He completely fabricates a relationship between the shovel and the chicken claw, claiming that one would need something to scoop droppings up with if there were chickens. Having absolutely no prior knowledge or legitimate rationale for why his left hand would have picked that object, his left brain created a reason to explain it―it lied, told a tale.

This act of spinning a story to make sense of events, which seem or are apparently unrelated, speaks to some of the operations of the left hemisphere―it tells stories. It takes disparate events and organizes them into a unified model that is understandable, perhaps compensating for the human mind’s discomfort with randomness and chaos, preferring sensible order instead (Gazzaniga, 2011). There is, perhaps, more to this than merely an inclination toward rationalization.

9. Making It Up as We Go Along

It seems, in many ways, humans are storytelling animals. The split brain patient spun a Mark Twain style tall tale to link two objects in what seemed like, at least, a tenuously reasonable explanation. Many students do the same on paper due dates, creating hypothetical fictions to try to ameliorate the awkward space of unknowing, the fear of failure, and what they think instructors will believe was beyond their control. Children fabricate imaginary friends, whole play worlds, and wildly ingenious reasons for why their parents do the things they do. It is worth reiterating, though, that we are not the only animals that do this; as the description of my feline friend suggests, animals’ brains too tell stories, even lacking a language-culture.

We all make-believe often enough in daily life, albeit attending to the act very little. In fact, we often take our storytelling quite for granted, even trying to condition children out of doing it because eventually they have to know the truth―the tooth fairy and make believe are not real, not honest. We do not usually celebrate wildly false excuses when a child does something wrong, judging their creative fictionalizing as morally flawed―thou shalt not bear false witness, many moral codes assert.

Even still, the very same act―inventing a novel untruth―is perhaps one of the most fascinating things done by Lucy, a chimpanzee who blamed her experimenter for stealing something she had eaten (Dowling, 2012). In doing so, she made quite a leap of imagination: to avoid trouble or disappointing her handlers, like most children and undergraduates until fully indoctrinated, she created a story that replaced one character with another, like replacing one noun with another. She did something that we have historically kept for ourselves: a leap of imagination―she ran a narrative heuristic.

One could argue that in most cases we ignore our storytelling instinct because it is so ubiquitous. Children tell stories to explain phenomena they cannot understand; they mock up simulations and then test them. We make up stories to teach morals to children when they cannot understand complex systems of values and cultural norms, knowing that the tortoise and hare are not real, not literally true. When we have a bad day and we need to release the tension that keeps some episode running on a loop in our heads, we get together with or call a friend and recount the story, modifying and rediscovering it as we conjure what we can remember or want to emphasize, like eye-witnesses on the stand (Loftus, 1979).

Just the same, to predict what will likely happen based on decisions we are weighing but are not yet set on, we fabricate hypothetical narratives to see how those choices might develop based on our observations, behaviors, preferences, and past experiences. We consider our context, our variables, the weights of those variables (preferences), and try to imagine likely outcomes. We often formulate these kinds of heuristics to discover what we think the moral of the story is in our own lives, to see if we can live with the various consequences. The goal is to select the imaginary timeline we hope will form the future that our present will be the past of.

In this way, science fiction writers often produce cautionary tales that help inform readers about their world and decisions now by creating plausible futures. In those social commentaries, the course of action that leads the protagonist and antagonist to the story climax illustrates both right and wrong choices so people have models to learn from, the way children model their parents when they play house. Perhaps, then, too, we as a species started modifying ourselves with this storytelling instinct. Perhaps that is why even preliterate societies still have a voice that speaks to us through their great legends and myths like The Epic of Gilgamesh, The Iliad, The Odyssey, and The Popol Vuh. Long before there was writing and Platonic traditions of rational thought, ancient peoples produced vehicles that could carry information and give rise to learning and culture. Those tales still affect us millennia later. Imagine how much imaginative effort has been lost to time or simply to an absence of even spoken word.

10. Memory: A Bardic Revisionist

An awful lot of learning in social animals takes place in the setting of stories like this, real life played out in modeling demonstrations―showing the young how to hunt, how to fish for insects using stripped twigs, how to make different bird songs, where to find the best fruit and nut trees. Knowledge is often transmitted without words, as abstract concepts rooted in the body and thus in space. They are enacted, practiced, learned through trial and error, and when mastered, performed as a series of events in a given setting for a particular purpose―the moral of the story, you might say. When the story goes wrong, another run through the simulation is required to improve various parts of the episode. Interestingly enough, personal experiences―memories bound to specific events in life―are called just that, like a segment out of a film or television show: episodic.

Memories only seem like fixed episodes, though; they are far more volatile than they appear, not little picture shows that record the world exactly as it is. This is abundantly clear in the research of Elizabeth Loftus, who pioneered false memory studies and changed the way we think about eye witness testimony (Loftus, 1997). Memory can be revised and may be recreated each time it is recalled, meaning that our memories are susceptible to alteration each time they are reactivated―drawn up into the attentional workspace, where they are vulnerable to revision and inaccuracies. Whenever we are recollecting, we are actually gathering up, redrawing, and reinterpreting what we think we know, glossing over or ignorantly disregarding the creative, authorial power consciousness and attention might have in connecting ideas.

Looking back to ancient Greece, Mesopotamia, and Egypt, where some of our oldest recorded stories come from, human culture and wisdom were transmitted largely through stories―epics, parables, legends, myths. Those narratives spelled out a people’s gender roles, technology, economic systems, social hierarchies, and common (maintained, shared) history; word of mouth was how civil values were perpetuated from generation to generation. Patterns in flood plains were turned into stories that were anthropomorphized as the effects of deities acting on earthly terrain or the atmosphere, which allowed people to learn how to predict growing seasons after flood waters receded and left fertile soil for agriculture; this kind of narrativizing represents a kind of cultural and cognitive technology―transmittable knowledge, a shift from subconscious mental processes to explicit memory.

In many anthropological accounts, the imprinting of each unique ecology on the mind also accounts for distinctions in cultural narratives based on environmental context. This is what Joseph Campbell (1991) calls public dreams: mythology, the story of a people’s god or gods. Those narratives weren’t merely an account of metabolic processes but of characterization, plot devices, elements of fate born out of psychological selves existing in a causal, physical environment temporally―their experience of existing, their consciousness, was essentially story-like. Arguably, self-consciousness is the interpretation of experience that ascribes meaning to the life of the individual phenomenologically. Those personal accounts then manifest in the archetypal bildungsroman as individual developmental descriptions, written in the language and understanding of the time. Likewise, cultural identities arise at the next level of larger shared experiences as social narrativization, which is typically condensed into origination epics like Genesis and the Mahabharata. The heroic story is developmental psychology, while the epic is sociopolitics.

The beginning and end of existence, life and death, right and wrong, these were not born of mere pattern recognition or definitional description; they were first captured and systematically recruited to a more tangible form of conscious accessibility, then perpetuated through that story, stories that came out of the mind-body living in the world where images, organisms, and objects, their actions and behaviors come together as parts of a grand living narrative that seems to write itself. According to the interpreter-brain account, perhaps we really are spinning ourselves out, like the Fates’ thread of life, from myriad fibers of available experience and information that flood the brain.

Perhaps consciousness is this kind of streamlining, factoring all the data down into a digestible through-line, which is little more than an account of how we have learned to see ourselves in the world. That is how we learn to reflect upon ourselves. We become aware of ourselves as characters in our own story. We always come into the story in the middle (in medias res), never at the beginning. Story comes first, language later, and then they become inseparably bound. Through this lens, then, narrativism is our experience of consciousness, where language is the emergent voice that eventually provides us a mirror―the sense of sensation becoming sensible, a third voice aware of its awareness. Once we can attend to the story-making part of ourselves, we can read even as we write, self-editing as we go along. Story may be the very ground of metacognition, the one that made languaging possible.

11. Learning to Learn: A New Level of Influence

So what, though? Let’s suppose we are these redundant, self-referential, self-authoring systems who simultaneously write, read, revise, and attempt to make sense of (valuate) ourselves mapping ourselves moving in and through the world. How does knowing anything about the brain change the way we live and learn? Does it? That was where we started this rendezvous, so let’s go back to that question. Does the brain need to see and know itself to change itself? And if not, why bother? There may be no way to ever know everything, and we may never be able to fully map and replicate the brain to create an artificial intelligence that does exactly what the human brain does, supposing that is something we would want, anyway. If that is, then, an impossible feat, doesn’t it seem rather silly to chase an ever-moving horizon as if doing so would ever let us catch the sun?

Arguably, anything we learn must change us―learning by its very nature suggests not only conceptual changes, but structural changes in the brain’s connections, its architecture. This is why reading interventions are so important at an early age for those who have trouble with language processing; language has a critical period, like many motor skills or other modalities do. If you want to be a concert pianist and great composer, hopefully your parents introduced you to music and the piano before you were nine, to wire up your skill sets so as to be as second nature as possible―to be more natural, fully imprinted. If not, while the brain is always plastic and can certainly learn all its life, trying to become Mozart later in life, like trying to learn a foreign language later in life, will be that much harder, require more maintenance, and very likely never be as automatic as it is for someone who acquired it before that critical age.

The way learning happens is different during different stages of development, before and after cognitive maturity, especially. Understanding that the brain is perhaps most plastic for different modalities up to certain ages can help us reconsider the kinds of stories we are telling ourselves about what we think we know, who and what we are, and how we should change our current pedagogical systems for improved learning. It may even change the way we think about normalcy, disorders, creativity, and success. These categorical terms describe populations and behaviors, generalized values as abstract symbols, not individuals and their identities; they describe averages across many samples to explain trends, probabilities. If we consider what they mean, we can better employ those observations to improve how we understand what the brain is and can do.

Reorienting how we think about mind-brain could allow further reorganization of how we label and treat people, and not just how people learn. That is important because how we think about individuals is often enframed by misunderstandings caused by typing, category name-calling―language, that surface gloss we so often think is our golden ticket to species superiority. What this means is that even though it has never been necessary to really understand how the brain works in order to change it, inform it, or teach it something, knowing that narratives are a kind of objectivizing comportment for interpreting life, of which language is merely an external representation, allows us to look at how we come to know, what knowing is and isn’t, and perhaps further refine the way we tell the stories about knowingness and ignorance.

12. We Are Our Memories

Of the different kinds of long term memory, I would argue that episodic may be the richest. We experience episodic memory like a simulation of our original experience―full of sensory and emotionally charged activations. Semantic memory recall, by contrast, is more fragmented, like a single feature of a scene pulled out and isolated in a vacuum for maneuverability and observation―objectification. Episodic memory is experienced and has a temporal element that we can fast-forward through or rewind and replay, what Endel Tulving (2002) describes as “mental time travel.” It is a story: clips and scenes from our lives stitched together in recall as a kind of mental model to form the long narrative of who and what we think we are. That is the foundation upon which we formulate the interpretation or reading of our identity and what we go back to again and again to revisit, reinterpret, and rewrite. It is not an actual whole, a total personal account, but pooled activations of disparate memories we string together like beads, like a flip book whose separate images, turned one after another, give the illusion of unity.

As an example of how we form different memories, consider names for abstract concepts. A memory store of what we call a state’s capital is arbitrary labeling as semantic information; it has no living context, and as such is harder to learn and recall. Such a name-learning task has no real, imageable or sensory information to anchor the retrieval cue to, so even if it is acquired, it might not be able to be pulled back out on command―right there on the tip of your tongue but you just can’t get it out. Semantic memory in school seems the hardest to acquire; think of all the flashcards and vocabulary lists you had to use to learn. You probably do not remember exactly when and how you learned that democracy is a word that means a government of, by, and for the people.

By contrast, something you can picture a little narrative around, rather than memorizing by rote via arbitrary symbol-and-meaning pairs, has a deeper encoding and operates on multiple levels of processing simultaneously. Episodic memories require fewer flashcards. It is a kind of memory that pairs not just orthographic and phonological information, but also semantic, sensory, and experiential content. That is the principle at work in mnemonics―the richer and more sensual something is, the easier it is to hold onto and often to fish back out of memory, because it has more recall cues and has been processed more deeply. Stories are also easier for us to understand, which is arguably why we teach our children morality through stories and not flashcards.

This concept of multi-level learning is called levels of processing or elaborative encoding (Craik & Lockhart, 1972; Craik & Tulving, 1975). It is well illustrated by what journalist Joshua Foer (2012) calls the Baker/baker paradox in his TED talk on memory. He describes the phenomenon like this: if you tell two separate people to remember the same word, the way they think of that word will fundamentally alter the initial strength of the memory trace. If you tell one person to remember that you met a guy named Baker and the other person to remember that you met a guy who is a baker, the latter will be more likely to recall the word baker later. This is due to having a more elaborative system of encoding―how the information is learned.

The concept of a baker has smells, actions, tastes, and visual elements that characterize it. Associatively, perhaps the notion of bread, cookies, or various other confections is simultaneously activated or partially activated by the word, which is a verb turned into a noun―a person who bakes―so it also evokes the activity of baking, not just a static label, which is what a noun name is. There is dimension to a baker that most people, even children, would be able to immediately experience if thinking “baker.” A stranger named Baker, on the other hand, has none of those rich features. It is an empty placeholder; a contextless, faceless, actionless, anesthetic term. This is much more difficult to hold on to, much less recall at a later time.
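For readers who want the cue-counting intuition behind elaborative encoding made concrete, the toy sketch below treats each associated feature as one more route back to the stored word; the cue sets, the per-cue probability, and the independence assumption are my own illustrative simplifications, not Foer’s demonstration or Craik and Lockhart’s model.

```python
# A toy sketch (not a model from the text) of the Baker/baker point: a trace
# stored with more associated cues has more routes back to it at recall time.
# The cue sets and the retrieval rule below are assumptions for illustration.

memories = {
    "Baker (a stranger's surname)": {"the name itself"},
    "baker (the occupation)": {"the name itself", "smell of bread",
                               "flour-dusted apron", "oven warmth",
                               "kneading motion"},
}

def retrieval_chance(available_cues, trace_cues, per_cue=0.2):
    """Treat each overlapping cue as an independent chance to trigger recall."""
    overlap = len(available_cues & trace_cues)
    return 1.0 - (1.0 - per_cue) ** overlap

# Later, only a few of those features happen to come to mind.
available = {"the name itself", "smell of bread", "oven warmth"}
for label, trace in memories.items():
    print(f"{label}: ~{retrieval_chance(available, trace):.0%} chance of recall")
```

Under these assumptions the occupation, encoded with several sensory and action cues, is recalled far more often than the bare surname, which is the whole point of the paradox.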

The dissociability of semantic from episodic memory may be why learning people’s names is so difficult; you have no experience or real, living character values to attach arbitrary nomenclature to when that person is unfamiliar. They are like a flashcard from your freshman biology class you’ve seen the answer to once; you probably don’t remember the definition of the name on its surface. Once you have any real interaction with that person stored up in memory, though, one that their name can be attached to―remember, language comes after experience, not before―then the word carries meaning. Before that point, the word has no weight, no value, no anchor point; the story of you asking that stranger to help you with the copy machine at work fills out the referential placeholder with human qualities, which makes colleague Michael become memorable, instead of being some guy who said “Hi, I’m so-and-so” in passing.

In a way, names that come with little actionable stories become concrete things, which metamorphoses them into main characters―what we might call personae, from the Latin theatrical word for mask. Characters are a thousand times easier to get to know than story-less names. We form a kind of relationship with them, identifying with them like we might another person. Connecting to people is always easier than connecting to inert things. Perhaps this is why in kindergarten, children do not simply learn the alphabet; they meet anthropomorphized letters narrativized to songs.

When I was a kid, my class watched short television episodes about “The Letter People,” literally giving them identities that reflected visual and phonological qualities, like Mr. T with his tall teeth. The more we transform arbitrary information we want to learn into something that we can plug into an existing frame of reference―character, setting, likes, dislikes, past experiences―the more it is able to take on its own unique identity, sort of like how comparing two like things illuminates their differences while still allowing their similarities to be appreciated. We never learn things in a vacuum anyway, which is in part why we often miss so much in a first reading of a text, in the first viewing of a photograph, the first (mis)impression of a person; we have to not only attend to information but connect to it, recognize it, and anchor it to what is already there. Surface appearances make that hard to do.

13. From an Ontology of Metaphor to an Ontology of Aphorism

This kind of meaning pairing goes beyond one-for-one conditioned associations, however, like you see in metaphors. Those are what Nietzsche (2006) argues to be the foundation of human knowing in his early work, an unpublished essay, “On Truth and Lies in a Nonmoral Sense”. In this text, he is still writing within his schoolhouse tradition, using logical proofs to work through philosophical tenets, while later he shifts stylistically to the parabolic aphorism, implying that he moves past this proto-ontology into a more story-rich one. Still, early on, he grapples with the schism between a literal and figurative reading of the world, one many young minds find hard to reconcile, as are most false dichotomies.

Metaphors are comparative associations that link two unlike things and illuminate new aspects of the target by using the symbol as a lens. However, even this seems too parsed for how we experience the world―not as unrelated object data, but as part of a larger whole. Objects and experiences do not exist in isolation for us in time and space. We experience them interactively, dimensionally, with time as a component, unlike in metaphor. The more dimensions we experience something in, as in depth of processing, the better the map or character sheet we make for it in memory, and thus the more we might say we know it. Those little episodes, like the one with the baker, where there is some kind of action and sensation, we instinctively catch below the surface, so attaching a word, name, or description to them at the surface level of conscious engagement is easier. One way we know this modal activation happens instinctively through observation is the effect of mirror neuron activity (Ramachandran, 2011).

Mirroring: Through the Looking Glass

When a dancer watches another dancer, or when a child watches an adult perform some action, mirror neurons recreate the sensorimotor circuit that would perform that action, but in the brain of the observer, and often without him or her actually doing it at all (ibid.). You can think of this as a kind of internal ghosting or foreshadowing, similar to the voice in your head when you are trying to remember a phone number: you say it over and over internally, not always out loud. So too with action-oriented modeling: it has a beginning, middle, and end; you observe it in time; and your brain watches, learns, and models that behavior, even if it’s something you’ve never done before. Your brain predicts how to recreate that action as a kind of imitative copy. It runs a simulation, a little mock trial, a little narrative. Then you get to try it out. The more demonstrable something is, the more you can imagine doing it, and thus the more likely you will be able to pick up that movement.

Dancers’ training gives them even greater mental activation due to acquired expertise; when watching other dancers perform, their brains perform with more precision than a novice’s brain would in the same task. While not identical to the mental activity of actually dancing, their internal, dream-like mirroring of what they are watching is very similar to doing what they are merely imagining and perceiving. They can not only see it in the mind’s eye but perform it in their minds the way dreams happen while we sleep―active experiences while the body lies inactive. These experiences are real in the brain, even if the motor cortex activations are not communicated downstream to the brain stem or the body’s muscles.

Consider these concepts, then, in relation to how we can come to know ourselves. Just as we would have no idea what we look like without a mirror, we cannot know who we are and what that means―an arguably half-creative, half-interpretive act―if we have nothing that allows us to see ourselves. Without a mirror, we would imagine our faces as a kind of aesthetic ideal (abstract form) that averages other people’s descriptions of us with all the faces we’ve ever seen and our own deductive conclusions. However, this would be a face with limited detail, perhaps even as off as an actor seems when we see someone play a part who does not fit the character-image we formed from having read the book first. The mirror, however, gives us new context―concrete sensory detail, evidence, illustrative examples―using the same senses we use to read the world.

14. A Moral of the Story

This is what storytelling does―it shows us to ourselves, all the cultural, social, and personal features that reflect the time and mind that gave birth to it. The story is how we read life, how we interpret it as its author. It also tells us about ourselves, the viewer, which is why the narrative of life is so particularly fascinating: we are our own target audience, the medium in which the story is born and composed, the body that generates it, the mind that conceives and labors to create it, and the very product of what we decide it means. It is our whole cosmos, robbing from the rich world to give to the impoverished world we come into―the dark, experienceless cave we have to illuminate, decorate, create, define, detail, and dwell in―psyche. The brain, after all, does in a sense exist in a vacuum; there is no light, no sound, no touch it ever really experiences without mediation through the body (Damasio, 1994). It only knows how to imagine the world based on the messages it receives from the senses, hermeneutically.

To willingly suspend disbelief long enough to start from this premise requires a bit of revision to the common conception of what language is, as language does factor into human storytelling; we do not really mature past the preconscious self-narratives until our mid-twenties, when the brain reaches cognitive maturity after the prefrontal cortex finishes developing. A language is, as stated earlier, a complex, combinatorial sign system―representative in nature―which produces meaning contextually, meaning as a product of exchange. There is a sender, a receiver, and a message. The meaning, then, is a kind of averaging between what is encoded by the sender, the progenitor of the idea, and what is decoded by the receiver, the interpreter. Ergo, “meaning” carries a bit of the word’s mathematical sense: an approximation, an active averaging, in which many details and distinctions are factored out in contextual processing (Amato, 2011).
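
Purely as an illustration of that averaging figure (a minimal toy sketch, not a model drawn from Amato, 2011), one might picture the sender’s encoded sense and the receiver’s decoded sense of a word as small sets of weighted features, with the exchanged meaning approximated as their average; every feature and number below is invented for the example.

```python
# Toy sketch of meaning-as-averaging: the sender's encoded sense and the
# receiver's decoded sense of the word "baker" are modeled as weighted
# features; all features and weights are invented for illustration.
sender = {"bread": 0.9, "ovens": 0.7, "early mornings": 0.6, "my aunt": 0.8}
receiver = {"bread": 0.8, "ovens": 0.4, "cartoon chef": 0.5}

# The exchanged "meaning" is approximated as the average weight each
# feature receives across the two parties (missing features count as 0).
features = set(sender) | set(receiver)
meaning = {f: (sender.get(f, 0.0) + receiver.get(f, 0.0)) / 2 for f in features}

# Idiosyncratic details ("my aunt", "cartoon chef") are diluted, while the
# shared core ("bread") dominates -- the factoring-out described above.
for feature, weight in sorted(meaning.items(), key=lambda kv: -kv[1]):
    print(f"{feature:>15}: {weight:.2f}")
```

What survives the averaging is the common core, while each party’s private associations are diluted, which is exactly the factoring-out of detail that the definition above describes.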

By this definition, math, music, dance, the visual arts, pheromones, hormones, amino acids, ionic charges, and neurotransmitters are all languages. They are parcels containing information that must be read to mean something; like all symbols, they are not inherently meaningful but derive meaning from communicable transmission, which can be revised through sustained interaction between sender and receiver, who adapt to each other for improved efficacy―more successful dialogue, a word built from the prefix dia- (through) and the root logos (word, reason, argument, speech, language) (Camplin, 2009). Plato was really onto something in building a pedagogy around dialogue: something from his mentor’s pre-textual world, a world Plato watched give way to writing in his own lifetime, was safely captured as transcribed conversations instead of essays. Thus readers hear conversations between living characters―the episodes are not static.

In the post-textual world Nietzsche (1974) lived in, arguably that same impulse to speak actively, despite the calcification of voice that the written word represents, motivated his shift from the logical proofs of his early essays to the aphorisms of Die fröhliche Wissenschaft. This leaning toward active narrativization as a philosophical mode becomes even more apparent in the titling of Thus Spoke Zarathustra (Nietzsche, 2003). That epic fiction, not treatise, is named for something spoken, and only the subtitle admits to being written: a book for all and none. He was well read in the Greeks and seems to have heard what Plato was hiding behind his character Socrates. In both cases the discourse is narrative and intended to be active, where the authoring, reading, and interpreting is dynamic, ongoing, self in-forming, not static.

This is how we should think about consciousness, except that the language we’re writing in and reading is entirely inside our own heads. We might argue, then, that when Socrates says thinking is talking to oneself (Plato, 1997), consciousness is that inner dialogue, able to modify itself through an ongoing conversation. It is the mental process that conceives the self in at least two parts: two characters conversing, often arguing, debating―verbally dancing or playing at an idea. Where it gets more interesting is when the initial self that subjectively exists and the second self that can play the part of object, critic, or other give rise to a new awareness in their exchanges―a realization that there has to be a third if there is an observer/narrator. That third voice is the god-like source and seat that, like Shakespeare’s (2009) Prospero, can manipulate, modify, reinterpret, and narrate whole worlds of beings. The third voice is what those two are part and parcel of. This is the storyteller, the omniscient third-person self that I would argue early thinkers mistook for something divine, the way we tend to anthropomorphize forces that seem larger than ourselves and hard to control. Hearing an emerging metacognitive observer in our own heads can feel just as metaphysical and intangible as the spell of love cast by Aphrodite on Mount Olympus.

Since we have a hard time seeing ourselves inside the capsule where the mind-brain resides, it seems particularly difficult to recognize anything, even ourselves, unless there is a way to show what we are trying to look at. We need a light and a reflective surface to see our own faces as they are. Story offers both. Story is how we learn to learn, tapping into the way all social organisms with central nervous systems tend to learn―observing, imitating, modeling, and then modifying to fit ourselves. We just externalize it by giving it a voice, making a tool we can use to revise the very thing that does the modifying. All those iterations and redundancies embody the very same sort of principle that makes DNA so rich, the brain so complex, and fractals so fascinatingly beautiful.

15. L’Envoi: Take This, Word, and Carry It

Where do we go from here? No theory is an end. No hypothesis is a conclusion. Claims are challenges begging to be tested. What we need now is to gather up the many strands of research that have broken down the parts of cognition requisite for telling a story, and those parts are many. If my proposal is in any way correct, nearly all of cognition comes together to produce our narrative instinct. That, however, is also where cognitive science offers new ways to study how we think, learn, and come into being. It is a fairly young field and by nature interdisciplinary, bringing together neuroscience, anatomy and physiology, philosophy, computer science, artificial intelligence, mathematics, education, developmental psychology, anthropology, genetics, and various other schools of thought. To study a multimodal system requires multiple lines of inquiry and research, thinking outside the box, and challenging paradigms. Merely studying literature or philosophy or pedagogy or cellular neuroscience or neurology or clinical psychology will not be enough in the future. It is not enough now. To keep up with our growth in information technology and our desire to harness our own evolution, we are going to need to be as multifaceted as our brains in how we construct the narrative of ourselves. It is going to take a few leaps of imagination and perhaps more than a few speculative fictions. We will have to do it knowledgeably, informedly, and creatively, accepting that we will only ever get an interpretation, a glimpse of the horizon. If in the process we can better see and know ourselves, that is in and of itself rewarding, perhaps even in-forming. We might even produce the fertile ground for a new cultural identity and find new heroes, pilgrims, and villains lurking in the dark, waiting for a chance to share their tale.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Amato, L. A. (2011). The Ethical Imagination: An Interdisciplinary Study of the Relationship between Responsibility and Creativity. Doctoral Dissertation, Richardson, TX: University of Texas at Dallas. Retrieved from ProQuest/UMI Dissertation Publishing.
[2] Aristotle (1994). On the Soul. J. A. Smith (Trans.), Internet Classics Archive. Boston: MIT Press.
[3] Campbell, J. (1991). The Power of Myth. New York: Anchor Books.
[4] Camplin, T. (2009). Diaphysics. New York: Rowman & Littlefield.
[5] Carbonell, E., Mosquera, M., Ollé, A., Rodríguez, X., Sahnouni, M., Sala, R., & Vergès, J. M. (2001). Structure Morphotechnique de L’industrie Lithique du Pleistocene Inferieur et Moyen d’Atapuerca. L’Anthropologie, 105, 259-280.
http://dx.doi.org/10.1016/S0003-5521(01)80016-9
[6] Churchland, P. (1989). Neurophilosophy: Toward a Unified Science of the Mind/Brain. Cambridge, MA: MIT Press.
[7] Craik, F. I. M., & Tulving, E. (1975). Depth of Processing and the Retention of Words in Episodic Memory. Journal of Experimental Psychology: General, 104, 268-294.
http://dx.doi.org/10.1037/0096-3445.104.3.268
[8] Craik, F. I. M., & Lockhart, R. S. (1972). Levels of Processing: A Framework for Memory Research. Journal of Verbal Learning and Verbal Behavior, 11, 671-684.
http://dx.doi.org/10.1016/S0022-5371(72)80001-X
[9] Damasio, A. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Penguin Putnam.
[10] Dante, A. (1995). Par. Canto XXII. The Divine Comedy. A. Mandelbaum (Trans.), New York: Everyman’s Library.
[11] Dowling, W. J. (2012). Philosophical Foundations of Psychology. Seminar. Issues in Cognition and Neuroscience. University of Texas at Dallas, Lecture.
[12] Ebbinghaus, H. (1913). Memory: A Contribution to Experimental Psychology. New York: Teachers College, Columbia University.
http://dx.doi.org/10.1037/10011-000
[13] Foer, J. (2012). Feats of Memory Anyone Can Do. TED Talk.
[14] Gazzaniga, M. (2011). Who’s in Charge? Free Will and the Science of the Brain. New York: Harper Collins.
[15] Gibbs, R. (1999). The Poetics of Mind: Figurative Thought, Language, and Understanding. Cambridge: Cambridge University Press.
[16] Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.
[17] Heidegger, M. (2008). Basic Writings. D. F. Krell (Ed.), 1977. New York: Harper Collins.
[18] Huizinga, J. (1950). Homo Ludens. London: Routledge.
[19] Katz, A., Cacciari, C., Gibbs, R., & Turner, M. (1998). Figurative Language and Thought. New York: Oxford University Press.
[20] Lakoff, G., & Johnson, M. (1989). Metaphors We Live by. Chicago, IL: University of Chicago Press.
[21] Lokhorst, G.-J. (2013). Descartes and the Pineal Gland. Stanford Encyclopedia of Philosophy, 18 September 2013.
[22] Lovejoy, A. (1964). The Great Chain of Being: A Study of the History of an Idea. Cambridge, MA: Harvard University Press.
[23] Nietzsche, F. (1974). The Gay Science: With a Prelude in Rhymes and an Appendix of Songs. W. Kaufmann (Trans.), New York: Vintage Books.
[24] Nietzsche, F. (2006). On Truth and Lies in a Nonmoral Sense. In K. A. Pearson, & D. Large (Eds.), The Nietzsche Reader (pp. 114-123). Malden, MA: Blackwell.
[25] Nietzsche, F. (2003). Thus Spoke Zarathustra: A Book for Everyone and No One. R. J. Hollingdale (Trans.), New York: Penguin.
[26] Pinker, S. (1994). The Language Instinct: How the Mind Creates Language. New York: Harper Collins.
http://dx.doi.org/10.1037/e412952005-009
[27] Plato (1997). Theaetetus. Plato Complete Works. M. J. Levett (Trans.), J. M. Cooper (Ed.), Indianapolis, IN: Hackett.
[28] Pound, E. (1987). ABC of Reading. New York: New Directions.
[29] Ramachandran, V. S. (2011). The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human. New York: Norton.
[30] Shields, C. (2010). Aristotle’s Psychology. 2000. Stanford Encyclopedia of Philosophy, 23 August 2010.
[31] Tulving, E. (2002). Episodic Memory: From Mind to Brain. Annual Review of Psychology, 53, 3-26.
http://dx.doi.org/10.1146/annurev.psych.53.100901.135114
[32] Turner, F. (1992). Natural Classicism: Essays on Literature and Science. 1985. Charlottesville, VA: University of Virginia Press.
[33] Yao, X. Z. (2000). An Introduction to Confucianism. New York: Cambridge University Press.
http://dx.doi.org/10.1017/CBO9780511800887
