Integrating Various Neural Features Based on Mechanism of Intricate Balance and Ongoing Activity: Unified Neural Account Underlying and Correspondent to Mental Phenomena

Abstract

In recent decades, brain science has been enriched by both empirical and computational approaches. Interesting emerging neural features include power-law distributions, chaotic behavior, self-organized criticality, the variance approach, neuronal avalanches, difference-based and sparse coding, optimized information transfer, maximized dynamic range for information processing, and the reproducibility of evoked spatio-temporal motifs in spontaneous activities. These intriguing findings can be largely categorized into two classes: complexity and regularity. This article highlights that the above-mentioned properties, although they look diverse and unrelated, may actually be rooted in a common foundation: excitatory and inhibitory balance (EIB) and ongoing activities (OA). To be clear, descriptions and observations of neural features are phenomena or epiphenomena, while EIB-OA is the underlying mechanism. EIB is maintained in a dynamic manner and may possess regional specificity; importantly, EIB is organized along the boundary of phase transition, which has been called criticality, bifurcation, or the edge of chaos. OA is composed of spontaneous organized activity, physiological noise, non-physiological noise, and the interacting effect between OA and evoked activities. Based on EIB-OA, the brain may accommodate the properties of both chaos and regularity. We propose a “virtual brain space” to bridge brain dynamics and mental space, and a “code driving complexity hypothesis” to integrate regularity and complexity. The functional implications of oscillation and of the brain’s energy consumption are discussed.

Share and Cite:

Lee, T. and Tramontano, G. (2021) Integrating Various Neural Features Based on Mechanism of Intricate Balance and Ongoing Activity: Unified Neural Account Underlying and Correspondent to Mental Phenomena. World Journal of Neuroscience, 11, 161-210. doi: 10.4236/wjns.2021.112014.

1. Introduction

Brain function is realized by the computation, transformation, and propagation of neuronal activity via neural circuits. Although the underlying operating mechanism is still not clear, it is believed that the brain must organize itself according to some fundamental principles. In this article, we focus on recent neurobiological advances and integrate various perspectives into a summarized framework: an intricate balance of excitation and inhibition (EIB), against whose background the evoked and ongoing activities (EA, OA) occur. OA refers to brain activity in the resting state, while EA indicates the brain's response to external/internal stimuli. There are plenty of pathways and feedbacks to maintain EIB, which is speculated to be situated near the boundary of meta-stability (bifurcation in parameter space), also named criticality or the edge of chaos (addressing the transition between disorder and order), so that the system may encode diverse information, enable versatile neural dynamics, and transit smoothly between different possibilities (multi-stable regimes) that correspond to various mental states and psychological functions [1] [2] [3]. OA and EA may manifest themselves in neuronal spikes and neural oscillation. In addition to spontaneous activities relevant to information transfer and processing, OA comprises other components such as physiological and non-physiological noise, and the interaction effect with EA. It is remarkable that OA is composed of both chaotic/complex and regular portions. The properties of chaos and regularity of EIB-OA are present not only at the neuronal level but also in large-scale networks. EIB-OA is shaped by maturation and may adapt to the living environment. This article does not strictly differentiate chaos and complexity: the former adopts chaotic theory to describe neuronal/neural features, while the latter emphasizes the global/emergent behavior that results from a large number of interacting components in the lower hierarchy.

There have been many interesting discoveries of neural characteristics in recent decades; to name a few: self-similarity, long-range correlation, attractors, criticality, power-law distributions, meta-stable equilibrium, multi-stable states, optimized information transfer, maximized dynamic range of information processing, difference-based and sparse coding, neuronal avalanches, and the reproducibility of evoked spatio-temporal motifs in spontaneous activities. This enrichment of observation has been inspired by empirical as well as theoretical neuroscience. It is crucial to realize that these findings are phenomena or even epiphenomena, not the underlying mechanism(s). This theoretical article attempts to show that these diverse and dazzling neural features can actually be attributed to the common ground of EIB-OA. At first glance, a balance of excitation and inhibition is nothing special. However, the implications of the balance and OA are profound. EIB, or equilibrium, is not a dead steadiness but an active process.

This article is divided into 8 sections. The first section describes the neurobiological foundation that maintains EIB and the possible sources of OA. The ensuing Sections 2 to 3 discuss the concepts of self-organized criticality and chaos/complexity and point out the causal contribution from EIB-OA. Sections 4 to 6 mainly focus on the perspective of information processing in the neural system and its relevance to EIB-OA. Section 7 highlights two basic coding strategies substantiated by EIB, namely sparse coding and difference-based coding. The content of Sections 4 to 7 is associated (though not exclusively) with the regularity aspect of the brain. The last section summarizes the major points.

In addition to the experimental approach, computational simulation may complement our understanding of brain principles. It is particularly pertinent when theoretical models are constructed under physiological constraints, so that their plausibility is endorsed by empirical evidence; in turn, the derived neural model may provide predictions, unveil more detail, help form hypotheses, and guide future experiments. The relationship between the experimental and theoretical perspectives of neuroscience can thus be regarded as mutually informative, and these two disciplines are equally appreciated in this theoretical article. The literature selection flow in this synthesis article is summarized in Figure 1.

2. The Mechanism of Neural Equilibrium and Ongoing Activity (EIB-OA)

2.1. Neural Activity and Noise

As long as there is life, the brain is always active, whether in sleep or even in a comatose state [4]. Brain activity appears complicated at every scale. There are many ways to index neuronal/neural activity, such as neuronal membrane potential and conductance, the opening and closing of ion channels, ion flow into and out of cells, cascades of biochemical reactions, cortico-electric potential and current flow, oxygen consumption, glucose metabolism, blood flow, etc. Electrophysiology and optical imaging (e.g., calcium imaging) are frequently adopted to quantify neuronal activity. Within a volume of brain tissue, the electrical current flowing along the dendrosomatic axis of pyramidal neurons constitutes the local field potential (LFP, within 0.25 to 0.5 mm in radius around the recording tip). At large scale, the manifestation of population neural activity also depends on the applied imaging modality; to name a few: electrical oscillation and event-related potential (ERP) in electroencephalography (EEG), the blood-oxygen-level-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI), flow and metabolism indicated by radioactivity in positron emission tomography (PET) and single-photon emission computed tomography (SPECT), and oximetry in near-infrared spectroscopy. Each research method possesses advantages and limitations and comes with an innate noise structure to take care of.

Figure 1. Literature selection flowchart and conceptual framework.

At the level of neuronal tissue, intermittent spikes may occur spontaneously without external stimuli, and both chaotic and regular components have been identified. The spontaneous activity of neuronal tissue in vivo is apparently stochastic, with high values of the Fano factor and of the coefficient of variation of inter-spike intervals, but may actually contain reproducible spatio-temporal motifs [5] [6] [7]. At the level of the neural population in the resting state, where quasi-irregular automatic activity dominates, EEG contains oscillations with specific spectral peaks (delta, theta, alpha, beta and gamma), and fMRI comprises low-frequency fluctuations (<0.1 Hz) organized into modular structures [8] [9] [10] [11]; evidence from large-scale networks thus also indicates that brain dynamics hosts both irregularity and regularity. There are several hypotheses for the origin of neural oscillation. Endogenous pacemakers in the thalamus or cortex may generate electrical rhythms (multi-generators) [12] [13], which may propagate to the remainder of the cortex. Interacting neural nodes of the cortex and thalamus may either receive white noise as input and produce brain waves, or generate rhythms out of non-linearly coupled dynamics [14] [15] [16] [17] [18]. A network organized by fast recurrent excitation followed by slower feedback inhibition seems particularly ready to give rise to oscillations [19]. The above origins are neither exclusive of each other nor an exhaustive account of the observed rhythmic neural activities. However, there remains a critical question: how can the prominent neuronal stochasticity be reconciled with the observed population rhythmicity?
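The two irregularity statistics mentioned above can be computed in a few lines. As a rough sketch (rate, duration, and window size are illustrative choices, not values from the cited studies), a homogeneous Poisson spike train yields a Fano factor and an inter-spike-interval coefficient of variation both near one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson spike train: rate 10 Hz for 1000 s (illustrative values).
rate, duration = 10.0, 1000.0
n_spikes = rng.poisson(rate * duration)
spike_times = np.sort(rng.uniform(0.0, duration, n_spikes))

# Fano factor: variance/mean of spike counts in 1-s windows.
counts, _ = np.histogram(spike_times, bins=int(duration))
fano = counts.var() / counts.mean()

# Coefficient of variation of inter-spike intervals.
isi = np.diff(spike_times)
cv = isi.std() / isi.mean()

print(f"Fano factor ~ {fano:.2f}, CV of ISI ~ {cv:.2f}")
```

Values substantially above one, as reported in vivo, indicate firing that is more variable than this Poisson baseline.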

Although it is possible that some neurons or coupled neurons may fire regularly like a clock, cortical synaptic firing, as stated above, generally looks stochastic. Brunel and Wang have made an insightful contribution to this issue, showing that the frequency of neural network oscillation is determined by synaptic and membrane properties, independent of the neuronal firing rate [20], as illustrated in Figure 2. Cortical oscillation may originate from collective irregularity in fine-scale neuronal firing, and the large-scale rhythm may reflect neuronal and synaptic characteristics. The emergence of order/pattern out of stochastic constituents is a hallmark of complex systems. Conversely, global network activity may influence the functional coupling between the embedded neurons, and background activity may control signal transmission [21]. The across-scale relationship is also noticed in the cortico-electrical spectrum.

Figure 2. Illustration of the relationship between neuronal spikes and local field potential. Upper: neuronal spikes of 40 neurons/channels (ordinate) over an arbitrary temporal unit (abscissa). Lower: the fluctuation of the local field potential of the neuronal population. Note that the spiking rate is lower than the population oscillatory rhythm.
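The dissociation between sparse single-neuron firing and a faster population rhythm can be reproduced with a toy model (all numbers are illustrative, and this is a caricature, not the Brunel-Wang network itself): each neuron fires as a Poisson process at a low mean rate whose intensity is weakly modulated at 40 Hz, so individual spike trains look irregular and sparse while the summed population signal oscillates at the modulation frequency:

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons, f_pop = 200, 40.0          # population rhythm at 40 Hz (illustrative)
dt, duration = 1e-3, 10.0             # 1-ms bins, 10 s
t = np.arange(0.0, duration, dt)

# Each neuron: mean rate 2 Hz, intensity weakly modulated at f_pop.
rate = 2.0 * (1.0 + 0.8 * np.sin(2 * np.pi * f_pop * t))
spikes = rng.random((n_neurons, t.size)) < rate[None, :] * dt

single_rate = spikes.sum() / (n_neurons * duration)   # ~2 Hz per neuron

# Population signal: summed spike counts per bin; find its dominant frequency.
pop = spikes.sum(axis=0).astype(float)
spec = np.abs(np.fft.rfft(pop - pop.mean()))
freqs = np.fft.rfftfreq(t.size, dt)
peak_freq = freqs[spec.argmax()]

print(f"mean single-neuron rate ~ {single_rate:.1f} Hz, population peak ~ {peak_freq:.0f} Hz")
```

Each neuron thus fires roughly once every 20 cycles of the population rhythm, echoing the point of Figure 2 that the spiking rate can be far below the oscillatory frequency of the ensemble.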

Conventional neuroimaging research has centered on the “regularity” aspect of the brain-behavior relationship, such as the N100 and P300 peak waves. In research adopting the chaos approach (more detail in Section 3), phase plots of EEG/LFP show consistent spatio-temporal patterns [22] [23]. EEG can be decomposed into 1/f pink noise (the fractal component, introduced in Section 3) and spectral peaks (the harmonic component). Chaos does not mean something messy or random; in fact, there is organized structure embedded in it. Chaos and randomness (pure noise) can be differentiated by mathematical metrics, such as the correlation dimension, which is finite for the former but does not converge for the latter. When the “regular” part of EEG is appraised closely, it appears far from perfectly periodic. Analysis of the correlation dimension and the Lyapunov exponent, two common indices of chaos, demonstrated that alpha oscillation, one of the most prominent EEG constituents, is chaotic to a certain degree [24]. From the evidence summarized above, at either the neuronal or the neural level, the distinction between regularity and chaos in brain dynamics seems somewhat artificial and contingent upon different vantage points. They nevertheless may be rooted in the same underlying neuro-architecture and neural computation. How, then, do the chaotic and regular components interact? This issue is still under investigation, and several possible mechanisms have been proposed. For example, in asynchronous neuronal dynamics there may exist occasional weak inter-areal coherence/correlation (a class of regularity), either in the resting state or in response to mental activity and external stimuli. Despite its mildness, this regularity may profoundly affect network behavior by shaping the connectivity structure, for example via spike-timing-dependent plasticity (STDP; a synaptic learning process that depends on the temporal correlation of neuronal firing sequences within a limited time window). The network firing pattern could thus be altered, and a recurrent network endowed with STDP at local synapses could self-organize into a chaotic manifestation [25]. Another class of regularity that may interact with the chaotic property of the brain is the neuronal/neural code (Section 7).
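The fractal/harmonic decomposition mentioned above can be sketched numerically. The following is a simplified illustration, not the cited authors' method, and all signal parameters are assumed for the demonstration: synthesize 1/f noise plus a 10 Hz "alpha" sinusoid, fit a straight line to the log-log power spectrum while excluding the peak band, and locate the harmonic component in the residual:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, duration = 250.0, 120.0
n = int(fs * duration)

# Pink (1/f power) noise: shape a white spectrum by 1/sqrt(f).
white = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1.0 / fs)
shape = np.zeros_like(freqs)
shape[1:] = 1.0 / np.sqrt(freqs[1:])
signal = np.fft.irfft(white * shape, n)

# Add a 10 Hz "alpha" peak (illustrative amplitude).
t = np.arange(n) / fs
signal += 0.15 * np.sin(2 * np.pi * 10.0 * t)

# Power spectrum, then a linear fit in log-log excluding the 8-12 Hz band.
psd = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs >= 1.0) & (freqs <= 40.0)
fit_mask = band & ~((freqs >= 8.0) & (freqs <= 12.0))
slope, intercept = np.polyfit(np.log10(freqs[fit_mask]), np.log10(psd[fit_mask]), 1)

# Residual after removing the fractal (1/f) fit; its maximum marks the harmonic peak.
residual = np.log10(psd[band]) - (slope * np.log10(freqs[band]) + intercept)
peak_freq = freqs[band][residual.argmax()]
print(f"fractal slope ~ {slope:.2f}, harmonic peak ~ {peak_freq:.1f} Hz")
```

The recovered slope near -1 corresponds to the pink-noise (fractal) component, and the residual peak near 10 Hz to the harmonic component.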

OA and EA used to be treated as different brain activities, depending upon whether or not the organism is exposed to external stimuli. Previous research adds “resting” before the imaging modality to describe OA, e.g., resting fMRI or sometimes resting EEG. Arguably, however, the brain may not be idling in the absence of direct sensory stimulation and behavioral output, since OA may still contain evoked components from mind wandering, emotional intrusion, automatic memory retrieval, and commands to or feedback from visceral organs. The distinction between OA and EA is accordingly not absolute. This theoretical article tentatively assumes that OA comprises four components: spontaneous organized activity, physiological noise, non-physiological noise, and the interaction effect with internal/external stimuli. “Spontaneous organized activity” refers to repeated spatio-temporal structures, either regular or chaotic, which generally (but not always) have higher power/amplitude, i.e., spikes, action potentials or oscillation. It will become clear later (Section 6) that part of the spontaneous organized activity of OA is actually a replica of EA (e.g., neuronal/neural codes), and their relationship is far more intimate than previously thought [26] [27] [28]. Physiological noise indicates stochastic activities that may facilitate signal transmission via several mechanisms, e.g., stochastic resonance and stochastic synchrony [29], whereas the non-physiological counterpart follows the traditional sense of noise that has a detrimental impact on information transfer and processing. Conceptually, sporadic spikes that do not contribute to psycho-physiological function should be regarded as non-physiological noise. Although the neuron is a threshold machine obeying the all-or-none law, noise may still operate when there is no action potential. Noise, whether physiological or not, may indeed change the conductance, capacitance, and membrane potential of a neuron without firing it, i.e., affecting the hidden state of the neuron; this differs from the noise of electronic devices, where circuit parameters are relatively stable. Subthreshold noise may result from many sources, such as thermodynamic noise at the early sensory stage, biochemical noise of the cellular machinery, and background activity caused by pre-synaptic bombardment that does not reach the firing threshold. Through the modulation of the phase and resonance of neurons, the influence of noise is carried over during signal transmission. Subthreshold noise may benefit signal transfer and is thus probably physiological. The rationale supporting the distinction between physiological and non-physiological noise will be discussed in more detail in Section 5 (Function and characteristics of OA).
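Stochastic resonance, one of the facilitative mechanisms mentioned above, can be demonstrated with a toy threshold unit (all parameters are illustrative assumptions): a subthreshold sinusoid alone never crosses threshold, moderate noise makes the threshold crossings phase-lock to the signal, and excessive noise washes the locking out again:

```python
import numpy as np

def signal_power_at_f(noise_sd, seed=0):
    """Power of the threshold-crossing train at the stimulus frequency."""
    rng = np.random.default_rng(seed)
    dt, duration, f = 1e-3, 20.0, 5.0
    t = np.arange(0.0, duration, dt)
    stim = 0.5 * np.sin(2 * np.pi * f * t)   # subthreshold: peak 0.5 < threshold 1.0
    x = stim + noise_sd * rng.standard_normal(t.size)
    out = (x > 1.0).astype(float)            # binary threshold detector
    spec = np.abs(np.fft.rfft(out - out.mean()))
    freqs = np.fft.rfftfreq(t.size, dt)
    return spec[np.argmin(np.abs(freqs - f))]

p_none = signal_power_at_f(0.0)   # no noise: never fires, nothing transmitted
p_mid = signal_power_at_f(0.6)    # moderate noise: crossings track the stimulus
p_high = signal_power_at_f(5.0)   # heavy noise: crossings become indiscriminate
print(p_none, p_mid, p_high)
```

The non-monotonic dependence on noise level (zero, then high, then low transmitted signal power) is the signature of stochastic resonance and illustrates why some subthreshold noise can be regarded as physiological.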

Theoretical research has suggested that random noise may drive an interacting network to generate synchronous oscillation. It is notable that the oscillatory behaviors of the network are prone to occur at a certain “critical” point (or zone), where the dynamics tend to manifest as chaotic. Criticality is a special state of EIB (in this article) where the characteristics of the network change drastically, akin to the boundary of a phase transition in physics, for example the state of coexistence of water and ice. For a network at criticality, the transmission and dynamic range of information are optimized and maximized (Section 4). Through synaptic plasticity, the brain dynamics may shift along the critical boundary to become more or less chaotic. Together, noise and spontaneous neural activity, and chaos and regularity (information transmission), can be bridged by EIB at the critical state. The reproducible spatio-temporal patterns of the spontaneous organized activity of OA are endorsed by structured EIB, which may carry out information storage/retrieval, representation of neuronal/neural codes, and provision of immediate access to the sensorimotor repertoire. These issues will be elaborated in later sections. Across different hierarchies and scales, every system has its own noise, which may come from the system itself or from the observer’s measurement, and most frequently from both. Measurement noise is not of interest in this theoretical article and can be categorized as non-physiological noise of OA. We will see that signal variability that used to be attributed to noise may be physiologically meaningful and is conceptually relevant to entropy (Section 6).

2.2. Excitatory and Inhibitory Equilibrium

Brain statistics show that 1 mm^3 of mouse cortex contains 10^5 neurons, 10^8 synapses, and 4 km of axon [30]. It was found that white matter volume scales approximately as the 4/3 power of gray matter volume across fifty-nine mammalian species, indicating that there must be wiring principles to which mammalian brains conform [31]. Intense local computation is endorsed by the high density of axons and dendrites, which constitute around 60 percent of gray matter [32]. The fact that local connections are denser than distant ones may contribute to the property of small-worldness. On average, each individual neuron can integrate the information from thousands of other neurons [33] [34] and send its activity back to the network. Based on this underlying hard-wiring, excitatory and inhibitory mechanisms work hand in hand to construct neural codes and to process and transfer neural information. The two cardinal forces keep a dynamic balance at all times, in either the resting or the evoked state [35] - [41], similar to the core concept of Yin-Yang theory in oriental philosophy, frequently symbolized as Taiji, see Figure 5 [42]. Yin and Yang respectively denote negative and positive drives, not only opposing but also complementing each other. The “opposing” side results in quenching and energy saving, whereas the “complementing” side leads to genesis.

It has been summarized that the wiring of neural bundles can be largely categorized into two subsystems, namely informational and modulatory, with the latter adjusting the characteristics of the former [43]. The informational pathway is organized among rapid-conducting (50 m/sec), well-myelinated, large neurons in a hierarchical or heterarchical manner, where neural excitation is mainly mediated by glutamate, and neural inhibition by two fast neurotransmitters, glycine and gamma-aminobutyric acid (GABA). In contrast, the modulatory pathway is distributed diffusely by poorly myelinated neurons with thin axons and slow conduction velocity (0.5 m/sec). Unlike the informational subsystem, which uses amino acids to gate ion channels (alongside voltage-gated ones), the neurotransmitters of the modulatory subsystem act on G-protein-coupled receptors via monoamines (e.g., norepinephrine, acetylcholine, dopamine, serotonin) or peptides (e.g., substance P, endorphins). From the tissue organization perspective, neuro-inhibition is mainly executed by inhibitory interneurons. In addition, neuronal suppression may also work through post-synaptic hyperpolarization, synaptic depression, spike frequency adaptation, elevation of the spiking threshold, changes in membrane conductance, and negative auto-feedback (e.g., via auto-receptors). Referring to the theory of Yin-Yang, the counteraction between excitation and inhibition in the informational network is evident, but what is the “genesis” aspect of EIB? We believe that it is the rich brain dynamics emerging around the banks of the EIB boundary. Versatile brain dynamics become dampened toward monotonic patterns in over-excitation conditions, such as epilepsy, or in over-inhibition conditions, such as deep anesthesia induced by GABA agonists.

Around 75 percent of neocortical neurons are excitatory pyramidal neurons, while inhibitory interneurons comprise the remaining one-fourth. There must be organization and/or mechanisms to compensate for the relatively smaller number of interneurons to achieve a delicate balance. For example, the firing rates and synaptic strengths of inhibitory interneurons are higher than those of excitatory neurons, and the depression of inhibitory synapses due to sustained activation is less significant [44]. A neural network can thus be modeled as an array of recurrently connected excitatory neurons plus a common inhibitory neuron [45]. Distant excitatory input may activate local excitatory and then inhibitory neural populations to reach EIB. EIB is reached in both the spatial and temporal domains. The origin of short-term depression/adaptation can be either local (e.g., synaptic depression, spike frequency adaptation) or distant, through the thalamo-cortical pathway [46]. Recent evidence highlights that neuronal excitation and inhibition may cover the same receptive fields, such as in acoustic tonal and visual orientation tuning [41] [47]. Examination of cellular conductance revealed that the inhibitory mechanism may possess the same preferred property (e.g., orientation) and tuning width as its excitatory counterpart. Furthermore, organized in correspondence with the visual receptive fields, inhibitory interneurons may contribute to the distinguishing push-pull phenomenon of striate cortical simple cells, where excitation and inhibition can be elicited in isolation within discrete subregions [48] [49]. The above evidence suggests that the spatial organization of EIB is tissue-specific, which in turn may lay the foundation of pertinent physiological functions.
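The modeling scheme just mentioned, an array of recurrent excitatory units stabilized by one common inhibitory unit, can be sketched as a simple rate model (weights, time step, and input are illustrative assumptions, not from [45]): with inhibitory feedback the population settles to a finite rate, while the same recurrent excitation without inhibition diverges:

```python
import numpy as np

def simulate(w_ei, n=50, steps=4000, dt=0.01, seed=3):
    """Rate model: n excitatory units with mean-field recurrent gain 1.5,
    plus one common inhibitory unit fed back with weight w_ei."""
    rng = np.random.default_rng(seed)
    r_e = rng.random(n)      # excitatory rates, random initial conditions
    r_i = 0.0                # common inhibitory rate
    for _ in range(steps):
        drive = 1.5 * r_e.mean() - w_ei * r_i + 1.0   # recurrence - inhibition + input
        r_e = r_e + dt * (-r_e + np.maximum(drive, 0.0))
        r_i = r_i + dt * (-r_i + max(r_e.mean(), 0.0))
        if r_e.mean() > 1e6:  # runaway excitation: stop early
            break
    return r_e.mean()

balanced = simulate(w_ei=2.0)   # with inhibition: fixed point at r = 2/3
runaway = simulate(w_ei=0.0)    # without inhibition: activity explodes
print(f"balanced rate ~ {balanced:.2f}, uninhibited rate > {runaway:.0f}")
```

At the fixed point, r = 1.5r - 2r + 1, giving r = 2/3: the inhibitory feedback converts a supercritical recurrent gain into a stable equilibrium, a miniature of the compensation described above.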

As for the temporal domain, the mechanism of EIB may also present tissue specificity. For example, the EIB of orientation-sensitive simple cells in visual cortex operates in chorus, whereas the EIB of neurons in primary auditory cortex has a short temporal lag [41] [49]. The temporal relationship between excitatory input and balanced inhibition is precise in auditory cortex, with the latter suppressing the former within around 4 msec, during which spikes occur. The consequence of this lag for neuronal dynamics is not trivial. Wehr and Zador performed computational simulation using an integrate-and-fire model to explore the effects of continuing versus delayed balanced inhibition [41]. EIB with no delay (on average) and with a brief delay respectively generated irregular firing and highly transient spiking, fitting the observations in visual and auditory cortices. Since closely located neurons receive similar inputs, Okun and Lampl investigated the dynamics of EIB by recording 47 pairs of nearby neurons in the barrel cortices of lightly anesthetized rats, using a dual recording technique to disentangle excitation and inhibition. The authors found highly synchronized excitatory and inhibitory inputs to single neurons in both spontaneous and sensory-evoked (multi-whisker deflection) conditions, with the inhibitory inputs lagging their excitatory counterparts by several milliseconds [40]. A mild delay of neural inhibition was also noticed in an earlier report on the prefrontal cortex [39], where excitatory conductance augmented with increasing neural activity, and the inhibitory conductance followed in a proportional way. Accordingly, EIB is spatially and temporally balanced (possibly with a millisecond-scale lag), maintained via various pathways, specifically organized for different brain regions, and established from the very first processing unit, the neuron. The mechanism of EIB may avoid noise accumulation in the central nervous system [50], control the gain of excitation [37], and prevent unnecessary firing due to saturation. In large-scale networks, excitatory and inhibitory balance can be achieved through bundled hard-wiring, for example, the DeLong and Wichmann loop of the basal ganglia, mutual suppression between neocortical and limbic compartments, and the excitatory-inhibitory interaction between the ventromedial prefrontal cortex and raphe nuclei [51] [52]. The consequences of losing EIB in large-scale networks can be serious. Imbalance in the cortico-limbic interaction may underlie the characteristic pathology of major depressive disorder (MDD), i.e., hypofrontality and limbic hyperactivity [52], see Figure 3.
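The effect of the few-millisecond inhibitory lag can be sketched with a passive membrane receiving matched excitatory and inhibitory current steps. This is a current-based simplification of the conductance-based models cited above, with all constants illustrative: depolarization survives only during the 4-ms lag window, producing a sharp, transient response rather than sustained drive:

```python
import numpy as np

dt, t_end = 0.1, 200.0                    # ms
t = np.arange(0.0, t_end, dt)
tau, v_rest = 10.0, -70.0                 # membrane time constant (ms), rest (mV)

# Matched current steps: excitation at 50 ms, balanced inhibition 4 ms later.
i_exc = ((t >= 50.0) & (t < 150.0)) * 1.0
i_inh = ((t >= 54.0) & (t < 154.0)) * 1.0

v = np.empty_like(t)
v[0] = v_rest
for k in range(1, t.size):
    dv = (-(v[k - 1] - v_rest) + 20.0 * (i_exc[k - 1] - i_inh[k - 1])) / tau
    v[k] = v[k - 1] + dt * dv

t_peak = t[v.argmax()]
v_late = v[int(120.0 / dt)]               # well after inhibition catches up
print(f"peak at {t_peak:.1f} ms, late Vm ~ {v_late:.1f} mV")
```

The membrane depolarizes only during the brief window before inhibition arrives and then relaxes back to rest, consistent with the highly transient spiking regime described for auditory cortex.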

EIB is not a static steadiness; importantly, it may be organized and structured adaptively at criticality to allow several signature dynamic features to emerge, such as oscillation and chaos, which may facilitate information processing [53] [54] [55] [56] [57]. The design of EIB guarantees that mild disruption of the equilibrium by either excitatory or inhibitory input imposes a non-trivial influence on the neuron, allowing it to track changes and laying the foundation for difference-based coding (Section 7). The perturbation may not be enough to fire a neuron, but it will substantially affect the membrane background activity and hence the timing of firing, which is one of the sources of irregularity and/or chaos in vivo [38] [56]. Inspiring numerical research by van Vreeswijk and Sompolinsky revealed that, under the EIB constraint, the property of chaos shows up in network dynamics and persists even under constant external input [56] [57]. Further, the network activity demonstrated a linear relationship with the intensity of external stimuli, implying that a nonlinear EIB system (nonlinear for each unit/neuron) may engender a linear response (linear for the population) to external perturbation, as has been reported in several neuroimaging studies [58]. Although traditional research used to adopt recurrent excitatory-excitatory interaction to explain neuronal oscillation, recent advances have confirmed the fundamental role of the inhibitory mechanism in rhythmogenesis [19].

Figure 3. Highlight of a network interacting pattern related to MDD: reciprocal suppression between dorsal and ventral compartments. Blue: inhibition; red: excitation. Left: short red arrows indicate various excitatory inputs to the dorsal and ventral compartments of the brain. Blue arrows show mutual suppression as an interacting mechanism between the two compartments. Right: in homeostasis, balance is reached and their sizes (an abstract representation of metabolic level or engaged state) are approximately equal, with mild perturbation allowed. When the breaking point is reached and the balance is broken, as in MDD, the size of the ventral compartment enlarges, as does its negative influence on the dorsal compartment; conversely, the size of the dorsal compartment shrinks and its impact on the ventral limbic system is reduced. The overall consequence evolves toward a state/attractor of hypofrontality and limbic hyperactivity.

3. Criticality and EIB-OA

In physics, criticality generally refers to a state of equilibrium with the potential for phase transition, like the coexistence of ice and water. In a power plant, criticality is a balanced state in which the production and loss of neutrons are equivalent, so that the fission chain reaction of uranium is maintained and under control. These two examples hint at a research focus of great interest common to many disciplines, i.e., the equilibrium boundary between several possible states. The concept of phase transition can be re-formatted in network terms as the dynamics between total randomness and rigid order (the edge of chaos), or the dynamics among several concurrent meta-stable states, which is usually accompanied by drastic change and complicated diversity in response to perturbation. The denotations of equilibrium and stability in this article are different, because EIB could be only minimally or weakly stable, allowing various possible trajectories to travel around.

3.1. Criticality and OA

It is important to note that at criticality oscillation may occur naturally. Ghosh et al. constructed a set of network equations by combining a realistic model of axonal membrane and pulse transmission [59] [60], a topology of coupled neural nodes based on a connectivity map (CoCoMac) [61], nerve conduction delays, and physiological noise to study the resting-state dynamics of the brain [3]. In the neighborhood of the critical boundary separating stable and unstable regions in parameter space, the authors found the emergence of coherent spontaneous fluctuations in the simulated neuro-electric activities. By contrast, all oscillations were either strongly damped or displayed high-amplitude stereotyped spikes (resembling epileptic spikes) when the parameters were farther away from the critical boundary. Ghosh et al.'s stability analysis elegantly illustrated the power of theoretical simulation to unveil hidden principles of brain dynamics. Similarly, Deco et al. combined the Wilson-Cowan model, the CoCoMac connection map with 38 regions, conduction delays, and Gaussian noise to investigate neural network behaviors [1] [62]. The authors selected working points at the Hopf bifurcation, where the system was close to losing stability, and again synchronized oscillations that fit physiological characteristics started to occur. At criticality, the similarity between the connectivity pattern of the simulation results and that of empirical resting fMRI was confirmed by another study using different neural models and coupling constraints [2].
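The qualitative point of these simulations, that noise excites coherent oscillations near the critical boundary, can be reproduced with a minimal stochastic Hopf normal form. This is a generic caricature, not the connectome-based models cited above, and the damping, frequency, and noise level are assumed values: just below the bifurcation, noise sustains narrow-band fluctuations at the system's natural frequency:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, t_end = 1e-3, 60.0
n = int(t_end / dt)
mu, omega, sigma = -0.5, 2 * np.pi * 10.0, 0.2   # just below the Hopf point; 10 Hz

# Euler-Maruyama on dz = ((mu + i*omega)z - |z|^2 z) dt + sigma dW.
z = np.zeros(n, dtype=complex)
for k in range(1, n):
    noise = sigma * np.sqrt(dt) * (rng.standard_normal() + 1j * rng.standard_normal())
    z[k] = z[k - 1] + dt * ((mu + 1j * omega) * z[k - 1] - abs(z[k - 1]) ** 2 * z[k - 1]) + noise

# The real part shows noise-sustained, narrow-band 10 Hz fluctuations.
x = z.real - z.real.mean()
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, dt)
peak_freq = freqs[spec.argmax()]
print(f"spectral peak ~ {peak_freq:.1f} Hz")
```

With mu far below zero the fluctuations are strongly damped and broadband; moving mu toward zero sharpens the spectral peak, mirroring the dependence on distance from the critical boundary reported in the cited work.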

3.2. Self-Organized Criticality (SOC), EIB and Neuronal Avalanches

For non-linear dynamic systems, SOC describes the property that the critical point is also an attractor toward which the dynamics prefer to travel. SOC was first proposed by Bak, Tang and Wiesenfeld, and is thus also named the BTW model [63]. The authors used a pendulum array as a thought experiment to highlight how a minimally stable system would evolve. They constructed cellular automata obeying simple rules to illustrate that, although still debated, SOC was a good candidate to explain the 1/f noise frequently encountered in nature. Nowadays, it is generally agreed that SOC has a close relationship with cardinal characteristics of chaos (the relevance to neuroscience will be discussed in Section 3), such as fractals, scale-invariance, 1/f noise, and power-law distributions. The concept of SOC has been extended to many other fields, such as earthquakes, economics, epidemics, forest fires, solar physics, super-conduction, ecology, sociology, and neuroscience. In contrast to traditional criticality, which demands finely tuned details, a welcome trait of SOC regarding its application to neurobiology is that the parameters of the model can be varied widely without affecting the emergence of complex behaviors. This property of SOC implies that continuously varying biological underpinnings may engender relatively consistent neuro-electric features, and hence mental life.
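The BTW cellular automaton described above can be written in a few lines (the grid size and grain count are arbitrary choices for this sketch): grains are dropped at random sites, any site holding four or more grains topples one grain to each neighbor, and the number of topplings triggered by one drop defines the avalanche size:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 20
grid = np.zeros((L, L), dtype=int)

def drop_and_relax(grid):
    """Drop one grain at a random site; topple until all sites hold < 4 grains.
    Returns the avalanche size (total number of topplings)."""
    i, j = rng.integers(L), rng.integers(L)
    grid[i, j] += 1
    size = 0
    while True:
        over = np.argwhere(grid >= 4)
        if over.size == 0:
            return size
        for i, j in over:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # grains at the edge fall off (dissipation)
                    grid[ni, nj] += 1

sizes = np.array([drop_and_relax(grid) for _ in range(8000)])
print(f"largest avalanche: {sizes.max()} topplings")
```

No parameter tuning is needed: the accumulation (dropping) and dissipation (boundary loss) forces drive the pile to the critical state by themselves, where avalanche sizes span many orders of magnitude.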

To explain SOC, a metaphorical “sand pile” is commonly adopted to demonstrate the critical and unstable condition: sprinkle sand grains on a growing pile, and through repeated collapsing processes the sand-pile system may ultimately reach a certain “equilibrium state” where the next grain sprinkled onto it could cause a landslide or “avalanche”. Initial empirical research on SOC accordingly inclined toward investigating real granular matter, and an SOC system is conceived to have accumulating as well as dissipating forces that keep the system “balanced” near the critical point. Beggs and Plenz are among the pioneers who postulated the existence of SOC in neural tissue, dubbed neuronal avalanches [64]. In their classical work, organotypic slices of rat somatosensory cortex were prepared and LFP was recorded. A neuronal avalanche was defined as a spatial pattern of sharp LFP peaks that was preceded and followed by blank frames, i.e., no activity for at least one time bin. The “spatial pattern” of a neuronal avalanche describes the synchronized propagation of potential volleys, especially at short time scales of less than 100 msec. It is obvious that information transmission is embedded in the avalanches. They calculated the branching parameter as the average number of descendant neural events arising from one ancestor event. The empirical value of the branching parameter was one, exactly implying EIB. If the neural mechanisms of excitation and inhibition were imbalanced, the branching parameter would be greater or less than one, respectively amplifying or damping neural activity over time. A power-law distribution of system observables is a hallmark indicator of chaos. In this EIB background, the authors observed power laws with an exponent of -3/2 for the event size and -2 for the lifespan distribution of neuronal avalanches. SOC has also been proposed to explain the long-range correlations and power-law behaviors of EEG and MEG recordings [65] [66].
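The branching picture can be sketched directly with a minimal Galton-Watson process (a caricature of the cited analysis, not the authors' pipeline): setting the branching parameter to one, so that each active event triggers on average exactly one descendant (the EIB condition), yields avalanche sizes that follow an approximate power law with slope near -3/2:

```python
import numpy as np

rng = np.random.default_rng(6)

def avalanche_size(sigma=1.0, cap=100_000):
    """Total events in one avalanche of a branching process with mean offspring sigma."""
    active, size = 1, 1
    while active > 0 and size < cap:
        active = rng.poisson(sigma * active)   # each event triggers Poisson(sigma) descendants
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(200_000)])

# Log-log slope of the size distribution over small and medium sizes.
s_vals = np.arange(1, 101)
counts = np.array([(sizes == s).sum() for s in s_vals])
mask = counts > 0
slope, _ = np.polyfit(np.log10(s_vals[mask]), np.log10(counts[mask]), 1)
print(f"size-distribution slope ~ {slope:.2f} (critical branching predicts -3/2)")
```

Setting sigma above or below one in this sketch makes avalanches runaway or quickly extinguished, respectively, which is the branching-parameter version of excitation-inhibition imbalance described above.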

The phenomenon of neuronal avalanches has been extended empirically and analyzed theoretically. Priesemann et al. examined SOC in different vigilance states using intracranial depth recording (in vivo) and modeled the data with an integrate-and-fire SOC model [67]. They used "area under deflection" as the primary quantity to define neuronal avalanches and discovered that the brain dynamics did conform to power law, whether in wakefulness, rapid-eye-movement sleep, or slow-wave sleep, although mild deviations existed. Theoretical simulation of spiking neural networks with dynamic synapses successfully replicated SOC behaviors that were robust to parameter changes [68] [69]. Conventional SOC research did not consider the hard-wired infrastructure of the brain. Computational simulation by Rubinov et al. introduced hierarchical modular connectivity into a network of spiking neurons (leaky integrate-and-fire neuronal model) [70] [71]. The neuronal model was more realistic given that leakage conduction, synaptic currents, external currents, changes in membrane potential and synaptic weights, non-linear threshold properties and STDP (Hebbian rule) were all considered. In addition to power-law distributions of the size and duration of neuronal avalanches, the authors noticed patterns of modular spikes (coherence within modules), distant synchronization (coherence between modules), a contribution of modularized inhibitory neurons to SOC, "phase transition" of dynamics shaped by synaptic plasticity, and the significance of hierarchical modularity to the occurrence of a broad critical regime (robustness to changes of parameters). Their work provided clues to the consequences of self-similarity at both the structural and dynamic levels. The properties of neuronal avalanches and power-law distribution have been proposed to occur in balanced networks where the net difference between excitation and inhibition is small compared to the magnitude of excitation and inhibition—EIB [72].

It is imperative to note that power law is closely related to self-similarity across different scales and is thus an indicator of chaos (or sometimes complexity). Criticality is frequently associated with chaotic features, whereas conversely, power law (chaos) is not sufficient to imply criticality [73]. Nor does the branching parameter guarantee criticality; rather, it implies EIB when equal to one. We believe that the concept of SOC is insightful and could be one of the main manifestations rooted in neural equilibrium.

4. Complexity/Chaos and EIB-OA

A linear system with finite dimension is never chaotic. The degree of non-linearity of brain informatics varies across imaging modalities, for example higher in EEG and lower in fMRI. Although traditional neuroimaging research used to adopt linear approaches to decipher neural responses, empirical evidence has unveiled properties of non-linearity, complexity, multi-stability, long-range temporal correlations, power-law scaling behaviors and so on [2] [22] [23] [27] [66] [74] [75]. Since inter-correlation and self-similarity, as opposed to uncorrelated and diffusive processes, have been observed in brain informatics, the probability distributions of global quantities, such as the power of a particular frequency or the duration of a particular state, have non-Gaussian tails characterized by extremal events [74] [76].

Chaos is a mathematical discipline exploring dynamic systems that are modeled with (few) deterministic (differential) equations. The chaotic trajectory evolving with time may aggregate to show interesting topology, such as an attractor; we resort to a double scroll as an illustration in Figure 4 [77]. It is evident that the spatio-temporal patterns of chaos can be correlated (even though not identical), and it is possible to transit between different wings (states) [22] [23] [75].

Figure 4. The trajectory of double scroll attractors. Two attractors are demonstrated.
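A double-scroll trajectory like that in Figure 4 can be reproduced with a minimal sketch of Chua's circuit in its dimensionless form, integrated here with a simple forward-Euler scheme; the parameter values are the standard ones for the double scroll, not taken from [77]:

```python
import numpy as np

def chua_trajectory(n_steps=150000, dt=0.001,
                    alpha=15.6, beta=28.0, m0=-8/7, m1=-5/7):
    """Forward-Euler integration of Chua's circuit, the canonical
    double-scroll system: x' = alpha*(y - x - h(x)), y' = x - y + z,
    z' = -beta*y, with piecewise-linear nonlinearity h(x)."""
    x, y, z = 0.7, 0.0, 0.0
    traj = np.empty((n_steps, 3))
    for k in range(n_steps):
        h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
        x, y, z = (x + dt * alpha * (y - x - h),
                   y + dt * (x - y + z),
                   z + dt * (-beta * y))
        traj[k] = (x, y, z)
    return traj

traj = chua_trajectory()
```

Plotting the first state variable over time shows the trajectory lingering around one wing (x > 0 or x < 0) and chaotically switching to the other, the transition between "states" mentioned above.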

Conventionally, chaos is defined as "aperiodic" and "bounded" dynamics in a "deterministic" system that is "sensitive to its initial condition" (butterfly effect), four terms in total (p.27 - p.28) [78]. These four criteria of a chaotic system do describe some features of brain dynamics, but not satisfactorily. For example, brain dynamics may contain periodic portions in the spectral domain. The criterion of a "deterministic system" may be substantiated by the neural architecture, response mechanics, self-balancing (negative) feedback, self-reinforcing (positive) feedback and so on; all of these are extremely complicated indeed and modifiable online via synaptic plasticity. Besides, it is generally believed that the brain possesses noise and stochastic processes (e.g., Poisson-like spike trains). There are many interesting features of chaos theory not covered by this theoretical article, including period doubling, the Cantor set, the Poincaré map, strange attractors, the relevance to entropy and so on. However, measurements spanning the neuronal, neural, psychological and behavioral domains do show certain features of chaos, although they do not obey it completely.

It is obvious that "chaos" does not imply that the brain works in a random or crazy way. The idea of an attractor can be extended to an attractor neural network, where the spontaneous dynamics may settle into one of several possible firing patterns, which may eventually destabilize and shift to another pattern (attractor) either automatically or under the effect of noise. The perturbation of an external stimulus may destabilize the network so that the dynamics leave the previous state and detour to the appropriate wing for the duration of the central effect of that stimulus. Fractals are often used to capture the geometry of chaos. Fractal objects permeate our world, even if at first glance they may look irregular, such as landscapes, clouds, trees, rivers, lightning, branches of tracheal tubes, blood vessels and neuronal dendrites [79]. Some fractal objects are artificial, such as the Koch snowflake, Sierpinski triangle, Dragon curve, Pythagoras tree and Cantor set [80]. Self-similarity across various scales is an important source of scale-invariance and power-law distribution and may contribute to long-range dependence/correlation in the spatial and temporal domains. Self-similarity and power law can be viewed as identical in many systems; the former is the description, and the latter is the mathematical form. The equivalence between self-similarity, scale-invariance and power law is tenable, but none of them promises criticality.
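The link between self-similarity and power law can be made concrete with a box-counting estimate of fractal dimension. A minimal sketch for the middle-third Cantor set follows; points are encoded as base-3 integers so the counting is exact:

```python
import numpy as np

def cantor_points(depth):
    """Base-3 integer codes of Cantor-set points (ternary digits 0 and 2
    only), so box membership can be computed with exact integer math."""
    pts = np.array([0], dtype=np.int64)
    for _ in range(depth):
        pts = np.concatenate([3 * pts, 3 * pts + 2])
    return pts  # the actual points are pts / 3**depth, inside [0, 1)

def box_count(pts, depth, k):
    """Number of occupied boxes of side length 3**-k."""
    return len(np.unique(pts // 3 ** (depth - k)))

depth = 12
pts = cantor_points(depth)
ks = np.arange(1, 8)
counts = np.array([box_count(pts, depth, k) for k in ks])
# Box-counting dimension: slope of log N(s) versus log(1/s), s = 3**-k.
dim, _ = np.polyfit(ks * np.log(3.0), np.log(counts), 1)
```

The recovered slope equals log 2/log 3, about 0.63, the analytic dimension of the Cantor set: the box count obeys the power law N(s) proportional to s^(-D), which is exactly the sense in which self-similarity and power law coincide.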

Complexity theory developed from chaos theory. Chaos is a mathematical fact, while the precise definition of complexity is yet to be delineated. A complex system contains many non-linearly interacting, interdependent elements. In addition, a complex system spans different levels, each with specific structures and organization rules, with the system at each scale made up of the constituents of the finer scale. A complex system usually involves the interplay between chaotic and non-chaotic components. One of the most fascinating phenomena of a complex system is its emergent collective behavior, which may be hard to predict from an understanding of its lower-level constituents. For example, with increasing scale, neuro-activities may manifest as neuronal spikes/avalanches, LFP/oscillation of neural tissue, and EEG/BOLD fluctuation in brain areas. The emergent property is relevant to the concept of self-organization, which can arise out of simple rules such as differential equations and automata. Another notable difference between fractals and complex systems lies in that fractal mathematics can be applied to describe the structure of static geometry, whereas a complex system must evolve with time. For living organisms, the "complex adaptive system" was proposed to account for a complex system that may alter itself to adapt to a changing environment and, conversely, may also change the environment to suit it. In this article, the term chaos is used in its mathematical sense, and the term complexity is applied to denote a holistic and realistic aspect of brain dynamics, which comprises the component of chaos/fractals but is not limited to it.

4.1. Virtual Brain Space and Mental Space

Many charming ideas from chaos/complexity theory have been taken up in neuroscience research. Complexity theory is welcomed for many reasons. The most intriguing one, we believe, is that complexity theory is a potential explanation, or at least an admirable endeavor, to fill in the gap between anatomical space and mental space; we name it "virtual brain space", explained below. Mapping brain function to anatomical location, dubbed functional localization or the locationist account, has pushed forward substantial progress; however, it might be abused to become a form of reductionism, for example taking the enhanced neural activity in the amygdala as equivalent to fear, or that in striate cortex as vision. The potential fallacy of positioning psychological function at a certain place in the brain has been challenged. For example, keeping time is the task of a clock, but it is unconvincing to attribute that function to a particular gear [81]. Nevertheless, it seems appropriate to assume that there must be correspondence between the brain (biology) and psychology.

Given that major categories of psychological functions have respective materialistic implementations in specific brain regions, the next question is: what is the brain feature corresponding, in a real-time manner, to the ever-changing psychological content? A natural candidate, no doubt, is brain dynamics. Dynamics indicates something happening in the stream of time, which is common ground shared by the brain and the mind of living organisms. A theory is desperately needed to describe the brain space with its characteristics instantly homologous to the psychological content occurring in mental space. When collapsing the temporal dimension, the correspondent structures between brain space and mental space should be conspicuous. From the perspective of complexity theory, the attractors, meta-stable states, and trajectories can be respectively projected to specific psychological entities/states, psychological possibilities, and the routes of conscious flow; these topics will be introduced later. Complexity/chaos enables the brain to be a hermeneutic device [82]. In this article, "psychological" points to a specific capability, while "mental" is a broader term, including states in which no specific psychological function is carried out.

Is there empirical evidence mediating the relationship between brain and mental spaces? Empirical support may come from the results of multivariate analyses in functional brain imaging research. Take a general fMRI study adopting a certain experimental design as an example: in contrast to the conventional general linear model (GLM) approach, where the coordinate with peak statistics (or center of mass) is taken as a representative, the multivariate approach addresses the activation pattern carrying pertinent information from clustered voxels of interest (assume voxel number = N) [83]. Nowadays, there are two main analytic platforms (and a lot of variations) to decode patterned information in the brain, i.e., multivariate pattern analysis (MVPA) and representational similarity analysis (RSA) [84] [85]. To relate to psychological contents, MVPA may combine machine learning procedures and classification schemes, whereas RSA may resort to dissimilarity matrices and clustering algorithms (not the focus of this theoretical article). An activation pattern is inherently multi-dimensional, and its topology enabling classification of experimental conditions/stimuli generally indicates two things: first, for each experimental condition or category of stimuli, there exists a spatially distributed response profile, neither random nor homogeneous; second, the patterns are distinct. Different multivariate patterns of a cluster, say the fusiform face area (dimension N), may carry the information to distinguish between different faces. The most primitive sub-space for facial recognition (embedded in the virtual brain space with dimension M, M >> N) is obtained by projecting the weights (i.e., beta values of the GLM results) into the N-dimensional sub-space. Recovering the spatially distributed pattern in the space with dimension N+1 (N voxels plus time) shall disclose that the trajectory of the cluster's dynamics indeed gravitates toward and lingers around the sub-space spanned by the N voxels with higher weights (or beta values) for facial recognition. Activated clusters/blobs for distinct classes of experimental paradigms may imply differentially preferred sub-spaces, and these sub-spaces may index the "location" correspondent to the associated psychological function or state in the virtual brain space [86].
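The logic of pattern decoding can be conveyed with a toy simulation, entirely hypothetical with respect to any real dataset: two conditions are assigned distinct distributed "activation patterns" over N voxels, noisy trials are generated, and a nearest-centroid classifier (the simplest form of MVPA decoder) recovers the condition labels from the multivariate pattern rather than from any single voxel:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40

# Hypothetical distributed activation patterns for two conditions
# (e.g., two face identities); neither is localized to single voxels.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)

def trials(pattern, n):
    """Simulated single-trial responses: pattern plus Gaussian noise."""
    return pattern + rng.normal(0, 1, (n, n_voxels))

train_a, test_a = trials(pattern_a, n_trials), trials(pattern_a, n_trials)
train_b, test_b = trials(pattern_b, n_trials), trials(pattern_b, n_trials)

# Nearest-centroid decoding: a test trial is labeled according to the
# training centroid (mean pattern) it lies closest to.
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(x):
    d_a = np.linalg.norm(x - centroid_a, axis=1)
    d_b = np.linalg.norm(x - centroid_b, axis=1)
    return np.where(d_a < d_b, 'a', 'b')

accuracy = np.mean(np.concatenate([decode(test_a) == 'a',
                                   decode(test_b) == 'b']))
```

Although each voxel is individually noisy, the distributed pattern supports near-perfect classification, illustrating why pattern topology, not peak location, carries the decodable information.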

It is noteworthy that the raw neural/BOLD signals may not be the actual entity in the virtual brain space. The dynamics of N voxels may warrant certain unknown function(s) to convert them to better match the mental world. Accordingly, "functional localization" should not be restricted to the concrete anatomical space made of neural tissues; instead, it should be examined in the brain space where the characteristics of dynamics are conceptual and multi-dimensional—that is why we regard the space as "virtual". What MVPA and RSA decode is not only the pattern but, perhaps more importantly, the indicated "psychological spot" in the virtual brain space. Understanding the distinction between anatomical space and virtual brain space is a premise to appreciate why a theory capable of summarizing dynamics is so desirable.

At this point, it is proper to contrast our virtual brain space with reflections on the brain-mind issue from other disciplines [87]. Psychology largely stems from studying the content of consciousness. Based on recent meta-analyses, some researchers have begun to consider "psychological primitives" that are not consciously accessible but may underlie psychological functions and may have better correspondence with neural events. Here, we would like to distinguish that above the conscious level, psychology has its neural correspondence in the virtual brain space, whereas below the conscious level, neuronal/neural features are the candidates for psychological primitives. Decades of endeavor to localize brain functions in neuroscience have certainly been successful in some sense but are destined to be insufficient, since the validity of mapping "below-horizon" neural features to "above-horizon" psychology is questionable [87]; see Figure 5 for explication. In the virtual brain space, psychological function or mental state may manifest as

Figure 5. The correspondence between psychological primitives and neuronal/neural features, and that between psychological function and complexity topology in virtual brain space. The symbol Taiji is situated in the middle of the figure to indicate EIB-OA. Taiji is a core concept of oriental philosophy Taoism which emphasizes balance between and genesis from positive and negative forces/potentials. Solid bi-directional arrows and curved arrows respectively represent correspondence and unknown transformation. The horizontal gray bar distinguishes between above and below consciousness.

attractors and “meaning” may be stored in the dynamic orbit.

Within a particular brain region and the correspondent sub-space in the virtual brain space, say the fusiform face area, slight deviations in the spatial distribution of neural activities may account for the capability to accommodate tremendous within-category variability, e.g., to recognize so many different faces. Similarly, different odors may display different configurations of electrical fields in the olfactory system [22] [23]. This recalls that small differences in the initial condition (within-category differences) would be amplified in a chaotic system. The variation could be grasped by pattern-based multivariate analyses if the resolution of the neuroimaging tool is high enough. The explanatory power of complexity theory has been applied to learning, memory, motion detection and so on [88] [89]. Across several brain regions, and hence the confluence of several sub-spaces of the virtual brain space, complexity theory provides a natural platform to assess multi-modal integration and contextual effects, and to endorse a single brain region participating in many different tasks [21] [22] [23] [75]. It is well established that prolonged visual stimuli may only generate transient cortical responses and are subject to adaptation [90]. It thus seems hard to explain the sustained visual awareness of the background/environment that is out of the here-and-now focus/attention. Lingering in certain constructed attractors in the virtual brain space may provide a feasible account of this stable background existence in our visual world. Studying the topology of neuropsychiatric conditions in the virtual brain space shall be informative since it provides a global dynamic pattern that is more relevant to (abnormal) mental phenomena. The fronto-limbic dysregulation of MDD, as shown in Figure 3, may manifest as inflated limbic and shrunken frontal attractors in the virtual brain space (after proper transformation), which is hard to capture by measured regional neural features (locationist account).

Although the transformation of brain dynamics to the virtual brain space is theoretical for now, there are several constraints worthy of consideration in building up a computational simulation: 1) be sensitive to the initial condition (can be triggered by tiny differences in sparse codes, i.e., code driving complexity; see Section 8); 2) may accommodate within-class diversity by the topology of trajectories (attractors; like recognizing different faces or insects); 3) parameters can be fine-tuned by exposure to stimuli; and 4) in the resting state the itinerancy may repeat the patterns learnt before (replica of EA in OA, Section 6.3).

4.2. Chaos across Different Scales and Modalities

Power law is frequently used as a probe to suggest chaos or a complex system. Fractal behaviors exist in neuronal signaling, neural spike trains, and LFPs. It is tempting to examine whether the property transcends to networks at coarser scales. Freeman et al. constructed the KIII model of the olfactory system and discovered that, among the parameter space where four different kinds of attractors are possible, i.e., fixed point, limit cycle, quasi-periodic and chaotic, only the chaotic solution reproduced the observed features of action potentials and EEG in the olfactory bulb [91]. The well-known 1/f EEG spectrum is also an instance of power-law distribution. Kitzbichler et al. applied the Hilbert transformation to resting brain signals (fMRI and MEG) and then calculated local and global synchronization indices across different scales [92]. The authors discovered that both the distributions of inter-regional phase-locking intervals and the global synchronization index conformed to power-law scaling, regardless of imaging modality. Scale-invariance of functional connectivity was also observed in EEG, with the power-law exponents of global synchronization differing between spectra, and the lower frequencies exhibiting steeper slopes [93]. The probability distributions of EEG power across wide-ranging spectra and of the dwell time of different states were skewed with a right-hand tail, as also noticed in complex systems [74] [76]. Although 1/f scaling is not applicable to neuro-electric activities at spectral peaks, the amplitude modulation of alpha oscillation and the auto-correlation of alpha, mu, and beta frequencies still obeyed power law [66]. The abundance of self-similarity in the central nervous system may underlie the observed scaling laws in cognition, such as in perception, action, memory, and linguistics [94].
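A 1/f spectrum and its power-law exponent can be illustrated with synthetic data. The sketch below (an illustration, not a re-analysis of any cited recording) shapes random Fourier phases to a 1/f^beta amplitude profile and then recovers beta from the slope of log power versus log frequency, the same regression typically applied to EEG spectra:

```python
import numpy as np

def pink_noise(n, beta=1.0, seed=0):
    """Synthesize a signal with a 1/f**beta power spectrum by shaping
    the amplitudes of randomly phased Fourier components."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-beta / 2)      # power = amplitude squared
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    return np.fft.irfft(amps * np.exp(1j * phases), n)

def spectral_exponent(signal):
    """Exponent beta from the slope of log power vs log frequency."""
    f = np.fft.rfftfreq(len(signal), d=1.0)[1:]
    power = np.abs(np.fft.rfft(signal))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(power), 1)
    return -slope

x = pink_noise(2 ** 14, beta=1.0)
beta_hat = spectral_exponent(x)   # close to the generating exponent 1
```

The recovered exponent matches the generating one; applied to empirical signals, the same fit quantifies how steeply power decays with frequency, the steeper slopes at lower frequencies noted above corresponding to larger exponents.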

From the aspect of neuroanatomy, dendritic trees bear fractal structure [95] [96]. It was reported that the hippocampal CA3 network exhibits scale-free topology in which the distribution of output links per neuron decays as a power law [97]. Using wide-sense self-similarity as an indicator (retaining exponential functional form across different scales), fractal geometry was revealed in segmented gray matter, with a dimension around 2.80 [98]. Accordingly, part of the emergent chaotic property of neuro-activities could originate from the underlying neuro-architecture. The complexity features of the brain thus have various origins, and so does the power-law distribution. From the above discussion, complexity/chaos may originate from interacting networks, criticality/SOC (Section 2), EIB (Section 1.2, Section 3) and even from the fractal geometry of the underlying neuro-anatomical architecture.

4.3. EIB and Complexity/Chaos, and Criticality

Exquisite theoretical research on neural models has shown that networks with recurrent structure and the constraint of EIB can exhibit chaotic behavior and long-tail power-law distribution [53] [54] [55] [56] [57]. As noted in Section 1.2, an EIB chaotic network may track mild perturbations, is robust to mild changes in parameters, and meanwhile may demonstrate emergent linear input-output behavior [56]. An EIB network may thus accommodate the observation of both chaos and regularity. Another appealing feature of the EIB network is rapid adaptation, reacting faster than the time constant of an individual unit. An EIB network may switch swiftly between different states [53]. This seems counterintuitive, but the underlying concept has been introduced in the discussion of complexity (collective vs. individual); it is analogous to the fact that neural tissue may oscillate faster than the spiking frequency of an individual neuron, see Figure 2.
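A minimal sketch of such a balanced chaotic network is the classic random recurrent rate model in the spirit of Sompolinsky, Crisanti and Sommers, though not identical to the models of the cited studies: couplings are zero-mean Gaussian, so excitation and inhibition cancel on average, and raising the gain g above unity switches the dynamics from quiescence to sustained chaotic fluctuations:

```python
import numpy as np

def rate_network_trace(g, n=200, t_steps=4000, dt=0.05, seed=2):
    """Random recurrent rate network, dx/dt = -x + J @ tanh(x), with
    zero-mean Gaussian couplings of standard deviation g/sqrt(n), so
    excitation and inhibition cancel on average (a caricature of EIB)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(n), (n, n))
    x = rng.normal(0.0, 0.1, n)
    trace = np.empty(t_steps)
    for t in range(t_steps):
        x = x + dt * (-x + J @ np.tanh(x))   # forward-Euler step
        trace[t] = x[0]                      # record one unit's activity
    return trace

quiet = rate_network_trace(g=0.5)    # below the chaotic transition: decay
chaotic = rate_network_trace(g=1.5)  # above it: sustained fluctuations
```

The fluctuations at g = 1.5 remain bounded yet aperiodic, a toy version of the claim that EIB networks can host chaos while staying robust to mild parameter changes.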

The brain is different from other chaotic/complex systems because of the constraint of EIB, which may be further self-organized at criticality [1] [2] [74]. Although it is largely unknown how criticality is achieved, EIB, chaos and criticality have each obtained empirical as well as theoretical support. We surmise that EIB is wired at criticality, which enables the trajectory to be very flexible in the virtual brain space. Under the critical condition and in response to exteroceptive/interoceptive stimulation, the neuro-dynamics in the virtual brain space shall engage in and depart from an attractor easily (transition between different phases), which corresponds to the execution and withdrawal of various psychological representations or functions. The flexibility is endorsed by the sensitivity of a chaotic system to slight differences in the initial condition. We would like to emphasize that power law, complexity, chaos, and inferred criticality are phenomena, and EIB may be the mechanism.

It is noteworthy that the brain has an outstanding character that most complex systems do not capture, that is, top-down modulation. A distinguishing emblem of a complex system is the collective, emergent global pattern arising out of locally interacting components. The collective dynamics of the brain, on the one hand, is constructed from its constituents but, on the other hand, may conversely modulate the behavior of the elements at lower hierarchy, even down to the most fundamental processing unit, the neuron. Well-established instantaneous top-down modulation includes LTP on spikes, up- and down-states on cortical excitability, global oscillatory patterns (awake, aroused, relaxed and sleep stages) on neuronal dynamics, and attention (large-scale network) on neuronal/neural responses to perception, and so on. This kind of interaction is in contrast to the inter-regional cross-talk or the phase-amplitude relationship between different spectra at roughly the same hierarchy [99] [100]. Namely, brain interaction is not only within but also across hierarchies. The bi-directional interaction, also substantiated by EIB, may be a distinctive hallmark of the brain as a unique category of complex system. The availability of top-down modulation may provide dynamic context to synchronize its constituents and may underlie real-time psychological function, self, and consciousness. The intriguing bi-directional complexity could be the neural foundation for an organism to be a "unity" [101].

4.4. Noise and Complexity/Chaos

Spontaneous organized activity of OA projected to the virtual brain space may represent the possible itineraries of the complex system as discussed above, while noise of OA may shape the realization of the trajectory. For example, to hop to other possible attractors, noise may detour the route toward an unstable direction normal to the path hanging onto the extant attractor [102]. Introducing noise into the KIII model of the olfactory bulb would not induce or suppress attractors but may stabilize the aperiodic orbits, where stability was defined by the centroid and standard deviation (less than two) of the spatial distribution from sequential samplings [91]. With the modeling of dendritic noise in a network model, the transition between attractors could become brisker; the data points distributed less along the transitory route and more around the attractors [86]. To sum up, noise may play opposite roles in engaging in and disengaging from attractors. When the noise direction points away from or toward the center of an attractor, the trajectory to the attractor is respectively destabilized or stabilized. Although noise itself may not be enough to induce chaos, synaptic noise has been suggested to be able to tune the degree of complexity of neural activities, i.e., shuttling between chaos and regularity [102] [103]. From the complexity perspective, adaptive (physiological) noise shall help the brain locate proper attractors to fit the survival demands of an organism.

Since the properties of complexity/chaos and statistical randomness are both described in the brain, the apparent randomness may be the product of the two sources. Some researchers have argued that true randomness does not exist in the macroscopic world and that a stochastic model is a convenient low-dimensional approximation of high-dimensional chaos [103]. Incorporating stochastic randomness is only a simplified modeling strategy to handle the unexplained component in measurement, which is then called "noise". Obviously, part of the noise belongs to the recording machine and is hence non-physiological. Since each neuron connects with thousands of other neurons, it has also been suggested that the large number of signals and massive interactions in the neural system may have averaged/canceled out truly random noise (if it exists) [50] [104]. In addition, the stochastic firing pattern of cortical neurons may result from synchronous driving input, indicating hidden structure embedded in the irregularities [5] [105]. The above arguments justify the inclusion of physiological noise in OA and agree with previous modeling research taking "noise" as a driving force in neural networks. The phenomenon of stochastic resonance (Section 6.1) thus may originate from physiologically meaningful underpinnings even though the stochastic component is modeled as randomness. The duality of chaos and regularity in OA has been discussed in previous sections and will be further elaborated in Section 7.

5. Information Transfer at EIB

Information can be defined in various ways. Among the definitions, the information theory (or communication theory) developed by Shannon is canonical and has motivated substantial progress in many disciplines, such as thermal physics, statistics, engineering, evolution, biology, computational neuroscience, and coding and data analysis [106]. One of the key elements of information theory is entropy, a quantity of uncertainty based on the distribution of selected random variables. Shannon's information theory may be of limited application in other fields [107]; nevertheless, it is still versatile nowadays. Freeman made an interesting contrast between artificial and biological networks: what is processed in the former is bits, symbols, and information, whereas what is processed in the latter is flows, patterns and meaning [108]. This contrast is insightful and is reminiscent of the distinctions of "psychological primitives—neural features" and "mental space—virtual brain space". Freeman claimed that "artificial neural network can be built to manipulate symbols in codes that convey information in the sense defined by Shannon and Weaver, who divorced information from meaning, biological neural network offers dynamics of meaning...". The content discussed in this section mainly resorts to entropy as a quantifiable surrogate of information amount.
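As a concrete reminder of the quantity used throughout this section, Shannon entropy in bits can be computed as follows; a uniform distribution is maximally uncertain, while a peaked one carries little uncertainty:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                  # the 0*log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

uniform = shannon_entropy([0.25, 0.25, 0.25, 0.25])   # 2 bits
peaked = shannon_entropy([0.97, 0.01, 0.01, 0.01])    # far less than 2 bits
```

Entropy of neural responses, estimated from binned activity patterns, is exactly the surrogate of information capacity invoked in the experiments reviewed below.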

5.1. Dynamic Range of Neural Activity at EIB-SOC

Experimental work has demonstrated that at EIB-SOC the dynamic range of input processing is maximized. Shew et al. studied organotypic cultures of rat somatosensory cortex on microelectrode arrays [109]. The authors adjusted the excitatory and inhibitory balance by applying antagonists of fast glutamatergic (AP5/DNQX) and GABAergic (PTX) synaptic transmission. For both spontaneous and stimulus-evoked activities, the former and the latter would respectively reduce and increase the amplitude of LFPs. In either case, the slope of the power law deviated from the condition without drug administration. It is interesting to note that the range of stimuli to which the system responded significantly shrank with either AP5/DNQX or PTX. Their findings implied that at the original equilibrium state, the dynamic range of the cortical network was optimized. Under the exposure of AP5/DNQX, but not PTX, the probability of cluster size still seemed to conform to power law (Figure 2A, p.15597) [109], which echoed the caveat that power law alone may not guarantee criticality. Nevertheless, it is reasonable to infer that disruption of EIB also compromised criticality, given that neural EIB is believed to be situated at SOC.

In collaboration with Shew, Larremore et al. adopted a modified version of the Kinouchi-Copelli model to explore the issue of criticality and dynamic range in complex/heterogeneous networks constituted by connected, excitable nodes [110] [111]. As in Shew et al.'s previous study, stimuli of different intensities were introduced into the computational simulation, and the range over which the stimuli produced distinguishable responses was delineated. The authors demonstrated that when the response to perturbation revealed drastic change (criticality), the dynamic range of the network was maximized. A similar conclusion was also reached by Kinouchi and Copelli, who proposed that the sensitivity to external stimuli at criticality may underlie the amazing human sensory capability to encode external information spanning several orders of magnitude [111].
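The maximization of dynamic range at criticality can be reproduced even at the mean-field level. The sketch below uses a simplified two-state excitable-network approximation (our own simplification for illustration, not the exact Kinouchi-Copelli or Larremore et al. equations): quiescent units are activated either externally (probability h) or by active neighbours (branching ratio sigma), and the dynamic range Delta = 10*log10(h90/h10) is read off the steady-state response curve:

```python
import numpy as np
from math import exp, log10

def steady_response(h, sigma, n_iter=2000):
    """Mean-field steady state of a two-state excitable network: a
    quiescent unit becomes active via external input (probability h)
    or via active neighbours (branching ratio sigma); active units
    return to quiescence at the next time step."""
    rho = 0.5
    for _ in range(n_iter):
        rho = (1 - rho) * (1 - (1 - h) * exp(-sigma * rho))
    return rho

def dynamic_range(sigma):
    """Delta = 10*log10(h90/h10) from the steady-state response curve."""
    hs = np.logspace(-6, 0, 200)
    f = np.array([steady_response(h, sigma) for h in hs])
    f0, fmax = steady_response(0.0, sigma), f[-1]
    f10 = f0 + 0.1 * (fmax - f0)
    f90 = f0 + 0.9 * (fmax - f0)
    h10, h90 = np.interp([f10, f90], f, hs)
    return 10 * log10(h90 / h10)

deltas = {sigma: dynamic_range(sigma) for sigma in (0.5, 1.0, 1.5)}
```

Running this yields the largest Delta at sigma = 1, with smaller values in both the sub-critical regime (linear, quickly saturating response) and the super-critical regime (self-sustained activity masking weak stimuli).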

In addition to the stimulus-response profile, it is also possible to define dynamic range by the number of meta-stable states, which can be quantified as the cluster number of spatio-temporal patterns. Haldeman and Beggs found that for a large network, and across different degrees of connection per neural node, the number of meta-stable states was maximized at a branching parameter equal to one (EIB) [112]. Deco and Jirsa adjusted the inter-regional coupling strength of their network model until the network began to disclose a drastic "phase transition" [2], and chose that state as criticality. The authors found that within a particular "range" of global coupling strengths, the system possessed multiple attractors and higher entropy values. The property of a "range", instead of a particular value, of the parameter is welcome since it is conceptually like SOC, where the system behaviors are robust to mild parameter changes. Furthermore, the simulated neural dynamics were transformed to BOLD signals via the Balloon-Windkessel model [113], and around the critical point the similarity between the functional connectivity maps generated by theoretical and empirical approaches reached its optimum.

5.2. Maximization of Information Transfer at EIB-SOC

Evidence suggests that not only the dynamic range of information processing but also the amount of information transfer is maximized at EIB-SOC. Extending previous research of pharmacological intervention (Section 5.1) on neural tissue cultures, Shew et al. applied information theory to investigate the capacity of information transmission while EIB was disturbed [114]. The spontaneous activities were measured by a microelectrode array, which showed maximal information capacity at the original equilibrium state; the administration of either AP5/DNQX or PTX would decrease the entropy value. The optimality of entropy was present across several different bin sizes and recording durations. In addition to the resting state, the authors also studied the evoked responses of neural tissues. One electrode was selected to deliver electrical shocks, and the averaged pair-wise mutual information was calculated as an index of information transmission. Here, the pair-wise mutual information can be viewed as the degree of similarity of the evoked neural responses between two recording sites. Again, optimized information transmission was reached at the original EIB.
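Pair-wise mutual information of this kind can be illustrated with a toy joint distribution over two binary "recording sites" (a generic illustration, not the authors' analysis pipeline); the identity I(X;Y) = sum over x,y of p(x,y) log2[p(x,y)/(p(x)p(y))] gives one bit for perfectly correlated responses and zero bits for independent ones:

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of site X
    py = joint.sum(axis=0, keepdims=True)   # marginal of site Y
    nz = joint > 0                          # skip the 0*log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

# Perfectly correlated binary responses at two sites share 1 bit:
dependent = mutual_information([[0.5, 0.0], [0.0, 0.5]])
# Statistically independent responses share no information:
independent = mutual_information([[0.25, 0.25], [0.25, 0.25]])
```

In this sense mutual information quantifies how reliably the evoked response at one site predicts the response at another, which is why it serves as an index of transmission between recording sites.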

In the classical paper on neuronal avalanches, Beggs and Plenz designed a multi-layered feed-forward network to explore the impact of the branching parameter on information transmission, i.e., the mutual information between input and output [64]. The computational simulation results implied that when the branching parameter equaled one, indicating EIB, information transmission was maximized. Reduction in the branching parameter would enhance the stability of the network but at the cost of sub-optimal information processing. Regarding the optimized information transfer at EIB, the results of computational and experimental approaches thus converge. It is noteworthy that equilibrium does not mean stability; by contrast, equilibrium frequently indicates meta-stability. Conceptually, stability can be quantified by the amount of energy that is required to perturb a system out of an attractor. EIB is organized at criticality, where a certain equipoise between stability and instability is reached so that the fidelity of information is retained but flexible trajectories/states/attractors are also allowed to engage in the forthcoming signal processing.

The paramount phenomenon reviewed in this section has been examined in various input-driven adaptive models, such as cellular automata, Boolean networks under selection pressure, recurrent neural networks (with real-time computation) and spiking neural microcircuits [115] [116] [117] [118] [119]. It is not an exception but seems quite common that meta-stability can be beneficial for a system. Bak and Chialvo constructed an adaptive network with its connections pruned based on simple rules: trafficking through strong connections and reducing connection strength in case of error [120]. The authors noticed that although the dynamics of the network were minimally stable, flexibility, adaptability and learning/unlearning were nicely balanced, so that the system could handle complicated non-linear tasks even when contaminated by noise. It has long been noticed that complex systems may emulate the functions of perception and memory [86]. At criticality, it was demonstrated that an adaptive network may also learn logical rules, even "exclusive OR", which was difficult to model in previous research [121].

5.3. Reduction of Variability and Enhancement of Fidelity at EIB

Through organized EIB at criticality, there may exist several meta-stable states. Based on the understanding that external stimuli may drive the neural dynamics to fix on a certain attractor, it is reasonable to expect that the variability of OA would shrink in response to external perturbation. That is exactly what recent empirical evidence has verified. A comprehensive study by Churchland et al., who examined twenty datasets of extra- and intra-cellular recordings, disclosed that the onset of stimuli consistently quenched neural variability [122]. The reduction of variability (indexed by the Fano factor) to external stimuli was such a general property of cortex that it prevailed over different modalities (membrane potentials or spikes), stimulus categories, brain regions and states (awake, behaving, or anesthetized). Biyu He, who investigated the interaction between OA and external stimuli (target detection) using fMRI, also found that the volume of activity space in the post-stimulus condition shrank compared with that in the pre-stimulus condition [123]. Churchland et al.'s conclusion is thus extrapolated to large-scale networks, applicable from microscopic to macroscopic levels.
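The Fano factor used in these studies is simply the variance-to-mean ratio of trial-by-trial spike counts, and variability quenching appears as a drop in that ratio after stimulus onset. A minimal sketch:

```python
from statistics import mean, pvariance

def fano_factor(spike_counts):
    """Variance-to-mean ratio of spike counts across trials.
    A Poisson process gives a value of 1; variability quenching
    after stimulus onset shows up as a decrease in this ratio."""
    return pvariance(spike_counts) / mean(spike_counts)
```

Comparing the statistic in pre- and post-stimulus windows over the same trials reproduces the analysis logic, if not the scale, of the cited recordings.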

Since brain dynamics can be regarded as the manifestation of neural networks, the observed variance decline implies that the behaviors of cortical circuits become more stable (or more consistent) when being driven, either locally or quasi-globally. There are several network types that can be stabilized by an input, among which the recurrent network is perhaps the most pertinent [124] [125]. Widespread presence of recurrent circuitry is noticed in the neocortex, and recurrent thalamo-cortical resonance has been proposed to be central to neural oscillations and to the mental functions of sensory integration, temporal binding, attention, sleep and consciousness [45] [126] [127] [128]. Deco and Hugues investigated the phenomenon of variance reduction in a recurrent neural network [129], again resorting to the integrate-and-fire model [2] [130]. The simulation replicated that external stimulation would stabilize the network at one specific attractor, resulting in a net decrease in neural variability. The authors analyzed the distribution of inter-spike intervals and concluded that the reduced variability arose from an increased regularity of the neural spike trains. A recent computational study also disclosed that under the condition of EIB, attractor networks may replicate the experimentally observed reduction of variability to external stimuli [55]. It is notable that fixation on one attractor is a simplified scenario to account for the phenomenon of reduced variance to external stimuli. External input itself may suppress chaos in a recurrent network and hence lead to variance reduction [131]. Another theoretical work based on the integrate-and-fire model noticed that EIB with a mild delay in suppression may also decrease the temporal variability of neural output to external stimuli [41].
The temporal lag of inhibitory inputs relative to excitatory inputs is supported by empirical studies of auditory and somatosensory neurons [40] [41], implying that through the organization of EIB the fidelity of signal transfer can be further enhanced. In summary, variance reduction to external stimuli is a common phenomenon of recurrent neural networks, while constraints such as criticality and EIB (or EIB-SOC) may contribute to it.

Although earlier reports had suggested that variable neural responses to stimuli can be explained by the linear summation of the deterministic evoked response and the OA [132] [133], convergent evidence implicates that there is actually an interaction between them. The studies discussed above have revealed how external stimuli (EA) may influence OA. It is well known that external stimuli may also alter OA's coherent spatio-temporal structure via other validated mechanisms, such as phase resetting and the consequent event-related synchronization or desynchronization [134]. Actually, baseline OA may also contain and modulate EA (see Section 6.3); in other words, their interaction is bi-directional. According to our definition, the interactive effect between OA and EA belongs to the fourth component of OA. The research cited above mainly focuses on neural dynamics within a restricted temporal range. With development from childhood to adulthood, increased baseline variability (in contrast to reduced variability to external stimuli) is associated with maturation of the brain and better behavioral performance, which will be discussed in more detail later (Section 7).

6. Function and Characteristics of OA

We decompose OA into spontaneous organized neural activity (e.g., oscillation and chaos in EEG/MEG and in fMRI; neuronal spikes and subthreshold membrane dynamics at the microscopic scale), physiologically beneficial randomness (physiological noise), non-physiological noise, and the interactive effects from exteroceptive and interoceptive stimuli. This section addresses the physiological function of OA, particularly physiological noise and spontaneous organized neural activity. It will become clear that there is no clear-cut boundary between regularity and complexity, noise and signal, or EA and OA.

6.1. Stochastic Resonance and Physiological Randomness

We distinguish biologically relevant noise from irrelevant noise. It seems counterintuitive that noise could be beneficial in terms of computation. In a non-linear threshold system, however, stochastic resonance (SR) happens (in its most primitive form) when a weak/subthreshold periodic signal that is normally undetectable can be boosted and detected by adding broadband noise to that signal; the boosted signal usually appears as a spectral peak at the signal's frequency against the noise-floor background, hence the term "resonance". The phenomenon of SR was first discovered in a study of climatic oscillation and then spread to many other disciplines, such as physics, chemistry, engineering, electronics, lasers, ecology, psychophysics, cell biology and neuronal physiology [135] [136] [137]. With theoretical and experimental extension, SR has been applied to describe any phenomenon in a non-linear system where the presence of noise is better for output signal quality than its absence, i.e., a noise benefit [137], see Figure 6.
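The most primitive form of SR can be reproduced with a simple threshold detector: a subthreshold sinusoid alone never crosses threshold, but adding moderate Gaussian noise yields crossings clustered around the signal peaks. A sketch with purely illustrative parameters:

```python
import math
import random

def detections(noise_sd, threshold=1.0, amp=0.6, n=20_000, seed=1):
    """Count threshold crossings of a subthreshold sinusoid (amp <
    threshold) corrupted by Gaussian noise of standard deviation
    noise_sd. With zero noise the signal is never detected."""
    rng = random.Random(seed)
    hits = 0
    for t in range(n):
        signal = amp * math.sin(2 * math.pi * t / 100)
        if signal + rng.gauss(0.0, noise_sd) > threshold:
            hits += 1
    return hits
```

Because crossings occur almost exclusively near the sinusoid's peaks, the detector's output acquires power at the signal frequency, which is the spectral signature sketched in Figure 6; too much noise, of course, eventually buries the signal again.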

In neurons and brain, the phenomenon of SR has gained support from two research lines. First, together with subthreshold stimuli, externally added noise may enhance sensory information processing and perception [136]. For these studies, however, the administered random fluctuation is not naturally occurring in the central nervous system but is part of the external input. The second branch of evidence comes from biomedical research, such as exploitation of SR in cochlear implants to improve hearing [138]. Biological utility of SR, i.e., that neurons

Figure 6. Illustration of stochastic resonance. Plot (a) shows that input of weak signal into a system does not generate observable spectrum change. The right subplot is the spectral distribution, with arbitrary units of logarithmic power (ordinate) and frequency (abscissa); Plot (b) shows that input of signal plus noise makes weak signal detectable, as a spectral peak.

make use of internally generated physiological noise to enhance information transfer, is thus only indirectly inferred. Nevertheless, we believe that the brain has evolved to utilize some random noise for proper functioning. As to the source of noise in the nervous system, Faisal et al. have an excellent review spanning from molecular to macroscopic and from sensory to motor levels, which further links neural noise to behavioral variability [50].

Previous models of SR derived from other disciplines may lack biological appropriateness. The manifestation, properties, and function of noise in the neural system differ from traditional SR in the physics and engineering fields. To reconcile theoretical and experimental neuroscience, McDonnell advocates using the term "stochastic facilitation" to replace SR in biological research [29]. It is interesting to note that noise-induced enhancement of signal processing depends upon the fact that the parameters of the non-linear model are "sub-optimal" [139]. Engineers usually improve the performance of a model by updating the parameters rather than adding white noise. A natural ensuing question is why evolutionary forces have not shaped the brain to adopt the best neural parameters, as engineers do. A likely answer is that universal optimization is never achievable, since the challenges organisms face every day are extremely diverse and varied, and may keep changing with time and the life cycle. Fixed, "best" model parameters in some conditions may endanger the organisms in other situations. The advantage of SR in the brain may thus represent a compromise between adaptation and flexibility: not an optimal model for every scenario, but still the best strategy for survival. Although contradictory to traditional engineering dogma, it is increasingly acknowledged that noise and meta-stability may benefit an adaptive system.

6.2. Synaptic Noise May Facilitate Signal Transfer

Empirical evidence has pointed out that background synaptic activity may shape the probability and variability of responses to stimuli [140] [141] [142]. For cortical neurons, the background "noise", i.e., the fluctuation in membrane conductance and membrane potential, may result from the constant bombardment of synaptic potentials (a class of biological origin of SR). In response to spontaneous synaptic inputs, neurons in vivo may produce 15 mV membrane depolarization, 10 mV voltage fluctuations, an 80 percent decrease in resistance, and discharges at 2 - 10 Hz, much more active than neurons prepared in vitro, which lack abundant inter-neuronal connections [141] [143] [144] [145]. Other sources of local noise for a single neuron may include random leakage of electric current from neighboring neurons and quantal emissions of synaptic vesicles. It is well known that the undulation of synaptic excitability (not reaching firing threshold), the second kind of OA, has a profound influence on the integrative and electrophysiological properties of neurons [144] [146] [147] [148]. With a comprehensive setting of parameters, Fellous et al. adopted the dynamic clamp technique, the point-conductance model, and injection of calculated electrical currents into the neuronal soma in rat brain slices to simulate background synaptic activity [141] [149] [150]. The authors discovered that the power of neurons to detect transient current pulses was greatly enhanced in the presence of spontaneous background activity. A similar conclusion was also reported by another independent research group: the capability to detect small stimuli was dramatically increased by spontaneous fluctuations of membrane potential and conductance [140]. Shu et al. differentiated "up" and "down" states of neurons by conductance, spiking rate, and the degree of depolarization [140].
“Up” states can be simulated by a certain depolarization plus added Gaussian noise, and the noise may increase the spike response to applied small currents, similar to SR. Faure et al. demonstrated that what had been called “synaptic noise” indeed contained periodic components which reflected the behaviors of pre-synaptic interneurons and enhanced the transmission efficacy of oscillatory temporal patterns [151]. Again, taking neuronal spikes in vivo as stochastic or random does not mean that neuron firing is governed by no rule; it is a convenient way to accommodate the complicated structure of pre-synaptic bombardment. Although it is debatable whether the mechanism of improving signal processing differs from the SR used in the engineering field, these exquisite studies nevertheless provide important insight that synaptic background activity (the second type of OA) facilitates signal transmission.
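The point-conductance model used in such dynamic-clamp studies represents background synaptic input as Ornstein-Uhlenbeck (colored-noise) conductance fluctuations. Below is a simplified single-channel sketch; the constants are loosely in the range of published excitatory conductances (microsiemens, milliseconds) and should be treated as assumptions:

```python
import math
import random

def ou_conductance(g_mean=0.012, g_sd=0.003, tau=2.7, dt=0.1,
                   steps=50_000, seed=2):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process:
    the conductance relaxes toward g_mean with time constant tau
    while being driven by Gaussian noise with stationary SD g_sd."""
    rng = random.Random(seed)
    g = g_mean
    coeff = g_sd * math.sqrt(2 * dt / tau)
    trace = []
    for _ in range(steps):
        g += (g_mean - g) * dt / tau + coeff * rng.gauss(0.0, 1.0)
        trace.append(g)
    return trace
```

Injected via dynamic clamp, two such processes (one excitatory, one inhibitory) recreate an in-vivo-like "high-conductance" state in a slice preparation, which is the experimental trick behind the detection-enhancement results described above.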

6.3. Exploration of Dynamic Repertoire

From the perspective of fractal geometry, OA may empower a system to transit between attractors, as already introduced in Section 4.4. This sub-section highlights some far-reaching empirical evidence disclosing the relevance of OA to EA. It is well acknowledged that the spontaneous organized activity of OA is not purely stochastic randomness but contains repeated and coherent spatio-temporal patterns. These reverberating motifs have been verified in vitro and in vivo by different research groups; they may correspond to information storage or implicit retrieval and may constitute the neuronal/neural codes [5] [6] [7]. The robust and stereotyped reactivation may occur with a precision of milliseconds [5].

At the large-scale level, based on the functional connectivity maps derived from resting fMRI or PET, there are modular structures compatible with major psychological functions [9] [11]. Since the resting dynamics is organized according to psycho-physiological domains, it is intuitive to assume that the intrinsic/spontaneous brain activities are not idling (or just at rest) but may be relevant to normal brain functioning. Tsodyks et al. combined optical imaging and single-unit recording to establish the relationship between population activities and single-neuron spikes to visual stimuli, and then to explore the relationship between evoked and spontaneous neural activities [26]. Their results implicated that spontaneous neural activities, at both the neuronal and population levels, actually resembled evoked neural activities. Further, the relationship between population and single-neuron dynamics observed in the evoked condition was still retained in the resting condition. Berkes et al. measured the visual cortex activities of awake ferrets exposed to natural scenes, artificial stimuli, or nothing, across successive postnatal ages [28]. The authors disclosed that with increasing age the Kullback–Leibler divergence between the activities evoked by natural scenes and the spontaneous activities (exposed to nothing) drastically decreased, and the similarity of their frequency distributions increased. Their results indicated that spontaneous cortical activities may arise from an internal model optimized to represent the environment, which improves with maturation. The above two elegant studies provide strong evidence that the neural features of evoked responses are actually embedded in the spontaneous/intrinsic brain activities (first class of OA). Luczak et al. explored the firing patterns of neuronal populations (tens of neurons) in rat auditory cortex over different conditions, i.e., tones, natural sounds, and the resting condition [27].
They discovered that the contours of each stimulus category (different tone frequencies and different natural sounds) were actually subsets within the contour of spontaneous neural events. In other words, auditory-evoked responses lay within the realm outlined by spontaneous activities. Replay of EA in OA during sleep is supposed to reflect the memory consolidation process, which is not only regional but also shows inter-areal coherence, such as between the visual cortex and hippocampus [152]. To sum up, spontaneous brain dynamics comprise the neural activations of specific evoked events, i.e., resting is similar to activation, and the past reappears in the present. The intimate relationship between resting and activation states is thus established across different observation levels, from neuronal [26], to neural population [27] [28], and to large-scale network [9] [10] [11], and across different states (from awake to sleep) [152]. The highly preserved patterns may result from the re-emergence of neuronal/neural codes that may be enabled by structured and intricate EIB (discussed in Section 7).
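The divergence measure used by Berkes et al. is the Kullback-Leibler divergence between discrete distributions of activity patterns. The quantity itself is a few lines; the sketch below shows only the definition (the empirical estimation pipeline over binarized multi-unit patterns is not reproduced):

```python
from math import log2

def kl_divergence(p, q):
    """D_KL(P || Q) in bits between two discrete distributions given
    as probability lists over the same support; it is zero exactly
    when the two distributions coincide."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Applied to evoked versus spontaneous pattern distributions, a value shrinking with age is what indicated that spontaneous activity comes to mirror the evoked repertoire.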

Given the abundant information carried by OA, it is reasonable to assume that its characteristics may have strong psychological, biological and clinical implications. That is exactly the case, and a few examples are listed below. It was found that baseline activities may play a substantial role in fluctuating conscious experience. Higher baseline activities in the medial thalamus and fronto-parietal region, and lower analogs in the default mode network, may facilitate the awareness of somatosensory stimulation [153]. The baseline undulation of the anterior cingulate area may also predict the perceived degree of pain. Similarly, evidence suggested that enhanced pre-stimulus intrinsic activities in the fusiform face area may bias the perception of Rubin's vase-face picture toward the face [154]. The spectral power and functional connectivity of spontaneous cortico-electrical dynamics may explain several fundamental mental capabilities, such as intelligence and target detection [155] [156]. Conversely, perceptual learning may modify the covariance structure of resting dynamics [157]. The impact of intrinsic-state research on the field of psychology is brewing, and the bi-directional relationship is increasingly appreciated [101] [158].

7. Variability-Oriented Approach

The research team at Rotman Research Institute of Baycrest, Toronto has a series of influential work on the issue of brain signal variability. The material cited in this section mainly honors their contribution.

7.1. Variability of Central Nervous System

Both variance and entropy are frequently used to represent the degree of uncertainty and variability. The optimum of entropy does not guarantee the optimum of variance. Nevertheless, for a Gaussian distribution there is a monotonic relationship between variance and entropy, and thus they can be regarded as equivalent. Given the contribution of EIB-OA to information transfer as highlighted above (e.g., variability reduction in Section 5.3), it is reasonable to infer that the variability-oriented approach is informative in neuroscience; it has, however, long been under-appreciated. A major proportion of previous neuroimaging studies have resorted to mean-based measures (e.g., GLM) instead of variance-based (or standard-deviation-based) counterparts. The neurobiological rationale behind the variability-oriented approach has been described in Section 5.3.
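The Gaussian case can be stated precisely: the differential entropy of a Gaussian is 0.5 * log2(2 * pi * e * variance), a strictly increasing function of the variance, so the two measures rank conditions identically. A one-line check:

```python
from math import e, log2, pi

def gaussian_entropy_bits(variance):
    """Differential entropy (bits) of a Gaussian random variable:
    H = 0.5 * log2(2 * pi * e * variance). Because H is monotonic in
    the variance, variance-based and entropy-based variability
    measures agree for Gaussian-distributed signals."""
    return 0.5 * log2(2 * pi * e * variance)
```

For non-Gaussian signals the equivalence breaks down, which is one reason entropy-based measures can carry information that the variance misses.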

With massive interaction as a core feature of the brain, multivariate brain data are not completely independent, a property also termed "spatial correlation". To tackle stringent multiple-comparison correction, GLM is often combined with other mathematical tools, such as random field theory [159] or Monte Carlo simulation [160], to provide reasonable statistical inferences. What about the variability-oriented approach? One possible solution is by way of partial least squares (PLS). First developed in the late 1960s by the econometrician and statistician Herman Ole Andreas Wold and largely expanded in the field of chemometrics, PLS is a data-driven tool that can explore the relationship between data in matrix form, and is thus multivariate in nature. Randy McIntosh is the pioneer who introduced PLS to the neuroimaging field [161].

Garrett et al. used PLS to examine the relationship between variability maps of fMRI signals and several biopsychological profiles, such as chronological age and performance on various psychological tasks [162] [163]. They compared the results of mean-based and standard-deviation-based analyses and disclosed that the variability-oriented approach not only showed different (partially overlapping) spatial patterns but may also provide higher predictive power than its mean-based analogue [162]. In detail, younger, faster, and more consistent performers exhibited higher brain variability across cognitive tasks of perceptual matching, attention cueing, and delayed match-to-sample [163]. These results indicate that brain variability is functional (not merely useless noise) and that increased variability in the central nervous system may underlie neural efficiency and may further reduce behavioral variability, concordant with its connection to entropy and echoing the psycho-physiological function of OA.

7.2. Variability in Behavior and Its Relevance to the Brain

From the perspective of development, behavioral performance improves in terms of accuracy, enhanced speed, and reduced trial-to-trial variability. McIntosh et al. found that variability in EEG signals increased with maturation [164], echoing the fMRI findings of Garrett et al. [162]. In addition, brain variability was negatively correlated with intra-subject variability in reaction time and positively correlated with accuracy. Further research replicated that variability not only increases with maturation but also shows region-specific associations with tasks [165]. During development, enhanced neural variability may indicate a broader repertoire of mental attractors or microstates. By contrast, aging is accompanied by increased intra-individual behavioral variability and progressive impairment in performance; a possible cause could be decreased connectivity and hence reduced neural variability. Accordingly, the relationship between age and behavioral variability across the lifespan can be characterized by a U-shaped function, while that between age and brain variability could be an inverted-U curve [166] [167]. Increased variability in performance has been observed in various mental disorders, such as dementia, traumatic brain injury, and attention deficit hyperactivity disorder [166]. A recent report supported that greater variability in the elderly brain is associated with better memory and fluid intelligence [168].

To examine the origin of the increased complexity of brain dynamics in development, Vakorin et al. used conditional entropy and mutual information to represent local and distributed variability, respectively [169]. The authors found that developmental change was accompanied by reduced local information processing and enhanced global information transfer, implying that inter-regional interaction and distributed networks may underlie the observed change in brain variability during maturation. There are two caveats worthy of mention here. First, not every aspect of neural variability implies information processing. For example, when considering latency variability in ERPs and reaction time, the inverse relationship between brain signal variability and behavioral variability no longer exists. Second, in conditions of increased neural noise such as neuropsychiatric disorders (at least in some regions) [166], the components of brain signal variability are not necessarily "physiological", and the variability-oriented approach might lead to equivocal conclusions: the patients may show lower physiological but higher non-physiological variability (the 2nd and 3rd kinds of OA), whereas healthy controls may exhibit the opposite trend.

8. Sparse Coding and Difference-Based Coding and EIB

Although criticality and complexity are the core concepts to account for the crucial characteristics of neural computation, they are not sufficient to describe how the neural system encodes and decodes information precisely. OA is not completely random; instead, it contains spontaneous activities showing stereotyped spatio-temporal patterns (note: chaos may also show "varied" but conspicuous spatio-temporal patterns). These stereotyped motifs are possible forms of neuronal/neural codes [5] [6] [7]. For the neural system, encoding and decoding happen in a real-time manner. Some statistics, such as moments, kurtosis, spectra and so on, are not ideal candidates for neural codes because these quantities demand a certain sample size to calculate and decode accurately, and are thus not instant enough. Neuronal codes, innately, take the form of brief spike trains (transients) as observed in intra-cellular or extra-cellular recordings, in vitro and in vivo. This section discusses one of the patterned manifestations of OA, i.e., the neuronal/neural code, and this stereotyped OA is enabled by the underpinnings of structured EIB. Among the postulated classes of neuronal/neural coding, sparse and difference-based codings, in opposition to the tedious stimulus-based analog, are the most essential [158].

8.1. Sparse Coding

How does the brain read (decode) the encoded neural codes to generate a percept? It seems reasonable to assume that the neural codes must carry the same amount of information as the given percept. In accord, some research applied non-linear filters to neuronal spikes to reconstruct waveforms similar to the physical features of the external stimuli [170]. Another coding possibility is through a Bayesian probabilistic model (i.e., posterior probability) that may require fewer "bits" of input to make inference and is thus more efficient, since the perceptual system has been tuned to naturalistic stimuli through evolution and development (i.e., prior) [28] [171]. This explains why human cognition may conjecture the whole from the part and may make decisions under sub-optimal conditions or uncertainty [172]. However, recent evidence has suggested sparse coding. On one hand, given the limited metabolic budget, the permissible level of traffic is quite low in the brain, not allowing massive amounts of fully loaded information flow to travel [173] [174]. On the other hand, single-unit recording reveals that the structure of neuronal transients can be much more frugal than previously thought [175] [176]. In auditory cortex, neurons may even show binary responsiveness regardless of the duration and frequency of tone pips [177]. Here, "binary" means either one or zero spikes per trial of auditory input (binary coding). Binary coding is the most basic form of sparse coding, equipped with low variability and high fidelity.
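Sparseness can be quantified; one common choice is the Treves-Rolls measure, which compares the squared mean rate to the mean squared rate. A minimal sketch (other sparseness indices exist; this one is chosen for its simplicity):

```python
def sparseness(rates):
    """Treves-Rolls sparseness of a population rate vector: 1.0 for a
    fully dense code (all units equally active), dropping to 1/n when
    a single unit out of n carries all the activity."""
    n = len(rates)
    mean_sq = (sum(rates) / n) ** 2    # square of the mean rate
    sq_mean = sum(r * r for r in rates) / n  # mean of squared rates
    return mean_sq / sq_mean
```

A binary code in which one trial elicits at most one spike per neuron sits near the sparse extreme of this index.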

Strong evidence of sparse coding comes from vision research, even in the early stages of the visual system [178]. It seems counter-intuitive that sparse codes may register the time-varying, vast flood of information about the environment. However, this coding strategy has gained support from theoretical, computational and experimental perspectives in different sensory modalities, the motor system, and higher cognitive functions (e.g., associative memory and the hippocampus) [179] [180]. Sparse coding seems to be a general coding strategy in the brain, with the degree of sparseness increasing from lower to higher processing areas. It is interesting to note that shorter coding sequences may combine to form longer repetitive motifs (cortical songs), whereas the subsets of a particular spatio-temporal pattern may organize themselves to form other patterns [5]. The re-combination of spiking sequences may have to do with the coding of binding, interaction, or context effects, and importantly, may greatly enrich the manifestation of neuronal/neural codes, see Figure 7.

For sparse coding, the pros are obviously its efficiency, and the cons its limited capacity to represent the detail of the internal and external world. Is the sparse code inadequate to correspond to the varied and complicated reality? The answer could be no. Resting/intrinsic/spontaneous brain dynamics, so copious, profuse, and affluent, are proposed to be the neural baseline with which neural codes interact. Together with the complexity theory of the brain, especially the attractors embedded in resting dynamics [26] [27] [28], sparse coding may just initiate the neural trajectory to fix on one attractor to reach a psychological certainty (it may also shape the original complexity structure to some extent) [131], not necessarily having to carry every detail given. We name this the "code driving complexity hypothesis" [181]. Since chaotic dynamics is sensitive to initial conditions, the neuronal/neural code differs from a random perturbation in that it may guide the trajectory toward a pertinent attractor. In this regard, the neural code can be viewed as a facilitator, messenger, or stabilizer, which is supported by theoretical work on interconnected oscillators in which an external cue may stabilize and create an "attentive" state [88]. The above scenario is endorsed indirectly by the phase-reset model of ERPs, and by the event-related synchronization and desynchronization phenomena in the cortex [134]. The research by Tsodyks et al., detailed in Section 6.3, also verifies that neuronal codes have a close relationship with population dynamics [26]. The intrinsic brain activities (OA) themselves may enable consciousness, and the configuration and focus of the attractor may determine what pops up in our experience. The potential role of complexity theory in consciousness was inferred from the fractal dimensions of neural correlates in various mental states, which increase with wakefulness [182].

Up until this point, regularity and complexity are integrated by the "code driving complexity hypothesis", while other schemes to bridge the two are surely possible

Figure 7. Neuronal codes embedded in spikes. Plot (a) shows simulated neuronal spikes with varying amplitude. Plots (b), (c) and (d) disclose the repeated regular spike patterns (codes) that can be deciphered from the original semi-random spiking trace.

(such as STDP). Sections 5 to 7 touch upon the intimate relationship between chaos/stochasticity and regularity (information, entropy). Again, the underlying mechanism is EIB-OA, which serves as a common ground for the versatile dynamics to occur. Unlike the Bayesian brain theory, optimality is not coerced in sparse coding and complexity theory, which may conceptually accommodate wider bio-psycho-social variables and situations, such as making errors, the framing effect, behaving according to sub-optimal choices, etc. [183] [184]. A previous simulation study of a chaotic system revealed that if the learning rule has been established, incomplete external stimuli can still be recognized [89]; the conclusion was drawn by sending incomplete input to the network and obtaining output similar to the case of complete external stimuli. EIB-OA thus also provides a potential alternative account for trace-elicited memory (a case of incomplete input).

8.2. Difference-Based Coding

The evidence for difference-based coding as a general coding principle stems from reward research. Midbrain dopamine neurons encode the difference between anticipation and reward, which is further modulated by their temporal interval [185]. Difference-based coding applies not only in the temporal but also in the spatial domain [186]. The neuronal membrane has been modeled as an “integral operator” that may integrate various pre-synaptic bombardments and local biochemical events [1] [3] [130]. It seems that there also exists a “differential operator” in a neuron, which registers the differences between “now” and “expected”, and between “here” and “neighbors”, and then converts these differences into neuronal/neural signals.

What is the underlying EIB mechanism for the differential operator? Previous research on the temporal precision between excitatory and inhibitory inputs provides a vital clue [40] [41]. The timely quenching of excitation by inhibition within several milliseconds behaves just like a differential operator. Inhibitory interneurons are local, while excitatory inputs can be local or distant. When the inhibitory inputs carry information about the local state, such as anticipation, and the distant excitatory inputs register an external perturbation, such as an actual reward, the consequent neuronal spike would issue the difference between them, which is exactly what difference-based coding refers to and what empirical data suggest. Similarly, when the inhibitory inputs carry movement information from nearby neurons (local) and the distant excitatory inputs denote the motoric goal, the resultant neuronal activity would guide the neuron to be coherent with the global aim, again via the differential operator [186]. It is imperative to note that the purpose of wired EIB is not limited to balancing excitation with inhibition or attenuating excitation by inhibition. Through the dynamic balancing of excitatory and inhibitory inputs across spatio-temporal domains, information is processed and difference-based coding is fulfilled.
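The differential-operator account can be sketched in a few lines (a deliberately simplified assumption, not a biophysical model): if inhibition arrives as a delayed copy of excitation, with the lag standing in for the few-millisecond quenching window, the net drive reports how the input has changed rather than its absolute level, and is quenched once inhibition catches up.

```python
def net_drive(excitation, lag=4):
    """Net drive of a toy unit whose inhibition is a copy of its excitation
    delayed by `lag` steps (standing in for the ~4 msec E-I lag)."""
    drive = []
    for t, e in enumerate(excitation):
        i = excitation[t - lag] if t >= lag else 0.0  # delayed inhibitory copy
        drive.append(e - i)
    return drive

# A step input: excitation jumps from 0 to 1 and stays there.
step = [0.0] * 10 + [1.0] * 20
out = net_drive(step)

# The net drive is transient: nonzero only around the change (t = 10..13),
# then zero once inhibition catches up with the sustained excitation.
print(out[8:18])  # -> [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

A sustained input thus produces only a brief response at its onset, which is the sense in which the E-I pair behaves as a differential operator.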

It is interesting that the temporal scales of EIB and the neuronal avalanche are both situated at the several-millisecond level; a value of 4 msec was proposed by independent research groups [7] [41]. The coincidence is not fortuitous, because the temporal lag between excitation and inhibition is expected to be the “life span” of a pattern in a neuronal avalanche. It has been suggested that 25 - 50 msec is a unit of psychological time [86] [187], whereas molecular events, neuronal codes and patterns in neuronal avalanches seem too brief to match psychological functioning, suggesting that the collective dynamics in virtual brain space (attractor or itinerary) is a better candidate to correspond to psychology in mental space [188]. Figure 8 summarizes the contents of EIB-OA covered in the previous sections.

8.3. Luxurious Energy Expenditure? Coding, Chaos, Efficiency, and Economy

The adult human brain weighs around 2.50 to 3.25 pounds (about 2 percent of body weight), yet it consumes 20 percent of the body’s oxygen and 25 percent of its glucose utilization [189]. The brain maintains a high metabolic level across varying mental activities [190]; the level is relatively constant in resting wakefulness and is reduced by only 15 percent in sleep [191] [192] [193]. Why does the human brain demand so much energy even at rest?

In Section 6.3, it was pointed out that the resting state may actually contain a replica of EA. Further, the brain has evolved to present OA all the time. It is imperative to note that OA is not a privilege of vertebrates but also exists in the brains of arthropods and insects [194]. Aplysia ganglia may generate stable 10 Hz activity. OA in the central nervous system is a fundamental phenomenon of life. Keeping the active structure handy in virtual brain space is crucial for consciousness, self-function, adaptation, instant reaction, coping with challenges, and for consolidating learning through plasticity. A substantial portion of the brain energy spent on OA fuels the itinerary and repertoire in the virtual brain space. With its resonance with the organism’s internal state and inclination [126] [128]

Figure 8. A summary of EIB-OA and its characteristics.

[195] [196], OA (directed by coding) may help the organism engage the preferred or advantageous system route and, hence, response profile. Conversely, without OA the brain would become monotonous, and adaptation and flexibility would be limited.

Efficiency and economy are guiding principles of physiology. It is estimated that the human brain has roughly 10^12 neurons, 10^15 synaptic connections between them, 10^24 elementary molecules/second engaged in brain activity, and receives 10^8 - 10^12 bits/second of information from the environment [197]. Although the brain expends disproportionate energy relative to its weight, it has organized itself to be economic. Only around 15 - 20 watts are consumed by the human brain, whereas a system of comparable intricacy built with modern chip technology could require 10 million watts [45]. In other words, “luxurious” is superficial; the brain has already evolved to be stringent in energy expenditure while remaining efficient. Put another way, the brain must be efficient to be economic without sacrificing reliability, which is substantiated by several mechanisms, such as the thresholding machinery of neurons, the quenching of excitation within milliseconds, and (sparse) code driving complexity. Based on the organized EIB-OA, the brain may simultaneously take care of contradictory facets, e.g., efficiency and economy, on a limited budget in real time. The mechanisms by which the brain saves energy are depicted in Figure 9.
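The economy figures quoted above admit a quick back-of-envelope check, using only the order-of-magnitude numbers in the text (no new measurements): dividing the total power budget by the component counts shows how tiny the per-neuron and per-synapse allowances are, and how large the gap to silicon is.

```python
# Back-of-envelope arithmetic on the economy figures quoted in the text.
brain_power_w = 20            # upper end of the ~15-20 W budget
neurons = 1e12                # order-of-magnitude counts from [197]
synapses = 1e15

watts_per_neuron = brain_power_w / neurons
watts_per_synapse = brain_power_w / synapses

print(f"{watts_per_neuron:.0e} W per neuron")    # -> 2e-11 W per neuron
print(f"{watts_per_synapse:.0e} W per synapse")  # -> 2e-14 W per synapse

# Versus the ~10 million watt estimate for comparable silicon [45]:
print(f"silicon/brain power ratio: {1e7 / brain_power_w:.0e}")  # -> 5e+05
```

Tens of picowatts per neuron, and a roughly half-million-fold advantage over the cited silicon estimate, substantiate the claim that “luxurious” is only superficial.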

Figure 9. Upper right: brain dynamics is always active. Illustration of three mechanisms that the brain uses to save energy. (a) The neuron is designed as thresholding machinery; (b) EIB allows neurons to fire intensely within a brief time span (before the excitation is quenched by inhibition); upper: the firing of neurons; lower: red and blue respectively indicate excitatory and inhibitory inputs; (c) The design of (sparse) code-driven complexity may save energy expenditure; blue lines at the top and bottom indicate the amount of energy consumption, which is boosted at the source and target nodes but remains frugal while information travels between the nodes.

9. Conclusion

Balance between excitatory and inhibitory forces is an elementary building block of many, if not all, physiological phenomena, such as hormonal and autonomic regulation. In the brain, EIB is constructed at different levels of information processing, from microscopic to large-scale. Under the constraint of EIB at (self-organized) criticality, the OA of neuronal/neural tissue may engender the phenomena of complexity, power-law distribution, meta-stable equilibrium, multi-stable states, maximized dynamic range, optimized information transfer, difference-based and sparse coding, neuronal avalanches, efficiency together with economy, and reproducibility of evoked spatio-temporal motifs/patterns. As a complex system, the brain is unique in its intensive interacting features: regional, inter-areal, cross-spectrum, cross-hierarchy and, particularly, its top-down modulation. It is noteworthy that the chaotic component of OA may contain prominent structure, while the regular component of OA may possess some characteristics of chaos. The distinction between regularity and chaos may sometimes be arbitrary, and the two can be bridged by several mechanisms, such as STDP and the “code driving complexity” mechanism. It is desirable for empirical neuroscience to examine the excitatory and inhibitory designs at different scales and regions to better understand EIB.

Acknowledgements

This work was supported by NeuroCognitive Institute (NCI) and NCI Clinical Research Foundation Inc.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Deco, G., Jirsa, V., McIntosh, A.R., Sporns, O. and Kotter, R. (2009) Key Role of Coupling, Delay, and Noise in Resting Brain Fluctuations. Proceedings of the National Academy of Sciences of the United States of America, 106, 10302-10307.
https://doi.org/10.1073/pnas.0901831106
[2] Deco, G. and Jirsa, V.K. (2012) Ongoing Cortical Activity at Rest: Criticality, Multistability, and Ghost Attractors. The Journal of Neuroscience, 32, 3366-3375.
https://doi.org/10.1523/JNEUROSCI.2523-11.2012
[3] Ghosh, A., Rho, Y., McIntosh, A.R., Kotter, R. and Jirsa, V.K. (2008) Noise during Rest Enables the Exploration of the Brain’s Dynamic Repertoire. PLoS Computational Biology, 4, e1000196.
https://doi.org/10.1371/journal.pcbi.1000196
[4] Young, G.B. (2000) The EEG in Coma. Journal of Clinical Neurophysiology, 17, 473-485.
https://doi.org/10.1097/00004691-200009000-00006
[5] Ikegaya, Y., Aaron, G., Cossart, R., Aronov, D., Lampl, I., Ferster, D. and Yuste, R. (2004) Synfire Chains and Cortical Songs: Temporal Modules of Cortical Activity. Science, 304, 559-564.
https://doi.org/10.1126/science.1093173
[6] Segev, R., Baruchi, I., Hulata, E. and Ben-Jacob, E. (2004) Hidden Neuronal Correlations in Cultured Networks. Physical Review Letters, 92, Article ID: 118102.
https://doi.org/10.1103/PhysRevLett.92.118102
[7] Beggs, J.M. and Plenz, D. (2004) Neuronal Avalanches Are Diverse and Precise Activity Patterns that Are Stable for Many Hours in Cortical Slice Cultures. The Journal of Neuroscience, 24, 5216-5229.
https://doi.org/10.1523/JNEUROSCI.0540-04.2004
[8] Biswal, B., Yetkin, F.Z., Haughton, V.M. and Hyde, J.S. (1995) Functional Connectivity in the Motor Cortex of Resting Human Brain Using Echo-Planar MRI. Magnetic Resonance in Medicine, 34, 537-541.
https://doi.org/10.1002/mrm.1910340409
[9] Moussa, M.N., Steen, M.R., Laurienti, P.J. and Hayasaka, S. (2012) Consistency of Network Modules in Resting-State FMRI Connectome Data. PLoS ONE, 7, e44428.
https://doi.org/10.1371/journal.pone.0044428
[10] Xue, S.W., Li, D., Weng, X.C., Northoff, G. and Li, D.W. (2014) Different Neural Manifestations of Two Slow Frequency Bands in Resting Functional Magnetic Resonance Imaging: A Systemic Survey at Regional, Interregional, and Network Levels. Brain Connectivity, 4, 242-255.
https://doi.org/10.1089/brain.2013.0182
[11] Lee, T.W., Northoff, G. and Wu, Y.T. (2014) Resting Network Is Composed of More than One Neural Pattern: An fMRI Study. Neuroscience, 274, 198-208.
https://doi.org/10.1016/j.neuroscience.2014.05.035
[12] Fuentealba, P. and Steriade, M. (2005) The Reticular Nucleus Revisited: Intrinsic and Network Properties of a Thalamic Pacemaker. Progress in Neurobiology, 75, 125-141.
https://doi.org/10.1016/j.pneurobio.2005.01.002
[13] Garoutte, B. and Aird, R.B. (1958) Studies on the Cortical Pacemaker: Synchrony and Asynchrony of Bilaterally Recorded Alpha and Beta Activity. Electroencephalography and Clinical Neurophysiology, 10, 259-268.
https://doi.org/10.1016/0013-4694(58)90033-6
[14] Ferri, R., Alicata, F., Del Gracco, S., Elia, M., Musumeci, S.A. and Stefanini, M.C. (1996) Chaotic Behavior of EEG Slow-Wave Activity during Sleep. Electroencephalography and Clinical Neurophysiology, 99, 539-543.
https://doi.org/10.1016/S0013-4694(96)95719-3
[15] Freeman, W.J. (1987) Simulation of Chaotic EEG Patterns with a Dynamic Model of the Olfactory System. Biological Cybernetics, 56, 139-150.
https://doi.org/10.1007/BF00317988
[16] Lopes da Silva, F.H., Pijn, J.P., Velis, D. and Nijssen, P.C.G. (1997) Alpha Rhythms: Noise, Dynamics and Models. International Journal of Psychophysiology, 26, 237-249.
https://doi.org/10.1016/S0167-8760(97)00767-8
[17] Mao, B.Q., Hamzei-Sichani, F., Aronov, D., Froemke, R.C. and Yuste, R. (2001) Dynamics of Spontaneous Activity in Neocortical Slices. Neuron, 32, 883-898.
https://doi.org/10.1016/S0896-6273(01)00518-9
[18] Llinas, R.R. (1988) The Intrinsic Electrophysiological Properties of Mammalian Neurons: Insights into Central Nervous System Function. Science, 242, 1654-1664.
https://doi.org/10.1126/science.3059497
[19] Wang, X.J. (2010) Neurophysiological and Computational Principles of Cortical Rhythms in Cognition. Physiological Reviews, 90, 1195-1268.
https://doi.org/10.1152/physrev.00035.2008
[20] Brunel, N. and Wang, X.J. (2003) What Determines the Frequency of Fast Network Oscillations with Irregular Neural Discharges? I. Synaptic Dynamics and Excitation-Inhibition Balance. Journal of Neurophysiology, 90, 415-430.
https://doi.org/10.1152/jn.01095.2002
[21] Aertsen, A.D., Erb, M. and Palm, G. (1994) Dynamics of Functional Coupling in the Cerebral Cortex: An Attempt at a Model-Based Interpretation. Physica D: Nonlinear Phenomena, 75, 103-128.
https://doi.org/10.1016/0167-2789(94)90278-X
[22] Freeman, W.J. (1990) Searching for Signal and Noise in the Chaos of Brain Waves. The Ubiquity of Chaos, 47-55.
[23] Skarda, C.A. and Freeman, W.J. (1987) How Brains Make Chaos in Order to Make Sense of the World. Behavioral and Brain Sciences, 10, 161-173.
https://doi.org/10.1017/S0140525X00047336
[24] Gallez, D. and Babloyantz, A. (1991) Predictability of Human EEG: A Dynamical Approach. Biological Cybernetics, 64, 381-391.
https://doi.org/10.1007/BF00224705
[25] Lubenov, E.V. and Siapas, A.G. (2008) Decoupling through Synchrony in Neuronal Circuits with Propagation Delays. Neuron, 58, 118-131.
https://doi.org/10.1016/j.neuron.2008.01.036
[26] Tsodyks, M., Kenet, T., Grinvald, A. and Arieli, A. (1999) Linking Spontaneous Activity of Single Cortical Neurons and the Underlying Functional Architecture. Science, 286, 1943-1946.
https://doi.org/10.1126/science.286.5446.1943
[27] Luczak, A., Bartho, P. and Harris, K.D. (2009) Spontaneous Events Outline the Realm of Possible Sensory Responses in Neocortical Populations. Neuron, 62, 413-425.
https://doi.org/10.1016/j.neuron.2009.03.014
[28] Berkes, P., Orban, G., Lengyel, M. and Fiser, J. (2011) Spontaneous Cortical Activity Reveals Hallmarks of an Optimal Internal Model of the Environment. Science, 331, 83-87.
https://doi.org/10.1126/science.1195870
[29] McDonnell, M.D. and Ward, L.M. (2011) The Benefits of Noise in Neural Systems: Bridging Theory and Experiment. Nature Reviews Neuroscience, 12, 415-426.
https://doi.org/10.1038/nrn3061
[30] Braitenberg, V. and Schüz, A. (1998) Cortex: Statistics and Geometry of Neuronal Connectivity. Springer, Berlin.
https://doi.org/10.1007/978-3-662-03733-1
[31] Zhang, K. and Sejnowski, T.J. (2000) A Universal Scaling Law between Gray Matter and White Matter of Cerebral Cortex. Proceedings of the National Academy of Sciences of the United States of America, 97, 5621-5626.
https://doi.org/10.1073/pnas.090504197
[32] Laughlin, S.B. and Sejnowski, T.J. (2003) Communication in Neuronal Networks. Science, 301, 1870-1874.
https://doi.org/10.1126/science.1089662
[33] Cragg, B.G. (1967) The Density of Synapses and Neurones in the Motor and Visual Areas of the Cerebral Cortex. Journal of Anatomy, 101, 639-654.
[34] DeFelipe, J. and Farinas, I. (1992) The Pyramidal Neuron of the Cerebral Cortex: Morphological and Chemical Characteristics of the Synaptic Inputs. Progress in Neurobiology, 39, 563-607.
https://doi.org/10.1016/0301-0082(92)90015-7
[35] Ferster, D. (1986) Orientation Selectivity of Synaptic Potentials in Neurons of Cat Primary Visual Cortex. The Journal of Neuroscience, 6, 1284-1301.
https://doi.org/10.1523/JNEUROSCI.06-05-01284.1986
[36] Douglas, R.J., Martin, K.A. and Whitteridge, D. (1991) An Intracellular Analysis of the Visual Responses of Neurones in Cat Visual Cortex. The Journal of Physiology, 440, 659-696.
https://doi.org/10.1113/jphysiol.1991.sp018730
[37] Berman, N.J., Douglas, R.J. and Martin, K.A. (1992) Chapter 21 GABA-Mediated Inhibition in the Neural Networks of Visual Cortex. In: Progress in Brain Research, Vol. 90, Elsevier, Amsterdam, 443-476.
https://doi.org/10.1016/S0079-6123(08)63626-2
[38] Shadlen, M.N. and Newsome, W.T. (1994) Noise, Neural Codes and Cortical Organization. Current Opinion in Neurobiology, 4, 569-579.
https://doi.org/10.1016/0959-4388(94)90059-0
[39] Shu, Y., Hasenstaub, A. and McCormick, D.A. (2003) Turning on and off Recurrent Balanced Cortical Activity. Nature, 423, 288-293.
https://doi.org/10.1038/nature01616
[40] Okun, M. and Lampl, I. (2008) Instantaneous Correlation of Excitation and Inhibition during Ongoing and Sensory-Evoked Activities. Nature Neuroscience, 11, 535-537.
https://doi.org/10.1038/nn.2105
[41] Wehr, M. and Zador, A.M. (2003) Balanced Inhibition Underlies Tuning and Sharpens Spike Timing in Auditory Cortex. Nature, 426, 442-446.
https://doi.org/10.1038/nature02116
[42] Feng, Y. (1961) A Short History of Chinese Philosophy. Macmillan Ltd., New York.
[43] Nicoll, R.A., Malenka, R.C. and Kauer, J.A. (1990) Functional Comparison of Neurotransmitter Receptor Subtypes in Mammalian Central Nervous System. Physiological Reviews, 70, 513-565.
https://doi.org/10.1152/physrev.1990.70.2.513
[44] Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G. and Wu, C. (2004) Interneurons of the Neocortical Inhibitory System. Nature Reviews Neuroscience, 5, 793-807.
https://doi.org/10.1038/nrn1519
[45] Douglas, R.J. and Martin, K.A.C. (2007) Recurrent Neuronal Circuits in the Neocortex. Current Biology, 17, R496-R500.
https://doi.org/10.1016/j.cub.2007.04.024
[46] Chung, S., Li, X. and Nelson, S.B. (2002) Short-Term Depression at Thalamocortical Synapses Contributes to Rapid Adaptation of Cortical Sensory Responses in Vivo. Neuron, 34, 437-446.
https://doi.org/10.1016/S0896-6273(02)00659-1
[47] Anderson, J.S., Carandini, M. and Ferster, D. (2000) Orientation Tuning of Input Conductance, Excitation, and Inhibition in Cat Primary Visual Cortex. Journal of Neurophysiology, 84, 909-926.
https://doi.org/10.1152/jn.2000.84.2.909
[48] Hubel, D.H. and Wiesel, T.N. (1962) Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex. The Journal of Physiology, 160, 106-154.
https://doi.org/10.1113/jphysiol.1962.sp006837
[49] Hirsch, J.A., Alonso, J.M., Reid, R.C. and Martinez, L.M. (1998) Synaptic Integration in Striate Cortical Simple Cells. The Journal of Neuroscience, 18, 9517-9528.
https://doi.org/10.1523/JNEUROSCI.18-22-09517.1998
[50] Faisal, A.A., Selen, L.P.J. and Wolpert, D.M. (2008) Noise in the Nervous System. Nature Reviews Neuroscience, 9, 292-303.
https://doi.org/10.1038/nrn2258
[51] Alexander, G.E. and Crutcher, M.D. (1990) Functional Architecture of Basal Ganglia Circuits: Neural Substrates of Parallel Processing. Trends in Neurosciences, 13, 266-271.
https://doi.org/10.1016/0166-2236(90)90107-L
[52] Lee, T.W. and Xue, S.W. (2018) Extended Cortico-Limbic Dysregulation Model of Major Depressive Disorder: A Demonstration of the Application of an Analysis-Synthesis Framework to Explore Psychopathology. International Journal of Psychology Research, 11, 247-297.
[53] Tsodyks, M.V. and Sejnowski, T. (1995) Rapid State Switching in Balanced Cortical Network Models. Network: Computation in Neural Systems, 6, 111-124.
https://doi.org/10.1088/0954-898X_6_2_001
[54] Jahnke, S., Memmesheimer, R.M. and Timme, M. (2009) How Chaotic Is the Balanced State? Frontiers in Computational Neuroscience, 3, 13.
https://doi.org/10.3389/neuro.10.013.2009
[55] Doiron, B. and Litwin-Kumar, A. (2014) Balanced Neural Architecture and the Idling Brain. Frontiers in Computational Neuroscience, 8, 56.
https://doi.org/10.3389/fncom.2014.00056
[56] Van Vreeswijk, C. and Sompolinsky, H. (1996) Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory Activity. Science, 274, 1724-1726.
https://doi.org/10.1126/science.274.5293.1724
[57] Van Vreeswijk, C. and Sompolinsky, H. (1998) Chaotic Balanced State in a Model of Cortical Circuits. Neural Computation, 10, 1321-1371.
https://doi.org/10.1162/089976698300017214
[58] Loggia, M.L., Edwards, R.R., Kim, J., Vangel, M.G., Wasan, A.D., Gollub, R.L., Harris, R.E., Park, K. and Napadow, V. (2012) Disentangling Linear and Nonlinear Brain Responses to Evoked Deep Tissue Pain. Pain, 153, 2140-2151.
https://doi.org/10.1016/j.pain.2012.07.014
[59] Nagumo, J., Arimoto, S. and Yoshizawa, S. (1962) An Active Pulse Transmission Line Simulating Nerve Axon. Proceedings of the IRE, 50, 2061-2070.
https://doi.org/10.1109/JRPROC.1962.288235
[60] FitzHugh, R. (1961) Impulses and Physiological States in Theoretical Models of Nerve Membrane. Biophysical Journal, 1, 445-466.
https://doi.org/10.1016/S0006-3495(61)86902-6
[61] Kotter, R. (2004) Online Retrieval, Processing, and Visualization of Primate Connectivity Data from the CoCoMac Database. Neuroinformatics, 2, 127-144.
https://doi.org/10.1385/NI:2:2:127
[62] Wilson, H.R. and Cowan, J.D. (1972) Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophysical Journal, 12, 1-24.
https://doi.org/10.1016/S0006-3495(72)86068-5
[63] Bak, P., Tang, C. and Wiesenfeld, K. (1987) Self-Organized Criticality: An Explanation of the 1/f Noise. Physical Review Letters, 59, 381-384.
https://doi.org/10.1103/PhysRevLett.59.381
[64] Beggs, J.M. and Plenz, D. (2003) Neuronal Avalanches in Neocortical Circuits. The Journal of Neuroscience, 23, 11167-11177.
https://doi.org/10.1523/JNEUROSCI.23-35-11167.2003
[65] Linkenkaer-Hansen, K., Nikulin, V.V., Palva, J.M., Kaila, K. and Ilmoniemi, R.J. (2004) Stimulus-Induced Change in Long-Range Temporal Correlations and Scaling Behaviour of Sensorimotor Oscillations. European Journal of Neuroscience, 19, 203-218.
https://doi.org/10.1111/j.1460-9568.2004.03116.x
[66] Linkenkaer-Hansen, K., Nikouline, V.V., Palva, J.M. and Ilmoniemi, R.J. (2001) Long-Range Temporal Correlations and Scaling Behavior in Human Brain Oscillations. The Journal of Neuroscience, 21, 1370-1377.
https://doi.org/10.1523/JNEUROSCI.21-04-01370.2001
[67] Priesemann, V., Valderrama, M., Wibral, M. and Le Van Quyen, M. (2013) Neuronal Avalanches Differ from Wakefulness to Deep Sleep—Evidence from Intracranial Depth Recordings in Humans. PLoS Computational Biology, 9, e1002985.
https://doi.org/10.1371/journal.pcbi.1002985
[68] Markram, H. and Tsodyks, M. (1996) Redistribution of Synaptic Efficacy between Neocortical Pyramidal Neurons. Nature, 382, 807-810
https://doi.org/10.1038/382807a0
[69] Levina, A., Herrmann, J.M. and Geisel, T. (2007) Dynamical Synapses Causing Self-Organized Criticality in Neural Networks. Nature Physics, 3, 857-860.
https://doi.org/10.1038/nphys758
[70] Rubinov, M., Sporns, O., Thivierge, J.P. and Breakspear, M. (2011) Neurobiologically Realistic Determinants of Self-Organized Criticality in Networks of Spiking Neurons. PLoS Computational Biology, 7, e1002038.
https://doi.org/10.1371/journal.pcbi.1002038
[71] Thivierge, J.P. and Cisek, P. (2008) Nonperiodic Synchronization in Heterogeneous Networks of Spiking Neurons. The Journal of Neuroscience, 28, 7968-7978.
https://doi.org/10.1523/JNEUROSCI.0870-08.2008
[72] Benayoun, M., Cowan, J.D., van Drongelen, W. and Wallace, E. (2010) Avalanches in a Stochastic Model of Spiking Neurons. PLoS Computational Biology, 6, e1000846.
https://doi.org/10.1371/journal.pcbi.1000846
[73] Lee, T.-W. (2016) Network Balance and Its Relevance to Affective Disorders: Dialectic Neuroscience. Pronoun, New York.
[74] Freyer, F., Aquino, K., Robinson, P.A., Ritter, P. and Breakspear, M. (2009) Bistability and Non-Gaussian Fluctuations in Spontaneous Cortical Activity. The Journal of Neuroscience, 29, 8512-8524.
https://doi.org/10.1523/JNEUROSCI.0754-09.2009
[75] Freeman, W.J. and Vitiello, G. (2007) The Dissipative Quantum Model of Brain and Laboratory Observations. Electronic Journal of Theoretical Physics, 4, 1-18.
[76] Bramwell, S.T., Christensen, K., Fortin, J.-Y., Holdsworth, P.C.W., Jensen, H.J., Lise, S., López, J.M., Nicodemi, M., Pinton, J.-F. and Sellitto, M. (2000) Universal Fluctuations in Correlated Systems. Physical Review Letters, 84, 3744-3747.
https://doi.org/10.1103/PhysRevLett.84.3744
[77] Matsumoto, T., Chua, L. and Komuro, M. (1985) The Double Scroll. IEEE Transactions on Circuits and Systems, 32, 797-818.
https://doi.org/10.1109/TCS.1985.1085791
[78] Kaplan, D. and Glass, L. (1995) Understanding Nonlinear Dynamics. Springer, New York.
https://doi.org/10.1007/978-1-4612-0823-5
[79] Mandelbrot, B.B. and Wheeler, J.A. (1983) The Fractal Geometry of Nature. American Journal of Physics, 51, 286.
https://doi.org/10.1119/1.13295
[80] Viswanath, D. (2004) The Fractal Property of the Lorenz Attractor. Physica D: Nonlinear Phenomena, 190, 115-128.
https://doi.org/10.1016/j.physd.2003.10.006
[81] Miller, G.A. (1996) How We Think about Cognition, Emotion, and Biology in Psychopathology. Psychophysiology, 33, 615-628.
https://doi.org/10.1111/j.1469-8986.1996.tb02356.x
[82] Tsuda, I. (1984) A Hermeneutic Process of the Brain. Progress of Theoretical Physics Supplement, 79, 241-259.
https://doi.org/10.1143/PTPS.79.241
[83] Kriegeskorte, N., Goebel, R. and Bandettini, P. (2006) Information-Based Functional Brain Mapping. Proceedings of the National Academy of Sciences of the United States of America, 103, 3863-3868.
https://doi.org/10.1073/pnas.0600244103
[84] Kriegeskorte, N., Mur, M., Ruff, D.A., Kiani, R., Bodurka, J., Esteky, H., Tanaka, K. and Bandettini, P.A. (2008) Matching Categorical Object Representations in Inferior Temporal Cortex of Man and Monkey. Neuron, 60, 1126-1141.
https://doi.org/10.1016/j.neuron.2008.10.043
[85] Xue, S.W., Weng, X.C., He, S. and Li, D.W. (2013) Similarity Representation of Pattern-Information fMRI. Chinese Science Bulletin, 58, 1236-1242.
https://doi.org/10.1007/s11434-013-5743-0
[86] Tsuda, I. (2001) Toward an Interpretation of Dynamic Neural Activity in Terms of Chaotic Dynamical Systems. Behavioral and Brain Sciences, 24, 793-810.
https://doi.org/10.1017/S0140525X01000097
[87] Lindquist, K.A., Wager, T.D., Kober, H., Bliss-Moreau, E. and Barrett, L.F. (2012) The Brain Basis of Emotion: A Meta-Analytic Review. Behavioral and Brain Sciences, 35, 121-143.
https://doi.org/10.1017/S0140525X11000446
[88] Babloyantz, A. and Lourenco, C. (1994) Computation with Chaos: A Paradigm for Cortical Activity. Proceedings of the National Academy of Sciences of the United States of America, 91, 9027-9031.
https://doi.org/10.1073/pnas.91.19.9027
[89] Érdi, P., Gröbler, T., Barna, G. and Kaski, K. (1993) Dynamics of the Olfactory Bulb: Bifurcations, Learning, and Memory. Biological Cybernetics, 69, 57-66.
https://doi.org/10.1007/BF00201408
[90] McLelland, D., Baker, P.M., Ahmed, B. and Bair, W. (2010) Neuronal Responses during and after the Presentation of Static Visual Stimuli in Macaque Primary Visual Cortex. The Journal of Neuroscience, 30, 12619-12631.
https://doi.org/10.1523/JNEUROSCI.0815-10.2010
[91] Freeman, W.J., Chang, H.J., Burke, B.C., Rose, P.A. and Badler, J. (1997) Taming Chaos: Stabilization of Aperiodic Attractors by Noise [Olfactory System Model]. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 44, 989-996.
https://doi.org/10.1109/81.633888
[92] Kitzbichler, M.G., Smith, M.L., Christensen, S.R. and Bullmore, E. (2009) Broadband Criticality of Human Brain Network Synchronization. PLoS Computational Biology, 5, e1000314.
https://doi.org/10.1371/journal.pcbi.1000314
[93] Stam, C.J. and de Bruin, E.A. (2004) Scale-Free Dynamics of Global Functional Connectivity in the Human Brain. Human Brain Mapping, 22, 97-109.
https://doi.org/10.1002/hbm.20016
[94] Kello, C.T., Brown, G.D., Ferrer-i-Cancho, R., Holden, J.G., Linkenkaer-Hansen, K., Rhodes, T. and Van Orden, G.C. (2010) Scaling Laws in Cognitive Sciences. Trends in Cognitive Sciences, 14, 223-232.
https://doi.org/10.1016/j.tics.2010.02.005
[95] Milośević, N.T., Ristanović, D., Gudović, R., Rajković, K. and Marić, D. (2007) Application of Fractal Analysis to Neuronal Dendritic Arborisation Patterns of the Monkey Dentate Nucleus. Neuroscience Letters, 425, 23-27.
https://doi.org/10.1016/j.neulet.2007.08.009
[96] Rothnie, P., Kabaso, D., Hof, P.R., Henry, B.I. and Wearne, S.L. (2006) Functionally Relevant Measures of Spatial Complexity in Neuronal Dendritic Arbors. Journal of Theoretical Biology, 238, 505-526.
https://doi.org/10.1016/j.jtbi.2005.06.001
[97] Bonifazi, P., Goldin, M., Picardo, M.A., Jorquera, I., Cattani, A., Bianconi, G., Represa, A., Ben-Ari, Y. and Cossart, R. (2009) GABAergic Hub Neurons Orchestrate Synchrony in Developing Hippocampal Networks. Science, 326, 1419-1424.
https://doi.org/10.1126/science.1175509
[98] Kiselev, V.G., Hahn, K.R. and Auer, D.P. (2003) Is the Brain Cortex a Fractal? NeuroImage, 20, 1765-1774.
https://doi.org/10.1016/S1053-8119(03)00380-X
[99] Tort, A.B.L., Kramer, M.A., Thorn, C., Gibson, D.J., Kubota, Y., Graybiel, A.M. and Kopell, N.J. (2008) Dynamic Cross-Frequency Couplings of Local Field Potential Oscillations in Rat Striatum and Hippocampus during Performance of a T-Maze Task. Proceedings of the National Academy of Sciences of the United States of America, 105, 20517-20522.
https://doi.org/10.1073/pnas.0810524105
[100] He, B.J., Zempel, J.M., Snyder, A.Z. and Raichle, M.E. (2010) The Temporal Structures and Functional Significance of Scale-Free Brain Activity. Neuron, 66, 353-369.
https://doi.org/10.1016/j.neuron.2010.04.020
[101] Northoff, G. (2013) Consciousness. In: Unlocking the Brain, Vol. II, Oxford University Press, Oxford.
https://doi.org/10.1093/acprof:oso/9780199826995.001.0001
[102] Cortes, J.M., Torres, J.J. and Marro, J. (2007) Control of Neural Chaos by Synaptic Noise. Biosystems, 87, 186-190.
https://doi.org/10.1016/j.biosystems.2006.09.013
[103] Dennis, B., Desharnais, R.A., Cushing, J.M., Henson, S.M. and Costantino, R.F. (2003) Can Noise Induce Chaos? Oikos, 102, 329-339.
https://doi.org/10.1034/j.1600-0706.2003.12387.x
[104] Faisal, A.A., White, J.A. and Laughlin, S.B. (2005) Ion-Channel Noise Places Limits on the Miniaturization of the Brain’s Wiring. Current Biology, 15, 1143-1149.
https://doi.org/10.1016/j.cub.2005.05.056
[105] Stevens, C.F. and Zador, A.M. (1998) Input Synchrony and the Irregular Firing of Cortical Neurons. Nature Neuroscience, 1, 210-217.
https://doi.org/10.1038/659
[106] Shannon, C.E. (2001) A Mathematical Theory of Communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5, 3-55.
https://doi.org/10.1145/584091.584093
[107] Luce, R.D. (2003) Whatever Happened to Information Theory in Psychology? Review of General Psychology, 7, 183-188.
https://doi.org/10.1037/1089-2680.7.2.183
[108] Freeman, W.J. (1994) Role of Chaotic Dynamics in Neural Plasticity. Progress in Brain Research, 102, 319-333.
https://doi.org/10.1016/S0079-6123(08)60549-X
[109] Shew, W.L., Yang, H., Petermann, T., Roy, R. and Plenz, D. (2009) Neuronal Avalanches Imply Maximum Dynamic Range in Cortical Networks at Criticality. The Journal of Neuroscience, 29, 15595-15600.
https://doi.org/10.1523/JNEUROSCI.3864-09.2009
[110] Larremore, D.B., Shew, W.L. and Restrepo, J.G. (2011) Predicting Criticality and Dynamic Range in Complex Networks: Effects of Topology. Physical Review Letters, 106, Article ID: 058101.
https://doi.org/10.1103/PhysRevLett.106.058101
[111] Kinouchi, O. and Copelli, M. (2006) Optimal Dynamical Range of Excitable Networks at Criticality. Nature Physics, 2, 348-351.
https://doi.org/10.1038/nphys289
[112] Haldeman, C. and Beggs, J.M. (2005) Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States. Physical Review Letters, 94, Article ID: 058101.
https://doi.org/10.1103/PhysRevLett.94.058101
[113] Buxton, R.B. and Frank, L.R. (1997) A Model for the Coupling between Cerebral Blood Flow and Oxygen Metabolism during Neural Stimulation. Journal of Cerebral Blood Flow and Metabolism, 17, 64-72.
https://doi.org/10.1097/00004647-199701000-00009
[114] Shew, W.L., Yang, H., Yu, S., Roy, R. and Plenz, D. (2011) Information Capacity and Transmission Are Maximized in Balanced Cortical Networks with Neuronal Avalanches. The Journal of Neuroscience, 31, 55-63.
https://doi.org/10.1523/JNEUROSCI.4637-10.2011
[115] Langton, C.G. (1990) Computation at the Edge of Chaos: Phase Transitions and Emergent Computation. Physica D: Nonlinear Phenomena, 42, 12-37.
https://doi.org/10.1016/0167-2789(90)90064-V
[116] Kauffman, S.A. and Johnsen, S. (1991) Coevolution to the Edge of Chaos: Coupled Fitness Landscapes, Poised States, and Coevolutionary Avalanches. Journal of Theoretical Biology, 149, 467-505.
https://doi.org/10.1016/S0022-5193(05)80094-3
[117] Bertschinger, N. and Natschläger, T. (2004) Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks. Neural Computation, 16, 1413-1436.
https://doi.org/10.1162/089976604323057443
[118] Natschläger, T., Bertschinger, N. and Legenstein, R. (2005) At the Edge of Chaos: Real-Time Computations and Self-Organized Criticality in Recurrent Neural Networks. Advances in Neural Information Processing Systems, 17, 145-152.
[119] Legenstein, R. and Maass, W. (2007) Edge of Chaos and Prediction of Computational Performance for Neural Circuit Models. Neural Networks, 20, 323-334.
https://doi.org/10.1016/j.neunet.2007.04.017
[120] Bak, P. and Chialvo, D.R. (2001) Adaptive Learning by Extremal Dynamics and Negative Feedback. Physical Review E, 63, Article ID: 031912.
https://doi.org/10.1103/PhysRevE.63.031912
[121] De Arcangelis, L. and Herrmann, H.J. (2010) Learning as a Phenomenon Occurring in a Critical State. Proceedings of the National Academy of Sciences of the United States of America, 107, 3977-3981.
https://doi.org/10.1073/pnas.0912289107
[122] Churchland, M.M., Yu, B.M., Cunningham, J.P., Sugrue, L.P., Cohen, M.R., Corrado, G.S., Newsome, W.T., Clark, A.M., Hosseini, P., Scott, B.B., Bradley, D.C., Smith, M.A., Kohn, A., Movshon, J.A., Armstrong, K.M., Moore, T., Chang, S.W., Snyder, L.H., Lisberger, S.G., Priebe, N.J., Finn, I.M., Ferster, D., Ryu, S.I., Santhanam, G., Sahani, M. and Shenoy, K.V. (2010) Stimulus Onset Quenches Neural Variability: A Widespread Cortical Phenomenon. Nature Neuroscience, 13, 369-378.
https://doi.org/10.1038/nn.2501
[123] He, B.J. (2013) Spontaneous and Task-Evoked Brain Activity Negatively Interact. The Journal of Neuroscience, 33, 4672-4682.
https://doi.org/10.1523/JNEUROSCI.2922-12.2013
[124] Wang, X.J. (2008) Decision Making in Recurrent Neuronal Circuits. Neuron, 60, 215-234.
https://doi.org/10.1016/j.neuron.2008.09.034
[125] Sussillo, D. and Abbott, L.F. (2009) Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron, 63, 544-557.
https://doi.org/10.1016/j.neuron.2009.07.018
[126] Steriade, M. (2000) Corticothalamic Resonance, States of Vigilance and Mentation. Neuroscience, 101, 243-276.
https://doi.org/10.1016/S0306-4522(00)00353-5
[127] Llinás, R.R., Leznik, E. and Urbano, F.J. (2002) Temporal Binding via Cortical Coincidence Detection of Specific and Nonspecific Thalamocortical Inputs: A Voltage-Dependent Dye-Imaging Study in Mouse Brain Slices. Proceedings of the National Academy of Sciences of the United States of America, 99, 449-454.
https://doi.org/10.1073/pnas.012604899
[128] Tsai, Y.T., Chan, H.L., Lee, S.T., Tu, P.H., Chang, B.L. and Wu, T. (2010) Significant Thalamocortical Coherence of Sleep Spindle, Theta, Delta, and Slow Oscillations in NREM Sleep: Recordings from the Human Thalamus. Neuroscience Letters, 485, 173-177.
https://doi.org/10.1016/j.neulet.2010.09.004
[129] Deco, G. and Hugues, E. (2012) Neural Network Mechanisms Underlying Stimulus Driven Variability Reduction. PLoS Computational Biology, 8, e1002395.
https://doi.org/10.1371/journal.pcbi.1002395
[130] Brunel, N. and Wang, X.J. (2001) Effects of Neuromodulation in a Cortical Network Model of Object Working Memory Dominated by Recurrent Inhibition. Journal of Computational Neuroscience, 11, 63-85.
https://doi.org/10.1023/A:1011204814320
[131] Rajan, K., Abbott, L.F. and Sompolinsky, H. (2010) Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks. Physical Review E, 82, Article ID: 011903.
https://doi.org/10.1103/PhysRevE.82.011903
[132] Arieli, A., Sterkin, A., Grinvald, A. and Aertsen, A. (1996) Dynamics of Ongoing Activity: Explanation of the Large Variability in Evoked Cortical Responses. Science, 273, 1868-1871.
https://doi.org/10.1126/science.273.5283.1868
[133] Fox, M.D., Snyder, A.Z., Zacks, J.M. and Raichle, M.E. (2006) Coherent Spontaneous Activity Accounts for Trial-to-Trial Variability in Human Evoked Brain Responses. Nature Neuroscience, 9, 23-25.
https://doi.org/10.1038/nn1616
[134] Pfurtscheller, G. and Andrew, C. (1999) Event-Related Changes of Band Power and Coherence: Methodology and Interpretation. Journal of Clinical Neurophysiology, 16, 512-519.
https://doi.org/10.1097/00004691-199911000-00003
[135] Benzi, R., Sutera, A. and Vulpiani, A. (1981) The Mechanism of Stochastic Resonance. Journal of Physics A: Mathematical and General, 14, L453.
https://doi.org/10.1088/0305-4470/14/11/006
[136] Moss, F., Ward, L.M. and Sannita, W.G. (2004) Stochastic Resonance and Sensory Information Processing: A Tutorial and Review of Application. Clinical Neurophysiology, 115, 267-281.
https://doi.org/10.1016/j.clinph.2003.09.014
[137] McDonnell, M.D. and Abbott, D. (2009) What Is Stochastic Resonance? Definitions, Misconceptions, Debates, and Its Relevance to Biology. PLoS Computational Biology, 5, e1000348.
https://doi.org/10.1371/journal.pcbi.1000348
[138] Rubinstein, J.T. and Hong, R. (2003) Signal Coding in Cochlear Implants: Exploiting Stochastic Effects of Electrical Stimulation. Annals of Otology, Rhinology and Laryngology, 112, 14-19.
https://doi.org/10.1177/00034894031120S904
[139] Tougaard, J. (2000) Stochastic Resonance and Signal Detection in an Energy Detector—Implications for Biological Receptor Systems. Biological Cybernetics, 83, 471-480.
https://doi.org/10.1007/s004220000176
[140] Shu, Y., Hasenstaub, A., Badoual, M., Bal, T. and McCormick, D.A. (2003) Barrages of Synaptic Activity Control the Gain and Sensitivity of Cortical Neurons. The Journal of Neuroscience, 23, 10388-10401.
https://doi.org/10.1523/JNEUROSCI.23-32-10388.2003
[141] Fellous, J.M., Rudolph, M., Destexhe, A. and Sejnowski, T.J. (2003) Synaptic Background Noise Controls the Input/Output Characteristics of Single Cells in an in Vitro Model of in Vivo Activity. Neuroscience, 122, 811-829.
https://doi.org/10.1016/j.neuroscience.2003.08.027
[142] Rudolph, M. and Destexhe, A. (2003) The Discharge Variability of Neocortical Neurons during High-Conductance States. Neuroscience, 119, 855-873.
https://doi.org/10.1016/S0306-4522(03)00164-7
[143] Pare, D., Shink, E., Gaudreau, H., Destexhe, A. and Lang, E.J. (1998) Impact of Spontaneous Synaptic Activity on the Resting Properties of Cat Neocortical Pyramidal Neurons in Vivo. Journal of Neurophysiology, 79, 1450-1460.
https://doi.org/10.1152/jn.1998.79.3.1450
[144] Destexhe, A. and Paré, D. (1999) Impact of Network Activity on the Integrative Properties of Neocortical Pyramidal Neurons in Vivo. Journal of Neurophysiology, 81, 1531-1547.
https://doi.org/10.1152/jn.1999.81.4.1531
[145] Destexhe, A., Rudolph, M. and Paré, D. (2003) The High-Conductance State of Neocortical Neurons in Vivo. Nature Reviews Neuroscience, 4, 739-751.
https://doi.org/10.1038/nrn1198
[146] Chance, F.S., Abbott, L.F. and Reyes, A.D. (2002) Gain Modulation from Background Synaptic Input. Neuron, 35, 773-782.
https://doi.org/10.1016/S0896-6273(02)00820-6
[147] Hô, N. and Destexhe, A. (2000) Synaptic Background Activity Enhances the Responsiveness of Neocortical Pyramidal Neurons. Journal of Neurophysiology, 84, 1488-1496.
https://doi.org/10.1152/jn.2000.84.3.1488
[148] Destexhe, A., Rudolph, M., Fellous, J.M. and Sejnowski, T.J. (2001) Fluctuating Synaptic Conductances Recreate in Vivo-Like Activity in Neocortical Neurons. Neuroscience, 107, 13-24.
https://doi.org/10.1016/S0306-4522(01)00344-X
[149] Dorval, A.D., Christini, D.J. and White, J.A. (2001) Real-Time Linux Dynamic Clamp: A Fast and Flexible Way to Construct Virtual Ion Channels in Living Cells. Annals of Biomedical Engineering, 29, 897-907.
https://doi.org/10.1114/1.1408929
[150] Sharp, A.A., O’Neil, M.B., Abbott, L.F. and Marder, E. (1993) The Dynamic Clamp: Artificial Conductances in Biological Neurons. Trends in Neurosciences, 16, 389-394.
https://doi.org/10.1016/0166-2236(93)90004-6
[151] Faure, P., Kaplan, D. and Korn, H. (2000) Synaptic Efficacy and the Transmission of Complex Firing Patterns between Neurons. Journal of Neurophysiology, 84, 3010-3025.
https://doi.org/10.1152/jn.2000.84.6.3010
[152] Ji, D. and Wilson, M.A. (2007) Coordinated Memory Replay in the Visual Cortex and Hippocampus during Sleep. Nature Neuroscience, 10, 100-107.
https://doi.org/10.1038/nn1825
[153] Boly, M., Balteau, E., Schnakers, C., Degueldre, C., Moonen, G., Luxen, A., Phillips, C., Peigneux, P., Maquet, P. and Laureys, S. (2007) Baseline Brain Activity Fluctuations Predict Somatosensory Perception in Humans. Proceedings of the National Academy of Sciences of the United States of America, 104, 12187-12192.
https://doi.org/10.1073/pnas.0611404104
[154] Hesselmann, G., Kell, C.A., Eger, E. and Kleinschmidt, A. (2008) Spontaneous Local Variations in Ongoing Neural Activity Bias Perceptual Decisions. Proceedings of the National Academy of Sciences of the United States of America, 105, 10984-10989.
https://doi.org/10.1073/pnas.0712043105
[155] Lee, T.W., Wu, Y.T., Yu, W.Y., Wu, H.C. and Chen, T.J. (2012) A Smarter Brain Is Associated with Stronger Neural Interaction in Healthy Young Females: A Resting EEG Coherence Study. Intelligence, 40, 38-48.
https://doi.org/10.1016/j.intell.2011.11.001
[156] Lee, T.W., Yu, Y.W., Wu, H.C. and Chen, T.J. (2011) Do Resting Brain Dynamics Predict Oddball Evoked-Potential? BMC Neuroscience, 12, Article No. 121.
https://doi.org/10.1186/1471-2202-12-121
[157] Lewis, C.M., Baldassarre, A., Committeri, G., Romani, G.L. and Corbetta, M. (2009) Learning Sculpts the Spontaneous Activity of the Resting Human Brain. Proceedings of the National Academy of Sciences of the United States of America, 106, 17558-17563.
https://doi.org/10.1073/pnas.0902455106
[158] Northoff, G. (2012) Coding. In: Unlocking the Brain, Vol. I, Oxford University Press, Oxford.
https://doi.org/10.1093/acprof:oso/9780199826988.001.0001
[159] Worsley, K.J., Taylor, J.E., Tomaiuolo, F. and Lerch, J. (2004) Unified Univariate and Multivariate Random Field Theory. NeuroImage, 23, S189-S195.
https://doi.org/10.1016/j.neuroimage.2004.07.026
[160] Ward, B.D. (2000) Simultaneous Inference for fMRI Data.
http://afni.nimh.nih.gov/pub/dist/doc/manual/AlphaSim.pdf
[161] McIntosh, A.R., Bookstein, F.L., Haxby, J.V. and Grady, C.L. (1996) Spatial Pattern Analysis of Functional Brain Images Using Partial Least Squares. NeuroImage, 3, 143-157.
https://doi.org/10.1006/nimg.1996.0016
[162] Garrett, D.D., Kovacevic, N., McIntosh, A.R. and Grady, C.L. (2010) Blood Oxygen Level-Dependent Signal Variability Is More than Just Noise. The Journal of Neuroscience, 30, 4914-4921.
https://doi.org/10.1523/JNEUROSCI.5166-09.2010
[163] Garrett, D.D., Kovacevic, N., McIntosh, A.R. and Grady, C.L. (2011) The Importance of Being Variable. The Journal of Neuroscience, 31, 4496-4503.
https://doi.org/10.1523/JNEUROSCI.5641-10.2011
[164] McIntosh, A.R., Kovacevic, N. and Itier, R.J. (2008) Increased Brain Signal Variability Accompanies Lower Behavioral Variability in Development. PLoS Computational Biology, 4, e1000106.
https://doi.org/10.1371/journal.pcbi.1000106
[165] Miśić, B., Mills, T., Taylor, M.J. and McIntosh, A.R. (2010) Brain Noise Is Task Dependent and Region Specific. Journal of Neurophysiology, 104, 2667-2676.
https://doi.org/10.1152/jn.00648.2010
[166] MacDonald, S.W., Nyberg, L. and Bäckman, L. (2006) Intra-Individual Variability in Behavior: Links to Brain Structure, Neurotransmission and Neuronal Activity. Trends in Neurosciences, 29, 474-480.
https://doi.org/10.1016/j.tins.2006.06.011
[167] Williams, B.R., Hultsch, D.F., Strauss, E.H., Hunter, M.A. and Tannock, R. (2005) Inconsistency in Reaction Time across the Life Span. Neuropsychology, 19, 88-96.
https://doi.org/10.1037/0894-4105.19.1.88
[168] Burzynska, A.Z., Wong, C.N., Voss, M.W., Cooke, G.E., McAuley, E. and Kramer, A.F. (2015) White Matter Integrity Supports BOLD Signal Variability and Cognitive Performance in the Aging Human Brain. PLoS ONE, 10, e0120315.
https://doi.org/10.1371/journal.pone.0120315
[169] Vakorin, V.A., Lippé, S. and McIntosh, A.R. (2011) Variability of Brain Signals Processed Locally Transforms into Higher Connectivity with Brain Development. The Journal of Neuroscience, 31, 6405-6413.
https://doi.org/10.1523/JNEUROSCI.3153-10.2011
[170] Bialek, W., Rieke, F., de Ruyter van Steveninck, R.R. and Warland, D. (1991) Reading a Neural Code. Science, 252, 1854-1857.
https://doi.org/10.1126/science.2063199
[171] Rieke, F., Bodnar, D.A. and Bialek, W. (1995) Naturalistic Stimuli Increase the Rate and Efficiency of Information Transmission by Primary Auditory Afferents. Proceedings of the Royal Society B: Biological Sciences, 262, 259-265.
https://doi.org/10.1098/rspb.1995.0204
[172] Dayan, P., Hinton, G.E., Neal, R.M. and Zemel, R.S. (1995) The Helmholtz Machine. Neural Computation, 7, 889-904.
https://doi.org/10.1162/neco.1995.7.5.889
[173] Attwell, D. and Laughlin, S.B. (2001) An Energy Budget for Signaling in the Grey Matter of the Brain. Journal of Cerebral Blood Flow and Metabolism, 21, 1133-1145.
https://doi.org/10.1097/00004647-200110000-00001
[174] Lennie, P. (2003) The Cost of Cortical Computation. Current Biology, 13, 493-497.
https://doi.org/10.1016/S0960-9822(03)00135-0
[175] Gur, M., Beylin, A. and Snodderly, D.M. (1997) Response Variability of Neurons in Primary Visual Cortex (V1) of Alert Monkeys. The Journal of Neuroscience, 17, 2914-2920.
https://doi.org/10.1523/JNEUROSCI.17-08-02914.1997
[176] Kara, P., Reinagel, P. and Reid, R.C. (2000) Low Response Variability in Simultaneously Recorded Retinal, Thalamic, and Cortical Neurons. Neuron, 27, 635-646.
https://doi.org/10.1016/S0896-6273(00)00072-6
[177] DeWeese, M.R., Wehr, M. and Zador, A.M. (2003) Binary Spiking in Auditory Cortex. The Journal of Neuroscience, 23, 7940-7949.
https://doi.org/10.1523/JNEUROSCI.23-21-07940.2003
[178] Olshausen, B.A. and Field, D.J. (1996) Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 381, 607-609.
https://doi.org/10.1038/381607a0
[179] Lewicki, M.S. (2002) Efficient Coding of Natural Sounds. Nature Neuroscience, 5, 356-363.
https://doi.org/10.1038/nn831
[180] Olshausen, B.A. and Field, D.J. (2004) Sparse Coding of Sensory Inputs. Current Opinion in Neurobiology, 14, 481-487.
https://doi.org/10.1016/j.conb.2004.07.007
[181] Lee, T.W. (2021) An Integrative Account of Neural Network Interaction: Neuro-Messenger Theory. World Journal of Neuroscience, 11, 124-136.
https://doi.org/10.4236/wjns.2021.112011
[182] Doyon, B. (1992) On the Existence and the Role of Chaotic Processes in the Nervous System. Acta Biotheoretica, 40, 113-119.
https://doi.org/10.1007/BF00168140
[183] Denes-Raj, V. and Epstein, S. (1994) Conflict between Intuitive and Rational Processing: When People Behave against Their Better Judgment. Journal of Personality and Social Psychology, 66, 819-829.
https://doi.org/10.1037/0022-3514.66.5.819
[184] Tversky, A. and Kahneman, D. (1981) The Framing of Decisions and the Psychology of Choice. Science, 211, 453-458.
https://doi.org/10.1126/science.7455683
[185] Fiorillo, C.D., Newsome, W.T. and Schultz, W. (2008) The Temporal Precision of Reward Prediction in Dopamine Neurons. Nature Neuroscience, 11, 966-973.
https://doi.org/10.1038/nn.2159
[186] Georgopoulos, A.P., Schwartz, A.B. and Kettner, R.E. (1986) Neuronal Population Coding of Movement Direction. Science, 233, 1416-1419.
https://doi.org/10.1126/science.3749885
[187] Dinse, H., Krüger, K. and Best, J. (1990) A Temporal Structure of Cortical Information Processing. Concepts in Neuroscience, 1, 199-238.
[188] Hopfield, J.J. (1982) Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proceedings of the National Academy of Sciences of the United States of America, 79, 2554-2558.
https://doi.org/10.1073/pnas.79.8.2554
[189] Clark, D., Sokoloff, L., Siegel, G., Agranoff, B., Albers, R. and Fisher, S. (1999) Basic Neurochemistry: Molecular, Cellular and Medical Aspects. Lippincott-Raven Publishers, Philadelphia.
[190] Sokoloff, L., Mangold, R., Wechsler, R.L., Kenney, C. and Kety, S.S. (1955) The Effect of Mental Arithmetic on Cerebral Circulation and Metabolism. The Journal of Clinical Investigation, 34, 1101-1108.
https://doi.org/10.1172/JCI103159
[191] Gusnard, D.A. and Raichle, M.E. (2001) Searching for a Baseline: Functional Imaging and the Resting Human Brain. Nature Reviews Neuroscience, 2, 685-694.
https://doi.org/10.1038/35094500
[192] Goldberg, G.R., Prentice, A.M., Davies, H.L. and Murgatroyd, P.R. (1988) Overnight and Basal Metabolic Rates in Men and Women. European Journal of Clinical Nutrition, 42, 137-144.
[193] Van Cauter, E., Polonsky, K.S. and Scheen, A.J. (1997) Roles of Circadian Rhythmicity and Sleep in Human Glucose Regulation. Endocrine Reviews, 18, 716-738.
https://doi.org/10.1210/edrv.18.5.0317
[194] Kirschfeld, K. (1992) Oscillations in the Insect Brain: Do They Correspond to the Cortical Gamma-Waves of Vertebrates? Proceedings of the National Academy of Sciences of the United States of America, 89, 4764-4768.
https://doi.org/10.1073/pnas.89.10.4764
[195] Van Someren, E.J., Van Der Werf, Y.D., Roelfsema, P.R., Mansvelder, H.D. and da Silva, F.H. (2011) Chapter 1—Slow Brain Oscillations of Sleep, Resting State, and Vigilance. In: Progress in Brain Research, Vol. 193, Elsevier, Amsterdam, 3-15.
https://doi.org/10.1016/B978-0-444-53839-0.00001-6
[196] Hahn, T., Dresler, T., Ehlis, A.C., Pyka, M., Dieler, A.C., Saathoff, C., Jakob, P.M., Lesch, K.P. and Fallgatter, A.J. (2012) Randomness of Resting-State Brain Oscillations Encodes Gray’s Personality Trait. NeuroImage, 59, 1842-1845.
https://doi.org/10.1016/j.neuroimage.2011.08.042
[197] Faugeras, O., Clément, F., Deriche, R., Keriven, R., Papadopoulo, T., Roberts, J., Viéville, T., Devernay, F., Gomes, J., Hermosillo, G., et al. (1999) The Inverse EEG and MEG Problems: The Adjoint State Approach I: The Continuous Case. [Research Report] RR-3673, INRIA.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.