


• The following is a list of brain structures and dynamic features that are especially important to the process of human consciousness (or the side-effect that human consciousness is, as the "epiphenomenalists" would have it). I am not qualified to give a thorough explanation of each brain structure or process and its effect on consciousness. But I will provide links for further study.

The Pre-Frontal Cortex: belle of the cerebral ball? (Not to be confused with the cerebellum, which coordinates motor activity.) The pre-frontal cortex is located near the forehead. It is the area of the brain that expanded the most during the last phase of human evolution; it sets us apart from chimps and the early hominids. It is a "well connected" area; neuronal connections emanate from it to all major regions and structures within the brain. Although it cannot be called the seat of consciousness (as there is no one central command area in the brain directing all phases of consciousness), it does participate in most of the signal circuits in the brain that affect consciousness, including the especially important "working memory" mechanism (see Section 27). The brain is broken up into numbered regions, and the core of the pre-frontal cortex is known as "area 46". Interestingly, a SPECT blood-flow scan of a sleepwalker found that area 46 and the angular gyrus (which serves as the "concept translator" between the sensory perception areas and the speech areas) were deactivated during the incident.

The Thalamus (just above the brainstem) and the Reticular System (within the brainstem): These are the great regulators of our state of arousal, including sleep versus wakefulness (see "Degrees of Consciousness" above). The thalamus, the great switchboard of the brain, is also "well connected", having direct neuronal circuits to all regions of the brain, as well as ties to the neurotransmitter system emanating from the brainstem region. The latter system (the amine fountains) is a series of spaghetti-like structures that permeate the brain, spraying various kinds of neurotransmitters (endorphins, serotonin, dopamine, acetylcholine, etc.) that variously bias our levels of arousal, emotional states, and our cognitive decision-making processes.

The Amygdala and the Limbic System: These are deep-brain structures that, working with the thalamus, mediate our emotional responses, including both the "crude emotions" (like fear and anger), and the more subtle, developed emotions (anxiety, contentedness, pride, apathy, etc.). (See the section on "Importance of Emotions".) The amygdala appears to modulate attraction behavior, while the pre-frontal cortex appears to modulate negative behavior. To oversimplify, the amygdala keeps you from loving too much, while the pre-frontal cortex keeps you from hating too much.

The Entorhinal Cortex: This is a deep-brain structure located close to the limbic system and the hippocampus. It helps to process new episodic memories, preparing input from the cortex for use by the hippocampus. The hippocampus, in turn, mediates the formation of long-term memory structures from short-term memory content (see section on "Importance of Memory"). The entorhinal cortex is likely influenced by the limbic system in determining whether and how to store new episodes within the short-term memory process, and whether to reinforce them or let them atrophy. As such, the degree of emotional arousal associated with a life event helps to determine if it will be remembered, and if so, for how long (and how vividly). Not surprisingly, the entorhinal structure is severely damaged during the course of Alzheimer's Disease.

The Claustrum: In 2014, a research team at George Washington University managed to switch off and then switch on the consciousness of an epilepsy patient by using stimulatory electrode implants aimed at a structure in the brain called the claustrum. The claustrum is a thin, elongated deep-brain structure that connects with many cortical areas and the olfactory bulb; its strongest connections are with the entorhinal cortex. The claustrum can be considered a "broken off" segment of the amygdala, on the periphery of the limbic system. It is believed to play an important role in sensorimotor control and in the cross-modal integration of perception (i.e., the "binding problem"). It may also play a key role in sexual arousal and emotionally motivated responses.

A few years ago, noted consciousness researchers and theorists Francis Crick and Christof Koch posited that the claustrum was the place where the brain more or less weaves all of the various sensory input responses and stored information (such as memories and learned biases, fears and attractions) into a unified brain state representing the overall experience of being conscious. Their most significant empirical support prior to the recent GWU study involved a certain type of mind-altering plant from Mexico called Salvia divinorum. The psychoactive chemical in the leaves of this plant was found to stick to a certain type of neuron receptor that is found in high concentrations in the claustrum. This distinguished it from other mind-bending hallucinogens like LSD, peyote and psilocybin, and even the basic feel-good stuff like coke and heroin.

According to unscientific reports submitted by “trippers” who used salvia, they experience “altered surroundings, other beings and ego dissolution”, i.e. a severe degradation of normal self-awareness. In other words, when you’re on this Mexican stuff and your claustrum is on the ropes, you’re pretty far gone. This could imply that the claustrum is the structure where our brains assemble the “train of consciousness”, given that salvia can switch this "train" onto the wrong track or off the track. And also that, for at least one epilepsy patient, electroshocks to the claustrum will stop the "consciousness train" like a red signal, then let it start up again once the signal turns green (i.e. the electrode is turned off).

Has the “hard problem” of consciousness been solved by this? Can we now say that claustrum = consciousness, mystery solved? The claustrum may be the place where our trains of self-conscious thought go round and round. But as to why an active claustrum “seems” like anything at all to us, why it “feels” so different from when we drift into dreamless sleep or undergo anesthesia, remains a mystery (i.e., the "explanatory gap" remains). At a more immediate level, there are various questions regarding the relevance of the GWU study. First off, this is just the reaction of one person, and not a typical person at that; the experimental subject was suffering from epilepsy, and had had part of her hippocampus (an important brain structure associated with memory formation and retrieval) removed. If her hippocampus had been fully functioning, would her mind have crashed so quickly? Still, if the GWU study results are affirmed in other subjects having more typical brain structures (perhaps using focused trans-cranial magnetic induction), then a switch-like mechanism affecting consciousness has been discovered.

However, this switch may be turning off more than just consciousness; it appears to be knocking out all higher-level body control (while allowing continuation of breathing, heartbeats, etc.). The claustrum effect seen in this study isn't very different from what an injury to the reticular formation in the brain stem will cause (other than being easily reversible); i.e. a basic coma or vegetative state. So it’s hard to conclude that the claustrum is any closer to the heart of consciousness than the posterior brain stem is. The more interesting experiment — perhaps one for the future — would be to [attempt to] induce a zombie-like state whereby the subject could still respond to visual / auditory stimulation and could still voluntarily move their muscles and limbs, but was not otherwise experiencing self-aware consciousness. Basically, this would be to induce a state akin to sleepwalking or another state of absence automatism. If such a finding were confirmed, the claustrum and its dynamics would constitute an important "neural correlate of consciousness" (see Section 29).

Do non-human animals have claustrums? Yes, even insects. The human claustrum has the most complex structure, however, implying that it does more, or connects to more places, than an insect's or a rodent's does. A study on rats shows that their claustrums coordinate but do not integrate sensory and muscle motor information; by comparison, cat claustrums have more reciprocal connections with the cortex, which might allow more integration (presumably bringing them and their claustrums closer to “phi”, Tononi’s information cross-integration concept of consciousness, see Section 35).

Neural Plasticity: This concept describes the process by which each person's individual environment and life experiences shape and customize the layout of the neurons and their interconnections. The brain of a newborn baby is still very unformed. In the first several years of life, many loose neurons start hooking up. Just how they link, and the strength of each link, is influenced by the infant's early life experiences. For example, if one of a newborn child's eyes were covered for several months, the child might go through life blind in that eye. The eye structure itself can be perfectly fine; but if the neuron hook-ups weren't completed soon after birth, they cannot later be made up for. Neural plasticity works through the strengthening of connections that are used a lot, and the weakening and possible abandonment of those that aren't. As such, genetics cannot completely determine a person's brain structure, and thus cannot completely specify their mental workings, personality, and consciousness. Even the slightest differences in experience between identical twins cause unique brain architecture, and thus different personalities and behavior.
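The use-it-or-lose-it dynamic described above can be sketched as a toy program. This is only an illustration of the principle (strengthen connections that fire, decay and prune the ones that don't), not a biological model; the connection names, rates, and pruning threshold are all invented for the example.

```python
# Toy sketch of plasticity: weights strengthen with use, decay without it,
# and connections that fall below a floor are abandoned entirely.
# All names and constants here are illustrative assumptions.

def update_weights(weights, used, rate=0.2, decay=0.05, floor=0.01):
    """Strengthen connections in `used`, decay the rest, prune weak ones."""
    new = {}
    for link, w in weights.items():
        w = w + rate * (1 - w) if link in used else w - decay
        if w > floor:          # links below the floor are abandoned
            new[link] = w
    return new

# Two young connections of equal strength; only one gets exercised,
# echoing the covered-eye example above.
weights = {("eye", "V1"): 0.5, ("ear", "A1"): 0.5}
for _ in range(30):
    weights = update_weights(weights, used={("eye", "V1")})

# The exercised link saturates toward full strength; the unused one
# has been pruned from the dictionary.
print(weights)
```

The key design point of the sketch is that nothing "decides" the final wiring in advance; the structure that survives is simply whatever was reinforced by experience, which is why genetics alone cannot fully specify the outcome.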

• A Further Question: Is the brain composed of many specialty sub-devices (one for language, one for emotions, one for decision-making, one for abstract logic such as math, etc.), or is it a very generalized machine that can solve anything? The consensus is "some mixture of both"; psychologist Bernard Baars likens consciousness to a 'lighted stage', a 'shared workspace' area which the specialty areas observe, as if sitting in a darkened theater. (However, the analogy must be extended, as each member of the audience gets up and participates on the lighted stage at various times.)
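Baars's "lighted stage" metaphor can be made concrete with a small sketch: specialist modules compete for the stage, and whatever wins is broadcast back to every member of the audience. This is a cartoon of the metaphor, not a cognitive model; the class names, the bidding rule, and the keyword matching are all invented for illustration.

```python
# Toy global-workspace sketch: specialists bid for the one lighted stage;
# the winner's content is broadcast to all specialists (the "audience").

class Specialist:
    def __init__(self, name, keyword):
        self.name, self.keyword, self.seen = name, keyword, []

    def bid(self, stimulus):
        # Crude relevance measure: bid high if the stimulus matches
        # this specialist's domain, low otherwise.
        return 1.0 if self.keyword in stimulus else 0.1

    def observe(self, broadcast):
        # Every specialist, winner or not, sees what is on the stage.
        self.seen.append(broadcast)

class Workspace:
    def __init__(self, specialists):
        self.specialists = specialists
        self.stage = None                      # the single lighted stage

    def step(self, stimulus):
        bids = {s.name: s.bid(stimulus) for s in self.specialists}
        winner = max(bids, key=bids.get)       # strongest bid takes the stage
        self.stage = (winner, stimulus)
        for s in self.specialists:             # global broadcast
            s.observe(self.stage)
        return self.stage

ws = Workspace([Specialist("language", "word"), Specialist("emotion", "fear")])
ws.step("a fear response")    # the emotion module takes the stage this round
```

Note how the sketch captures the "mixture of both" answer: the modules are specialized, but the stage is a fully general shared resource that any of them can win and all of them can read.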

Arguments continue over the relative degrees of specialization and generalization in the brain and the resulting dynamics of the mind. However, Gerald Edelman has gained a high degree of recognition for his "neural Darwinism" paradigms involving "mapping" and "degeneracy". The specialized areas in the brain, such as the various sight processing regions (for edge-formation, color, shape identification, movement perception, etc.) are sometimes known as "maps" or "assemblies". These assemblies/maps are linked together into processing loops, through which conscious perceptions are formed. However, the linking and looping process between maps is "degenerate"; the same conscious perception may be supported by more than one way of linking a group of brain maps together. Perhaps one or two maps that were part of the original coalition behind a particular perception (say a light blue triangle) can be substituted by other maps, and yet yield the same perception. The neuron-linking arrangements within maps and between maps which are highly used and useful are "rewarded" by being strengthened. Those that aren't used much are weakened, and eventually abandoned.
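Edelman's "degeneracy" claim (the same percept can be supported by more than one coalition of maps) can be shown in a few lines. The map names and coalitions below are invented placeholders, not anatomy; the point is only the structural one that two different linkings yield one and the same perception.

```python
# Toy sketch of degeneracy: a percept arises when any one of several
# different coalitions of "maps" is fully active.

def perceive(active_maps, coalitions):
    """Return the percept of the first coalition wholly contained in the
    currently active maps, or None if no coalition is complete."""
    for coalition, percept in coalitions:
        if coalition <= active_maps:           # subset test: coalition active?
            return percept
    return None

# Two different map coalitions both support "light blue triangle";
# "color_B" can substitute for "color_A" in the original coalition.
coalitions = [
    ({"edge", "color_A", "shape"}, "light blue triangle"),
    ({"edge", "color_B", "shape"}, "light blue triangle"),
]

# Either linking of maps yields the same conscious perception:
print(perceive({"edge", "color_A", "shape", "motion"}, coalitions))
print(perceive({"edge", "color_B", "shape"}, coalitions))
```

Combined with the reward rule in the text (strengthen arrangements that get used, abandon the rest), this is what gives the "neural Darwinism" picture its selection-like character: many candidate coalitions exist, and use decides which survive.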

• If Edelman is right, the brain has a general "meta-structural process" akin to natural selection, which continually orchestrates the roles and performances of the many specialists. This might be similar, in some ways, to a symphony orchestra, with the conductor giving the trumpet section more prominence when the trumpeters are having a good night, and on other nights letting the cellos stand out if they are doing well -- even though the overall score is always the same.


• Studies about the effect of brain injuries on human consciousness and behavior are quite important in understanding the nature of consciousness. Almost all general textbooks about the brain, mind and consciousness introduce this topic with the case of Phineas Gage, a rock-blasting foreman who in 1848 had a metal rod shot through his frontal cortex following an accidental explosion along a railroad line under construction. The rod pierced Gage's skull and destroyed a significant swath of his frontal lobe, but he recovered and lived an ambulatory life for another 12 years. He regained most of his abilities regarding perception, motor control, and cognition (albeit with some degradation), but allegedly experienced significant behavioral and personality changes tending towards social irresponsibility and impulsive, “animal-like” behavior (e.g. Damasio 1994). Today, new research on Gage is challenging these allegations.

• The Gage story has in recent times been used canonically to equate our personalities with our brain structures, to show that our personalities do not have an “ethereal essence”; they are not powered by any sort of “transcendent soul”, and are thus entirely dependent upon physical factors. More recent reviews (citing evidence that Gage's alleged personality changes have been exaggerated) emphasize how the brain and mind have a remarkable flexibility and survivability; and even if we don't have eternal souls wandering like ghosts within our crania, the brain-mind system still has a lot of “momentum factors” that prevent extreme changes in our thinking and behavior patterns despite on-going physiological changes.

• However, a wide range of other types of brain injuries and abnormal conditions have been shown to have clearly identifiable impacts upon perception and cognition as well as behavior. V.S. Ramachandran, Oliver Sacks and other neuroscientists and analysts have extensively discussed the implications of brain injuries upon how the mind works and what consciousness is. Some examples of these conditions are:

• Lesions of the prefrontal region and corpus callosum which produce the perception of an “alien hand”, where patients say that their hand has a will of its own and cannot be consciously controlled. Damage to the corpus callosum may also produce the related “anarchic hand syndrome”, where the two hands work against each other, e.g. one trying to pick up a pencil while the other keeps it down.

• Neglect syndromes such as anosognosia and Anton's syndrome, where a patient with a paralyzed body member or loss of a sense (hearing, vision, etc.) remains cognitively unaware of their handicap and continually formulates excuses as to why they can't lift an arm or see something in front of them. Although this behavior might seem psychologically motivated, research indicates these beliefs to be sincere; these people do not perceive their injuries or handicaps as most other people would, because of brain structure changes due to injury or disease.

Prosopagnosia, a brain injury condition in which a person can see a face quite well, can make out every detail, and knows that they are looking at a human face; but they cannot recognize whose face they are looking at – not even their own. This condition is typically caused by damage in the early visual processing areas, along the junction between the occipital and temporal lobes of the brain.

Hemifield neglect or “half-world syndrome”, where an injury on one side of the brain causes the mind of the sufferer to become unaware of one side of the body and of one half of the world around them. E.g. a patient may only wash the right side of his face and see the right side of a clock. These people have not lost an eye or ear or all feeling on one side of their bodies; but due to brain injury, their minds can now focus only on the signals from the other side, and thus extend this situation to their overall perception of the world. In some cases, a patient will come to know what would be within her or his full range of vision, e.g. seeing all the numbers on a clock by turning their head; but they will then describe or draw that clock as having the hour numbers (1 thru 12) on the right side of it only.

Blindsight, mentioned in Section 6, is the condition in which a patient who is blind because of a physical injury in the brain, and not because of any defect within the eyes, is not consciously aware of any sight perception – they describe themselves as completely blind. However, their behaviors show that the brain-mind system is still receiving and using eye sensory data in making sub-conscious behavioral decisions, e.g. avoiding obstructions while walking.

Phantom limb pain, where a patient who loses a body limb such as a hand or leg still feels pain symptoms which are clearly perceived to have originated in the lost member. In some such cases, neuroscientists are able to evoke a phantom limb sensation by touching a certain area of the body whose sensory signals to the brain are also processed in the postcentral gyrus, the area that deals with somatosensory nerve inputs from the body. The area of the body being stimulated just happens to have its sensory signals processed next to where the now-missing limb's signals were processed. E.g. in some cases, touching an area of the face can induce the sensation of touching or pressure in the missing hand or leg.

Capgras syndrome, where a person becomes convinced that their spouse or close relatives and friends are impostors; e.g. the person who looks just like a brother is not really their brother. The sufferer’s eyes and brain have no difficulty seeing the brother or other relative, and their mind recognizes that the face is just like the face of the friend or relative (unlike prosopagnosia, where the patient cannot identify the person behind any face). However, because of injury, the connections between the facial recognition area of the brain and the amygdala (which deals with emotional response) have been degraded. Thus, the perceived relative or friend does not arouse the usual affective response, and thus the mind of the sufferer manufactures the notion that the face in question must belong to a look-alike impostor. The explanation of the syndrome is thus anatomical, not psychological (aside from the “impostor” rationalization).

Synesthesia, where stimulating one sense evokes a perception in another, as when hearing a bird singing produces a tingling in the foot, or seeing the number 5 makes one see purple (for some “synesthetes” the number 5 always looks purple; for others the association is less consistent). These situations again have physiological causes and are not psychological, e.g. a matter of childhood associations. The secondary sensations are genuinely perceived. Increased cross-talk between the brain regions that are specialized for different perception functions may account for many of the different types of synesthesia; there is no one type of injury or change in a particular brain area that is canonically associated with synesthesia. The condition is fairly common (possibly between 0.05% and 0.3% of the population), and may be most commonly due to random variations within the structure and activity of the normal working brain.

Split brain conditions, when the corpus callosum and other connective tissue between the left and right brain halves are severed. These conditions are sometimes caused by injury, but were also surgically imposed during the 1960s as a means to control serious epilepsy seizures. Although such patients do not exhibit dramatic multi-personality behavioral swings, tests show that mind and body awarenesses are more bifurcated in split brain patients; e.g. a word or image presented only in the left visual field (which projects to the right hemisphere) can cause a reaction with the left hand, while no reaction can be had via the right hand or the speech centers (recall the left-right crossover between the visual fields, the brain hemispheres, and the hands).

For example, if a split-brain person is told to pick out and point to a specific letter or number with their left hand so as to match a number that will be flashed only in their left visual field, they can do it; but if told to pick the number with the right hand, they will not be able to respond, even though the right hemisphere has clearly registered the number. In some experiments, the left hand will be prompted to do something by a command flashed only in the left visual field. The person will then be asked why they just performed that action. The action was intermediated on the right side of the brain, whereas the left side does most of the “explanatory narrative”. Thus, the explanation given was usually an obvious fabrication, something being made up by a brain-side that did not know the real answer.
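The crossed wiring behind these experiments can be captured in a small lookup sketch. This is an illustrative abstraction, not neuroanatomy: each visual half-field projects to the opposite hemisphere, each hemisphere controls the opposite hand, and speech is assumed (as in most people) to be left-lateralized. All the names are invented for the example.

```python
# Minimal sketch of the crossed pathways in a split-brain patient.
FIELD_TO_HEMISPHERE = {"left_field": "right_hemisphere",
                       "right_field": "left_hemisphere"}
HEMISPHERE_TO_HAND = {"right_hemisphere": "left_hand",
                      "left_hemisphere": "right_hand"}

def split_brain_response(stimulus_field):
    """With the corpus callosum cut, only the hemisphere that receives
    the stimulus can act on it."""
    hemi = FIELD_TO_HEMISPHERE[stimulus_field]
    return {
        "responding_hand": HEMISPHERE_TO_HAND[hemi],
        # Speech sits in the left hemisphere, so only right-field
        # stimuli can be verbally reported.
        "can_name_verbally": hemi == "left_hemisphere",
    }

# A stimulus flashed in the left field can be pointed to with the left
# hand, but the speech system cannot report it, so the left hemisphere
# confabulates an explanation after the fact:
print(split_brain_response("left_field"))
```

The sketch makes the confabulation result less mysterious: the explaining module and the acting module literally receive different inputs once the connecting bridge is cut.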

Some scientists and philosophers have argued that split-brain patients exhibit two independent wills, and thus represent multiple personalities and consciousness streams within one body. Obviously this idea could be troubling; was a new person created in the splitting operation? If the splitting operation could somehow be undone, would it be akin to murdering the new personality? Other analysts note however that most split brain patients appear to live normal lives and do not exhibit multiple personalities or report noticeable discontinuities in their "trains of thought" and awareness. Some patients at first exhibit motor discoordination and "alien hand" behavior, where one hand works at odds with the other, even undoing or fighting the opposite hand's activities. But over time, such behavior disappears.

According to one analyst, the two cortical hemispheres observe each other and eventually learn to "think alike", such that each side can switch off with the other sequentially without radically affecting behavior or mental awareness. Another important fact is that the sub-cortical portions of the brain, including the brain stem and limbic system, are not severed, and therefore a unity of emotional and affective response remains. (The James-Lange/Damasio notion that body responses shape the actual conscious feeling of emotion, along with hormonal mechanisms, would also serve to unify the emotional state in split brain patients; such patients generally do not report experiencing conflicting states of anger and joy, for example). The thalamus, which acts as something of a "switchboard" for the brain, also remains intact. As such, split brain patients generally have a unified self-governing system, do not exhibit multiple wills, and experience only one sense of identity and stream of consciousness, despite cognitive and behavioral discontinuities brought out under unique laboratory conditions.

A prominent split-brain researcher, Michael Gazzaniga, feels that the left-brain is the ultimate mediator of our "theory of self and the world" (here's some research from Gazzaniga and his colleagues about this). If so, then even though a split-brain patient may exhibit greater cognitive and behavioral discontinuity than a normal person, so long as his or her left brain is intact, he or she ultimately retains one predominant sense of self and experience of conscious awareness, continuous with his or her pre-split life. However, some researchers argue that despite left-brain predominance, the disconnected right cortex might be able to develop its own primitive level of self-recognition, perhaps akin to the mind of a higher primate or a human infant. The still-unified limbic and hormonal systems, in mediating a still-unified emotional state, may thus be predominantly influenced by the split-brain patient's left hemisphere, but yet hear incongruent "whispers" from the patient's right side. Albeit, philosopher Thomas Nagel points out that this would not be all that different from what the normal unified-brain person experiences!

• Injury and disability of the cerebellum, e.g. through stroke. The cerebellum is an area of the brain that has more neurons than the cortex, but which can be injured or removed without significant effect upon conscious perception (although motor control and coordination are seriously impacted). The cerebellum, given its specific functions in coordinating motor activity, is felt to process information according to specialized, segregated routines. By comparison, the cortex operates in a manner involving much more coordination and recursive cross-talk amidst its components. I.e., the outputs of a cortex system are sequentially re-inputted, mixed in with updated external inputs. Given that a significant injury or loss within the cortex can change, severely degrade or end consciousness, while similar changes to the cerebellum do not have such an impact, the high connectivity, fluidity, and recursive nature of cortex functioning can be seen as critical to the formation of consciousness.

Epilepsy and other seizures, where consciousness can be lost when excessive levels of ongoing neural discharge, or waves of synchronized discharges reaching high amplitudes and frequencies, disrupt and overwhelm whatever supports conscious perception and awareness.

• Amnesia conditions, where a person may lose their long-term episodic/narrative awareness of their past. In some cases, even middle-term awareness is impacted, such that the sufferer may need to be constantly reminded of their name and circumstances (this condition can be caused by injuries to the hippocampus). In other cases, a person may stop remembering anything after a particular date (other than short and middle-term memory support for current activities; one example is Korsakoff's syndrome, caused by damage in the thalamus); the cut-off date would be on or close to the time of injury causing the amnesia. The amnesiac usually maintains their previous behavioral tendencies and emotional temperaments, but they have little or no sense of a “continual self”, one grounded in past experience. Their autobiography is severely truncated or eliminated altogether.

Locked-in syndrome, a tragic state where the ventral portion of the brain stem is damaged, and most of the connections from the brain to the skeletal muscles are destroyed. As such, the sufferer cannot voluntarily move their body, save for the ability to vertically move their eyes. They are awake, can feel and hear and see (if their eyelids are opened by another), and are aware of their thoughts. Interestingly, they report (through eye-movement signaling, or other physiological testing) not experiencing any great emotional angst or terror from this condition. Antonio Damasio believes that this indicates that mental emotional states are largely triggered or at least highly amplified by the perception of body responses to emotionally inductive situations, whether threat or treat (actually, this concept dates to 1885, and is called the "James-Lange" theory of emotions; Damasio fleshes it out in the context of modern knowledge of brain functioning, citing the brain's internal mapping of the sensory states of the various components of the body). Given that locked-in syndrome sufferers cannot feel their stomach muscles tighten or their eyebrows cringe or their mouths smile, their emotional responses become appropriately muted.

Alexithymia, a condition where a person is not aware of feelings, i.e. does not have the normal emotional experiences of everyday life, such as joy, fear, disgust, anger, sadness, surprise, etc. At first, psychological causes were largely suspected, but research is increasingly pointing to physical factors in the functioning of the brain. This condition will be discussed further in the next section on the Importance of Emotions.

• The overall implication of these conditions and their effect on consciousness is that our sense of selves and our conscious awareness of things are heavily grounded in the chemistry, physics and structure of the brain. We certainly are functions of our environments, of what happens to us. And these environmental inputs change the chemistry, physics and structure of the brain, both in the short and long term. But the overall designs and mechanisms which were selected for us thru evolution and genetics (and sometimes by injury or disease during life) certainly set the stage for whatever our conscious experiences will be.


• Consciousness has a lot to do with emotions; or more precisely, with the feeling of emotions. What are emotions? At their crudest, most fundamental level, they appear to be a short-cut between certain perceived situations and certain behavioral responses. Evolutionary processes gave even the earliest and simplest mammals a way to respond quickly to overwhelming danger (fear), or to a serious but evenly matched challenge (anger), or to an opportunity to gain something helpful (joy). They get the heart pumping and the lungs inhaling more deeply and the stomach digesting faster. The situations which trigger emotions can't wait for the usual perception and conscious decision-making delays in the mind, even when those decision processes are largely pre-programmed by genetics, as in simple animals. Emotions are highly coordinated with, and mediated by, chemical processes which act quickly throughout wide regions of the brain. These processes (the amine fountains) in effect spray neurochemicals (dopamine, serotonin, endorphins, etc.) from a net of hose-like filaments emanating from the brainstem and nearby deep-brain structures, well beneath the cortex.

• The evolution / emergence of consciousness was and is itself shaped and supported by the brain and mind's emotional processes. Does a person need to be conscious to have emotions, or are emotions more basic than consciousness? I suspect that emotions came first, emerging at an earlier stage of evolution than primates and hominoids. (Antonio Damasio emphasizes the difference between HAVING emotions and FEELING emotions; the latter state involves consciousness, the former state may not). Many simple animals seem to experience crude forms of emotion, especially fear. There is a fairly simple evolutionary explanation for fear, in terms of self preservation. Perhaps a crude form of love (certainly not everlasting love!) helps the mating / bonding and child-rearing process to continue after sexuality has died out. It's harder to say what function joy has, but some animals appear to experience it, e.g. playful dolphins. Perhaps it is a signal to take action so as to preserve a situation or resource having survival value.

• As animals became more complex (and arguably more "conscious", to some degree), fewer and fewer of their behaviors were guided by hard-wired genetic features of the brain. But the basic emotions remained as a pre-programmed legacy, a "fast response" system. As the ability to appreciate "qualia" developed in the higher animals (culminating in human self-consciousness), the ability to "feel" the orchestrated effects that emotions cause within the body probably had a synergetic effect on the brain's ability to appreciate qualia. Emotions also became more varied and complex, more tied-in with the abstract mental processes that humans engage in, such as patriotism, heroism, and appreciation of virtue.

• As such, I believe that an expanding repertoire of emotion was the basic evolutionary framework that allowed the construction of higher and higher levels of consciousness. Together with the development of abstract thought capacity and language, emotions helped kindle the "fire" of self- and social-centered consciousness that was ignited within our species. And as the mind and its emergent consciousness became more complex and abstract, emotions co-evolved into more complex and continuous forms. As the pharmacological neuroscientist Dr. Susan Greenfield states, emotions are constantly felt within the healthy, conscious human mind. The LACK of any felt emotion would be a sign of trouble for us, e.g. serious depression.

• Human consciousness came into full flower when humans became able to apply their capacity for abstraction to themselves, forming a concept of themselves (this is akin to "higher order thought" in the representationalist approach). Once they had a concept of "me", they could direct emotions toward this concept (i.e., toward themselves). Most experiences ultimately break down to emotion. Let's go back to Mary (from Section 9) and the first red rose that she sees. It's ultimately a question of emotions / feelings being triggered by that experience. Our self-emotions can be good (when we're happy) or not-good (when we're sad or depressed). Or they can be just about neutral -- when we feel "dead" or "zombie-like".

• Heightened consciousness obviously occurs during very positive and very threatening experiences. Let's think about the positive experiences, e.g. eating chocolate ice cream outdoors on a lovely day in late May. The heightened state of consciousness is arguably trying to protect, extend and maximize such an experience (decide to get another bowlful?). Perhaps it serves to record the experience into memory in maximum detail, as you would save a beautiful picture from your digital camera on your computer at the maximum pixel density and file size. Yes, I realize that endorphins and serotonin mediate a large portion of the emotional process (and the heightened sensitivity to feel those emotions, including the responsive "body states" that Damasio and the James-Lange theory cite as a key part of the human process of experiencing emotions). But that's not to deny that the mind/brain process building up in response to a positive set of stimuli contributes somehow to realizing the overall "feeling of being" that underlies every "quale" of conscious experience.

• Sidenote, in regard to Damasio's belief and the James-Lange theory that mental perceptions of body responses to emotional circumstances (e.g. tightening muscles in response to anger, or flushed skin in the presence of joy) are critical to the overall mental perception of an emotional "feeling" . . . research about people who do not consciously experience emotional responses (a condition called alexithymia) hints that emotional feelings may actually be more of an internal brain interaction. Based on fMRI brain scan studies, alexithymia is thought to involve a blockage of the neuronal signals that correspond with emotional feeling. These signals fail to arrive at the cingulate cortex, where neuronal signals corresponding with the many other various components of self-awareness congregate. And yet, people with alexithymia still report awareness of the usual body responses to emotional situations. If so, then Damasio's notion that emotional experience requires the feeling of a body response, and is not primarily a function of direct recognitional and cognitive processes within the brain (both conscious and sub-conscious), could be partly wrong and in need of clarification. Body reactions may still be required to have the familiar "big feeling" of emotion that most of us experience, but mostly because they provide a secondary, echo-like effect, one that is primed or amplified by the initial sensory signalling to the cingulate. Also, a "body feel" mechanism might help the brain to quickly achieve a common emotional state between the left and right hemispheres, despite the famed ability of each brain hemisphere to "think independently", as shown in split-brain patient research.

• Heightened cogitation in the service of problem-solving may itself cause peak consciousness and emotional response, but only for certain people. Scientists and other thinkers occasionally have 'Eureka! moments' which are quite pleasurable, if not always significant. However, most peak experiences for most people do not involve highly abstract thinking. Still, for those who do experience emotional feelings in response to abstract mental activity, which usually does not involve significant body responses, the notion that emotional experience requires "direct internal connection" (contra Damasio) makes more sense.

• There actually is no pure thinking versus pure emotion within the brain-mind complex, as Descartes and the ancient Greek philosophers had imagined. Both occur simultaneously. The emotions we feel are guided by logical processes, and our attempts at pure logic can never wring out all of our inner emotions, try as we may. Our seemingly logical decision-making processes depend upon emotional inputs; various studies indicate that when emotional processes in the brain are impaired, decision-making is also impaired. For example, a recent study regarding people with damaged amygdalas (the area of the limbic system in the core brain which is key to emotion formation and manifestation, see Section 24 above) shows them making irrational risk-taking decisions. Also, damage to the orbitofrontal cortex, which works with the amygdala to regulate emotions, has been found to impair the ability to make plans and make choices. Just as Einstein taught the world to see time and space as a unified concept, i.e. "spacetime", psychology is now teaching us to see emotions and logic as aspects of a unified fabric. And as with relativistic physics, our sense and appreciation of TIME is an important dimension of the mental unity between thinking and feeling inherent to consciousness (see Section 34 below).


Memory is crucial to the survival of all creatures whose behavior is guided mainly by "thinking", versus simple, genetically coded "instincts". "Sensory memory" acts as a buffer that accumulates sense inputs for a fraction of a second, then passes them to "working memory". Working memory is a form of memory with a long-enough timeline to allow us to finish a sentence, complete a thought, accomplish a project, find our way to a desired location, etc. It allows continuity in our actions. We only notice working memory when it is disrupted. E.g., we are in the middle of a sentence, or out searching for something, or going somewhere, and get distracted by a sudden interruption. If the interruption is notable and important enough, we "lose our train of thought". Psychologist Bernard Baars has postulated that the working memory process has a key role in the formation of consciousness, e.g. the pulling together of various sensory perceptions, sub-conscious cognitive concepts, memories and emotional feelings.

• While sensory memory involves time spans of a fraction of a second and working memory involves seconds and minutes, shorter-term/non-permanent memory is the accumulation of significant events that occurred over the past few days, weeks and months (and the corresponding exclusion of mundane events that need not be remembered). Brain studies indicate that shorter-term/non-permanent memories do not involve any large-scale re-arrangements of neuronal connection structures. They may be explained by the self-reinforcing nature of the re-entrant looping underlying thought and experience; this looping process develops a momentum of sorts, and does not stop immediately after its triggering conditions cease; an "echo" continues. Such contingent memories may be looked at as "candidates" for long-term archiving, i.e. the establishment of more permanent neuron connection structures. If a particular memory is reinforced often during its non-permanent "consolidation period", i.e. if the memory is mentally referenced frequently, it has a good chance of being "inducted" into the long-term memory collection. If not, then it will likely fade away within 18 to 24 months.
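
As a playful illustration (a toy model of my own, certainly not actual neuroscience), the "reinforce it or lose it" consolidation rule described above can be sketched in a few lines of Python; the promotion threshold and the memory labels are invented for the example:

```python
PROMOTION_THRESHOLD = 4   # hypothetical: referrals needed during consolidation

def consolidate(memories):
    """Promote frequently referenced memories to long-term storage;
    let the rest fade away. `memories` maps a label to the number of
    times it was mentally referenced during the consolidation period."""
    long_term = {label for label, refs in memories.items()
                 if refs >= PROMOTION_THRESHOLD}
    faded = set(memories) - long_term
    return long_term, faded

recent = {"Alaskan cruise": 9, "Tuesday's commute": 1, "friend's wedding": 6}
kept, lost = consolidate(recent)
# The cruise and the wedding survive; the routine commute fades away.
```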

• Non-permanent memory, at the neuron level, is possibly based around the mind's more permanent topic structures, using an encoding and elaboration process. E.g., the temporary memory for a recent Alaskan cruise might involve a set of ad hoc connections between a variety of relatively stable neuronal structures which represent "semantic memories", e.g. word concepts for "cold", "north", "sea", "nature", "adventure", etc.; plus the neuron memory structures representing people you were with, what you ate, what you did, etc. The temporary loop of neuron connections that supports this memory-in-process may change shape, shifting so as to exclude certain neuron regions and to include other ones during the memory "consolidation" period. Because these "reentrant loops" (discussed further in Section 31) can be influenced by other mental events and disruptions occurring during the holding period (including traumatic or emotional experiences, alcohol consumption, sleep deprivation, etc.), the content of memory itself can subtly shift, especially during the first few weeks after the event. This is discussed by Gerald Edelman in his book "The Remembered Present". This helps to explain why human memory is notoriously imperfect, especially for witnesses of one-time-only events!

• It is well known that the hippocampus, a deep-brain structure, is centrally involved in the memory-making process. The hippocampus intermediates the selection and development of short-term memories into more permanent long-term memory. Studies of people with significant damage to the hippocampus show them to be unable to remember events from roughly the two years preceding the injury, while their memories from earlier periods remain intact. Access to the "holding tank" of short-term memories is cut off when the hippocampus "goes down" through injury, disease or surgery; but long-term memories and very-short term "sensory memory" remain.

Long-term memory: If the short-term episodic memory pattern was kept alive by regular referral, the meta-processes of the brain in effect assume that the pattern is important enough to be given the limited space available in the temporal lobes for long-term memory retention. For example, when you make a new friend, for the first year or two, the memory structure referring to him or her shifts and changes within the "short term memory" zones. However, after a few years, given that this friend is still very important to you, he or she then gets their own stable neuron relationships in your brain -- in the long-term memory zones of the temporal lobe.

• The hippocampus is connected to the temporal cortex regions involved in long term memory via a connective white-matter brain structure called the uncinate fasciculus. Interestingly, an atypically developed uncinate fasciculus is associated with a condition called "highly superior autobiographical memory", where adults retain detailed memories of fairly routine experiences going all the way back to childhood, and can often also name the exact date for many of those usually forgotten events.

• Long-term memory can be "episodic" (a memory of a personal event) or "semantic" (memory of a fact or concept). Procedural memory is the remembrance of motor skills, such as how to parallel park or ride a bike or field a ground ball. Studies of brain injury patients have established that the procedural memory process is mostly (but not completely) separate from the episodic and semantic memory processes.

• What does memory mean for consciousness? Both conscious and unconscious processes access the various forms of memory. Perhaps some memories exist exclusively within the sub-conscious realm. But in general, memory plays a key role in all aspects of conscious manifestation, including thinking and decision making, long-term emotions and mood, and the process by which "arousal" varies from zombie-like indifference to peak engagement. The more intense moments of conscious experience, where emotions are usually involved, are better remembered than the mundane tasks of daily life. Sleepwalkers and sufferers of absence automatism or high level vegetative states generally don't remember anything. Thus, memory acts as both a litmus test of consciousness, and as a major contributor to the formation and content of consciousness.

• Perhaps memory, in both its short and long-term manifestations, serves as an "echo chamber" of sorts for consciousness, helping to make the experience of the present more vivid, meaningful and comprehensible. An interesting question is whether consciousness exists without some form of memory; and if it does, what is such consciousness like? (Not that anyone would remember it; without memory, no one could later discuss it.) Dr. Giulio Tononi has proposed a mathematical / conceptual measure of consciousness called "phi", to be discussed in Section 35. This metric is meant to reflect the degree of cross-integration that goes on between various flows of information in the brain, or in any other information processing device. According to Dr. Tononi, Phi is a continuous variable, and although conscious human brains exhibit a very high value of it, many other systems, living or mechanical, exhibit some degree of Phi. And thus, according to Tononi, they experience some degree of consciousness. But very few have means to remember it.

• According to Tononi, the human brain goes through periods of low Phi, i.e. when we are in a deep, seemingly unconscious phase of sleep (NREM Phase 4), or when under anesthesia, and probably while sleepwalking. When brought out of such situations, we usually don't remember any preceding conscious awareness. During these states, the extensive looping connections between various parts ["maps"] of the cortex, which brain imaging studies associate with conscious experience (as discussed in Sections 24 and 31), largely disappear. The high level of informational cross-talking between the many components of the brain dissipates, and thus data integration and Phi become much lower.

• However, they do not reach absolute zero; there is still some cross-integration between muscle pressure monitoring, sound monitoring, skin temperature monitoring, etc. Thus, Phi never reaches zero, and according to Tononi, some form of attenuated consciousness usually remains. But we have no memory of such a hazy, minimal form of conscious experience from those hours. One theory is that the dissolution of the loops, which returns the mind to these "unconscious" states, also isolates the brain areas involved with memory functions. As such, consciousness without memory availability and recording capacity is not in the same league, so to speak, with what we know of as "waking consciousness". It may be closer to how William James expected a newborn infant just come into the light of the world to experience the rush of new sense data: i.e., "one great blooming, buzzing confusion". I.e. a set of random, incoherent and rapidly changing images without context.
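
To make the "integration" idea a bit more concrete, here is a toy Python sketch of my own (emphatically NOT Tononi's actual Phi calculation, which is far more involved): it scores a small system by the mutual information, in bits, between the two halves of its observed states. Halves that vary independently score zero; halves that are tightly coupled score high.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of a sequence of observed states."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def integration(states):
    """Toy integration score: mutual information (bits) between the two
    halves of a system's state, estimated from observed samples.
    A crude stand-in for 'informational cross-talk', not real Phi."""
    half = len(states[0]) // 2
    a = [s[:half] for s in states]
    b = [s[half:] for s in states]
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

# Two 4-unit systems, with states recorded over time as bit strings.
# In the first, the halves vary independently; in the second, the
# right half always mirrors the left half.
independent = [f"{i:04b}" for i in range(16)]
coupled = [f"{i:02b}" * 2 for i in range(4)]
```

With these samples, `integration(independent)` comes out to 0 bits while `integration(coupled)` comes out to 2 bits, echoing the claim that integration is a continuous quantity that different systems exhibit to different degrees.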

• The workings of the memory function within the brain are still not well understood. One interesting but very speculative theory was put forth by neuroanatomist Karl Pribram, called the holographic model (more properly the holonomic brain theory). This model postulates that memory information is not stored in the neurons in a digital fashion, as a computer stores a letter or a picture (i.e., each neuron involved has the potential to fire or not, like the 1 / 0 status of memory bits in a computer). Instead, a memory is stored in a "spectral" fashion, in terms of frequencies and wave interference patterns.

• Just what has such a frequency and is acting like a wave? Arguably, the many neuron dendrites, and the synapses that they tie into; they can be set for specific firing frequencies, i.e. so many times per second or minute. Somewhere in the meta-patterns between billions of neurons with their individual firing frequencies, somewhere in their "harmonics" and interferences, is an order, an abstract pattern that conveys data. These wave harmonics may be carried out through the interactions of the familiar "brainwaves" that are measured by EEG (electroencephalography), see Section 30.

• This is the principle behind a hologram; information is stored at each point on the hologram plate so as to respond to laser light as a frequency, and not as a series of designated bright or dark spots placed in time and space, as with a TV screen or a digital file on a computer hard drive. With a hologram, it only takes a small area from anywhere on the plate to show the basic patterns of the image or information being stored. Detailed information about any and every point on the actual object is spread out throughout the holographic medium, not localized at a particular point or region on it. As such, a hologram can be cut up into bits, and will still show the same picture that the original hologram did, but with less resolution and detail. Experiments with animals show memory in the brain to have similar characteristics; removing a part of an animal brain does not cause the loss of most memories. Almost every memory remains, but all are degraded to varying degrees depending on the amount of the brain removed.

• The transformation of data between wave / frequency information formats and time-space formats is accomplished by a type of mathematics called Fourier analysis. Certain perceptual experiments have shown that people are sensitive to light and pattern changes that reflect significant Fourier changes. So there is some evidence that the brain has "taught itself" how to do Fourier mathematics (of course, this "learning" probably occurred via trial-and-error evolutionary processes over millions of years). According to Pribram, memories are formed by physical changes to synapses and dendrites which "polarize" them, make them act in a certain way regarding when and how frequently they fire (i.e., what their frequency will be).
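
For readers who want to see the math in action, here is a minimal Python sketch of the Fourier round-trip, plus the hologram-like "graceful degradation" property described above (the sample signal and the choice of discarded coefficients are arbitrary; this illustrates the mathematics, not Pribram's actual model):

```python
import cmath

def dft(signal):
    """Discrete Fourier transform: time/space samples -> frequency coefficients."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def idft(coeffs):
    """Inverse DFT: frequency coefficients -> time/space samples."""
    n = len(coeffs)
    return [sum(coeffs[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n
            for t in range(n)]

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]   # a stand-in "memory trace"
coeffs = dft(signal)

# Round trip: the inverse transform recovers the original samples.
recovered = idft(coeffs)

# "Cutting up the hologram": keep only the DC term and the lowest
# frequency pair. Every sample is still approximately recovered, just
# with less detail -- the information is spread across ALL of the
# coefficients rather than localized in any one of them.
truncated = [c if f in (0, 1, 7) else 0 for f, c in enumerate(coeffs)]
degraded = idft(truncated)
```

Note the analogy with the animal experiments: discarding coefficients does not erase any one sample of the signal; it degrades every sample a little.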

• The holonomic model is a memory theory set forth within a global theory about how the brain works. It helps to explain how the brain could store so many memories in such a small space. (With billions of neurons and orders of magnitude more dendrites and synaptic junctions, the information-richness of the average adult's memory banks would overwhelm a standard, computer-like digital information storage format within the brain). Since the bottom line is the relationship of synapse firing rates, each synaptic junction can participate in multiple memory details (unlike a computer memory where an individual bit contributes to just one file). However, the holonomic model is very complex, and it is not intuitively apparent just what in the brain monitors and interprets the "wave interference" for a multitude of "singing synapses", and how it performs a "reverse Fourier analysis" to convert that information into 3D space and time formats relevant to making behavioral decisions and to directing the motor neurons that will execute them. Ditto for the Fourier conversion from information arranged according to space and time streaming in from the senses.

• This "conversion device" between frequency and space-time information mapping is conceptually termed a "lens", just as a real lens is needed to convert holographic light into a recognizable visual image on a screen. The brain is cybernetically pictured as a massively parallel and highly-layered neural network. The question is whether and how such a network could detect and convert the frequency and interference effects from the network's input signals and process them into output signals arranged according to space and time. An even bigger question is how that frequency information can be converted to phenomenal experience, e.g. the "qualia" resulting from recall of an episodic memory (although most qualia from long-term memory are quite diminished in vividness compared with real-time experiences). The hologram mind analogy is quite innovative and it has caused much excitement from those interested in spiritual and metaphysically speculative interpretations of the mind's relation to the cosmos. But despite providing some trenchant scientific insights, it is still a long way from becoming a detailed and fruitful working theory.

• A final thought about consciousness and memory. When you go to sleep at night and reach "NREM Phase 4", or when you allow a medical expert to administer general anesthesia, you in effect place your trust in your brain's various memory devices; i.e. that they will restore your sense of identity and your accustomed ways of looking at the world when you wake up. For a short period of time, you basically do not exist; but the physical structures and processes are in place to revive you, just as you last remembered. Ponder this, if you would; it's really quite amazing.


• Given that our minds control our conscious perceptions, and that our past experiences shape our minds, phenomenal consciousness is sometimes called "the remembered present"; the exact same environment or scene will be experienced somewhat differently by any two people, because of their varying previous experiences. This idea was explained in detail by biologist Gerald Edelman in his book "The Remembered Present". Consciousness is not a completely neutral, fungible experience. Subconscious mind structures and personal memories are responsible for shaping the way that each of us perceives the world around us. Cognitive psychology speaks of the sub-conscious "mental contexts" which influence how we interpret sensory inputs and how we later ponder our memories of them. Perhaps our own self-concept is the most important "context" within our mental lives.

• Dr. Giulio Tononi summarizes this in his book "Phi" (at p. 289): "nothing can be seen if it was not already in store - memory is imagination peering into the past, and imagination, memory looking out into the future".

• Two related concepts that exemplify this are change blindness and inattentional blindness. Given that we shape our perceptions based on what we usually see, hear, feel and otherwise experience, we sometimes miss what should be an obvious (but unexpected) change. Or at best, it takes a few fractions of a second longer to perceive when something is very different from what we usually experience. Car accidents sometimes turn out worse than human reaction times would require, because an abnormal situation (like a car running a red light in front of you) takes a few parts of a second longer to comprehend -- your mind just doesn't believe it at first! Thus you don't step on the brake as quickly as you might have.

• Change blindness and inattentional blindness both indicate that our conscious picture of the world about us is not a faithful mental re-creation or replica built up from all of the sensory details that our eyes, ears, nose, skin, etc. upload to our brain. Presenting every detail in our "theater of consciousness" would in effect overload our circuits. Therefore, our brains obviously have a sub-conscious mechanism that filters out all but the most relevant information and details, based on our current biological and psychological concerns (see Section 21 regarding attention and focus). Occasionally, however, this mechanism blocks out (at least temporarily) something that would be of interest.

• The delays in sensing unexpected anomalies or sudden changes stem (at least in part) from the recursive, looping structure of information processing within the brain, to be discussed below in Sections 30-33. The looping of mental outputs back into the stream of system input (being mixed in with new external data from the senses and from memory) helps to preserve mental stability in the face of "noisy" sense data. However, this is balanced by the "chaotic borderline" nature of brain information processing, which helps to break the recursive momentum when necessary (see Section 32). Our brains are tuned between stability and nimbleness, but in the case of a sudden and unfamiliar change, or even a static anomaly, the stability function may win out for a few extra moments, resulting in momentary change blindness or inattentional blindness.
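
The stability-versus-nimbleness trade-off can be mimicked with a trivial recursive filter in Python (a loose analogy of mine, not a model of actual neural circuitry): each new input is blended with the looped-back previous estimate, so noise is damped, but a sudden change takes several cycles to fully register.

```python
def smooth(inputs, alpha=0.3):
    """Recursive loop: blend each new sense datum with the looped-back
    previous estimate. A low alpha gives stability (noise is damped)
    at the price of nimbleness (sudden changes register slowly)."""
    estimate = inputs[0]
    history = [estimate]
    for x in inputs[1:]:
        estimate = alpha * x + (1 - alpha) * estimate   # the feedback loop
        history.append(estimate)
    return history

# A steady scene, then a sudden anomaly (the car running the red light):
scene = [1.0] * 5 + [10.0] * 5
perceived = smooth(scene)
# The perceived value climbs toward 10 over several cycles instead of
# jumping there at once -- a crude analogue of that momentary lag.
```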


• The concepts of dualism and materialism, which I have very briefly and inadequately sketched out, are "top down" approaches to the problem of understanding human consciousness. These concepts are quite interesting, but they haven't yielded anything that everyone can agree on. So, the brain research people try to approach the problem from the bottom up (i.e., "creeping up" on the hard problem). Using sophisticated imaging technologies such as CAT and fMRI scans, they try to look for pieces of the puzzle by seeing what areas of the brain become active in response to a particular stimulus (seeing, hearing, feeling, smelling), or in correlation with a particular kind of thought or mind state (fear, happiness, sleep, hunger, interest, boredom, anger, memories, etc.).

• Based on these experiments, brain researchers now have a wealth of good information that has yielded many insights into how the mind works (e.g., the problem of "binding": when you look at a blue box, for example, one area of the mind processes blueness; another area processes the boxy outline; another processes the texture of the box's surfaces; so how and where are all three "process states" brought together into a unified image of the box? This problem also applies to coordinating the sensory inputs of sound, smell and touch).

• One of the most important "neural correlates" of consciousness found to date involves "reentrant mapping" and "recursive looping", i.e. the fact that many different areas of the brain can variously become active during consciousness, and they appear to work together as if interconnected into one or more loops, where information is shared and decisions can be made in a collaborative process. This will be further discussed in Section 31.

• So far, however, the study of neural correlates has not resulted in the formulation of a universally accepted "theory of everything" on consciousness, i.e. a concept that explains why our conscious experiences seem greater than the summation of our sense perceptions, our memories, our emotional reactions, and our thinking -- why these things give rise to the vivid feeling of "me". In a sense, the neuroscientists are wandering quite observantly in Leibniz's "mental mill", looking for the feature or features that will point to the answer. Thus far, mostly what they see is gears. Although information on the mechanics of the brain has given and will continue to give valuable insights into the workings of the mind, any solution to the "hard problem" of consciousness will probably be found at a much more abstract level, involving the most fundamental natures of the entire universe, on both the quantum and cosmic levels. Tononi's Integrated Information / Phi concept along with Tegmark's "perceptronium" ideas are certainly a step in this direction, and will be reviewed below in Section 35.

• Sidenote on neural studies: At present, neuroscientists have two main approaches regarding the effects of consciousness on brain activity (or vice versa): they can study a small number of neurons in action, through implanted microelectrodes. Or they can watch billions of neurons in action at a fuzzy resolution, through MRI and other scanning techniques. What they really need, however, is a way to closely observe the interactions between "mission groups" of a thousand neurons or so (each group is thought to process a specific task, e.g. checking for sound patterns from the ear, or detecting disrespect in social interaction, or sensing sadness from our memories). These neuron mission groups or "maps" are thought to be extensively and flexibly connected to each other, allowing them to form temporary associations as processing loops (as in a case where disrespect and a certain sound bring on a sad memory). However, our current technology and ethical standards do not allow the brain to be observed at this level. So for now, we can only learn so much about how consciousness forms in the brain; we can't really figure out how the "consciousness software application" and the "general operating system" behind it work. It would be like trying to figure out how MS Windows works by watching the power consumption of the master components on a computer motherboard, or monitoring the electrical changes in one tiny section of the processor or virtual memory board. The point of view is either too big or too small.


• A neural network is a method of computing that is different from the conventional “linear-sequential method” used by most computers to date. Conventional programs make computers execute their steps in a fixed sequence. Depending on inputs, those steps can go up alternative branches; but the decision parameters are strictly fixed, and the consequences of following one programming path versus another because of input conditions are still entirely determined by the program. Neural networks are different; they are adaptive, changing their parameters based on the information that flows through the network. As such, they are more like “learners”; in a sense they are "self-taught" and "self-programmed". They are felt to come closer in character to biological processes in the brain than a standard linear-sequential computer program can. Unlike conventional programs, which were used unsuccessfully to study and simulate the brain (during the early phases of “artificial intelligence” research in the 1950s), neural networks have the ability to classify and categorize their inputs. As such, they exhibit rudimentary abilities to generalize and form abstractions, a critical trait of the human mind.

• A neural-networking computer program is designed around an interconnected group of artificial "neurons", i.e. processing objects that are highly connected to each other, in a web-like arrangement. As such, neural networks reflect a "connectionist" approach. In a neural network simulation system, each neuron monitors the "states" of its upstream neighbors to determine its own state. They have adjustable internal parameters that change over time, in response to feedback regarding the accuracy of the output after each data input iteration (neural networks are seldom one-shot processing arrangements; they process inputs on a continual basis, just as the brain does). These neuron-like processing objects are generally arranged in layers, and no one neuron-like object can act in isolation; the bottom line is the overall effect of all objects acting together.
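
A minimal sketch of such an adaptive unit in Python (the classic single-neuron "perceptron" learning rule, greatly simplified; the learning rate and epoch count are arbitrary choices of mine): the neuron nudges its weights after each sample, based on feedback about its output, until it has "taught itself" the right behavior.

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """One artificial neuron: weighted sum plus threshold, with weights
    nudged after each sample in proportion to the output error."""
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out              # feedback on output accuracy
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# Teach the neuron the logical OR function purely from examples:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Nothing in the code spells out what OR means; the rule is learned, in a rudimentary echo of the "self-programmed" character described above.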

• The original neural network simulation was based on a 1943 work by Warren McCulloch and Walter Pitts. Their networking concept was based around their simple mathematical model for the action of a neuron. They built up logical sequences of nerve-like connections based on the idea that a neuron fires in an all-or-none manner, depending on whether the (weighted) sum of firing inputs from all the other neurons connecting to it (or from the inputs from outside the system, in the case of the network's first row) has exceeded that neuron's action threshold.
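
The McCulloch-Pitts model is simple enough to state in a few lines of Python; here is a sketch showing how single all-or-none units, each just a weighted sum compared against a threshold, can implement basic logic gates (the particular weights and thresholds are standard textbook choices):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted sum of its inputs
    meets or exceeds the threshold; otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# All-or-none logic gates, each built from a single unit:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a], [-1], threshold=0)
```

McCulloch and Pitts' insight was that chains and layers of such units suffice to compute any logical function, which is what made the model so influential.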

• Neural networks are generally quite robust - if some neuron-like elements malfunction, the overall functioning of the network can continue, although with some performance decay. However, when feedback connections (known as recurrent pathways) are inserted so as to provide feedback which helps correct errors and allow the system to have time-lag features, i.e. short-term memory, the improvements and increased realism come at a price. At the higher levels of complexity, recurrent systems can become chaotic under certain input conditions; just as any real person with a real brain can "lose it" under the wrong circumstances!
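
The short-term memory that a recurrent (feedback) pathway provides can be sketched with a single self-connected unit in Python (the decay factor is an arbitrary choice of mine): a pulse of input keeps "echoing" in the unit's state after the input itself goes silent.

```python
def recurrent_trace(inputs, decay=0.5):
    """A single unit with a feedback (recurrent) connection: its state
    is a decaying echo of past inputs -- a crude short-term memory."""
    state, trace = 0.0, []
    for x in inputs:
        state = decay * state + x   # the loop mixes the past into the present
        trace.append(state)
    return trace

# One pulse of input keeps echoing after the input goes silent:
echo = recurrent_trace([1.0, 0.0, 0.0, 0.0])
# echo == [1.0, 0.5, 0.25, 0.125]
```

With a decay factor below 1 the echo fades tamely; crank the feedback strength up (or add nonlinearity) and, as noted above, the dynamics can become unstable or chaotic under the wrong inputs.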

• Philosopher Paul Churchland has done much work examining neural networks and their effectiveness in mimicking the workings of the brain. Churchland says that our understanding of how cognition arises in the brain has been advanced by computer simulations of neural networks. These simulations have been able to display: learning from experience; perceptual discrimination of features; development of a framework of concepts; short term memory; and variable focus of attention. According to Churchland, the representation of things in the brain is reflected in the patterns of activation in a neural network, and computation / cogitation in the brain roughly equates with the transformation from one neural network pattern to another following specific inputs (inputs chosen to reflect the topic that is to be analyzed).

• But despite all of the amazing progress of neural network simulations and research, there remain significant differences between the neural networks in our brains and those in our computers. For example, in real brains, quantitative sensory information seems to be coded in pulses, i.e. in frequency modulation form, rather than in digital representations (i.e., like Morse Code). So somehow, natural neural networks process analog frequency inputs and not just digital codes. Also, our brains seem much more efficient; the number of layers of neurons between sensory input, decision processing and behavioral output is far smaller than what is needed for comparable functioning in computer-simulated neural nets. Another problem for neural nets regards pattern recognition, i.e. recognition on the basis of incomplete similarity. Human brains licked the similarity problem eons ago, but machine recognition simulations thus far remain slow and imperfect. They must go through a wide range of transformation and generalization steps where errors can multiply, whereas the brain seems to accomplish incomplete pattern recognition quite elegantly.

• And, although our computerized, neural net-driven robots can pick things up, can speak a rudimentary language, can move about and avoid obstructions, can remember where they put things, can tell an apple from an orange, and do many other amazing things, they still can't write a song or a poem, or decide what is best in a conflicting social situation. They don't commit suicide either -- something else that humans continue to do in spite of the self-preservation instincts instilled by the natural selection design process.

Alternative Mental Processing Designs

• Although neural networking excels at pattern recognition and motor coordination functions, and certainly could contribute to mood formation and emotion triggering, alternative paradigms may be more appropriate for other mind and brain functions, especially at the higher "executive" levels. In his book How the Mind Works, cognitive scientist Steven Pinker has proposed a "production system" bulletin-board model, based on a "computational theory of mind". This paradigm may better mimic the mind's executive functions (e.g., where to focus our attention in the moment-by-moment routines of daily life; and in the longer term, what our bigger decisions and directions in life should be) through conventional "sequential" forms of computer programming. Pinker focuses on elaborate programming routines (designed in large part by the evolutionary process) that would carry out human logic and problem solving. This hearkens back to the "good old fashioned artificial intelligence" approach, which used conventional sequential computer programming and expert-system sub-routines to apply heuristic "rules of thumb" meant to break down and solve problems. However, the details as to where and how the brain carries out such programs are quite sparse.

• Neuroscientific research by Gyorgy Buzsaki and others has pointed to the importance of brain waves, which are synchronized electrical pulses arising from large groups of neurons interacting with each other, causing small electrical current flows across regions of the brain. Brain waves occur at different levels, ranging from cortex-wide to localized within a specific functional area of the brain. Brain waves have mostly been seen as a side-effect of brain regulatory activity, an indication of systemic brain states such as arousal, sleep, coma, cognitive activity, relaxation, etc. This may be because the larger-scale waves can easily be measured using electroencephalography. However, the harder-to-probe smaller-scale waves are very important in themselves, as they synchronize and regulate functions within neural ensembles. And even the large-scale waves may be critical in carrying out systemic decisions such as increasing or decreasing levels of attention, and focusing attention on specific environmental conditions. Brain wave synchronization is also important to the memory and learning process; see also Section 27 regarding the holonomic brain model. Overall, brain waves play a role in the brain not unlike that of the conductor of a symphony orchestra.

Effect of Psychoactive Substances

• Although the use of psychoactive substances will have a variety of effects on brain functions at multiple levels, the most immediate impact of "brain altering substances" is often upon the neural networks. The chemical imbalances and equilibrium disturbances introduced by such powerful substances, as they cross the blood-brain barrier, often immediately affect the neurotransmitters, causing carefully calibrated signal-processing relationships in neural networks to shift. As such, the neural network circuitry combinations that would, say, identify an apple from a particular set of sensory data (i.e., the sensory inputs from looking at an apple) may no longer trigger the apple-identification routine. Instead, they may trigger the combination of neural network identifications that indicates the presence of a huge, hairy black spider. Or they may not identify the apple at all, or take much longer than usual to identify it.

• Higher-level neural networks that impact mood or alertness can also be re-wired, such that a relaxed, low-anxiety mood suddenly becomes a tense, hypervigilant state (or vice versa). This is an over-simplification, but does reflect how psychoactive drugs can work to create hallucinations. Such substances also work variously at other levels (e.g., on the amine fountains) to alter moods, emotional triggers, levels of alertness, cognition, decision making and ultimately behavior. Although brain regulatory and processing mechanisms other than neural networks (including the looping and processing-speed mechanisms discussed in Sections 30 and 32) are also involved, the neural networks are the "front line" of the mind and brain, and thus often take the brunt of a "mind-invasion" caused by psychoactive substances.

31. NEURAL LOOPS and CONSTELLATIONS - How the brain might work, contd.

• A popular and well-regarded theory regarding the brain process underlying consciousness is the concept of "re-entrant mapping loops" from Gerald Edelman. The brain may be composed of neuron groupings / assemblies which work together for a particular purpose. These are the "maps"; each map may turn out to be a local neural network (see Section 30), or a combination of such networks acting together as a meso-scale mechanism, responsible for some specific aspect or component of perception, thought or memory. Each map is connected to many other maps; there are a lot of different routes for electrochemical signaling between maps. Each map may also have a "frequency" characteristic, regarding how frequently the neurons in the map fire and rest.

• There is other evidence from Francis Crick that binding (i.e., when color and shape and size and texture and other "conclusions" come together from different areas of the brain and form a unified mental image of a particular thing being looked at, e.g. oval, bumpy texture, yellow, about 2 inches -- a lemon) takes place when various areas of the brain get their frequencies in synch. So perhaps the content of consciousness is a function of what set of maps are connected and in synch at any one time. Perhaps a recursive re-entrant process acts as the "jig" to bring neuron programs (sensory processing, facts from memory, concepts, beliefs, motor action preparation) from various parts of the brain into temporary, ad hoc interacting processing loops, with chemical effects facilitating and reinforcing the emotional process (or perhaps retarding it, in the case of depression).
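The "getting frequencies in synch" idea can be illustrated with a toy pair of coupled oscillators (a Kuramoto-style sketch; the frequencies and coupling strength are made-up illustrative values, and of course neither Crick nor Edelman worked in these exact terms):

```python
import math

# Two oscillators with slightly different natural frequencies. With no
# coupling, their phases drift apart; with coupling, each pulls on the
# other and they lock into (near) synchrony.
def simulate(coupling, steps=3000, dt=0.01):
    theta1, theta2 = 0.0, 2.0   # starting phases, in radians
    w1, w2 = 1.0, 1.2           # natural frequencies (radians per unit time)
    for _ in range(steps):
        d1 = w1 + coupling * math.sin(theta2 - theta1)
        d2 = w2 + coupling * math.sin(theta1 - theta2)
        theta1 += dt * d1
        theta2 += dt * d2
    # return the final phase difference, wrapped into [-pi, pi]
    diff = theta2 - theta1
    return abs(math.atan2(math.sin(diff), math.cos(diff)))

print(simulate(0.0))  # uncoupled: a large, drifting phase difference
print(simulate(1.0))  # coupled: locked to a small, stable phase difference
```

The point of the sketch is only that mutual influence alone, with no central conductor, is enough to pull separate rhythms into synchrony -- roughly the property that synchrony-based binding theories rely on.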

• Edelman's theory that consciousness emerges from a re-entrant looping process relates to Giulio Tononi's Integrated Information Theory and his proposed "Phi" metric regarding the degree of consciousness of a brain process (or of any other information processing system, in fact). More on that below. "Phi" might be considered an attempted quantification of the "strength", or importance to the existence of consciousness, of any particular re-entrant loop. Interestingly, Christof Koch, who had collaborated in a variety of consciousness research with Francis Crick before Crick's death in 2004, has recently partnered with Tononi in developing the Phi concept.

• Dr. Susan Greenfield provides an analysis regarding conscious states and neural networking in the brain that is generally consistent with Edelman's. In her book "The Private Life of the Brain", Dr. Greenfield proposes that conscious mind states are primarily shaped by the following factors. 1.) Connectivity conditions, which are determined by the actual neuronal density and architecture in various regions of the brain, and by the amine fountains, which distribute neurotransmitter chemicals throughout the brain so as to bolster or retard the ability of neurons to communicate with each other via the synapses. 2.) The background mental states that determine the level of attention and arousal given to a sensory input or to a thought or memory that would trigger or contribute to a conscious state. 3.) The degree of availability of pre-fabricated, semi-permanent neuron structures where a conscious loop is forming; e.g. a structure that might underlie an emotional bias towards a particular recognized entity. 4.) The mental turnover rate, reflected by whether thoughts continually change and are fleeting (e.g., when you are worrying about small problems at your job), or are stable and can be held for a while. Body states, which influence the brain via hormones as well as nerve transmissions, are also very important.

Greenfield has proposed a way of distinguishing several important mental states based on different combinations of these factors. These states include Dreaming, Pain, Abstract Thought, Thrill Seeking, Accidents, Depression, Sexual Arousal, Meditation, Childhood, Schizophrenia and Alzheimer's Disease. For example, depression involves high levels of neuron connectivity, strong presence of pre-fabricated structures for integration into the "constellation", low states of arousal, and low rates of turnover of the assembly. This results in a very large and stable constellation controlling the frontal cortex, precluding sensory and emotional states, with their small size and rapid turnover, from occurring. Thus, the depressed person feels little emotion and takes little notice of his sensory inputs while under the weight of an overbearing neuron constellation. By contrast, sexuality involves low connectivity, strong presence of pre-fabricated components, high arousal and high assembly turnover. Sexuality, as a mental state, thus allows a very rapid succession of sensory and emotional states, reflecting small neural assemblies that rapidly "turn over", i.e. change.

• With regard to DEPRESSION, we need to note here that Dr. Greenfield's analysis is an overview and does not explain the mechanisms that might lead to the neuron constellation states that she describes. Much exciting work is currently being done with regard to the various environmental triggers and neuro/biological factors involved in chronic depression, including research on the importance of stress-induced cortisol levels; and also regarding overactivity in brain area 25, a tiny structure in the inner frontal brain that serves as a gateway between the emotion-driving limbic system and the frontal cortex, where higher thought and self-awareness generally reside. Overactivity in area 25 seems to inhibit activity in the pre-frontal cortex, perhaps thus fostering the low states of arousal and highly static cortical neuron structures that Greenfield postulates regarding depression.

• Dr. Greenfield's analysis shows just how closely tied our conscious experiences are to the physical processes in the brain. One minor criticism of Dr. Greenfield's paradigm might be that she equates "degree of consciousness" solely with the resulting size of the neuronal assembly (constellation). It might be argued that a person's final subjective impression of a conscious state or event is also very sensitive to the degree of attention and arousal afforded to it by sub-conscious processes. For example, pain, abstract thinking, meditation and depression all involve large neuron constellations. However, arousal levels vary greatly for these states, and the degree to which a person will remember any of these particular mental experiences (a rough measure of "vividness of experience") will vary. The relative size of the neuronal assembly would certainly be an important correlate and metric of consciousness (and is generally consistent with Tononi's Integrated Information Theory of consciousness and his proposed "Phi" metric of it, as discussed in Section 35, so long as there were sufficient levels of cross-talk and feedback amongst the assembly components); but there may be other factors that can contribute to an objective (albeit limited) description of subjective experience.

• Keep in mind too that although the brain and neuron characteristics that Dr. Greenfield considers could possibly identify what type of mental state a person is experiencing, they still could not give many details. Perhaps a future brain-scan technique could determine that a person is depressed or in pain; but it could not say much more. We are still a long way, thankfully, from being able to read and broadcast a person's exact thoughts and dreams and feelings. The realm of experience remains quite subjective [D.N. Robinson, Consciousness and Mental Life (New York: Columbia University Press, 2008), 46], although progress is being made by neuroscientists in terms of identifying mental content. However, chaotic processes and perhaps even quantum mechanical influences may make it very difficult to ever completely "read the mind" of another person via a scientific device.

• Overall, the circular connectivity of brain processing areas within "reentrant loops" allows and effectuates the recursive processing of high-level brain information, i.e. of information that defines or nearly defines the contents of the conscious state. This recursive information structure, where systemic outputs are re-inputted alongside new information coming up from the sensory processing areas (and memory areas), lends a "momentum" and stability to the resulting high-level state, the state where executive decisions are formed and from which consciousness possibly stems. Feedback loops promote output stability in the face of "jumpy", variable input data. In the context of mind dynamics, they also contribute to the memory function (discussed in Section 27), at least for very short-term memory, as they preserve "echoes" within the consciousness process of what the mind was perceiving, was aware of, and was focusing on over the recent past.

• However, as discussed in the next section, the complexity introduced when several such loops interact causes chaotic looping patterns to evolve in the system's informational state-space, in the form of "strange attractors". This semi-chaotic behavior (largely manifested in quickly shifting attention patterns) helps to keep the brain and mind balanced between the stability afforded by feedback, and the responsiveness needed in dealing with a challenging, dynamic environment.


• Work by several brain scientists, most notably Walter J. Freeman, has shown the importance to the functioning of the mind (and thus to the nature of consciousness) of the emerging mathematical paradigms that have been developed over the past 25 years to elucidate chaotic behavior in complex and recursive systems. Freeman has studied the brain's “mesoscopic-level” activity patterns using EEG monitoring, studying responses to standard sensory stimuli. He has found that EEG patterns of such activity (involving interactions between mid-level sized brain components, i.e. not on the micro scale such as neurons, nor on the macro level e.g. brain lobes) show that the patterns of activity resulting from the same stimuli can vary quickly over time, and yet still seem to stay within certain boundaries. He feels that such activity patterns match the “strange attractor” paradigm of complex recursive systems involving the on-going interaction of various components. Strange attractors involve cycling systems which exhibit something of a repetitive, quasi-orderly pattern of behavior over time, while at the same time varying randomly in timing and pathway from cycle to cycle.

• An article on the Nautilus web site discusses how the inner electro-chemical dynamics of the collective neurons interacting within the brain are seen as operating on a thin boundary between stability and chaos. Consistent with Freeman's theories, the studied brain dynamics act as if they are cycling around “strange attractors”. Such an attractor system can sometimes flip to a different pattern, one with a cycle having different direction and space characteristics, and then flip back again to the original; but in both patterns, there appears to be an approximate center or “strange attractor” around which the system characteristics revolve. So, you can have a one-attractor cycle, or a two-attractor cycle, or even more. And no particular cycle around an attractor is quite the same as the last one. The changes from cycle to cycle are unpredictable (same for the meta-cycle flip to alternative “attractor cores”), but the cycle or meta-cycle does have overall stability.

• Such systems exist on the border between settling back into a fully-ordered and predictable path around some fixed attractor point, and pushing into full-blown chaos where the attractors (however strange) just fall away and the system's motions go wild and completely random. Researchers are finding that a healthy functioning brain lives on this knife-edge. Why did nature and evolution select such an arrangement?
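A classic toy illustration of this border is the logistic map. It is not a brain model, but it shows how a single parameter can carry the very same simple recursive rule from fully ordered behavior to full-blown chaos:

```python
# The logistic map x -> r*x*(1-x), iterated over and over, is a standard
# toy for the order/chaos border: the parameter r tunes one recursive rule
# from fully ordered to fully chaotic behavior.
def trajectory(r, x0=0.2, burn=500, keep=10):
    x = x0
    for _ in range(burn):          # discard transient start-up behavior
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

print(trajectory(2.8))  # ordered: settles onto a single fixed point
print(trajectory(3.9))  # chaotic: stays bounded, but never settles or repeats
```

Between those two values of r lies a narrow transition region of period-doubling -- the mathematical analogue of the knife-edge the brain is said to live on.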

• One clue can be found in the design of high-performance aircraft, especially modern fighter jets. Once upon a time, airplanes were designed for maximum stability against changing wind currents. Pilots manually controlled the aircraft flaps, which steer the plane and also allow the plane to respond to changing winds and turbulent air flows. Recall, however, that humans can only react to things so quickly (typical human reaction times between start of perception and recognition / mental reaction are between 0.15 and 0.3 seconds; then add even more time to carry out the responsive muscle motions); our brains and bodies need processing time. So it takes a while for the hand controlling the airplane flaps to react to what the pilot sees and feels from buffeting air currents. This is not a long time; but when a jet is barreling along at 900 mph, even a few tenths of a second might be too late to put the plane back on an even keel.

• In general, aircraft once had to be designed to be as naturally stable as possible. However, such design also made them more like battleships in the ocean, in that they took a relatively long time to change course when needed (such as when an enemy plane or missile is suddenly spotted). Thus, in modern jet fighters, the airframes are designed to keep the plane on the “edge of chaos” (aka “negative stability”); using modern electronics to sense unwanted direction changes and make adjustments to the flaps, airplanes can shake around just a bit as they cruise along, but not go over the edge into losing control. When it comes time to make an intentional course change, this shaking makes the plane very agile, able to shift its course very quickly.

• I'm not a scientist, but it seems logical that evolution used the same trick to give human beings the ability to be quick on the uptake. A brain on the edge of chaos is also a brain that can make fast decisions and thus better adapt to changing conditions. Which is not a bad thing when you live in a forest or savannah where a predatory animal might be lying in wait for you around the next corner. This characteristic still serves us well in modern society; human history took away the cheetahs, but we still are left with many unexpected threats here in "civilized" life.

• Why does a chaotic cycle (i.e., a wiggly loop) around a strange attractor evolve in the brain? Possibly because the cortex, in building up perceptions from raw sense and memory data and then deciding how to respond, involves recursion (i.e., constant looping) between multiple “maps”, as discussed in Section 31. There are probably multiple map-loops going on in the brain at any one time (multi-tasking streams of thought, whether conscious or sub-conscious), and when multiple recursive systems interact, semi-chaotic behavior can ensue. Another source of recursion occurs on a smaller level, within the neural networks through which perceptions from sense data emerge. These neural networks sometimes team up and coordinate like a bunch of acrobats perched upon each other's shoulders, to form a meta-network; and at some level, these become the “maps” discussed in Section 31, ad hoc areas of the brain that deal with a specialized, limited function, such as identifying the color green in a circular shape (or perhaps hearing a certain type of sound, or detecting a certain external temperature change from the face or hands).

• These loops impose a “recursion momentum” which helps to perpetuate the state that they currently detect or experience. They help to keep consciousness from being too jumpy; they help to make it look like a continuous flow. The whole point of “recursion”, i.e. feeding back the outputs into the input side of the analytical mechanism (see the neural network diagram in Section 30), is to promote the overall stability of output. In effect, recursion / feedback makes the continually updating system ask: how much should I change what I last outputted, considering what I'm now seeing from the new inputs? (As compared with just independently coming up with new outputs based solely on what was just inputted.)

• If our brains are designed to update our perceptions every fraction of a second, and in making that update we take in newly arrived sensory data, and if we know (or the bio-evolutionary process “knows”, more accurately) that sometimes this new sensory data jumps around for just a moment and could cause the outputted “overall picture” to change radically – then we have to ask, do we (or again, would the bio-evolutionary process) always want to take that change seriously? If the radical change is mostly just a noisy fluke that usually fades away in another tenth of a second or so, then perhaps we don't want our output mechanism (which creates the “big picture” of consciousness) to take it so seriously. We only want the output picture to change after the changes become a repeating pattern, such that we know with more certainty that things are really changing outside. Feedback of output into input (mixing the output signals in with the new incoming external data) works to accomplish this. It helps to separate “noise” from true “signal”, as electronics designers would say.
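This noise-versus-signal filtering can be sketched in a few lines: mixing a fraction of the last output back into the next one (the 0.8 feedback fraction below is an arbitrary illustrative value) makes the output shrug off a one-sample fluke but follow a sustained change.

```python
# Feedback smoothing sketch: each new output is a blend of the previous
# output and the new input -- a simple stand-in for recursive feedback.
def smooth(signal, feedback=0.8):
    out, y = [], 0.0
    for x in signal:
        y = feedback * y + (1 - feedback) * x   # old output + new input
        out.append(round(y, 3))
    return out

spike = [0, 0, 1, 0, 0, 0]        # a momentary, noisy fluke
step  = [0, 0, 1, 1, 1, 1, 1, 1]  # a real, sustained change outside

print(smooth(spike))  # the fluke produces only a small bump that fades away
print(smooth(step))   # the sustained change steadily takes over the output
```

Electronics designers would recognize this as a low-pass filter: the repeated pattern (the “true signal”) survives the feedback, while the one-off jitter is damped out.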

• But there is a downside to this recursive stability, in that it prevents immediate change once external conditions indicate an unexpected but important shift. “Recursive momentum” is a good thing as it promotes stability, but it also causes “change blindness”, as discussed in Section 28. By “tuning” the overall brain-mind system to operate on the edge of chaos, this blindness can be broken before too long, letting us quickly update our perceptions, our action decisions and our responses, once a new external condition is recognized. The chaos mechanism helps to balance the inherent stability from recursion with the need to nimbly respond to a “true signal”, by introducing a constant and expected “background jitter”, as opposed to the unexpected and random “input jitter” that recursive feedback helps to control.

• In some cases, depression may involve a brain dynamic where change-imposing chaos is too low and recursive stability is too high. Under such conditions, we can hardly change our current direction of thinking and behavior even in response to strong change signals, so we repeat accustomed patterns even when they become negative or destructive due to changing external conditions.


• Philosopher Daniel Dennett argues against any view of consciousness that lends it any ontological importance beyond whatever can be known through scientific study. One of his many arguments against a dualistic “irreducible reality” interpretation of consciousness, whether based on real substance or unique properties (properties not otherwise obtaining to the space, time, matter and energy that we can study), is the observation that a variety of sensory input interpretations co-exist in the brain at any point in time, i.e. "Multiple Drafts". It would seem, metaphorically, as though there is a cable TV in the “Cartesian Theater” (referring to Descartes’ body-soul mental dualism), and a homunculus (little person in the brain, or ghost-like soul) holding a remote, constantly changing channels.

• I.e., our daily real-life “trains of thought” aren't very train-like after all; unless faced with a compelling situation requiring utmost focus, our minds constantly wander back and forth between perhaps 3 or 4 different topics. After a while, one topic thread is ended and another begins. These various topics being juggled in our minds are “drafts” that we work on a little at a time. At some point we drop them and replace them with others, possibly because we have satisfied ourselves from our ponderings, possibly because we didn't get anywhere and are frustrated, possibly because other more important things have come up.

• To be fair to Dennett, I will point out that he closes down the Cartesian Theater and evicts its audience in the "Multiple Drafts" model. Dennett forbids one to imagine, even in an allegorical sense, a "television with remote" being controlled by some ghostly sentient force in the cranium. There may be multiple drafts, but they don't relate to any "central command" or centralized awareness in the head. Philosopher Thomas Nagel has gone so far as to say that split-brain patients (see Section 25 above) don't really experience a fundamental change in consciousness from the surgical separation of the left and right halves of their cortex (e.g., two parallel streams of conscious awareness), because consciousness is such an inherently jumbled experience from the start. I.e., splitting the thinking part of the brain can't make things any worse!

• According to Dennett, a "draft" is elevated to prominence by immediate environmental needs, as perceived; if you are driving and listening to music, you may be concentrating on the music and alternately thinking about your child's recent soccer game and an upcoming project at work, until a car runs a stop sign right in front of you. At that point, your "consciousness probe" shifts away from the thinking and hearing streams and towards the visual stream, and also towards your muscles as you hit the brake and jam the wheel to avoid a crash.

• These streams and the brain-mind's probing system "just are" consciousness; thus there is no hard problem of consciousness, no “essence” to ourselves nor any fundamental reality to our consciousness, according to Dennett. Dennett's multiple drafts theory and its alleged implication is close in some ways to British philosopher David Hume's bundle theory, in which the ontological nature of any object is simply seen to be the collection of its properties and relationships, nothing more. Hume felt that the self and the mind are only a bundle of perceptions with no underlying essential quality or identity to them. Ancient eastern Buddhist philosophy, which also doubts the existence of an "essential self", likewise refers to the "monkey mind", i.e. the inherent tendency of the mind to quickly wander from one topic to another. The Buddha, like Hume, was also a "multiple drafter".

• These assessments certainly describe a large portion of the mental life of most every individual. Under this viewpoint, the concepts of "qualia" and "quales of experience" are seen as fundamental "bits of consciousness", which come together in an ad-hoc, constantly changing fashion, akin to color shapes in a kaleidoscope. However, there arguably exist certain experiences where consciousness seems to be experienced as a "smooth, integrated flow", as opposed to a "bundle of sticks". Two examples might be meditative experience, discussed in Section 36, and the experience of music (discussed in Section 9).

• Given the implications of the chaotic-edge physical design and operation of the thinking brain, one can perhaps stop short of any ontological conclusions about our identities and conscious natures simply because we tend to have a lot on our minds at any one time, and don't usually hold a particular thought for too long. Perhaps the mental limit cycle and strange attractor cycle remove the need for a Dennett or a Hume with regard to multiple drafts. And perhaps the Buddha had the right prescription for too much mental chaos, i.e. meditation.


• In a recently published work, evolutionary theologist Ilia Delio discussed the correlation of expanding spatial/environmental awareness with the evolution of consciousness over biological history. Citing the works of Dr. Antonio Damasio, Delio points out that early living organisms began with a survival-focus limited to their own internal state; then with evolutionary development over the eons, they expanded their horizon to proximate external concerns within their immediate environments, and as they further developed in sophistication and capacity, to more distal problems. For simple living things such as microbes and plants, the organism responds to chemical / temperature / pressure conditions within itself and immediately surrounding its border. Its controls are simple stimulus-response mechanisms, with little time awareness. E.g., a yeast cell is programmed to move towards any area on its membrane that detects sugar, and away from any area that detects acid.

• Once mammals emerged and continued evolving, advanced body sensors involving light, odors and sound allowed the organism to consider conditions at larger and larger distances from itself. The brain-minds of these evolving creatures slowly gained increased awareness of other beings (usually other mammals of the same species), and crude communication and cooperation routines started to emerge.

• With the evolution of sophisticated social arrangements and expanding brain cognitive abilities, the upper primates and homo sapiens were eventually able to greatly expand their “awareness distances”, and thus to better appreciate and understand what was beyond their immediate vicinity. Eventually, the distant mountains, clouds, wind, rain and even the sun, moon and stars became better and better understood; they were incorporated into the human mind's overall picture of reality. Social communication likewise allowed the evolution and everyday use of abstract concepts and symbolic language.

• The “distal factor” certainly seems to correlate, if crudely, with the emergence and development of consciousness amidst the more developed (post-microbe and post-flora) earthly life forms. It helps to elucidate Damasio's conceptual levels of evolving self-awareness and consciousness, starting from the pre-conscious proto-self, through to initial core consciousness, then to an extended consciousness which includes what modern humans experience as an autobiographical self. (An interesting question would be whether the migratory species experienced corresponding brain capacity increases and expanding proto-conscious awareness, as their "environment horizons" began to span great distances and a wide variety of geographic conditions.)

• The distal factor may roughly correlate with the strength of a holographic-like relationship between consciousness and the overall universe, which is the environment in which consciousness manifests itself. Such a relationship is hinted at in Pribram and Bohm's Holonomic Brain Theory.

• An important abstract concept that likewise expanded human abilities and possibly helped to shape the emergence of human self-consciousness is time. Time is perhaps the most important concept that human minds abstract. Most animals, including many insects, have some way of responding to the past and to the future. The past affects the organism through conditioning, i.e. the brain mechanism associates stimulus X with something good or bad about to happen, based on repeated experience. For example, think of Pavlov's dogs, expecting that food will appear soon after a bell rings. As to the future, most animals know that an object in their field of sight which is growing rapidly in size and not moving laterally could well be a projectile in the air moving towards them, such as a rock. They can appreciate a threat to their future, and take appropriate evasive action. Their mind-brain systems thus process information at a rate appropriate to the challenges presented by the surrounding environment, i.e. to the overall interaction rates within that environment.

• However, in most organisms with these abilities, the sense of past and future is fairly simple and mechanical, closer to what we could call "instinctive". Humans have the ability to generate an abstract, flexible concept of time, and to integrate their recent memories and their anticipations of the future into the matrix of emotions that underlies momentary consciousness. Rarely is our consciousness fully focused upon the present. Our expanded sense of time, along with our universal appreciation of 3-dimensional space, is an important, perhaps critical dimension behind the emergence of our "feeling of being", i.e. our human consciousness.


• In the late 2000s, a new “mathematics-like” concept and description of consciousness was proposed by neuroscientist Giulio Tononi. This concept is called the "Information Integration Theory of Consciousness." In a nutshell, Tononi proposes that consciousness equates to the degree to which our brains (or any other information processing systems) integrate and recursively process tremendous amounts and varieties of environmental information coming in from our senses and from the memory centers within the brain. In other words, it's not only relevant that information is flowing in from your sensory organs; it's that your brain somehow intermixes all of the individual sensory signals, also blends in other signals (e.g. from memory functions), and processes this information into an integrated "mental picture". This integrated "view of things" allows your mind to decide what to do next and what to remember for the future. According to Tononi, this high level of informational cross-referencing is a sure sign that conscious awareness is in the mix.

Phi: The Metric of Information Integration

• Tononi derived a statistical measure of information enhancement, called "phi", meant to gauge the degree of information added by the brain's processing structure to the information contained within the sensory inputs; i.e. the information added because of the recursive processing that the system does with the input and output information. (If there is no output recursion, i.e. feedback, then there is no Phi in the system.) The Phi measure is also influenced by the amount of integrative cross-talk in sharing and processing information as it flows through the system; but integration with the recycled outputs from the last round of system processing is a "sine qua non" of Integrated Information Theory (I.I.T.) and Phi. Again, "Phi" is a generic measure that applies to all information processing systems, not just the human brain. Phi increases relative to the degree of information generated AMONG the operating components (elements) of a system, as compared to information generated WITHIN them.

• The calculation of Phi for any information processing system involves analyzing all of the ways that the system could be broken down into internal combinations of its processing elements. For example, suppose a system has five element-points where the flowing data is either distributed or combined and passed on; at each of these points, the data is "worked on" to varying degrees, as in a computer algorithm. Say those points are called A, B, C, D and E. Based on their interconnections (cross-talk and feedback), you could break the overall system into "chunks" in several alternate ways; say here that A+B+E and C+D, or A+B and C+D+E, or A+C and B+D and E were possible chunking separations.

• For each possible chunk separation, a calculation is done to gauge the informational enhancement or synergy that the overall system output would lose if the chunks were separated. This reflects what each possible "combination of chunks" particularly identifies and contributes from the raw data that is not directly communicated in that data, such as a relational pattern or trend -- for example, every 5th number from one data input equals four multiplied by every 3rd number from another data input. Every possible "chunking" arrangement is then analyzed for the total amount of informational enhancement that would be lost by separating the chunks. The arrangement that makes the least overall contribution is selected as the "minimum information partition". We then know that this is the "irreducible" structure: you can't go any lower in terms of the information synergy stemming from the system's integrated design. We thus know that information integration within the system, in and of itself, enhances the system's overall information output by at least this much. This is what Phi attempts to gauge. Phi itself does not have a dimension such as seconds, inches, or percent; it is meant for relative comparison between different systems.
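• The partition search described above can be sketched in a few lines of code. This is only an illustrative toy, not the actual I.I.T. mathematics (which works over probability distributions of system states): the element names and wiring are invented, and the "information lost" by a cut is crudely approximated by the number of connections the cut severs.

```python
from itertools import combinations

# Hypothetical 5-element system; names and connections are invented.
elements = {"A", "B", "C", "D", "E"}
edges = {("A", "B"), ("B", "C"), ("C", "A"),    # a recursive loop A -> B -> C -> A
         ("C", "D"), ("D", "E"), ("E", "D")}    # D and E feed back on each other

def cut_cost(part, elements, edges):
    """Connections severed if `part` were split off from the rest."""
    rest = elements - part
    return sum(1 for (src, dst) in edges
               if (src in part and dst in rest) or (src in rest and dst in part))

def toy_phi(elements, edges):
    """Search all bipartitions for the cheapest cut -- the analogue of the
    'minimum information partition'. Zero means the system falls apart
    into independent chunks, i.e. no integration at all."""
    return min(cut_cost(set(part), elements, edges)
               for k in range(1, len(elements) // 2 + 1)
               for part in combinations(sorted(elements), k))

print(toy_phi(elements, edges))   # -> 1: even the cheapest cut severs something
```

Note that removing the single bridge connection ("C", "D") would drop the toy phi to zero, since the system would then separate into two fully independent chunks.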

• Given that Phi should be a generic measurement applicable to any data processing system, Tononi compares the brain to other mechanical devices that process information. The ability of the brain to cross-reference and cross-talk so many of its information flows in a recursive fashion gives it a very high "Phi", distinguishing it from devices that use information from one source (say a temperature sensor) to do but one function (e.g. turn the furnace on or off) without any feedback mechanisms that allow reconsideration of the most recent outputs. For humans, by contrast, the information that our skin temperature sensors send to the brain is intermixed and cross-referenced in myriad ways with other information flowing into and through the body/brain system (e.g. from the eyes, nose, ears, muscles, memory areas, etc.), and processed in a recursive fashion; the overall outputs tell our body mechanisms to make adjustments, inform our executive center on what sort of clothes to use, define our feelings of comfort or discomfort, and set our moods and high-level outlook about our lives. And all of these conclusions are then re-inputted and thus re-considered for updating as new information comes in from our skin and other senses (along with internal memories being drawn upon).

• The healthy human brain that is in a state of "consciousness" takes in a LOT of information about the world (e.g., the smells of food, the intentions of the Russian government, the memories of a sunset in San Francisco, the voice tones that we hear, the funny feeling in our stomachs, the twinkling of the stars, the chemical structure of gasoline additives, the music of a Bach concerto . . . on and on) and intermixes it all using recursion to construct a VERY big and rich informational picture of the world.

• According to Tononi and his I.I.T. and "Phi" concept, however, consciousness does not relate to the fact that the human brain-mind system intakes such a vast assortment of information; nor even that it ekes out so much usable information regarding relevant patterns and underlying trends about our world during processing. No, according to Tononi, the key factor is recursion. As such, Tononi affirms what Douglas Hofstadter put forth in his “Strange Loop” concept of consciousness, i.e. that the more recursion there is in an information system, and the more that recursion does in terms of helping to process and internally interpret each new round of input data, the more conscious the overall system is. Integrated Information and Phi also relate strongly to Gerald Edelman's "Re-Entrant Looping" concept of consciousness. Phi could be considered an attempt (at least on the conceptual level) to quantify and measure the strength or importance to conscious experience of a particular re-entrant loop arrangement.

• Tononi also presents interesting math-like and graph-like ways of thinking about qualia, i.e. conscious perceptions. His theories involve a “qualia-space”, a multi-dimensional “state-space” analysis in which abstract mathematical qualities make up the dimensional axes of an abstract state-space. Tononi describes these dimensions and how any individual “quale” occupies a particular space or shape in the overall qualia state-space. This space or shape can be related to other quale shapes, can be seen to change over time, and can (in theory) be analyzed in terms of the reported qualities and effects that the quale has on the individual. Note that the number of dimensions used to represent the nature of qualia exceeds the usual length, width and height that can be visualized on graph paper; Tononi's analysis involves extremely complex hyper-dimensional geometric concepts.

• Although most descriptions of qualia imply that a "quale" is a significant portion of one's overall "conscious picture" of the world, such as a red rose or a bright turquoise color patch in a paint store, it is not necessarily the whole of the overall experience (which would include background sounds, smells, sensations, body states and other visual features). However, Tononi interprets qualia as the "overall experience" of consciousness at any instant, and holds that the "quale of experience" is comprised of "concepts" or "mechanisms" which presumably reflect the various sensory components of a conscious state. These concepts / mechanisms are supported by "complexes", to be discussed next. In his book Phi (at p. 277), Tononi says that the "many mechanisms of a complex, in various combinations, specify repertoires of states they can distinguish within the complex, above and beyond what their parts can do: each repertoire is integrated information – each an irreducible concept. Together they form a shape in qualia space. This is the quality of experience, and Q is its symbol."

Complexes -- Sub-Components With Phi

• An information system is composed of multiple processing elements (i.e., components that combine data flows, distribute a unified flow, or process a data flow using an algorithm, akin to what we use in computer programs); often very many elements. The Phi calculation method is usually applied first to the overall system of elements. However, you could also take a limited group of elements within the overall system, apply the Phi calculation exclusively to those elements, and find that this grouping in itself has some "Phi". These element groupings are called “complexes”. They have cross-integration and recursion within their own limits, resulting in some Phi of their own. Complexes can overlap (share certain elements with other complexes in the overall system), or be independent "black boxes" that "plug in" to the main system.

• An interesting side-effect of I.I.T. results from the fact that a non-conscious system (i.e., having little or no Phi) can be composed of one or more sub-systems (i.e. "complexes") that have substantial Phi of their own. Thus, an overall functional entity may have little or no conscious quality to it, but some of its parts (i.e. "complexes") could be conscious.
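• This side-effect can be shown with the same kind of crude cut-based stand-in for Phi used above (again, an invented toy, not the real I.I.T. computation): a system whose overall toy phi is zero can still contain sub-groups of elements with positive phi of their own.

```python
from itertools import combinations

# Two internally recursive clusters with no connections between them
# (an invented example). The whole system cuts cleanly in two, so its
# toy phi is zero; yet each cluster, evaluated on its own, has positive
# toy phi -- in I.I.T. terms, each cluster is a "complex".
whole = {"A", "B", "C", "D", "E"}
edges = {("A", "B"), ("B", "A"),                 # complex 1: mutual feedback
         ("C", "D"), ("D", "E"), ("E", "C")}     # complex 2: a 3-element loop

def toy_phi(elements, edges):
    """Cheapest bipartition cut, counting severed connections -- a crude
    stand-in for the minimum information partition."""
    def cut_cost(part):
        rest = elements - part
        return sum(1 for (s, d) in edges
                   if (s in part and d in rest) or (s in rest and d in part))
    return min(cut_cost(set(p))
               for k in range(1, len(elements) // 2 + 1)
               for p in combinations(sorted(elements), k))

def restrict(group, edges):
    """Keep only the connections internal to a candidate complex."""
    return {(s, d) for (s, d) in edges if s in group and d in group}

print(toy_phi(whole, edges))                              # -> 0: no overall integration
print(toy_phi({"A", "B"}, restrict({"A", "B"}, edges)))   # -> 2: a complex with its own phi
```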

• Tononi has thus noted that an overall system must generate its own "Phi", and cannot "piggyback" on any Phi occurring within an independently functioning, input-output "black box" component of that system. E.g., if three people form a small business that operates successfully over time, perhaps that business enterprise will gain some small degree of Phi; but it cannot be said to have Phi equal or superior to a human's just because it contains human components.

• Since our own brains may encompass internally recursive structures or ad-hoc neuron connection arrangements that process information independently or semi-independently (and thus can be considered "complexes"), we all theoretically have independent consciousnesses going on unbeknownst to our overall conscious awareness! (Perhaps this is consistent with the notion of "sub-consciousness".) In fact, there may be many of them, if each set of clustered neurons that interpret some aspect of sense data using neural-networking architecture is counted (akin to the "maps" discussed in Sections 24 and 31), along with each set of excitatory and inhibitory neurons that regulate individual body functions.

• Tononi also adopted an anti-nesting exclusion postulate indicating that an entity with semi-independent / partly overlapping sub-components cannot be considered "independently conscious" if any of its overlapping components have a Phi level superior to the whole, i.e. are more conscious than the collective. If there is any level of functional overlap or sharing between the sub-systems (i.e., they share some elements and thus are more than plug-in "black box" devices), consciousness will exist only in the sub-system(s) with the superior degree of informational integration. One portion must become the tail, the other the dog (even if the "tail" is then the overall system, and the "dog" is the high-Phi component).

• A crude example / analogy for this involves enchiladas. An enchilada, once baked, is a system of components (sauce, meat, cheese, tortillas, chilies, etc.) which chemically and molecularly overlap because of the heating process. When you eat an enchilada, does its "Phi" (in the sense of flavor) come from the whole enchilada, or from some part within it? If all of the parts work together such that when you bite in, you don't notice any one of them, i.e. you only experience "the whole enchilada", then it seems obvious that the "flavor Phi" of each component was less than the "flavor Phi" of the whole. By comparison, let's say that the sauce had way too much garlic or coriander, or the chilies were very hot and assertive; in that case, you would not experience "the whole enchilada", but instead would notice the overly-spicy sauce or the raging hot chilies. Your experience would be defined by the "flavor Phi" of the superior complex (the sauce or the chilies), because its Phi exceeds the Phi of the overall system (i.e., the baked enchilada). The enchilada thus no longer has an independent "flavor consciousness".
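• The anti-nesting rule itself reduces to a very simple selection, sketched below with the enchilada analogy. The "flavor Phi" numbers are entirely made up for illustration; nothing here is computed from real I.I.T. mathematics.

```python
# Toy sketch of Tononi's exclusion postulate: among overlapping candidate
# systems, only the one with the highest phi counts as conscious; the
# rest are excluded. The phi values below are invented.
def conscious_entity(candidates):
    """candidates: dict mapping each candidate system's name to its phi.
    Returns the single candidate that the exclusion rule selects."""
    return max(candidates, key=candidates.get)

# A well-blended enchilada: the whole outranks every part.
blended = {"whole enchilada": 5.0, "sauce": 1.0, "chilies": 1.2}
# Too much coriander and raging-hot chilies: a part outranks the whole.
spicy = {"whole enchilada": 2.0, "sauce": 3.0, "chilies": 4.5}

print(conscious_entity(blended))   # -> whole enchilada
print(conscious_entity(spicy))     # -> chilies
```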

• This rule recently helped Tononi to wiggle out of a criticism by philosopher Eric Schwitzgebel that the United States is an interactive system with a sufficient level of data integration and feedback to have "Phi", and thus has consciousness (recall the "China Brain" thought experiment, which is similarly critical of functionalism). Given that humans, as significant components of the United States, manifest superior consciousness, any Phi attributed to an interactive amalgam (i.e., the USA) of overlapping components that within themselves have more Phi than the overall amalgam is not true consciousness, per Tononi. Although these "caveats" to the application of Phi make sense in many contexts, Schwitzgebel points out that they may not amount to usable generalizations, and will not always resolve the "borderline" cases in a sensible fashion.

• According to Tononi, the "complexes" in our own human brains and minds reflect the subjects of our experiences. They reflect the many ways that we evaluate our "picture of the world around us" (or perhaps it is more like a movie), and how we likewise evaluate our thoughts. I.e., from a functional perspective, our "complexes" group around the many topics of importance to us in our lives and in our survival concerns, such as warm versus cold, safe versus unsafe, loud versus soft, hostile versus friendly, pleasurable versus painful, etc.

• The Phi concept appears to be useful in explaining some aspects of the human brain and mind system. A key example cited by Tononi and his supporters regards the cerebellum. The cerebellum has more neurons than the cortex; and yet it can be removed without significantly impacting a person's consciousness. Although the workings of the cerebellum are not completely understood, this portion of the brain is generally equated with motor control and coordination (although one study indicates its involvement in memory retrieval). It is felt to be something of a specialty section, a brain area that functions more like the conventionally-programmed computers that we now have in most every office, home and pocket. As such, the cerebellum is mostly a feed-forward system, and does not have a high degree of cross-talk among its lines of processing. More importantly for I.I.T. and Phi, it does not take its outputs and feed them back into its processing inputs. The fact that the cerebellum is not critical to human consciousness thus seems to accord with the underlying assumptions of Phi and the I.I.T.

• Another interesting example regards the dreamless, slow-wave phase of sleep, when humans are minimally (if at all) conscious. There is increasing evidence that the brain and mind are not entirely inactive in this phase; some experiments indicate that subconscious notions are formed or strengthened during this part of the night. However, even if some thought processing is going on, the overall feedback-loop structure is largely unplugged (and thus Phi is low); Tononi believes that this would explain why the brain is not in any way aware of this nocturnal cogitation (unlike dreams, which can be remembered).

Memory and Behavior Issues

• Despite its promise, the I.I.T. / Phi approach has been criticized for propounding a continuum of consciousness, and yet not being able to explain why low-Phi states such as deep sleep, sleepwalking episodes, absence seizures or the state of anesthesia are not sensed on a continual basis, i.e. why conscious experience as we know it seems to have a "cut off". Tononi states that "Perhaps a whiff of consciousness still breathes inside your sleeping brain but is so feeble that with discretion it makes itself unnoticed" (in his book "Phi" at p. 275). But then, what is the "discretion" process that makes some human consciousness "unnoticed"? Is it that we can't remember such "low consciousness" because our memory functions are largely inactive at the time, such that even the few seconds of time needed to waken someone would dissipate any mental trace of this experience? The vexing question then arises as to whether "low consciousness" could ever be empirically verified.

• (However, conceptually, a low level of Phi in the human brain might correlate with the disengagement of memory storage functions, along with minimal system recursion. Recall that recurrent looping, i.e. feedback among the brain's processing elements [akin to "maps", see Section 31], in and of itself establishes a form of short-term memory. A low-Phi brain state would thus have but a fleeting "feedback memory", and probably no access to specialized memory formation functions. These factors possibly explain why a threshold between what we know of as waking consciousness and unconsciousness is crossed as Phi decreases, at least for the human brain. The question of conscious awareness in non-human systems remains a mystery. However, one could speculate that the "minimal consciousness" resulting from a low-Phi condition in any mechanism cannot influence its behavior due to the unrecorded and fleeting nature of that conscious experience; but once Phi increases to the point where a "system-wide memory echo" arises within the system's information feedback loops, then perhaps some behavioral effect from consciousness [or correlating with the state of consciousness] would occur. The behavioral impact would be amplified still further when specialized memory devices that can archive selected system-states and state dynamics are integrated into the overall system.)

Other Criticisms

• Computer scientist Scott Aaronson has put forth a variety of criticisms of Tononi's I.I.T., including the fact that a complex parallel feed-forward system has absolutely no “Phi”, i.e. no consciousness-related quality, whereas a very simple and trivial recursive system would show a small but positive Phi. As such, Tononi's Phi measure would equate the quality of consciousness to too many things in the world. It implies that the quality of consciousness ultimately has one threshold condition, i.e. the presence of cross-linked information-recursion / feedback-of-output between multiple columns of information processing. Once a system has any degree of that, its degree of consciousness is graded; and thus, consciousness can (theoretically) be found in small amounts even in certain simple mechanical systems, and in many living things. (I.I.T. is related to, but somewhat different from panpsychism, which attributes some degree of consciousness to everything; but "Phi" still seems to hit too many targets).
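• Aaronson's contrast can be illustrated by reducing Phi to its bare precondition in I.I.T.: the presence of a directed cycle, i.e. output recursion. In the sketch below (node names and wiring invented, and "phi > 0" simplified to mere cycle detection rather than any real computation), a wide, heavily cross-linked feed-forward network fails the test while a trivial two-element loop passes it.

```python
# Cycle detection by depth-first search: a feed-forward system (a DAG)
# has no cycles and hence zero Phi under I.I.T., however large and
# cross-linked; any feedback loop, however trivial, makes Phi positive.
def has_recurrence(nodes, edges):
    adj = {n: [] for n in nodes}
    for s, d in edges:
        adj[s].append(d)
    state = {n: "unvisited" for n in nodes}

    def dfs(n):
        state[n] = "in-progress"
        for m in adj[n]:
            if state[m] == "in-progress":   # back edge: a feedback loop
                return True
            if state[m] == "unvisited" and dfs(m):
                return True
        state[n] = "done"
        return False

    return any(state[n] == "unvisited" and dfs(n) for n in nodes)

# A parallel feed-forward network with plenty of cross-talk, but no feedback:
ff_nodes = ["in1", "in2", "h1", "h2", "h3", "out"]
ff_edges = [("in1", "h1"), ("in1", "h2"), ("in2", "h2"), ("in2", "h3"),
            ("h1", "out"), ("h2", "out"), ("h3", "out")]

# A trivial two-element recursive system:
loop_nodes = ["x", "y"]
loop_edges = [("x", "y"), ("y", "x")]

print(has_recurrence(ff_nodes, ff_edges))      # -> False: zero Phi despite complexity
print(has_recurrence(loop_nodes, loop_edges))  # -> True: a small but positive Phi
```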

• The I.I.T. and Phi bring up important issues relevant to the abortion debate, i.e. when does conscious human life begin? Philosopher Paul Churchland said that from the perspective of neural network activity in the brain (see Section 30), "there is no network activity within the first or second trimester fetus, because there is as yet no network there" (in his book The Engine of Reason, the Seat of the Soul, p 308). However, EEG studies indicate fetal brain wave activity as early as 6 weeks after conception, well within the first trimester. Brain waves indicate the presence of synchronized multi-neuronal activity, which arguably points towards some level of Phi. If Tononi's Phi is strictly equated with the quality of being conscious, regardless of degree and other conditions, then even first trimester abortion must be interpreted as terminating the life of a conscious human being (albeit, one not yet able to behaviorally respond to that conscious state). Given the objections that other neuroscientists have had regarding early fetal consciousness, the interpretation of Phi as an absolute correlate to consciousness remains problematic.

• It also needs to be said that most of the I.I.T. is theoretical and has not been applied to any real human being. Although Tononi presents complex math equations, they are not completely specified in terms of observable data from any human brain or mind (i.e., physiological or psychological data), especially with regard to qualia space. And even to the degree that certain aspects of the theory, such as the Phi factor, can be computed for simple systems or experimental “toys”, it would require enormous computer processing capacity to apply to even a simple mammal's actual brain. Christof Koch said that to accurately evaluate Phi for a roundworm brain would be utterly unfeasible, even using all of Google’s more than 100,000 computers.

• At least in a conceptual sense, I.I.T. and Phi measure the information that a parallel-stream data processing system with feedback and cross-talk contributes, in and of itself, to the raw information coming in from the inputs to that system -- a gauge of the pattern recognition and stability of output that the system imposes on the raw data. The design of the system and the 'training' within it arguably add informational value (reduction of the overall uncertainty regarding output distributions) to the inputs. But I.I.T. and Phi remain focused upon recursion and the stability of output that it causes. Why is this necessarily inherent to the nature of what conscious experience is? Recursion may be a correlate to consciousness -- it may be a necessary condition, but is it sufficient? Is positive Phi always an indication of consciousness? Aaronson's examples indicate that perhaps this may not be the case, not in any sensible way. And are there no other key requirements of consciousness? (These questions are similarly relevant to Hofstadter's "Strange Loop" theory of consciousness.)

• In sum, the I.I.T. and Phi are an attempt at a mathematically-oriented description of consciousness. As science writer Margaret Wertheim said, Tononi "wants to explain subjective experience by generalised empirical rules, and he tells us that such experiences have shapes in a multidimensional mathematical space." The Integrated Information Theory also implies and accepts that most any information machine with feedback (recursion) supports the "geometry" of conscious experience, to some degree; but as with Douglas Hofstadter and his Strange Loops, Tononi does not provide compelling reasons as to why such output recursion should cause what we know of as consciousness. He basically starts with the idea that the human brain is conscious and has a high degree of cross-integration connectivity and output recursion, and concludes that recursion must therefore be the key [not "a" key, but "THE" key] to cross-integration and thus to consciousness.

Phi: Necessary But Not Sufficient

• Phi and I.I.T. do NOT directly gauge the inherent value of information synergy and pattern recognition, i.e. the idea that certain independent information, when grouped and cross-analyzed together, adds increased, synergistic value to the system -- value in terms of relevance and "meaning" to the system's survival and well-being, not just in the Shannon "bulk-information" sense. I.e., I.I.T. focuses on the reduction of output uncertainty stemming from systemic recursion, but does not address the relevance of that output to the organism's survival concerns and overall purpose. A linear feed-forward system with high amounts of cross-talk during its processing steps has zero Phi; and yet, it could add synergistic value in terms of relevance by identifying otherwise unseen trends and patterns.

• Tononi defines “information” somewhat more expansively than Shannon, saying that information is basically the reduction of uncertainty. However, for a living system trying to survive in a hostile environment, which humans and all other fauna and flora do, something more is needed than Shannon's basic "entropic bulk" notion of information, and even more than an overall “reduction of outcome uncertainty”. What "uncertainty" is to be reduced? Relative to what outcomes? If a system can produce millions of different outputs, and the gross uncertainty regarding which answers are more likely is reduced, this may still not be relevant to the overall functioning of a system "out in the wild".

• The notion of informational relevance and usefulness, beyond gross improvement (narrowing) of the overall number of output possibilities, certainly does become a key factor in the real life of an organism that has to fend for itself. And Phi mostly misses this. The question remains as to which concerns are most relevant to the human organism and its mind. Many analysts avoid this question for being extremely difficult to objectively define and quantify. And to the degree that they do address it, they usually stop at reproductive success and the ability to impose one's genes onto future generations, as Dawkins would have it. Is there something more to human life? Phi diverts us from this key issue, a question regarding the "je ne sais quoi" of human existence . . . i.e. human consciousness.

• Another potential criticism is that Phi can be quite high for a brain with no neurons firing! In effect, Phi is not sensitive to the throughput of information, to the volume of information flow. It looks at the system design, at the wiring diagram, at what would happen if information were to be exchanged; but not at how much information is or could actually be processed (akin to classic Shannon information concerns). Could a mostly inactive and yet complex and recursive system, operated occasionally for short spells, be considered just as conscious as a similar system interacting with its environment on a continuous basis? (Sort of like asking whether a field-goal kicker on a football team is just as "game conscious" and strategically involved as a quarterback or the coach.)

• Also, Delio/Damasio's “distal factor” in the evolutionary emergence of consciousness implies that the breadth of information and conceptual understanding (especially regarding time) is also an inherent factor which sets a border between the conscious and the non-conscious. This “information breadth” factor may incidentally contribute to the Phi calculation and to a system's outcomes when there is recursion, but it does not appear to be intentionally accounted for within the Information Integration Theory. Another concern would be the importance of "memory engagement" to consciousness as humans normally experience it; see Section 27. To the degree that a consciousness-enhancing "memory echo" is mediated by recurrent looping, Phi might account for this; but if the engagement of specialized memory areas is also necessary, then Phi would not fully reflect the need for memory engagement to avoid James' "great blooming, buzzing confusion".

• As an example of the relevancy limitations of Phi and I.I.T. for an environmentally challenged system (as well as Phi's lack of appreciation for information breadth and rate of flow relative to the environment), consider the fact that our federal law enforcement and intelligence agencies were hardly able to 'cross talk' their info on the Al Qaeda personalities involved in the 9-11 attacks during the months prior. They also didn't do much monitoring and updating of what they did know. Had the overall system (i.e., the federal government) been set up for more constant monitoring and in-process cross-talk, it might have detected patterns that pointed to an evolving plot, in time for a response. But the “stove-pipe” system of parallel processing among different agencies lacking real-time information kept the overall government from seeing (i.e., "being conscious of") something valuable. Each agency saw a little part of the picture, and at different times; an overall awareness would have been available only through a broad and continuous inter-relation of the parts (i.e., joint analysis), and through continual interaction with the environment (actionable "real-time" data), regardless of output recursion (feedback of individual conclusions) within or across those components. I.e., mere feedback of bad individual outputs, however cross-distributed, would not have made for a better overall output -- even if the Phi measurement might have been higher.

• Even with extensive inter-agency monitoring and in-stream cross-talk on a continuous basis, however, I.I.T. and Phi would hardly distinguish between the better system and the one that failed to respond. This is NOT to say that an improved US government intelligence mechanism should be considered "conscious", at least not in the sense of Chalmers' "hard problem". Perhaps "general awareness" or "system-wide awareness" are better ways to describe what Phi actually captures, albeit imperfectly. But it does suggest that cross-talk and continuous monitoring are necessary although not sufficient conditions of consciousness, relative to its environment. As such, perhaps Phi does not properly weigh all aspects of “organic awareness”, and thus is an incomplete (and in some ways misleading) gauge of what consciousness subsumes.

• Admittedly, however, an even better intelligence system, wherein the cross-talking functions recursively reconsider each collective “picture” that they draw (for purposes of improving that picture as new information comes in) might move the “Phi” needle somewhat higher, given the increased "awareness" that would result.


. . . if you'd like to talk about this: eternalstudent404 AT gmail DOT com

Last Updated: November, 2014