The Guardian has published this edited extract from The Idea of the Brain by Matthew Cobb.
Why your brain is not a computer
(The book will be published in the UK by Profile on 12 March, and in the US by Basic Books on 21 April.)
The analogy of the brain to a digital computer is the founding metaphor of Cognitive™ Psychology and of its Siamese twin, Generative™ Linguistics. It has been a vital element in marketing these fields successfully to funders, students, and the public, from the late 1950s right up to the present. During the final preparation of my chapter in the Interdisciplinary Handbook of Perceptual Control Theory for Elsevier, I verified with Henry Yin that neuroscientists may be approaching consensus in abandoning the computational metaphor because, however appealing, it is unsupported by evidence. I have written elsewhere about why this metaphor has been so appealing, and I will touch on that briefly at the end of this post.
Cobb says “neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry.” In the Handbook, Henry Yin’s chapter is about this crisis in neuroscience. He says much the same as the German neuroscientist Olaf Sporns, whom Cobb quotes: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” I am preaching to the choir here when I say that the framework that provides the needed organizing principles is Perceptual Control Theory.
Cobb calls out the neuroscience observation that systems in the brain
“are interconnected, and … are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these [systems] to the behaviour of the animal misses the point of all that processing.”
PCT gives strong evidence that those systems are predominantly structured as negative-feedback control loops. Each (except at the sensory periphery) combines a number of lower-order perceptual signals into one perceptual signal; each delivers that one signal as an inhibitory input at a synapse with an internally generated excitatory reference signal, yielding an error signal. The reference signal at a given level results from combining the error outputs of one or more higher-order systems. Error outputs at the motor periphery produce effects upon whatever in the environment is the origin of the controlled perceptual input signals. All ‘computation’ is analog. By these means, “the brain does not represent information: it constructs it”.
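To make the loop concrete, here is a minimal discrete-time sketch (my own illustration, not code from PCT literature; the gain, time step, and identity input function are assumptions, and the nervous system's actual computation is analog and massively parallel):

```python
# One PCT-style negative-feedback control loop, reduced to its skeleton:
# the perceptual signal is compared (as the inhibitory term) against an
# internally generated reference; the resulting error drives an output
# that acts back on the environment, opposing the disturbance.

def run_loop(reference, disturbance, steps=2000, gain=50.0, dt=0.01):
    """Return the loop's perceptual signal after `steps` iterations."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        env = output + disturbance       # environmental variable: output effect plus disturbance
        perception = env                 # perceptual input function (identity, for simplicity)
        error = reference - perception   # excitatory reference minus inhibitory perception
        output += gain * error * dt      # integrating output function
    return perception

# With reference 5.0 and a constant disturbance of -3.0, the loop settles
# with perception at the reference and output at 8.0, exactly cancelling
# the disturbance's effect on the controlled variable.
```

Note that the loop never computes the disturbance; it only acts to keep its perception at its reference, and disturbance cancellation falls out of the loop structure.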
PCT is not a metaphor. It does not propose an analogy from the brain to familiar physical mechanisms. Borrowing two phrases from this article, PCT is not “ordering us to do something”, it is “telling us the truth” because it provides an empirically testable quantitative model.
Preaching to the choir again: this model enables construction of computer simulations which have been demonstrated many times to match the behavior of the modeled organism almost perfectly (correlations near 1.0): measured inputs to perception, measured disturbances to the state of that aspect of the environment whose perception is controlled, and measured behavioral outputs which (if control is occurring) precisely negate those variable and unpredictable disturbances, continuously and with effectively no lag. The brain is a ‘black box’ for which the PCT model is a ‘white box’. The fidelity of performance of such simulations, and the explanatory generality of the model, are strong evidence that there are systems in the brain whose structures and functions are identical to those of the control loops in the white box of the model.
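A toy version of such a demonstration (a sketch under my own assumptions, not the actual tracking software used in PCT research) drives a simple integrating control loop with a smooth, unpredictable disturbance and measures how closely the loop's output mirrors the disturbance's inverse:

```python
import math
import random

def tracking_correlation(steps=5000, gain=100.0, dt=0.005, seed=1):
    """Correlation between a control loop's output and the negated disturbance."""
    random.seed(seed)
    # A smooth, unpredictable disturbance: a few sinusoids with random phases
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(3)]
    freqs = [0.3, 0.7, 1.1]  # Hz

    def disturbance(t):
        return sum(math.sin(2.0 * math.pi * f * t + ph)
                   for f, ph in zip(freqs, phases))

    output = 0.0
    outs, neg_dist = [], []
    for n in range(steps):
        d = disturbance(n * dt)
        perception = output + d       # controlled variable (e.g. cursor position)
        error = 0.0 - perception      # reference: hold the cursor at zero
        output += gain * error * dt   # integrating output function
        outs.append(output)
        neg_dist.append(-d)

    # Pearson correlation of output with -disturbance
    m_o = sum(outs) / steps
    m_d = sum(neg_dist) / steps
    cov = sum((o - m_o) * (d - m_d) for o, d in zip(outs, neg_dist))
    var_o = sum((o - m_o) ** 2 for o in outs)
    var_d = sum((d - m_d) ** 2 for d in neg_dist)
    return cov / math.sqrt(var_o * var_d)
```

With these (arbitrary) parameters the correlation comes out well above 0.95, illustrating in miniature the output-mirrors-disturbance fits reported in the tracking studies.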
That said, at the higher levels of the perceptual hierarchy which are of the greatest interest outside the subfield of the psychology of motor control, precise quantitative measurement and simulations have so far been difficult to achieve.
Cobb mentions how different parts of the brain are distinct in anatomy, development, and evolutionary origin. How these are integrated is becoming clear in the PCT model, but an enormous amount of work still lies before us; this is true of neuroscience as well as of PCT. For example, Cobb says
“… most neuroscience sensory research has been focused on sight, not smell; smell is conceptually and technically more challenging. But the way that olfaction and vision work are different, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.”
The main distinction in PCT is at the lowest levels, where the perceptual hierarchy has a somatic branch concerned with internal conditions in the body and a behavioral branch by which the body interacts with its environment. Of perceptions in the somatic branch we usually have no awareness, but it encompasses much more than the homeostasis first identified by Cannon in the early part of the 20th century. I have lately taken some interest in its role in what we identify as ‘feelings’ and in the inchoate sensations in the body from which the brain constructs the higher-level perceptions that we call emotions. There is some evidence that expansion of the cerebellum in humans may be the anatomical basis of the kind of cognitive process that is analogous to the manipulation of configurations. PCT explains the consistency of controlling perceptions of other agents on a par with perceptions of inanimate objects, and it also opens the way to understanding conflict, cooperation, and all forms of collective control by which we create, sustain, and benefit from social arrangements and the patterned perceptual constructs of a culture.
Outside of neuroscience, the process of abandoning the computational metaphor might exemplify Bohr’s observation (in physics) that the old generation has to die first. There are many reasons for the “over my dead body” effect. Cognitive™ Psychology still does not question the premises of Miller et al. (1960), Plans and the Structure of Behavior, because the stepwise T.O.T.E. scheme sort of works for the higher levels of control (as long as you don’t actually put it to the test of survival in an unpredictable environment). The institutionalization of CogSci was supported by prestigious connections to Harvard and MIT and was self-consciously marketed as a ‘revolution’ in academic politics. S–{information processing}–R linear causation answers the old yearning of economic, political, and military patrons for “prediction and control of behavior”. It opened the purse strings for research during the Vietnam war, with an abundance of funding through ARPA and DARPA seeking practical solutions for voice-actuated command and control (“Computer, what is the disposition of the enemy in the western quadrant?”), real-time translation, encryption, robotics, and all the shiny baubles of artificial intelligence – see e.g. Murray (1998) and Knight (2016) for documentation. Deans placed their bets with that money on the sexiest-looking stuff to attract students and impress alums. And once the field was established, reputation, standing in the field, publishability, self-image, and many other important perceptions came to be controlled by conformity to what became the mainstream.
But ‘mainstream psychologists’ can no longer gesture down the hall and over the wall to that other discipline of neuroscience. That hand-waving no longer works.
Knight, Chris (2016). Decoding Chomsky: Science and Revolutionary Politics. New Haven: Yale University Press.
Murray, Stephen O. (1998). American Sociolinguistics: Theorists and Theory Groups. Amsterdam: John Benjamins.
Nevin, Bruce E. (2016). Understanding the Labyrinth: Noam Chomsky’s Science and Politics. Review of Knight (2016). The Brooklyn Rail, September 2016. URL: https://tinyurl.com/y5j3oge7