The pervasiveness of reorganization

The question might not be under what conditions does the brain reorganize.
The right question might be what slows it down and promotes stability.

Pervasiveness-of-reorganization.pdf (881.6 KB)

Rapid and continuous “synapse turnover” is characteristic of fetal development. The above findings show that it does not cease with maturation. On the other hand, failure to rein it in is implicated in autism:

Imaging Synapse Formation and Remodeling In Vitro and In Vivo

Although more than 90% of spine synapses are stabilized for more than [missing number?] months in the adult mouse neocortex, neural circuits were found to be under active rewiring in the early postnatal period, with the rate of synapse turnover comparable to that in dissociated cultures. Subsequent stabilization of spine synapse dynamics takes place at postnatal 3–4 weeks. Because multiple mouse models of autism spectrum disorders show identical phenotypes in upregulation of spine turnover at postnatal 3 weeks, prolonged functional impairment in the adult neocortex may be derived from increased misconnection of developing cortical neurons through accelerated turnover.
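
As a rough way to picture what "turnover" and subsequent "stabilization" mean in that quote, here is a toy two-state spine model. The number of sites, the flip rates, and the day-25 switch are all invented for illustration; none of it is taken from the paper.

```python
import random

def simulate_spines(n_sites=1000, days=60, seed=0):
    """Toy model of spine turnover: each potential site is either present
    (True) or absent (False).  The daily probability of changing state drops
    sharply after ~day 25, mimicking the reported stabilization of spine
    dynamics at postnatal weeks 3-4.  Illustrative rates only."""
    rng = random.Random(seed)
    spines = [rng.random() < 0.5 for _ in range(n_sites)]
    for day in range(days):
        # High turnover early in development, low turnover after "maturation".
        p_flip = 0.20 if day < 25 else 0.01
        spines = [s if rng.random() > p_flip else not s for s in spines]
        if day % 10 == 0:
            print(f"day {day:2d}: {sum(spines)} spines present, p_flip={p_flip}")
    return spines

if __name__ == "__main__":
    simulate_spines()
```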

I feel that this could be one core purpose of consciousness: if we think of consciousness as purely functional simulation, then it doesn’t matter which specific neurones do the coding, only how they function relative to one another, so that the whole organisation can survive as the same functional organisation.

This January, an article in Science (Chvykov et al., Science 371, 90–95, doi: 10.1126/science.abc6182) introduced me to a measure of organization they called “rattling”. To cut a long story short, “rattling” is closely related to diffusion in its effects on an organization. An organization of interacting units will tend to drift, faster when the rattling measure is higher, slower when rattling is lower for the entire organization.
Hence, organizations will tend toward lower-rattling configurations, no matter what the interacting units are. They could be sub-organizations, mechanical bots, or, in cases of interest to PCTers, control loops whose actions influence other control loops through direct effects or side-effects. Whatever kind of entities constitute an organization, it tends to drift less as it matures, and the cross-influences among its units shift, on average, from confrontation toward cooperation.
The actual theory is much more sophisticated, but it seems to fit both what is described in the PDF and what we know as reorganization of the perceptual control hierarchy.
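
For readers who want to see the drift-toward-low-rattling idea in the smallest possible cartoon, here is a sketch. The state space, the sigma() profile, and every number are invented; this is not the Chvykov et al. formalism, only the qualitative point that configurations where fluctuations are weaker accumulate probability.

```python
import math
import random

def rattling_walk(steps=200_000, seed=1):
    """Minimal caricature of 'low rattling selects itself': a walker on
    [0, 1) whose local noise amplitude sigma(x) stands in for rattling.
    Where sigma is small the walker drifts slowly, so it spends most of
    its time there -- low-rattling configurations accumulate probability."""
    rng = random.Random(seed)

    def sigma(x):
        # Low-rattling well near x = 0.25, high rattling everywhere else.
        return 0.002 + 0.05 * (1.0 - math.exp(-((x - 0.25) ** 2) / 0.005))

    x = rng.random()
    occupancy = [0] * 10                  # histogram over ten bins of [0, 1)
    for _ in range(steps):
        x = (x + rng.gauss(0.0, sigma(x))) % 1.0
        occupancy[min(9, int(x * 10))] += 1

    for i, count in enumerate(occupancy):
        print(f"bin {i}: {count:7d} {'#' * (50 * count // steps)}")

if __name__ == "__main__":
    rattling_walk()
```

With periodic boundaries and state-dependent noise, the walker ends up spending most of its time in the bins where sigma(x) is smallest, which is the sense in which "organizations tend toward lower-rattling configurations".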

I was reading the rattling chapter before the most recent interruptions. I hope to get back to it soon. Kind of turns the 3rd law of thermodynamics on its head.

I don’t see the bearing of consciousness here, Warren, except that it is pervasive, therefore also here. I take consciousness to be primal. All the difficulties arise when we try to make it emergent.

But I do take your point about the structures necessary for continued control of input being constantly recreated in use, like Neurath’s ship.

We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.

I am more in sympathy with C.S. Peirce’s metaphor (quoting from that linked WP article):

Inquiry is the process of acquiring beliefs by making adjustments to our body of background belief. We revise our beliefs (and add or subtract beliefs) so as to better account for and deal with experience. … Inquiry ‘is not standing upon the bedrock of fact. It is walking upon a bog, and can only say, this ground seems to hold for the present. Here I will stay till it begins to give way.’ (CP 5.589)
Misak, Cheryl (1995). Verificationism: Its History and Prospects. Philosophical Issues in Science. London; New York: Routledge, p. 113. doi: 10.4324/9780203980248.

Likewise, then, not just for beliefs (concepts and principles) but for all control loops. It’s boggy footprints all the way down the hierarchy.

Thanks so much for this metaphor - I’m using it!

Yes, I like the bog metaphor too. If it is a really wet and soft bog then you cannot stop at all but must walk quickly all the time. Also Martin’s last message about rattling helped a lot.

Now all the pieces are falling into place – I just had to listen to this Irish classic: Rattling Bog https://www.youtube.com/watch?v=WxgmAwoqr4E

(many nice versions on YouTube)

How does it turn the Third Law on its head?

I think the metaphor is somewhat misleading, in that the ship parts are treated as being of the same structural importance. The perceptual control hierarchy, however, rests on foundations that are more solid the more frequently the conditions they serve are encountered. The lowest levels serve the most frequently encountered requirements.

For example, if one wants to perceive oneself acting in any way whatever that involves moving parts of the body relative to other parts, one must be able to control perceptions of muscle tensions. At that level, control of perceived muscle tensions (absent structural damage) is rock solid, whereas control of the trajectory of a golf ball is not used in many situations, and is far from rock solid, even for professional golfers.
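
A minimal two-level cascade can illustrate why control is "rock solid" at the lower level and looser above it. The gains, the integrating environment, and the step disturbance below are invented for illustration, not a model of muscle physiology.

```python
def two_level_demo(steps=2000, dt=0.01):
    """Two-level control cascade: a fast, high-gain inner loop controls a
    'tension' perception; a slower outer loop controls a derived 'position'
    perception by setting the inner loop's reference.  The inner loop keeps
    its error tiny even while the outer loop is left with residual error."""
    tension = 0.0                       # variable sensed by the inner loop
    position = 0.0                      # variable sensed by the outer loop
    outer_ref = 1.0                     # where the outer loop wants 'position'
    K_INNER, K_OUTER = 50.0, 2.0        # loop gains (arbitrary)

    for step in range(steps):
        disturbance = 0.3 if step > steps // 2 else 0.0

        # Outer loop (slow, proportional): its output is the inner reference.
        outer_err = outer_ref - position
        inner_ref = K_OUTER * outer_err

        # Inner loop (fast, high gain): drives tension toward inner_ref.
        inner_err = inner_ref - tension
        tension += K_INNER * inner_err * dt

        # Environment: position integrates tension plus the disturbance.
        position += (tension + disturbance) * dt

        if step % 400 == 0:
            print(f"t={step*dt:5.2f}  inner_err={inner_err:+.4f}  outer_err={outer_err:+.4f}")

if __name__ == "__main__":
    two_level_demo()
```

After the step disturbance, the inner (tension) error stays near zero while the outer (position) error settles to a residual value set by the outer loop's gain, which is the qualitative contrast drawn above between tension control and golf-ball control.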

Metaphors are very useful as suggestive guides, but they are but metaphors, and unless the fundamental functions are the same for the metaphoric and the target situations, the guide posts are not very securely fixed, and following them too far can lead one drastically astray.

I should say consequences and corollaries of the 3rd law.

Control systems are negentropic. Consequently, the emergence of control systems is negentropic, as is improvement in their ability to control.

Rattling (through reorganization) promotes the emergence of control systems and improvement of their ability to control. A process which inherently tends toward equilibrium (entropy, ultimately nothing preferred over anything else) thereby promotes the emergence and refinement of control structures which inherently impose preferences upon their circumstances.
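
One way to make "control systems are negentropic" concrete is to compare the statistical entropy of a disturbed variable with and without a loop opposing the disturbance. The Gaussian entropy estimate and all parameters below are illustrative only; this is a statistical, not a thermodynamic, calculation.

```python
import math
import random

def entropy_of_samples(xs):
    """Differential entropy (nats) under a Gaussian approximation:
    H = 0.5 * ln(2*pi*e*var)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def run(controlled, steps=20_000, gain=5.0, dt=0.05, seed=3):
    """A slowly drifting disturbance acts on a variable x; optionally a
    proportional control loop opposes it.  Toy numbers throughout."""
    rng = random.Random(seed)
    x, d, samples = 0.0, 0.0, []
    for _ in range(steps):
        d += rng.gauss(0.0, 0.1)              # random-walk disturbance
        output = -gain * x if controlled else 0.0
        x += (d + output) * dt
        samples.append(x)
    return entropy_of_samples(samples)

if __name__ == "__main__":
    print(f"entropy without control: {run(False):.2f} nats")
    print(f"entropy with control:    {run(True):.2f} nats")
```

The controlled run shows a much lower entropy for the controlled variable; the uncontrolled run lets the variance (and hence the entropy) grow without bound.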

Haken talks about self-organization of open systems (‘synergetics’).

I look forward to learning from you where I am mistaken in this.

I wouldn’t say you are totally mistaken, except in one thing: you ignore the difference between open and closed systems.

Control systems reduce the entropy of the part of the environment involved in the perceptions controlled by the loops in the system. In the process they use a through flow of energy to export that entropy to the rest of the external environment (mitochondria to heat, or food to heat, possibly). A resting human is said to produce about 100W that will heat up a room the same amount as a 100W light bulb.
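
To put a rough number on the entropy export implied by that 100 W figure (taking it at face value and assuming a room near 300 K):

```latex
\dot{S}_{\text{export}} \;\approx\; \frac{P}{T_{\text{room}}}
  \;\approx\; \frac{100\ \mathrm{W}}{300\ \mathrm{K}}
  \;\approx\; 0.33\ \mathrm{W\,K^{-1}}
```

Roughly a third of a joule per kelvin per second is carried off into the surroundings; that export is what pays for the local entropy reduction achieved by control.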

A control loop is an open system. The “Laws of Thermodynamics” apply to closed systems, which by definition are not observable from outside the system itself. It’s not even clear that the entire Universe constitutes a closed system, given its ongoing expansion, which allows for continually increasing variation in the relationships among its components. Any system we can observe may be nearly closed, but it cannot be actually closed. How it is open “just a crack” determines what we can see in it. This applies just as much to quantum theories and observations (e.g. entanglement) as it does to living control systems. What you get is what you see.

One of the biggest problems of having a big brain with many synapses is the export of the heat produced by the firings of nerves and other chemical and electrical variations in the limited space allowed to the brain. That space is limited for at least two reasons, firstly so that a baby can be kept inside the mother for long enough to grow capable enough for basic survival “in the wild” while having a head that can still pass through the birth canal, and secondly to minimize the physical distance and hence the travel time of signals that coordinate across different parts of the brain.

Treating perceptual control as though the signals within and into a control loop or system of loops were a complete description of the process usually doesn’t matter for discussion or analysis. When you deal with rattling, however, it matters that the point of rattling is the reduction of the entropy attributable to the structure of an organization, as opposed to the reduction of entropy that is the objective of perceptual control itself. The entropy removed in the rattling process has to be swept away into the environment by a through-flow of energy, just as the entropy removed within a control system is.

Control systems are indeed negentropic — locally. So if there is any error in your most recent comment, it is only in ignoring that the “Laws” apply to closed systems, of which a control structure is not an example.

The point I intended with the metaphor derives from the article quoted above, Imaging Synapse Formation and Remodeling In Vitro and In Vivo.

The same control loop persists while the neurons and synapses that constitute it are continuously changing. In this respect, the bog metaphor is more apt than the ship metaphor. (Different structural members of a ship actually do not all have the same structural importance, but I don’t think that’s relevant to the point.)

The reference is to two views of how science works. Neurath’s image:

We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.

Neurath’s incrementalism opposes the Cartesian conception often advanced for PCT. Quite in keeping with the puritanical absolutism of his era, Descartes proposed that science always needs to tear down the entire prior edifice and build afresh on a new foundation.

Having now looked at the linked article, the metaphor that comes to mind is more of a “front”, as in a forest fire front or the front of a growing ice sheet on fairly still water. Using that metaphor, I think an appropriate name might be a “learning front”, during the building and reorganization of the perceptual control hierarchy. Behind and ahead of a “learning band”, like the band where there are visible flames in a forest fire, things are relatively stable, but all is flux within a band around the core of the front: cooling embers perhaps flaring up again behind the front, future fuel drying and heating and being ignited by flying embers ahead of the front. The front changes one stable form into a different, less complex stable form.

In the “learning front” application of the metaphor, synapse interaction patterns are relatively simple and stable behind the front (low levels in the perceptual control hierarchy), and are complex, incoherent, random interactions ahead of the front, susceptible to change as the front advances. Behind the front, the “learning fire” lacks fuel. In front of it lies plenty of fuel for reorganization into controllable perceptions whose control in the particular environment helps stabilize the intrinsic variable structures.

Stability that fits the developing control to the environment in which the living control system lives lies behind the “learning front”. Ecological stability that may have developed through complex internal feedback loops lies ahead of the front. The learning front turns one stability into the other and moves on as time passes and the organism matures.
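
Purely as a cartoon of the metaphor (not anything taken from the linked article), the three zones can be drawn as a one-dimensional advancing front:

```python
def learning_front(width=40, steps=12):
    """Toy 1-D picture of the 'learning front' metaphor.
    '#' = behind the front: already organized, stable low levels;
    '*' = the active band where turnover/reorganization is high;
    '.' = ahead of the front: not yet organized, 'fuel' for learning."""
    front = 0                     # leftmost cell still reorganizing
    band = 3                      # width of the active band
    for _ in range(steps):
        row = ""
        for i in range(width):
            if i < front:
                row += "#"        # stable behind the front
            elif i < front + band:
                row += "*"        # all is flux within the band
            else:
                row += "."        # future fuel ahead of the front
        print(row)
        front += 3                # the front advances as the organism matures

if __name__ == "__main__":
    learning_front()
```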

Anyway, that’s the metaphor I get from the link. It does seem to apply better than the metaphor of ship or bog, to me at least.

I like that!

The ‘burning’ question has always been, “What localizes reorganization to where it is needed?” My superficial grasp of rattling (and the leaf-shadow metaphor) suggests that it provides an account of that.

Other bases of metaphor include the development of pot-holes in roads, and their evolution into ‘washboard’ and ‘roller-coaster’ dirt roads; the role of wind in building ocean waves; and, perhaps best studied, the migration of sand dunes.

I don’t remember which version you have been reading, but I have removed everything about sand dunes (barchan and more general) and snowdrifts, as being interesting but not very relevant. Leaf piles are enough, or so I thought. The same applies to the downstream effects of irregularities in flows, such as washboard roads in traffic flows (whose effects in my youth I used to avoid by driving faster than the normal traffic speed). It’s all very interesting in itself, but overkill when dealing with PCT — or so I think.

Returning to the topic after our several digressions. I just posted a reference topic quoting from Bill’s 1963 and 1973 sketches of a reorganization system.

Much remains to be done for this to be more than a sketch. In B:CP the intrinsic signals and intrinsic reference signals are avowedly “convenient fictions”, and “This reorganizing system may prove to be no more than a convenient fiction; its functions and properties may some day prove to be aspects of the same systems that become organized.”
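
As one concrete reading of that sketch, here is a toy loop whose gain is changed at random at a rate that tracks an "intrinsic error" signal. Everything specific here (the error measure, the rates, the bounds) is invented for illustration, not taken from Powers.

```python
import random

def reorganize(steps=20_000, dt=0.1, seed=7):
    """A one-level proportional control loop whose gain is re-drawn at
    random, at a rate proportional to a running 'intrinsic error'.  Gains
    that give good control make intrinsic error fall, so random change
    slows and that organization persists.  Toy signals and rates only."""
    rng = random.Random(seed)
    gain = rng.uniform(-5.0, 5.0)        # initial organization: possibly useless
    x, d, intrinsic = 0.0, 0.0, 10.0

    for step in range(steps):
        d += rng.gauss(0.0, 0.01)        # slowly drifting disturbance
        error = 0.0 - x                  # reference for x is zero
        x += (d + gain * error) * dt     # environment integrates output + disturbance
        x = max(-10.0, min(10.0, x))     # keep the toy bounded

        # 'Intrinsic error': running average of squared control error.
        intrinsic += 0.05 * (error * error - intrinsic)

        # Reorganization: probability of a random gain change tracks intrinsic error.
        if rng.random() < min(0.02, 0.0002 * intrinsic):
            gain = rng.uniform(-5.0, 5.0)

        if step % 2500 == 0:
            print(f"step {step:6d}  gain={gain:+5.2f}  intrinsic_error={intrinsic:7.3f}")

if __name__ == "__main__":
    reorganize()
```

Gains that control well drive intrinsic error down, so random change slows and the organization that happens to work is the one that persists, which is the logic of the 1963/1973 sketch and also parallels the drift-toward-low-rattling picture discussed earlier.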

My own suspicion is along the latter lines. My leading question is, “what’s in it for the cell?” To understand reorganization we might well learn from how organisms like slime molds, mycelia, and perhaps even complex intracorporeal systems like the gut microbiome and the nervous system organize and reorganize themselves.

Our characterization of intrinsic error is of course from our accustomed corporeal point of view: intrinsic quantities which must stay within certain bounds or corporeal life is at risk and may end.

But from the point of view of a cell or cellular community (viz. slime mold, mycelium, microbiome, etc.) the same intrinsic quantity is a perceived environmental variable, and an intrinsic quantity that is out of bounds (too great or too small) is a disturbance to a perceptual variable that may be controlled by cells either individually or collectively.

In vitro, neurons brachiate and make and break synaptic connections at random. Have experiments been done to see what quiets that random activity? What systemic properties of a stable control loop can be a cell’s controlled input, genetically predetermined in the normal structure and function of the cell? Perhaps look at the chemical environments of synapses.