Geoffrey Hinton and quite a few others are getting excited about error forward propagation. We (Paul Miller and I) have been at it for the last 5 years. In contrast to Hinton, we have a philosophical argument: autonomous agents are closed-loop, and they control their inputs, not their outputs. For example, “no tiger!”.
The FCL project page at GitHub includes a line-follower demo, C++/Python implementations, and the following description [emphasis added]:
For an autonomous agent, the inputs are the sensory data that inform the agent of the state of the world, and the outputs are their actions, which act on the world and consequently produce new sensory inputs. The agent only knows of its own actions via their effect on future inputs; therefore desired states, and error signals, are most naturally defined in terms of the inputs. Most machine learning algorithms, however, operate in terms of desired outputs. For example, backpropagation takes target output values and propagates the corresponding error backwards through the network in order to change the weights. In closed loop settings, it is far more obvious how to define desired sensory inputs than desired actions, however. To train a deep network using errors defined in the input space would call for an algorithm that can propagate those errors forwards through the network, from input layer to output layer, in much the same way that activations are propagated.
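For intuition, here is a minimal sketch of the idea in that last sentence, assuming a toy two-layer tanh network with delta-rule-style updates. It is an illustration only, not the actual FCL implementation from the repository:

```python
import numpy as np

# Toy illustration of forward error propagation (NOT the actual FCL code):
# an error defined on the inputs is pushed through the network by the same
# weights that propagate the activations, and each layer learns from the
# error that arrives at it.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 4, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
eta = 0.01                                       # learning rate

def step(x, e_in):
    """One closed-loop step: propagate activation and input error forwards."""
    global W1, W2
    h = np.tanh(W1 @ x)              # forward pass of the activations
    y = np.tanh(W2 @ h)
    e_hid = W1 @ e_in                # forward pass of the input-space error
    e_out = W2 @ e_hid
    W1 += eta * np.outer(e_hid, x)   # local, delta-rule-like updates
    W2 += eta * np.outer(e_out, h)
    return y                         # the action sent to the actuators

# e_in would be the discrepancy between desired and actual sensory input,
# e.g. the deviation from the line in the line-follower demo.
x = np.array([0.2, -0.1, 0.05])
e_in = np.array([0.1, 0.0, -0.1])
action = step(x, e_in)
```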
All I can say is that I think you need to explain PCT to these guys. When you do, though, make sure you explain (immediate, automatic) control error and reorganisation error (pooled over time for learning) separately…
Anyway, I like the idea of providing an easy-to-use API to help people try the model. Python code is very user friendly and legible, even for beginners.
Yes, but my current understanding (described in PPC Chapter II.10) is that there are two interacting tracks of control: one is conscious and involves slow thinking about how to act in order to control when the parallel fast, reorganized perceptual control hierarchy doesn’t work in a probably novel situation. Ina (my wife) and I described the perceptual side of these two tracks as they apply to reading in our 1983 book “The Psychology of Reading”. PPC deals more with the output (action) side of the control loop, with the conscious, thinking side using the fast, reorganized side whenever it can, and the reverse whenever it must.
For as long as I can remember, on CSGnet and on this forum, I have tried to get people to be clear (in a way that Bill P. wasn’t) about the important separation of the action side of the reorganized perceptual control hierarchy from the thinking side of control in as-yet-unfamiliar situations. The separation makes a huge difference to the control loop delays, through the lags contributed by the Output Function of the loop.
Thinking should eventually allow you to act effectively, but by that time the tiger might have eaten you. A quick approximate method of error correction, already reorganized into the perceptual control hierarchy, can be checked later by thinking about it and improving what you did for the next time, if you have survived the problem this time by acting fast.
The latter is FCL. It complements the reorganization principles of PCT, but because it doesn’t separate the conscious and non-conscious control functions, it looks weird to us.
P.S. Some decades ago I used to spend time with Hinton discussing feedback processes in multi-layered perceptual networks, which is what the perceptual side of a PCT control hierarchy is. I don’t know whether I would say now what I said then in our discussions.
This is rather different from the PCT model, where thinking is the control of imagined perceptions, which can happen consciously or unconsciously. And, of course, in PCT, when this thinking is about planning how to “act”, it is actually about planning what perceptions to control, not what actions to take.
I suggest that you consider the possibility that it was you who didn’t have a clear understanding of the PCT model. In PCT, the control loop lags increase as you go up the hierarchy to more complex controlled perceptions; we control configuration and transition perceptions (demo here) at a faster rate (shorter loop lags) than sequence and program perceptions (demo here). This is presumably true whether we control those perceptions in reality or in imagination.
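To make the lag point concrete, here is a toy simulation (my minimal sketch, not the demo code linked above) of a proportional control loop whose output affects the perception only after a transport lag. With a long lag the same loop gain becomes unstable, so control of the more complex perception has to be slower:

```python
# Toy sketch: proportional control of a scalar perception with a transport
# lag of lag_steps between computing an output and its effect on the input.

def simulate(lag_steps, gain, n=400):
    reference, perception = 1.0, 0.0
    pending = [0.0] * lag_steps          # outputs in transit around the loop
    trace = []
    for _ in range(n):
        error = reference - perception   # control error now
        pending.append(gain * error)     # output computed now...
        perception += pending.pop(0)     # ...reaches the input lag_steps later
        trace.append(perception)
    return trace

for label, lag, gain in [("short lag, high gain", 2, 0.2),
                         ("long lag, high gain", 20, 0.2),
                         ("long lag, low gain", 20, 0.02)]:
    tr = simulate(lag, gain)
    t90 = next((t for t, p in enumerate(tr) if p >= 0.9), None)
    print(f"{label}: reaches 90% at step {t90}, peak {max(tr):.1f}")
```

The long-lag loop at high gain oscillates with a growing peak; only by lowering the gain (and therefore slowing the correction) does it control at all.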
It is also presumably true whether we control them consciously or unconsciously. As the control demos pointed to above show, this is definitely true when we control in reality. And I think there is some evidence, such as the Shepard/Metzler mental rotation research, that it is also true of control in imagination. I don’t know how you would test whether it is true of conscious versus unconscious control (in reality and imagination). But I’ll think about it.
If you are in a situation where you have to control in imagination (consciously or unconsciously) to get away from immediate danger then you are in deep doo doo.
I think what you are talking about is reorganization, specifically reorganization at the program level. One of my favorite studies in cognitive psychology (Atwood and Polson, 1976) addressed this in a water jar problem-solving experiment. The thinking in their study involved controlling a program perception in imagination.
The program was contingent pouring of water from one jar into another to reach a goal state of a certain amount of water in each jar. Atwood and Polson developed an excellent PCT-like model of the process as control in imagination, with random reorganization when the imagined program didn’t actually get them to the desired state.
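Here is a toy sketch of that idea, using the classic (8, 5, 3) jar version of the task. The jar sizes, goal, and e-coli-style random-tweak rule are my illustrative assumptions, not the details of the Atwood and Polson model:

```python
import random

# A program of pours is run "in imagination"; whenever the imagined end
# state misses the goal, random reorganization tweaks the program.

CAPACITY = (8, 5, 3)
START = (8, 0, 0)                 # jar A full, B and C empty
GOAL = (4, 4, 0)                  # split the 8 units evenly
MOVES = [(s, d) for s in range(3) for d in range(3) if s != d]

def pour(state, src, dst):
    """Pour from jar src into jar dst until src is empty or dst is full."""
    jars = list(state)
    amount = min(jars[src], CAPACITY[dst] - jars[dst])
    jars[src] -= amount
    jars[dst] += amount
    return tuple(jars)

def imagine(program):
    """Run the whole program of pours in imagination, starting from START."""
    state = START
    for src, dst in program:
        state = pour(state, src, dst)
    return state

def error(state):
    """Imagined error: how far the imagined end state is from the goal."""
    return sum(abs(s - g) for s, g in zip(state, GOAL))

random.seed(0)
program = [random.choice(MOVES) for _ in range(8)]
best = error(imagine(program))
for attempt in range(100_000):    # cap guarantees termination if unlucky
    if best == 0:
        break
    candidate = program[:]                      # reorganize: one random tweak
    candidate[random.randrange(len(candidate))] = random.choice(MOVES)
    e = error(imagine(candidate))
    if e <= best:                               # keep non-worsening tweaks
        program, best = candidate, e
print("imagined end state:", imagine(program), "after", attempt, "tweaks")
```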
So I think I’ll just stick with the PCT model of Forward Closed-Loop Learning (FCL), which involves controlling perceptions in imagination, consciously – if reorganization is involved – or unconsciously. The Atwood/Polson model might be a good place to start looking for what a reorganizing, program-level FCL could look like.
When I talk about my current understanding, what is “current” depends on what date you are using as a viewpoint datum. My thinking about PCT started to diverge from Powers in two ways: one by enquiring into the implications of lateral connections, the other by treating the “neural current” not as a rigid average but as the result of averaging a distribution of neural sensitivities to any specific pattern of inputs, in which the firing rate distribution has a central peak with a spread of lesser sensitivity, like a normal distribution, rather than a rectangular distribution with a sharp cut-off.

There’s a lot that comes from thinking about neurons rather than neural currents, and a lot more that comes from the implications of lateral inhibition between neuron sensitivities. Powers acknowledged these issues, but as I understand what he said privately, he didn’t want to include them in B:CP, because the first (zeroth?) approximation of the perceptual control hierarchy itself was complex enough for a book intended to be understood by a general academic public. Powers understood a lot that he didn’t make public, and misunderstood a lot, too, such as the relation between Laplace transforms and the asymptotic values of variables in a network after a step change in one of them.
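As a crude numerical illustration of the sensitivity-distribution point (my construction for this post, not anything taken from B:CP or PPC):

```python
import numpy as np

# Aggregate population response to an input pattern at distance d from the
# population's centre of sensitivity, under a rectangular sensitivity
# profile with a sharp cut-off versus a normal-like profile with a central
# peak and a graded spread of lesser sensitivity.

preferred = np.linspace(-3.0, 3.0, 101)   # each neuron's preferred pattern

def aggregate(d, profile):
    if profile == "rectangular":
        w = (np.abs(preferred - d) < 1.0).astype(float)  # sharp cut-off
    else:
        w = np.exp(-0.5 * (preferred - d) ** 2)          # graded peak
    return w.mean()   # mean contribution to the "neural current"

for d in (0.0, 2.5, 3.5, 4.5):
    print(f"d={d}: rectangular={aggregate(d, 'rectangular'):.3f}, "
          f"gaussian={aggregate(d, 'gaussian'):.3f}")
```

The rectangular profile drops to exactly zero once the pattern falls outside the window, while the normal-like profile keeps responding, more and more weakly, as the pattern moves away from the peak.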
I think Powers is a great guide, but should never be treated as a purveyor of PCT gospel.
I think Bill is less a guide than the purveyor of the gospel, since he’s the one who developed the PCT model and brought it to the world. The model, not Powers, is the guide, so I think it’s important to have a clear understanding of the PCT “gospel” before one starts making changes to it. And changes to that gospel should be based on observation, testing, and modeling. Otherwise, PCT is, indeed, a religion.
Hi Martin, thank you. That’s very kind. I’m a book collector, so I enjoy looking for rare and out of print titles. I’m also building a physical library related to PCT in our research group, so my students can access and read the fundamentals freely.
You can post it if you want. Google sells it for an exorbitant price, but my memory is that Academic relinquished the copyright to Ina and me a long time ago. I found a message (letter?) saying that they intended to, but I can’t find one that says they actually did. But maybe I misunderstood the import of what they said, and have long thought that the copyright had reverted to us when legally maybe it didn’t.
So Google’s lawyers may say that they hold the copyright, and I don’t want ever to get entangled with lawyers. Whether that matters in practice depends, I suppose, on how persnickety they want to be, because I can find nothing that explicitly says we were given the copyright, rather than that Academic intended to give it to us.