[Martin Taylor 960115 16:00]
Comments on several postings from Shannon.
Shannon Williams (960113)
What I think that you must have in mind is for the hundreds of original
perceptual signals from the outside world to enter hundreds of e=r-p
control loops. And these must eventually be weeded/selected down to the
e=r-p loops which directly generate behavior. I do not believe this.
Nor, I think, would most practitioners of PCT.
I think most would agree that there are hundreds--more probably hundreds of
thousands--of control loops. What they (and I) would not believe is that
perceptual signals enter from the outside world, or that there is any
"natural" sense in which some loops "directly generate" behaviour and some
I believe that our internal workings are much simpler than that. I believe
that the hundreds of original perceptual signals from outside the brain are
just considered one big perception of the world.
Where do these "original perceptual signals from outside the brain" come
from? What are they? Who is it (to pre-empt Bob Clark) that "considers" them
to be "one big perception of the world?" The usual PCT stance is that they
are elements of control loops, not "considered" at all.
What you say may be simple, but only in the way that creationist
science is simple: anything that isn't understood is pushed out of the
model with the words "and then a miracle happens". That's very simple, but
not very helpful.
In other words, I agree when you say: For each perceptual signal there is a
single neural current that embodies it. But I do not believe that this
perceptual signal is the input in an e=r-p loop.
You just said you didn't agree, and now you agree.
Just what is an "e=r-p loop"? Without trying to be pernickety, it just doesn't
make sense to talk about control loops in that language. You might just
as well call them "s=o+d loops" (sensory input = output plus disturbance).
As you said elsewhere, no particular point in the loop is especially
privileged --- EXCEPT ONE. That one point is the output of the comparator,
which the loop acts to stabilize at zero. (Well, perhaps it is a vector
zero, but in the scalar version that has seemed adequate so far, it is
a numeric zero). In that sense, and that alone, it makes some sense to
identify a control loop with the error signal that it continuously tries
to bring to zero. But that's only a label, which is far from descriptive.
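The point that the comparator output is the one stabilized quantity can be sketched numerically. This is a minimal scalar loop with made-up gain, slowing, and disturbance values, not a model of anything in particular:

```python
# Minimal scalar control loop: the comparator output e = r - p is the
# one point the loop acts to hold near zero, whatever the disturbance
# does. All constants are illustrative.

r = 10.0          # reference signal
p = 0.0           # perceptual signal
o = 0.0           # output quantity
gain, slowing = 50.0, 0.02

for step in range(200):
    d = 3.0 if step > 100 else 0.0   # step disturbance added to the input
    e = r - p                        # error: the stabilized quantity
    o += slowing * (gain * e - o)    # leaky-integrator output function
    p = o + d                        # environment: p = output + disturbance

print(round(r - p, 3))               # → 0.137 (residual ~ (r - d)/(1 + gain))
```

Even after the disturbance steps in, the error settles back to a small residual of order r/(1+gain); higher loop gain shrinks it further.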
And we react with simple
input/output mappings according to what we perceive.
And want to perceive. If you include that addition, you are describing
the output function of an ordinary control loop.
Our thinking appears
complicated because we not only react to our current perceptions, but we
predict how these perceptions can change, and we react (with simple
input/output mapping) according to our predictions.
Possibly. Hans Blom and Bill P. have had a long discussion on this topic.
As I understand the state of play, it is acknowledged that some prediction
is sometimes useful (personally I'd go a little further in that direction
than I think Bill is prepared to go. To me, it's a question of disturbance
bandwidth how much prediction is possible and useful. How much is used is
a separate question.)
3. A control system acts so as to keep its perceptual
signal close to its reference level.
Yes. We can describe the relationship between perceptions and behaviors
as a control loop.
No, I think that is a misleading way to state it. It is the loop that is
the loop. The loop requires two relationships, at least: the relation
between perceptions (including desired perceptions) and actions, and the
relation between actions and changes in perceptions. The latter relationship
can only be statistical, because the effects of disturbances have to be
added in, and they are independent of the actions.
The brain has the ability to predict how its perceptions will change
if it does not output any behavior.
Statistically, yes. The accuracy of its prediction must diverge over time,
but some predictability for some perceptions can be pretty good for long
periods in most instances. For example, if I predict that big rock will
still be there when I come back next week, next month, next year... I will
probably be right, but it could happen that a construction company is
contracted to build a football stadium there, and the rock might have gone
by tomorrow. But on most occasions when I make such a prediction, I'll be right.
On the other hand, if I predict where my car will go if I keep the steering
wheel at top dead centre, I'll be pretty far wrong (and possibly dead)
in only a few seconds.
The brain also has the ability to
predict how its perceptions will change if it does output behavior.
Under the same constraints. However, the brain can determine in what
direction perceptions are likely to change if it outputs certain kinds
of behaviour, perhaps not explicitly, but in PCT terms, the outputs will
ordinarily (disturbances permitting) have the correct sign to give the
loop involving a particular perception a negative overall gain. The point
here is that the laws of the universe are more stable than the positions
of objects in it. The environmental feedback functions (to use a PCT term)
are more stable, at least in their signs, than are the disturbing influences.
If that were not so, we couldn't control at all, and neither would your
brain be able to "predict how its perceptions will change if it does
output behavior."
Using this capacity, the brain is capable of selecting the perceptions that it
prefers to receive or avoiding perceptions that it prefers to avoid.
Depending on how you mean the words, this is either magic or PCT. I don't
see any other possibility.
The brain simply generates output that is associated with the perceptions
that it elects to receive.
Now you DON'T have an input-output mapping. An I/O mapping can be associated
with the perceptions it IS receiving, but not with what it isn't.
There is nothing mysterious about this. It is
a simple input/output mapping of perception to behavior.
No it isn't. And it would indeed be mysterious even if it were. However,
you are (I think) describing the actions of either one complicated comparator
function or a bunch of simple ones. I'm not sure, because your language
is inconsistently used. That's OK when you are working within a framework
agreed between you and your readers, but when you are trying to change the
linguistic frame of reference, you have to be careful to be consistent.
We don't need HPCT.
Possibly not. Can you demonstrate the truth of my old conjecture that this
is so? I'd be very happy if you could, because nobody else has been able
to show that it is either true or false. You assert that it is true, and
that's the first time anybody has come out with such a flat statement. If
it is true, then experiments are possible to detect the difference between
the two concepts. That would be exciting.
Shannon Williams (96011.02:30) To Bill Powers
You are thinking: output = (input2-input1)/time
I am thinking: input1 = reference
input2 = input
where: [input] ---> [neural network] ---> [output]
[environment] <---------C
The output is LEARNED by making it equal the reference
for whatever is input.
Fine. That's one of the standard ways of using delay in a neural network.
The network learns a compact and efficient representation of the input.
But in the case of generating appropriate actions in the outer world, why
would you want the output to equal an earlier input? You might want a
later input to equal an input you had earlier found to be useful, but
you wouldn't get that by producing the same output as last time, even
if setting the output equal to the input would have given it you on the earlier occasion.
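The "standard way of using delay" mentioned above can be sketched with a delta-rule (LMS) unit trained so that its output equals a teaching signal paired with each input. The reference mapping (2x - 1) and all constants here are made up for illustration:

```python
import random

# LMS sketch of "the output is LEARNED by making it equal the reference
# for whatever is input": a single linear unit learns to reproduce a
# reference signal supplied alongside each input.

random.seed(1)
w, b, eta = 0.0, 0.0, 0.1
for _ in range(5000):
    x = random.uniform(-1, 1)
    reference = 2.0 * x - 1.0   # teaching signal paired with this input
    y = w * x + b
    err = reference - y
    w += eta * err * x          # delta-rule weight update
    b += eta * err

print(round(w, 2), round(b, 2))   # → 2.0 -1.0
```

The unit converges on the input-to-reference mapping, which is exactly an input/output mapping and says nothing yet about acting on the world.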
The neural network will learn to predict only when prediction is viable.
If you teach it that A follows B, then that is what it learns. If you do
not give it any consistent input, then it does not learn anything. It only
recognizes the order/predictableness of its universe.
True. And the control system that never tries to predict can control better
in an unpredictable environment than can one that has learned in a more
stable environment that then goes mad. "It ain't so much what you don't
know as what you know that ain't so" is what hurts.
How can an output be generated to avoid situations that are unpleasant?
This question should not be causing problems for you. This is a question
you must ask yourself even in your current version of PCT. The question
for you is: How do we learn which outputs we need to generate to control which perceptions?
In answer to your question: We try an output. If it helps, we try it
again. If it doesn't help, we try something else.
You are describing the "classic" process of reorganization within PCT, but
I don't see where it links with your I/O modelling. What you describe is
indeed one way that the HPCT structure learns which outputs are needed to
control which perceptions; and when its outputs do control its perceptions,
it stops trying something else. That's one of the ways reorganization is
supposed to work. It's like the way dead leaves in the autumn "learn" to
build themselves into neat little piles behind hedges--so long as the wind
affects them, they move, but when they get into a calm place, they don't move
any more. So long as the HPCT structure is having trouble controlling, it
moves (changes structure or weights), but when things calm down and control
is successful, it doesn't move any more.
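That "dead leaves" process can be sketched in code: while a loop fails to control, a structural parameter keeps being randomly perturbed, and once control succeeds the perturbations stop. The loop structure and every constant here are illustrative, not taken from any actual reorganization model:

```python
import random

# E. coli-style reorganization sketch: between random changes the loop
# settles; while residual error stays large, the output weight keeps
# being perturbed, and once control succeeds it stops changing.

def settle(w, r=5.0, steps=400):
    """Run a simple scalar loop to equilibrium; return residual error."""
    p = o = 0.0
    for _ in range(steps):
        e = r - p
        o += 0.05 * (w * e - o)    # output function with adjustable weight w
        p = o                      # benign environment: p tracks output
    return abs(r - p)

random.seed(3)
w = -2.0                           # born with the wrong sign: no control
while settle(w) > 0.5:             # intrinsic error still high:
    w += random.uniform(-2, 2)     # ...make a random structural change
# control now succeeds, so reorganization has stopped; w ended up positive
```

The random walk on w is undirected, yet the system reliably ends with a weight of the correct sign, for the same reason the leaves end up behind the hedge.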
One thing to remember is that the input to the neural network is a pattern.
This means that you can map a behavior to a pattern. If that pattern
later fails to remove an error when the behavior is output, then you can
alter your network to generate another output. BUT- the effect of what you
are doing is this:
at first:         neural input = 0X11XX01 ==> neural output = 11111111
after adjustment: neural input = 00110X01 ==> neural output = 11111111
                  neural input = 01110X01 ==> neural output = 11101110
If you take those 1's and 0's to represent the signs of output connections,
you have a good description of one aspect of reorganization.
Shannon Williams (9614.03:30)
Networks give you an input/output mapping.
Fine, with the proviso I pointed out earlier, that this is true only if the
network is non-recurrent. So say "Networks CAN give you an input/output
mapping" and I'll go along.
Modifiable networks give you
the capacity to learn input/output mappings.
The cumulative effect of
these mappings allows an external observer to describe an organism's
behavior as e=r-p.
That's the leap that makes no sense to me. Under what conditions would
an external observer so describe an organism's behaviour? Or perhaps the
question should be: what does it mean for an external observer to describe
the organism's behaviour as e=r-p? I like straw men to be visible, at least,
before they are knocked down, so that I can determine whether they are
indeed built of straw.
It's a very simple concept.
Perhaps. But you don't make it so.
Maybe you could figure out a question that identifies what you don't
understand. Or sketch what you think that I am saying, and point out its flaws.
OK. I've played with neural networks of different kinds for over 30 years
now, in an on-again, off-again way. I think I have a pretty good grasp of
what they do, and to what they are equivalent, though I'm by no means
expert in modern network theory. I've been a psychologist longer than that,
but have been acquainted with PCT for only 4 years. None of that helps me
to understand the concept of describing behaviour as "e=r-p", or as
"e= anything at all". I can imagine describing the relation between
input and output as a mapping--people have been doing that for decades,
with often great mathematical sophistication (have you looked up J.G.
Taylor's book yet?). As a PCT student, I can imagine seeking out controlled
perceptions by disturbing environmental variables in different ways. I
can imagine modelling the results of these tests to see if I can find
control elements that remain consistent across different kinds of tests.
But until I have some notion of what it means to describe behaviour as
error, I cannot make the conceptual leap between saying that an S-R account
is adequate for psychology and saying that such an account can lead to
an organism whose behaviour can be described as "error=something."
It's NOT a simple concept.
If you insist that all control loops are operating
in parallel, connected directly to sensory input and to the muscles, then
the argument from existence should be enough to say that there is a method
whereby those loops came to exist.
You are right that the way that I am visualizing the control loop explains
not only the control loops' existence but their origin. But I am not
visualizing parallel loops. If the loops were all parallel, we would never
Several different problems here: I said that your assertion that the loops
exist implies that a method must exist whereby they come to be. You now
read that as saying that you have described, or are prepared to describe
(I'm not sure which), the way those loops have come to be. That's a very
different kettle of fish.
You don't imagine parallel loops, and you don't imagine a hierarchy of loops,
but you do conceive that there are many loops. What configurations do you
imagine that are neither parallel nor hierarchic?
Conflict occurs when two control systems attempt simultaneously to control
some perception to two different values. So far as I can see, that is an
assertion that conflict occurs only between control systems operating in parallel.
3) If I am going to model behavior using a neural network, I can
simulate the evolution of behavior using the visualization in #1.
But I can't if I am using the visualization in #2.
Draw a sketch outlining how PCT models the evolution of behavior. I want
to see each little stage of the evolution. Show me this, and I will answer
your question. (No perhapsial handwaving, please).
You don't want a sketch. You want a model. We have a model, on which we would
like someone to run the experiments to see whether it works, in a specific
situation very much like the one with which you started this thread. Would
you like to run the experiments, helping to debug the model in the process?
If you have a reasonable Mac, I'd be quite happy to send it to you. It's
probably far from being properly debugged, but at least some of it works.
Our model is, of course, very simplified in comparison to real living
organisms. But it is intended to test three (possibly four) modes of
reorganization. (I don't know why I'm writing this, since I've done so
several times in the last couple of weeks...but anyway...)
The Little Baby works in an environment consisting of a simple formal
language whose syntax is exactly prescribed by a BNF grammar. Each output
symbol of the grammar is assigned a position in a 3-D "feature space",
and the output stream from the grammar defines a trajectory through this
feature space. The trajectory is a smoothed version of the jumps between
the locations of successive symbols, so that often it won't arrive at the
actual location before the next symbol is emitted. The grammar engine is
unaffected by what the Little Baby does, so all the LB could possibly do
is to learn to "interpret" the grammar. Its objective is to keep its "finger"
on the target--the tip of the moving trajectory output by the grammar--and
I expect it to do this by learning, at different hierarchic levels, the
syntactic constructs in the grammar, or something that produces equivalent
results (in other words, it might learn a different, but functionally equivalent, organization).
Internally, the LB is born as a layered set of ECUs. It is layered, because
the interconnections of upgoing and downgoing connections are consistent,
the perceptual signals generated at layer N all providing sensory inputs
to ECUs at layer N+1, and the output signals from an ECU at layer N all
providing inputs to reference inputs at layer N-1. Which ECUs are connected
to which others, and with what weights (positive or negative), is initially
completely random. The level 1 ECUs provide outputs that together drive
a "finger" in the 3-D space, and the sensory inputs from the 3-D space
are the positions of the finger and of the target (the end point of the
symbol-based trajectory). By the way, all the ECUs have the possibility
of having delayed inputs, which makes derivative and acceleration perceptions
possible. Whether those possibilities are used depends on the weights.
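The layered wiring just described might be sketched as data structures like these. To be clear, this is NOT the actual Little Baby code; the class, the layer sizes, and the delay mechanism are illustrative assumptions:

```python
import random

# Sketch of the LB's "birth" wiring: layers of ECUs, perceptual signals
# from layer N feeding inputs at layer N+1, with random signed weights
# and an optional one-step delayed input per ECU (enabling rate-like
# perceptions). Sizes and the delay scheme are illustrative.

random.seed(0)

class ECU:
    def __init__(self, n_inputs):
        # random signed perceptual input weights, as at "birth"
        self.in_w = [random.choice([-1, 1]) * random.random()
                     for _ in range(n_inputs)]
        self.delay = random.choice([0, 1])   # delayed-input option
        self.prev_p = 0.0

    def perceive(self, signals):
        p = sum(w * s for w, s in zip(self.in_w, signals))
        if self.delay:          # difference of delayed inputs ~ a derivative
            p, self.prev_p = p - self.prev_p, p
        return p

# three layers; each layer's perceptual signals feed the layer above
sizes = (4, 3, 2)
layers = [[ECU(n_in) for _ in range(n)]
          for n_in, n in zip((4,) + sizes[:-1], sizes)]

signals = [0.2, -0.5, 0.1, 0.9]   # e.g. finger and target coordinates
for layer in layers:
    signals = [ecu.perceive(signals) for ecu in layer]
print(len(signals))               # → 2 top-level perceptual signals
```

Output connections (layer N outputs feeding layer N-1 reference inputs) would be wired the same way, with their own random signed weights.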
The LB has a reorganizing system based on intrinsic variables. It REALLY
doesn't want there to be any deviation between the finger position and the
target--tracking by following is no good to it. Also, it doesn't want to have
error in any ECU. The value of an error-based intrinsic variable is
a*E^2 + b*(dE/dt)^2, where a and b are some positive constants. If a
particular ECU has a high value for its "error-based intrinsic" variable,
it is more likely to "reorganize" during some compute cycle than if it
has a low value for that variable.
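The error-based intrinsic variable, and its effect on the chance of reorganizing in a compute cycle, can be written out directly. The formula a*E^2 + b*(dE/dt)^2 is as given above; the saturating probability map is my own illustrative assumption, since the text specifies only "more likely":

```python
# Error-based intrinsic variable V = a*E^2 + b*(dE/dt)^2, plus an
# assumed (illustrative) mapping from V to a per-cycle probability of
# reorganizing. Constants a, b, and scale are made up.

def intrinsic_error(e_now, e_prev, dt=1.0, a=1.0, b=0.5):
    de_dt = (e_now - e_prev) / dt
    return a * e_now**2 + b * de_dt**2

def reorg_probability(v, scale=0.1):
    # probability grows with intrinsic error, capped at 1
    return min(1.0, scale * v)

v_small = intrinsic_error(0.1, 0.1)   # small, steady error
v_large = intrinsic_error(3.0, 1.0)   # large, growing error
print(round(reorg_probability(v_small), 4), reorg_probability(v_large))
# → 0.001 1.0
```

An ECU with large or rapidly growing error thus reorganizes almost every cycle, while one that is controlling well is left alone.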
The "finger-target" "intrinsic" variable is actually taken to be a top-level
reference for the control hierarchy, rather than sitting atop a separate
"reorganization" hierarchy, as Bill P would probably prefer, so reorganization
happens because other control systems can't eliminate the error in the
top-level control systems. The "error-based intrinsic error" is the only
direct influence on reorganization rate.
The experiments planned for the LB before the money ran out were to test
different aspects of reorganization. There would be no "perhapsial
handwaving" about it, (as there has been none in anything I have written
in response to your postings).
Experiment 1. Reorganize output connections and their signs without changing
weight values. This works fine so far as getting the LB to track a symbol
stream of the kind "ABBBAABAAABABBAAAB..." and, though we haven't been able
to run the study for the aforementioned financial reasons, there seems no
reason it wouldn't work just as well for "ABCBFHAUD..." in 3-D rather than
1-D. One or two sign reversals, and tracking proceeds (in a following mode,
of course, there having been no structure for the LB to learn).
Experiment 2. Hebbian changes of input weights (like the neural network
remapping you have been going on about). This experiment treats the perceptual
input functions and their connections like a multilayer perceptron that
learns to perceive only what it can control given its "born in" set of
output connections. This hasn't been tested, but the method is prescribed
and programmed (but possibly not debugged).
Experiment 3. Hebbian changes of output weights. The LB learns to control
whatever it can perceive given its "born in" perceptual functions. This also
is programmed, if not debugged.
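The Hebbian updates in Experiments 2 and 3 follow the general rule that a weight strengthens with correlated pre- and post-synaptic activity. This is a sketch of that rule only, not the LB's code; the learning rate and the decay term (added to keep weights bounded) are assumptions:

```python
# Illustrative Hebbian weight update: growth with correlated activity,
# plus a decay term bounding the weight. Constants are made up.

def hebbian_step(w, pre, post, eta=0.05, decay=0.01):
    return w + eta * pre * post - decay * w

w = 0.0
for _ in range(200):
    w = hebbian_step(w, pre=1.0, post=0.8)   # consistently correlated
print(round(w, 2))   # → 3.46, approaching eta*pre*post/decay = 4.0
```

Applied to perceptual input weights this shapes what the LB perceives; applied to output weights it shapes what the LB can control, which is why running both together (Experiment 4) is the interesting case.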
Experiment 4. Do 2 and 3 together. The LB learns to perceive what it can
control and to control what it can perceive. There's a danger here that
it will learn to ignore all but a small subspace of the feature space, so
that it fools itself into thinking it's doing fine when the experimenter
knows it is ignoring, say, stop-continuant contrasts. That would be like
a baby learning to ignore phonetic contrasts that don't occur in its own
language, so if that happens, it could be an illustrative "danger".
Experiment 5. Add and subtract ECUs, changing the hierarchic structure.
This is not yet programmed, but it was planned. The idea is that ECUs that
see little and do little would be removed from the hierarchy, and that
continued intrinsic error in finger-target distance would cause the addition
of new ECUs at random points in or above the existing hierarchy.
It would be really nice for someone either to provide money or to provide
time and effort to continue these studies. The results might well help
you to visualize how a control network would evolve. Personally, I think
it likely that the growth and evolution of a control hierarchy would be
much more stable and quicker than that of a feed-forward neural network,
because in the control hierarchy "error" can be assigned to each node (ECU)
whereas in the normal NN it can be assigned only to the network as a whole.
Incidentally, Shannon, if you've read the Layered Protocol tutorial I sent
you, you will have noticed several comments about the characteristics of
control hierarchies developed through reorganization, in particular the notion
of critical stability embodied in the term "the Bomb in the Hierarchy."
These ideas are ones that I'd love to see investigated in an extension of
the LB project. But at present, the LB is little more than an embryo,
far from the child whose temper tantrums would be the observable consequence
of the "Bomb."
Shannon Williams (960114.01:00) to Bill Powers
You can use a trainable
neural network to explain how memory, imagination, language, math, ANYTHING
can work. I even see how the networks could have successively evolved as
species evolved. It's easy.
You sound like some of the early enthusiasts. It ain't so easy in practice.
Yes, they can learn all kinds of patterns, but I don't think anybody I
know of has tried letting a NN train itself by navigating in a dangerous
world. There are self-training networks, but what they learn is related
to the statistical structure of their inputs, not to how to bring their
inputs to "desired" states.
Maybe it's easy, but lots of people who have said that have found otherwise
when they actually have tried. You may be different. You'll become very
rich if so.
But if my car is veering to the
left, it is easy for me to recognize/predict that it will continue veering
to the left unless I or some obstacle does something about it.
Why so? Maybe it veered to the left because the left front wheel hit a bump,
and in the next millisecond the right front wheel will hit one. Maybe not.
How is it easy for you to recognize which is the case? The whole point of
PCT is that things in the world that you haven't seen can affect you at any time.
I think that is enough to start with.
Why do you think this?
I think he meant that if you read what he wrote, you might ponder it for
a while before responding. And then he could continue the discussion based
on a thoughtful response from you. Whether you have provided that response
is not for me to say.