Optimal control theory

Hi, Tim –

TC: Firstly, does this mean that when we’re planning (in the
imagination mode, for example, that Warren has mentioned) and we think up
a goal … something like “I’d like to phone my sister by the end of the
week” … is this the perception we’re experiencing, not a reference?
BP: First ask where the perception is coming from. If it’s coming
from sensory organs or lower-level control systems, it’s what I call a
real-time perception. But if the system is in the imagination mode, the
same perceptual signal, which looks just like a real-time perception but
lacks the usual background of other perceptions at the same and lower
levels, is being manufactured inside the brain.

The imagination connection manufactures perceptual signals by rerouting
the outputs that, acting as reference signals, would normally tell lower
systems to produce certain inputs to the higher system’s perceptual input
function. The outputs are simply turned back into the higher system’s
perceptual input function, making it appear that the requested
perceptions from below have actually – perfectly and instantly –
happened. Von Holst called this “reafference”.

This means you should describe the reference signal not as something in
the future, but as if it’s happening: It’s the end of the week and I’m
phoning my sister. If signals equivalent to that image are sent to some
set of lower-order control systems as a (compositive) reference signal,
at first they simply produce error signals because that is not perceived
to be happening. But the lower systems act in such a way as to produce
exactly that set of perceptions which the higher system then experiences:
It’s the end of the week and I’m in actual fact phoning my sister. Now
the error is zero.
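The loop described here can be sketched in a few lines of code. This is only an illustrative toy, not part of the formal model: the function name, the gain value, and the one-variable “world” standing in for the environment are all assumptions of the sketch.

```python
def run_control_loop(reference, gain=0.5, steps=60):
    """One elementary negative-feedback control system.

    The reference asserts that the goal state is already happening;
    the error drives output until the perception actually matches it.
    """
    perception = 0.0   # "phoning my sister" is not yet perceived
    world = 0.0        # the environmental variable the output acts on
    for _ in range(steps):
        error = reference - perception   # comparator
        output = gain * error            # output function
        world += output                  # action changes the world
        perception = world               # input function (identity here)
    return perception, reference - perception

p, e = run_control_loop(reference=1.0)
print(p, e)   # perception approaches the reference; error approaches zero
```

At first the error is large, exactly as described above; the lower-level action then drives the world until the perception matches the reference and the error is (very nearly) zero.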

In the real-time mode, each reference signal relating to the
situation we describe here in words would enter a lower system which
would act to return a perceptual signal matching it. In the imagination
mode, those return signals would come directly from the higher system’s
output instead of the lower systems’ perceptual signals. As far as the
higher system can see, it’s the same as if the lower systems had acted
correctly.
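The imagination switch can also be sketched as a toy loop (all names, the gain value, and the one-variable world are illustrative assumptions, not claims about the model’s details): the same output either acts on the world or is routed straight back into the perceptual input.

```python
def run_system(reference, imagination, gain=0.5, steps=60):
    """An elementary control loop with an imagination-mode switch."""
    perception = 0.0
    world = 0.0
    for _ in range(steps):
        error = reference - perception
        output = gain * error
        if imagination:
            # imagination connection: the output is rerouted straight
            # back into the input function; the world is never touched
            perception += output
        else:
            world += output       # lower systems act on the world
            perception = world    # real-time perception from below
    return perception, world

p_im, w_im = run_system(1.0, imagination=True)
p_rt, w_rt = run_system(1.0, imagination=False)
print(p_im, w_im)   # perception reaches ~1.0 while the world stays 0.0
print(p_rt, w_rt)   # perception and world both reach ~1.0
```

In imagination mode the requested perception “happens” perfectly without any action, which is just what the higher system would see if the lower systems had acted correctly.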

This quite neatly explains most of the phenomena of imagining and
planning, doesn’t it? It is a very plausible hypothesis. We don’t know if
it’s any more than that, so we must use it with caution until somebody
figures out how to see if it’s
correct.

TC: In the standard closed causal loop diagram I use I would have put
that beside the arrow for the reference signal, with something like
“haven’t phoned sister” beside the arrow for the perceptual
signal.
BP: I would just note that the perceptual signal that actually
exists doesn’t match the reference signal. I guess that higher systems
can detect that fact, perhaps indirectly, so we can perceive the
mismatch, but in the current version of the model, we don’t perceive
error signals. From von Holst’s re-afference principle, it’s possible to
construct a modified feedback hierarchy in which error signals do
actually enter higher-order perceptual input functions – I’ll post the
diagram again if anyone wants it. I’m not happy with it because it
requires that two signals originating in different places somehow find
their way to a very specific common destination so the lower-order error
signal can be subtracted from the right lower-order perceptual signal,
which doesn’t sound very likely. But it does work and it lets error
signals get into the perceptual channels.

Of course nothing prevents some higher system from knowing that there is
an imagined reference signal saying I am calling my sister, and other
perceptions in other channels clearly showing that I’m washing the dog.
But that produces a perception of a difference without directly
generating an error signal via a comparator, and would be a much slower
form of control, if any attempt to bring the mismatch to a reference
level of zero occurred. It would be like a simulated control system. Or
perhaps it’s just conflicting
information.

TC: In one of the diagrams I use I have “what is expected” as the
label for r and “what is experienced” as the label for p but based on
what you’ve explained the r label should be something like “how much”
rather than “what”.
BP: Sure, how much should occur versus how much is occurring. The
what is determined by the perceptual input function and isn’t
controlled at that level. Only the how much is controlled. A
higher level determines the what by sending reference signals to one
lower order controller rather than another. I’m calling my sister instead
of writing to her or emailing.
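One way to picture the distinction, as a hypothetical sketch: the reference signal carries only a magnitude (the how much), while the what is fixed by which lower controller the higher-level output is routed to. The dictionary keys and values below are invented for illustration.

```python
# Each lower controller controls only the amount of its own perception;
# which perception gets controlled depends on where the reference goes.
references = {"phone": 0.0, "write": 0.0, "email": 0.0}

def route_reference(what, how_much):
    """Higher-level output: pick the lower controller (the 'what')
    and hand it a magnitude (the 'how much')."""
    references[what] = how_much

route_reference("phone", 1.0)   # calling my sister, not writing or emailing
print(references)
```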

Concerning expectation, the reference signal indirectly shows what the
higher system expects, not the lower one receiving the reference signal.
The higher system’s input function processes the imagined inputs to
produce a perceptual signal in the higher system, which is the perception
that is expected when the model is switched to real-time control.

TC: I’ve also used your figure from LCSIII of the closed causal
negative feedback loop and, in looking at it again, in the context of
this current conversation I noticed that the label in the input function
reads “Converts state of input quantity into magnitude of perceptual
signal”. From what you’re saying here though it also seems to convert the
input quantity into the kind or quality of perception at that level. Is
that right? So it doesn’t just create a particular magnitude but creates
a particular type of perception with a particular
magnitude?
BP: It does create a signal indicating only the amount, not the
kind, of perception. It’s the relationship between that magnitude signal
and lower-order perceptions or external variables that is determined by
the form of the perceptual input function. That relationship itself is
not perceived; it’s part of the machinery. You don’t perceive the lens of
your eye, even though it determines how external objects will be
projected and focused onto your retina. You don’t perceive any perceptual
input function.
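To make the point concrete, here is an invented example (neither function is a claim about real perceptual input functions): two input functions receive the same lower-order signals and each emits nothing but a magnitude. The kind of perception lives in the form of the computation, and that form is never itself part of the signal.

```python
def sum_of_inputs(inputs):
    """One form: a simple summed 'intensity' perception."""
    return sum(inputs)

def spread_of_inputs(inputs):
    """Another form: a 'range' perception over the same inputs."""
    return max(inputs) - min(inputs)

lower_signals = [2, 9, 4]
# Same inputs, two kinds of perception; each output is only an amount.
print(sum_of_inputs(lower_signals))      # -> 15
print(spread_of_inputs(lower_signals))   # -> 7
```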
TC: I’m just checking to make sure I’m understanding what you’re
explaining.
BP: And as usual your questions are very helpful to me in figuring out
how to explain better. Your questions always give me a workout.

Best,

Bill

···

At 01:33 PM 11/5/2011 -0700, Tim Carey wrote:

Hello, all –

HY: You have written about model based control before. I think
our discussion of von Holst and efference copies, which you used to
reject, may have started some reorganization in you. Change is
always good if it leads to understanding, but I think you are going a bit
too far in trying to accommodate other people. It’s misleading to
suggest that you are now on the model-based bandwagon.

BP: It certainly is. I was trying to show that models can be seen simply
as collections of perceptual input functions, implying that what we call
the real world is already a model. There is such a thing as model-based
control, which is used to land airplanes in fog and dock spaceships to
the space station, but it’s strictly a substitute for the much faster and
more accurate control of real-time perceptions when that is
possible.

HY: The mainstream model-based approach is not based on the
kind of hard thinking you have been doing. It’s not based on much
of anything, as far as I can see. As Hamlet said, just words,
words, words.
BP: I don’t think it’s that bad. It’s a reasonable guess if you
don’t happen to know much about negative feedback control systems. It’s
unnecessarily complex and very computation-intensive and slow, but it can
be made to work. That’s better than not understanding control at all and
using stimulus-response
thinking.
HY: Like you I’m not very comfortable with Ross Ashby’s
writings on this, but I find it shocking that you repeatedly criticized
Ashby for writing nonsense when he is the father of this approach and by
far the most thoughtful of them all.
BP: I got mad at Ashby when he decided that compensatory control was
faster and more accurate than closed-loop control, and sent a couple of
generations of behavior modelers off into a blind alley where most of
them still are. His Design for a Brain was actually my initial
inspiration and got me started with behavioral control theory. It was
only many years later, when I saw what his ideas had done to cybernetics,
that I started seeing the
flaws.
HY: I understand your desire to bridge the gap between mainstream
neuroscience and PCT–a peaceful merger if you will. Unfortunately,
if intellectual history has taught us anything, things are probably not
going to work out this way. There will indeed be a battle of ideas,
on the battlefield of experimentation and modeling. Out of the
kindness of your heart you might not want anyone to lose, but a lot of
people will.
BP: You would prefer that I got hard-hearted and egotistic and
didn’t like anybody very much? No, I don’t believe we would be
friendly now if that’s how I wanted to be. I do want a peaceful merger
but only because I want the reassurance of knowing that other people have
really looked at my ideas and agreed with them. I don’t want to change
any of my ideas I think are really important and right, but I will if I
have to. I hate looking foolish even more than I hate finding out I was
wrong.

Best,

Bill

···

At 02:11 PM 11/4/2011 -0400, Henry Yin wrote:

Hi, Tim --

BP: This means you should describe the reference signal not as something in the future, but as if it's happening: It's the end of the week and I'm phoning my sister. If signals equivalent to that image are sent to some set of lower-order control systems as a (compositive) reference signal, at first they simply produce error signals because that is not perceived to be happening. But the lower systems act in such a way as to produce exactly that set of perceptions which the higher system then experiences: It's the end of the week and I'm in actual fact phoning my sister. Now the error is zero.

TC: I got a little stuck on this part ... do you mean that goals should generally be described as if they're happening now? Is this the way goals can be set and then sort of 'stored' - I'll phone my sister by the end of the week but it's not the end of the week yet so I don't have to act. I've had the experience with a goal like that that I periodically become aware of it again, make another decision "Nah, I'll do it later, there's still a few days left" and 'store' it again. It's almost like there's some constant monitoring going on outside awareness that, from time to time, brings awareness back over to it.

BP: I don't mean that the goal should be stated in words. If you're planning to call your sister, imagine that you're calling your sister: phone in hand, poking buttons, hearing the ring tone. That is the perception that you want to get -- not your own voice saying "I'm going to call my sister." That is a reference signal for saying "I'm going to call my sister."

The reference signal should be stated in the form that the perception is going to have, so the perception can match it. "I'm going to call my sister" does not match "I'm calling my sister," whether the memory is in words or images.

You may be talking about remembering something you think of doing later, not of doing it. To do it, you imagine doing it and you're doing it before you've even finished imagining it. Remembering to set that reference signal, on the other hand, is a different matter. You have to try to establish some association with something that's going to happen, like turning the page of a calendar. You could leave the telephone on top of a picture of your sister. You could set your cell phone to remind you. You could write a message to yourself on a wall calendar, if you usually look at the calendar every day. This is a matter of remembering, not doing.

Does that make more sense?

Best,

Bill

···

BP: This quite neatly explains most of the phenomena of imagining and planning, doesn't it? It is a very plausible hypothesis. We don't know if it's any more than that, so we must use it with caution until somebody figures out how to see if it's correct.

TC: Yep, sure. Caution is always good. You're right though about it neatly fitting and seeming very plausible. I'm with Warren on this one ... it's a very groovy part of the theory.

TC: In the standard closed causal loop diagram I use I would have put that beside the arrow for the reference signal, with something like "haven't phoned sister" beside the arrow for the perceptual signal.

BP: I would just note that the perceptual signal that actually exists doesn't match the reference signal. I guess that higher systems can detect that fact, perhaps indirectly, so we can perceive the mismatch, but in the current version of the model, we don't perceive error signals. From von Holst's re-afference principle, it's possible to construct a modified feedback hierarchy in which error signals do actually enter higher-order perceptual input functions -- I'll post the diagram again if anyone wants it. I'm not happy with it because it requires that two signals originating in different places somehow find their way to a very specific common destination so the lower-order error signal can be subtracted from the right lower-order perceptual signal, which doesn't sound very likely. But it does work and it lets error signals get into the perceptual channels.

TC: Nope, that's fine with me. I like the idea that what we experience is our perceptions so anything in awareness is coming from the perceptual signal part of the diagram. I guess to illustrate that we could grey out the rest of the loop and just have the perceptual signal in clear, bright focus. This also brings in the issue of 'we' again as in 'we don't perceive error signals' and while I don't want to go down that path again at the moment it does raise another issue about the way we describe the control systems. It seems to me that, largely, the control systems that we are operate automatically. We don't perceive error signals but we also don't compute the error signals or send them to lower systems, etc. That all happens by the functioning of the control systems but there's not a conscious 'me' that's doing it. I just 'know' the perceptual signals that come out of perceptual input functions. I'm not sure if I'm making sense here (I think I know what I mean but it's a slippery thing to describe) so I'll leave it at that for now.

BP: Sure, how much should occur versus how much is occurring. The what is determined by the perceptual input function and isn't controlled at that level. Only the how much is controlled. A higher level determines the what by sending reference signals to one lower order controller rather than another. I'm calling my sister instead of writing to her or emailing.

TC: OK, great. That makes sense. So the reference signal - specifying how much - arrives at the phoning control system rather than the writing or emailing control system. That is very cool.

BP: Concerning expectation, the reference signal indirectly shows what the higher system expects, not the lower one receiving the reference signal. The higher system's input function processes the imagined inputs to produce a perceptual signal in the higher system, which is the perception that is expected when the model is switched to real-time control.

TC: Very, very nice. It's one of those counterintuitive, slippery thoughts that I have to keep turning over but it's a great concept (when I get it!).

BP: It does create a signal indicating only the amount, not the kind, of perception. It's the relationship between that magnitude signal and lower-order perceptions or external variables that is determined by the form of the perceptual input function. That relationship itself is not perceived; it's part of the machinery. You don't perceive the lens of your eye, even though it determines how external objects will be projected and focused onto your retina. You don't perceive any perceptual input function.

TC: OK, so back to the sister example ... is the idea that the form of the input function for the phoning control system will be different to the form of the input function for the writing control system and they'll both be different to the form of the input function for the emailing control system? By 'form', do you mean the computation that goes on there? So there'll be a magnitude signal coming out of each of these three input functions but one will be the magnitude of a phoning perception, one will be the magnitude of a writing perception, etc? So this would mean that the perceptual signal is 'how much' is currently being experienced whereas the reference signal is 'how much' should be experienced?

Am I still on the same page??

Tim

Hello, Tim –

BP earlier: A higher level determines the what by sending reference
signals to one lower order controller rather than another. I’m calling my
sister instead of writing to her or emailing.
TC: It might be obvious but how does higher level output ‘know’ to
arrive at the phoning control system rather than the writing or emailing
control system? I’m not sure I’ve ever considered this before and now
that I do I realise I don’t have an answer for it.

BP: “Know to” isn’t how I’d put it. It considers one activity it
knows how to perform, then another, then another … until it finds one
that accomplishes the desired end, getting a message to the sister. I
assume this would be some sort of search procedure, or else (if this
never had happened before) random reorganization.

If you have misplaced your car keys, how do you “know to look”
in all the places you actually look for them? The answer I would offer is
that you don’t: you just start looking around for something big enough to
hide them, and as you see each place, you investigate. You find the keys
in the last place where you look for them, because that corrects the
error that is keeping the search going.
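The key-search account above can be sketched as a loop that continues only while the error persists. The place names and the “found” test are purely illustrative:

```python
def search_until_found(places, keys_are_in):
    """Look place by place; the search is kept going by the uncorrected
    error ('keys not perceived'), so it always ends at the last place
    looked."""
    looked = []
    for place in places:
        looked.append(place)
        if place == keys_are_in:   # error corrected: perception matches
            break
    return looked

trail = search_until_found(["desk", "coat pocket", "car", "couch"],
                           keys_are_in="car")
print(trail)   # -> ['desk', 'coat pocket', 'car']
```

Nothing in the sketch “knows” where the keys are in advance; the stopping rule is just the disappearance of the error that drives the search.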

You bring up a good point because the PCT standard diagram doesn’t seem
to have any place in it for extending thinking and reasoning. The
imagination mode is probably part of the answer, but the process of
logical reasoning or other information processing seems to call for some
kind of general-purpose computer, analog/digital in nature. I don’t quite
see where to fit that in.

Best,

Bill

(Gavin Ritz 2011.09.11.10.19NZT)

Dear Bill

BP: You bring up a good point because the PCT standard diagram doesn’t seem to have
any place in it for extending thinking and reasoning. The imagination mode is
probably part of the answer, but the process of logical reasoning or other
information processing

GR: I hope you don’t want to go down this path, please say no it’s a dead end,
you have already shown that with your experiments. I think it’s killed
Cybernetics.

BP: seems to call for some kind of general-purpose computer, analog/digital in
nature.

GR: Maybe my comments about diophantines were not so dumb (joke). I hope that we
aren’t some kind of computer; that would be rather disappointing.

BP: I don’t quite see where to fit that in.

I have been on about this for at least the
last four years here on the list.

I have put forward about a ½ dozen
diagrams and concepts for open discussions.

I’ve been discussing imperative, declarative, and interrogative logic (for 4
years here) and other theories like Elliot Jaques’s theory of Requisite
Organisation, where he has already created a theory of logical reasoning with
tests, evidence, and observational corroboration to boot.

That is why I wanted to know what the propositions of PCT are, and when I look
at them they all look to me to be proved. What you set out to prove has been
proved.

PCT is stuck like Cybernetics; maybe the questions and propositions put
forward are mostly answered and proved (some I’m not sure even can be).

However it’s not the answers we
really want.

So my question is:

What is it we really want to Know?

Do propositions propose what we strive to know and seek?

Kind regards

Gavin

Hi, Tim --

···

At 12:08 PM 11/8/2011 -0800, Tim Carey wrote:

OK, thanks. More to think over! Presumably the 'contact my sister' control system is continuously connected to all three lower level systems of 'phone', 'write', and 'email' but only one gets the signal to create the contacting sister perception. It doesn't seem right that people would often intend to email their sister but find themselves phoning her instead ... although I have had the experience of starting an email and then thinking "Oh heck, this'll take too long - I'll just phone".

The latter is more like what I had in mind. There are usually more than one or two ways to accomplish any end, and at some level (I don't know what it would be) one of the things we can do is survey the available means and pick one that doesn't conflict with anything else.
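Bill's "survey the available means and pick one that doesn't conflict" could be sketched like this; the resource sets below are invented stand-ins for whatever other control systems happen to be engaged:

```python
def pick_means(candidates, needs, in_use):
    """Return the first means whose required resources don't clash
    with resources other control systems are already using."""
    for means in candidates:
        if not (needs[means] & in_use):   # empty intersection: no conflict
            return means
    return None

needs = {"phone": {"voice", "hands"},
         "email": {"hands"},
         "letter": {"hands"}}
in_use = {"voice"}   # e.g., already talking to someone

choice = pick_means(["phone", "email", "letter"], needs, in_use)
print(choice)   # -> email
```

The "survey" is just an ordered scan here; at what level, and by what mechanism, the real system does this is exactly the open question in the exchange above.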

Bill

[From Bill Powers (2011.11.10.0851 MST)]

(Gavin Ritz
2011.09.11.10.19NZT)
BP earlier: You bring up a good point because the PCT standard diagram
doesn’t seem to have any place in it for extending thinking and reasoning.
The imagination mode is probably part of the answer, but the process of
logical reasoning or other information processing

GR: I hope you don’t want to go down this path, please say no it’s a dead
end, you have already shown that with your experiments. I think it’s killed
Cybernetics.

BP earlier: seems to call for some kind of general-purpose computer,
analog/digital in nature.

GR: Maybe my comments about diophantines were not so dumb (joke). I hope
that we aren’t some kind of computer; that would be rather disappointing.

BP earlier: I don’t quite see where to fit that in.
BP: What’s the problem? When I sit down and work out a model, I do a lot
of calculating, planning and drawing diagrams, writing and rewriting and
editing, and so on - a lot of extended thought and reasoning. I don’t
know enough about where that sort of thing fits into the current PCT
model. Are you saying you never do any numerical computing or logical
reasoning? I can see where rule-driven reasoning fits (at the logic level
or level 9 – well, levels 7 through 9 maybe as I’ve defined them). But I
don’t have a picture of how it all works. It’s the same problem as
understanding language. The PCT model gives us a general picture, but
most of the details are missing. I don’t think your classifications of
types of logic get us very far – they sound pretty made-up to me. Is
there any reason to think they’re universal?

Bill P.


Hi, Tim --

···

At 10:20 AM 11/10/2011 -0800, Tim Carey wrote:

Yep, thanks I can relate to that. I guess it was the 'pick one' that I was interested in. By 'pick one', I'm assuming that means set a reference for a particular (rather than some other) lower level system. That's the bit I was curious about. How does the 'pick one' happen?

Yep, that's the question all right, you've put your finger right on it. What's your answer?

Bill

(Gavin Ritz 2011.11.10.9.38NZT)

[From Bill Powers
(2011.11.10.0851 MST)]

(Gavin
Ritz 2011.09.11.10.19NZT)

BP earlier: You bring up a good point because the PCT standard diagram
doesn’t seem to have any place in it for extending thinking and reasoning.
The imagination mode is probably part of the answer, but the process of
logical reasoning or other information processing

GR: I hope you don’t want to go down this path, please say no it’s a dead
end, you have already shown that with your experiments. I think it’s killed
Cybernetics.

BP earlier: seems to call for some kind of general-purpose computer,
analog/digital in nature.

GR: Maybe my comments about diophantines were not so dumb (joke). I hope
that we aren’t some kind of computer; that would be rather disappointing.

BP earlier: I don’t quite see where to fit that in.

BP: What’s the problem? When I sit down and work out a model, I do a lot of
calculating, planning and drawing diagrams, writing and rewriting and
editing, and so on – a lot of extended thought and reasoning.

GR: We all do this extended thought and reasoning; that’s why you and many
others develop ideas. THAT’S CREATION. I hope you don’t think that
computers can create.

BP: I don’t know enough about where that sort of thing fits into the
current PCT model. Are you saying you never do any numerical computing or
logical reasoning?

GR: I have no idea how you got to this. I have never said this, and it’s
just not correct. Numerical calculation is the principal link between
theory and language.

BP: I can see where rule-driven reasoning fits (at the logic level or level
9 – well, levels 7 through 9 maybe as I’ve defined them).

GR: Yes, they do fit there; I have spoken about it so many times, but it’s
not explicitly stated. I can tell you exactly which logical connective is
associated with which level.

BP: But I don’t have a picture of how it all works. It’s the same problem
as understanding language.

GR: I just mentioned how language and logic are put together with
mathematics; nobody hears. Everyone makes such a fuss about language only
because they don’t understand how logic, language, and mathematics connect.
Worse, they have no intention of wanting to understand.

BP: The PCT model gives us a general picture, but most of the details are
missing.

GR: Well, I have been trying to help with many of the details, but I guess
it falls on deaf ears. It’s just what some are controlling for; PCT puts
that so wonderfully.

BP: I don’t think your classifications of types of logic get us very far –
they sound pretty made-up to me.

GR: I would have loved to have made up the concepts of logic. These are
pretty basic logical categories, not made up by me.

BP: Is there any reason to think they’re universal?

GR: Of course they are. Our entire thinking is bootstrapped using logical
connectives (or, and, implies) connecting declarative statements; it’s how
we build our theories. We link the logic of language to mathematics to
theory in bi-conditional ways.

Regards

Gavin


Hi, Tim --

Speaking of answering questions from your own real experiences, do you have a sister and have you been thinking of calling her (or emailing etc)?

If not, is there any other similar experience you have had that you can draw upon? All you have to do is pay attention to what goes on as you decide how to get in touch.

Best,

Bill