Principles of PCT-Guided Research

[From Bruce Abbott (960110.1750 EST)]

Anyone have a day stretcher handy? (I have a SERIOUS need for one: 24 hours
simply isn't long enough to get everything done I have to do...). Spring
semester started yesterday and I'm still not ready for it.

I've read with much interest the posts that have appeared in response to my
"principles of PCT-guided research." One thing to keep in mind is that I
want to keep it concise while conveying an accurate summary of the model,
goals, and methods. When I find some spare time I'll try to incorporate
your suggestions into a second draft and we can go into Round Two.

Meanwhile, my sponsoring editor is experiencing a large error in his
perception of having the completed revision of Research Methods in his
possession, and I've got to get back to work reducing that error....

Regards,

Bruce

[From Shannon Williams (960113)]

CHUCK TUCKER 960107--

                    Principles of PCT-Guided Research

       THE THEORY

       1. Behavior serves one function and one function only:
            to control perceptual signals.

The power of PCT is in the concept of the control loop. No one aspect of
that control loop should be emphasized more than the others.

Therefore, if you think that it is important to emphasize that behaviors
exist only to control perception then you must equally emphasize that
perceptions exist only to elicit behaviors. If you don't, then you
emphasize one biased and limited aspect of the loop, rather than the
concept of 'loop'.

       2. Perceptual signals are neural currents. For each
            perceptual signal there is a single neural current
            that embodies it.

What I think you must have in mind is for the hundreds of original
perceptual signals from the outside world to enter hundreds of e=r-p
control loops. And these must eventually be weeded/selected down to the
e=r-p loops which directly generate behavior. I do not believe this.

I believe that our internal workings are much simpler than that. I believe
that the hundreds of original perceptual signals from outside the brain are
just considered one big perception of the world. And we react with simple
input/output mappings according to what we perceive. Our thinking appears
complicated because we not only react to our current perceptions, but we
predict how these perceptions can change, and we react (with simple
input/output mapping) according to our predictions.
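
To make the contrast with a control loop concrete, here is a minimal
sketch of the kind of scheme Shannon seems to be describing (all names and
numbers are hypothetical, not from any post in this thread): the current
perception, plus a one-step prediction of it, is pushed through a fixed
input/output mapping to choose an action.

```python
# Hypothetical sketch of an input/output mapping plus prediction, as
# described above. Nothing here is from the original posts.

def predict_next(perception, history):
    """Naive one-step prediction: assume the recent trend continues."""
    if not history:
        return perception
    return perception + (perception - history[-1])

def io_mapping(perception):
    """Fixed perception-to-action mapping: react to the sign of the input."""
    if perception > 0:
        return -1.0   # push down
    if perception < 0:
        return +1.0   # push up
    return 0.0

history = []
perception = 3.0
for _ in range(5):
    predicted = predict_next(perception, history)
    action = io_mapping(predicted)   # react to the prediction, not just the present
    history.append(perception)
    perception += action             # the action changes what is perceived next
```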

In other words, I agree when you say: For each perceptual signal there is a
single neural current that embodies it. But I do not believe that this
perceptual signal is the input in an e=r-p loop.

            The system making use of a
            particular neural current does not "know" what
            perception the neural current represents.

Agreed.

       3. A control system acts so as to keep its perceptual
            signal close to its reference level.

Yes. We can describe the relationship between perceptions and behaviors
as a control loop.

            It does this
            by

                (a) comparing the perceptual signal to the
                    reference signal,

No.

The brain has the ability to predict how its perceptions will change
if it does not output any behavior. The brain also has the ability to
predict how its perceptions will change if it does output behavior. Using
this capacity, the brain is capable of selecting the perceptions that it
prefers to receive or avoiding perceptions that it prefers to avoid.

              and

                (b) generating an output signal whose effect
                    tends to reduce the error between
                    perception and reference.

The brain simply generates output that is associated with the perceptions
that it elects to receive. There is nothing mysterious about this. It is
a simple input/output mapping of perception to behavior.

             This effect is called negative feedback. Because
             error between the perceptual signal and the
             reference signal generates output which in turn
             affects the perceptual signal and thus the error,
             a control system is a closed loop system.

OK.

       4. Control systems are organized hierarchically, with the
            outputs of higher-level systems serving as reference
            levels (or other parameters such as gain values) for
            lower-level systems. The higher-level systems control
            their perceptual signals through the activities of the
            lower-level systems, which serve to close the loop so
            as to produce negative feedback in the higher-level
            systems.

See #2 above. We don't need HPCT.

       THE GOALS

I agree with everything else.

-Shannon

[From Rick Marken (960115.1000)]

Shannon Williams (960113) --

The power of PCT is in the concept of the control loop. No one aspect of
that control loop should be emphasized more than the others.

Therefore, if you think that it is important to emphasize that behaviors
exist only to control perception then you must equally emphasize that
perceptions exist only to elicit behaviors. If you don't, then you
emphasize one biased and limited aspect of the loop, rather than the
concept of 'loop'.

Here's a simple control loop:
               r
               |
               v
          p-->|C|-->e
          |         |
    d---> i<--------o

The variables in the control loop are p, e, o and i; the variables that
are external to the loop but influence the behavior of variables in the loop
are r and d.

No variable in the loop has a privileged status as the cause of the behavior
of any other variable in the loop; p is no more important as a cause of e
than e is as a cause of p. However, there is only one variable in this loop
that is _controlled_ (brought to a particular state and maintained against
disturbance): the _perceptual variable_, p. The loop keeps p = r. If there is
no explicit reference, then the loop keeps p = 0.
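
For readers who want to see this behaviour numerically, here is a minimal
simulation of the loop sketched above (the gain, slowing factor, and
disturbance values are illustrative assumptions, not from Rick's post):

```python
# Minimal control-loop simulation: variables p, e, o, i as in the diagram,
# with r and d external to the loop. Parameter values are illustrative.

gain = 50.0    # output function: o is driven toward gain * e
slow = 0.02    # slowing factor so the discrete loop settles stably

r = 10.0       # reference signal (external to the loop)
d = 0.0        # disturbance (external to the loop)
o = 0.0        # output quantity

for step in range(500):
    if step == 250:
        d = -5.0                 # step disturbance halfway through
    i = o + d                    # input quantity: output plus disturbance
    p = i                        # perceptual signal (identity input function)
    e = r - p                    # error: comparator output
    o += slow * (gain * e - o)   # leaky-integrator output function

print(round(p, 2))   # ~9.9: p is held near r = 10 despite the disturbance
```

Rerunning with a different d leaves p essentially unchanged while o changes
to oppose it, which is exactly the sense in which p, and only p, is
controlled.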

No other variable in the control loop, besides p, is controlled (protected
from disturbance). So p has a privileged status in a control loop; it is
the controlled variable. The mantra "behavior is the control of perception"
refers to this characteristic of a control loop (where the term "behavior"
refers to the behavior of the loop as a whole, not just to the output
variable).

Control loops are organized around the control of one particular variable, p.
In order to understand the behavior of the control loop, you must know what
the control system is controlling (the environmental correlate, i, of the
perceptual signal, p); you must know what perceptual variable the system is
controlling.

Control engineers build control loops so that p is a precise measure of the
environmental variable that they want controlled; so control engineers
don't need to find out what a control system is controlling; they are
making sure that the system controls a particular variable. Psychologists,
on the other hand, don't know what variables a living control system is
controlling; living control systems were built by the big engineers in the
sky (Evolution and Learning). So the main job of the psychologist who wants
to understand what a living control system is "doing" is to determine the
environmental correlate of the perceptual signals that it is controlling.
The only way to do this is to Test for controlled variables.
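
As a hedged illustration of that logic (the system, threshold, and
disturbance values are invented for the example), the Test amounts to
disturbing a hypothesized controlled variable and checking whether it
varies far less than the disturbance alone would produce:

```python
# Sketch of the logic of the Test for controlled variables. The system,
# threshold, and disturbance values are illustrative assumptions.

import random

def controlling_system(p, d):
    """A system that cancels most of any disturbance to p (gain-limited)."""
    return 0.1 * (p + d)

disturbances = [random.uniform(-5, 5) for _ in range(500)]

p = 0.0
observed = []
for d in disturbances:
    p = controlling_system(p, d)
    observed.append(p)

expected_spread = max(disturbances) - min(disturbances)
observed_spread = max(observed) - min(observed)

# If the variable is controlled, it is stabilized against the disturbance;
# if not, its spread should be comparable to the disturbance's spread.
print(observed_spread < 0.5 * expected_spread)   # True for this system
```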

Once we know what variables a living control system is controlling we can
start trying to understand _how_ these variables are controlled (the loop
gain, actions used in the control process, etc) and _why_ they are
controlled (what other perceptions are kept under control by controlling
these variables).

Best

Rick

[Martin Taylor 960115 16:00]

Comments on several postings from Shannon.

Shannon Williams (960113)

What I think you must have in mind is for the hundreds of original
perceptual signals from the outside world to enter hundreds of e=r-p
control loops. And these must eventually be weeded/selected down to the
e=r-p loops which directly generate behavior. I do not believe this.

Nor, I think, would most practitioners of PCT.

I think most would agree that there are hundreds--more probably hundreds of
thousands--of control loops. What they (and I) would not believe is that
perceptual signals enter from the outside world, or that there is any
"natural" sense in which some loops "directly generate" behaviour and some
don't.

I believe that our internal workings are much simpler than that. I believe
that the hundreds of original perceptual signals from outside the brain are
just considered one big perception of the world.

Where do these "original perceptual signals from outside the brain" come
from? What are they? Who is it (to pre-empt Bob Clark) that "considers" them
to be "one big perception of the world?" The usual PCT stance is that they
are elements of control loops, not "considered" at all.

What you say may be simple, but only in the way that creationist
science is simple: anything that isn't understood is pushed out of the
model with the words "and then a miracle happens". That's very simple, but
not very helpful.

In other words, I agree when you say: For each perceptual signal there is a
single neural current that embodies it. But I do not believe that this
perceptual signal is the input in an e=r-p loop.

You just didn't agree, and now you agree.

Just what is an "e=r-p loop"? Without trying to be pernickety, it just doesn't
make sense to talk about control loops in that language. You might just
as well call them "s=o+d loops" (sensory input = output plus disturbance).
As you said elsewhere, no particular point in the loop is especially
privileged --- EXCEPT ONE. That one point is the output of the comparator,
which the loop acts to stabilize at zero. (Well, perhaps it is a vector
zero, but in the scalar version that has seemed adequate so far, it is
a numeric zero). In that sense, and that alone, it makes some sense to
identify a control loop with the error signal that it continuously tries
to bring to zero. But that's only a label, which is far from descriptive.
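
For reference, the standard loop algebra behind Martin's point, assuming a
proportional output function o = G e and an input p = o + d (a textbook
simplification, not anything asserted in the posts above):

```latex
% Loop equations: e = r - p,  p = o + d,  o = G e  (proportional output)
\begin{align*}
  e &= r - p = r - (G e + d)\\
  e (1 + G) &= r - d
    \quad\Longrightarrow\quad e = \frac{r - d}{1 + G}\\
  p &= r - e = \frac{G r + d}{1 + G}
\end{align*}
% As G grows large, e -> 0 and p -> r: the comparator output is the one
% point the loop continuously drives toward zero, as stated above.
```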

And we react with simple
input/output mappings according to what we perceive.

And want to perceive. If you include that addition, you are describing
the output function of an ordinary control loop.

Our thinking appears
complicated because we not only react to our current perceptions, but we
predict how these perceptions can change, and we react (with simple
input/output mapping) according to our predictions.

Possibly. Hans Blom and Bill P. have had a long discussion on this topic.
As I understand the state of play, it is acknowledged that some prediction
is sometimes useful. (Personally, I'd go a little further in that direction
than I think Bill is prepared to go. To me, how much prediction is possible
and useful is a question of disturbance bandwidth. How much is actually used
is a separate question.)

       3. A control system acts so as to keep its perceptual
            signal close to its reference level.

Yes. We can describe the relationship between perceptions and behaviors
as a control loop.

No, I think that is a misleading way to state it. It is the loop that is
the loop. The loop requires two relationships, at least: the relation
between perceptions (including desired perceptions) and actions, and the
relation between actions and changes in perceptions. The latter relationship
can only be statistical, because the effects of disturbances have to be
added in, and they are independent of the actions.

The brain has the ability to predict how its perceptions will change
if it does not output any behavior.

Statistically, yes. The accuracy of its prediction must diverge over time,
but some predictability for some perceptions can be pretty good for long
periods in most instances. For example, if I predict that big rock will
still be there when I come back next week, next month, next year... I will
probably be right, but it could happen that a construction company is
contracted to build a football stadium there, and the rock might have gone
by tomorrow. But on most occasions when I make such a prediction, I'll
be right.

On the other hand, if I predict where my car will go if I keep the steering
wheel at top dead centre, I'll be pretty far wrong (and possibly dead)
in only a few seconds.

The brain also has the ability to
predict how its perceptions will change if it does output behavior.

Under the same constraints. However, the brain can determine in what
direction perceptions are likely to change if it outputs certain kinds
of behaviour, perhaps not explicitly, but in PCT terms, the outputs will
ordinarily (disturbances permitting) have the correct sign to give the
loop involving a particular perception a negative overall gain. The point
here is that the laws of the universe are more stable than the positions
of objects in it. The environmental feedback functions (to use a PCT term)
are more stable, at least in their signs, than are the disturbing influences.
If that were not so, we couldn't control at all, and neither would your
brain be able to "predict how its perceptions will change if it does
output behavior."

Using
this capacity, the brain is capable of selecting the perceptions that it
prefers to receive or avoiding perceptions that it prefers to avoid.

Depending on how you mean the words, this is either magic or PCT. I don't
see any other possibility.

The brain simply generates output that is associated with the perceptions
that it elects to receive.

Now you DON'T have an input-output mapping. An I/O mapping can be associated
with the perceptions it IS receiving, but not with what it isn't.

There is nothing mysterious about this. It is
a simple input/output mapping of perception to behavior.

No it isn't. And it would indeed be mysterious even if it were. However,
you are (I think) describing the actions of either one complicated comparator
function or a bunch of simple ones. I'm not sure, because your language
is inconsistently used. That's OK when you are working within a framework
agreed between you and your readers, but when you are trying to change the
linguistic frame of reference, you have to be careful to be consistent.

We don't need HPCT.

Possibly not. Can you demonstrate the truth of my old conjecture that this
is so? I'd be very happy if you could, because nobody else has been able
to show that it is either true or false. You assert that it is true, and
that's the first time anybody has come out with such a flat statement. If
it is true, then experiments are possible to detect the difference between
the two concepts. That would be exciting.

-------------------
Shannon Williams (96011.02:30) To Bill Powers

You are thinking: output = (input2-input1)/time

I am thinking: input1 = reference
              input2 = input

    where: [input] ---> [neural network] ---> [output]
                                 /|\               |
                                  |               \|/
                              [adjust]             |
                              [ment  ] <-----------C
                                                  /|\
                                                   |
                                                   R

              The output is LEARNED by making it equal the reference
              for whatever is input.

Fine. That's one of the standard ways of using delay in a neural network.
The network learns a compact and efficient representation of the input.
But in the case of generating appropriate actions in the outer world, why
would you want the output to equal an earlier input? You might want a
later input to equal an input you had earlier found to be useful, but
you wouldn't get that by producing the same output as last time, even
if setting the output equal to the input would have given it to you on the
first occasion.
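
For concreteness, here is a minimal sketch of the training scheme in
Shannon's diagram, assuming a single linear unit trained by the delta rule
(the rule, rate, and pattern are my assumptions, not Shannon's
specification):

```python
# Delta-rule sketch: the network's output is nudged toward a reference R
# for whatever pattern is input, as in the [adjustment] loop above.

import random

weights = [random.uniform(-0.1, 0.1) for _ in range(3)]

def forward(x):
    return sum(w * xi for w, xi in zip(weights, x))

def train(x, reference, rate=0.1):
    error = reference - forward(x)           # the comparison at C
    for i in range(len(weights)):
        weights[i] += rate * error * x[i]    # the adjustment arrow

pattern, reference = [1.0, 0.0, 1.0], 0.8
for _ in range(100):
    train(pattern, reference)
print(round(forward(pattern), 3))   # ~0.8: output has learned to match R
```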

The neural network will learn to predict only when prediction is viable.
If you teach it that A follows B, then that is what it learns. If you do
not give it any consistent input, then it does not learn anything. It only
recognizes the order/predictability of its universe.

True. And the control system that never tries to predict can control better
in an unpredictable environment than can one that has learned in a more
stable environment that then goes mad. "It ain't so much what you don't
know as what you know that ain't so" is what hurts.

How can an output be generated to avoid situations that are unpleasant?

This question should not be causing problems for you. This is a question
you must ask yourself even in your current version of PCT. The question
for you is: How do we learn which outputs we need to generate to control
a perception?

In answer to your question: We try an output. If it helps, we try it
again. If it doesn't help, we try something else.

You are describing the "classic" process of reorganization within PCT, but
I don't see where it links with your I/O modelling. What you describe is
indeed one way that the HPCT structure learns which outputs are needed to
control which perceptions; and when its outputs do control its perceptions,
it stops trying something else. That's one of the ways reorganization is
supposed to work. It's like the way dead leaves in the autumn "learn" to
build themselves into neat little piles behind hedges--so long as the wind
affects them, they move, but when they get into a calm place, they don't move
any more. So long as the HPCT structure is having trouble controlling, it
moves (changes structure or weights), but when things calm down and control
is successful, it doesn't move any more.
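
A hedged sketch of that "dead leaves" picture (the environment, step size,
and tolerance are invented for illustration): a parameter keeps changing at
random while control is failing and stops changing when control succeeds.

```python
# E. coli-style reorganization sketch: random changes while error is high,
# no changes once the perception is near its reference. Illustrative only.

import random

weight = random.uniform(-2.0, 2.0)   # an output connection being reorganized
r = 5.0                              # reference for the controlled perception

for _ in range(10000):
    p = weight * 4.0                 # perception via a fixed environment
    if abs(r - p) > 0.1:
        weight += random.uniform(-0.2, 0.2)   # still "windy": keep moving
    else:
        break                        # calm spot: stop reorganizing

print(round(weight, 2))   # usually settles near 1.25, where p is close to r
```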

One thing to remember is that the input to the neural network is a pattern.
This means that you can map a behavior to a pattern. If that pattern
later fails to remove an error when the behavior is output, then you can
alter your network to generate another output. BUT- the effect of what you
are doing is this:

at first: neural input = 0X11XX01 ==> neural output = 11111111

after adjustment: neural input = 00110X01 ==> neural output = 11111111
                  neural input = 01110X01 ==> neural output = 11101110

If you take those 1's and 0's to represent the signs of output connections,
you have a good description of one aspect of reorganization.

---------------

Shannon Williams (9614.03:30)

Networks give you an input/output mapping.

Fine, with the proviso I pointed out earlier, that this is true only if the
network is non-recurrent. So say "Networks CAN give you an input/output
mapping" and I'll go along.

Modifiable networks give you
the capacity to learn input/output mappings.

Yes.

The cumulative effect of
these mappings allows an external observer to describe an organism's
behavior as e=r-p.

That's the leap that makes no sense to me. Under what conditions would
an external observer so describe an organism's behaviour? Or perhaps the
question should be: what does it mean for an external observer to describe
the organism's behaviour as e=r-p? I like straw men to be visible, at least,
before they are knocked down, so that I can determine whether they are
indeed built of straw.

It's a very simple concept.

Perhaps. But you don't make it so.

Maybe you could figure out a question that identifies what you don't
understand. Or sketch what you think that I am saying, and point out its
logic flaws.

OK. I've played with neural networks of different kinds for over 30 years
now, in an on-again, off-again way. I think I have a pretty good grasp of
what they do, and to what they are equivalent, though I'm by no means
expert in modern network theory. I've been a psychologist longer than that,
but have been acquainted with PCT for only 4 years. None of that helps me
to understand the concept of describing behaviour as "e=r-p", or as
"e= anything at all". I can imagine describing the relation between
input and output as a mapping--people have been doing that for decades,
with often great mathematical sophistication (have you looked up J.G.
Taylor's book yet?). As a PCT student, I can imagine seeking out controlled
perceptions by disturbing environmental variables in different ways. I
can imagine modelling the results of these tests to see if I can find
control elements that remain consistent across different kinds of tests.
But until I have some notion of what it means to describe behaviour as
error, I cannot make the conceptual leap between saying that an S-R account
is adequate for psychology and saying that such an account can lead to
an organism whose behaviour can be described as "error=something."

It's NOT a simple concept.

If you insist that all control loops are operating
in parallel, connected directly to sensory input and to the muscles, then
the argument from existence should be enough to say that there is a method
whereby those loops came to exist.

You are right that the way that I am visualizing the control loop explains
not only the control loops' existence but their origin. But I am not
visualizing parallel loops. If the loops were all parallel, we would never
experience conflict.

Several different problems here: I said that your assertion that the loops
exist implies that a method must exist whereby they come to be. You now
read that as saying that you have described, or are prepared to describe
(I'm not sure which), the way those loops have come to be. That's a very
different kettle of fish.

You don't imagine parallel loops, and you don't imagine a hierarchy of loops,
but you do conceive that there are many loops. What configurations do you
imagine that are neither parallel nor hierarchic?

Conflict occurs when two control systems attempt simultaneously to control
some perception to two different values. So far as I can see, that is an
assertion that conflict occurs only between control systems operating in
parallel.

       3) If I am going to model behavior using a neural network, I can
       simulate the evolution of behavior using the visualization in #1.
       But I can't if I am using the visualization in #2.

Whyever not?

Draw a sketch outlining how PCT models the evolution of behavior. I want
to see each little stage of the evolution. Show me this, and I will answer
your question. (No perhapsial handwaving, please).

You don't want a sketch. You want a model. We have a model, on which we would
like someone to run the experiments to see whether it works, in a specific
situation very much like the one with which you started this thread. Would
you like to run the experiments, helping to debug the model in the process?
If you have a reasonable Mac, I'd be quite happy to send it to you. It's
probably far from being properly debugged, but at least some of it works
properly.

Our model is, of course, very simplified in comparison to real living
organisms. But it is intended to test three (possibly four) modes of
reorganization. (I don't know why I'm writing this, since I've done so
several times in the last couple of weeks...but anyway...)

The Little Baby works in an environment consisting of a simple formal
language whose syntax is exactly prescribed by a BNF grammar. Each output
symbol of the grammar is assigned a position in a 3-D "feature space",
and the output stream from the grammar defines a trajectory through this
feature space. The trajectory is a smoothed version of the jumps between
the locations of successive symbols, so that often it won't arrive at the
actual location before the next symbol is emitted. The grammar engine is
unaffected by what the Little Baby does, so all the LB could possibly do
is to learn to "interpret" the grammar. Its objective is to keep its "finger"
on the target--the tip of the moving trajectory output by the grammar--and
I expect it to do this by learning, at different hierarchic levels, the
syntactic constructs in the grammar, or something that produces equivalent
results (in other words, it might learn a different, but functionally
similar, grammar).
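
A rough sketch of that environment, with a placeholder grammar and
placeholder feature-space locations (the actual BNF grammar and symbol
assignments are not given in the post):

```python
# Placeholder environment: a symbol stream from a tiny grammar, each symbol
# at a 3-D location, with the target trajectory smoothed between locations.

import random

successors = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A']}
location = {'A': (0.0, 0.0, 0.0), 'B': (1.0, 0.0, 1.0), 'C': (0.0, 1.0, 0.5)}

symbol = 'A'
target = list(location[symbol])
smoothing = 0.3   # the trajectory often never reaches a symbol's location

for step in range(40):
    goal = location[symbol]
    target = [t + smoothing * (g - t) for t, g in zip(target, goal)]
    if step % 4 == 3:                          # a new symbol every few steps
        symbol = random.choice(successors[symbol])
```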

Internally, the LB is born as a layered set of ECUs. It is layered because
the upgoing and downgoing connections are consistent: the perceptual
signals generated at layer N all provide sensory inputs to ECUs at layer
N+1, and the output signals from an ECU at layer N all provide inputs to
reference inputs at layer N-1. Which ECUs are connected
to which others, and with what weights (positive or negative), is initially
completely random. The level 1 ECUs provide outputs that together drive
a "finger" in the 3-D space, and the sensory inputs from the 3-D space
are the positions of the finger and of the target (the end point of the
symbol-based trajectory). By the way, all the ECUs have the possibility
of having delayed inputs, which makes derivative and acceleration perceptions
possible. Whether those possibilities are used depends on the weights.

The LB has a reorganizing system based on intrinsic variables. It REALLY
doesn't want there to be any deviation between the finger position and the
target--tracking by following is no good to it. Also, it doesn't want to have
error in any ECU. The value of an error-based intrinsic variable is
a*E^2 + b*(dE/dt)^2, where a and b are some positive constants. If a
particular ECU has a high value for its "error-based intrinsic" variable,
it is more likely to "reorganize" during some compute cycle than if it
has a low value for that variable.
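
In code, the error-based intrinsic variable and its effect on
reorganization rate might look like this (the constants and the mapping
from intrinsic value to probability are illustrative assumptions; only the
formula a*E^2 + b*(dE/dt)^2 is from the description above):

```python
# Error-based intrinsic variable for one ECU, per the formula above, and a
# hypothetical mapping from its value to a per-cycle reorganization chance.

a, b, dt = 1.0, 0.5, 0.01   # illustrative constants and compute-cycle length

def intrinsic_value(error, prev_error):
    dedt = (error - prev_error) / dt
    return a * error**2 + b * dedt**2

def reorganization_probability(iv, scale=10.0):
    """Higher intrinsic value -> more likely to reorganize this cycle."""
    return min(1.0, iv / scale)
```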

The "finger-target" "intrinsic" variable is actually taken to be a top-level
reference for the control hierarchy, rather than sitting atop a separate
"reorganization" hierarchy, as Bill P would probably prefer, so reorganization
happens because other control systems can't eliminate the error in the
top-level control systems. The "error-based intrinsic error" is the only
direct influence on reorganization rate.

The experiments planned for the LB before the money ran out were to test
different aspects of reorganization. There would be no "perhapsial
handwaving" about it, (as there has been none in anything I have written
in response to your postings).

Experiment 1. Reorganize output connections and their signs without changing
weight values. This works fine so far as getting the LB to track a symbol
stream of the kind "ABBBAABAAABABBAAAB..." and, though we haven't been able
to run the study for the aforementioned financial reasons, there seems no
reason it wouldn't work just as well for "ABCBFHAUD..." in 3-D rather than
1-D. One or two sign reversals, and tracking proceeds (in a following mode,
of course, there having been no structure for the LB to learn).

Experiment 2. Hebbian changes of input weights (like the neural network
remapping you have been going on about). This experiment treats the perceptual
input functions and their connections like a multilayer perceptron that
learns to perceive only what it can control given its "born in" set of
output connections. This hasn't been tested, but the method is prescribed
and programmed (but possibly not debugged).

Experiment 3. Hebbian changes of output weights. The LB learns to control
whatever it can perceive given its "born in" perceptual functions. This also
is programmed, if not debugged.
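
For readers unfamiliar with the term, a minimal sketch of the Hebbian
change used in Experiments 2 and 3 (the rate, decay, and signals are my
illustrative assumptions):

```python
# Hebbian weight change: a connection strengthens when its input and the
# unit's output are active together; decay keeps the weight bounded.

def hebbian_update(weight, pre, post, rate=0.01, decay=0.001):
    return weight + rate * pre * post - decay * weight

w = 0.0
for pre, post in [(1.0, 0.8), (0.9, 1.0), (0.0, 0.5)] * 50:
    w = hebbian_update(w, pre, post)
print(round(w, 3))   # co-activity has built up a positive weight
```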

Experiment 4. Do 2 and 3 together. The LB learns to perceive what it can
control and to control what it can perceive. There's a danger here that
it will learn to ignore all but a small subspace of the feature space, so
that it fools itself into thinking it's doing fine when the experimenter
knows it is ignoring, say, stop-continuant contrasts. That would be like
a baby learning to ignore phonetic contrasts that don't occur in its own
language, so if that happens, it could be an illustrative "danger".

Experiment 5. Add and subtract ECUs, changing the hierarchic structure.
This is not yet programmed, but it was planned. The idea is that ECUs that
see little and do little would be removed from the hierarchy, and that
continued intrinsic error in finger-target distance would cause the addition
of new ECUs at random points in or above the existing hierarchy.

It would be really nice for someone either to provide money or to provide
time and effort to continue these studies. The results might well help
you to visualize how a control network would evolve. Personally, I think
it likely that the growth and evolution of a control hierarchy would be
much more stable and quicker than that of a feed-forward neural network,
because in the control hierarchy "error" can be assigned to each node (ECU)
whereas in the normal NN it can be assigned only to the network as a whole.

Incidentally, Shannon, if you've read the Layered Protocol tutorial I sent
you, you will have noticed several comments about the characteristics of
control hierarchies developed through reorganization, in particular the notion
of critical stability embodied in the term "the Bomb in the Hierarchy."
These ideas are ones that I'd love to see investigated in an extension of
the LB project. But at present, the LB is little more than an embryo,
far from the child whose temper tantrums would be the observable consequence
of the "Bomb."

---------------
Shannon Williams (960114.01:00) to Bill Powers

You can use a trainable
neural network to explain how memory, imagination, language, math, ANYTHING
can work. I even see how the networks could have successively evolved as
species evolved. It's easy.

You sound like some of the early enthusiasts. It ain't so easy in practice.
Yes, they can learn all kinds of patterns, but I don't think anybody I
know of has tried letting a NN train itself by navigating in a dangerous
world. There are self-training networks, but what they learn is related
to the statistical structure of their inputs, not to how to bring their
inputs to "desired" states.

Maybe it's easy, but lots of people who have said that have found otherwise
when they actually have tried. You may be different. You'll become very
rich if so.

But if my car is veering to the
left, it is easy for me to recognize/predict that it will continue veering
to the left unless I or some obstacle does something about it.

Why so? Maybe it veered to the left because the left front wheel hit a bump,
and in the next millisecond the right front wheel will hit one. Maybe not.
How is it easy for you to recognize which is the case? The whole point of
PCT is that things in the world that you haven't seen can affect you at any
time.

Bill said:

I think that is enough to start with.

Why do you think this?

I think he meant that if you read what he wrote, you might ponder it for
a while before responding. And then he could continue the discussion based
on a thoughtful response from you. Whether you have provided that response
is not for me to say.

Martin

[Bill Leach 960115.21:28 U.S. Eastern Time Zone]

[Shannon Williams (960113)]

       1. Behavior serves one function and one function only:
            to control perceptual signals.

The power of PCT is in the concept of the control loop. No one aspect
of that control loop should be emphasized more than the others.

Therefore, if you think that it is important to emphasize that
behaviors exist only to control perception then you must equally
emphasize that perceptions exist only to elicit behaviors. If you
don't, then you emphasize one biased and limited aspect of the loop,
rather than the concept of 'loop'.

From a pure math analysis standpoint you are absolutely right. As a
practical matter, you are "so far off base" as to not even be in the
field!

Living beings, particularly humans, are purposeful creatures, and as long
as an individual happens to be living and functioning properly, the
reference(s) for its system are the apex of, and the point of, all
studies. Unfortunately for us all, the reference is actually inaccessible
(even our own), and thus the best we can do is infer it (to the best of
our abilities) by finding the environmental variable(s) that are the
subject of the being's perceptual control actions. The TEST is currently
the best tool we have for that purpose.

       2. Perceptual signals are neural currents. For each
            perceptual signal there is a single neural current
            that embodies it.

What I think you must have in mind is for the hundreds of original
perceptual signals from the outside world to enter hundreds of e=r-p
control loops. And these must eventually be weeded/selected down to the
e=r-p loops which directly generate behavior. I do not believe this.

Forget HPCT for a moment... It seems pretty clear to me that you do not
yet have a grasp of multilevel control systems.

I believe that our internal workings are much simpler than that. I
believe that the hundreds of original perceptual signals from outside
the brain are just considered one big perception of the world. And we
react with simple input/output mappings according to what we perceive.
Our thinking appears complicated because we not only react to our
current perceptions, but we predict how these perceptions can change,
and we react (with simple input/output mapping) according to our
predictions.

Shannon, you are, of course, free to believe whatever you wish to believe,
but this previous quote from you describes a simple "Stimulus-Response"
system. The only difference between "normal" S-R and your stated version
is that both S and R are postulated to be patterns.

I will give you the same challenge that Bill P. has attempted: provide a
_WORKING_ model of your system. The field is full of "fine theories" and
"wonderful models"... on paper. At the moment, PCT is the only game in
town where an actual model produces actual behaviour that matches, with a
high degree of correlation, the behaviour of actual humans.
Incidentally, some of the early predictions concerning how the model
would behave turned out to be incorrect when the model actually ran. The
terrible discouragement of such a state of affairs, however, turned to near
jubilation when the human subjects' results matched the working model's
behaviour and not the paper prediction.

Yes. We can describe the relationship between perceptions and
behaviors as a control loop.

            It does this
            by

                (a) comparing the perceptual signal to the
                    reference signal,

No.

The only way you could make and believe your statement here is if you do
not understand, in its simplest form, how a closed-loop negative feedback
control system works. No matter how complex a control system becomes,
the elements and functions of the basic system must exist.

The brain has the ability to predict how its perceptions will change
if it does not output any behavior. The brain also has the ability ...

Imagination is another matter. While some postulates exist as to how such
a thing might function (and they seem "reasonable" to most PCTers),
these are (at least currently) untestable.

The discussions on PCT range far and wide at times but PCT as a science
and a research effort deals only with that which is measurable and
testable on individual specimens -- both in such a fashion that others
can duplicate the work without bias.

              and

                (b) generating an output signal whose effect
                    tends to reduce the error between
                    perception and reference.

The brain simply generates output that is associated with the
perceptions that it elects to receive. There is nothing mysterious
about this. It is a simple input/output mapping of perception to
behavior.

I'm sorry Shannon but it must be quite mysterious as PCTers have been
unable to locate any working models other than their own. I know we must
be "pretty stupid" here since it has also proven to be impossible to
create working models of anyone else's paper model.

One of my favorite jokes applies here...

The blackboard has hundreds of complex equations leading to an answer,
with a section in the middle marked "Magic Occurs Here". The professor's
comment is that a little additional work is needed!

You then "turn around" and say "OK" to the negative feedback statement!
Such is likely to confuse everyone as to just "where you are coming
from". Either you accept closed-loop negative feedback control system
operation or you do not.

-bill

[Bill Leach 960115.22:51 U.S. Eastern Time Zone]

[Rick Marken (960115.1000)]

Excellent posting, Rick... with one exception. :-)

... disturbance); the _perceptual variable_, p. The loop keeps p = r.
If there is no explicit reference, then the loop keeps p = 0.

The last sentence here is not correct. In a "bidirectional" or
effectively bidirectional control system, a reference of zero for a
particular controlled perception means that some loop has a non-zero
reference (i.e., the zero reference is inverted to a maximum).

In a simple unidirectional control loop, a reference of zero means that
the perception is not controlled and may assume any (real) value.
-bill