Perceiving error

[From Bill Powers (930922.0710 MDT)]

Greg Williams (930922)

Nice to hear from you again.

I didn't get much chance at the meeting to ponder the materials
you brought. The excerpts you cite now raise some questions in my
mind.

1. "When driving an automobile, the HUMAN OPERATOR perceives a
discrepancy or error between the desired state of the vehicle
and its actual state. The car may have deviated from the center
of the lane or may be pointing in a direction away from the
road. The driver wishes to reduce this error."

This says explicitly that what the driver perceives is an error,
implying that the error exists outside the driver. If the error
exists outside the driver, then the comparison process must also
take place outside the driver, interpreting certain relationships
of the car to the road as errors and presenting this interpreted
result to the driver's senses.

In PCT, what is perceived is simply the relationship of car to
road that exists at any time. It is neither correct nor
incorrect; it is what it is. The perception of this relationship,
inside the driver, is compared with a reference signal, also
inside the driver. The driver can select any state of the
perceived relationship as a reference condition, and manipulate
the car to achieve the reference relationship -- for example, car
straddling the lane-separator line, car crosswise in the road,
car leaving the road altogether (as when turning a corner), car
crashing into a bridge abutment in a suicide attempt. What comes
in to the driver's senses is not an error signal, but simply a
report on the current state of affairs as the perceptual
equipment represents it. There are no errors or "discrepancies"
represented in the perception.
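
[Editorial sketch: the loop described in this paragraph, in a few lines of Python. The gain, time step, and reference values are hypothetical illustrations, not part of the original post. The point is that the comparison happens inside the controller, and the reference can specify any state of the perceived relationship.]

```python
def simulate(reference, steps=200, gain=5.0, dt=0.1):
    """One-level control loop: the comparison happens INSIDE the controller."""
    position = 0.0  # lateral offset of the car (the environment variable)
    for _ in range(steps):
        perception = position           # input function: reports what is; no "error"
        error = reference - perception  # comparison with an internal reference
        position += gain * error * dt   # output acts on the environment
    return position

# The same loop reaches whatever reference is selected -- lane center,
# straddling the line, or any other relationship of car to road.
print(round(simulate(0.0), 3))  # centered in the lane -> 0.0
print(round(simulate(1.5), 3))  # straddling the line  -> 1.5
```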

2. (regarding tracking of time-varying inputs in general:) "It
is assumed that the human 'intends' to produce a given
position."

The implication here is that the intended position is the actual
position of the car relative to the actual road (etc.), outside
the driver. Putting "intends" in quotes suggests that the author
doesn't treat intention as a real phenomenon, but just as a way
of speaking. I don't see that Wickens has made it clear that an
intended position is an intended _perception_ of position, as
opposed to an actual position objectively defined. Nor is it
clear that Wickens identifies an intended position as an
adjustable reference signal inside the controlling system.

Wickens doesn't seem to grasp the situation as we do in PCT.
Milsum, I agree, did better.

Perhaps what we need is a survey of all the major manual control
experts' way of describing simple control situations, to get an
idea of how many understand that only perceptions can be
controlled, and that intentions and error signals exist inside
the driver. From your citations from Wickens, I'd say he may
understand control theory but he flunks PCT, although perhaps he
redeems himself in other parts of the book.

We'll put in an interlibrary request for the Latash book.


---------------------------------------------------------------
Best,

Bill P.

[From Bruce Abbott (960926.2040 EST)]

Rick Marken (960926.1530) --

Bruce Abbott (960926.0820 EST)

Under this condition, to state that "the experimenter is _not_ a 'source of
information as to error' " (emphasis mine) is, strictly speaking, incorrect.

Bill Powers (960926.1430 MDT) --

We should all kick ourselves. KR is effectively part of the controller's
input function

Nice observation. But KR is still not "information as to error". At best, it
is what you say: "experimenter-supplied information about the state of an
otherwise unobservable controlled variable." Saying that it is "information
as to error" suggests that error is something we perceive;

I think we're getting into a language problem here. If the participant has
adopted the experimenter's reference (or if the experimenter has adopted the
participant's reference), and the experimenter is reporting the deviation of
the CV from reference, and the participant takes corrective action on the
basis of that deviation, isn't that the same as if the participant observed
the CV, compared it to the reference, and took corrective action on the
basis of that deviation? If so, then would it not be correct to say that
the experimenter is providing "information as to error" (deviation of CV
from reference)? Perhaps I can perceive neither the reference position nor
my current position, so I delegate to the experimenter the job of comparing
the two and reporting the error to me. I then do what I always do with such
errors when computing them myself.

And who says we can't perceive error? Are you asserting that when I wish to
be at A, and perceive myself to be at B, that I can't perceive the error in
my position? And act on it?

As Buddha might have said: we do not perceive error: error is a result of
comparing what we perceive to what we _want_ to perceive.

No, wise Buddha would have perceived the error in stating that we do not
perceive error, and thus would have said no such thing!

Regards,

Bruce

[From Bruce Gregory (960927.0935 EDT)]

(Bruce Abbott 960926.2040 EST)

>Rick Marken (960926.1530) --

>>Bruce Abbott (960926.0820 EST)
>
>>Under this condition, to state that "the experimenter is _not_ a 'source of
>>information as to error' " (emphasis mine) is, strictly speaking, incorrect.
>
>>>Bill Powers (960926.1430 MDT) --
>
>>>We should all kick ourselves. KR is effectively part of the controller's
>>>input function
>
>Nice observation. But KR is still not "information as to error". At best, it
>is what you say: "experimenter-supplied information about the state of an
>otherwise unobservable controlled variable." Saying that it is "information
>as to error" suggests that error is something we perceive;

I think we're getting into a language problem here. If the participant has
adopted the experimenter's reference (or if the experimenter has adopted the
participant's reference), and the experimenter is reporting the deviation of
the CV from reference, and the participant takes corrective action on the
basis of that deviation, isn't that the same as if the participant observed
the CV, compared it to the reference, and took corrective action on the
basis of that deviation? If so, then would it not be correct to say that
the experimenter is providing "information as to error" (deviation of CV
from reference)? Perhaps I can perceive neither the reference position nor
my current position, so I delegate to the experimenter the job of comparing
the two and reporting the error to me. I then do what I always do with such
errors when computing them myself.

When we delegate to someone else the responsibility for
determining the error, do we not tacitly shift our goal? I
think this is what happens whenever a teacher "steers" a
discussion or asks questions to which he or she knows the
answers. The students' goals then become, not to understand in
terms of their own base of knowledge, but to say what they
believe the teacher wants to hear. Students are very good at
mindreading. In many ways they understand how to test for
whatever the teacher is controlling for much better than the
teacher ever understands how to test for what each of the
students is controlling for. Perhaps it is because they work
cooperatively and the teacher is alone with a different set of
goals.

Bruce

[From Bill Powers (960927.0645 MDT)]

Bruce Abbott (960926.2040 EST)--

I think we're getting into a language problem here. If the participant has
adopted the experimenter's reference (or if the experimenter has adopted the
participant's reference), and the experimenter is reporting the deviation of
the CV from reference, and the participant takes corrective action on the
basis of that deviation, isn't that the same as if the participant observed
the CV, compared it to the reference, and took corrective action on the
basis of that deviation?

A long time ago, and for reasons that are partly neurological and partly
arbitrary, I set the rule for myself that perception is associated ONLY with
afferent pathways in the brain. By that I mean the pathways that begin with
sensory neurons and build higher and higher levels of representation on the
signals from those neurons. A strict interpretation of that rule says that
we do NOT experience reference signals or error signals, or the signals
emitted by any level to lower levels (efferent signals).

It was the strict application of this rule (which dates back close to the
beginning in the period with Bob Clark, 1953 - 1960) that led to proposing
the imagination connection. If only afferent or upgoing signals can be
experienced as perceptions, how do we know the values of reference signals
before we put them into effect? We obviously do: I can tell you what my aim
is before I carry it out. But that means we are perceiving reference
signals, and the rule says that we do not perceive reference signals, which
are efferent signals.

It took a while before the answer popped up. In order not to violate the
basic rule, somehow we have to get the reference signal information into the
afferent channels so it can be perceived. The simple-minded way to do this
(my normal way) was to route a copy of the reference signal through a
short-circuit into the perceptual input function of the system that is
providing the reference signal. Then the input function of the higher system
receives a signal just like the one it would receive if the lower system had
actually made its own perceptual signal match the reference signal it is
receiving from above. Normally the higher-level input function receives
copies of lower-level perceptual signals; substituting the reference signals
sent to each contributing lower-level system would create the same situation
that would hold if the lower systems were all controlling perfectly.

This immediately suggested "imagination," because imagination has the same
sort of properties that this connection would have. First, the
short-circuited reference signal would enter the same input function that
the lower-level perception would normally enter, so the signal would receive
the same interpretation it receives in the normal mode of operation. Second,
the conversion of the reference signal into a perception would be immediate
and effortless -- control of the perceptions would be perfect. Third, it
would be possible to imagine perceptions in this way which the lower-level
systems could not actually provide, either because doing so would be
physically impossible (imagine lifting your car over your head) or because
you don't happen to be in the required environment (while you're driving to
work, imagine having a conversation with your boss). Fourth, the terms in
which we imagine are exactly the same terms in which we perceive. Such
properties certainly fit what we call imagination. Incidentally, brain
research has recently shown that animals which are anticipating the
appearance of a stimulus show activity in the same perceptual channels that
are involved when the stimulus actually appears, which is strong support for
this picture.
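
[Editorial sketch: the imagination connection described above, reduced to a mode switch on the input function. A minimal toy illustration with hypothetical names and values, not Bill's model code.]

```python
def input_function(reference, environment, mode="normal"):
    """Perceptual signal of a lower system, with an imagination short-circuit."""
    if mode == "imagine":
        # Short-circuit: a copy of the reference signal is routed directly
        # into the perceptual input, as if control were already perfect.
        return reference
    # Normal mode: the perception comes from the environment via the senses.
    return environment

# Normal mode: perception reports the world, which need not match the intent.
print(input_function(reference=10.0, environment=3.0))                  # 3.0
# Imagination mode: the perception equals the reference, immediately and
# effortlessly, whatever the environment happens to be.
print(input_function(reference=10.0, environment=3.0, mode="imagine"))  # 10.0
```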

It was only a short step to get memory into this. If efferent signals from
one level of systems are translated by a memory function into the actual
signals received as references by lower systems (a feature which would take
care of translating from the terms of a higher system into the terms of a
lower one), then the same connection could explain remembering as well as
imagining. The memory aspect of reference signals comes in when you "do the
same thing again." If the "thing" is turning left at the third traffic
light, what you are repeating is obviously the perception of turning left,
not the actions that lead to this result in the traffic of today, and with
the timing of today's arrival at the traffic light which could be either
green or red (or broken) when you get there. What you're repeating is the
_perception_ of turning left, not the _actions_, which means that the
perceptions must be remembered somehow and used as reference signals. But if
you can do that, you can also simply remember without acting, because the
source of the reference signals is memory and the imagination connection can
route the output of memory into the perceptual input functions that will
give that output the correct interpretation.

So we get imagination, remembering, and "doing the same thing again" all
from the basic postulate that all perception is of afferent signals. Quite a
harvest from planting a single seed. And of course we also get planning and
dreaming -- model-based control is the "modern" term.

OK, I wanted you to see the structure of the model that explains why I want
to keep the postulate that all perception is associated with afferent
channels only.

And who says we can't perceive error? Are you asserting that when I wish to
be at A, and perceive myself to be at B, that I can't perceive the error in
my position? And act on it?

This is how levels of perception get into the act. I'm on the left side of
the street (A) and I wish to be on the right side of the street (B). How do
I perceive being on the right side of the street when I am not there? By the
above reasoning, I must imagine being there. Imagining provides a signal
(actually a lot of signals, because we're really talking about multiple
systems operating in parallel) that is like what I would experience if I
were in position B, on the other side of the street. Normal-mode perception
provides similar signals representing being in position A, the position
where I am. I can now _perceive the difference in positions_ at a higher
level of perception. One of the positions is real, the other imagined. And I
can perceive the relationship between them.

The relationship is a certain distance between A and B. And what is the
_reference_ relationship? ZERO distance between A and B. So the higher
control system turns this error (which I do not perceive) into an action
that moves me from A to B and corrects the relationship error.
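
[Editorial sketch: the two-level account above, with hypothetical positions and gains. The higher system perceives the relationship between the real position (A) and the imagined one (B) and controls it against a reference of ZERO distance.]

```python
def cross_street(a=0.0, b=10.0, gain=4.0, dt=0.05, steps=200):
    """Control a relationship between a real and an imagined position."""
    position = a   # where I actually am (normal-mode perception)
    imagined = b   # where I wish to be (supplied by imagination)
    for _ in range(steps):
        relationship = imagined - position  # perceivable: the A-to-B distance
        error = 0.0 - relationship          # reference is ZERO distance (not perceived)
        position -= gain * error * dt       # action reduces the relationship
    return position

print(round(cross_street(), 2))  # -> 10.0: I end up at B
```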

If the control-system error signals were the errors we perceive, then we
would always have to reduce all errors to zero. It's very easy to see the
target in a pursuit tracking task as setting the reference position and the
cursor position as being the controlled variable. Obviously, good tracking
requires minimizing the "error" in cursor position. If you've become
satisfied with this interpretation, it comes as a great shock to see the
controller suddenly start keeping the cursor two inches to the right of the
moving target. I always try to administer this shock when demonstrating
control, as with the rubber band experiment. The point is that the
controlled variable is a spatial _relationship_, and cursor-on-target is
just one of a possible range of relationships that could be maintained. It's
not an error in the control-system sense, it's just a distance. If you
always pick the reference condition of zero distance you have a degenerate
case, because zero error does happen to coincide with zero distance between
target and cursor. But if you pick a different distance like two inches to
the right for the cursor, zero error corresponds to a NON-ZERO distance
between cursor and target. And then you can see that while the spatial
relationship is perceivable, the error signal is not.
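
[Editorial sketch of the tracking example, with hypothetical gains and units. The controlled variable is the cursor-target distance; the reference for that distance can be zero or two inches, and zero error coincides with whatever distance is specified.]

```python
def track(reference_distance, target=5.0, gain=8.0, dt=0.05, steps=200):
    """Control the perceived target-cursor distance, not an 'error'."""
    cursor = 0.0
    for _ in range(steps):
        perceived = cursor - target             # the perceivable relationship
        error = reference_distance - perceived  # NOT perceivable, per PCT
        cursor += gain * error * dt             # output moves the cursor
    return cursor - target

print(round(track(0.0), 2))  # cursor on target: distance -> 0.0
print(round(track(2.0), 2))  # cursor held two inches right: distance -> 2.0
# In both cases the error signal has gone to zero; only in the first does
# zero error happen to coincide with zero distance.
```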

Best,

Bill P.


---------------------------------------------------------------------------

[From Rick Marken (960927.0900)]

Me:

Saying that it [KR] is "information as to error" suggests that error is
something we perceive;

Bruce Abbott (960926.2040 EST) --

I think we're getting into a language problem here.

I think it's quite a bit deeper than that. If the PCT model is correct, then
saying that any perception (including KR) is "information as to error" is
not a language problem; it is a gross misrepresentation of how a control
model actually works. I think it also represents a failure to understand the
deepest insight PCT gives us about the nature of human nature, viz. that
perceptual experiences have no inherent value; they are just representations
of external reality. Different people will seek or avoid the same perception
because people _want_ different perceptions. The value (good, bad, error,
correct) of perception is determined by what people want (references for
perception); not by the perception itself.

If the participant has adopted the experimenter's reference (or if the
experimenter has adopted the participant's reference), and the experimenter
is reporting the deviation of the CV from reference, and the participant
takes corrective action on the basis of that deviation, isn't that the same
as if the participant observed the CV, compared it to the reference, and
took corrective action on the basis of that deviation?

In this case the KR does, indeed, correspond to the "error" in the
participant/controller. It's the same as the situation that exists in a
tracking task where the subject has agreed to control the distance between
target and cursor, keeping that distance at zero. The distance between
target and cursor corresponds fairly closely to the actual error signal in
the subject's brain. You might be content to call the distance between target
and cursor -- the KR in the tracking task -- "information as to error". But,
as Bill Powers (960927.0645 MDT) noted "if you've become satisfied with this
interpretation, it comes as a great shock to see the controller suddenly
start keeping the cursor two inches to the right of the moving target".
Suddenly the subject seems to be ignoring a rather consistent piece of
information about error.

You might say that calling the KR "information as to error" is still
legitimate; after all, you understand PCT so you know that when the cursor is
being kept two inches to the right of the moving target the subject's
reference for the distance between target and cursor has changed. So the
"information as to error" that exists in the perception has changed. Now
there is information as to error only when the cursor is _not_ two inches to
the right of the target.

But I don't like this linguistic sleight of hand because it obfuscates a
beautiful and powerful PCT observation with language. The beautiful and
powerful observation is that the "value" of an experience (like KR) is
determined by the subject; value does not "inhere" in the experience itself.
The tracking experiment shows that the "value" of a particular experience
(the distance between target and cursor) can change; at one time zero
distance between target and cursor is "valued"; at another time a two inch
distance between target and cursor is "valued". This change in value occurs
(according to PCT) not because of any change in the experience itself but
because of a change in the subject; a change in one of the subject's goals
(reference signals). This change in goal turns what was once a valued
perception (zero distance between cursor and target) into one that is actively
avoided.

Conventional psychology has tried to attribute changes in the "value" of
experience to the external environment: it has done this by giving
environmental events imaginary properties: valences, affordances,
discriminativeness, reinforcingness, informativeness, etc. It is very hard
for PCT to make headway when people "take it for granted" that environmental
events actually have many of these properties. When you say that KR
provides "information as to error" you are playing into these misconceptions;
you allow people to cling to the belief that something in the world (KR) has
properties like "informativeness"; that something other than the subjects
themselves determines the "value" of an experience (like the experience
of hearing the words "correct" or "incorrect").

No, wise Buddha would have perceived the error in stating that we do not
perceive error, and thus would have said no such thing!

I think the error is yours -- and I think it is a profound one. (But,
characteristically, it is quite consistent with your desire to see no
conflict between PCT and conventional psychology). It is a profound error
not only in terms of understanding PCT but in terms of real life dealings
with people. If you actually believe that KR is "information as to error"
then you must be quite puzzled and annoyed when you give KR and people don't
treat it as "information as to error". You have given me a ton of
"information as to error" (such as your statement above about the wise
Buddha) and, yet, I have done nothing to correct my ways. Maybe there just
isn't enough information as to error in your KR? Or could it be that we
have different references for the same perception (PCT) -- a perception
that, in itself, is not right or wrong, good or bad, correct or in error?
Naaa. That would mean that individual people (rather than an external
environment or god) determine the value of what they experience. We certainly
can't have THAT now, can we;-)

Best

Rick

[From Bruce Abbott (960928.1810 EST)]

Bruce Abbott (960926.2040 EST)

I think we're getting into a language problem here. If the participant has
adopted the experimenter's reference (or if the experimenter has adopted the
participant's reference), and the experimenter is reporting the deviation of
the CV from reference, and the participant takes corrective action on the
basis of that deviation, isn't that the same as if the participant observed
the CV, compared it to the reference, and took corrective action on the
basis of that deviation?

Bill Powers (960927.0645 MDT) --

A long time ago, and for reasons that are partly neurological and partly
arbitrary, I set the rule for myself that perception is associated ONLY with
afferent pathways in the brain. By that I mean the pathways that begin with
sensory neurons and build higher and higher levels of representation on the
signals from those neurons. A strict interpretation of that rule says that
we do NOT experience reference signals or error signals, or the signals
emitted by any level to lower levels (efferent signals).

Thanks for the explanation, Bill (only partly quoted here). I understand
your reasons for adopting this assumption. But I do have my own perspective
to add.

I think we have to be very careful here not to equate perception with
"experience." Experience implies a conscious perceiver, yet many perceptual
signals apparently are never consciously perceived. Perception itself
results from the analysis of neural input, and typically that input begins
at sensory receptors, although it may arise from other pathways, as in your
suggestion that perception can arise from the activation of stored patterns
called memories. The nervous system is organized such that sensory signals
are (with a few minor exceptions) conducted along afferent pathways from the
sensory receptors toward those spinal and/or brain structures that perform
the analyses; at higher levels in the brain the patterned activity in these
structures may give rise in some unknown way to conscious experience.

Limiting the term "perceptual signal" to cover only those signals conducted
through afferent pathways is probably as good a solution as any when
breaking down the system for purposes of analysis. A perceptual signal
arising from a muscle spindle follows an afferent pathway to the spinal
cord; it remains a perceptual signal under this definition whether or not it
is conducted beyond the immediate, low-level control system of which it is a
part. But so, it seems to me, would an afferent signal arising from the
comparator, whose magnitude represented the difference between reference and
perceptual input signals (otherwise known as "error"). Whether such
perceptual signals exist in the nervous system and participate as inputs to
other levels of the system is, it seems to me, an empirical matter. It is
an assumption of PCT as opposed to a deduction from it. Furthermore, it
matters not a whit whether such signals ultimately become part of
"experience," as the failure to be experienced is no sure criterion for
rejecting a given signal as perceptual.

The entity labeled "error" in a control system is nothing more than a
difference (discrepancy) between two signals; discrepancies can be
perceptual signals. (I would propose, however, that the term "error signal"
be reserved for describing the function of this particular discrepancy
within the control loop, which is to drive action.) What is an error signal
_within_ a control system would only be another perceptual signal running
up the afferent pathways, so far as higher-level systems are concerned. If
such an afferent signal existed and if that signal improved the functioning
of some system further upstream in the brain, then I don't see any reason
why over the course of evolution such a connection could not have been
preserved. Whether this arrangement exists or not is a matter to be
disclosed by future research; its nonexistence is a PCT assumption, not an
established fact, even though that assumption may contribute a certain
elegance to the PCT formulation.

With respect to memory and imagination depending on the same perceptual
centers that analyze sensory input of the same qualitative type as the
remembered or imagined "image," this view is actually inherent (at least to
my mind) in the "doctrine of specific nerve energies" proposed around 100
years ago by Mueller, which states that the quality of a sensation depends
on the area of the brain in which the neural impulses are analyzed and not
upon anything inherent in the impulses themselves; e.g., neural signals
arriving at the auditory cortex produce sounds because they arrive at the
auditory cortex; rerouted to the visual cortex, the same signals would
produce sensations of light. It would be wasteful of brain tissue if the
internally-generated patterns we interpret as dreams or imaginings had to be
produced via different analytic structures than those used for primary
perception. And the evidence that we do use these same structures for both
types of perceptual experience is, I think, overwhelming. For example,
destruction of the cortical region responsible for color analysis eliminates
a person's ability, not only to see in color, but also to dream in color,
and even to imagine what colors "look like."

No, wise Buddha would have perceived the error in stating that we do not
perceive error, and thus would have said no such thing!

Rick Marken (960927.0900) --

I think the error is yours -- and I think it is a profound one. (But,
characteristically, it is quite consistent with your desire to see no
conflict between PCT and conventional psychology).

Rick, your ability to read other people's minds is not nearly as good as you
believe it to be. In the two years I've been subscribed to CSGNET, I've
never once indicated that I have a desire to see "no conflict between PCT
and conventional psychology." But thanks for demonstrating how the Test for
the controlled variable can fail miserably.

Bill Powers tells us that an inability to perceive error is simply a PCT
postulate -- an assumption to be tested rather than a known fact. As it is
quite evident that I (although evidently not you) can perceive discrepancies
or differences between perceptual signals, the question as to why PCT
assumes that "error" cannot be perceived is certainly one worth asking.

It is a profound error
not only in terms of understanding PCT but in terms of real life dealings
with people. If you actually believe that KR is "information as to error"
then you must be quite puzzled and annoyed when you give KR and people don't
treat it as "information as to error".

What I actually said is that _if_ the experimenter and participant adopt the
same reference value, then what the experimenter perceives (and reports as
error) is the same as what the participant perceives (and acts upon) as error.

In your response, you ignore the qualifier and respond as if I had stated
that KR "is" information as to error, at all times and under every
circumstance. This is not my position, and the rest of your attack strikes
not at me but at, of all things, that ubiquitous CSGNET straw man -- the
only person here actually holding that position.

Regards,

Bruce

[From Bill Powers (960928.2036 MDT)]

Bruce Abbott (960928.1810 EST)--

I think we have to be very careful here not to equate perception with
"experience." Experience implies a conscious perceiver, yet many perceptual
signals apparently are never consciously perceived.

This is theoretically true, and I have said as much. I make a distinction
between perception (a signal in an afferent pathway) and awareness
(experiencing that signal). The fact that lower-level systems continue to
control when you are aware only of higher-level processes shows that
perceptual signals must exist outside awareness.

There seem to be perceptual signals of which one is not aware, but of which
one can become aware by a shift of attention. These have some empirical
basis, in that it seems reasonable that the signals were there all the time
(someone speaking to you while you're reading something fascinating), but
they suddenly appear in conscious experience when attention shifts to them.

Perceptual signals of which nobody is ever aware, however, create a dilemma.
Who knows about them? If any process in the brain is _permanently_
inaccessible to awareness, it must be purely hypothetical; there's no way to
verify that such signals could be classed as "perceptions." All we know
about anything exists in the realm of consciousness, at least while we are
attending to it. What is _never_ consciously experienced isn't part of what
we consciously know. Which simply means we don't know about it.

What is an error signal _within_ a control system would only be a another
perceptual signal running up the afferent pathways, so far as higher-level
systems are concerned.

Yes, if you want to propose a model organized like that you're certainly
free to do so. However, you have to make it work. Have you ever tried to
simulate a two-level system in which the information reaching the
higher-level input function is the error signal in the lower system? I have,
and it didn't work as a controller. The problem is that the error signal is
then part of the basis for adjusting the reference signal, which directly
affects the error signal; you get a closed loop that doesn't include the
rest of the lower-level world. What happens is that the reference signal is
adjusted until the error signal matches the higher-level reference, and
what's happening in the lower-level world doesn't get controlled. No matter
what the lower-level perception is, the reference signal gets adjusted until
the error is some particular amount. So either no action results, or the
action is maintained constant no matter what happens to the perception. That
might be an interesting feature for accomplishing something or other, but it
doesn't accomplish control of perception.
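
[Editorial sketch: Bill says he has simulated this arrangement; here, for illustration only, is a toy version of the pathological wiring. The gains and step sizes are hypothetical choices, not his original code. In this sketch the lower-level error settles at a value fixed entirely by the gains, independent of the disturbance, while the lower-level perception drifts without being brought to any particular value.]

```python
def two_level_error_input(disturbance, higher_ref=1.0, steps=500,
                          g_hi=2.0, g_lo=5.0, dt=0.01):
    """Higher level perceives the lower level's ERROR signal (pathological)."""
    lower_ref = 0.0
    env = 0.0
    for _ in range(steps):
        perception = env + disturbance        # lower-level perception
        lower_error = lower_ref - perception
        # Pathological wiring: the higher system "perceives" the lower error...
        higher_error = higher_ref - lower_error
        lower_ref += g_hi * higher_error * dt  # ...and adjusts the reference,
        env += g_lo * lower_error * dt         # closing a loop around itself.
    return lower_error, perception

for d in (0.0, 3.0):
    err, p = two_level_error_input(d)
    # The error ends at the same gain-determined value for any disturbance,
    # while the perception is left uncontrolled (it keeps drifting).
    print(round(err, 3), round(p, 3))
```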

We're really talking about an empirical question here. Do we, in fact,
experience error signals? In the cases we've been talking about, confusion
is caused because the controlled variable is a relationship between two
lower perceptions, so we can verbally label that relationship an "error" if
we want to. This gets even more confusing when we're relying on someone
else's description of what THAT person considers to be an error -- that is,
whatever relationship between two perceptions that person is calling an error.

Let's consider a simpler case. Suppose you simply want to close your fist.
Just before you close it, you can feel and see the configuration of the hand
that exists. But where is the error, when you decide you want it closed?
This is very different from the case where you can see a target just as well
as you can see a cursor, and where you can clearly see the distance between
them. Those are obvious present-time perceptions that seem perfectly clear
and real -- which go away when you close your eyes.

But what are you perceiving when you decide you want your hand closed? All
you actually perceive with your senses is the hand, open. You don't have a
visual image of a closed hand also arising from your senses, such that you
could see the relationship between the closed and the open hand. The only true
perception is of an open hand. You can, with a slight effort, _imagine_ your
hand closed, but that is nothing like perceiving a closed hand with your
senses (at least it's not for me). But you don't need to do that in order to
close your hand: all you need to do is "will" it closed, and it closes. When
I do that, I get no hint of either the reference signal or the error signal.
I know they must be there, theoretically, but I don't experience them.

It's just as clear that you don't experience efferent signals. The signals
that make your muscles get tense are, as far as I can tell, totally
inaccessible to awareness. All you ever experience is the consequence of the
muscles contracting. You have to deliberately imagine the consequence in
order to create anything like experiencing an efferent signal -- and by my
hypothesis, you're then routing the reference signals into the perceptual
functions, so you're still being aware of afferent signals only.

I agree that what is called an error signal in a control system is just a
difference between two signals. But here the two signals are not both
perceptual signals originating at lower levels, and the organization of the
system is such that the difference is always brought as close to zero as
possible. When the difference is between two signals from lower systems, it
is no longer an "error" except in comparison with some reference signal that
specifies how large the difference should be.

Of course that reference level could be zero, leading to all our
difficulties. But it doesn't HAVE to be zero; it can have any value. In the
game of warmer and colder, you as the behaving system can change your mind
and start trying to get the experimenter to keep saying "colder." No matter
what the so-called "error" is, you can redefine the target value of the
error to be any nonzero amount if you want. So this "error" is really just a
controlled perception like any other, and the REAL error signal is still not
being perceived.
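The warmer-and-colder point can be put in a few lines of code. This is my
illustration, not anything from the post: the controlled variable is the
RELATIONSHIP target - cursor, and its reference is the nonzero value 2 --
the visible "discrepancy" is deliberately held at 2, not 0, while the real
error signal inside the loop is driven toward zero:

```python
def control_relationship(ref=2.0, k=0.5, steps=200):
    """Hold the perceived relationship (target - cursor) at ref."""
    cursor = 0.0
    p = 0.0
    for step in range(steps):
        # the target jumps between 10 and 13 every 50 steps
        target = 10.0 if (step // 50) % 2 == 0 else 13.0
        p = target - cursor   # perceived relationship (the "discrepancy")
        e = ref - p           # the REAL error signal, inside the loop
        cursor -= k * e       # act so as to bring p to ref
    return p, cursor

p, cursor = control_relationship()
```

Whatever the target does, the cursor settles 2 units below it: the
so-called "error" between cursor and target is just a controlled
perception with a nonzero reference, and the actual error signal e goes
to zero without ever being perceived.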

Best,

Bill P.

From Greg Williams (921210)

Mark Olson (921208)

Basically, Greg Williams sent me a post which clarified
things for me really well--it's a lot like what I've been saying all along
(but didn't quite know it--funny how that works). Whether it's perceptions
or error that is controlled depends on what LEVEL one is speaking
from. From an individual ECS level, it's perceptions, no doubt--the
math shows that conclusively. But from the perspective of the whole
organism (the organism as an organism, as I said earlier) it's error.

Basically both are equally correct from each perspective. Before I thought
both were correct but one was more correct.

Rick Marken (921209.0900)

So I ask you (and Greg) to explain how "perspective"
(or "level") influences the results of the test for controlled variables. If
the "error" variable can be "not controlled" from one perspective but
"controlled" from another, then I could be coming to the wrong conclusion
when I say "this variable is not controlled" because I happen to have done
the test from the "wrong" perspective. It also would be possible to conclude
that, say, the difference between cursor and target is "controlled" from
one perspective and "not controlled" from another. If this is possible, then
methodologists should know how to tell what perspective they are
doing the test from and how it influences their results.

That will teach me to post privately. Below is what I sent to Mark last
Monday:

From Greg Williams (921207 - direct)

Hi Mark -

I think you're being conned by the math.

The basic PCT loop doesn't control error, but the function of such loops
is "ultimately" to keep intrinsic error from getting out of hand. The loops
control perceptions (not error), seen individually, but the purpose of the
SYSTEM of individual loops, seen from a higher perspective, is to control
error (which is what surviving in order to reproduce is all about). This is
the same argument-of-perspectives which I've brought up several times: the
current-time perspective (emphasized by Bill P.) vs. the historical
perspective emphasized by Skinner. The loops-as-currently-structured control
perceptions AND are themselves modified when necessary to control errors.
"Ultimates" are too verbal for my taste, yet I can appreciate the "ultimate"
nature of error control in "shaping" the control structure, vs. the
"pragmatic" nature ("proximal" nature?) of the moment-to-moment workings of
the loops in the structure, controlling perceptions (if they're able).

So, I think the problem is that Rick leaves out where the basic loop came
from: from the need to control (intrinsic) errors.
Hope this makes at least a little sense,

[end of post to Mark]

I see the function of the entire hierarchy as doing whatever it is able at a
particular time to keep intrinsic errors within a tolerable range (with the
result that the organism keeps living). If, at some time, that becomes
impossible, then the hierarchy changes (reorganization). So at the "systemic"
level, intrinsic errors are being perceived and if any of them get too large
for too long, then actions are taken until they get back within "spec" (or,
failing this, the organism dies) -- that's negative feedback control of the
perception "too much (of this or that intrinsic) error." Still, each loop in
the hierarchy doesn't control a perception of error. So error is controlled by
the reorganization process, but not by the reorganized loops. Those are the
different levels for which The Test shows that different "kinds" of variables
are being controlled (actually, in the former case, error is ALSO a perceptual
variable; it isn't in the latter case).
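A crude sketch of what Greg describes -- my illustration, not the post's
model. Reorganization is treated as negative feedback control of the
perception "intrinsic error is within spec": a loop parameter is varied
blindly, a change is kept when intrinsic error diminishes, and variation
stops once the error is back within tolerance. The "ideal" gain of 1.0 is
an assumed stand-in for "a hierarchy that keeps the organism alive":

```python
import random

def reorganize(spec=0.05, trials=5000, seed=1):
    """Blind variation of a loop parameter, retained when intrinsic
    error shrinks; reorganization halts once error is within spec."""
    random.seed(seed)
    gain = 0.0                          # parameter of the behaving loop
    for _ in range(trials):
        intrinsic_error = abs(1.0 - gain)
        if intrinsic_error <= spec:     # within spec: stop reorganizing
            break
        candidate = gain + random.uniform(-0.2, 0.2)  # blind variation
        if abs(1.0 - candidate) < intrinsic_error:    # error diminished?
            gain = candidate                          # keep the change
    return gain

final_gain = reorganize()
```

Note that reorganization here never controls the behaving loop's moment-to-
moment perception; it only perceives (and acts on) the intrinsic error,
which is Greg's point about the two levels of description.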

Rick, do you think reorganization does NOT have to perceive error? If not, how
does it know when to start and stop?

Best wishes,

Greg

P.S. Oops, I see I slightly goofed above -- should have said "that's negative
feedback control of 'tolerable (this or that intrinsic) error.'" The reference
signal is for acceptably low error.
