Test, coercion

[From Bruce Abbott (980802.1025 EST)]

Bill Powers (980802.1400 MDT) --

This is a totally different picture of behavior from the old one that
presents everything as a choice between alternatives. We should see
behavior as a choice between alternatives only when we can show that there
is actually a conflict between alternatives, so some higher-level process
has to be used to resolve the conflict -- a process that we call, loosely
and uninformatively, "deciding."

The view Bill presents assumes that choice is a deliberative cognitive
process. Alternatively, one can view choice, not as a deliberative
cognitive process, but simply as the selection of one alternative from among
many that could have been selected. This view focuses attention on the
question, why is this alternative selected over that one? There are many
possible answers. The decision may have been made at some other time, and
simply carried out now, according to Plan. The choice may have become
habitual through repetition. The system may be constrained to behave in
certain ways rather than others by a fixed internal organization, its
choices wired in. The choice may have followed a deliberative process in
which the expected costs and benefits were weighed, or may have been made at
the flip of a coin. The organism may be organized to choose, automatically
and without deliberation, on the basis of what, at the moment of choice,
feels best. And so on. Finding the correct answer requires research, but
unless one asks the question, there seems to be nothing that needs to be
answered.

In the example that Bill presents, the decisions have already been made --
take his passenger to the train via rowboat, take this particular boat, use
the oars to propel the boat rather than use the trolling motor, aim for a
particular point on the shore, arrive at the other side by 8:45. Control
processes have been set in motion which, if successful, will bring the boat
to the specified point by the specified time. Whether to pull harder on the
left oar or on the right one, in order to correct for a deviation in the
boat's heading, once needed to be decided, but the actions have become
automatic through practice, the choice of action now effectively wired in.
The behavior just unfolds before us, and it is easy to overlook the
selecting that has been, and is being, done.

Regards,

Bruce

[From Bill Powers (980803.0000 MDT)]

Bruce Abbott (980802.1025 EST)--

The view Bill presents assumes that choice is a deliberative cognitive
process. Alternatively, one can view choice, not as a deliberative
cognitive process, but simply as the selection of one alternative from among
many that could have been selected.

There is an unlimited number that could have been selected. How does the
person making a choice consider them all?

This view focuses attention on the
question, why is this alternative selected over that one?

Why did you choose to write that sentence instead of the Gettysburg
Address? Everything happens instead of something else; the thermometer says
68 degrees instead of 69 -- why did it choose that alternative? The
raindrop, when it condenses, drops down -- why didn't it choose to drop up
or sideways?

Before you can speak of choosing between alternatives, you have to have
some way of defining alternatives. One way is to identify a conflict; the
system wants two things that can't be achieved at the same time. That is
what makes them "alternatives." Alternatives are mutually exclusive. If
they were not you would "choose" them both. And you, the observer, can't
know what alternatives the observed system is considering, if any. You can,
however, make up alternatives, just to give credence to the idea that
everything is a choice.

Choices between alternatives do sometimes occur. When they do, the word
"choice" (or "selection") is simply an uninformative allusion to the fact
that some higher-level system was able to find a basis for eliminating the
conflict, so that only one non-conflicting goal at a time is in effect.
This could occur at any higher level; conflicts do not exist only at
cognitive levels.

All I am doing here is applying PCT in an elementary, straightforward
manner. If you wanted to, you could be doing it instead of me. But obviously
you don't want to, because (I am guessing) that might imply that the
concept of choice can be replaced by the PCT theory of conflict and hierarchy.

Rick, I'm not commenting on your perfectly good comments because I really
want to stay out of these stupid interchanges. I don't know what's going
on; maybe somebody isn't taking his medications.

Best,

Bill P.

[From Bruce Gregory (980803.1148 EDT)]

Bill Powers (980803.0000 MDT)

Choices between alternatives do sometimes occur. When they do, the word
"choice" (or "selection") is simply an uninformative allusion to the fact
that some higher-level system was able to find a basis for eliminating the
conflict, so that only one non-conflicting goal at a time is in effect.
This could occur at any higher level; conflicts do not exist only at
cognitive levels.

Nice clear statement. PCT is very good at identifying alternatives, but must
rely on a _deus ex machina_ ("some higher-level system was able to find a
basis for eliminating the conflict") in order to model what we
observe--choices being made without overt struggles. If one has no trouble
accepting such devices, PCT is certainly the simplest model.

Bruce Gregory

[From Bruce Gregory (980803.1905 EDT)]

Bill Powers (980803.0000 MDT)

Choices between alternatives do sometimes occur. When they do, the word
"choice" (or "selection") is simply an uninformative allusion to the fact
that some higher-level system was able to find a basis for eliminating the
conflict, so that only one non-conflicting goal at a time is in effect.
This could occur at any higher level; conflicts do not exist only at
cognitive levels.

It seems to me that this appeal to a higher level system simply moves the
"uninformative allusion" to "choice" or "selection" one or more levels up in
the hierarchy. Aren't you saying, in effect, that the higher level system
"chooses" when you say that it "was able to find a basis for eliminating the
conflict"? Sounds to me like a choice is being made. How does this higher
level system work? If we knew this we would know how to translate choice
into PCT.

Bruce Gregory

[From Bill Powers (980803.1750 MDT)]

Bruce Gregory (980803.1905 EDT)--

It seems to me that this appeal to a higher level system simply moves the
"uninformative allusion" to "choice" or "selection" one or more levels up in
the hierarchy. Aren't you saying, in effect, that the higher level system
"chooses" when you say that it "was able to find a basis for eliminating the
conflict"? Sounds to me like a choice is being made. How does this higher
level system work? If we knew this we would know how to translate choice
into PCT.

Tell me how choosing works and I'll tell you how the higher-level systems
work. The problem with terms like choosing and selecting is that they don't
tell us anything. Prior to the choice or selection, we are considering
alternatives. After choice or selection, we are pursuing just one of the
alternatives. What happened between these two stages of the process? To say
we "chose" or "selected" something just glosses over our ignorance of how
this process actually works. The words have the sound of explaining
something, but they explain nothing.

When we try to find an explanation in PCT, we're not claiming that it's
right. We're just describing what we might expect to find if PCT as it
stands is correct. The explanations, of course, involve claims that have to
be tested experimentally. Are you surprised that the experiments haven't
been done yet by our vast army of well-funded researchers?

Best,

Bill P.

[From Bruce Gregory (980803.2100 EDT)]

Bill Powers (980803.1750 MDT)

Tell me how choosing works and I'll tell you how the higher-level systems
work. The problem with terms like choosing and selecting is that they don't
tell us anything. Prior to the choice or selection, we are considering
alternatives. After choice or selection, we are pursuing just one of the
alternatives. What happened between these two stages of the process? To say
we "chose" or "selected" something just glosses over our ignorance of how
this process actually works. The words have the sound of explaining
something, but they explain nothing.

I have proposed a mechanism for choosing. It involves projecting in
imagination mode the consequence of each alternative. The system "chooses"
the alternative with the imagined least total system error. This is not
HPCT, but it does suggest a procedure.

Bruce Gregory

i.kurtzer (980803.2300)
[From Bruce Gregory (980803.2100 EDT)]

I have proposed a mechanism for choosing. It involves projecting in
imagination mode the consequence of each alternative. The system "chooses"
the alternative with the imagined least total system error. This is not
HPCT, but it does suggest a procedure.

it sounds reasonable enough to try to test. why not give this to some
students?

i.

From [ Marc Abrams (980603.2346 ) ]

[From Rick Marken (980803.2040)]

Bruce Gregory (980803.2100 EDT)--

I have proposed a mechanism for choosing. It involves projecting
in imagination mode the consequence of each alternative. The
system "chooses" the alternative with the imagined least total
system error.

How does the system do this "choosing"; what is the mechanism?

One of the non-intuitive things that has been learned in modeling
"systems" is that "sub-systems" _do not_ "optimize" for the "greater
good" of the larger system. I think PCT explains _why_ this happens.

Bruce, I would ask, in addition to Rick's question above, _what_ is
"total" system error?

This is not HPCT, but it does suggest a procedure.

Maybe not (though this "imagination mode" and "system error"
stuff sounds very familiar). But I think you'll find that your
model _is_ HPCT once you start thinking about the mechanism
needed to make the choice.

I agree with Rick. (Not that any vote is being taken :-)) This thread
has been great. Not unlike coercion, some _very_ non-intuitive things
begin to happen with your notions about "decisions" when you start to
consider them as conflicts.

For all you PCT "application" buffs here is another _significant_
piece of knowledge that might prove to be useful in practice. Maybe
merely "deciding" between alternatives is _not_ the real issue. There
might be a better way to "resolve" the problem if you could identify
the conflict(s) involved. Just some thoughts.

Marc

[From Rick Marken (980803.2040)]

Bruce Gregory (980803.2100 EDT)--

I have proposed a mechanism for choosing. It involves projecting
in imagination mode the consequence of each alternative. The
system "chooses" the alternative with the imagined least total
system error.

How does the system do this "choosing"; what is the mechanism?

This is not HPCT, but it does suggest a procedure.

Maybe not (though this "imagination mode" and "system error"
stuff sounds very familiar). But I think you'll find that your
model _is_ HPCT once you start thinking about the mechanism
needed to make the choice.

Best

Rick


---
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Gregory (980804.1117 EDT)]

Rick Marken (980803.2040)

Bruce Gregory (980803.2100 EDT)--

> I have proposed a mechanism for choosing. It involves projecting
> in imagination mode the consequence of each alternative. The
> system "chooses" the alternative with the imagined least total
> system error.

How does the system do this "choosing"; what is the mechanism?

My guess is that the process involves assigning an error to the imagined
outcome of each alternative. (This might be one of the functions of the
emotions that often go with imagined situations.) Picking the least error is
then a simple program--compare the outcomes two at a time, discarding the
outcome with greater error. How "deep" you investigate an imagined outcome
is a measure of "caution". "Impulsive" choices don't try to look very far
ahead.
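The selection step described here can be sketched as a short program. This is only an illustration of the proposed algorithm, not part of any PCT model; the alternatives and error values are invented, and the error function stands in for the felt "goodness" or "badness" of each imagined outcome:

```python
# Sketch of the proposed choosing procedure: compare imagined outcomes
# two at a time, discarding the outcome with greater error, until one
# alternative remains.

def choose(alternatives, imagined_error):
    """Return the alternative whose imagined outcome has the least error.

    imagined_error(alt) -> float, a hypothetical stand-in for the felt
    goodness/badness of the outcome imagined for `alt`.
    """
    best = alternatives[0]
    for alt in alternatives[1:]:
        # Pairwise comparison: keep whichever outcome has less error.
        if imagined_error(alt) < imagined_error(best):
            best = alt
    return best

# Invented errors for three imagined outcomes:
errors = {"stay home": 0.7, "go out": 0.2, "work late": 0.5}
print(choose(list(errors), errors.get))  # prints: go out
```

How "deep" each outcome is imagined before its error is assigned is left outside this sketch; that lookahead depth would be the "caution" parameter.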

> This is not HPCT, but it does suggest a procedure.

Maybe not (though this "imagination mode" and "system error"
stuff sounds very familiar). But I think you'll find that your
model _is_ HPCT once you start thinking about the mechanism
needed to make the choice.

Well?

Bruce Gregory

From [Bruce Gregory (980804.1126 EDT)]

Marc Abrams (980603.2346 )

Bruce, I would ask in addition to Ricks question above, _what_ is
"total" system error ?

I should have said total imagined system error. Suppose you contemplate
eating a chocolate glazed donut. If you look ahead to the next few minutes,
you imagine yourself enjoying the donut immensely. Not much error here.
However, if you look further ahead, you may see yourself having to exercise
for a few hours to burn off the calories you would ingest. You don't like to
exercise. If the error associated with added calories is greater than the
error associated with _not_ eating the donut, you skip the donut. Otherwise
you eat the donut. Impetuous eaters don't look very far ahead. Worry warts
do.
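The donut decision can be put in code to show how the same error comparison flips with lookahead. The per-step error numbers below are invented purely for illustration; only the shape of the argument comes from the example:

```python
# Imagined error contributed by each future step of each alternative.
# Early steps: eating the donut feels good (low error), skipping it
# feels bad. Later steps: hours of exercise loom for the eater.
eat_donut = [0.1, 0.2, 0.9, 0.9]
skip_donut = [0.6, 0.2, 0.1, 0.1]

def total_imagined_error(step_errors, horizon):
    """Sum imagined error over the first `horizon` future steps."""
    return sum(step_errors[:horizon])

def decide(horizon):
    """Eat the donut only if its total imagined error is smaller."""
    if total_imagined_error(eat_donut, horizon) < \
            total_imagined_error(skip_donut, horizon):
        return "eat"
    return "skip"

print(decide(1))  # impetuous eater, short horizon: eat
print(decide(4))  # worry wart, long horizon: skip
```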

Bruce Gregory

[From Bill Powers (980804.0900 MDT)]

Bruce Gregory (980803.2100 EDT)--

Tell me how choosing works and I'll tell you how the higher-level systems
work.

I have proposed a mechanism for choosing. It involves projecting in
imagination mode the consequence of each alternative. The system "chooses"
the alternative with the imagined least total system error. This is not
HPCT, but it does suggest a procedure.

But I thought you said that choosing couldn't be explained by the HPCT
model. The "imagination mode" is part of that model. Are you proposing a
different model of imagination, and of converting an imagined perception
into actions, that doesn't involve control?

Your worst problem is that your proposal still lacks any model of choosing.
Your proposal requires that some system be able to imagine consequences of
actions, calculate the predicted total system error (are you proposing some
process in the brain for detecting total system error and doing this
calculation?), and then to do something that "selects" or "chooses" one of
the imagined actions, switching from the imagination mode to the real
control mode (or to behavior, if you're proposing something other than
control). But "choosing" is the very process you're trying to model, and
when you get to the actual choosing part, all you've done is to say that it
happens. What is it that the system does that results in one or the other
of the alternatives being put into operation?

Parenthetically, you haven't said what you mean by an "alternative." Are
you speaking of alternative actions? If so, I agree that this is not HPCT,
because generally speaking, planning actions doesn't work. All you can
really plan are outcomes, and to make specific outcomes occur in varying
environments, you need to be able to vary your actions according to current
circumstances at the time the plan is to be carried out. So you need, in
short, a control system. If your proposal is at all workable, it
necessarily involves control systems.


--------------------
Bruce, you've evidently been having a rough time with PCT. At one point,
you were an enthusiastic supporter, getting all sorts of insights about
things as you got further into the subject. Then something happened which
apparently gave you a huge shock and caused you to apply the brakes. My
guess is that some subject came up in which the PCT gurus pointed out an
implication of the theory that was in direct contradiction to something you
believed and refused to change your mind about. This is a common experience
among people learning PCT, and it's a crisis point. This is a point where
you have to decide whether to accept the logical arguments stemming from
the PCT model, or maintain what you've always believed that is in conflict
with PCT.

In this sort of crisis, there are only two real solutions. One is to show
what is wrong with the logic of PCT and the experimental evidence that
seems to support it. That of course doesn't prove that your older idea is
right, but it does say that PCT doesn't deserve preference. The other
solution, if you can't find the flaw in PCT, is to look for the flaw in the
reasoning or evidence that led to your older belief.

If neither of these things occurs, all you can do is fall back on faith.
You can only say that the PCT proposal is absurd, or self-evidently wrong,
or that your belief is obviously and self-evidently right. And of course in
a scientific venue, when you do such things you know you're in the wrong,
which naturally leads to defensiveness or attack, and high emotions, and
unfortunate communications.

It's quite likely that we PCT gurus have said things that are wrong and
can't be supported even as deductions from theory. Nobody says what he
means all the time, and nobody means what is right all the time. When
that's true, you ought to be able to show what's wrong, or at least why
it's unsupported by theory or evidence. But if you're going to hold PCTers
to that standard, you should be willing to submit to it yourself. For
example, challenging some statement by a PCT guru because it contradicts a
religious conviction is just as valid in a scientific discourse as a
religious person's calling some scientific conclusion into doubt. The same
goes for common-sense ideas and any other ideas a person might have,
stemming from any background whatsoever.

A disbelieving attitude and open-ended demands to "prove it" are unhelpful
-- all they do is say that the disbeliever has doubts, and they do nothing
to show whether this is because of flaws in the doubted proposition or
slowness to understand on the part of the disbeliever. The challenger has
an obligation to be specific, not only to narrow the argument but to
demonstrate that the challenger has taken the time to understand what he is
objecting to. Any ignorant person can say "Oh, yeah? Prove it!" But you
have to take seriously a critic who says "There is a mistake in your
equation 6."

I know. It's happened to me.

Best,

Bill P.

[From Bruce Gregory (980804.1435 EDT)]

Bill Powers (980804.0900 MDT)

Bruce Gregory (980803.2100 EDT)--

>> Tell me how choosing works and I'll tell you how the higher-level
>> systems work.

>I have proposed a mechanism for choosing. It involves projecting in
>imagination mode the consequence of each alternative. The system "chooses"
>the alternative with the imagined least total system error. This is not
>HPCT, but it does suggest a procedure.

But I thought you said that choosing couldn't be explained by the HPCT
model. The "imagination mode" is part of that model. Are you proposing a
different model of imagination, and of converting an imagined perception
into actions, that doesn't involve control?

No. I only made that statement because error is not accessible to the
control elements in HPCT as I understand it. That is, error is not normally
considered to be a perception.

Your worst problem is that your proposal still lacks any model of choosing.
Your proposal requires that some system be able to imagine consequences of
actions, calculate the predicted total system error (are you proposing some
process in the brain for detecting total system error and doing this
calculation?), and then to do something that "selects" or "chooses" one of
the imagined actions, switching from the imagination mode to the real
control mode (or to behavior, if you're proposing something other than
control). But "choosing" is the very process you're trying to model, and
when you get to the actual choosing part, all you've done is to say that it
happens. What is it that the system does that results in one or the other
of the alternatives being put into operation?

I tried to explain what I had in mind in my recent posts to Rick and Marc. I
don't think the choosing is a problem because it is done using a simple
algorithm. The "error" I am thinking of is the feeling of "goodness" or
"badness" associated with each imagined outcome. I may still be short of a
mechanism, but the approach appears viable to me.

Parenthetically, you haven't said what you mean by an "alternative." Are
you speaking of alternative actions? If so, I agree that this is not HPCT,
because generally speaking, planning actions doesn't work. All you can
really plan are outcomes, and to make specific outcomes occur in varying
environments, you need to be able to vary your actions according to current
circumstances at the time the plan is to be carried out. So you need, in
short, a control system. If your proposal is at all workable, it
necessarily involves control systems.

I agree completely. The alternatives are defined in terms of outcomes, not
actions. I also agree that once overt action is called for, a control system
model is essential.

--------------------
Bruce, you've evidently been having a rough time with PCT. At one point,
you were an enthusiastic supporter, getting all sorts of insights about
things as you got further into the subject. Then something happened which
apparently gave you a huge shock and caused you to apply the brakes. My
guess is that some subject came up in which the PCT gurus pointed out an
implication of the theory that was in direct contradiction to something you
believed and refused to change your mind about. This is a common experience
among people learning PCT, and it's a crisis point. This is a point where
you have to decide whether to accept the logical arguments stemming from
the PCT model, or maintain what you've always believed that is in conflict
with PCT.

I was very taken aback by the exchanges on coercion, because they seemed to
me to miss the point of RTP, which is to provide students with tools that
allow them to be _more_ effective. Granted, some students might not be
interested in acquiring these tools, but everything I read about RTP
indicated that such students were in a definite minority. Because the
emphasis came down on a feature of the process that was most easily modeled,
coercion, the larger picture of the teacher's Plan (again defined in terms
of outcomes, not actions) completely disappeared. I have already expressed
my surprise at the statement that a reward is anything that reduces error.

In this sort of crisis, there are only two real solutions. One is to show
what is wrong with the logic of PCT and the experimental evidence that
seems to support it. That of course doesn't prove that your older idea is
right, but it does say that PCT doesn't deserve preference. The other
solution, if you can't find the flaw in PCT, is to look for the flaw in the
reasoning or evidence that led to your older belief.

I am not aware of any older belief that I hold that conflicts with PCT, but
there may be some.

I have no fundamental differences with PCT. It has no real competitor as a
model of how living systems work. I sometimes think it is applied a little
too cavalierly. I have some differences with you about the nature of
intrinsic reference levels, because I think our primate heritage strongly
suggests that we have "social" needs. Your picture of human nature reminds
me of Hobbes' or of the most extreme free-market devotees. (I read an
"economic" analysis stating that marriage was no different from
prostitution except that the contracts cover different time periods.)
Again, this stems from your analysis of RTP and involves the application,
not the underlying model. As I wrote you before, the only problem I came
away from re-reading B:CP with was that of the persistence of large error
signals whenever long-range goals are involved.

If neither of these things occurs, all you can do is fall back on faith.
You can only say that the PCT proposal is absurd, or self-evidently wrong,
or that your belief is obviously and self-evidently right. And of course in
a scientific venue, when you do such things you know you're in the wrong,
which naturally leads to defensiveness or attack, and high emotions, and
unfortunate communications.

Believe me, I am mortified by my defensiveness and my "unfortunate
communications". Dag and Isaac can testify to the fact that I can appear to
be perfectly rational in many circumstances!

It's quite likely that we PCT gurus have said things that are wrong and
can't be supported even as deductions from theory. Nobody says what he
means all the time, and nobody means what is right all the time. When
that's true, you ought to be able to show what's wrong, or at least why
it's unsupported by theory or evidence. But if you're going to hold PCTers
to that standard, you should be willing to submit to it yourself.

I hope I am.

Thank you for taking the time to allow me to think about these questions.

Bruce Gregory

From [ Marc Abrams (980604.1605) ]

From [Bruce Gregory (980804.1126 EDT)]

I should have said total imagined system error.

OK, my question still holds. What is the _mechanism_ that allows
"communication" of this kind?

>you imagine yourself enjoying the donut immensely. Not much error here.
>However, if you look further ahead, you may see yourself having to
>exercise for a few hours to burn off the calories you would ingest.
>You don't like to exercise. If the error associated with added
>calories is greater than the error associated with _not_ eating the
>donut, you skip the donut. Otherwise you eat the donut. Impetuous
>eaters don't look very far ahead. Worry warts do.

I think there is _something_ at work here that is a _great_ deal more
complex (or simple :-)) than stated. Things are not as they seem.
Diets would never be broken, people would be able to quit smoking
immediately, etc., etc. The question is _what_ is the mechanism(s)
involved in being "impetuous" and/or a "worry wart". No question the
phenomena exist as _observables_. How are they translated into a PCT or
alternative model?

Marc

[From Bruce Gregory (980804.1635 EDT)]

Rick Marken (980804.1320)

Bruce Gregory (980804.1435 EDT)

> I am not aware of any older belief that I hold that conflicts
> with PCT, but there may be some.

Just before this you had said:

> I was very taken aback by the exchanges on coercion, because
> they seemed to me to miss the point of RTP which is to provide
> students with tools that allow them to be _more_ effective.

This looks to me like one older belief that _might_ conflict with
PCT. What if a PCT analysis led to a different conclusion? If that
conclusion would take you aback or seem to miss the point then
maybe you're defending a belief that conflicts with PCT. As Bill
said, that doesn't mean that PCT is right. When you get into such
conflicts you can either "show what is wrong with the logic of PCT
and the experimental evidence that seems to support it" or, "if
you can't find the flaw in PCT, [then] look for the flaw in the
reasoning or evidence that led to your older belief."

That's fair enough. My objection to the PCT analysis was, and is, that it
focussed exclusively on the first step in a Plan and ignored the rest. As I
said in my life guard example, a PCT analysis of the first stage of a rescue
would lead us to believe that we were observing coercion of the sort we
might encounter in a rape. A more complete analysis, however, would lead to
a very different understanding of what was happening and a different
appreciation of the role played by coercion. The life guard overwhelms the
drowning person's efforts to exercise control, not because the life guard is
indifferent to them, but because the life guard wants to save the person's
life. Much as in the example of the parent who snatches the child from the
path of the oncoming car. Motives _do_ matter. In fact, if we performed the
Test, we would find that the life guard only wants to control the person in
the act of drowning and does not intervene unless this is happening. The PCT
analysis of RTP implied, or at least suggested, that the classroom teacher's
only motive was to control the behavior of the students in the classroom.
Such a teacher would be a disaster in any program.

I'd love to see a PCT analysis of the RTP program. I'm hoping Tom will
provide one. I'd then be in a much better position to decide where any flaws
might lie--in my beliefs or in the model.

Bruce Gregory

From [Bruce Gregory (980804.1640 EDT)]

Marc Abrams (980604.1605) ]

>From [Bruce Gregory (980804.1126 EDT)]

>I should have said total imagined system error.

OK, my question still holds. What is the _mechanism_ that allows
"communication" of this kind?

The imagined error signal would have to be a perceptual input to a
comparator.

I think there is _something_ at work here that is a _great_ deal more
complex (or simple :-)) than stated. Things are not as they seem.
Diets would never be broken, people would be able to quit smoking
immediately, etc., etc. The question is _what_ is the mechanism(s)
involved in being "impetuous" and/or a "worry wart". No question the
phenomena exist as _observables_. How are they translated into a PCT or
alternative model?

I tried to describe the mechanism, but obviously you are not satisfied. The
mechanism depends on how much further you look into the future and the
magnitude of the imagined error.

Bruce Gregory

[From Rick Marken (980804.1320)]

Bruce Gregory (980804.1435 EDT)

I am not aware of any older belief that I hold that conflicts
with PCT, but there may be some.

Just before this you had said:

I was very taken aback by the exchanges on coercion, because
they seemed to me to miss the point of RTP which is to provide
students with tools that allow them to be _more_ effective.

This looks to me like one older belief that _might_ conflict with
PCT. What if a PCT analysis led to a different conclusion? If that
conclusion would take you aback or seem to miss the point then
maybe you're defending a belief that conflicts with PCT. As Bill
said, that doesn't mean that PCT is right. When you get into such
conflicts you can either "show what is wrong with the logic of PCT
and the experimental evidence that seems to support it" or, "if
you can't find the flaw in PCT, [then] look for the flaw in the
reasoning or evidence that led to your older belief."

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

From [ Marc Abrams (980804.1804) ]

From [Bruce Gregory (980804.1640 EDT)]

The imagined error signal would have to be a perceptual input to a
comparator.

OK, when you talk of "total" system error I get the impression that
more than _one_ control loop is involved. Is this the case? I do not
believe the example you gave represents _one_ CV. To me, "total" system
error would or could involve multiple CVs.

I tried to describe the mechanism, but obviously you are not satisfied. The
mechanism depends on how much further you look into the future and the
magnitude of the imagined error.

Bruce, I am just trying to understand. I understand what you are
saying. I am having a difficult time _translating_ it into a model. I
am _not_ trying to be a PIA. I don't disagree with you about the
phenomenon. Just trying to understand how to _explain_ how it works.
:-)

Marc

[From Bill Powers (980804.1955 MDT)]

Bruce Gregory (980804.1435 EDT)--

No. I only made that statement because error is not accessible to the
control elements in HPCT as I understand it. That is, error is not normally
considered to be a perception.

That's right, normally. However, it is possible to perceive reference
signals via the imagination mode, and it is possible to deduce the
existence of error from the perception of effects of lower-level actions.
If you're driving and notice that you're holding the steering wheel far to
one side, you can deduce that there is enough crosswind to cause a fairly
large position error. Your very effort, which you can feel, to counteract
the crosswind is a sign that there is significant error.
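
Bill's crosswind example can be sketched as a minimal control loop. This is
an illustrative toy (the function name `run_loop`, the gain, and all
numbers are my assumptions, not part of HPCT): a proportional controller
holding position against a constant disturbance settles with a sustained
nonzero output, so feeling that sustained effort lets one infer the
residual error without sensing it directly.

```python
# Minimal sketch (assumed names and gains): a proportional control
# loop holding lateral position against a constant crosswind.

def run_loop(disturbance, gain=10.0, steps=200, dt=0.1):
    """Simulate position control; return final (position, output)."""
    reference = 0.0   # want to stay centered
    position = 0.0
    output = 0.0      # steering effort -- the thing you can feel
    for _ in range(steps):
        error = reference - position   # not directly perceived
        output = gain * error          # proportional action
        # Position responds to steering effort plus the crosswind.
        position += (output + disturbance) * dt
    return position, output

pos, effort = run_loop(disturbance=5.0)
# At steady state the output nearly cancels the disturbance, and the
# residual error can be deduced from the felt effort: error = output/gain.
```

The point of the sketch is only that a large sustained output implies a
proportionally large error, which is the inference Bill describes the
driver making.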

Your worst problem is that your proposal still lacks any model of
choosing.

I tried to explain what I had in mind in my recent posts to Rick and Marc. I
don't think the choosing is a problem because it is done using a simple
algorithm. The "error" I am thinking of is the feeling of "goodness" or
"badness" associated with each imagined outcome. I may still be short of a
mechanism, but the approach appears viable to me.

It is not viable because you haven't said anything about the process itself
-- only about its outcome. It's valid to propose that some kind of outcome
is achieved somehow. But that's merely defining what a model has to
explain. Remember your claim is that HPCT can't explain choosing. So it's
up to you to define the process of choosing clearly enough so we can see if
HPCT does or does not already contain provisions for that kind of process.
If you can't do that, I would be justified in assuming that you're just
pissed off at PCTers, and are assuming that anything you can't explain,
they can't explain. That's not how to be taken seriously.

Parenthetically, you haven't said what you mean by an "alternative."

The alternatives are defined in terms of outcomes, not
actions. I also agree that once overt action is called for, a control system
model is essential.

Perhaps what's bothering you is that PCT contains no specific model of how
we do things like reasoning, thinking, calculating, talking, and so on. If
that's your problem, you're right. How those higher levels work is not well
worked out. We can see in rough outline how control-like processes could
take place at the symbol-handling, rule-driven level, but matching those
processes to performance is, right now, beyond us. I can add that it is
beyond everyone else, too.


--------------------

Bruce, you've evidently been having a rough time with PCT. ...

I was very taken aback by the exchanges on coercion, because they seemed to
me to miss the point of RTP which is to provide students with tools that
allow them to be _more_ effective.

This has precisely nothing to do with whether they are being coerced in
some way or not. RTP does not necessarily achieve all its goals, and some
degree of coercion does not necessarily prevent students from becoming more
effective. Could they become still more effective without the coercive
measures? I don't know -- possibly, possibly not. If they didn't stay in
school, obviously they could not benefit from RTP. So is coercing them to
the degree of forcing them by law to attend school necessarily bad for
them, if it exposes them to the philosophy of RTP and the people who run
it? Is forcing them to go to the RTC bad for them, if they get good
counselling in the RTC and learn to solve their social problems? Coercion
does not automatically have bad effects, nor does a small number of bad
aspects in any program render a much larger number of good aspects null and
void. Yet those are the things that everyone who doesn't want to see
coercion in RTP is saying.

The argument seems to be, "If there's any coercion in RTP, the whole
program is a flop and can't work. However, the program does work.
Therefore, there can't be any coercion in RTP." Of course it is the major
premise that is wrong. There can be some coercion in RTP and the program
can still work just fine. I really can't see why noting that there is some
coercion in sending a student to the RTC whether he wants to go or not
sends so many people into a hysterical frenzy. Of course it's coercion. The
student has only one choice, and force will be brought to bear if he
refuses to make it. He is being guided by invisible hands just as surely as
if they had hold of him. But that's not all there is to RTP.

Granted some students might not be
interested in acquiring these tools, but everything I read about RTP
indicated that such students were in a definite minority.

All that is completely irrelevant to the question of whether there are
provisions built into RTP to apply force if necessary to achieve compliance
with the rules. Force may, in fact, not be used most of the time, but
everyone knows it is there to be used when necessary. This naturally raises
some questions (it doesn't supply the answers, it just raises the
questions): is a given student behaving according to the rules because he or
she agrees that they are good rules and should be followed, or because of
knowing he or she will be overpowered and forced to obey them if he or she
refuses? Clearly these questions can't be answered just by observing that a
student is obeying the rules, because he or she would be seen to obey the
rules in either case. If you judge that the student is really following the
rules voluntarily, you must be using some other evidence beside the fact
that the student is following the rules.

I can't see anything wrong with that reasoning, but for some reason it
seems to send some people into a tizzy. I wish someone would explain why.

Because the
emphasis came down on a feature of the process that was most easily modeled,
coercion, the larger picture of the teacher's Plan (again defined in terms
of outcomes, not actions) completely disappeared.

No, it didn't. The overall plan was and is still acknowledged, and its
positive effects were and are accepted as outweighing any negative effects
from the coercion. The only reason we focussed so completely on the
coercion was that people were arguing loudly that no coercion was taking
place -- meaning that no physical force was actually being applied most of
the time. I found this such a ridiculous and dangerous claim that any other
subject fled from my mind. If that claim were true, then the Inquisition
could indeed claim that its victims had confessed of their own free will,
since they went before the judges after the torture and confessed when no
force or pain was actually being applied to them.

It takes such an extreme example to get the point across. But -- and now I
can anticipate this since it actually happened -- there would be an
immediate outcry from those who read too fast that RTP is being compared
with the Inquisition, and I'm saying the children are being tortured.

A milder example is a sign in a hospital loading zone that says "No
Parking: $50 fine." Now when a driver chooses to park elsewhere, is it
because the driver agrees that that is not a proper place to park, or
simply because the driver doesn't want to be out $50? In other words, is
the driver obeying the rule out of voluntary good citizenship and respect
for the needs of the hospital, or some other such motive that would apply
whether there was a fine or not, or just because parking there might cost a
lot of money? When a driver passes up that potential parking place, you
can't tell which is the reason. The only way to tell is to experiment:
change the sign to read: "Please respect this loading zone and don't park
here." When there is no mention of any penalty, and a driver looking for a
place to park passes up the loading zone, we would be more justified in
thinking, "Good for you, buddy."

I have already expressed
my surprise at the statement that a reward is anything that reduces error.

That's shorthand for an interpersonal relationship in which one person does
something to reduce an error in another person if and only if the other
person does something the first person wants done (or doesn't do something
the first person doesn't want done). Reward is basically a tool for
controlling people. By extension, the term has come to be used for any
situation in which a person gets what is wanted, as if the nonliving
environment were a person rewarding people for behaving properly, and thus
controlling their behavior. I say "behavior" in this context because I am
speaking of the popular concepts, in which action and consequences are
usually mixed together.

I have some differences with you about the nature of
intrinsic reference levels because I think our primate heritage strongly
suggests that we have "social" needs.

So do I. We have all kinds of needs. What's the problem? I don't deal much
with the more advanced types, because it's so easy to start talking
nonsense, but even in B:CP I noted that not all intrinsic reference levels
had to do with vegetative functions. I mentioned as an example, I think,
beauty.

As I wrote you before, the only problem I came away from
re-reading B:CP with was that of the persistence of large error signals
whenever long range goals are involved.

What did you think of my answer to this problem? It's far from new to me.

Believe me, I am mortified by my defensiveness and my "unfortunate
communications". Dag and Isaac can testify to the fact that I can appear to
be perfectly rational in many circumstances!

People say a lot of things they don't mean or even understand, both good
and bad. My response to this sort of disturbance is completely
proportional. While the error is present, I act against it. When it goes
away, I forget it.

Best,

Bill P.

[From Bruce Gregory (980805.1015 EDT)]

Bill Powers (980804.1955 MDT)--

Bruce Gregory (980804.1435 EDT)--

>No. I only made that statement because error is not accessible to the
>control elements in HPCT as I understand it. That is, error is not
>normally considered to be a perception.

That's right, normally. However, it is possible to perceive reference
signals via the imagination mode, and it is possible to deduce the
existence of error from the perception of effects of lower-level actions.
If you're driving and notice that you're holding the steering wheel far to
one side, you can deduce that there is enough crosswind to cause a fairly
large position error. Your very effort, which you can feel, to counteract
the crosswind is a sign that there is significant error.

Thanks. I'll keep that in mind.

It is not viable because you haven't said anything about the process itself
-- only about its outcome. It's valid to propose that some kind of outcome
is achieved somehow. But that's merely defining what a model has to
explain. Remember your claim is that HPCT can't explain choosing.

No, my claim was that HPCT _doesn't_ explain choosing. (Indeed HPCT doesn't
seem to acknowledge choice as something that needs explaining.) I have no
idea what HPCT can or cannot explain. I only know what I have seen explained
in terms of HPCT.

So it's up to you to define the process of choosing clearly enough so we
can see if HPCT does or does not already contain provisions for that kind
of process.

That's fair enough. I am trying to decide where to spend my next holiday. I
have narrowed down the destination to A and B. I maintain that I imagine
myself going to A and going to B. I imagine my experiences in each location.
It frequently rains at A but I like the scenery and the people. B is sure to
be sunny, but I have never been there before and do not speak the language.
Each of these outcomes has some error associated with it (I don't like rain;
I like to be understood when I speak). I maintain that I will "choose"
whichever alternative presents the smallest imagined error. This, I claim,
is a deterministic process. Is it a control process? I don't know.
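
Bruce's proposed rule -- imagine each outcome, total the error it produces
against one's reference conditions, pick the alternative with the smallest
imagined error -- can be written down deterministically. A minimal sketch,
in which the destinations, features, weights, and the helper
`imagined_error` are all invented for illustration:

```python
# Sketch of the choose-minimum-imagined-error rule. All destination
# features and reference levels below are invented, on arbitrary
# 0-1 scales; only the selection rule itself is the point.

alternatives = {
    "A": {"rain": 0.8, "scenery": 0.9, "understood": 1.0},
    "B": {"rain": 0.0, "scenery": 0.5, "understood": 0.2},
}

# Reference levels: no rain, good scenery, being understood.
references = {"rain": 0.0, "scenery": 1.0, "understood": 1.0}

def imagined_error(outcome):
    """Total absolute error between an imagined outcome and references."""
    return sum(abs(references[k] - outcome[k]) for k in references)

# Choose the alternative whose imagined outcome has the least error.
choice = min(alternatives, key=lambda name: imagined_error(alternatives[name]))
```

Whether this counts as a control process is the open question: the sketch
only shows that the rule is deterministic, not whether the comparison
itself reduces an error in some higher-level system.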

If you can't do that, I would be justified in assuming that you're just
pissed off at PCTers, and are assuming that anything you can't explain,
they can't explain. That's not how to be taken seriously.

I hope my statement above cleared this up.

Perhaps what's bothering you is that PCT contains no specific model of how
we do things like reasoning, thinking, calculating, talking, and so on. If
that's your problem, you're right. How those higher levels work is not well
worked out. We can see in rough outline how control-like processes could
take place at the symbol-handling, rule-driven level, but matching those
processes to performance is, right now, beyond us. I can add that it is
beyond everyone else, too.

Fine. I must point out that this modesty is not always revealed in posts to
CSGnet.

A milder example is a sign in a hospital loading zone that says "No
Parking: $50 fine." Now when a driver chooses to park elsewhere, is it
because the driver agrees that that is not a proper place to park, or
simply because the driver doesn't want to be out $50? In other words, is
the driver obeying the rule out of voluntary good citizenship and respect
for the needs of the hospital, or some other such motive that would apply
whether there was a fine or not, or just because parking there might cost a
lot of money? When a driver passes up that potential parking place, you
can't tell which is the reason. The only way to tell is to experiment:
change the sign to read: "Please respect this loading zone and don't park
here." When there is no mention of any penalty, and a driver looking for a
place to park passes up the loading zone, we would be more justified in
thinking, "Good for you, buddy."

Actually, my favorite is "Don't even think about parking here".

What did you think of my answer to this problem? It's far from new to me.

I'm still not clear why reorganization does not take place. But it seems, neither
are you. This seems to me to be a major challenge to the HPCT model.
Clearly, having and executing a plan seems to make a big difference, but why
remains obscure.

Bruce Gregory