[From Rick Marken (980801.1050)]

Bruce Abbott (980801.0835 EST)
Bill Powers (980801.0758 MDT)
Bruce Abbott (980801.1200 EST)

I had to check reference.com to make sure my posts were actually
getting through to CSGNet. They are, so I guess I'm either saying
nothing that is a disturbance to the variables being controlled by
Abbott and Powers, or I'm just being ignored. Either way, it's
fun to sit back and just watch. Why get involved when I can get
beauties like this for free:

Bill Powers (980801.0758 MDT)

Sorry, Bruce. I never got into that world [of traditional
psychology] and have no desire to get into it now.

Bruce Abbott (980801.1200 EST)

I've heard your rejection (based purely on visceral reaction,
in my opinion). I haven't heard you propose any convincing
alternative model that would account for the data.

Priceless.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Gregory (980801.1716 EDT)]

Bill Powers (980801.111p MDT)

Why must a decision have been made at some point? Why couldn't the system
have picked one course of action at random, ignoring all the ones
not picked?

If the time allotted for a "choice" is sufficiently small, effectively
ruling out the active involvement of higher level control systems, an action
picked at random is the best that can be done. If more time is allowed,
evaluating the imagined consequences of several actions seems likely to be
more successful--especially when it may take a long time to determine the
ultimate effect of an action.

All right. There is nothing about "choosing" in those discussions, or
comparison of "better" and "worse." You're trying to force outmoded,
old-fashioned concepts onto PCT. And I am resisting.

How does PCT explain choice then? Why do people agonize over decisions if
the outcomes seem likely to prove equally desirable?

So you think that the organism is always comparing alternatives,
evaluating
their relative advantages and disadvantages, and choosing which action to
take by imagining which outcome would feel best? I don't think
that.

What _do_ you think?

Bruce Gregory

[From Bruce Gregory (980801.1825 EDT)]

Rick Marken (80801.1500)

> Of course, this apparent program of action exists as a perception
> in the observer; it is not controlled by the person controlling
> the cursor.

Me:

> Unless, of course, the person _is_ controlling the cursor via
> such a program. This would have to be determined by using the
> TEST.

It already has; people don't control cursors using such a program
because they _can't_; they vary handle movement as necessary
(not according to a program) to keep the perception of the cursor
in a reference state. We know this from Testing and modeling.

If I varied the handle movement by using a program, my control would be
significantly worse than is observed. Is this what the studies show? If the
disturbance were sufficiently slow might not control by a program be
possible?
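Bruce's question is testable in simulation. Here is a minimal sketch (entirely my own construction; the gains, step sizes, and disturbance are assumptions, not values from any of the studies mentioned) comparing a continuous controller with a crude IF/THEN "program" controller in a compensatory tracking loop:

```python
import math

def track(controller, steps=2000, dt=0.01):
    """Return the RMS cursor error for a simple compensatory tracking loop."""
    handle, sq_err = 0.0, 0.0
    for t in range(steps):
        disturbance = math.sin(0.5 * t * dt)  # slowly varying disturbance
        cursor = handle + disturbance          # cursor = handle + disturbance
        error = 0.0 - cursor                   # reference: cursor at zero
        handle += controller(error) * dt       # integrating output function
        sq_err += cursor ** 2
    return math.sqrt(sq_err / steps)

def continuous(error):
    return 50.0 * error                        # output varies continuously with error

def program(error):
    return 5.0 if error > 0 else -5.0          # "IF cursor right THEN move left"

print("continuous RMS error:", track(continuous))
print("program RMS error:   ", track(program))
```

With this slow disturbance the program controller does manage a kind of control, as Bruce suspects, but the continuous controller still holds the cursor closer to its reference.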

So you are saying that much of the exchange about _coercion_
focussed on control at one level. You suggest that, if we had
not "ignored the possibility that [a person] was controlling
at the program level on the basis of a perception at the
principle level that [another person], with support, can work
out his problems and return to [another activity]" then the
coercion discussion would have gone better. Is that right? If
so, could you please explain what was wrong with the coercion
discussion and how consideration of higher levels might have
made this discussion go better? [Again, please leave out any
mention of RTP in your reply]

A bather is thrashing about at the deep end of the pool. A life guard leaps
into the water and, ignoring the bather's attempt to exercise control, drags
her to the side of the pool. This presumably is no more or no less coercive
than a life guard who drags a bather from the shallow end to the deep end
where the bather commences thrashing. Both lifeguards are coercive. One gets
profuse thanks from the bather for ignoring her efforts to control. The
other is arrested and placed under observation.

Bruce Gregory

[From Rick Marken (80801.1500)]

Bruce Gregory (980801.1357 EDT)

Don't feel like the lone ranger. I'm still waiting for a response
from you re:

[Bruce Gregory (980728.0635 EDT)]

Well, while I sit politely on the sidelines of the great
Abbott/Powers debate (politeness prevents me from mentioning
whose contributions to that debate are making it _great_) I guess
I can respond to your post (I actually saved it so I must have
meant to respond before I got caught up in creating my irrelevant
contributions to the great Test debate;-)).

Bruce Gregory (980728.0635 EDT)

Me:

Yes. That is the illusion that MGP fell for; actions seem to be
the result of program control if this is the level of your
_own_ perceptions of the behavior to which you attend.

You:

Are you speaking metaphorically?

No. As usual, I am speaking clearly and concretely;-)

If I am watching you control a cursor I am not sure how I can
perceive you at the level of program control.

My experience is that most people perceive tracking at the
program level when they first look at it. They see two main
contingencies: move handle left _if_ cursor moves right and
move handle right _if_ cursor moves left. My first research
efforts in PCT were aimed at helping people see that this
perception of the behavior that occurs in a tracking task
is illusory (it doesn't correspond to what the subject is
actually doing).

Is this different from saying that I interpret your actions as
if you were following a program?

No. It's exactly the same. You are seeing me doing what appears
to be carrying out a program but I may not be _controlling_ for
a program perception.

A good example of perceiving behavior in a way that has nothing
to do with what the actor is actually doing occurs in my baseball
catch demo at: http://home.earthlink.net/~rmarken/demos.html
There, it looks like the fielder is _anticipating_ and _planning_
in order to get to the ball. You can _perceive_ anticipation and
planning in the behavior of the fielder. In fact, the fielder
is not anticipating or planning at all; he's just controlling a
present time perception of the vertical and lateral optical
velocity of the ball on the retina.

It is also unclear to me why the fact that I am controlling at
the program level should necessarily lead me to interpret your
actions as demonstrating control at the program level.

I never said that. I didn't say that the _observer_ was controlling
at the program level; the observer is just _observing_. The
observer can attend to any one of several different levels of his
own perceptions of the _same_ behavior; the observer can see
tracking behavior as handle or cursor movements (transitions), as
a _relationship_ between cursor and handle movements; as a
_sequence_ of actions (left, right, left, right), as a _program_
of actions, as a demonstration of the _principle_ that behavior is
the control of perception, etc. The entire hierarchy of perception
is always there; all levels, simultaneously -- all the time.

Me:

Of course, this apparent program of action exists as a perception
in the observer; it is not controlled by the person controlling
the cursor.

Ye:

Unless, of course, the person _is_ controlling the cursor via
such a program. This would have to be determined by using the
TEST.

It already has; people don't control cursors using such a program
because they _can't_; they vary handle movement as necessary
(not according to a program) to keep the perception of the cursor
in a reference state. We know this from Testing and modeling.

Me:

Could you give an example of what you mean? Are you suggesting
that there is something wrong with some exchanges on CSGNet
because the participants in those exchanges lose sight of the
fact that perceptions at one level are controlled as a means
of achieving higher level goals? Could you explain this?

Ye:

In my view, much of the exchange about coercion and RTP focussed
on control at one level (the disrupting child will leave the
classroom) and ignored the possibility that the teacher was
controlling at the program level on the basis of a perception
at the principle level that the child, with support, can work
out his problems and return to the classroom. Or at least, I
never saw any mention of possible higher level perceptions.

Ah. Now I remember why I didn't respond to your post. Remember,
I can't talk about RTP. So let's imagine that you didn't even
mention it. Let's just talk about coercion.

So you are saying that much of the exchange about _coercion_
focussed on control at one level. You suggest that, if we had
not "ignored the possibility that [a person] was controlling
at the program level on the basis of a perception at the
principle level that [another person], with support, can work
out his problems and return to [another activity]" then the
coercion discussion would have gone better. Is that right? If
so, could you please explain what was wrong with the coercion
discussion and how consideration of higher levels might have
made this discussion go better? [Again, please leave out any
mention of RTP in your reply]

Best

Rick

···

--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Abbott (980802.1000 EST)]

Bill Powers (980801.111p MDT) --

Bruce Abbott (980801.1200 EST)

Since the alternative condition can only be present or not present, the
"degree to which it is present" is either 0% or 100%.

True, but that's not what I said. I said "degree to which the rat _keeps_
the alternative present," which is not either 0% or 100%.

"Keeps", then, means "on the average?" OK. Forget it.

"On the average?" "Forget it?" You're talking through your hat, Bill.
It's just a measure of the quality of control, and it's not an average. If
a variable is under control, it will be kept near its reference value,
despite disturbances acting on the variable. In the case of a dichotomous
controlled variable, the variable is either at reference or it is not.
Percentage of time spent at the reference value reflects how well the system
is controlling, especially since in the absence of control action, the
imposed disturbances keep that percentage close to zero. Furthermore, all
measures of rate I have seen you employ depend on taking a time sample and
computing change over that delta t. The percentage measure under discussion
can be computed in exactly the same way, at any point within the session.
How do _you_ propose to measure the effectiveness of control over a
dichotomous variable? Do you have some magical measure of control
performance up your sleeve that does not involve taking samples over time?
I'd like to hear about it.

That's what I had in mind. Another, correlated measure, would be the
average error, which would decrease with "willingness." Latency to press
the lever and reinstate the alternative condition is another measure; the
latency determines how much time is spent in the imposed condition before it
is replaced by the alternative.

Yes, and this could be modeled as a control system with an integrating
output function.

Sure. So error in some system is cumulating more rapidly when the animal
presses rapidly than when it takes its time. Now we have to identify the
control system in which the error cumulates and its controlled variable.
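One way to cash out "integrating output function" here, sketched with assumed gain and threshold values (my illustration, not Abbott's or Powers's model): error accumulates in the output, and the press occurs when the integral crosses a threshold, so larger error means shorter latency.

```python
def press_latency(error, gain=1.0, threshold=10.0, dt=0.1):
    """Time until the integrated output crosses the press threshold."""
    output, t = 0.0, 0.0
    while output < threshold:
        output += gain * error * dt  # integrating output function
        t += dt
    return t

print(press_latency(error=2.0))   # larger error, shorter latency
print(press_latency(error=0.5))   # smaller error, longer latency
```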

Task difficulty does not vary across the manipulations, as I pointed out in
my post. What is "perceptual uncertainty"?

How do you know that the task doesn't get harder for the rat when the
signal's relationship to the shocks becomes more ambiguous? The increase in
delay before switching to the signaled condition may well reflect the
difficulty the rat is having in perceiving that the signaled condition
exists or doesn't exist. Why do you just dismiss this possibility?

The animal receives extensive training under the signaled and unsignaled
conditions, each of which is associated with a different correlated stimulus
(e.g., houselight on, houselight off). It remains easy for the rat to tell
which condition it is in across manipulations of the characteristics of the
signaled condition, and the action required to maintain the signaled state
does not change.

Perceptual uncertainty is the inability to perceive clearly whether a given
signal is a sign that a shock is about to occur; in other words, the
perceptual signal is small compared with the noise because the relationship
of signal to shock is unreliable.

I haven't dismissed this possibility. It could well be a reason why the
signaled condition becomes less desirable to the rat as the signal-shock
relationship departs from the nominal one.

But you still can't say how the rat is experiencing all this. It is
unlikely that the experiences could be cast in terms meaningful to a human
being.

I can't say what any other _human being_ is experiencing, nor can you. I was
trying to present the proposal in familiar terms, as a way to get the
concepts across.

You call the presses of the lever "responses." To what events are these
actions "responses?" We can easily verify that they are actions: we can see
the rat producing them. But how would we verify that each action is a
response to something?

Sorry, I intended no such implication. How about "discrete actions"?

Is this a report of a drastic change of policy on your part, or are you
just humoring me?

Drastic change in policy? What on earth are you talking about? I'm not
humoring you. You were quite right to correct my usage here.

Yes, I understand that point. That's why I think that the situation is more
complex -- increasing the time spent in the signaled condition when the
circumstances there are made less desirable will not make the experience
there any better.

A bad cup of coffee cannot be improved by increasing how
much of it you drink. Organisms whose systems were built to behave in this
way would soon perish.

You declared that there would be no good effect of increasing the time
spent in the signaled condition, and then went on to deliver an analogy
based on that assumption. But if the assumption is wrong, the analogy is
irrelevant.

Of course. And if the assumption is right, the analogy is valid.

Suppose that being in the signaled condition enables the rat to defend
itself against most of the effects of shocks. Then the more time that is
spent in the non-signaled condition, the more shocks will occur that the
rat can't defend against. So increasing the time in the signaled condition
will definitely have an improving effect on what the rat experiences.

The only problem with that analysis is that the data do not support it.
That suggests to me that it is time to examine other alternatives.

But let's take a good look at my coffee example above. How does PCT deal
with this well-known result?

I'd like to try it, but things here are starting to get a bit hectic again
(deadlines and all) so it might be a while before I can get started.

You don't want to do it. OK, I'll put it on my list.

I didn't realize that your invitation was a demand for immediate action.
But go ahead, put it on your list. It will be interesting to see how you are
able to develop a model of a proposal you do not understand, and which you
do not believe I can translate into code.

I don't believe that the rat is deliberating its options; the actual process
as I envision it is considerably simpler, but I wanted to state the case in
a way that we human beings could relate to.

Why, if human beings don't do it this way, either? What you're doing is
called anthropomorphizing. And in this case, you're doing so in terms of an
illusion many people have about how often they actually make any decisions.

That illusion being what?

What's relevant are the
proposed underlying processes -- the various perceptions associated with the
signaled and unsignaled conditions yield evaluations of "good" and "bad" as
you suggested in a recent post and in B:CP. The organism is organized so as
to prefer "better" over "worse," and will spawn appropriate control systems
so as to obtain the former over the latter.

No. New control systems are "spawned" to correct intrinsic error. Are you
using "spawned" in the Unix sense, or the salmon sense? Or the reorganizing
sense? They're all different.

I believe that we have a general capacity to bring control systems into
being on the fly, as needed, at levels of control above those required for
controlling limb positions, velocities, accelerations, and so on (those are
hardwired).

You can make anything sound like a choice situation just by picking the
right words. That doesn't mean that there's actually any choice being made.

No. Everything is a choice situation.

I can't interpret that comment. Do you mean "No, it doesn't mean that
there's actually any choice being made"? Or do you mean, "No, I disagree,
and I assert that everything is a choice situation"? Or did you leave out a
"not"?

Option 2.

So during training, the driver is presented with the choice of turning the
wheel left or right to turn the car left or right, and learns to choose one
of them? That, of course, is not how reorganization works, so you're
proposing a new model of learning, perhaps by some system that already
knows how to carry out the operation you call "choosing." Perhaps you had
better describe your model of how a system "chooses."

You've chosen an example that would involve very little if any deliberation
of alternatives. It's a case where a random choice might be best,
especially given the time constraints. If that choice leads to a worsening
of conditions, that leaves turning the wheel the other way as the only
reasonable alternative. But even a random choice is still a choice -- a
selection of one course of action over other possible ones. It may not
require any deliberation at all, especially after the performance has become
habitual.

Why must a decision have been made at some point? Why couldn't the system
have picked one course of action at random, ignoring all the ones not picked?

Picking from among alternatives is choice. I think you're reading too much
into my use of the phrase "making a decision." It need not involve any
deliberative process, although it may.

That's gobbledegook. What is "controlling more strongly?" If you mean loop
gain, say so. If you mean higher reference level, say that. If you mean
higher gain in the input function, say that. If you mean more powerful
output function, say that. Controlling "more strongly" means nothing.

Loop gain.

So are you saying that the higher system senses the relative loop gains of
the two or more lower systems involved, and selects the lower system with
the higher loop gain? How does it do that? Is this an example of "and then
a miracle occurs"? How can the system distinguish between a lower system
with a high loop gain and another lower system with the same loop gain but
a higher reference signal? And since the higher system _contributes to_ the
reference signal in the lower system, what keeps it from increasing or
decreasing the "strength" of control in the lower systems? And if the
higher system can adjust the loop gain of the lower systems, what "decides"
which lower system will be given the higher gain?

Excellent questions. Some while back I wrote a computer program (and
published it on CSGnet) in which an "e. coli" bacterium found its way to a
nutrient source by means of a biased random walk (reorienting its direction
of travel at random, but doing so more often when nutrients were decreasing
than when they were increasing, as a result of its motion through the
nutrient solution). A second-level system varied the gain of this system
according to the amount of nutrient currently stored by the bacterium. At a
certain level of stored nutrient, the sign of the gain reversed and the
bacterium would then avoid rather than seek the greatest concentration of
nutrient. That's the sort of model I have in mind.
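The program itself isn't reproduced in this thread, so the following is only a rough reconstruction of the architecture described; the nutrient field, step sizes, tumbling probabilities, and satiation threshold are all my assumptions.

```python
import math
import random

def gain_for(stored, satiation=50.0):
    """Level 2: the sign of the level-1 gain reverses once enough is stored."""
    return 1.0 if stored < satiation else -1.0

def tumble_prob(signed_change):
    """Level 1: tumble rarely while the signed nutrient perception improves."""
    return 0.05 if signed_change > 0 else 0.5

def run(steps=5000, seed=1):
    random.seed(seed)
    x, y = 10.0, 10.0                            # start away from the source
    heading = random.uniform(0.0, 2.0 * math.pi)
    stored, prev = 0.0, 0.0
    for _ in range(steps):
        nutrient = 1.0 / (1.0 + x * x + y * y)   # concentration peaks at (0, 0)
        stored += 0.1 * nutrient
        change = gain_for(stored) * (nutrient - prev)
        if random.random() < tumble_prob(change):
            heading = random.uniform(0.0, 2.0 * math.pi)
        x += 0.05 * math.cos(heading)
        y += 0.05 * math.sin(heading)
        prev = nutrient
    return math.hypot(x, y)                      # final distance from the source

print("final distance from source:", run())
```

With the gain positive the biased random walk drifts toward the source; once stored nutrient passes the satiation threshold the reversed sign makes "better" and "worse" trade places, and the same mechanism produces avoidance.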

That an organism finds something "attractive" does not mean that the object
has the property of attractiveness. I am asserting no such model as you
suggest.

Then why use language that _does_ have that meaning to most people? It
makes a great deal of practical difference whether we assume that the
environment does the attracting, or whether desirability is in the brain of
the beholder. Those who put attractiveness in the environment include
rapists who say "She was asking for it." So this is not a trivial issue.

It's a word everyone understands. What do you suggest as an adequate
replacement?

That implies a control system that is sensing the degree of relative
attraction and controlling for -- what? Is there any "attraction" to sense
out there in the first place? How does the organism perceive that one
alternative is more favorable than the other? Favorable in terms of what
perceptual variables and reference levels?

See your own discussions of "pleasure/pain" or "good/bad."

All right. There is nothing about "choosing" in those discussions, or
comparison of "better" and "worse." You're trying to force outmoded,
old-fashioned concepts onto PCT. And I am resisting.

Of course there is. The reorganizing system selects (chooses) alternatives
at random; those that lead to "better" get retained; those that don't, don't.

And controlling for them how? By
varying the gain of lower systems? Would that really work?

No, by controlling for those options that yield the best outcome in terms of
the organism's perceptions of what feels best.

So you think that the organism is always comparing alternatives, evaluating
their relative advantages and disadvantages, and choosing which action to
take by imagining which outcome would feel best? I don't think that. You'll
have to prove it to me.

That's not quite it (the process need not be conscious or deliberative); the
organism is simply so organized as to control for those alternatives, among
a possible set, which by certain criteria built into the organism, generally
lead to a perception of "better" over "worse."

My proposal is only a
first suggestion, based on the fact that higher gain yields more vigorous
action at a given level of error.

That is the wrong way to analyze it. Higher gain yields lower error and
essentially the same degree of action, at a given setting of the reference
signal, with a given disturbance magnitude, and with a given environmental
feedback function. Do the algebra. Prove it to yourself.
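For readers who want the algebra Bill is pointing at: in the simplest proportional loop with unit feedback, p = o + d and o = G*(r - p), so at steady state e = (r - d)/(1 + G) and o = G*e. A quick numeric check (the particular r, d, and G values are arbitrary):

```python
def steady_state(G, r=10.0, d=3.0):
    """Steady-state error and output of a proportional loop with unit feedback."""
    e = (r - d) / (1.0 + G)   # error shrinks as gain rises
    o = G * e                  # output approaches r - d, nearly independent of G
    return e, o

for G in (10.0, 20.0, 100.0):
    e, o = steady_state(G)
    print(f"G={G:5.0f}  error={e:.3f}  output={o:.3f}")
```

Doubling the gain roughly halves the steady-state error while the output barely moves, which is Bill's point about the end state; Bruce's point concerns the transient, where at a given momentary error a higher gain does produce stronger action.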

Geez, Bill, have you forgotten the context of my analysis? You're talking
about the end-state, when error has been minimized, and I'm talking about
the disturbed state, in which, at higher gain, a given level of error will
produce stronger action. There is nothing "wrong" with my analysis.

You know this, Bruce, when you're thinking with your PCT-aware persona. In
this whole argument, you're acting as if you've never heard anything of PCT
but the words.

Blarney. See above.

Do you realize what an elaborate system you're proposing here?

Yes.

You're just falling back on
all the tired old psychological concepts, the very concepts that drove me
away from wanting to be a psychologist, probably before you were born.

Enlighten me. What concepts are you talking about?

On
the one hand you express a desire to be known as a PCT-savvy scientist, but
on the other you keep saying things that show a failure to have
internalized the concepts of negative feedback control.

On what grounds do you make this claim? What things did I say that "show a
failure to have internalized the concepts of negative feedback control"?
Specific examples, please.

Regards,

Bruce

[From Bruce Abbott (980802.1155 EST)]

Rick Marken (980801.1840) --

Bruce Gregory (980801.1950 EDT)

Choice is conflict? Sorry to be so dense, but I don't understand.
I do understand conflict but run choice by me one more time.

You answered it in your question; we feel like we are making a
choice when "the outcomes seem likely to prove equally
desirable". Ordinarily, we just produce the outcomes
(perception) we want; we are in control. But when we have
goals for equally desirable perceptual outcomes, all of which
cannot be achieved simultaneously, we are in conflict; we have
the experience of having to _choose_. For example, you want
both the ice cream and the cake; but an environmental constraint
(your mom) will not allow you to have both. Conflict (choice) time.

When two control systems are in conflict, the result is that neither CV is
well controlled, unless there is a large difference in gain between the two
systems. That is not choice. When two alternatives are mutually exclusive,
or must be pursued sequentially, one must be selected over the other for
immediate control. It isn't that we feel _as though_ we are making a
choice (selecting one alternative from among the set); we _are_ making a
choice. Calling it an illusion does not make it disappear. You have given
us no explanation for how that happens. Please supply one this time,
instead of dodging the question with a lot of arm waving.
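Abbott's opening claim can be checked directly. A minimal sketch (my construction; the gains, references, and integration constants are assumed) of two integrating control systems acting on the same environmental variable with incompatible references:

```python
def settle(g1, g2, r1=10.0, r2=-10.0, steps=5000, dt=0.01):
    """Final value of a variable v pushed by two conflicting control systems."""
    v, o1, o2 = 0.0, 0.0, 0.0
    for _ in range(steps):
        o1 += g1 * (r1 - v) * dt  # system 1 pushes v toward r1 = +10
        o2 += g2 * (r2 - v) * dt  # system 2 pushes v toward r2 = -10
        v = o1 + o2               # both outputs add on the same variable
    return v

print("equal gains:   v =", settle(1.0, 1.0))   # stuck between the references
print("unequal gains: v =", settle(10.0, 0.1))  # near system 1's reference
```

The equilibrium is the gain-weighted mean of the references, (g1*r1 + g2*r2)/(g1 + g2): with equal gains v sits at 0, far from both references, and only a large gain difference lets one system control its variable well.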

Regards,

Bruce

[From Bruce Abbott (980802.1435 EST)]

Rick Marken (980802.1120) --

Bruce Abbott (980802.1155 EST)

When two control systems are in conflict, the result is that
neither CV is well controlled, unless there is a large difference
in gain between the two systems. That is not choice.

Agreed. The choice occurs when, for higher level reasons, control
of one CV (or both) is abandoned. So the "choice" is what happens
when the kid reorganizes by randomly picking ice cream over cake in
order to get _something_.

What if the kid likes this particular flavor of ice cream somewhat better
than he likes this particular type of cake? You're saying that he has to
flip a coin in order to decide? It sounds to me like you are constructing
the only example in which a random decision would be called for -- the case
in which the alternatives are equally attractive. So for you, "choice" is
what happens when we flip a coin (figuratively or literally speaking) and
selection according to what you like best is something else.

When you are in the conflict, you have
the experience of "making an agonizing decision" -- I want the
ice cream, no the cake, no the ice cream, no the cake... I
agree that we don't understand how such a choice is finally made,
when (and if) it is.

What about choice without conflict? Or what would be called choice by any
reasonable person if you had not excluded it through a narrow redefinition
of the term. It's not agonizing, I prefer the ice cream, and that's what I
choose.

I imagine it's solved differently every time
(by reorganization, going up a level, insuperable disturbance to
one alternative -- there is no cake left, change in goals -- time
to get on the roller coaster, etc). My point was simply that the
phenomenon that we call "choice" is explained by PCT; but we need
to do some experiments to see if the model is correct.

Insuperable disturbance to one alternative? That would assume that you are
already controlling for it, and choose to abandon it in favor of an
alternative. That would be a reason for choice, not a mechanism of choice.
But as you said, you don't know how choices are made. Either reorganization
occurs (random selection, although how the reorganization mechanism "knows"
what part of the system to tinker with remains unspecified in PCT) or a
miracle occurs and the selection just happens. Explained by PCT?

It isn't that we feel _as though_ we are making a choice
(selecting one alternative from among the set); we _are_
making a choice. Calling it an illusion does not make it
disappear.

The illusory part is that choice is "willful". When you are in
a choice situation (conflict) there is no "correct" choice;
you don't really "select freely", as the dictionary says of
choice; the "choice" is made by circumstance (a random flip of
the reorganization coin, a disturbance to one of the choices, some
other factor that leads to a change in the conflicting goals,
etc). So the illusion of choice is the illusion that choice is
willful. In control theory, a decision situation is a conflict
and a choice is a resolution (by whatever means) of that conflict.

So I don't choose what I want or judge to be the best option (choice is not
willful), because in your definition, choice is not choice except when there
is no rational basis for choice. And the illusion of choice is that I
select freely (dictionary definition), because it is produced by a
determinate mechanism (no free will). But I think "free" as used in the
dictionary simply means that the system is free to choose based on its inner
state as opposed to being forced or pressured to make a particular choice by
an outside agency (coerced).

Regards,

Bruce

[From Rick Marken (980802.1120)]

Bruce Abbott (980802.1155 EST)--

When two control systems are in conflict, the result is that
neither CV is well controlled, unless there is a large difference
in gain between the two systems. That is not choice.

Agreed. The choice occurs when, for higher level reasons, control
of one CV (or both) is abandoned. So the "choice" is what happens
when the kid reorganizes by randomly picking ice cream over cake in
order to get _something_. When you are in the conflict, you have
the experience of "making an agonizing decision" -- I want the
ice cream, no the cake, no the ice cream, no the cake... I
agree that we don't understand how such a choice is finally made,
when (and if) it is. I imagine it's solved differently every time
(by reorganization, going up a level, insuperable disturbance to
one alternative -- there is no cake left, change in goals -- time
to get on the roller coaster, etc). My point was simply that the
phenomenon that we call "choice" is explained by PCT; but we need
to do some experiments to see if the model is correct.

It isn't that we feel _as though_ we are making a choice
(selecting one alternative from among the set); we _are_
making a choice. Calling it an illusion does not make it
disappear.

The illusory part is that choice is "willful". When you are in
a choice situation (conflict) there is no "correct" choice;
you don't really "select freely", as the dictionary says of
choice; the "choice" is made by circumstance (a random flip of
the reorganization coin, a disturbance to one of the choices, some
other factor that leads to a change in the conflicting goals,
etc). So the illusion of choice is the illusion that choice is
willful. In control theory, a decision situation is a conflict
and a choice is a resolution (by whatever means) of that conflict.

Best

Rick

···

--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (980802.1318 MDT)]

Bruce Gregory (980801.1716 EDT)--

Why must a decision have been made at some point? Why couldn't the system
have picked one course of action at random, ignoring all the ones
not picked?

How does PCT explain choice then? Why do people agonize over decisions if
the outcomes seem likely to prove equally desirable?

Of course choice occurs. We reason about things, imagine what will happen,
try to figure out the best thing to do based on our knowledge of the world.
But choosing isn't a level of perception or control; it's just one of the
things we can do with the levels we have. Choices are called for when there
is more than one goal to pursue or more than one way of pursuing it. That
is, they occur when we have an internal conflict. When there is no conflict
there is no need for making a choice or using any decision process.

How much choosing one does is very much a matter of individual
organization. Some people see choices in everything -- which is the best
hand to push the door open with, which pocket is best to put the change in,
which item on the menu gives the most satisfaction for the money, which
foot to put first when starting to walk. But other people make very few
choices. They just push the door open with whichever hand is nearest to it,
they put the change in the pocket on the side of the hand holding the
change, they scan the menu until something strikes their fancy and don't
worry whether they might have liked something else more, they don't even
think about which foot to start with. Some people are desperately afraid of
choosing wrong; some aren't. It's just a matter of how they've become
organized. Some people agonize over every possible decision; most people
save their agonizing for really critical ones; and the rest just charge
ahead thinking they can live with whatever happens.

What _do_ you think?

I think there is nothing fundamental about making choices and decisions;
they're just things we can do if we want to. By the time one person has
decided on what to have for lunch, another has finished his Big Mac and is
outside again.

The problem is that people have used "decision" and "choice" as concepts to
explain behavior, as if these were fundamental human capacities instead of
optional procedures. A hierarchy of control systems controlling many
perceptions in parallel and many levels of perception will do many things
that look like choosing or deciding, but which probably don't involve
either process. Think of the Crowd program, in the simplest configuration:
a single person moving through an array of stationary obstacles toward a
goal. If you see decisions and choices in everything, you'll guess that
this simulated person is continuously presented with choices, having to
decide whether to pass left or right of the next obstacle, and having to
choose a path that will lead to the goal position as quickly as possible
and with the least effort. But having written the program I can assure you
that no choices are being made, and no decisions about which way to go. Of
course you could look at the program and define certain processes as
choosing or deciding, but there is no need to do so. All that's required is
the concept of a control system. All the behavior follows from that.
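Bill's point about the Crowd demo can be illustrated with a toy version. This is my own sketch under assumed gains and geometry, not the actual Crowd program: two control loops, one urging the agent toward a goal position and one pushing it away whenever an obstacle gets closer than a reference distance, are simply summed. The agent gets past the obstacle and reaches the goal, yet no branch in the code ever decides to pass left or right.

```python
import math

# Toy two-loop agent (a sketch, not Bill's actual Crowd program):
# one loop reduces distance to the goal, the other keeps proximity
# to each obstacle below a reference distance. Outputs are summed.
def step_agent(pos, goal, obstacles, gain_goal=0.1, gain_avoid=0.5):
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    vx, vy = gain_goal * gx, gain_goal * gy          # goal-seeking loop
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < 1.0:                           # closer than reference
            push = gain_avoid * (1.0 - d) / d        # error-driven repulsion
            vx += push * dx
            vy += push * dy
    return (pos[0] + vx, pos[1] + vy)

pos = (0.0, 0.05)            # small offset; no symmetry-breaking "choice"
goal = (10.0, 0.0)
obstacles = [(5.0, 0.0)]
for _ in range(300):
    pos = step_agent(pos, goal, obstacles)
# pos ends up essentially at the goal; the path bent around the
# obstacle without any branch choosing a side
```

An observer watching the trajectory could describe the agent as "deciding" to veer around the obstacle, but the program contains only two error-reducing loops whose outputs add.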

Best,

Bill P.

[From Bill Powers (980802.1358 MDT)]

Bruce Abbott (980802.1000 EST) --

Bruce, I'm out of this argument. It's going nowhere. I'm attacking, you're
defending, and nothing good will come out of it. Our points of view,
despite our areas of agreement, are too far apart for bridging.

Best,

Bill P.

[From Bruce Abbott (980802.1725 EST)]

Rick Marken (980802.1500) --

Bruce Abbott (980802.1435 EST)

What if the kid likes this particular flavor of ice cream
somewhat better than he likes this particular type of cake?

The conflict expresses itself at the level below the level
of the conflicting goals (having cake or ice cream) -- let's
say at the level of the hand postion control system. If the
gain of the ice cream control system is stronger than that
of the cake control system then the net reference to the hand
position control system will bring the hand closer to the
ice cream than to the cake. There will be a virtual reference
position for the hand that is between the positions that will
satisfy either of the higher level goals (having ice cream and
having cake).

So your prediction is that the kid will move his hand somewhere between the
cake and ice cream, but closer to the ice cream. Hmmmm.

What about choice without conflict? ...It's not agonizing, I
prefer the ice cream, and that's what I choose.

That kind of choice is called _control_.

How does the system end up controlling for ice cream as opposed to reaching
somewhere between the cake and ice cream and satisfying neither reference?

Regards,

Bruce

[From Bruce Gregory (980802.1747)]

Rick Marken (980802.1120) --

The illusory part is that choice is "willful".

Evidence?

When you are in
a choice situation (conflict) there is no "correct" choice;

Evidence?

you don't really "select freely", as the dictionary says of
choice;

Evidence?

the "choice" is made by circumstance (a random flip of
the reorganization coin, a disturbance to one of the choices, some
other factor that leads to a change in the conflicting goals,
etc).

Evidence?

So the illusion of choice is the illusion that choice is
willful. In control theory, a decision situation is a conflict
and a choice is a resolution (by whatever means) of that conflict.

You meant in PCT didn't you? PCT is not the only control theory. Or is it?
This post makes it clear that PCT is a system of beliefs, not very far from
the alternative models that Rick and Bill hold in such contempt.

Bruce Gregory

[From Bruce Gregory (980802.1900 EDT)]

Rick Marken (980802.1500)

The conflict expresses itself at the level below the level
of the conflicting goals (having cake or ice cream) -- let's
say at the level of the hand position control system. If the
gain of the ice cream control system is stronger than that
of the cake control system then the net reference to the hand
position control system will bring the hand closer to the
ice cream than to the cake. There will be a virtual reference
position for the hand that is between the positions that will
satisfy either of the higher level goals (having ice cream and
having cake).

Where do you get these fanciful stories from, Rick? I have never observed my
hand arriving at a "virtual reference position". You just invent data to
support your beliefs don't you? No wonder you never lose an argument.

Bruce Gregory

[From Rick Marken (980802.1500)]

Bruce Abbott (980802.1435 EST)--

What if the kid likes this particular flavor of ice cream
somewhat better than he likes this particular type of cake?

The conflict expresses itself at the level below the level
of the conflicting goals (having cake or ice cream) -- let's
say at the level of the hand position control system. If the
gain of the ice cream control system is stronger than that
of the cake control system then the net reference to the hand
position control system will bring the hand closer to the
ice cream than to the cake. There will be a virtual reference
position for the hand that is between the positions that will
satisfy either of the higher level goals (having ice cream and
having cake).

What about choice without conflict? ...It's not agonizing, I
prefer the ice cream, and that's what I choose.

That kind of choice is called _control_.

What model of choice are you pushing, Bruce?

Best

Rick


[From Bruce Abbott (980802.0805 EST)]

Bill Powers (980802.1358 MDT) --

Bruce Abbott (980802.1000 EST)

Bruce, I'm out of this argument. It's going nowhere. I'm attacking, you're
defending, and nothing good will come out of it. Our points of view,
despite our areas of agreement, are too far apart for bridging.

I thought you might choose to bail out, but I'm disappointed that you have.
I asked a number of questions I was hoping to get answers to . . .

Regards,

Bruce

[From Rick Marken (980802.1610)]

Bruce Abbott (980802.1725 EST) --

So your prediction is that the kid will move his hand
somewhere between the cake and ice cream, but closer to
the ice cream. Hmmmm.

Depending on the nature of the conflict. My experience is
that this kind of "freezing" at a virtual reference (it's
happened to me many times) doesn't last very long; the
conflict is usually resolved pretty quickly.

How does the system end up controlling for ice cream as
opposed to reaching somewhere between the cake and ice cream
and satisfying neither reference?

I mentioned some possibilities: reorganization, going up a
level, etc. We haven't done much research on conflict
resolution; that's what you are asking about. I think it
would be great if someone would actually go out and start
studying conflict resolution based on an understanding of
PCT; but it looks like people are just having too much fun
bashing PCT (and me -- so much for my low gain strategy; I
guess I'm just a dislikable fellow;-)) so I guess such
research will just have to wait.

Best

Rick


[From Rick Marken (980802.1630)]

Bruce Gregory (980802.1900 EDT)

Where do you get these fanciful stories from, Rick? I have
never observed my hand arriving at a "virtual reference
position".

As I just said to Bruce A., I have, indeed, had this experience
and I know several other people who have too. For example,
I remember having my hands "freeze" in a virtual reference
state between picking up a cookie and holding a cup of coffee.
I couldn't achieve both goals simultaneously so my hands froze in
a state between them; eventually (about 5 seconds, I'd say), I
"chose" to put the cup down (it required moving to another part
of the room) so that I could grasp the cookie and put it on the
(re-picked up) saucer. Little conflicts like this seem rare,
but I think they actually happen pretty often; they are just
resolved so quickly that we don't even notice them.

You just invent data to support your beliefs don't you? No
wonder you never lose an argument.

In this case I actually wasn't inventing data; but keep an eye
on me;-)

I am not trying to win arguments; I am trying to defend my
perception of PCT. I admit it. If you don't like the perception
I'm trying to defend, then when you post to this group you're
just going to get a lot of continuing "arguments" from me (and
others who are defending the same perception) when you disturb
that perception. The way to avoid this is to just stop posting.

Best

Rick


[From Bruce Gregory (980802.2127 EDT)]

Rick Marken (980802.1630) --

I am not trying to win arguments; I am trying to defend my
perception of PCT. I admit it. If you don't like the perception
I'm trying to defend, then when you post to this group you're
just going to get a lot of continuing "arguments" from me (and
others who are defending the same perception)

Which others, Rick?

when you disturb
that perception. The way to avoid this is to just stop posting.

Physician, heal thyself.

Bruce Gregory

[From Rick Marken (980802.2020)]

Me:

I am not trying to win arguments; I am trying to defend my
perception of PCT. I admit it. If you don't like the perception
I'm trying to defend, then when you post to this group you're
just going to get a lot of continuing "arguments" from me (and
others who are defending the same perception)

Bruce Gregory (980802.2127 EDT)

Which others, Rick?

Not many. I guess I must be wrong about PCT because so few people
are defending it along with me.

What are you so mad about, Bruce? What did I say that set you
off like this? It seems to me that over the last few days you
have been posting cryptic, sarcastic posts implying some deep
failures of PCT. I've tried to answer them as politely and
substantively as possible (until I finally gave up and said that
PCT didn't seem to be working out for you -- which seemed to
be a fair statement given your posts over the last several days).
Now you say that I'm an arrogant, brown nosing asshole. Why?
What did I say, in answer to your posts, that led you to say
that I am an arrogant, brown nosing asshole? I'm really curious
about what you, a person who has called me PCT's greatest enemy
and now an arrogant, brown nosing asshole, consider to be polite
net conversation.

Physician, heal thyself.

I don't feel ill. Perhaps you could tell me what I've got.

Best

Rick


[From Bruce Gregory (980802.0920 EDT)]

Rick Marken (980802.2020) --

What are you so mad about, Bruce? What did I say that set you
off like this? It seems to me that over the last few days you
have been posting cryptic, sarcastic posts implying some deep
failures of PCT. I've tried to answer them as politely and
substantively as possible (until I finally gave up and said that
PCT didn't seem to be working out for you -- which seemed to
be a fair statement given your posts over the last several days).
Now you say that I'm an arrogant, brown nosing asshole. Why?
What did I say, in answer to your posts, that led you to say
that I am an arrogant, brown nosing asshole? I'm really curious
about what you, a person who has called me PCT's greatest enemy
and now an arrogant, brown nosing asshole, consider to be polite
net conversation.

You haven't done anything but defend PCT. What I saw as arrogance was simply
your effort to make a forceful case for PCT as an all-encompassing theory
of human behavior. I became angry with you over things that I should have
been able to see were not your limitations but the limitations of trying to
make PCT a theory of everything. PCT is a theory of control. Not everything
is a matter of control. Choice, for example. Or making a decision. Why would
anyone think that these are control phenomena? It is only when you must
explain everything as an example of control that you wind up deciding that
choice is an example of conflict. PCT has a limited number of tools, and
conflict is one of them. Another example of the limitations of PCT is revealed
by the number of times "reorganization" is invoked. The only way a control
system can change is via reorganization (or by appeal to an unnamed "higher
level of control"), so all change must be an example of reorganization. Of
course, reorganization is a blunt instrument, because it leads to random
actions, but it's the only instrument you have if you are trying to develop a
control theory of everything.

You are not the greatest enemy that PCT has; you are PCT's bulldog. When you
say something that strikes me as manifestly wrong or inadequate, my quarrel
is not with you; rather, I've discovered a limit of PCT as a model of
everything. It isn't a model of everything. It's a model of control, and I
have to keep reminding myself of this. I regret the nasty remarks I have made.
They are not about you. They are expressions of my frustration that PCT is
not a theory of everything.

Again I apologize. You have every reason to be bewildered by my attacks. I
am committed to seeing that they do not happen again.

Bruce Gregory