Give and Take

[From Bruce Gregory (980506.1147 EDT)]

Rick Marken (980506.0820)

"Choosing", like "getting feedback", is something a control system
does for itself. "Choosing" can be seen as specifying one reference
state for a perceptual variable rather than another (I choose
"staying in class" rather than "going to the RTP room" for the state
of the perceptual variable "where I want to be"). When seen in this
way it is clear that an external agent can't "give a choice" to a
control system; this would require getting inside the control system
and setting its reference for the relevant perceptual variable.

You no doubt meant to say that choosing is something that an _autonomous_
control system does for itself. (I regularly choose the speed on my cruise
control and the temperature setting for my living room thermostat.) Needless
to say, specifying a reference state for a perceptual variable in an
autonomous control system is something that can only be done by the output
of a higher-level control system (until you reach systems with wired-in
reference levels). No control system that I know of sets its own reference
level.
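Bruce's point can be made concrete with a toy two-level loop. This is only a sketch, not a model from the PCT literature: the gains, the top reference of 10.0, and the higher perceptual function (0.5 times the lower perception) are all invented for the illustration. The one thing it shows is that the lower system's reference signal is written by the higher system's output, never by the lower system itself.

```python
# Toy two-level hierarchy: the lower reference is set ONLY by the
# higher system's output. All numbers are invented for illustration.

def simulate(steps=4000, dt=0.01):
    higher_ref = 10.0    # topmost reference: "wired in", not chosen by the system
    lower_ref = 0.0      # written only by the higher system's output, below
    lower_out = 0.0
    lower_perc = 0.0
    higher_perc = 0.0
    for _ in range(steps):
        higher_perc = 0.5 * lower_perc                       # higher level perceives a function
        lower_ref += 4.0 * (higher_ref - higher_perc) * dt   # higher output = lower reference
        lower_out += 8.0 * (lower_ref - lower_perc) * dt     # lower loop reduces its own error
        lower_perc = lower_out                               # trivial environment link
    return higher_perc, lower_perc

h, l = simulate()
# Both loops settle: the lower perception tracks a reference it never set itself.
```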

Best offer

[From Rick Marken (980506.0820)]

Me:

Another instructive exercise is to analyze "give the student
a choice" in terms of the PCT model.

Bill Powers (980506.0304 MDT) --

Nice. It's like analyzing "giving feedback."

Exactly! One of the remarkable things about seeing the world
through control theory glasses is that it exposes the almost
certainly unintentional duplicity of many of our descriptions
of human interactions.

"Choosing", like "getting feedback", is something a control system
does for itself. "Choosing" can be seen as specifying one reference
state for a perceptual variable rather than another (I choose
"staying in class" rather than "going to the RTP room" for the state
of the perceptual variable "where I want to be"). When seen in this
way it is clear that an external agent can't "give a choice" to a
control system; this would require getting inside the control system
and setting its reference for the relevant perceptual variable.

"Giving a choice" really means "taking away choice". An external
agent who says "you can have your choice of either staying in class
or going to the RTP room" is implicitly limiting the states in
which a control system will be allowed to keep a perceptual variable.
If the control system actually wants to go to the beach, then the
agent offering the choice will presumably prevent selection of this
option -- it wasn't part of the "choice" offered.

So if "choice" means selection of a reference state for a controlled
perceptual variable by a control system then an external agent who
"gives a choice" to this control system is implicitly willing to
limit (control) the control system's actual choices. An external
agent who tells another control system "I am giving you a choice" is
really just saying, in a politically correct way, "I am in control".

The analysis of "giving feedback" is left as an exercise. I will
just report the results of the analysis. From a PCT perspective,
"giving feedback" is actually "disturbing a controlled perception".
"Giving feedback" (like "giving a choice") looks like a great idea
until you look at it through control theory glasses. (The fact that
"giving feedback" and "giving choices" look so good at first glance
may be another reason why people are so reluctant to put on PCT
glasses -- or keep them on once they've tried them;-))

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Rick Marken (980506.0900)]

Me:

"Choosing", like "getting feedback", is something a control system
does for itself.

Bruce Gregory (980506.1147 EDT)

Needless to say, specifying a reference state for a perceptual
variable in an autonomous control system is something that can
only be done by the output of a higher-level control system
(until you reach systems with wired-in reference levels). No
control system that I know of sets its own reference level.

Oh, you nit picky fellow, you;-) You are absolutely right. As
I say, far too often, to my racquetball opponents: "good get"!

Best

Rick


[From Tim Carey (980505.0605)]

[From Bruce Gregory (980506.1147 EDT)]

control and the temperature setting for my living room thermostat.) Needless
to say, specifying a reference state for a perceptual variable in an
autonomous control system is something that can only be done by the output
of a higher-level control system (until you reach systems with wired-in
reference levels). No control system that I know of sets its own reference
level.

Nice observation Bruce. This still doesn't answer the question though, of
how we come to be controlling for one thing rather than another does it?
What makes the output of one higher level reference more compelling (or
something ... I struggle with the language) than another?

Cheers,

Tim

[From Bruce Gregory (980506.1635 EDT)]

Tim Carey (980505.0605)

This still doesn't answer the question though, of
how we come to be controlling for one thing rather than another does it?
What makes the output of one higher level reference more compelling (or
something ... I struggle with the language) than another?

Ultimately, according to PCT, we control for those things that allow us to
exercise control at the "hard wired" level, since these reference levels are
established by genetics and not subject to change. If you are using RTP in a
school system, it ultimately traces back to the way that controlling
according to this principle allows you to survive and thrive. If someone
else is using a more rigid (dare I say coercive?) system, it is because
_that_ approach allows them to survive and thrive. Only when you or the
other person encounter persisting errors as a result of applying your
approaches will either of you try other approaches--and only with great
reluctance! This doesn't imply that the two approaches are of equal "merit"
only that they each meet deep seated objectives or reference levels.

Best Offer

[From Tim Carey (980507.1900)]

[From Bruce Gregory (980506.1635 EDT)]

Ultimately, according to PCT, we control for those things that allow us to
exercise control at the "hard wired" level, since these reference levels are
established by genetics and not subject to change.

Yep, I think I understand this bit (at a fairly basic level). What I was
trying to get at is how we actually switch from one reference to the other.
How do we go from say, doing an assignment to phoning a friend? What has
happened for us to switch from controlling for one perception to
controlling for another? Is this what's meant by "choosing"?

Cheers,

Tim

[From Bruce Gregory (980507.0641 EDT)]

Tim Carey (980507.1900)

Yep, I think I understand this bit (at a fairly basic level). What I was
trying to get at is how we actually switch from one reference to
the other.
How do we go from say, doing an assignment to phoning a friend? What has
happened for us to switch from controlling for one perception to
controlling for another? Is this what's meant by "choosing"?

I've come to the conclusion that attention or awareness goes to where the
error is greatest in an effort to reduce overall system error. This is pure
speculation, however. "Choosing" is not a technical term, as you know. It
covers so many situations that I'm not sure it can be pinned down to a PCT
model. In every case, however, we are reducing, or trying to reduce, error.
The question is, "Why this particular error?" Here again I think the answer
may be that it is the greatest error we are experiencing at the moment.
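Bruce's speculation reduces to a one-line rule: awareness tracks whichever control system currently has the largest absolute error. The system names and numbers below are invented for the example.

```python
# Toy version of the "awareness goes to the largest error" speculation.
# The named systems and their values are invented for the example.

systems = {
    "finish assignment": {"ref": 1.0, "perc": 0.9},    # small error: 0.1
    "phone a friend":    {"ref": 1.0, "perc": 0.2},    # large error: 0.8
    "room temperature":  {"ref": 21.0, "perc": 20.5},  # error: 0.5
}

def focus_of_awareness(systems):
    errors = {name: abs(s["ref"] - s["perc"]) for name, s in systems.items()}
    return max(errors, key=errors.get)

print(focus_of_awareness(systems))   # -> phone a friend
```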

Best Offer

[From Bill Powers (980507.0739 MDT)]

Tim Carey (980507.1900)--

What I was
trying to get at is how we actually switch from one reference to the other.
How do we go from say, doing an assignment to phoning a friend? What has
happened for us to switch from controlling for one perception to
controlling for another? Is this what's meant by "choosing"?

Reference signals are set by higher-level control systems. At any given
moment, we are probably adjusting many reference signals to keep errors
small in dozens of higher-level control systems. There's generally one main
control process of which we're most acutely aware, but that's not the only
thing we're controlling.

Bruce Gregory is probably right in saying that awareness goes to the
largest error; we are most acutely aware of the control processes that are
having the most trouble. But we can also decide to do something else if
there _is_ trouble. I've learned, for example, that when I'm having trouble
with a computer program, the best thing to do is to take a break and do
something else for a while. So my switching from one process to another is
carried out by a higher level system that has learned a problem-solving
strategy. Even though I have switched activities at a lower level, I am
still carrying out the same behavior -- controlling for solving a problem
-- at a higher level.

Best,

Bill P.

[From Bruce Gregory (980507.1040 EDT)]

Bill Powers (980507.0739 MDT)

Bruce Gregory is probably right in saying that awareness goes to the
largest error; we are most acutely aware of the control processes that are
having the most trouble. But we can also decide to do something else if
there _is_ trouble. I've learned, for example, that when I'm having trouble
with a computer program, the best thing to do is to take a break and do
something else for a while. So my switching from one process to another is
carried out by a higher level system that has learned a problem-solving
strategy. Even though I have switched activities at a lower level, I am
still carrying out the same behavior -- controlling for solving a problem
-- at a higher level.

The failure to make progress on the computer program triggers a branch in
the higher-level problem solving strategy. If this happens as a result of
gain adjustment, perhaps the overall system error is reduced. "Doing
something else" now has a higher gain and errors in the control of _this_
perception become relatively more important. As a result your awareness
shifts to it. In this story, your awareness is "just along for the
ride"--the HPCT system is doing all the work.

Best Offer

[From Tim Carey (980505.0645)]

[From Bruce Gregory (980507.0641 EDT)]

I've come to the conclusion that attention or awareness goes to where the
error is greatest in an effort to reduce overall system error. This is pure
speculation, however. "Choosing" is not a technical term, as you know. It
covers so many situations that I'm not sure it can be pinned down to a PCT
model. In every case, however, we are reducing, or trying to reduce, error.
The question is, "Why this particular error?" Here again I think the answer
may be that it is the greatest error we are experiencing at the moment.

That seems to make sense, thanks.

Tim

[From Tim Carey (980508.0645)]

[From Bill Powers (980507.0739 MDT)]

something else for a while. So my switching from one process to another is
carried out by a higher level system that has learned a problem-solving
strategy. Even though I have switched activities at a lower level, I am
still carrying out the same behavior -- controlling for solving a problem
-- at a higher level.

So in this instance, are you saying that this wouldn't really be choosing
but all of these actions are just part of what's involved for your
reference of say 'problem solving'?

I'm interested Bill in another type of choosing you mentioned before where
you essentially described it as a conflict. Can you explain that a little
bit more? This sounds different from what we've talked about in this post
so I'm wondering whether the higher levels explanation still applies in the
same way.

Cheers,

Tim

[From Tim Carey (980508.0645)]

[From Bruce Gregory (980507.1040 EDT)]

The failure to make progress on the computer program triggers a branch in
the higher-level problem solving strategy. If this happens as a result of
gain adjustment, perhaps the overall system error is reduced. "Doing
something else" now has a higher gain and errors in the control of _this_
perception become relatively more important. As a result your awareness
shifts to it. In this story, your awareness is "just along for the
ride"--the HPCT system is doing all the work.

Let's say you're involved in an engrossing conversation with someone but
you also realise that you're late for an appointment. Are you saying that
you will only "choose" to terminate your conversation and leave for your
appointment once the error for "being late" exceeds the error that is being
reduced by conversing with this person?

Cheers,

Tim

[From Bruce Gregory (980507.1715 EDT)]

Tim Carey (980508.0645)

Let's say you're involved in an engrossing conversation with someone but
you also realise that you're late for an appointment. Are you saying that
you will only "choose" to terminate your conversation and leave for your
appointment once the error for "being late" exceeds the error that is being
reduced by conversing with this person?

Yes, that's what I am speculating.

Best Offer

[From Tim Carey (980505.0940)]

[From Bruce Gregory (980507.1715 EDT)]

Yes, that's what I am speculating.

Thanks, Bruce.

Cheers,

Tim

[From Bill Powers (980508.0128 MDT)]

Bruce Gregory (980507.1715 EDT)

Tim Carey (980508.0645)

Let's say you're involved in an engrossing conversation with someone but
you also realise that you're late for an appointment. Are you saying that
you will only "choose" to terminate your conversation and leave for your
appointment once the error for "being late" exceeds the error that is being
reduced by conversing with this person?

Bruce:

Yes, that's what I am speculating.

The problem with this direct kind of model is that it tries to explain how
we go from one activity to another without any concept of a higher-level
system. What you're describing are two control systems (have conversation,
go to appointment) in conflict, with the outcome being determined only by
the unbalance of forces acting in opposing directions.

Consider an alternative kind of behavior. Suppose you encounter a friend
while you're on the way to your appointment. You say "I have to meet Joe in
five minutes; can we get together for a talk later on today -- how about
lunch?" You settle on a time, and then go on your way, getting to the
appointment on time and also having a nice conversation with your friend,
later, without any conflict.

This is _sequence control_. It's how grownups handle scheduling conflicts
-- or rather, how they prevent them so they don't happen in the first
place. It requires a level of control higher than that of the conflicting
activities, a level which can select first one reference condition, and
then another (instead of turning them both on at the same time, despite
their incompatibility).
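A minimal sketch of sequence control as just described, with invented step names (and an assumption, for brevity, that each activated goal is achieved before the next is selected): the higher system activates one lower-level reference at a time, so the incompatible goals are never both switched on.

```python
# Sketch of a sequence controller: its output is the ONE lower-level
# goal to activate now. Step names are invented for the illustration.

def next_goal(steps_done, sequence):
    """Return the first not-yet-completed step, or None when done."""
    for goal in sequence:
        if goal not in steps_done:
            return goal
    return None   # sequence perception matches its reference: done

sequence = ["arrange lunch with friend", "get to appointment", "have lunch"]
done, activated = set(), []
while (goal := next_goal(done, sequence)) is not None:
    activated.append(goal)   # lower system pursues this goal alone...
    done.add(goal)           # ...and (assumed, for the sketch) achieves it

print(activated == sequence)   # -> True: one goal at a time, no conflict
```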

There are practical problems with the "biggest error wins" model. Consider
what happens when the errors are almost equal. Now you have the drive to
keep talking being essentially equal to the drive to go on to your
appointment. What happens? One quite possible outcome is that you are
unable to do either one. Another is that you vacillate between the two
activities. You say "Gee, your eyes are pretty --- listen, I gotta go -- I
really like talking to you, it's great that you -- oh, God, I'm late, he's
going to kill me -- I do like your hair that way ..."

Clearly, you're making a lousy job of both activities. This is what it's
like not to have developed a good sequence control system, or as Ed Ford
would put it, the ability to prioritize. This effect of conflict may have
something to do with the evolutionary reason for adding the sequence
control level.
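The vacillation described above falls straight out of a toy "biggest error wins" arbiter. Every number here is invented, and the 0.3 step size just stands in for a finite speed of action; the point is only that with near-equal errors the winner flips back and forth and the shared variable stalls in the middle.

```python
# Toy "biggest error wins" arbiter for two systems sharing one variable
# (your location: 0.0 = in the conversation, 1.0 = at the appointment).
# All numbers are invented for the demonstration.

def biggest_error_wins(steps=50):
    location = 0.0
    winners = []
    for _ in range(steps):
        err_talk = abs(0.0 - location)    # "keep talking" system's error
        err_appt = abs(1.0 - location)    # "get to appointment" system's error
        winner = "talk" if err_talk >= err_appt else "appt"
        winners.append(winner)
        target = 0.0 if winner == "talk" else 1.0
        location += 0.3 * (target - location)   # the winner nudges the shared variable
    return winners, location

winners, location = biggest_error_wins()
switches = sum(a != b for a, b in zip(winners, winners[1:]))
# Once the errors are near-equal, control flips almost every step and the
# location hovers near the middle: neither goal is ever satisfied.
```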

Another problem with this model is that it's simply a stimulus-response
model. The stimulus to keep talking is the "engrossingness" of the
conversation, and the stimulus to go somewhere else is the continuously
advancing lateness of the time. If this were how we worked, we'd be in
conflict most of the time. To get around that, you'd have to propose some
hard-wired solution, such as hysteresis -- you flip from one activity to
the other only when the difference of stimulus strength exceeds some
threshold, and then you go all the way from one to the other. But that
patch on the model makes it impossible for the model to learn how to avoid
such conflicts by adding a higher level of control. You're committed to a
system concept that inevitably leads away from the idea of hierarchical
control as you try to take care of one problem after another by extending
and modifying this balance-of-forces idea.

In HPCT, we treat conflict as basically a mistake. In a properly-organized
system, a higher level of control never sets two effectively incompatible
reference signals for lower-level systems at the same time. Many reference
conditions can be set at the same time, but they must relate to
independently-variable perceptions, or they will create conflict, lead to
large error signals, and bring reorganization into play until the
simultaneously chosen reference signals are again sufficiently compatible
to turn off reorganization. Conflict is an abnormal condition, and ideally
is eliminated by its tendency to produce reorganization.
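The reorganization story in that paragraph can be caricatured in an e-coli-style loop: while the two simultaneously set references are incompatible (they demand different values of one shared variable), error persists and the pair keeps being randomly re-picked; reorganization stops once a compatible pair turns up. The 0.05 compatibility tolerance and the uniform re-picking are invented for the sketch.

```python
import random

# Purely illustrative: persistent conflict keeps reorganization running;
# it stops only when the chosen references are compatible again.

def reorganize(seed=1):
    rng = random.Random(seed)
    refs = (0.0, 1.0)                        # start with an incompatible pair
    attempts = 0
    while abs(refs[0] - refs[1]) > 0.05:     # persistent error -> keep reorganizing
        refs = (rng.random(), rng.random())  # random re-pick, e-coli style
        attempts += 1
    return refs, attempts

(r1, r2), n = reorganize()
# Reorganization has turned off: the two references are now compatible.
```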

That strikes me as quite a different model from the one you guys are
agreeing on.

Best,

Bill P.

[From Bruce Gregory (980508.0510 EDT)]

Bill Powers (980508.0128 MDT)

Bruce Gregory (980507.1715 EDT)
>
>Tim Carey (980508.0645)
>
>> Let's say you're involved in an engrossing conversation with someone but
>> you also realize that you're late for an appointment. Are you saying that
>> you will only "choose" to terminate your conversation and leave for your
>> appointment once the error for "being late" exceeds the error that is
>> being reduced by conversing with this person?
>
Bruce:
>Yes, that's what I am speculating.

The problem with this direct kind of model is that it tries to explain how
we go from one activity to another without any concept of a higher-level
system. What you're describing are two control systems (have conversation,
go to appointment) in conflict, with the outcome being determined only by
the unbalance of forces acting in opposing directions.

Then I've done a bad job of describing what I mean. In my view the
higher-level system is critical. Without it, I agree with your analysis.

Consider an alternative kind of behavior. Suppose you encounter a friend
while you're on the way to your appointment. You say "I have to meet Joe in
five minutes; can we get together for a talk later on today -- how about
lunch?" You settle on a time, and then go on your way, getting to the
appointment on time and also having a nice conversation with your friend,
later, without any conflict.

Exactly. The higher-level system tries to arrange to minimize error by
carrying out a sequence of actions.

This is _sequence control_. It's how grownups handle scheduling conflicts
-- or rather, how they prevent them so they don't happen in the first
place. It requires a level of control higher than that of the conflicting
activities, a level which can select first one reference condition, and
then another (instead of turning them both on at the same time, despite
their incompatibility).

Yes, I agree completely.

There are practical problems with the "biggest error wins" model. Consider
what happens when the errors are almost equal. Now you have the drive to
keep talking being essentially equal to the drive to go on to your
appointment. What happens? One quite possible outcome is that you are
unable to do either one. Another is that you vacillate between the two
activities. You say "Gee, your eyes are pretty --- listen, I gotta go -- I
really like talking to you, it's great that you -- oh, God, I'm late, he's
going to kill me -- I do like your hair that way ..."

This is the reason I agree that a higher-level system is vital.

In HPCT, we treat conflict as basically a mistake. In a properly-organized
system, a higher level of control never sets two effectively incompatible
reference signals for lower-level systems at the same time. Many reference
conditions can be set at the same time, but they must relate to
independently-variable perceptions, or they will create conflict, lead to
large error signals, and bring reorganization into play until the
simultaneously chosen reference signals are again sufficiently compatible
to turn off reorganization. Conflict is an abnormal condition, and ideally
is eliminated by its tendency to produce reorganization.

That strikes me as quite a different model from the one you guys are
agreeing on.

I don't think we disagree at all. What I was trying to describe was a
situation in which the sequence was something like "talk to this guy until
you have to leave for next appointment". If the conversation is not too
engrossing, this works fine. If the conversation _is_ engrossing the error
generated by not departing grows as you pass your self-appointed deadline.
Now a higher level system must decide what to do. Is the error associated
with missing the next appointment greater than the error associated with
breaking off the conversation? Imagination mode may be invoked. What I was
saying was that _ultimately_ you will "decide" in a way that minimizes total
system error.

Best Offer

[From Bruce Gregory (980508.0655 EDT)]

Bruce Gregory (980508.0510 EDT)

What I was saying was that _ultimately_ you will "decide" in a way that
minimizes total system error.

When you act "rationally", observers agree that such an action would
minimize _their_ total system error. When you act "irrationally" observers
cannot see how your actions would minimize _their_ total system error. But
your actions are always efforts to minimize _your_ total system error.

Best Offer

[From Bruce Gregory (980508.1245 EDT)]

Rick Marken (980508.0910)

Bill Powers (980508.0128 MDT) --

> In HPCT, we treat conflict as basically a mistake. In a
> properly-organized system, a higher level of control never
> sets two effectively incompatible reference signals for
> lower-level systems at the same time... That strikes me as
> quite a different model from the one you guys are agreeing on.

Bruce Gregory (980508.0510 EDT) --

> I don't think we disagree at all. What I was trying to describe was
> a situation in which the sequence was something like "talk to this
> guy until you have to leave for next appointment".

That's a contingency (program), not a sequence. A sequence is "talk
to this guy (for x minutes) then leave for appointment".

> If the conversation is not too engrossing, this works fine.

Yes. The contingency ends up being the same as the sequence in this
case. But if the conversation ends up being engrossing, you're
right there in the conflict; control of the contingency hasn't
helped solve the conflict. (How does a conversation become
engrossing, by the way? Is engrossing a perceptual aspect of a
conversation? Do conversations really _engross_? Do pretty girls
passing by really _call forth_?)

A conversation becomes engrossing when leaving it increases the error
associated with the perception you are controlling.

> If the conversation _is_ engrossing the error generated by not
> departing grows as you pass your self-appointed deadline.

This sounds like a hybrid S-R/control model to me. You seem to be
saying that the "engrossingness" of the conversation keeps you in
it (that's the S-R part). The continuation of the conversation
(due to the engrossingness) creates error in the system controlling
for meeting the deadline (that's the control part).

Yes indeed. I was too brief in my exposition. Thanks for clarifying it.

> Now a higher level system must decide what to do.

In Bill's model, the higher level system is always there, controlling
the perception of sequence ("conversation then meeting"). Higher level
systems are not "triggered" by error in lower level systems (this
is another S-R concept). The higher level system, if it exists, is
always there, making sure that the perception it wants (in this
case the perception of the sequence "conversation then meeting")
is occurring.

Again thanks for the clarification. This is what I intended to say.

> Is the error associated with missing the next appointment greater
> than the error associated with breaking off the conversation?

Now you are describing a higher level control system that is
controlling a perception of the relative size of the error in
two lower level systems. I don't think this sort of design will
work. Anyway, it is certainly a very different model than the
one Bill described.

O.k. Now we have a real difference. I'd appreciate understanding why this is
so. Isn't this the principle on which the reorganization system works?

> What I was saying was that _ultimately_ you will "decide" in a way
> that minimizes total system error.

But this says very little other than that negative feedback control
systems act to reduce error; total error in a collection of control
systems is always minimized. The problem is that, when there is
conflict in a collection of control systems, the minimum to which
total error can be reduced is _much_ larger than it is when there is
no conflict (this can easily be seen by introducing a conflict into
my spreadsheet model of a hierarchy of control systems).

Yes, and exactly how is this a problem?

I think what may be missing from your understanding of the PCT
model of conflict is that conflict cannot be solved at the level
of the conflict itself.

Quite the contrary, this is very clear to me.

Conflicts can only be solved by some
form of reorganization -- reorganization that changes the _way_
the _goals_ for the conflicting systems are set. If you already
have a higher level system in place (like a sequence control system)
then there will be no conflict (like the conversation vs meeting
conflict); you will just have a disturbance to a higher level
perception (like "having a satisfying social life") that can be
solved by selecting an appropriate reference for the perception of
a sequence (have conversation then attend meeting, or vice versa).

You always have a higher order system "in place", or so it seems to me. If I
understand you correctly, a conflict by definition cannot be solved except
by reorganization. This is not true, of course, for conflicts between
hierarchical control systems, since there one system can reset the reference
level that is producing the conflict in order to realize a higher level
goal. The Israelis and the Palestinians can both want a village to be in
their domains. One can concede it in exchange for a redrawing of the border.

I think the most important thing to understand about conflict is
that it is a _goals_ problem, not a _perception_ problem. Once you
get this, then you can see that there is no way for the system
itself to solve a conflict other than by figuring out a better
way to set the goals that are creating the conflict.

I do get this. I have never intended to say otherwise.

In the conversation/meeting conflict, the control systems inside you
that want to have a conversation and get to the meeting are setting
inconsistent goals for the system controlling where you are located
(in the conversation or at the meeting). Only a higher level system
can change the way these goals are set; this is why the sequence
control system is a solution to this conflict; in order to control the
sequence perception "conversation then meeting" the sequence control
system determines how the lower level conversation and meeting
control systems will set their goals (first one then the other).

Yes, this is perfectly clear to me.

Best Offer

[From Rick Marken (980508.0910)]

Bill Powers (980508.0128 MDT) --

In HPCT, we treat conflict as basically a mistake. In a
properly-organized system, a higher level of control never
sets two effectively incompatible reference signals for
lower-level systems at the same time... That strikes me as
quite a different model from the one you guys are agreeing on.

Bruce Gregory (980508.0510 EDT) --

I don't think we disagree at all. What I was trying to describe was
a situation in which the sequence was something like "talk to this
guy until you have to leave for next appointment".

That's a contingency (program), not a sequence. A sequence is "talk
to this guy (for x minutes) then leave for appointment".

If the conversation is not too engrossing, this works fine.

Yes. The contingency ends up being the same as the sequence in this
case. But if the conversation ends up being engrossing, you're
right there in the conflict; control of the contingency hasn't
helped solve the conflict. (How does a conversation become
engrossing, by the way? Is engrossing a perceptual aspect of a
conversation? Do conversations really _engross_? Do pretty girls
passing by really _call forth_?)

If the conversation _is_ engrossing the error generated by not
departing grows as you pass your self-appointed deadline.

This sounds like a hybrid S-R/control model to me. You seem to be
saying that the "engrossingness" of the conversation keeps you in
it (that's the S-R part). The continuation of the conversation
(due to the engrossingness) creates error in the system controlling
for meeting the deadline (that's the control part).

Now a higher level system must decide what to do.

In Bill's model, the higher level system is always there, controlling
the perception of sequence ("conversation then meeting"). Higher level
systems are not "triggered" by error in lower level systems (this
is another S-R concept). The higher level system, if it exists, is
always there, making sure that the perception it wants (in this
case the perception of the sequence "conversation then meeting")
is occurring.

Is the error associated with missing the next appointment greater
than the error associated with breaking off the conversation?

Now you are describing a higher level control system that is
controlling a perception of the relative size of the error in
two lower level systems. I don't think this sort of design will
work. Anyway, it is certainly a very different model than the
one Bill described.

What I was saying was that _ultimately_ you will "decide" in a way
that minimizes total system error.

But this says very little other than that negative feedback control
systems act to reduce error; total error in a collection of control
systems is always minimized. The problem is that, when there is
conflict in a collection of control systems, the minimum to which
total error can be reduced is _much_ larger than it is when there is
no conflict (this can easily be seen by introducing a conflict into
my spreadsheet model of a hierarchy of control systems).
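The spreadsheet itself isn't reproduced here, but the effect described in that paragraph is easy to show with two toy integrating control systems acting on one shared environmental variable. The gains, references, and additive environment are all invented for the sketch.

```python
# Two integrating control systems acting on the same environmental
# variable. With compatible references the total error goes to ~0; with
# conflicting references the outputs cancel each other and the minimum
# total error stays large. All numbers are invented for the sketch.

def total_error(ref1, ref2, steps=5000, dt=0.01):
    out1 = out2 = 0.0
    for _ in range(steps):
        v = out1 + out2                  # shared environmental variable
        out1 += 10.0 * (ref1 - v) * dt   # each system integrates its own error
        out2 += 10.0 * (ref2 - v) * dt
    v = out1 + out2
    return abs(ref1 - v) + abs(ref2 - v)

no_conflict = total_error(5.0, 5.0)    # both errors removed
conflict = total_error(5.0, -5.0)      # v sits at 0, far from both references
```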

I think what may be missing from your understanding of the PCT
model of conflict is that conflict cannot be solved at the level
of the conflict itself. Conflicts can only be solved by some
form of reorganization -- reorganization that changes the _way_
the _goals_ for the conflicting systems are set. If you already
have a higher level system in place (like a sequence control system)
then there will be no conflict (like the conversation vs meeting
conflict); you will just have a disturbance to a higher level
perception (like "having a satisfying social life") that can be
solved by selecting an appropriate reference for the perception of
a sequence (have conversation then attend meeting, or vice versa).

I think the most important thing to understand about conflict is
that it is a _goals_ problem, not a _perception_ problem. Once you
get this, then you can see that there is no way for the system
itself to solve a conflict other than by figuring out a better
way to set the goals that are creating the conflict.

In the conversation/meeting conflict, the control systems inside you
that want to have a conversation and get to the meeting are setting
inconsistent goals for the system controlling where you are located
(in the conversation or at the meeting). Only a higher level system
can change the way these goals are set; this is why the sequence
control system is a solution to this conflict; in order to control the
sequence perception "conversation then meeting" the sequence control
system determines how the lower level conversation and meeting
control systems will set their goals (first one then the other).

Best

Rick


[From Bill Powers (980508.2053 MDT)]

Bruce Gregory (980508.0510 EDT)--

The problem with this direct kind of model is that it tries to explain how
we go from one activity to another without any concept of a higher-level
system. What you're describing are two control systems (have conversation,
go to appointment) in conflict, with the outcome being determined only by
the unbalance of forces acting in opposing directions.

Then I've done a bad job of describing what I mean. In my view the
higher-level system is critical. Without it, I agree with your analysis.

There must be something you're assuming that needs to be looked at more
closely. Are you assuming, for example, that the higher-order control
system can perceive the _error signals_ in the lower systems? If so, I
recommend drawing a diagram of how this is supposed to work. This would not
be the standard HPCT diagram, but maybe, if you can work out what such a
model would do, you could make a case for it.

Best,

Bill P.