"Switching"?

Bruce Gregory (991213.1651 EST)

> Bill Powers (991213.1400 MDT)

> I'm perfectly happy to acknowledge that people experience such things and
> describe them as you do. But that's not an explanation or a model of what
> is happening. I don't know how an intention can exist but not have any
> effect, and then perhaps suddenly start having an effect. Nothing you said
> makes that any easier to understand.

The fact that you can't model this phenomenon confirms what Marc Abrams
has been saying for a long time. The HPCT model provides no way to
understand how we switch from controlling one variable to controlling
another. This might be a mechanism worth thinking about, since without
such a mechanism HPCT will never be of much interest to those dealing
with behavior any more complicated than tracking tasks.

Hmm. Why is "switching" necessary? It seems to me that we are controlling
for lots of things simultaneously. It also seems to me that, according to
PCT, actions serve to counter disturbances. No action is required if
reference level and perception are aligned. If several controlled
perceptions deviate from their reference levels it seems to me that we'd be
engaging in lots of actions although at any point in time (a fiction, by
the way,) we might be focusing on one such perception/reference level pair.

Help me out here PCT wizards? Is "switching" necessary?

--

Fred Nickols
The Distance Consulting Company
"Assistance at A Distance"
http://home.att.net/~nickols/distance.htm
nickols@worldnet.att.net
(609) 490-0095

from [ Marc Abrams (991214.2303) ]

Fred Nickols:

Hmm. Why is "switching" necessary? It seems to me that we are controlling
for lots of things simultaneously. It also seems to me that, according to
PCT, actions serve to counter disturbances. No action is required if
reference level and perception are aligned. If several controlled
perceptions deviate from their reference levels it seems to me that we'd be
engaging in lots of actions although at any point in time (a fiction, by
the way,) we might be focusing on one such perception/reference level pair.

Help me out here PCT wizards? Is "switching" necessary?

"Switching" from my perspective is as you describe it. The question is _why_
does this "focus" change. We control many things at the same time some we
are aware of others we are not. Bill's suggestion is a useful one . it would
of course have to change the model because you would have to include in my
opinion, memory, and awareness/attention. Phenomenon that is currently not
in the model.

I fully intend on giving this a shot. I am still doing my due diligence by
going through the archives. All this stuff has been discussed before, and in
some depth, with some interesting proposals that have largely been ignored
or left hanging.

This is a real challenge. I'm not sure I am up for it, but it should prove
to be interesting and educational.

Marc

[From Bill Powers (991215.0255 MDT)]

Bruce Gregory (991213.1651 EST)--

Hmm. Why is "switching" necessary? It seems to me that we are controlling
for lots of things simultaneously. It also seems to me that, according to
PCT, actions serve to counter disturbances. No action is required if
reference level and perception are aligned. If several controlled
perceptions deviate from their reference levels it seems to me that we'd be
engaging in lots of actions although at any point in time (a fiction, by
the way,) we might be focusing on one such perception/reference level pair.

Switching has to happen when incompatible control processes are involved.
You can't go to the movies and wash the car at the same time. We learn not
only to do things in sequence that can't be done simultaneously, but to
stop wanting things when it's not practical to want them. We turn this
reference signal on, and that one off. At least that's what I think happens.

Best,

Bill P.

[From Bruce Gregory (991215.0647 EST)]

Hmm. Why is "switching" necessary? It seems to me that we are controlling
for lots of things simultaneously. It also seems to me that, according to
PCT, actions serve to counter disturbances. No action is required if
reference level and perception are aligned. If several controlled
perceptions deviate from their reference levels it seems to me that we'd be
engaging in lots of actions although at any point in time (a fiction, by
the way,) we might be focusing on one such perception/reference level pair.

Help me out here PCT wizards? Is "switching" necessary?

As Bill points out, you can avoid the switching problem by assuming there is
always a higher order control system able to set references for lower order
control systems (you deviate from your trip home from the office to pick up
a container (I almost said bottle) of milk). The discussion about carrying
out sequences of actions a few weeks ago was about the "switching problem".
Do you need a fully developed plan before you set out? If not, how do you
stop doing one task and begin another?

Bruce Gregory

[From Bill Powers (991215.1237 MDT)]

Bruce Gregory (991215.0647 EST)--

Do you need a fully developed plan before you set out? If not, how do you
stop doing one task and begin another?

A logical or program-level control system can do this easily.

"If grocery store is not open yet, turn off reference signal for buying
groceries and turn on reference signal for picking up dry-cleaning."
Remember that reference signals refer to perceptual consequences, not
actions: for the preceding, the reference signals would really be
"groceries bought" and "dry-cleaning picked up."

Plans can be considered as sequences of perceptions to be achieved in a
certain order. If they include choice-points they become programs, and the
path through the program then depends on what results occur during
execution of the program. The above example is a snippet of a program
rather than a sequence, since the reference condition to be set next
depends on what happens when you get to the grocery store.
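
To make the snippet concrete, here is a minimal sketch in Python of what
such a program-level step might look like. The function and variable names
("store_open", "groceries_bought") are invented for illustration, and
representing "on/off" as 1.0/0.0 is my assumption, not anything HPCT
specifies:

  # Minimal sketch of a program-level system that switches lower-level
  # reference signals on the basis of a perceived condition. All names
  # and the 1.0/0.0 on/off encoding are illustrative assumptions.

  def program_level_step(perceptions, references):
      """One pass through the program: set lower-level reference
      signals according to what is currently perceived."""
      if not perceptions["store_open"]:
          # References specify perceptual consequences, not actions:
          # "groceries bought", "dry-cleaning picked up".
          references["groceries_bought"] = 0.0        # goal turned off
          references["dry_cleaning_picked_up"] = 1.0  # goal turned on
      else:
          references["groceries_bought"] = 1.0
          references["dry_cleaning_picked_up"] = 0.0
      return references

  # The store turns out to be closed:
  refs = {"groceries_bought": 1.0, "dry_cleaning_picked_up": 0.0}
  refs = program_level_step({"store_open": False}, refs)
  print(refs)  # the dry-cleaning goal is now the active reference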

Is that more or less what you're asking about?

Best,

Bill P.

[From Bruce Gregory (991215.601 EST)]

Bill Powers (991215.1237 MDT)

A logical or program-level control system can do this easily.

"If grocery store is not open yet, turn off reference signal for buying
groceries and turn on reference signal for picking up dry-cleaning."
Remember that reference signals refer to perceptual consequences, not
actions: for the preceding, the reference signals would really be
"groceries bought" and "dry-cleaning picked up."

Yes, I agree. I think the point Bruce A. was raising some time ago was
whether such a program is "really" in place. Certainly such a program
_might_ be in place. But at times it seems as though we (1) are going to
the grocery store and (2) suddenly remember to pick up the dry-cleaning.
This awareness of an intention outside of consciousness (until we
remember it) is often followed by a change in the perceptual variable
being controlled. Awareness may play no causative role--rather it might
follow the shift instituted by a program-level system. We can think of a
thermostat as running a program, but as Rick pointed out this may simply
mean that we are "looking" at the world "from" the program level. In the
thermostat's case we know that the simplest adequate model has no such
programmatic level. It's not clear to me how we might test the
conjecture that a program-level perception is being controlled in the
example you give.

Plans can be considered as sequences of perceptions to be achieved in a
certain order. If they include choice-points they become programs, and the
path through the program then depends on what results occur during
execution of the program. The above example is a snippet of a program
rather than a sequence, since the reference condition to be set next
depends on what happens when you get to the grocery store.

In some cases we seem to be presented with choice points that were not
part of the program-perception we are controlling. I've used the example
of discovering a traffic jam and changing your route. This might be
modeled in terms of a higher level program, but is this the only
approach? In other words, can we find a simpler model that explains an
"unexpected change of plans"?

Bruce Gregory

[From Bill Powers (991215.1536 MDT)]

Bruce Gregory (991215.601 EST)--

This might be
modeled in terms of a higher level program, but is this the only
approach? In other words, can we find a simpler model that explains an
"unexpected change of plans"?

Probably more than one. Why not give it a try? The first test is whether
the model does anything sensible at all when you run it according to its
own rules. The second is whether it behaves like any real person.

Best,

Bill P.

[From Fred Nickols (991215.1806 EST)] --

Hmm, again. I screwed up in my earlier posting and neglected to include my
own name and date-time stamp as the starting point. Consequently, the
paragraph below winds up being attached to Bruce Gregory. So, let me
reattach it to me.

> Hmm. Why is "switching" necessary? It seems to me that we are controlling
> for lots of things simultaneously. It also seems to me that, according to
> PCT, actions serve to counter disturbances. No action is required if
> reference level and perception are aligned. If several controlled
> perceptions deviate from their reference levels it seems to me that we'd be
> engaging in lots of actions although at any point in time (a fiction, by
> the way,) we might be focusing on one such perception/reference level pair.
>
> Help me out here PCT wizards? Is "switching" necessary?

Bill P, Marc Abrams and Bruce G subsequently responded and I'll pick up
with Bruce G's response:

As Bill points out, you can avoid the switching problem by assuming there is
always a higher order control system able to set references for lower order
control systems (you deviate from your trip home from the office to pick up
a container (I almost said bottle) of milk). The discussion about carrying
out sequences of actions a few weeks ago was about the "switching problem".
Do you need a fully developed plan before you set out? If not, how do you
stop doing one task and begin another?

Regarding the last two questions: No, I don't think you need a fully
developed plan. I do think you need a rough, loosely formed set of
intentions, something comparable to a general "To Do" list. I can and
often do have the intention of getting the car washed but I haven't set a
particular date or time. That leaves me free to exploit opportunity; for
example, some free time that unexpectedly comes my way (or even driving by
a car wash with no waiting line). I stop doing one task and begin another
as the result of many factors, some of which have just been mentioned
(e.g., serendipitous opportunity). I also shift my focus as a consequence
of shifting priorities and so on (e.g., the car gets really dirty and I
decide to go get it washed right now or ASAP).

Now, all this said, I still don't see why "switching" is of interest beyond
the common-sense observation that we all do it and for some pretty obvious
reasons. I don't know anyone of moderate intelligence who believes that
observable behavior/actions on the part of people are the result of S-R
pairings (i.e., behavior under the control of stimuli -- unless you want to
classify perceptions as stimuli) nor do I know anyone of moderate
intelligence who believes that behavior is simply a script or plan being
acted out (although I am well aware that both those schemes have been used
as a way of explaining behavior).

How do I stop doing one task and begin another? Actually, I don't know, I
just know that I do that. To ask me to explain that requires of me that I
create an explanation (an activity with its own reference levels I
suspect). I sure don't know enough about PCT to explain it in PCT terms
(or even to know if it can be explained in PCT terms -- except by way of
the higher order system alluded to above). In plain language, I stop doing
one thing and start doing another because the opportunity presents itself,
the conditions are right, and my own sense of priorities (importance and
urgency) call for it. Does that mean that a controlled perception has
reached some critical stage of deviation from its reference level? I don't
know. But it seems to me that the size of the deviation between reference
level and controlled perception, coupled with the standing of that
reference level/perception in the hierarchy probably has something to do
with it. I'm not about to stop breathing to get the car washed.

--

Fred Nickols
The Distance Consulting Company
"Assistance at A Distance"
http://home.att.net/~nickols/distance.htm
nickols@worldnet.att.net
(609) 490-0095

[From Bruce Gregory (991215.2030 EST)]

Fred Nickols (991215.1806 EST)

Now, all this said, I still don't see why "switching" is of interest beyond
the common-sense observation that we all do it and for some pretty obvious
reasons. I don't know anyone of moderate intelligence who believes that
observable behavior/actions on the part of people are the result of S-R
pairings (i.e., behavior under the control of stimuli -- unless you want to
classify perceptions as stimuli) nor do I know anyone of moderate
intelligence who believes that behavior is simply a script or plan being
acted out (although I am well aware that both those schemes have been used
as a way of explaining behavior).

It's of interest if you are trying to model behavior using CT, because we
don't yet have a model for it except in the context of control-of-program
perception.

How do I stop doing one task and begin another? Actually, I don't know, I
just know that I do that.

Yup. That's the problem.

I'm not about to stop breathing to get the car washed.

I think that's very prudent.

Bruce Gregory

from [ Marc Abrams (991215.2302) ]

[From Fred Nickols (991215.1806 EST)] --

Regarding the last two questions: No, I don't think you need a fully
developed plan. I do think you need a rough, loosely formed set of
intentions, something comparable to a general "To Do" list. I can and
often do have the intention of getting the car washed but I haven't set a
particular date or time. That leaves me free to exploit opportunity; for
example, some free time that unexpectedly comes my way (or even driving by
a car wash with no waiting line). I stop doing one task and begin another
as the result of many factors, some of which have just been mentioned
(e.g., serendipitous opportunity). I also shift my focus as a consequence
of shifting priorities and so on (e.g., the car gets really dirty and I
decide to go get it washed right now or ASAP).

Now, all this said, I still don't see why "switching" is of interest beyond
the common-sense observation that we all do it and for some pretty obvious
reasons.

I'm not certain that things are so obvious. At a very high level they may
"seem" to be, but as you go down the hierarchy more variables are involved
in more high-level control processes. You never do just one thing. What is
of interest to me are the "effects" that control has when "changes" are
made. In my opinion "change" involves the following issues:

What we are talking about here is volition. What is it?
How much of what you control is based on memory/imagination rather than
"real" data?
What role does attention/awareness play in what we control and how we
reorganize?

None of the above are currently expressed in the model. It's difficult to
talk about "real" behavior without including these phenomena.

I don't know anyone of moderate intelligence who believes that
observable behavior/actions on the part of people are the result of S-R
pairings (i.e., behavior under the control of stimuli -- unless you want to
classify perceptions as stimuli) nor do I know anyone of moderate
intelligence who believes that behavior is simply a script or plan being
acted out (although I am well aware that both those schemes have been used
as a way of explaining behavior).

Neither do I. It's usually some combination, but the end result usually
seems to be behavior controlling perception, or perception (not the PCT
kind; the kind that cognitivists like to define as our personal "filtering"
systems for how we see the world, which is value laden) causing behavior.

How do I stop doing one task and begin another? Actually, I don't know, I
just know that I do that.

The fact is, Fred, you never stop doing _all_ the things you control for.
You walk to the store, walk to the hair salon, and walk to the bank. You
might be controlling for _some_ different things at each location, but not
_all_ of them. Walking remains a consistent action throughout.

To ask me to explain that requires of me that I
create an explanation (an activity with its own reference levels I
suspect). I sure don't know enough about PCT to explain it in PCT terms
(or even to know if it can be explained in PCT terms -- except by way of
the higher order system alluded to above).

You're not alone. No one knows how to model this yet. That's the point of trying.

In plain language, I stop doing
one thing and start doing another because the opportunity presents itself,
the conditions are right, and my own sense of priorities (importance and
urgency) call for it.

See above.

Does that mean that a controlled perception has reached some critical stage
of deviation from its reference level?

Maybe, maybe not. Another reason why we need to model it.

I don't know. But it seems to me that the size of the deviation between
reference level and controlled perception, coupled with the standing of
that reference level/perception in the hierarchy probably has something to
do with it. I'm not about to stop breathing to get the car washed.

Fred, we'd all like the answers to these questions. They are not obvious.

Marc

[from Jeff Vancouver 991216.1005 EST]

Bill Powers (991215.1237 MDT)

A logical or program-level control system can do this easily.

"If grocery store is not open yet, turn off reference signal for buying
groceries and turn on reference signal for picking up dry-cleaning."
Remember that reference signals refer to perceptual consequences, not
actions: for the preceding, the reference signals would really be
"groceries bought" and "dry-cleaning picked up."

Plans can be considered as sequences of perceptions to be achieved in a
certain order. If they include choice-points they become programs, and the
path through the program then depends on what results occur during
execution of the program. The above example is a snippet of a program
rather than a sequence, since the reference condition to be set next
depends on what happens when you get to the grocery store.

I am not sure how this would work. Two issues: how can a control unit's
reference condition not be "set"? It seems that the answer is that the
sequencing is accomplished not by setting the reference signal, but by
sending a gain other than zero to the control unit. The second question is
how do these program level control units emerge? When the morning comes,
and I think about all the things I have to do, and the order in which I want
to do them, all of this has to occur chemically (neurologically) as well as
cognitively. Why should we assume the cognitive constructions become a
configuration of control units? Having said this, I think my recently
distributed paper shows that they may well be. Nonetheless, it seems that
the leap from physical structures that are necessary for motor control to
physical, but temporary, structures for abstract, higher-level control is a
large one. I myself forgive psychologists for their hesitance in making
that leap (sorry, had to get that editorial in).

Jeff

[from Bruce Nevin (991216.1111 EST)]

Jeff Vancouver 991216.1005 EST--

how can a control unit's
reference condition not be "set"? It seems that the answer is that the
sequencing is accomplished not by setting the reference signal, but by
sending a gain other than zero to the control unit.

How is setting a relative priority to be modelled? High priority is often
but not necessarily co-incident with high gain, and gain=0 is a convenient
way to implement a priority of "never," but gain determines how much error
is tolerated, not which branch of a conflict wins in a process (yet to be
modelled) that we call "choosing."

  Bruce Nevin

[From Bill Powers (991216.1008 MDT)]

How is setting a relative priority to be modelled? High priority is often
but not necessarily co-incident with high gain, and gain=0 is a convenient
way to implement a priority of "never," but gain determines how much error
is tolerated, not which branch of a conflict wins in a process (yet to be
modelled) that we call "choosing."

See Kent McClelland's simulations of conflict. Error is not "tolerated."
Depending on gain, it produces a certain amount of output signal. If there
is error in a conflicting system, it, too, produces a certain amount of
output signal. The balance point between these two systems, in terms of the
shared perception, is nearest to the reference signal of the system with
the highest gain. There are interesting predictions to be made concerning
the effects of disturbances on the opposing pair of systems, especially
when one or both systems have limits on the amount of output they can
produce -- similar limits, or different limits.
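
For readers who want to see the balance point emerge, here is a small
numerical sketch (mine, not Kent McClelland's actual simulations), assuming
two proportional systems whose outputs sum into one shared perception and
build up through leaky integration; all parameters are illustrative:

  # Two control systems in conflict over a shared perception p.
  # Assumes p = o1 + o2 + disturbance; each output leakily integrates
  # toward gain * error. Illustrative parameters only.

  def run_conflict(r1, g1, r2, g2, disturbance=0.0, steps=5000, dt=0.005):
      o1 = o2 = 0.0
      for _ in range(steps):
          p = o1 + o2 + disturbance
          o1 += dt * (g1 * (r1 - p) - o1)
          o2 += dt * (g2 * (r2 - p) - o2)
      return o1 + o2 + disturbance

  # System 1 wants p = 10, system 2 wants p = 0.
  print(run_conflict(r1=10, g1=50, r2=0, g2=50))   # ~5: equal gains, midpoint
  print(run_conflict(r1=10, g1=200, r2=0, g2=50))  # ~8: near the high-gain reference

In this sketch the steady state works out to
p = (g1*r1 + g2*r2 + d) / (1 + g1 + g2), which for high gains is a
gain-weighted average: nearest the reference of the higher-gain system, as
described above.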

"Relative priority" is an ambiguous term. Does it mean priority in time (do
this first, then that) or priority in terms of importance (raise the gain
in the system controlling this variable and lower it for the system
simultaneously controlling that variable)? If you establish a priority in
time, this amounts to controlling the sequence of perceptions: first bring
this perception to a reference state, then bring that one to its reference
state. Or bring the first perception to a _different_ reference state. If
you mean a priority concerning simultaneous control of two variables, it
means turning down the gain of one system (and turning up the gain of the
other) whenever the other system experiences an error. In the latter case,
both control systems are always controlling at the same time, but with
variable gain. In all these cases, we're talking about the actions of a
system of a higher level.
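
A sketch of that second sense of priority (again an invented illustration,
not an established PCT mechanism): a higher-level process redistributes a
fixed amount of gain toward whichever system currently has the larger
error, so both systems keep controlling at all times:

  # "Priority as importance": both systems always control, but a
  # higher-level process shifts gain toward the system with the larger
  # current error. The numbers and the fixed gain budget are assumptions.

  def reallocate_gain(error_a, error_b, total_gain=100.0, floor=5.0):
      """Split a fixed total gain between two systems in proportion
      to their current absolute errors, never dropping below a floor."""
      ea, eb = abs(error_a), abs(error_b)
      share_a = ea / (ea + eb) if (ea + eb) > 0 else 0.5
      gain_a = max(floor, share_a * total_gain)
      gain_b = max(floor, total_gain - gain_a)
      return gain_a, gain_b

  print(reallocate_gain(0.2, 0.2))  # (50.0, 50.0): equal errors, equal gains
  print(reallocate_gain(5.0, 0.5))  # system A gets most of the gain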

The problem here is mainly in describing exactly what you want modeled.
Once that is clearly established, the model itself is not hard to sketch in.

As to "choosing", again the model depends on what you mean by that term. If
you mean the experience of having to choose at a time before the choice has
been made, conflicting control systems can form an appropriate model. If
you mean the process by which one alternative is eliminated and the other
retained (resolution of the conflict) there are many ways in which this can
be done, and thus many models to choose from. One way is to set up a third
control system specifically to conflict with one side of the choice, so as
to remove that control system from consideration. I call this the "will
power" solution. Another is to lower the gain on one side and/or raise it
on the other, so as to move the net perceptual signal close to one of the
reference signals. You might call this "lowering the importance" of one
goal relative to the other. And a third is to stop sending a reference
signal to one of the conflicted systems, so it ceases to try to act. These
three approaches, of course, are the systematic result of higher-level
control processes aimed specifically at eliminating the conflict. A fourth
method, which is not really a method but is simply a consequence of being
in severe conflict, is to reorganize. There is no way to predict the
results of reorganization, except to say that the total error will probably
(but not always) be reduced before permanent damage has occurred.

The informal terms in which we often describe behavior seldom describe
anything precisely enough to call for one and only one model.

Best,

Bill P.

[From Bill Powers (991216.1033 MDT)]

Jeff Vancouver 991216.1005 EST--

I am not sure how this would work. Two issues: how can a control unit's
reference condition not be "set"? It seems that the answer is that the
sequencing is accomplished not by setting the reference signal, but by
sending a gain other than zero to the control unit.

This has been discussed a number of times on CSGnet. All neural control
systems are necessarily one-way systems; no neural signal can go negative.
If the reference signal is excitatory and the perceptual feedback signal is
inhibitory, then setting the magnitude of the reference signal to zero is
equivalent to turning the control system off. The reason is that with no
excitatory reference signal present, the inhibitory perceptual signal can
never produce any output from the comparator neuron no matter how large the
perceptual signal gets.

Of course some inhibitory connections are multiplicative rather than
additive. In that case, an inhibitory signal sent into an output neuron
from a higher level of control will make that output neuron less sensitive
to its input signal, the error signal, with fewer impulses per second leaving
the output neuron for a given rate of impulses per second entering it. This
is how loop gain can be varied (here, reduced) by a higher system.
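
A toy rendering of these two mechanisms, assuming simple rate-coded signals
clipped at zero (the clipping stands in for the fact that neural signals
cannot go negative); the numbers are made up:

  # Toy one-way neural control elements. Firing rates cannot go
  # negative, so every signal is clipped at zero.

  def comparator(reference, perception):
      """Excitatory reference, inhibitory perception. With reference = 0
      the output can never rise above zero no matter how large the
      perceptual signal gets -- the system is effectively switched off."""
      return max(0.0, reference - perception)

  def output_neuron(error, inhibition=0.0):
      """Multiplicative inhibition from a higher level scales down the
      neuron's sensitivity to its error input, reducing loop gain."""
      return max(0.0, 1.0 - inhibition) * max(0.0, error)

  print(comparator(0.0, 100.0))              # 0.0: zero reference = system off
  print(comparator(10.0, 4.0))               # 6.0: ordinary error signal
  print(output_neuron(6.0))                  # 6.0: full loop gain
  print(output_neuron(6.0, inhibition=0.5))  # 3.0: gain halved from above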

The second question is
how do these program level control units emerge? When the morning comes,
and I think about all the things I have to do, and the order in which I want
to do them, all of this has to occur chemically (neurologically) as well as
cognitively. Why should we assume the cognitive constructions become a
configuration of control units?

The cognitive constructions, in PCT, ARE configurations of control units.
The hypothesis is that there is a layer in the brain which contains neurons
suitably configured to serve as elements of logical computing processes.
Organizing them to perform _specific_ logical processes takes much longer
than a day -- it might take years. What you think in the morning is the
result of logical operations you became able to carry out years ago; all
that is different from one morning to the next are the environmental inputs
being converted to perceptions of logical variables, and perhaps your goals
regarding those variables being varied by higher levels still.

... it seems that
the leap from physical structures that are necessary for motor control to
physical, but temporary, structures for abstract, higher-level control is a
large one.

No larger than it is in going from what seem abstract computations to the
specific hardware of a computer capable of carrying them out. It is not the
physical organization of a computer that is temporary, but only the
information that happens to be available to it. "Abstract" does not mean
"non-material," at least as I think of it. The structures in which higher
levels in PCT exist are no less physical than structures in which lower
level processes are carried out.

Best,

Bill P.

[From Bruce Gregory (991216.1300 EST)]

Bruce Nevin (991216.1111 EST)

How is setting a relative priority to be modeled? High priority is often
but not necessarily co-incident with high gain, and gain=0 is a convenient
way to implement a priority of "never," but gain determines how much error
is tolerated, not which branch of a conflict wins in a process (yet to be
modeled) that we call "choosing."

Could you provide a few examples where the priority is high and the gain
is low?

Bruce Gregory

[From Bruce Gregory (991216.1304 EST)]

Bill Powers (991216.1008 MDT)

Very helpful post. Thanks.

Bruce Gregory

[from Jeff Vancouver 991216.1410 EST]

Bill Powers (991216.1033 MDT)

Interesting. I have clearly missed this in the conversations on CSGnet. It
suggests more constraints on my modeling.

Thanks,

Jeff

[From Bill Curry (991216.1530 EST)]

Bruce Nevin (991216.1111 EST)

>how can a control unit's
>reference condition not be "set"? It seems that the answer is that the
>sequencing is accomplished not by setting the reference signal, but by
>sending a gain other than zero to the control unit.

How is setting a relative priority to be modelled? High priority is often
but not necessarily co-incident with high gain,

Seems to me we often categorize a task as "high priority" while meaning "this
_should be_ high priority". Example--I know I have a crucial report deadline
tomorrow, but I procrastinate and pursue noncritical tasks. I smell a
conflicting control system hiding out somewhere behind the bushes that is
lowering the gain on the report goal. This doesn't appear to be a switching
issue--the pseudo "high priority" control system is not "off"--it's still
controlling, but at such a low gain that it's ineffectual. While my dilatory
actions may not indicate what perceptual variables I am controlling, they do
reveal that the "high priority" categorization of the report goal is a
wishful fantasy. I agree, Bruce, "setting a priority" does not function as a
surrogate for gain.

I therefore suggest that "setting priorities" as a subjective, volitionally
based concept has little utility in PCT modeling. On the other hand, a
concept of "perceptual priority" may be helpful.

Looking again at my example above, I ask "How do the control systems
generating my ostensibly frivolous actions overpower the more pressing report
goal and its related control systems?"

Consider the abilities of ascetic Yogis to "turn down the gain" on their
sensory channels as they sustain all kinds of traumatic bodily insults. This
is of course a volitional, acquired skill--one that is honed by focusing or
directing the perceptual priority of the mind to a particular, desired control
system(s) [e.g., meditating on the thought that "Pain is merely a sensation,
nothing more" or simply emptying the mind of all thoughts].

Importantly, this appears to be a process of relative displacement: Giving
perceptual priority to the desired control system progressively displaces the
competing perceptual systems. In other words, raising the gain on the
"desired" system has the secondary and collateral effect of reducing the gain
on the competing systems. This implies a separate control mechanism
regulating on the basis of relative gains.

What the Yogis learn to do with intentional practice, i.e., focus their
perceptual priority to dissociate from painful perceptions, is analogous to
the nonvolitional phenomenon occurring in my example. The relatively higher
gain being given to the dilatory control systems (perhaps the CV being "fear
of producing a poor report") has displaced and lowered the gain of the report
generation control system.

It also makes sense that these Yogic skills are only an extension and
refinement of a natural brain mechanism that gives perceptual priority to
the highest-gain system of the moment. We live in such a forward-looking,
time-dependent and time-measured world that it's very difficult to imagine
the proto-human realm where the time was always just "NOW" (BTW, it still is ;-).
Besides needing an intrinsic reference system to sustain its life, that
organism needed a parallel intrinsic ability to determine perceptual priority
from moment to moment. There's great value, indeed, in being able to
determine which perceptual stream is most important when a panther sinks his
fangs into your butt while you are skinning a fat armadillo for supper. [A
side observation: Given our swift evolution from a world where NOW-based
perceptions and disturbances were totally controlling, it's no wonder that
procrastination about future events is such a natural and comfortable pastime;-)].

So, how is it wired up? Dunno. There are a lot better wire benders than me
around here.

and gain=0 is a convenient
way to implement a priority of "never,"

or somewhere between "extremely urgent" and "not now".

but gain determines how much error
is tolerated, not which branch of a conflict wins in a process (yet to be
modelled) that we call "choosing."

Sure of that? Why do choices _have_ to be depicted as switched dichotomous
branches? Why not have control flow based on relative gain? This leaves all
systems controlling but still gives one dominance over another. After all,
most choices aren't black or white, on or off. Presumably a rapidly
escalating error (fangs in butt) will ratchet up the gain in that system
commensurately--couple that with a primitive, intrinsic, relative gain based,
perceptual priority system and you have a built-in automatic choice maker.
Now go wire it up!
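
Taking the invitation literally, here is one guess at the wiring, with
every name and number made up for illustration: each system's gain ratchets
up with its own escalating error and decays back when the error subsides,
and whichever system holds the highest momentary gain gets "perceptual
priority":

  # One guess at a relative-gain-based choice maker. Each system's gain
  # ratchets up with its own error and decays otherwise; the highest
  # momentary gain wins "perceptual priority". Purely illustrative.

  def step_gains(errors, gains, ratchet=0.5, decay=0.95):
      return {name: decay * gains[name] + ratchet * abs(errors[name])
              for name in gains}

  gains = {"skin_armadillo": 10.0, "fangs_in_butt": 1.0}
  # A panther bites: one system's error grows rapidly.
  for e in (0.0, 2.0, 8.0, 20.0):
      gains = step_gains({"skin_armadillo": 0.5, "fangs_in_butt": e}, gains)

  print(gains)                      # the bitten system's gain has ratcheted up
  print(max(gains, key=gains.get))  # 'fangs_in_butt' now dominates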

Regards,

Bill

--
William J. Curry, III 941-395.0088
Capticom, Inc. capticom@olsusa.com

[From Bruce Nevin (991217.1538 EST)]

Bill Powers (991216.1008 MDT)--

"Relative priority" is an ambiguous term.

Yes, in this "switching" thread I was talking about priority as a basis for
choice, not temporal order. I couldn't see a necessary connection between
changing gain and making a choice. With your clarification, I now do. I see
that by changing relative gain of two systems that are in conflict over a
CV, the value of the CV approaches more closely to the reference for one
system, even "close enough" to have the effect of a choice.

You went on to talk about "the experience of having to choose at a time
before the choice has been made," which I guess means being in conflict and
being aware of having to choose. This involves a third control system,
presumably the vantage point of this awareness of choice. One of the
mechanisms involves relative gain of the conflicting systems, as above,
except that this third control system would have to be able to affect the
(I assume) output function of one of the conflicting control systems to
adjust the gain setting--I assume that's what "lower the importance" means.
You also note that this third system can take sides, as it were, and act in
conflict with one of the originally conflicting pair ("will power"). Both
of these mechanisms are a kind of intrapersonal coercion.

Or the third system "stop[s] sending a reference signal to one of the
conflicted systems, so it ceases to try to act." In this case, the third
system is also the higher-level system that previously was sending a
reference signal. When I think about the ramifications of this, it seems
like a rather complicated can of worms. There can be more than one
contribution from above to a given reference signal, and the error output
from the higher-level system may also branch to more than one lower-level
reference input. This "don't care" mechanism seems to blend imperceptibly
into the fourth mechanism that you mention, reorganization.

So to "control with high priority" appears to mean not only to control with
high gain, but also with the involvement of some higher-level control
system, used as a point of view for awareness, stepping in when there is
conflict with that high-gain control loop and resolving the conflict in one
or more of the above ways -- exercising choice. I am still somewhat
mystified as to how this monitoring and stepping in is modelled. To have it
monitor error would be an innovation.

Perhaps there is a general-purpose function that intervenes, with awareness
and capacity for "choice" as above, wherever error is high. Also a
significant innovation. I was hoping to see something within the
established mechanisms of PCT. I just can't see that well. But this
certainly helps. Thanks.

  Bruce Nevin

[From Bruce Nevin (991217.1548 EST)]

Bruce Gregory (991216.1300 EST)

Could you provide a few examples where the priority is high and the gain
is low?

The examples I could think of are complicated by multiple levels of control
where you could say that gain is high at a higher level. But the underlying
issue was that I couldn't see any necessary connection between gain and
priority in conflict resolution; I now do.