conflict, actual and imagined

[From Bruce Nevin (2001.0110.2220)]

Bill Powers (2001.01.10.1854 MST)--

Why not just stick with one person offering another person an arbitrary choice, neither alternative being anything the other person would want to do? [...] It's up to you, I'll go along with either one, you're perfectly free to choose [...].

Conflict is a relationship between goal-seeking systems, whether they be in separate persons or in the same person. Both parties to a conflict have to be control systems, as I formally define conflict. [...] a situation in which neither control system can achieve its goal without preventing the other from achieving its goal.

In all the problematic cases that get us so muddled, one control system is *imagining* its control of a variable being overwhelmed by another control system. The driver imagines a policeman arresting her for speeding. The pedestrian imagines the robber pulling the trigger and shooting him. The believer imagines eternity in pain. The conflict depends upon how credible the imagined outcome is -- so far as I know we have not talked about how to implement credibility in a computer simulation -- and upon the gain in the loop controlling to avoid that outcome. It seems to me that in *all* such cases the conflict is internal, whether or not another control system is present in the environment and perceived as ready to carry out the imagined actions -- perceptions which are also internal to the threatened control system.
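
To make that concrete, here is a minimal numerical sketch (an illustration only; the function avoidance_output, the 0-to-1 "credibility" weight, and the numbers are all invented here, not an established PCT model) of how an imagined disturbance might enter the loop that is controlling to avoid the outcome:

def avoidance_output(reference, perceived, imagined_disturbance,
                     credibility, gain):
    # One step of a proportional "avoid that outcome" loop. The imagined
    # disturbance is not in the environment; it enters the loop only to the
    # degree that it is believed (credibility between 0 and 1).
    effective_perception = perceived + credibility * imagined_disturbance
    error = reference - effective_perception
    return gain * error

# With zero credibility the imagined policeman, robber, or damnation has no
# effect on output; with high credibility and high loop gain it dominates.
for credibility in (0.0, 0.5, 1.0):
    print(credibility,
          avoidance_output(reference=0.0, perceived=0.0,
                           imagined_disturbance=5.0,
                           credibility=credibility, gain=10.0))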

Exactly symmetrical are the cases where one control system is imagining its control of a variable being enabled by another control system. Instead of threats, rewards or incentives. Not to say that the rewards cause behavior, but that if X is a controlled variable that the control system is not able to control successfully, and it can control Y successfully but is not controlling Y, and if it then learns that if it controls Y then the other control system will enable it to control X successfully, it may control that program perception. The likelihood depends upon how credible the imagined outcome is and upon the gain in the loop controlling the desired outcome, control of X. I have assumed that control of Y is easy, but the difficulty of controlling Y is another variable.

The point of symmetry is in the last part of those two descriptions: the credibility of the imagined outcome, the threat or the reward. The difference is in whether the imagined outcome is desired or avoided.

         Bruce Nevin


[From Bill Powers (2001.01.11.0310 MST)]

Bruce Nevin (2001.0110.2220)--

In all the problematic cases that get us so muddled, one control system is
*imagining* its control of a variable being overwhelmed by another control
system.

Right, a good point.

The conflict depends upon how credible
the imagined outcome is -- so far as I know we have not talked about how to
implement credibility in a computer simulation -- and upon the gain in the
loop controlling to avoid that outcome.

I should think that the _most_ credible outcome would be a kind that has
actually been experienced before. If you've ever been shot (I haven't) and
survived, a threat to shoot you might be more effective than it would be to
someone who has to try to make up what that experience would be like. If
you've felt what it's like to be shot, you would be trying to prevent
repetition of a set of perceptions that is in your memory. Of course you'd
still have to make up imagined perceptions of what it would be like to be
killed, if that were the threat.

It seems to me that in *all* such
cases the conflict is internal, whether or not another control system is
present in the environment and perceived as ready to carry out the imagined
actions -- perceptions which are also internal to the threatened control
system.

It would be internal only if you both wanted and did not want to experience
the imagined perception. What makes the conflict external is that another
person is involved, and the threat is part of that person's means of
getting you to do something you would prefer not to do. It's not the same
as imagining that another person is there. You wouldn't be imagining
getting shot if the other person weren't pointing a gun and threatening to
use it. You wouldn't imagine getting shot just as a way of forcing yourself
to give money to someone else.

Imagined perceptions play a part in lots of ordinary real control
processes. For example, why don't you step right out and start walking
across a freeway where the traffic is heavy? You imagine the probable
result and look for a different way to get across. There's no conflict
there -- you just find a means that will satisfy two goals at once: getting
across and staying alive.

Exactly symmetrical are the cases where one control system is imagining its
control of a variable being enabled by another control system. Instead of
threats, rewards or incentives.

Nice extension of the argument. The situation isn't parallel, however,
because there's no (necessary) conflict in another person's offering to
help you achieve a goal you already have. There's no goal you have to
choose, unless the other person makes the helping contingent on your doing
something, and there's no conflict until doing that something causes an
error in one of your own control systems. Imagination and credibility,
however, _are_ involved, as you say. A kid who washes your car for a
dollar, aside from being stuck in 1950, imagines that you will actually pay
the dollar when the job is done. If he doesn't believe you will pay, he
won't wash the car.

The point of symmetry is in the last part of those two descriptions: the
credibility of the imagined outcome, the threat or the reward. The
difference is in whether the imagined outcome is desired or avoided.

Right. However, the threat is only a problem if _neither_ outcome is a
desired one. You can't threaten someone by saying "If you don't give me
your money, I'll give you mine."

In the case of the reward, the problem arises when the promised outcome is
desired, but the behavior required to get it has imagined error-causing
side-effects. "I'll give you a dollar if you'll wash my pit bull."

Credibility, it seems to me, is simply the sum of the experiences one has
had that involve the threatened or promised outcome. The word falsely
implies that credibility is in the thing believed; it is actually in the
believer, and we should really be talking about credulity, the degree of
belief one has. If you believe that something will happen, you behave as if
it is actually happening. That's why people at the 3-D movie duck when
something seems to be thrown at them. The imagined perception (the solidity
of the object) is treated as real. In PCT there's no difference between a
real perception and an imagined version of it: in either case, the
perception is a signal emitted by a perceptual function. The input signals
come from different places, but once the perceptual signal exists, there's
no way to tell where it came from.
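
A toy illustration of that last point (a sketch only; the function perceptual_function and the numbers are invented, not taken from B:CP): feed the same perceptual function once from "environmental" inputs and once from remembered ones, and the resulting signal carries no trace of its origin.

def perceptual_function(inputs):
    # A perceptual signal is just some function of its input signals.
    return sum(inputs) / len(inputs)

environmental_inputs = [0.9, 1.1, 1.0]   # values arriving from the senses
remembered_inputs = [0.9, 1.1, 1.0]      # the same values replayed from memory

real_perception = perceptual_function(environmental_inputs)
imagined_perception = perceptual_function(remembered_inputs)

# Downstream systems receive only the signal; where it came from is lost.
assert real_perception == imagined_perception
print("perceptual signal:", real_perception)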

To sum up, I agree that there must be a degree of imagination involved in
both threats and promises. I don't agree that any conflict is completely
internal in both cases, or either case. Conflicts boil down to
contradictions: A and not-A.
While internal conflicts can exist, this does not mean that all conflicts
are internal.

Best,

Bill P.

[From Bruce Nevin (2001.0111 10:51 EST)]

Bill Powers (2001.01.11.0310 MST)--

>It seems to me that in *all* such
>cases the conflict is internal, whether or not another control system is
>present in the environment and perceived as ready to carry out the imagined
>actions -- perceptions which are also internal to the threatened control
>system.

It would be internal only if you both wanted and did not want to experience
the imagined perception.

You misunderstand. The control system is controlling X. The imagined perception is of being unable to control X any more, due to the imagined intervention of another control system. The conflict is between actual control and imagined disturbance; both are internal. The source of disturbance need not be present (no police cars), and we as outside observers may agree that the disturbance cannot happen (the consensus of many people about eternal damnation). All that matters is that the control system imagine the disturbance "credibly".

I agree that there must be a degree of imagination involved in
both threats and promises.

The only thing that is not in imagination is whatever basis there may be for credibility.

I don't agree that any conflict is completely
internal in both cases, or either case.

I said nothing about conflict in a promise. There is no conflict in a straightforward promise. (Incentives usually involve internal conflict. I'll defer that to the end of this note.) This is because the imagined outcome is something that is desired. There is conflict in a straightforward threat, because the imagined outcome is something that is abhorred. This difference in valuation of the imagined outcome is the characteristic difference between a promise and a threat, and it is entirely internal to the perceptual system of the one threatened or promised (as in the comeback line "Is that a threat or a promise?" when e.g. someone says they're leaving). To say this is to agree with you:

The word ["credibility"] falsely
implies that credibility is in the thing believed; it is actually in the
believer, and we should really be talking about credulity, the degree of
belief one has.

While internal conflicts can exist, this does not mean that all conflicts
are internal.

Of course. I didn't say that either.

I am drawing a distinction between (1) external conflict, where control actions of one control system controlling X prevent another control system from controlling that same variable X successfully, and (2) refraining from control actions to control X because of imagining external conflict that would overwhelm one's ability to control Y. In a limiting case, the second variable Y can be the same as X, but usually it is a different variable.

I left a promissory note behind. Not all promises are straightforward. Conflict may be involved when a promise is used as an incentive to do something that you would not otherwise do, or that you would prefer not to do. However, this conflict is internal as well because it is between your actual control of that variable (null or negative) and imagined control as means of getting what is promised.

         Bruce Nevin


[From Bill Powers (2001.01.11.1515 MST)]

Bruce Nevin (2001.0111 10:51 EST)--

It would be internal only if you both wanted and did not want to experience
the imagined perception.

You misunderstand. The control system is controlling X. The imagined
perception is of being unable to control X any more, due to the imagined
intervention of another control system. The conflict is between actual
control and imagined disturbance; both are internal.

I don't see how that conflict works. Remember that there have to be two
reference signals, two perceptual signals, and two output functions, the
outputs of which work oppositely to each other. When you say the imagined
perception is of "being unable to control X any more," just what does that
imagined perception consist of? Sitting there not controlling? Imagining an
overwhelming disturbance counteracting the actions you take? Those are all
perceptions, not actions. There is no actual pair of outputs opposing each
other as there would be in a true conflict; you're not talking about
complete control systems. And anyway, being overwhelmed by a large
disturbance is not conflict; it's being overwhelmed by a large disturbance,
which is something entirely different. In a true conflict, there are _two_
systems, each producing a disturbance of the other, and each frustrating
the other's attempt to control. Conflict is a relationship between two
independent control systems. It isn't just a synonym for "difficulty."
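
For concreteness, here is a minimal simulation of that definition (a sketch with arbitrary gains and step size, not a canonical PCT demo): two independent integrating control systems act on one shared variable with different reference values. The variable settles between the references, neither error goes to zero, and the two outputs grow while cancelling each other.

def simulate_conflict(ref_a=10.0, ref_b=-10.0, gain=5.0, dt=0.01, steps=2000):
    out_a = out_b = 0.0
    for _ in range(steps):
        v = out_a + out_b                  # shared variable both systems affect
        out_a += gain * (ref_a - v) * dt   # system A integrates its own error
        out_b += gain * (ref_b - v) * dt   # system B integrates its own error
    v = out_a + out_b
    return v, ref_a - v, ref_b - v, out_a, out_b

v, err_a, err_b, out_a, out_b = simulate_conflict()
print("shared variable:", round(v, 2))              # stuck between the references
print("errors:", round(err_a, 2), round(err_b, 2))  # neither error is corrected
print("outputs:", round(out_a, 1), round(out_b, 1)) # large and opposed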

Best,

Bill P.

[From Bill Powers (2001.01.12.0535 MST)]

Bruce Nevin (2001.0111 10:51 EST)--

You misunderstand. The control system is controlling X. The imagined
perception is of being unable to control X any more, due to the imagined
intervention of another control system.

Your post has got me thinking about this. We need to be much clearer about
the way imagination enters into processes like these. I mean clearer about
what we experience going on, not so much about the model. It seems to me
that there are many cases in which we do or don't do something on the basis
of what we imagine or remember to be true.

In the case you describe above, there are several different perceptions
collapsed together. The first is imagining the intervention of another
control system: if I ask Joe to lend me his lawnmower, he'll just remind me
that I still have his rototiller. _Then_ come the conclusions: so it would
be futile to ask him for the lawnmower -- he'll refuse. That's the higher
levels kicking in. Then comes the conflict: I want to keep the rototiller
but if I do, Joe refuses to lend the lawnmower, and I need to mow the lawn
as well as cultivate the garden this weekend.

The higher levels don't really count, here -- they work more or less the
same way whether the basic information is actual or imagined/remembered.
That is, I'll do pretty much the same things next whether Joe actually
refuses to lend me the lawnmower or I just imagine his doing so. The
conflict is just as real whether he refuses my request for the reason I
imagine, or I only imagine it happening that way.

A typo just then reminded me that "imagine" is only one letter from
"imaging." To imagine is to synthesize an internal image of something
happening. That is separate from any further interpretations or conclusions
we might draw from the image, whether it's a present-time image or one
created internally.

In the case of the robber, what might we imagine? Saying "No", we imagine,
will be followed by a flash and a bang and a big pain. Saying "Yes" will be
followed by loss of our money and probably credit cards. So we imagine the
pain and we imagine the loss, and the conflict is only momentary: we pick
the loss as the immediate goal. Then we perform the act that we imagined
would lead to the loss. Presumably it does not result in a bang, flash, and
pain as well as loss of the money, although there is really no reason to
think that a person who will rob you at gunpoint will, nevertheless, tell
you the truth about his intentions.

The only imagined part of this interaction is the model of the environment:
the imagined effect of saying, in imagination, No or Yes. Everything else
is the same as it would be in reality (if one imagines accurately).

Does this get us anywhere?

Best,

Bill P.

[Bruce Nevin (2001.01.12 13:02 EST)]

Bill Powers (2001.01.12.0535 MST)--

Does this get us anywhere?

Yes, indeed! This is the level of analysis I was trying to get to but it kept getting sucked into the RTP coercion maelstrom, and it's what those little interaction diagrams were about that I showed you at the conference two years ago. I won't be able to respond properly until later, maybe tonight, but briefly:

Yes, the contingency (give money OR get shot) is at a higher level, yes, the analysis extends from threats and promises to all sorts of social contingencies ("I owe him his rototiller back"), social reliance and reliability, and the whole world of reputation, face, respect, and so on, and yes,

there are many cases in which we do or don't do something on the basis
of what we imagine or remember to be true.

Indeed. Indeed.

         Bruce Nevin


[From Bruce Nevin (2001.01.12 11:54 EST)]

Bill Powers (2001.01.12.0535 MST)

In the case you describe above, there are several different perceptions
collapsed together. The first is imagining the intervention of another
control system:

[...]

In the case of the robber [...] Saying "No", we imagine,
will be followed by a flash and a bang and a big pain. Saying "Yes" will be
followed by loss of our money and probably credit cards. So we imagine the
pain and we imagine the loss, and the conflict is only momentary: we pick
the loss as the immediate goal. Then we perform the act that we imagined
would lead to the loss. [...]

X = money in wallet
L = being alive and well
Language perceptions: man with gun says "your money or your life".
Mapping of words to nonverbal perceptions:
         "your money"= ~X (~X imagined)
         "your life"= ~L (~L imagined)
         "or" = ^
Logic perception: ~X ^ ~L (~X and ~L imagined)
         (equivalently: X |= ~L "if I keep my money I get killed"
                   and: ~X |= L "if I give up the money I live")
Conflict 1: X vs. ~X (~X imagined)
Conflict 2: L vs. ~L (~L imagined)

X |= ~L
Imagine saying "No" and keeping control of the money.
         Imagine how the man might control ~L with that gun.
         Imagine not being able to get along with ~L as the outcome.
         (Maybe imagine him taking the money anyway, though
          that's academic in this context.)
         (Maybe imagine preventing his control of ~L with that gun,
          that could lead to the evaluation below going the
          other way.)

~X |= L
Imagine giving the money to him, and losing control of X.
         Imagine how to get along without that particular bit of money.
         (If you can't, the next step could go the other way.
          Point of honor? Huge amount? Someone needs it? etc.)

Some sort of evaluation of error in conflict 1 vs. error in conflict 2?
         Intrinsic error in Conflict 2 is immediate.
         Conflict 1 need not lead to intrinsic error.
         Therefore accept error of conflict 1.
         (Here's our discussion of "choosing".)

One outcome: Give the thief the money
                 despite X vs. ~X conflict and resultant error.

Note that there are two conflicts, each between an actual state (L, X) and an imagined state (~L, ~X) of a controlled variable. Threats and promises always set up a contingency between two controlled variables: If state a of Variable A then state b of variable B. (I realize I haven't written it properly to show a value of a variable but I trust the shorthand is acceptable.)
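
One way to make that shorthand concrete (an illustrative encoding only, reading the robber's "or" as the exclusive choice he intends; the labels and the contingency function are invented): enumerate the outcomes the contingency allows and note which controlled variable each one frustrates.

from itertools import product

def contingency(keep_money, alive):
    # "Your money or your life", read as the exclusive choice the robber
    # intends: keep the money and be shot, or give it up and live.
    return keep_money != alive

for keep_money, alive in product([True, False], repeat=2):
    if not contingency(keep_money, alive):
        continue                         # outcomes the contingency rules out
    frustrated = []
    if not keep_money:
        frustrated.append("X (money)")   # conflict 1: X vs. ~X
    if not alive:
        frustrated.append("L (alive)")   # conflict 2: L vs. ~L
    print(f"X={keep_money}, L={alive} -> error in {frustrated}")

Each outcome the contingency admits leaves error in exactly one of the two controlled variables, which is the pair of conflicts described above.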

The only imagined part of this interaction is the model of the environment:
the imagined effect of saying, in imagination, No or Yes. Everything else
is the same as it would be in reality (if one imagines accurately).

So what exactly is included in the "everything else"? The fact that the imagining has verisimilitude does not take it out of imagination. There's a lot in the above that is imagined. You might argue that the logic perception is not imagined even if all its terms are imagined; or conversely that reasoning about imagined perceptions is reasoning in imagination. (Sounds like a silly pinhead argument we don't want to have; it isn't as though we knew that much about logic perceptions and their relation to language on the one hand and to nonverbal perceptions on the other.)

Here's what I can see to include in "everything else" that is not imagined:

X, money in the wallet.
L, being alive; but this is probably just absence of significant
         intrinsic error, and ~L is imagined intrinsic error
         (unless you remember being dead).
Language perceptions and their mappings to nonverbal perceptions.
         But alternative constructions and alternative mappings of any given
         construction are all imagined in parallel with the one that you
         fix on as the intention of what was said. And that's what you
         imagine was the intention of what was said.
The logic function of disjunction over pairs of perceptions ^ ("or")
          or inference |= ("if-then")
         A ^ ~B is equivalent to B |= A
         (Imagination notoriously lacks negation [think about the absence
          of a white monkey ... I don't mean think about anything but a
          white monkey, or the opposite] but the combination of gain
          and error suffices I think.)

Everything else is imagined.

A few observations:
There's a whole lot to try to include in a single diagram.
The thief is controlling the money in his imagination.
   (What if there's none there?)
The thief's purpose is to frighten the victim into compliance,
   probably not to kill him (imagine consequences of a murder rap).
   Is he controlling shooting the victim even in his imagination?
   Not known, could vary from one to another.

More could be said about the thief, and we haven't touched on some other issues such as credibility.

Long past quitting time.

         Be well,

         Bruce Nevin


[From Bill Powers (2001.01.13.0234 MST)]

Bruce Nevin (2001.01.12 11:54 EST)--

X = money in wallet
L = being alive and well
Language perceptions: man with gun says "your money or your life".
Mapping of words to nonverbal perceptions:
        "your money"= ~X (~X imagined)
        "your life"= ~L (~L imagined)
        "or" = ^
Logic perception: ~X ^ ~L (~X and ~L imagined)
        (equivalently: X |= ~L "if I keep my money I get killed"
                  and: ~X |= L "if I give up the money I live")

I'd like to suggest some notations closer to normal usage in Boolean
algebra and programming, though not identical (& instead of the C-language
&&, and so on).

& logical AND

| logical OR

^ exclusive OR, (A and ~B) OR (B and ~A)
-> implication, A implies B (see below)

Don't know what your symbol "|=" means -- you didn't define it and I'm not
familiar with the usage. If you mean implication, this can be written as an
arrow, ->. A -> B is translated "if A then B" or "it is not the case that A
is true and B is false", the symbolic form of which is ~(A & ~B).

We have to define X and L unambiguously, too. I'd like to let M rather than
X represent the money. X doesn't have any mnemonic value for me in the
example.

M = I keep the money, ~M = I give the money.
L = I go on living, ~L = I die.

The statement "your money or your life" is logically ambiguous, as the
logical M | L (where "|" is the logical OR) is satisfied by any combination
of M and L: M, L or both. What the robber means is M ^ L, where "^" is the
exclusive-or, so

M ^ L = (M & ~L) | (L & ~M)

meaning, you can have your money and not your life, or your life and not
your money. To be complete, a logical statement has to specify all possible
cases.

What you want is your life AND not giving your money: Ref = M & L.

However, the perception Per = M & L will never be true: the condition the
robber has set up is that M -> ~L: M (keeping the money) implies not-L: it
is not the case that keeping the money is true and not-living is false. So
if M is true, L is false, and M & L is false. The reference level is true,
and the perception is false. If error = (P ^ R), then the error is true
whenever P and R differ (not both true and not both false), which is another
way to state the exclusive-or.

Of course if the robber knows logic as well as crime, then he is free by
the rule he stated to shoot you after you give him the money, for if he
specified M -> ~L, he did not also specify that ~M -> L. The ONLY way the
implication M -> ~L can be false is for M to be true and L to be true. If M
is false (you do not keep the money) and L is also false (you are shot),
the implication is still true.

However, even if the robber stated that if you don't give him the money,
you will die, what he probably means is also that if you do give him the
money, you won't die. So he really means the exclusive-or. Since most
people think that if A implies B, B also implies A, you are probably safe
in this assumption.

This long preamble leads to a simple conclusion. If you are using a logical
control system, there is no conflict. There is simply one control system
that can never correct its error, even in imagination. The controlled
perception is M & L, but the environmental feedback connection is such that
M -> ~L, so it is impossible for M to be true and L also to be true. So the
reference value for M & L is TRUE, but the perceived value of M & L will
always be FALSE. This is simply a single failed control system, not a pair
of independent systems in conflict.
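
The Boolean part of this can be checked mechanically. A small brute-force sketch (using the error = P exclusive-or R definition from above; the implies helper and the code are only an illustration, not part of the analysis):

from itertools import product

def implies(a, b):
    # a -> b is "not (a and not b)"
    return not (a and not b)

reference = True                         # the victim wants (M & L) to be TRUE
for m, l in product([True, False], repeat=2):
    if not implies(m, not l):            # keep only worlds where M -> ~L holds
        continue
    perception = m and l                 # perceived value of (M & L)
    error = perception != reference      # error = P exclusive-or R
    print(f"M={m}, L={l}: M&L={perception}, error={error}")
# In every world consistent with M -> ~L, the perception (M & L) is FALSE and
# the error is TRUE: a single control system that can never correct its error.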

Best,

Bill P.

[From Rick Marken (01.01.13.0910)]

Bill Powers (2001.01.13.0234 MST)--

If you are using a logical control system, there is no conflict.
There is simply one control system that can never correct its
error, even in imagination.

Yes. That was my guess. Thanks for the analysis.

The controlled perception is M & L, but the environmental feedback
connection is such that M -> ~L, so it is impossible for M to be
true and L also to be true.

I believe that's true of both the environmental feedback connection
(the one that goes through the environment, the most salient part
of which, in this case, is the robber) and the imagination connection
(the one that goes through only the lower levels of the victim's own
hierarchy of control systems).

So the reference value for M & L is TRUE, but the perceived value
of M & L will always be FALSE. This is simply a single failed control
system, not a pair of independent systems in conflict.

Very pretty. Thanks.

Best regards

Rick


---
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@mindreadings.com
mindreadings.com

[From Bruce Nevin (2001.01.13 12:08 EST)]

Bill Powers (2001.01.13.0234 MST)--
>This long preamble leads to a simple conclusion. If you are
>using a logical control system, there is no conflict. There is
>simply one control system that can never correct its error,
>even in imagination. The controlled perception is M & L, but
>the environmental feedback connection is such that M -> ~L, so
>it is impossible for M to be true and L also to be true. So
>the reference value for M & L is TRUE, but the perceived value
>of M & L will always be FALSE. This is simply a single failed
>control system, not a pair of independent systems in conflict.

I can't accept your assumption that "M & L" is a controlled perception at the level of logic. M is a controlled perception. L is a controlled perception. (For simplicity assume that L is avoidance of pain.) You can't put the whole world of controlled perceptions into the logic level as pairs of controlled perceptions controlled under "&". The logic level comes in here because of the contingency posed by the robber, not because it is always logically ANDing together every possible pair of whatever perceptions we are controlling.

The contingency results in a conflict when the victim accepts it, and, having accepted it, must either control ~M or ~L. It is only because of this contingency that the victim controls ~M as one possible outcome and ~L as the other possible outcome.

Simplistically, we could say that the logic system sets the reference for control of M. But why not set the control for L? There is nothing in logic that gives a preference for either. And if the logic system just reset the reference, there would be no experience of conflict, yet undoubtedly a person being robbed is reluctant to give up what is being taken and does so unwillingly. It appears that one or more control systems are controlling M even as the logic system controls ~M (setting aside for the moment why and how the choice is ~M instead of ~L). This puts the logic system in conflict with whatever other systems are controlling M.

The victim is already controlling M and L, and that is why conflict between existing control of M and imagined control of ~M is part of the process, and conflict between existing control of L and imagined control of ~L.

There is a deeper problem here. Boole should have called his book "laws for thought" instead of "laws of thought". Logic is a system for verifying that conclusions have a valid relation to premises. It disciplines our thinking and constrains us to make our premises explicit. There is no clear evidence that we typically reach conclusions logically, abundant evidence that however we reach conclusions we use logic (if at all) to patch together a rationale for having done so, and if logic were our sole innate process for decision making there would be no need to prescribe it--this last a form of argument that should be familiar to you, Bill, since you have used it against proposals that certain other human traits are universal (I don't remember the details at the moment).

Most of what passes for logic is simple sequence control, and for most purposes that suffices. That's why logicians since Aristotle have had to advise each new generation of students that "post hoc ergo propter hoc" (it comes after, therefore it must be because of) is not valid reasoning.

Victims of threats and subjects of promises alike are well known to respond emotionally and not logically, and the careful application of logic is a discipline that they apply typically (if at all) with more or less difficulty, depending on the intensity of the emotion. I think this is because they are not going up a level to the logic level or higher. I wonder if people have more difficulty going up a level when emotion is involved with their perceptions. I think that's true. What do you think?

The contingency offered by the robber in effect circumscribes a very narrow perceptual universe. There are only two possible outcomes, M or ~L. The victim does not have to accept the contingency. Going up a level, the victim might notice other possibilities, some disturbances to the robber's control of ~L. Maybe some people are coming. Maybe the gun is a fake. (My grandfather, whose crew up to Alaska were a pretty rough lot, would sometimes go into a bar in Seattle or Nome to fetch a man back to the ship with nothing more in his pocket than his hand with the forefinger stuck out.)

What I am saying here is that the contingency presented by the robber (as by any threat or promise) is probably resolved more directly in terms of relative error, comparing the internal conflict associated with each possible resolution of the contingency. The victim is controlling each variable (M and L) according to two references, one the reference output from systems that are and have been controlling it as means to their ends (skipping over that problem with the nature of L), and the other a perceptual signal that is part of an emotion-laden perceptual image of a possible outcome of the contingency. Effective threats and promises evoke imagined outcomes immediately and with emotional intensity, hallmarks of lower brain functions. It is improbable that to imagine a robber using that gun in his hand we have to wait for a logic control system to output a reference signal. (And as I pointed out the logic system would output a conclusion, simply changing the reference, and anyway has no basis in itself for preferring one outcome over the other.)
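
Here is one crude way to picture that comparison of relative error (a sketch with invented weights and outcome numbers; nothing here is claimed as the actual mechanism): total up the error each imagined resolution of the contingency would leave in the systems controlling M and L, weighted by how much each system matters, and act on whichever total is smaller.

weights = {"money": 1.0, "life": 100.0}      # how much each system "cares"

def imagined_error(outcome):
    # outcome gives the imagined level of each variable (1.0 = fully intact)
    return sum(weights[var] * (1.0 - level) for var, level in outcome.items())

resolutions = {
    "hand over the money": {"money": 0.0, "life": 1.0},
    "refuse and risk it":  {"money": 1.0, "life": 0.1},
}

for name, outcome in resolutions.items():
    print(name, "->", imagined_error(outcome))

print("act on:", min(resolutions, key=lambda n: imagined_error(resolutions[n])))
# Raise the weight on "money" (the child's operation, a point of honor) and
# the same comparison can come out the other way.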

As an aside, let's not get confused with threats that have to be worked out like moves in a chess game. Sneaky trickery is a different use of the word "threat". Someone sneaking up on your bishop or on your mortgaged property poses a threat, but is not threatening you and saying "do this for me or else I will do that to you." Likewise the threat of bad weather. If there is a contingency, it is not announced, you work it out for yourself, weighing possible outcomes in imagination.

  Bruce Nevin


[From Rick Marken (01.01.13.1340)]

Bruce Nevin (2001.01.13 12:08 EST) --

I can't accept your [Bill Powers (2001.01.13.0234 MST)] assumption
that "M & L" is a controlled perception at the level of logic.

I don't think that was the assumption. The assumption (which was
based on your description of the situation, I believe) was that
(M & L) is controlled. (M & L) happens to be a logical variable;
depending on the values of the arguments (M and L) it can have only
two possible values, true (1) or false (0). If the world (or
imagination) is set up so that M -> ~L, then regardless of how you
set M, (M & L) will be false (0). So, if you are controlling for
(M & L) being true (1) then you cannot possibly get the perception
you want. That's why Bill said that what you have here is "simply a
single failed control system", one that cannot keep (M & L) true (1).

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@mindreadings.com
mindreadings.com

[From Bruce Gregory (2001.0113.1800)]

Bill Powers (2001.01.13.0234 MST)

This long preamble leads to a simple conclusion. If you are using a logical
control system, there is no conflict. There is simply one control system
that can never correct its error, even in imagination. The controlled
perception is M & L, but the environmental feedback connection is such that
M -> ~L, so it is impossible for M to be true and L also to be true. So the
reference value for M & L is TRUE, but the perceived value of M & L will
always be FALSE. This is simply a single failed control system, not a pair
of independent systems in conflict.

There are areas of human interaction where I do not find the control theory
approach particularly illuminating. Others no doubt feel differently.

BG

[From Bruce Nevin (2001.01.13 23:16 EST)]

Rick Marken (01.01.13.1340)--

>Bruce Nevin (2001.01.13 12:08 EST) --
>> I can't accept your [Bill Powers (2001.01.13.0234 MST)] assumption
>> that "M & L" is a controlled perception at the level of logic.
>
>I don't think that was the assumption. The assumption (which was
>based on your description of the situation, I believe) was that
>(M & L) is controlled. (M & L) happens to be a logical variable ...

The victim is digesting the last of his dinner. Call that D. He is on his way home from the theater. Call that H. So he is controlling D. He is also controlling H. Does it follow that D & H is a logic perception that he is controlling? And D & M, and D & L, and H & L are all logic perceptions that he is controlling? And so on for every other pair of perceptions that he is controlling, each pair ANDed together at the logic level? That result, which seems to me preposterous, is the reason I can't accept that ... not an assumption? OK ... interpretation of my description. But I said that already, in the context that you omitted when you clipped the above quote from my (2001.01.13 12:08 EST) post: that only certain pairs are ANDed together as logic perceptions. Or did you have some special reason? There might be a reason stated farther on in the context that you omitted.

  Bruce Nevin


[From Bill Powers (2001.01.14.0132 MST)]

Bruce Nevin (2001.01.13 12:08 EST)--

I can't accept your assumption that "M & L" is a controlled perception at
the level of logic. ...

I think you're right about that. It's the robber who creates the choice
between successfully controlling one variable or the other, and this choice
didn't exist before.

It is only because of this contingency that the victim controls ~M as one
possible outcome and ~L as the other possible outcome.

Yes. You've convinced me that the victim is controlling both for M and L
(or not-pain), but independently (not the logical condition M & L), before
the robber appears. Consider, however, what is different between the
situation before and after the robber announces the contingency. The
original situation is that one desires, and has, M and L independently. The
decision that is ultimately reached must be that having both is not
possible, so one must give up wanting at least one of the two, either M or
L. This must be the means for resolving the conflict that has just
appeared. If the conflict (internal, as you say) were not resolved, one
would go on wanting to keep M and also wanting to keep L, and risking being
shot because of the conflict that prevents doing one or the other.
Objectively, whether this is understood or not, the victim must give up the
money to avoid being shot. The victim would be much more likely to survive
if the objective logic were perceived and the appropriate choice were made.

I think we get some insight into what logic is for, as a level of control.
Without perception of the logic, what could turn off one of the lower
control systems, the one that needs to have its reference level reset? We
want to have our cake and eat it, too. It is only logic that resolves this
dilemma, by recognizing that (P and ~P) is unconditionally false, and
choosing that reality as the reference condition.

Even if the condition "M & L" had not been a controlled perception before,
it has now become one. And the reference level that needs to be adopted for
it is FALSE. The victim has to decide, "I can't keep my money and also save
my life."
That is what it means to recognize the contingency. Other considerations
then decide which reference level to abandon. As you say.

Simplistically, we could say that the logic system sets the reference for
control of M. But why not set the control for L? There is nothing in logic
that gives a preference for either.

There could be if other premises were introduced. If a different
contingency were involved -- say, "You can keep your wallet or your car,
but I'm taking one of them," it would be more obvious that there are no
self-evident choices involved. Jack Benny's joke reminded us that the
choice between your money and your life is not _necessarily_ a foregone
conclusion. What if the money were your child's only hope for the operation
that would save her life?

And [if] the logic system just reset the
reference there would be no experience of conflict, yet undoubtedly a
person being robbed is reluctant to give up what is being taken and does so
unwillingly. It appears that one or more control systems are controlling M
even as the logic system controls ~M (setting aside for the moment why and
how the choice is ~M instead of ~L). This puts the logic system in conflict
with whatever other systems are controlling M.

Exactly. But remember that conflict is a matter of degree. The human system
never reaches a state of zero overall error. Just because control systems
have to work in a common world, they always interact to some degree. The
greater the degree of interaction, the greater will be the error in all
systems when they have come into equilibrium with each other. There is a
region of interaction where it is not clear whether conflict exists or not;
the practical question is whether effective action at a reasonable cost is
still possible. We like to consider only the extreme cases so we can decide
easily that there is or is not conflict. But conflict, like most other
variables, is a continuous measure.
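
That continuum can be illustrated with two proportional control systems whose environmental variables interact through a coupling coefficient (a sketch only; the function equilibrium_errors, the gains, and the simple relaxation to equilibrium are all invented, not a canonical model): as the coupling grows toward 1, the residual errors grow smoothly, with no sharp line where "conflict" begins.

def equilibrium_errors(coupling, gain=20.0, ref1=10.0, ref2=-10.0, steps=5000):
    o1 = o2 = 0.0
    for _ in range(steps):                # relax both systems to equilibrium
        v1 = o1 + coupling * o2           # each output leaks into the other's
        v2 = o2 + coupling * o1           # controlled variable
        o1 += 0.01 * (gain * (ref1 - v1) - o1)
        o2 += 0.01 * (gain * (ref2 - v2) - o2)
    v1 = o1 + coupling * o2
    v2 = o2 + coupling * o1
    return ref1 - v1, ref2 - v2

for c in (0.0, 0.5, 0.9, 0.99):
    e1, e2 = equilibrium_errors(c)
    print(f"coupling {c}: residual errors {e1:.2f} and {e2:.2f}")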

The victim is already controlling M and L, and that is why conflict between
existing control of M and imagined control of ~M is part of the process,
and conflict between existing control of L and imagined control of ~L.

Now this, with which I agree now, gets us back to the basic question, which
is how imagination fits into this picture. I think it's clear that
imagination is serving here as a mental model. According to this model, the
actions which accompany keeping your money ACTUALLY CAUSE YOU TO GET SHOT.
Here I use the term "actually" as it is used in physics. In physics, we say
"this hard table-top is actually mostly empty space," meaning not "really"
but "as we theoretically imagine it."

This changes the control systems by creating a link (in imagination)
between the actions of one and the perceptions of the other. When I try to
run the model so as to make it keep my money (say, by running away), I get
shot (in the back). If I didn't already have a logical system for grasping
the basic relationship, ~(M & L), I would have to reorganize fast. I would
probably freeze and end up with (~M & ~L). This is probably why we think
about such things, so we can do our reorganizing in advance.

There is no clear evidence
that we typically reach conclusions logically, abundant evidence that
however we reach conclusions we use logic (if at all) to patch together a
rationale for having done so, and if logic were our sole innate process for
decision making there would be no need to prescribe it--

Yes, I can agree with this, but I think this is too narrow a view of logic.
Logic also has something to do with reality. Consider the strange
properties of the implication, A -> B. If "it is raining" implies "the
streets are wet," then it is not the case that it is raining and the
streets are not wet. However, it is not true that if the streets are wet,
it is raining, however tempting it is to the human brain to make that
symmetrical assumption. They could be wet because a street-cleaner truck
has just passed, or a water-main has broken. The formal properties of the
implication are exactly the case in reality, whether or not an individual
brain appreciates them correctly.

In general, I think it behooves the organism to learn correct logic -- that
is, logic that works the way the world works, not just formal logic. There
is no guarantee that it will learn, of course, just as there is no
guarantee that it will learn to perceive distance or force in a way
consistent with reality. But the more one's perceptions misrepresent how
the world works, the more effort will be required to control them, and
there is a limit to how much extra effort one can afford to waste.

Victims of threats and subjects of promises alike are well known to respond
emotionally and not logically,

I think that is a false model of emotion. No one thinks emotionally; we
always think logically. The question is what we think _about_. If we
consider our feelings along with other facts, we are "thinking
emotionally." If we do not consider our feelings, we are "thinking
logically." But is it logical to ignore feelings, which are a measure of
what people wish or intend to do? For example, is it logical for an account
manager to foreclose a mortgage and ignore the feelings of the home owner
which are likely to be a threat to the account manager's life? I would
consider that highly irrational.

I wonder if people have more difficulty going up a level when emotion is
involved with their perceptions. I think that's true. What do you think?

I think that when one's attention is on a lower-level problem, it becomes
difficult to shift it to a higher-level viewpoint. However, I don't buy the
idea that it is possible to prevent emotion from being involved with
perceptions; the only way to do that, according to my understanding of
emotion, is to avoid action altogether and live totally in imagination --
to "intellectualize" life completely.

The contingency offered by the robber in effect circumscribes a very narrow
perceptual universe. There are only two possible outcomes, M or ~L. The
victim does not have to accept the contingency. Going up a level, the
victim might notice other possibilities, some disturbances to the robber's
control of ~L. Maybe some people are coming. Maybe the gun is a fake. (My
grandfather, whose crew up to Alaska were a pretty rough lot, would
sometimes go into a bar in Seattle or Nome to fetch a man back to the ship
with nothing more in his pocket than his hand with the forefinger stuck out.)

This is why "your money or your life" turns out not to be such a good
example. In other less extreme choices, "possibilities" may become
important, but when your life is at stake I think you don't really consider
anything but certainties. Maybe the gun isn't loaded, but are you willing
to bet your life on that possibility?

... The victim is controlling each
variable (M and L) according to two references, one the reference output
from systems that are and have been controlling it as means to their ends
(skipping over that problem with the nature of L), and the other a
perceptual signal that is part of an emotion-laden perceptual image of a
possible outcome of the contingency.

I think that the desire to maximize the distance between oneself and the
gun -- that is, to run like hell -- results in the emotion we call fear,
and that it is a perfectly rational and logical solution to _one_ of the
presenting problems. However, to run would probably fail, so, still
logically, one has to suppress that desire if it can't simply be turned
off. As you suggest, of course, the error could be appreciated and acted
upon below the level of logic, but even if the decision to flee were made
logically, the emotion would be no less.

Effective threats and promises evoke
imagined outcomes immediately and with emotional intensity, hallmarks of
lower brain functions.

Hey, you're showing your prejudices here. I guess you have a model of
emotion that's entirely different from mine. I don't think that emotion,
present or absent, has anything to do with the matter at hand. If you're
preparing for action, whether for "low" or "high" reasons, there is
emotion. Haven't you ever felt like celebrating an intellectual
achievement? Or is it cool to be cool?

Best,

Bill P.

[From Bill Powers (2001.01.14.0330 MST)]

Bruce Gregory (2001.0113.1800)--

There are areas of human interaction where I do not find the control theory
approach particularly illuminating.

I've noticed that.

Best,

Bill P.

[From Bruce Gregory (2001.0114.0737)]

Bill Powers (2001.01.14.0330 MST)

Bruce Gregory (2001.0113.1800)--

>There are areas of human interaction where I do not find the control theory
>approach particularly illuminating.

I've noticed that.

When there are data to be explained, models are invaluable. When there are
no data, discussions about models tend to slip into scholastic disputes.
But then, I've noticed that some seem to enjoy scholastic disputes. Perhaps
if such disputes were so labeled it would be easier for those who do not
find them productive to avoid them.

BG

[From Rick Marken (01.01.14.0840)]

Me:

I don't think that was the assumption. The assumption (which was
based on your description of the situation, I believe) was that
(M & L) is controlled. (M & L) happens to be a logical variable ...

Bruce Nevin (2001.01.13 23:16 EST)

So he is controlling D. He is also controlling H. Does it follow
that D & H is a logic perception that he is controlling?

Of course not. That would be an unwarranted _conclusion_. But
a conclusion is not an _assumption_. In Bill's model of the
person being robbed, Bill _assumed_ that M & L is a controlled
variable. That assumption can only be proved wrong (if it is wrong)
by empirical test; the conclusion that D & H is controlled can be
proved wrong (unwarranted) by logic.

Best regards

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@mindreadings.com
mindreadings.com

[From Bill Powers (2001.01.14.1133 MST)]

Bruce Gregory (2001.0114.0737)--

When there are data to be explained, models are invaluable. When there are
no data, discussions about models tend to slip into scholastic disputes.

True. But there is a mode of model development in which one is merely
trying to follow out the implications of the model as it stands, to see if
the results are at all realistic. If things obviously don't work right, the
model can be changed. When resources are lacking, this is about all that
can be done.

Best,

Bill P.

[From Bruce Nevin (2001.01.15 11:18 EST)]

Bill Powers (2001.01.14.0132 MST)--

>I think we get some insight into what logic is for, as a level of control.
>Without perception of the logic, what could turn off one of the lower
>control systems, the one that needs to have its reference level reset? We
>want to have our cake and eat it, too. It is only logic that resolves this
>dilemma, by recognizing that (P and ~P) is unconditionally false, and
>choosing that reality as the reference condition.

It sounds like you are arguing that conflicts like M vs. ~M can only be resolved by logic. Perhaps when we are operating on the logic level it looks that way.

It seems to me that there are at least three kinds of systems that apprehend a threat and respond to it.

1. Fight-or-flight mechanisms.

Consider a simple organism, a slug perhaps, that is hungry and feeding. Introduce a disturbance that could be due to a predator--vibration of the surface on which the slug and food rest, a touch on its projecting sensory stalks, a puff of air. The slug stops feeding, pulls in its stalks, and contracts its body. There is a logic in this, but is a logic level of perceptual control required for it? If pre-made "decisions" preferring life over short-term livelihood have evolved in very simple organisms, is there some reason that they would have been abandoned in vertebrate, mammalian, primate, or human evolution? For fight-or-flight situations, would not delegation upward to a logic level be far too slow in emergencies? If a mechanism exists that evaluates threats to life with this immediacy (can I defeat this or should I run?), is there a reason for it not to continue to operate in parallel with slower mechanisms that formulate a rational strategy?

2. Logic.

The victim hears "give me your money or I'll shoot you" and sees the gun. To understand this would seem to require the perception M ^ S on a logic level. (I'm substituting S for ~L hereafter. ~L is one imaginable causal consequence of S.)

A logical disjunction (or other relation) in itself has no existential significance, no visceral import. It is the disjoined propositions, in this case M and S, that are connected to lower perceptions that matter. Put another way, there is no basis within logic for preferring either M or S. The evaluation that reaches a conclusion happens outside of logic. Logic only specifies that there is a choice, and what the alternatives are. (Logic also allows inclusive adjunction of additional propositions without limit, as this does not affect the truth value of the original statement. One of the hallmarks of creativity is the ability to defer closure, to store in memory what seems like a solution and continue looking for alternatives that might be better.)

3. Imagination.

The existential import of M and of S comes from playing out their ramifications in imagination. Imagination develops consequences of keeping or giving up the money, and consequences of the robber shooting him or not shooting him. (These are causal consequences based on remembered experience, like "if it rains, things exposed to it get wet," not logical consequences, like "P; P->Q; Q" which hold given that P is TRUE, irrespective of the existential content of P and Q.)

It seems to me that systems of all three kinds respond to a threat at the same time, each in their own fashion, and that each contributes input to the other two.

For each imagined consequence in (3), there is a visceral evaluation of the sort that is involved in (1) above. The looking for alternatives that I mentioned under (2) is a function of imagination in (3). The evaluation of any logical consequence in (2) depends upon the connection to experience in (3); and the preference of one imagined consequence over another depends in part on whether you like it or not, which seems to me to be an evaluation of the sort that we see in (1); but the basis for desire vs. aversion may be something like "net error" in imagined control of the variables involved.

With (1), the "visceral" arousal is immediate, perceptual scanning and heightened attention follows, whose inputs are interpreted by higher cognitive processes, whose interpretation of sensory inputs includes a perceived character of the emotional arousal (fear, delight, anger, relief). I thought this account was well accepted, and it seems to me to fit with what has been proposed previously on CSG-net about emotion and attention.

All the advice one has heard to take a deep breath, count to ten, look before you leap, is counsel to wait long enough for higher and slower processes to catch up. By that time, the interpretation ("He's got a gun! I haven't a chance! Give up the food -- umm, money, and get away!" or "This is for my daughter! I don't know how, but he's not taking it!") may already have been made, and then it's up to logic to fashion a reasonable path to it. That's the way it looks to me, remembering heart-pounding situations of both kinds that I have been in.

Does an evaluation of "M & L" as FALSE precede a decision for flight? ("Give up the goodies and get away!"). Or, more tellingly, does an evaluation of
"M & L" as TRUE precede a decision to fight? ("This is for my daughter's medical treatment! I don't know how, but he's not taking it!"). And if so, why? They can't both be purely logical conclusions. OK, you said "given other premisses," but whence those other premisses? They're not in the robber's announcement of a contingency. They come from memory and are connected to the contingency by (3) imagination. By the time imagination comes up with them, the desire/aversion evaluation of (1) is already attached. In the history of "heroic" behavior it is common for people to act on the basis of such evaluation regardless of the somewhat tardy voice of reason saying that it doesn't make sense.

By the time higher and slower processes catch up, the victim's evaluation or attitude has already been set, and it's up to logic to fashion a reasonable path to it--or to override it if the person purposefully lets the initial surge pass, takes a deep breath, counts to ten, and looks before she leaps --the conditions, perhaps, for going up a level.

Up a level from what conflict? From two: present M vs. imagined ~M, and present ~S vs. imagined S. And there's no need to be governed by a logical M ^ S in order for these conflicts to arise. To see this, just consider the parent about to defend the child's medical fees. There is no logical path from M ^ S to that defense, since M & S probably fails to give the money to the doctors. But from the pair of conflicts M vs. ~M and ~S vs. S there can be diverse paths to defend both M and ~S against disturbance by the robber, and these can be taken even if the contingency M ^ S is understood and accepted as TRUE. This is borne out by testimony of survivors of attacks who have said things like "I figured I didn't have a chance, I'd get killed anyway, but I couldn't just *give* it to him!" In such cases a logical conclusion from an accepted contingency does not determine behavior. In such cases, some other process makes the decision, and that "irrational" process seems related to emotion.

In this reference to parallel systems, a logic system and a faster and more primitive "irrational" system, linked by imagination, I am not urging that either of them is superior or should be suppressed; rather, each has its aptitudes and its awkwardnesses, and we seem to use them all in a coordinated way.

> the basic question [...]
>is how imagination fits into this picture. I think it's clear that
>imagination is serving here as a mental model. According to this model, the
>actions which accompany keeping your money ACTUALLY CAUSE YOU TO GET SHOT.
>Here I use the term "actually" as it is used in physics. In physics, we say
>"this hard table-top is actually mostly empty space," meaning not "really"
>but "as we theoretically imagine it."
>
>This changes the control systems by creating a link (in imagination)
>between the actions of one and the perceptions of the other. When I try to
>run the model so as to make it keep my money (say, by running away), I get
>shot (in the back).

What does "creating a link in imagination" mean? A link between in principle any arbitrary pair of control systems, such that the action output of one suitably affects the perceptual input of the other, without the loop being closed through the environment. The systems where the imagination shunt is closed (substituting memory for perceptual input) would have to be at a sufficiently low level to serve in place of the environment for both higher-level systems. Looking at the diagram again on p. 221 of B:CP, the action output of a control system (perhaps combined in the reference input function with other, remembered signals or with action output signals from other control systems) gets shunted away from the comparator of each lower-level system that it goes to and over to replace the perceptual input signal that is passed up from each such system. (It must be for each next-lower system so that the perceptual input signals for each of them is distributed upward correctly, differently for each.)

Perhaps there is a trial-and-error process of controlling various combinations of perceptions with the imagination shunt closed and using various combinations of remembered reference values until error drops to tolerable levels. This would be a cheap form of temporary reorganization that might be done instead of reasoning at the logic level and might be a precursor to more permanent reorganization. (And something more general but of this sort has been proposed as a function of dreaming.)

There is no room in this for a conflict between two control systems that are both controlling M (one controlling through the environment and the other controlling through the imagination loop set in many lower-level systems). It is not plausible that we grow an entire parallel to the existing system that is controlling M. It must be a conflict between two systems that are setting the reference for controlling M. One is setting the reference value to keep the money for various purposes, the other is setting the reference value to give up the money as means of keeping the robber from shooting us. Both the purposes for spending the money and the robber shooting us are imagined. The only real-time perceptions related to M are the wallet in the pocket and the words of the robber referring to money. Perhaps the person imagines keeping the money so as to pay for lunch tomorrow but being shot, and then the person imagines going hungry tomorrow but not being shot.
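
A crude numerical sketch of that kind of conflict, in Python, with toy gains and numbers of my own (nothing here is taken from B:CP): two higher systems send summed contributions to the reference for the single system controlling M, one wanting the money kept and the other wanting it given up.

# Two higher-level systems set the reference for ONE money-controlling
# system.  The lower loop is assumed to control M perfectly, so M simply
# tracks the summed reference.  With equal gains, M sticks halfway between
# the two goals while the two outputs ramp up against each other -- the
# signature of conflict.

def simulate(steps=200, dt=0.1, gain_a=5.0, gain_b=5.0):
    out_a = out_b = 0.0   # integrating outputs of the two higher systems
    M = 0.5               # perception of "money kept": 1 = kept, 0 = given up
    for _ in range(steps):
        error_a = 1.0 - M   # system A: keep the money
        error_b = 0.0 - M   # system B: give it up to avoid being shot
        out_a += gain_a * error_a * dt
        out_b += gain_b * error_b * dt
        M = out_a + out_b   # summed reference; lower loop controls M tightly
    return round(M, 2), round(out_a, 1), round(out_b, 1)

print(simulate())   # M stuck near 0.5; out_a, out_b large and opposite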

>If I didn't already have a logical system for grasping
>the basic relationship, ~(M & ~S), I would have to reorganize fast. I would
>probably freeze and end up with (~M & S). [Substituting S for ~L, etc.]

Yes. A like conclusion has been drawn about even faster (and more primitive) fight-or-flight systems storing "canned responses" to stereotyped situations. (I sent a post about research into this a few years ago.) There is a tradeoff between speed and accuracy. Stereotypes are inaccurate by being too inclusive. The startled jump may be funny most of the time but sometimes it may save you from harm. The systematic understanding of how to balance a sailboard is no match for the acquired skill, but it enables a close enough approximation for long enough at a stretch that over time you can learn and refine the required reference perceptions for skilled performance, provided you have the commitment (a matter of emotion and imagination, I think) to persist.

>This is probably why we think
>about such things, so we can do our reorganizing in advance.

If the victim in our scenario had done this, the choice to give up the money would be just a matter of remembering. Or the choice to drop to the pavement and from that unexpected and seemingly powerless position to use the most powerful muscles in her body to sweep the robber's feet from under him, disable the gun arm, kick him in the groin, etc.--something that little old ladies have learned to do with training and forethought.

Once one has worked things through, reached conclusions, and (as in that last example) developed skills, all of this is in memory, ready to be connected to current experience through imagination on the basis of fast "gut" evaluations. The slower logic-level processes are not needed.

>>I wonder if people have more difficulty going up a level when emotion is
>>involved with their perceptions. I think that's true. What do you think?
>
>I think that when one's attention is on a lower-level problem, it becomes
>difficult to shift it to a higher-level viewpoint.

So it might be a good thing to be able to divide or temporarily redirect attention without suppressing, ignoring, or denying emotions, which are often involved in "grabbing" one's attention. Attention follows emotion, which is aroused when perceptions do not match expectations (references). The ability to hold a perception in memory, attend to something else, and then return to compare the two, seems important.

  Bruce Nevin

···

At 03:21 AM 01/14/2001 -0700, Bill Powers wrote:

[From Bill Powers (2001.01.16.0330 MST)]
Bruce Nevin (2001.01.15 11:18 EST)--

It sounds like you are arguing that conflicts like M vs. ~M can only be
resolved by logic. Perhaps when we are operating on the logic level it
looks that way.

In the case we're discussing, the problem is raised through language, which
is a high-level process. So I would not be surprised to find logic
involved. But logic could be involved with lower-level systems, too, in a
simpler organism that does not use discrete symbols.

It seems to me that there are at least three kinds of systems that
apprehend a threat and respond to it.

1. Fight-or-flight mechanisms.

Consider a simple organism, a slug perhaps, that is hungry and feeding.
Introduce a disturbance that could be due to a predator--vibration of the
surface on which the slug and food rest, a touch on its projecting sensory
stalks, a puff of air.

I think you're reading too much into the slug's perceptions when you say
that a disturbance "could be due to a predator." Of course _you_ know it
could be, but I doubt that the slug does. In fact I doubt that the slug
knows any reasons for anything: it's simply that slugs which react
logically to certain disturbances this way have survived to reproduce.

<If pre-made "decisions" preferring life
over short-term livelihood have evolved in very simple organisms, is there
some reason that they would have been abandoned in vertebrate, mammalian,
primate, or human evolution? For fight-or-flight situations, would not
delegation upward to a logic level be far too slow in emergencies?

You're projecting human verbally-implemented logic onto a much simpler
situation. There is nothing inherently slow about logic: look at computers,
in which logic operations are the fastest ones. What you describe the slug
doing _is_ a logic-type operation: if A or B or C occurs, freeze. That
situation and its outcome don't have to be described in words to qualify as
a logical operation: you just need to connect a few signals so they all
have the same final effect: a simple OR, which a single neuron can
implement. Once you realize how simple logic circuits are, there's no
reason to suppose they can't exist in a slug.
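
To put a number on how little machinery that takes, here is a toy
threshold unit in Python; the 0/1 coding and the threshold are arbitrary
illustrative choices, nothing more.

# A single threshold unit implementing OR: it "fires" if any input is
# active.  One such unit is all the slug's "if A or B or C, freeze"
# logic requires.

def neuron_or(a, b, c, threshold=0.5):
    return (a + b + c) >= threshold

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", neuron_or(a, b, c))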

"Fight-or-flight", by the way, refers to the "general adaptation syndrome",
which is a pattern of physiological adjustments that is common to many
different "emotional states" -- in particular, to those associated with
energetic attack (irritation, anger, rage) and those associated with active
avoidance (uneasiness, fear, terror). It was this sort of non-specificity
of feeling states, in part, that led me to propose my model of at least the
more acute emotions. Rather than treating emotion as some separate entity
that wars with logic, I proposed that the feeling states we associate with
emotions are our experience of the body's state of readiness to act (or
conserve energy, or whatever). It is error in control systems, I propose,
that leads to adjusting the physiological reference signals that determine
somatic states. We identify emotions not just in terms of sensed somatic
states, but in terms of the type of goal that is involved in creation of
the reference signal; if we want to attack, we call the sensed state
"anger," but if we want to escape, we call _exactly the same feelings_
"fear." It was shown a long time ago that the same physiological states
occur whether the behavior pattern is that of fight or flight. That's where
the term came from.
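
A very rough sketch of that proposal, with made-up numbers and
thresholds, just to show its shape: the somatic (readiness) reference
scales with error, and the emotion name comes from the kind of goal, not
from the feelings themselves.

# Error in a control system adjusts the physiological reference that sets
# the body's readiness to act; the SAME readiness is then called "anger"
# or "fear" depending on whether the goal is to attack or to escape.

def somatic_reference(error_magnitude, gain=2.0):
    # Larger error -> higher readiness reference (heart rate, adrenaline,
    # muscle tone), up to an arbitrary ceiling of 1.0.
    return min(1.0, gain * abs(error_magnitude))

def label_emotion(readiness, goal_type):
    if readiness < 0.2:
        return "calm"
    return {"attack": "anger", "escape": "fear"}.get(goal_type, "arousal")

readiness = somatic_reference(error_magnitude=0.4)
print(label_emotion(readiness, "attack"))   # anger
print(label_emotion(readiness, "escape"))   # fear -- same feelings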

If a
mechanism exists that evaluates threats to life with this immediacy (can I
defeat this or should I run?), is there a reason for it not to continue to
operate in parallel with slower mechanisms that formulate a rational strategy?

Your assumption that logic is slow is incorrect -- only some ways of doing
logical operations, such as through the medium of language, are slow. And
if you're speaking of a slug, I think that "evaluating threats to life" is
entirely too high-level a description of anything a slug can do.

2. Logic.

The victim hears "give me your money or I'll shoot you" and sees the gun.
To understand this would seem to require the perception M ^ S on a logic
level. (I'm substituting S for ~L hereafter. ~L is one imaginable causal
consequence of S.)

A logical disjunction (or other relation) in itself has no existential
significance, no visceral import.

I don't know what those words mean. It's perfectly easy to implement A XOR
B with a few neurons. I don't know what "visceral import" means. A coin
toss implements the XOR, although few people realize that when they do it.
A coin is either showing heads and not tails, or tails and not heads.
"Heads or tails" does not describe the actual logic of a coin-toss: if it
did, coins would frequently show both heads and tails.

Under my theory of emotion, grasping the logic of the words "your money or
your life", and correctly understanding that this means you can't have
both, would indeed result in a visceral response. You might well want to
escape and prepare to do so, and to attack and prepare to do so, and
prevent yourself from doing either if not just because of the direct
conflict then for higher-level reasons such as knowing that either course
would be fatal. It probably wouldn't take more than half a second to have
all these reactions and reach all these logical conclusions. I don't think
that the _fastest_ reactions to errors at any level take more than about
half a second. Of course there are much slower reactions, for example when
you have to use pencil and paper to figure out some complicated logic, or
go to school for 16 years to implement a system concept.

It is the disjoined propositions, in this
case M and S, that are connected to lower perceptions that matter. Put
another way, there is no basis within logic for preferring either M or S.

That's only because you have failed to propose any basis, since your
argument requires that none exist. If I think "Only live people can spend
money. The money won't do me much good when I'm dead," that provides a
perfectly reasonable basis in logic (one of many possible) for choosing ~M.

The evaluation that reaches a conclusion happens outside of logic. Logic
only specifies that there is a choice, and what the alternatives are.

But you're omitting the reference signal. In fact, you don't seem to be
considering PCT at all. Logic is a mode of perception, and it is possible
to select a particular state of a perception as a reference condition, and
act to remove the difference between the perception and the reference
condition. The "evaluation" part of this need take no longer than a neural
logic gate would need to operate -- a few tens of milliseconds.
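
A toy sketch of what I mean, with the particular gate and the canned
output invented purely for illustration: the input function computes a
truth value from lower-level perceptions, the reference is a desired
truth value, and the "evaluation" is a single comparison.

# A logic-level control system in miniature: perceive a logical function
# of lower perceptions, compare it with a reference truth value, and act
# only when there is a difference.

def logic_level_system(have_money, being_shot_at, reference=True):
    perception = have_money and not being_shot_at   # input function: M & ~S
    error = (perception != reference)               # comparator on truth values
    action = "act to restore M & ~S" if error else "no action needed"
    return perception, error, action

print(logic_level_system(have_money=True, being_shot_at=True))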

(Logic also allows inclusive adjunction of additional propositions without
limit, as this does not affect the truth value of the original statement.

That's irrelevant to the PCT model. A specific learned control system uses
a specific logical function as its input function, on data from specific
lower-level systems. It does not consider logical functions that are not
part of its input function.

One of the hallmarks of creativity is the ability to defer closure, to
store in memory what seems like a solution and continue looking for
alternatives that might be better.)

Now _that_ would probably take more time than simply carrying out an
already-learned logical control process. Anyway, I don't think that endless
dithering over possibly better solutions will do an organism much good in
an emergency.

3. Imagination.

The existential import of M and of S comes from playing out their
ramifications in imagination. Imagination develops consequences of keeping
or giving up the money, and consequences of the robber shooting him or not
shooting him. (These are causal consequences based on remembered
experience, like "if it rains, things exposed to it get wet," not logical
consequences, like "P; P->Q; Q" which hold given that P is TRUE,
irrespective of the existential content of P and Q.)

Your picture of imagination is much more intellectual (and verbal) than
mine. I think of turning my back to run and wham, getting shot. I don't
reason it out in words.

Are you denying (or maybe just ignoring) my proposal that logical
relationships exist in nature, and are not solely formal? "If it rains,
things exposed to it get wet" is an empirical fact that is correctly
described by "P; P -> Q; Q." If P is "it is raining" and Q is "things get
wet," then the following truth table holds in nature as well as in formal
logic:

        Q     ~Q
    +------------
  P |   T      F
 ~P |   T      T

The upper right corner is the only case where the implication is false: it
is raining and things do not get wet. The lower left corner is quite
feasible: it is not raining, but things are still getting wet (for some
other reason). So if things are getting wet, you can't conclude that it is
raining. That logic is fully borne out by experience.
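
For anyone who wants to check it mechanically, a few lines of Python
reproduce the same table; there is nothing here beyond the material
conditional itself.

# Material implication: false only when P is true and Q is false.
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5}  P->Q = {implies(p, q)}")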

It seems to me that systems of all three kinds respond to a threat at the
same time, each in their own fashion, and that each contributes input to
the other two.

You haven't really defined any systems, nor have you ruled out logic in
each case. You haven't actually proposed any mechanisms.

For each imagined consequence in (3), there is a visceral evaluation of the
sort that is involved in (1) above.

Can you specify what a "visceral evaluation" is, and how it works? I am in
deep doubt about the existence of such a thing.

The looking for alternatives that I
mentioned under (2) is a function of imagination in (3). The evaluation of
any logical consequence in (2) depends upon the connection to experience in
(3); and the preference of one imagined consequence over another depends in
part on whether you like it or not, which seems to me to be an evaluation
of the sort that we see in (1); but the basis for desire vs. aversion may
be something like "net error" in imagined control of the variables involved.

Whether you like it or not, in PCT, is determined by your reference level
for it, whatever it is. And that is determined by higher control systems
unless you're speaking of intrinsic variables.

With (1), the "visceral" arousal is immediate; perceptual scanning and
heightened attention follow, and their inputs are interpreted by higher
cognitive processes, whose interpretation of sensory inputs includes a
perceived character of the emotional arousal (fear, delight, anger,
relief). I thought this account was well accepted, and it seems to me to
fit with what has been proposed previously on CSG-net about emotion and
attention.

But this "well-accepted" account makes no sense to me. How can you have a
visceral arousal without perception and evaluation having occurred first?
Even a slug must perceive a touch before reacting to it (though,
considering PCT, I doubt that "reacting" is quite the right word). And its
evaluation of the touch surely comes from a reference condition, which, as
is also well-known, changes with repeated touches. If repeated touches
occur with no ill effects, the reference level for touch rises so that
future touches have less effect, or no effect (that is, so error signals
are not generated).
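
A toy version of that adaptation, with an arbitrary drift rate chosen
only for illustration: the reference for touch drifts toward the harmless
input, so the same touch generates less and less error.

# Habituation as reference adjustment: repeated touches with no ill
# effects raise the reference level for touch, shrinking the error
# (and hence the reaction) that each new touch produces.

def habituate(touch=1.0, reference=0.0, rate=0.3, repetitions=8):
    for i in range(repetitions):
        error = touch - reference
        reference += rate * error   # drifts toward the experienced touch
        print(f"touch {i + 1}: error = {error:.2f}")

habituate()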

All the advice one has heard to take a deep breath, count to ten, look
before you leap, is counsel to wait long enough for higher and slower
processes to catch up. By that time, the interpretation ("He's got a gun! I
haven't a chance! Give up the food -- umm, money, and get away!" or "This
is for my daughter! I don't know how, but he's not taking it!") may already
have been made, and then it's up to logic to fashion a reasonable path to
it. That's the way it looks to me, remembering heart-pounding situations of
both kinds that I have been in.

I'm sure that rationalizations after the fact do occur, as you say. But
that does not say this is _all_ that occurs at the logic level, as you seem
to be proposing. Your imaginary "visceral evaluation" seems to me a
dormitive principle with no mechanism behind it.

Does an evaluation of "M & L" as FALSE precede a decision for flight?
("Give up the goodies and get away!"). Or, more tellingly, does an
evaluation of
"M & L" as TRUE precede a decision to fight? ("This is for my daughter's
medical treatment! I don't know how, but he's not taking it!"). And if so,
why? They can't both be purely logical conclusions. OK, you said "given
other premisses," but whence those other premisses?

You seem to have forgotten that I agreed that M&L is not the original
controlled perception.

The other premisses come from the same places as the first premisses. You're
trying to anticipate and knock down counterarguments, but why? What's the
conclusion you want to reach? I get the feeling that you're convinced
you're right about all this and are just looking for ways to prove it. But
in an argument that uses terms like "visceral evaluation," that's all too
easy. What's your model? How does it work? What mechanisms are you
proposing that make it work? This whole discussion has got fuzzier and
fuzzier until I'm just not willing to put out the effort it takes to try to
make sense of it. It's taking too many words to say what the argument even
_is_. Isn't there some way we can cut to the chase?

Best,

Bill P.

P.S. For our non-American colleagues, "cut to the chase" is a movie-making
term. In certain types of movies, there is a scene just before the end
where somebody is chasing someone else, and right after that comes the end:
the bad guys are caught, or die, or the boy gets the girl. When the film is
edited, the editor might decide to cut out slow parts leading up to the
chase.