modeling imagination

[Hans Blom, 970923b]

(Bill Powers (970918.0926 MDT))

>> Now contrast the "imaginary mode" controller with the "perceptual
>> mode" one:

>>   >I>     >O>      the controller      |I|     |O|
>>    ^       |       --------------       ^       |
>>    p       a         its world          p       a
>>     \-|W'=1|-/                           \--|W|-/

>> The diagram on the right makes sense to me: the controller acts on
>> the world and it perceives its "reaction", the goal being to
>> control for ("compute", some say) those actions a that result in
>> the desired perceptions p.
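
In code, the contrast might look like the following minimal sketch
(Python, assuming a scalar world and an integrating output function;
all names are illustrative, not taken from the posts):

def run_loop(reference, world, steps=100, slowing=0.1):
    """One scalar control loop; the feedback path is W or W' = 1."""
    action = 0.0
    perception = world(action)
    for _ in range(steps):
        error = reference - perception       # r - p
        action += slowing * error            # integrating output function O
        perception = world(action)           # perception returns via the path
    return perception, action

real_world  = lambda a: 3.0 * a              # right diagram: W (gain 3 assumed)
imagination = lambda a: a                    # left diagram:  W' = 1, so p = a

print(run_loop(10.0, real_world))            # p -> 10.0, a -> 10/3
print(run_loop(10.0, imagination))           # p -> 10.0, a -> 10.0: p copies a

Both loops make p track the reference; the difference is what the
action has to be, which only the real W determines.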

> That's because you keep thinking of the imagination connection as
> representing a REAL world. For your concept of a behavioral model,
> that's necessary. In PCT it is not necessary; the function of
> imagination is not to model the world, but only to provide a way of
> saying "If I _could_ control this perception perfectly, would the
> result be what I want?"

First, you misunderstand me: I attempt to understand the inherent
meaning of PCT's imagination mode model, in its own context (where
one is provided, and up to my own limits of being theory-bound). And
so I point out what I don't understand. Is that bad?

I also don't understand what you say above. If the question is "If I
_could_ control this perception perfectly, would the [perceptual]
result be what I want?", the answer is an unequivocal "yes". It is,
moreover, a tautology: if you perfectly control a perception, the
perception matches the reference, by definition, and having the
perception match the reference is, also by definition, what you want.
Since a tautology is, by definition, correct, I cannot but agree with
you when you utter one. On the other hand, a tautology doesn't tell me
much. It is, by the way, not very efficient for an organism (or a
controller) to have to implement a tautology with some internal
mechanism.
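
The tautology can be shown numerically; a minimal sketch, assuming the
same kind of scalar integrating loop as above (names are illustrative):

def imagine(reference, steps=300, slowing=0.1):
    """Imagination mode: W' = 1, so the perception IS the action."""
    action = 0.0
    for _ in range(steps):
        action += slowing * (reference - action)   # error = r - p, and p = a
    return action                                   # the settled perception

for r in (-5.0, 0.0, 7.5, 100.0):
    p = imagine(r)
    assert abs(p - r) < 1e-6    # p = r for ANY r: "yes" by construction
    print(f"r = {r:6.1f}  ->  imagined p = {p:6.1f}")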

> If you keep trying to interpret the HPCT model in terms of your
> single-level model-based control system, you are never going to
> grasp how the PCT model works.

I can only "grasp how the PCT model works" by connecting it with what
I already (think I) know. And I certainly don't wish to constrain
myself to single-level model-based control systems. But if I discover
conflicts/inconsistencies, I want to know why. Is that bad?

> Since I predict that you will keep doing the former, I also predict
> that the latter will continue to be true.

There are indeed some "truths" that I don't give up easily (although
I know these truths aren't exactly true either). One of them is that
a (simple) theory cannot be correct if it is internally inconsistent.
That's like saying "it rains but it doesn't rain". You _can_ say it,
but it doesn't make much sense. That perception = action (if only in
imagination mode) appears to me to be inconsistent with much else in
PCT. Where do I go wrong? Or don't I? That's my question...

Greetings,

Hans

[From Bill Powers (970924.0315 MDT)]

Hans Blom, 970923b--

>>> The diagram on the right makes sense to me: the controller acts on
>>> the world and it perceives its "reaction", the goal being to
>>> control for ("compute", some say) those actions a that result in
>>> the desired perceptions p.

>> That's because you keep thinking of the imagination connection as
>> representing a REAL world. For your concept of a behavioral model,
>> that's necessary. In PCT it is not necessary; the function of
>> imagination is not to model the world, but only to provide a way of
>> saying "If I _could_ control this perception perfectly, would the
>> result be what I want?"

> First, you misunderstand me: I attempt to understand the inherent
> meaning of PCT's imagination mode model, in its own context (where
> one is provided, and up to my own limits of being theory-bound). And
> so I point out what I don't understand. Is that bad?

No, it wouldn't be bad if that were what you're doing. But you're
re-interpreting the PCT model in terms of your MCT model and THEN
trying to make sense of it, which is certainly not trying to
understand it "in its own context."

> I also don't understand what you say above. If the question is "If I
> _could_ control this perception perfectly, would the [perceptual]
> result be what I want?", the answer is an unequivocal "yes".

Not at all. If I could control for having a big bouquet of flowers when I
get home, would that placate my wife's outrage at finding that I'm bringing
the boss home for dinner without telling her? Probably not, so I won't even
try to buy the bouquet; the fact that there aren't any florists open is
then irrelevant. Imagining at one level is a way of checking out possible
control strategies at higher levels. If I were to decide that my wife would
forgive me if I brought the flowers, I might then tackle the problem of
where to get some flowers at this time of night.
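
One way to make this concrete: a sketch in which the lower level is
imagined to control perfectly (the imagined perception simply equals
the lower-level reference), and a higher level tests that imagined
outcome before any real action is taken. The predicate and strings
below are invented for illustration; the posts specify no mechanism:

def imagine_lower_level(reference):
    # Imagination connection: the imagined perception IS the reference.
    return reference

def placates_wife(imagined_perception):
    # Stand-in for a higher-level comparison; in HPCT this would be
    # another control loop, not a hand-written predicate.
    return "warning" in imagined_perception

for strategy in ("a big bouquet of flowers",
                 "a phone call giving advance warning"):
    if placates_wife(imagine_lower_level(strategy)):
        print("worth attempting for real:", strategy)
    else:
        print("rejected in imagination; no florist needed:", strategy)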

>> If you keep trying to interpret the HPCT model in terms of your
>> single-level model-based control system, you are never going to
>> grasp how the PCT model works.

> I can only "grasp how the PCT model works" by connecting it with what
> I already (think I) know. And I certainly don't wish to constrain
> myself to single-level model-based control systems. But if I discover
> conflicts/inconsistencies, I want to know why. Is that bad?

You're not trying very hard.

> There are indeed some "truths" that I don't give up easily (although
> I know these truths aren't exactly true either). One of them is that
> a (simple) theory cannot be correct if it is internally inconsistent.
> That's like saying "it rains but it doesn't rain". You _can_ say it,
> but it doesn't make much sense. That perception = action (if only in
> imagination mode) appears to me to be inconsistent with much else in
> PCT. Where do I go wrong? Or don't I? That's my question...

All right. Now you have the answer.

Best,

Bill P.