[Hans Blom, 970917]
(Rick Marken (970916.0830))
do you...propose that it would be fitting (even if only in
imagination) that we, in our imagination, represent the
world by the equation output = input
No.
Yet that is what your diagram shows. Draw it side by side with the
standard PCT control diagram and note the difference and similarity.
Where in the standard diagram we have the "world", in your
imagination mode diagram you have simply a connection. Thus, you
effectively say that "in imagination" the world is replaced by a
simple connection. That is, the imaginary world (the connection) has
the property that actions on it just result in perceptions of those
actions from it. In other words: you perceive -- in imagination --
only how you act. Do you see what I mean?
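To make this concrete, here is a toy simulation (my own sketch; the gains and numbers are arbitrary) of the two diagrams side by side: a proportional loop closed through an environment function, and the same loop with the world replaced by a wire, so that perception = output.

```python
def environment(output, disturbance):
    # hypothetical environment function: perception depends on the
    # system's output plus an independent disturbance
    return output + disturbance

def control_step(reference, perception, gain=0.1):
    # one step of a proportional controller
    return gain * (reference - perception)

reference, disturbance = 10.0, 3.0

# Standard loop: the world sits between output and perception.
output = 0.0
for _ in range(200):
    p_world = environment(output, disturbance)
    output += control_step(reference, p_world)
out_world = output

# "Imagination mode" as drawn: the world is replaced by a wire.
output = 0.0
for _ in range(200):
    p_imag = output                # perception = output, nothing else
    output += control_step(reference, p_imag)
out_imag = output
```

In the closed loop the perception settles at 10 while the output settles at 7, because the output must oppose the disturbance. In the short-circuited loop output and perception are one and the same signal: you perceive, in imagination, only how you act.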
what is the relation between that imaginary world and the
real one?
By "real one" I presume you mean "perceived one".
No. The "real one" would be -- well, how to say it -- the "perceived
one" while we do not perceive it. That there is something like that
is indubitable. Sit, take a look at what you see, then close your
eyes. If all is well with you (some people can do this better than
others) you still have an imaginary "picture" of what you just saw.
That picture is not the same as what you saw while you perceived, it
has a different quality ("darker", less vivid, less detailed). Yet it
is there. My question was about the relation between this imaginary
picture and the real one, that you perceived just before you closed
your eyes. And about how that difference is to be represented in a
diagram. If I analyze your proposal, I see weird implications.
But maybe that is just my imagination; maybe you are correct while I
just don't see it.
The imaginary world is the same as the perceived world to the extent
that all lower level control is possible and perfect.
How about the lowest level, where no lower level controllers are
present? Is there imagination at that level (intensity)? That there
can be "imaginary perception" of intensity (e.g. muscle tension or
pressure on the skin) seems to be demonstrated in local anesthesia,
where all direct perceptions in a certain body area are abolished,
yet we can remember "what it was like". We even generalize into "what
it _is_ like", even though direct perception is abolished (this
sometimes even creates confusion: "I _should_ feel ... but I don't").
You can imagine waving your arms and flying but you can't perceive
it because you cannot produce the lower level perceptions (of lift,
mainly) that would produce this perception in fact.
I accept, of course, that there is a difference between direct
perception and imagination. That is not the point. The point is how
(H)PCT models that difference -- and whether the predictions of that
model are consistent with our observations. Simply replacing the
"world" with a short-circuit won't do, I'm afraid.
Then you have a most bizarre way of searching since you completely
ignore experimental evidence.
There may be explanations for that, Rick. Maybe I perceive different
-- or even additional? -- "experimental evidence". Maybe your facts
aren't mine: facts are in the eyes of the beholder, as we know from
PCT. And maybe it's true that I'm bizarre. If so, so be it. Can we
leave it at that? And maybe at the generalization (fact?) that the
world is full of people that we do not understand?
You seem like a kid playing the "hot and cold" game while
_intentionally_ ignoring all the cries of "hot" and "cold"
(the experimental evidence) around you because you _know_ that
the truth is hidden in that great big "model-based control" box.
Let's take this statement at face value and consider when such
behavior would be optimal for a kid. One possibility: the cries of
hot and cold "help" the kid toward a goal that is not the one the kid
has. In such cases, the "help" is not "help" at all but mere noise
that is best disregarded. I experience you that way when you offer
what I see as oversimplifications as the final truth.
i.kurtzer (970915) --
Are you suggesting that a monkey knows about tensile strength as it
swings on a vine?
Hans Blom (970916d) --
Certainly.
I thought it was the search for truth that mattered to you. But here
you say you are _certain_ that this highly unlikely proposition is
true.
You have a way with words, Rick. The term is "quoting out of context",
I believe. If that is the way in which you collect your basic facts,
you delude yourself.
After my "certainly" I went on to explain what I meant in more
detail. I pointed at the type of knowledge about the world that AI
calls "procedural knowledge", where one can _do_ something well but
is unable to put it so adequately into words that this knowledge can
be transmitted to someone else. Riding a bike is an example; you can
try, to the best of your abilities, to tell someone who has never
ridden a bike how to do it, but to little avail: the other just won't
be able to get on that bike and ride off as fluently as you can.
It appears to me that a great deal of our human knowledge has this
form. And I think that it is important to recognize this: although we
frequently know something extremely well, we experience the
powerlessness of our words when we try to transmit this knowledge to
others. That applies not only to riding a bike but even to very
explicit "declarative knowledge" such as mathematics (or PCT, as you
must have noticed ;-). As the majority of students has to learn,
"understanding" the lectures is not enough; even in a logical domain
such as math it takes practice, practice, practice (solving all those
silly problems) before you can show that you "own" the subject matter
and are able to _use_ it.
Students of adaptive controllers meet this phenomenon in practice:
often, the knowledge that the controller collects of its environment
function is "biased", incorrect, incompatible with reality (as we
external observers know it). Yet the quality of control is fine. In
this (admittedly limited) context it can be empirically demonstrated
that it is not the knowledge per se that is important, but the usage
of that knowledge for the purposes of the controller. The situation
is even more absurd: even if you provide the controller with the
"truth" (unbiased parameters), it will subsequently return those
parameters to their previous biased values. The controller obviously
does not "control for" the truth; it only controls for achieving its
goals, irrespective of what the truth is (the situation is not as bad
as it sounds; normally discrepancies are not that large).
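This phenomenon is easy to reproduce numerically. The following toy sketch (my own construction, not any particular adaptive controller from the literature) controls a world y = a_true*u + d through a model y = a_hat*u that knows nothing about the offset d:

```python
a_true, d = 2.0, 1.0    # the real world: y = a_true * u + d
r = 10.0                # reference (goal) value for y
a_hat = 1.0             # the controller's model: y = a_hat * u (no d!)
mu = 0.001              # adaptation rate

for _ in range(20000):
    u = r / a_hat               # control action based on the model
    y = a_true * u + d          # what the world actually does
    e = y - a_hat * u           # prediction error of the model
    a_hat += mu * u * e         # gradient step on the prediction error

a_hat_biased = a_hat                          # settles near 2.22, not 2.0
y_final = a_true * (r / a_hat_biased) + d     # yet y tracks r almost exactly

# Now hand the controller the "truth": control gets worse, not better,
# and the very next adaptation step pushes a_hat back toward its bias.
a_hat = a_true
u = r / a_hat
y_with_truth = a_true * u + d                 # misses the reference (11.0)
e = y_with_truth - a_hat * u                  # positive prediction error ...
a_hat += mu * u * e                           # ... drives a_hat back up
```

The estimate settles near 2.22 rather than the "true" 2.0, yet control is essentially perfect: the bias silently compensates for the unmodeled offset. And when you force the true parameter onto the controller, control worsens and adaptation immediately starts returning the parameter to its biased value.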
Have you done experiments to show that this is true? Have you even
tried the experiment at
http://home.earthlink.net/~rmarken/ControlDemo/OpenLoop.html
Yes Rick, I did. But my interpretation is, as so often, different
from yours. I did pretty well (far better than random) on the
proportional case, demonstrating that I did have a good internal
model. I did much worse on the integral case (although better than
random), demonstrating that now I did not have a good internal model.
My obvious conclusion: sometimes my model is pretty good, sometimes
not. Why is that? What are the limitations of our internal models?
Another problem that arose from your demo was how to measure the
"goodness of control". What I observed was, often, that the sum of
squared deviations was not descriptive enough to "explain" my
behavior; my behavior had systematic components/patterns that were
qualitatively correct although there was a fair amount of "drift"
(akin to integrator leakage).
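A toy computation (hypothetical numbers, not data from the demo) of what I mean: a response that reproduces the target's pattern but drifts slowly away, as with a leaky integrator, scores badly on summed squared deviations while a pattern measure such as the correlation stays high.

```python
import math

n = 200
target = [math.sin(2 * math.pi * t / 50) for t in range(n)]
drift = [0.005 * t for t in range(n)]             # slow drift, max 1.0
response = [x + d for x, d in zip(target, drift)]

# Summed squared deviations: dominated entirely by the drift.
sse = sum((x - y) ** 2 for x, y in zip(target, response))

def corr(a, b):
    # plain Pearson correlation coefficient
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

r = corr(target, response)
# sse is large (larger than the target's own summed power of 100),
# while r stays above 0.9: the systematic, qualitatively correct
# component survives in the correlation but is invisible to the sse.
```

So a single number like the sum of squared deviations can declare control "poor" while the qualitative pattern of the behavior is almost perfectly right.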
"Lying with statistics" was a hot item once. That discussion had as
its major conclusion that different ways of presenting the data could
lead to very different conclusions. And that if the data or
conclusions were presented in a particular way, you could bet your
life that this presentation greatly favored the goal that the
investigator had had all along. Since PCT, we know why that is true.
It helps to remember this rather large goal-directed subjectivity of
ours, once in a while.
I'm not sure that I told you what you wanted to hear. Isn't that
bizarre?
Greetings,
Hans