Extrinsic views; venting; grinding down the stone

[From Bill Powers (931119.1315 MST)]

Oded Maler (931119.1745) --

Well, you know, "Europeans that have a degree in some
mathematical science"..

Yeah, right. I never could tell you Europeans apart.


-----------------------

So by trying to look at the "extrinsic" results of a behavior,
I do not necessarily commit myself to a sequential S-R, non-
circular cause-effect philosophy.

Looking at the extrinsic results is all part of modeling and we
all have to do it. The most striking aspect of my modeling
experiences, however, is how differently we see the behavior from
the way the controlling system itself sees it. In the arm model,
there are no controlled perceptions corresponding to what I see
when I look at the arm from the outside. I'm seeing a bunch of
consequences of moving and pointing the arm that are completely
irrelevant to the control systems involved -- side-effects.
They're very logical side-effects of course, and I can see how
they're connected to what the control system is doing. But the
view I get and the interpretations I put on what I see are simply
not part of what the control system itself is perceiving and
controlling.
---------------------------------------------------------------
Dan Miller (931118.1430) --

I think Bruce has the answer. The relief you feel at "venting
feelings" starts just BEFORE you do it, at the moment you resolve
the conflict that's been keeping you from speaking out. That's
what allows you to speak, and also what you feel good about.
---------------------------------------------------------------
Martin Taylor (931118.1440) --

A long list of things we agree about; good. The remaining points
will probably dissolve into "Oh, is that all you meant?" So let's
keep grinding away at it.
----------------------------------

On CEVs and perceptual signals, I must be suffering from some
mental blind spot, because I really can't see what we are
arguing about. What you write rings absolutely true with me.
It generates no error signals. I just wish I could have written
it as clearly as you do. And yet you have some strong
disagreement with me. I can't find where it is.

Perhaps some of it comes down to this: I prefer to remain
noncommittal about the true nature of the physical world in which
we behave. All we know about it has been synthesized in our
brains out of sensory impressions. The actual world outside us,
in which I devoutly believe, may not be at all like what we
experience of it. We use models of it in constructing
simulations, but those models, too, must be subject to the same
transformations that affect our senses all of the time.

I've started the back-of-the-mind machinery going on putting
together a demo of what I mean. What I want to do is project a
perspective view of a tesseract (four-dimensional cube) onto the
computer screen, and connect the control handle so it rotates the
cube about some axis that has an extension in the fourth
dimension. This rotation will show up on the screen as some
continuous variation in the geometric figure we see. I am quite
sure it will be possible to control that configuration and make
it track changes in a reference configuration shown beside it.

We will be controlling, of course, a two or three dimensional
figure (three if the perspective succeeds in suggesting depth).
But what we are _affecting_ in the "real" world is a four-
dimensional figure. Since we can't, as far as I know, actually
perceive in four spatial dimensions, the best we can do is
control the projection of the actual 4-figure into the three
dimensions that our perceptual systems can handle.

So while there is a real (simulated) four-dimensional figure out
there, and our actions are affecting it in the added dimension,
we can know nothing of any effects but those that show up in the
3-D projection. Nevertheless, we can learn to control what we
perceive, even if it's a distorted representation of what is
actually there.

The only hitch in actually doing this is my abominable slowness
in working out the transformations. If anybody with more facility
at this sort of thing wants to lend a hand, I will be very
grateful.

Do you know if it is possible to take out an insurance policy
against the possibility of sudden disappearance while seated
before my computer?

This isn't directly relevant to our problem, but perhaps you can
see why I think there is some connection.
----------------------------
My problem really began when we started talking last Spring about
"information about the disturbance." I saw eventually that the
real problem was "information about the CEV." When you have only
a perceptual signal to work with, it may contain information, but
to say that this information is "about" something other than the
perceptual signal is to say it is about a CEV that corresponds to
the perceptual signal, as if the CEV had separate existence (the
way we draw it in our diagrams). So we talk as if there is a real
CEV and the perceptual signal represents it. This gives us the
impression that the CEV is some independent entity that is
"recognized" by the input functions, as if the CEV had some
degree of _independent_ existence.

To say that the CEV has independent existence is not just saying
that there is a real external world; it's saying that that
external world is what we experience, which goes a great deal
farther than I am prepared to go. When I'm not just being a
sloppy old modeler and taking physics for granted, I like to
withdraw inside the controlling system and try to see the problem
of control as it must appear to that system, which has no access
to any of the things I model outside of it.

When I do that, there's no such thing as the "real bed" in the
darkened bedroom. The reality of what my perceptions are telling
me is irrelevant to solving my control problem of not barking my
shins. The only way I can solve this problem is to construct a
map inside my head, in which I place things (and myself)
according to my perceptual experiences. Even when I can see, many
of the objects on this map have locations without being seen,
such as the doorway I just came through, and the bottoms of my
feet and my hip pocket. All that changes when the lights go out
is that I have to maintain the map without as many updates. This
map is what I call the "real bedroom."
--------------------------
There's one more point. You seem to be having difficulty with the
idea of controlling a perceived relationship between you and the
bed when you can't see where the bed "really is." Let's get rid
of part of the problem first. Even in the light you can't control
where the bed is, really or otherwise, without seizing it and
moving it. But you can still control your relationship to the
bed, because you can vary the other term: your own position. The
fact that the bed itself is uncontrollable, or that the
perception of the bed's position is uncontrollable, is not a
problem.

You can still control your relationship to your perception of the
bed with the lights out, can't you? You can look at your map, and
say "Here's the bed, and here I am, so if I move THAT way I will
make the distance between the me-object and the bed-object
smaller." You can place the me-object using kinesthetic
information as you walk. The bed-object stays where it was the
last time the map was updated.

So the control process can occur in the usual way. It doesn't
require any change of organization when the lights go out. It
doesn't suddenly need a "feedforward" connection that it didn't
have before. Just acting as an ordinary control system, it's
doing as well as can be done.
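
Here, just to make the arithmetic concrete, is a minimal sketch in C of
that ordinary control process (the numbers and names are arbitrary
illustrations of my own, not anything from the arm model): the bed-object
sits wherever the map last put it, the me-object is moved on the map by the
same outputs that move the body, and the loop controls the perceived
distance between them.

/* Minimal illustrative sketch: controlling the perceived distance between
   a me-object and a remembered bed-object on an internal map. The bed's
   mapped position stays where the last update left it; the me-object is
   updated from the same outputs that move the body (kinesthesis). */

#include <stdio.h>

int main(void)
{
    double bed_map = 10.0;   /* bed position as last recorded on the map */
    double me_map  =  0.0;   /* me-object, updated kinesthetically       */
    double ref     =  1.0;   /* desired distance from the bed            */
    double gain    =  0.2;   /* output gain per iteration                */

    for (int t = 0; t < 40; t++) {
        double perceived = bed_map - me_map;   /* perception read off the map */
        double error     = perceived - ref;
        double output    = gain * error;       /* act: step toward the bed    */

        me_map += output;   /* walking moves the body AND the me-object       */

        printf("t=%2d  perceived distance = %6.3f\n", t, perceived);
    }
    return 0;
}

Nothing in this loop changes when the lights go out; it converges on the
mapped bed-position either way. Whether the real bed is still where the map
says it is, is the separate problem taken up next.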

The control problem is not in the control system, but in the
perceptual situation. In fact, in the Boss Reality world, the map
is rapidly going out of date. Someone has quietly sneaked out of
the bedroom carrying the bed. The bed in the map being used for
control should also be moved out of the bedroom, but it is still
where it was. There is nothing that can be done about that using
feedforward, feedsideways, or feed-upside-down. You can't improve
the design of the control system enough to cure that problem.
----------------------------
Notice how easy it is to give the CEV an independent existence:

All I have been trying to say is that there was a question as
to whether some particular output TO THE WORLD was affecting a
controlled CEV--one for which there was a perception that was
being stabilized at its reference value.

If the output to the world isn't affecting a perception, it's not
affecting the "controlled CEV" either. The effect of the output
on the CEV is identically the effect of the output on the
perception. The moment you imagine that there can be a
difference, you have started to treat the CEV as if it goes on
existing when it's not being perceived. _Something_ may go on
existing, but we never perceive that anyway.

And again:

Me:

It is not necessary for BOTH inputs to a relational perceptual
function to be real in order for closed-loop control of a
relationship perception to exist.

You:

In imagination, but it is necessary for them both to be real
for control of a perception of the real-world CEV.

It is very hard to stick with the control-of-perception paradigm.
You said earlier,

What is brought to one state is the output of a single-valued
function, not the arguments of that function, as you seem to
want to read me as claiming.

That is how I see it, too. Only the value of the function is
controlled: the value of the function is the value of the CEV.
Outside the function, all there are are arguments. No one of them
is the CEV. The output of the function is a flock, but the inputs
are just particular birds. If the input function ceases to work,
there is no flock; there are only birds.

So the input arguments to the "relationship" function are a
perception of a bed and a perception of yourself. There is no
relationship at the input. The relationship is the value of the
signal generated by the input function that combines positions to
create a perception of a relationship -- a distance. The CEV is a
perception, not an item in the bedroom. There is no real-world
CEV to which you can compare the perception to see if it is
right. The CEV and the perception are the same thing.
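
To put the same point as a fragment of code (purely illustrative; the names
are mine), the relationship exists only as the value the input function
returns, never at its inputs:

#include <stdio.h>
#include <math.h>

/* Illustrative only: a "relationship" perceptual input function.
   Its arguments are perceptions of two positions; its returned value is
   the perceptual signal -- the distance. Nothing on the input side IS
   the relationship; it exists only as the output of the function. */
double relationship_pif(double me_position, double bed_position)
{
    return fabs(bed_position - me_position);   /* the flock, not the birds */
}

int main(void)
{
    printf("perceived distance = %f\n", relationship_pif(2.0, 10.0));
    return 0;
}

If relationship_pif is never computed, there is no distance anywhere in the
system; there are only the two positions.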
--------------------------
I completely understand what you are trying to say: that the
actual, real relationship can't be controlled if you can't see
it. But try on the viewpoint I've been expressing. The
explanation of how the system works can't depend on what we know
about the real situation. Everything has to arise from what the
behaving system knows.
------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 931122 13:40]
(Bill Powers 931119.1315)

Mail sometimes comes in a funny ordering. This arrived about noon 931122.

I've started the back-of-the-mind machinery going on putting
together a demo of what I mean. What I want to do is project a
perspective view of a tesseract (four-dimensional cube) onto the
computer screen, and connect the control handle so it rotates the
cube about some axis that has an extension in the fourth
dimension. This rotation will show up on the screen as some
continuous variation in the geometric figure we see. I am quite
sure it will be possible to control that configuration and make
it track changes in a reference configuration shown beside it.

That's very nice. If I am right that you can control only what you perceive,
and that reorganization yields controllable perceptions, people who can
do the task effectively should have achieved some kind of a perception of
a four-dimensional object. If so, then they should easily be able to
learn to control rotations of other four-dimensional objects about the
same axis, even though the 2-D projection will look very different.
Neat.

The only hitch in actually doing this is my abominable slowness
in working out the [4D] transformations. If anybody with more facility
at this sort of thing wants to lend a hand, I will be very
grateful.

I'm not guaranteeing it, but I _think_ that potted algorithms in C exist for
doing this stuff. Somebody may know better than me. I'll enquire around.
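
In the meantime, here is my rough guess at the kind of routine involved
(untested; the choice of the x-w rotation plane, the viewing distance, and
all the names are mine):

/* Rough, untested sketch: rotate a 4-D point in the x-w plane, then
   project it to 2-D screen coordinates by two successive perspective
   divisions (4-D -> 3-D, then 3-D -> 2-D), with the eye at distance d. */

#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z, w; } Point4;

/* Rotation by theta (radians) in the x-w plane; y and z are unchanged. */
Point4 rotate_xw(Point4 p, double theta)
{
    Point4 q = p;
    q.x = p.x * cos(theta) - p.w * sin(theta);
    q.w = p.x * sin(theta) + p.w * cos(theta);
    return q;
}

/* Perspective projection: scale by d/(d - w) to get 3-D coordinates,
   then by d/(d - z) to get the screen coordinates sx, sy. */
void project_to_screen(Point4 p, double d, double *sx, double *sy)
{
    double k4 = d / (d - p.w);
    double x3 = p.x * k4, y3 = p.y * k4, z3 = p.z * k4;
    double k3 = d / (d - z3);
    *sx = x3 * k3;
    *sy = y3 * k3;
}

int main(void)
{
    Point4 corner = { 1.0, 1.0, 1.0, 1.0 };    /* one vertex of the tesseract */
    for (int step = 0; step < 8; step++) {
        double sx, sy;
        project_to_screen(rotate_xw(corner, 0.1 * step), 3.0, &sx, &sy);
        printf("theta = %3.1f   screen = (%6.3f, %6.3f)\n", 0.1 * step, sx, sy);
    }
    return 0;
}

Rotating all 16 vertices (+/-1, +/-1, +/-1, +/-1) by an angle driven by the
handle, projecting each this way, and redrawing the edges should give the
kind of display you describe.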

This isn't directly relevant to our problem, but perhaps you can
see why I think there is some connection.

Oh, yes indeed. Maybe not the same connection you see, though. The
connection I see is through the reorganization process that leads to
the characterization of a PIF (->CEV) that gives a controllable perceptual
signal.


============

To say that the CEV has independent existence is not just saying
that there is a real external world; it's saying that that
external world is what we experience, which goes a great deal
farther than I am prepared to go.

I don't think it does say this. I wouldn't want to go nearly that far
myself. I limit myself to something like (and I may well want to rephrase
this later):

  The independent existence of the CEV is attested by the controllability
  of the perceptual signal to which it corresponds. Actions that are
  presumed to have effects in the real world can reliably bring the
  perceptual signal toward its reference value, at least within a reasonable
  range of reference values much of the time. The nature of the CEV is
  unknown, but, whatever it may be, it is stable over periods of time
  long compared with the time that the control system takes to bring the
  perceptual signal near its reference value.

  The environment imposes some function of time and possibly other unknown
  variables (lumped together as "disturbance") on the effects of the output
  of the control system on the perceptual signal. The PIF reflects not the
  whole environment function, but only its value at the present time.

  The only reality that can be observed is this function, and even that
  reality cannot be perceived by the ECS that controls through this CEV.
  It could be perceived by another entity that can sense both the output
  and the perceptual signal of the ECS. The PIF reflects a projection of
  the function onto present time, and therefore reflects an aspect of
  "reality."

  There is no way that the perceptual signal in an ECS can be shown to be
  "veridical." All that can be shown is that the perceptual signal is
  controllable with greater or lesser reliability. Reorganization depends
  on the reliability of control of a wide range of perceptual signals in
  different ECSs in the hierarchy, and is effective only insofar as control
  works in the real world. Control of imaginary perception is ineffective
  in ensuring that intrinsic variables stay within tolerable limits, because
  they are necessarily affected by real-world processes. This assumption
  demands only that the organism interact with a real world; it asserts nothing
  about the nature of this interaction. Reorganization may affect the
  control of imagination-based perceptions, but must affect the control
  of real-world based perceptions.

  In a mature hierarchy, the reliability of control of perceptions based on
  lower reliably controlled perceptions attests to the "reality" of the
  CEVs being controlled.

I don't know whether the above makes any sense to you. It does to me. It
can be paraphrased as "The CEV must be more or less real because the perceptual
signal can be controlled." In that sense, the CEV does "have an independent
existence."

It is through the independent existence of CEVs that we can communicate about
things like "beds" and "rooms." We perform versions of The Test on each
other by manipulating our own perceptual signals/CEVs and detecting whether
others act so as to resist our (to them) disturbances. If the CEV had no
independent existence, such interactions would be impossible.

You seem to be having difficulty with the
idea of controlling a perceived relationship between you and the
bed when you can't see where the bed "really is." Let's get rid
of part of the problem first. Even in the light you can't control
where the bed is, really or otherwise, without seizing it and
moving it. But you can still control your relationship to the
bed, because you can vary the other term: your own position. The
fact that the bed itself is uncontrollable, or that the
perception of the bed's position is uncontrollable, is not a
problem.

The problem is not whether the position of the bed is uncontrollable, but
that if it is _unperceivable_ it cannot be part of a controllable relational
perception. I can control the relation perception in imagination all I
want, but I can use that (imaginary) relation perception as input to other
higher perceptions only to the extent that the map is stable.

==================

I'm beginning to think I understand some of what Hans is talking about.
Maybe it is the same as what you have been saying.

Rather than trying to paraphrase him or you, let me describe what is becoming
less vague (but still not quite clear).

The standard control loop is:

(ref-) percept->error->output->environment (+ disturbance)->CEV->percept

What I am beginning to think is that "percept" in the first element
should always be "imag.percept" (i.e. the current imagined perception), and
there should be another arrowed phrase at the end: "->imag.percept"

(ref-) imag.percept->error->output->env (+dist) ->CEV -> percept
                               |                             |
                               v                             v
                         map function -----> (ref-) imag.percept

The idea is that the current perception is always updating the imaginary
perception. So long as there is sensory data available, the imaginary
perception is the same as the current perception, but as the sensory
data fades or is eliminated, it is still there, and the map operating
on the output continues to change it, permitting a continuity of control
during periods of sensory blackout. There is no switching between control
through the real world CEV and through the map function. But disturbances
cannot be resisted when control is through the map. The output to the
real-world CEV is open loop, the CEV->percept arrow being cut out.

At the same time, the current perception is updating the map function, as
Hans has been saying. But this requires a different mechanism that is not
part of the depicted control loop. It relates to the control loop in the
same way as reorganization relates to it.
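
A minimal numerical sketch of what I mean (the linear "map" and all the
parameter values are placeholders of my own, nothing more):

/* Placeholder sketch: control through an imagined perception. While sensing
   is available the imagined perception simply tracks the current perception;
   during a sensory blackout (t = 20..39) the map propagates it from the
   output, so control continues smoothly, but a disturbance arriving during
   the blackout is not resisted until sensing resumes. */

#include <stdio.h>

int main(void)
{
    double ref = 5.0, gain = 0.3;
    double output = 0.0, disturbance = 0.0;
    double cev = 0.0;            /* real-world CEV = output + disturbance       */
    double imag_percept = 0.0;   /* the perception the comparator actually uses */

    for (int t = 0; t < 60; t++) {
        int sensing = (t < 20 || t >= 40);      /* lights out for t = 20..39   */
        if (t == 25) disturbance = 2.0;         /* disturbance during blackout */

        double error = ref - imag_percept;
        double delta = gain * error;            /* change in output this step  */
        output += delta;
        cev = output + disturbance;             /* environment acts on the CEV */

        if (sensing)
            imag_percept = cev;                 /* percept keeps imag.percept current */
        else
            imag_percept += delta;              /* map: imagined effect of the output */

        printf("t=%2d  %s  CEV = %6.2f  imag.percept = %6.2f\n",
               t, sensing ? "seeing  " : "blackout", cev, imag_percept);
    }
    return 0;
}

During the blackout the loop goes on correcting its imagined error with no
change of organization, while the real CEV drifts by the full amount of the
disturbance; when sensing resumes the discrepancy shows up as error and is
removed in the ordinary way.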

======================

It is very hard to stick with the control-of-perception paradigm.
You said earlier,

What is brought to one state is the output of a single-valued
function, not the arguments of that function, as you seem to
want to read me as claiming.

That is how I see it, too. Only the value of the function is
controlled: the value of the function is the value of the CEV.

There is one defining function, but it is applied in two places.
(Don't bite your lip or tear your hair out just yet...). One application
is direct, within the ECS, to the sensory inputs, the resulting value
being the perceptual signal. The other is notional, possibly applicable
by an external observer with measuring instruments and a knowledge of the
function. That notional application is to real-world conditions, and
the result is the state of the real-world CEV. The "bed-position" PIF
defines the perceived location of the bed. But the result of applying
this function to the real-world data is different in someone who actively
carried off the bed as compared to someone who remembers where it was
when the lights went out.

There is no real-world
CEV to which you can compare the perception to see if it is
right.

"You" can't. Someone else can, if not directly. Someone else can
have better sensory data as arguments to the "same" function. We know
that there is no guarantee of sameness. But through interactions the
approximations get ever better.

The CEV and the perception are the same thing.

As you can see, this is the concept I resist. And I don't think that my
resistance has anything at all to do with it being hard to stick with
the control of perception paradigm. It has to do with the assumption of a
"real world" out there, as opposed to taking the solipsist position.

The ECS doing the controlling can't distinguish the CEV from the perception.
It has only the one perception, so from our point of view on it, they are
the same thing. But from an observer's point of view on its operation,
they are not. If they were, we wouldn't draw control loop diagrams with
a point at which disturbance is added to output to provide something that
goes through the sensory functions and the PIF. We would only have the PIF.
And what would it be a function of? Arguments that are perceived states
of the real... Uh-oh... lower-level CEVs... can't be that... of the functions
of the perceived states of the... uh-oh, can't be "real," can it?...
well, anyway, Arguments. No saying what they might be. They aren't real.

No, I don't accept that the CEV and the perception are the same thing.
Nor: "The explanation of how the system works can't depend on what we know
about the real situation." The explanation is ours, not the system's, so
it must depend on what we perceive (not know) about the real situation. But
I do accept your final: "Everything has to arise from what the behaving
system knows."

Martin

From Tom Bourbon [931122.1711]

[Martin Taylor 931122 13:40]
(Bill Powers 931119.1315)

I've started the back-of-the-mind machinery going on putting
together a demo of what I mean. What I want to do is project a
perspective view of a tesseract (four-dimensional cube) onto the
computer screen, and connect the control handle so it rotates the
cube about some axis that has an extension in the fourth
dimension. This rotation will show up on the screen as some
continuous variation in the geometric figure we see. I am quite
sure it will be possible to control that configuration and make
it track changes in a reference configuration shown beside it.

That's very nice. If I am right that you can control only what you perceive,
and that reorganization yields controllable perceptions, people who can
do the task effectively should have achieved some kind of a perception of
a four-dimensional object.

How so? Am I missing something? The person will see a 2D rendering of a 3D
projection of the 4D "object." The person will not perceive a 4D object. I
thought (perhaps incorrectly) that Bill's point in the demonstration would
be that the person can *control* his or her perceptions by *manipulating*
(using, not controlling) dimensions that are unperceived, perhaps
unperceivable. Control of perception always seems to entail manipulation
of unperceived variables: I perceive the keys and screen in front of me, and
my actions on the keys, and the results on the screen, but I sense nothing
of what happens beyond the auditory, tactile and kinesthetic experiences of
pressing the keys, and the visual results on the screen. Perceptually, I
know nothing of what happens between the keys and the screen, or of how many
dimensions might contain the variables that intervene. Did I read you
incorrectly, Bill?

If so, then they should easily be able to
learn to control rotations of other four-dimensional objects about the
same axis, even though the 2-D projection will look very different.
Neat.

By any interpretation, neat!

Until later,

Tom