World models and perceptions; ST

[From Bill Powers (940222.0830 MST)]

Martin Taylor (940221.1530) --

The "world model" that could be useful to an ECS doesn't need
any of that. All it needs is some kind of time waveform that
corresponds to the way the CEV would react to the ECS output in
the absence of disturbance.

But "the way that it reacts" has to be embodied in some sort of
computing function, unless your model is to consist of an
exhaustive table of inputs and outputs. And lookup tables don't
work for everything. The way a velocity responds to an applied
force is through an integration with a multiplier representing
the inverse of the external mass. A constant output force creates
a constantly-increasing velocity -- that would be difficult to
represent in any sort of table lookup. Modeling the effect of an
output force on an environment in terms of position would be even
more complex: a constant output produces a constantly-increasing
rate of change of the position. Your "All it needs ..." is
considerably too simple to solve the problem. In fact, it's a
"thing that makes it go."
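As a quick numerical sketch of this point (my own illustration, with arbitrary values for mass, force, and time step): under a constant output force, the velocity produced by integration grows without bound, so the same instantaneous output value corresponds to ever-different states of the CEV, and no finite table of input-output pairs can capture the relationship.

```python
# Why a static lookup table cannot model an integrating environment:
# velocity responds to force through v' = F/m, so a constant force
# yields a velocity that depends on the whole history of the output,
# not on its instantaneous value.
mass = 2.0     # external mass; 1/mass is the integrator's multiplier
dt = 0.01      # integration time step, seconds
force = 1.0    # constant output force

velocity = 0.0
samples = []
for step in range(300):
    velocity += (force / mass) * dt   # integration with 1/m multiplier
    samples.append(velocity)

# The single output value (force = 1.0) maps to ever-increasing
# velocities -- there is no fixed table entry for it:
print(samples[0], samples[149], samples[299])
```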

What is needed is a Finite Impulse Response (FIR) filter that
has the right temporal characteristics.

This will work for simple derivative functions, but not so well
for integrals. In the Artificial Cerebellum demo, which seems to
be an FIR, the filter adds first and second (and perhaps higher)
derivative terms to the conversion from error to output signal.
This is the inverse of the properties of the environment, which
consists of two integrals in tandem. If you put the FIR in the
imagination loop (which is not the case for the A. C. demo), then
it has to produce an output that is the double integral of the
input (the error signal), and that, I predict, will be hard to do
with an FIR. You will, in fact, need two integrators in tandem in
the imagination loop, however realized, plus computations to
reflect the spring constant and frictional effect in the external
world (I'm speaking of the A. C. demo in which the environmental
feedback connection is through a mass on a spring with damping,
all parameters adjustable).
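The environmental feedback path just described can be sketched as follows (my own minimal rendering, not the A. C. demo's actual code; the parameter values m, k, b are arbitrary stand-ins for the demo's adjustable mass, spring constant, and damping): two integrators in tandem, with the spring and friction terms feeding back into the first.

```python
# A mass on a spring with damping, driven by a constant output force:
#   a = (F - k*x - b*v) / m
# integrated twice (velocity, then position). This is the
# double-integral environment the imagination-mode FIR would have to
# emulate.
m, k, b = 1.0, 4.0, 1.0   # mass, spring constant, damping (arbitrary)
dt = 0.001                # time step, seconds
x, v = 0.0, 0.0           # position and velocity

for _ in range(20000):            # 20 seconds of constant force F = 1.0
    a = (1.0 - k * x - b * v) / m
    v += a * dt                   # first integrator
    x += v * dt                   # second integrator (in tandem)

print(round(x, 3))                # settles near the static value F/k
```

Note that a constant force settles to a fixed position only because of the spring term; remove it (k = 0) and the position runs away, which is the pure double-integral case.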

The problem is that an "impulse response" filter in the
imagination connection can work only if the external feedback
path can be described in terms of an impulse response filter. For
emulating integrals in this way, you require an infinitely long
impulse response table with the largest values at the greatest
delay. I don't think this is physically realizable.
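The objection can be made concrete with a small sketch (mine, with arbitrary table length and signal length): the impulse response of even a single pure integrator is a step that never decays, so every past sample needs a tap of equal weight. Truncate the table and the emulation fails as soon as the input history outlasts it.

```python
# An FIR table emulating a pure integrator: every tap equals dt,
# forever. A finite table saturates once the input is longer than
# the table.
dt = 1.0
n_taps = 50
taps = [dt] * n_taps        # integrator's impulse response: a step

signal = [1.0] * 200        # constant input, longer than the table

def fir(taps, signal, t):
    # convolution sum over at most len(taps) past samples
    return sum(taps[i] * signal[t - i] for i in range(min(len(taps), t + 1)))

true_integral = sum(signal) * dt                     # what an integrator gives
fir_estimate = fir(taps, signal, len(signal) - 1)    # what the FIR gives
print(true_integral, fir_estimate)
```

For the double integral the situation is worse still: the impulse response is a ramp, with the largest tap values at the greatest delay, as the paragraph above says.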

The problem in this whole discussion is that you're describing
world models in terms of what they must do, which is to model the
world, but without any indication of whether a feasible method
for doing this exists, even in principle.

An ECS knows nothing of cups and cupboards.

Why not? A control system at the relationship level knows about
"in", others at the category level know about "cup" and
"cupboard," and a logical symbol-manipulation one can control for
the number of cups in the cupboard by counting up and down as
cup-elements are added and removed. All control systems, at every
level, are ECSs, although the _meaning_ of the variables
manipulated at a higher level can be quite complex in terms of
lower-level variables.

If an outside observer identifies changes in those values with
changes in numbers and locations of cups and cupboards, so be
it. The ECS knows none of that.

If an outside observer can make these identifications, so can a
human subject possessing the same levels of perception and
control. And if we model human behavior with ECSs, the ECSs can
control for the same variables. You're speaking as if all ECSs
were at the lowest level.

And if the "world models" in the various ECSs happen to
correspond (for the outside observer) to changes in cups and
cupboards, so it may be. The ECSs in question don't know.

If the ECSs don't know, then nobody knows. What else is there to
do the knowing? Are you proposing a whole new human system that
doesn't employ ECSs or at least their perceptual components?

I think something needs to be made more explicit here. I assume
that the world we experience consists entirely of scalar
perceptual signals, some in control systems. That means not just
scalar signals representing positions and velocities, but scalar
signals representing cups and cupboards and "in" and numbers and
conclusions and logic and everything else. We experience many of
these signals awaredly and simultaneously, which is what creates
the complex world of experience. But any one dimension of that
world, at any level whatsoever, is actually a simple single
scalar neural signal that can vary only in magnitude.

This is very hard to grasp; it's hard for me to grasp. When you
just look and think and feel, the world seems multidimensional
and interrelated. But pick any aspect of that world, any at all
at any level of complexity, and try to see it directly, and
suddenly that one aspect is just something that can be more or
less present. Look at an "interrelationship" and it turns right
into a scalar something that could perfectly well be represented
by a scalar neural signal. It's present to a greater or lesser
degree, and that's all you can say about it. As soon as you
isolate any aspect of experience, however abstract-seeming, it
eludes inspection and loses its special character.

When you look at two objects, you can see the objectness of each,
but close inspection reveals that the only difference between
them is that you call them by different names and they are in
different places. The objects themselves are simply there. If you
try to see how they differ, you move your attention to a lower
level, and start enumerating sensations -- which of course are
not objects. And if you try to say how the sensations differ from
each other, you again have the same problem.

You can say "Oh, but I can also see the relationship between the
objects." In doing that, however, you have shifted to a new
perception, relationship, and you will find exactly the same
problem with that perception. A perceptual signal indicating the
presence of a relationship is certainly there: one cup is beside
the other, and you can sense the besideness. But one cup may also
be greener than the other, and you can simultaneously sense
"greener than" as a directed relationship. How are the two
relationships different? Well, one is _here_ in perceptual space,
and the other is _there_. At a higher level, we assign different
names to them. Otherwise, there is no difference. They are just
two signals. Trying to state how they are different simply takes
us to a lower level of perception where the same problem recurs.

The holistic world, the one that seems so rich and complex,
exists only in awareness, only as a set of simultaneous signals
observed at the same time. As long as awareness is broadly tuned,
that complexity and richness persists. But as soon as awareness
focuses on any element of that world, all the complexity fades
away and we are left with simple scalar somethings. The concept
of the whole pattern disappears, because no element of that
pattern has any organizational meaning, apparently, without the
rest of it.

Maybe I'm talking about system concepts and what happens when you
look at the world from a lower point of view. Or maybe I'm
talking about some mode of perception that I'm stuck in and
therefore can't see as a mode of perception. What I'm hoping is
that by sticking strictly to the scalar perception concept in
_every_ context, I will sooner or later discover what, if
anything, is wrong with it, by finding a phenomenon that simply
doesn't fit it. So far every possibility has proven to be
understandable in terms of a scalar perception at a new level.

All this relates to a bone of contention that was unearthed
earlier, namely the concept of vector perceptions. Suppose you
have two perceptions at the same level which seem to vary
independently of each other: two objects moving in patterns which
seem unrelated. This appearance could have two causes: (1), each
pattern of movement depends on prior unknown causes so widely
separated that they have no discernible influence on each other,
and therefore have no systematic relationship; or (2), the two
patterns are being systematically caused by related antecedents,
but we have simply failed to construct a perceptual function whose
output, computed from the two variables, has markedly less variance
than the quadrature sum of their separate variances (less
pedantically, there is a relationship but we have failed to
perceive it).
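Case (2) can be illustrated with a small numerical experiment (my own construction, with arbitrary noise levels): two signals that look noisy and unrelated taken singly, but for which one particular perceptual function, their difference, has far less variance than the quadrature sum of their separate variances.

```python
import random

# Two signals sharing a hidden common cause, each with independent noise.
random.seed(1)
common = [random.gauss(0, 1) for _ in range(10000)]
x = [c + random.gauss(0, 0.1) for c in common]
y = [c + random.gauss(0, 0.1) for c in common]

def variance(s):
    m = sum(s) / len(s)
    return sum((v - m) ** 2 for v in s) / len(s)

diff = [a - b for a, b in zip(x, y)]     # one candidate perceptual function
print(variance(x) + variance(y))         # quadrature sum: about 2
print(variance(diff))                    # about 0.02: the relationship made visible
```

A system lacking the difference-computing input function sees only the two noisy signals; for it, by the argument that follows, the relationship does not exist.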

If we eliminate (1) as uninteresting, then we are left with (2):
the possibility that a relationship exists, but we have failed to
see it. At this point, what is the status of that relationship?
According to the vector perception hypothesis as I understand it,
the mere existence of the relationship is enough to give it
behavioral significance, even if there is no perceptual function
in the system that can receive the values of the two variables
and output a signal indicating the value of the regular function
of them.

My claim is that unless such a perceptual function exists, the
"actual" systematic functional relationship can have no
behavioral existence. The relationship can be controlled ONLY if
it is explicitly represented as a neural signal. It is only the
existence of the appropriate neural computing function that makes
the implicit function real for the behaving system.

In those terms, it makes no difference whether the systematic
relationship "really" exists, or whether someone else looking at
the same two variables perceives the regularity in their
relationship. For the system in question, the relationship simply
does not exist if there is no input function to perceive it.

This is a critical conclusion because of its converse: if a
person _does_ perceive some regularity in a collection of
variables, it is only because those variables are inputs to a
perceptual function which outputs a measure of the regularity. So
a vector of perceptions has no meaning at the same level as the
perceptions; it has meaning only if the vector is projected upon
some input function to yield a scalar value in the form of a real
perceptual signal. And that input function must exist in the
system in question, not just in some observer of that system, or
as an abstract possibility.
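In the simplest case such a projection is just a weighted sum (the numbers below are mine, purely illustrative): the vector of lower-level signals acquires behavioral meaning only when some input function maps it to a single scalar perceptual signal.

```python
# A vector of simultaneous scalar perceptual signals...
lower_level = [0.8, 0.3, 0.5]

# ...has no meaning at its own level; an input function's weights
# project it onto one scalar value, a real perceptual signal.
weights = [0.5, -1.0, 1.0]

perceptual_signal = sum(w * s for w, s in zip(weights, lower_level))
print(perceptual_signal)    # 0.6
```

A different input function (different weights) would extract a different regularity from the same vector, or none at all.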

Frank Rosenblatt, whose mistreatment I will always resent, said
all this a long time ago in speaking of perceptrons, and was
squashed for it. Fortunately, I care much less than he did about
the opinions of the world's Minskis and Paperts.

There are elementary systems (minimal control units--ECSs),
hierarchic systems (the learning hierarchy, as you sometimes
call it), and organic systems (things in skin bags, which are
the overt things that behave). You can separate out subsets of
all of them, but the total organism is legitimately described
as a system, I think. It includes a reorganizing process.

When I say that organisms are control systems (seeming to adopt
your suggestion), what I really mean, or think, is that
organisms are (made of) control systems. I think that the usage
of "system" is dictated by the application. There is no inherent
"legitimate" meaning.

The more you include in your definition of a system, the less
able you become to specify what it does unless you understand
every subsystem in it. To me a system is a specific kind of
organization. A control system certainly is. We can look at
behavior at many levels because we can see this same organization
at many levels of detail. We can see it in spinal reflexes, and
we can see it in controlling for social status, a process that
entails the use of the spinal control systems. So to me, "system"
is not connected to physical subdivisions of larger or smaller
size, but to a recognizable organization of functions. I do not
see a recognizable organization of functions in "the whole
organism," or in general in arbitrary collections of functions
with boundaries arbitrarily drawn. This is why I don't see the
piano and the piano-tuner as a single control system: a control
system is an organization with an output and an input that can
link through many different aspects of the environment. As I
believe Agre and Chapman have tried to say, the piano tuner can
tune _any_ piano, not just the one in front of him -- even a
simulated piano. What is constant as the piano-tuner goes from a
home to a school to an auditorium is the control organization
that can control the perceived tuning of any piano-like object.
So I think of the control system as the input function, the
comparator, and the output function, with the external link being
part of that which is controlled (although control is possible
only for a certain range of external organizations).

That probably didn't merit a whole long paragraph.

-------------------------
RE: DNA creating the reorganizing system;

Now you have a system that includes all organisms back to
before the first single cell. DNA doesn't change the settings,
so much as record them in such a way that ones that work
continue to be observed.

Well, the model I have in mind shows DNA as a collection of
control systems which control for the states of variables
important to accuracy of replication, by means of varying the
mutation rate. DNA is to the reorganizing system as the
reorganizing system is to the neural behavioral hierarchy.
Actually I am probably talking only about a certain core part of
the organization of DNA; the bulk of DNA, as I interpret what I
have been reading lately, seems to consist of many perfectly
systematic regulating systems that use feedback control. I don't
think that pure Darwinian selection has been the main factor in
evolution for a very long time -- not since the reorganizing
system built into the biochemistry of DNA evolved.

DNA is itself changed from outside, whether by cosmic rays or
by mustard.

That's one source of change, and maybe once it was the only one,
but for some billions of years I believe it hasn't been the only
one. Of course my view is a minority one.
-------------------------------------------------------------
Cliff Joslyn (940221.2100) --

Watch out with those exaggerated compliments. That can work like
the Olympics announcer saying that Dan Jansen is especially good
at going around curves.

As to the ST argument with Rick: you, in yourself, are a
sufficient refutation of his remarks. Well said.
--------------------------------------------------------------
Best to all,

Bill P.