Self-tuning world models

[From Bill Powers (940219.0845 MST)]

Martin Taylor (940218.2104) --

Martin, I think you're guilty of exaggeration when you say

My claim is only that control engineers have algorithms that
produce good models of the way the world reacts to output,
based on the differences between what the models produce and
what the world does when output is "jiggled" rather than used
to control perception directly.

A world-model that accounts only for the dynamical properties of
a system that is otherwise totally specified is hardly a "world"-
model. In my "artificial cerebellum" model, it's not even a model
of the external (highly local) environment; if anything, the
"world model" contained in the transfer function represents the
inverse of the dynamical properties of the feedback connection.

A real "self-tuning" world model would require far more than
anything that has so far appeared in any engineering model I have
seen or devised. A real world model of the sort to which I would
give such a high-flown name would tell me, "All the cups but the
one in my hand are behind _that_ cupboard door; the plates are
behind the other one."

The kinds of self-tuning systems I have seen or devised consist
of a basic design in which the controlled quantity, the
perceptual function, the comparator, and the output function are
supplied in advance, with only a few parameters left free. The
"tuning" is concerned primarily with minimizing error under
dynamic conditions, given the basic design, by varying these
parameters and only these. I have yet to see a self-tuning system
that can decide it is perceiving the wrong variable, or producing
the wrong kind of output.

Also, I think that the term "self-tuning" is misleading. In any
adaptive control system there is a part that is tuned, and a part
that does the tuning. The part that does the tuning never acts on
itself (it doesn't change the method of tuning or the criteria
for best tuning). It acts to change parameters in the remainder
of the system, and the remainder of the system has no ability to
tune anything. The term "self-tuning" suggests a recursive or
reflexive process, which is the wrong image of what is happening.
No part of a system tunes _itself_. It always tunes something
else. The organization that does the jiggling and judges the
consequence is not the organization that is adjusted on the basis
of the result. Ignoring this fact leads only to confusion when
the time comes to simulate the system.

Tom Bourbon's diagram of the system that reorganizes to find the
best k-factor is, in fact, a diagram of two systems. One is a
control system with a k-factor determining the integration rate
in its output function. The other is an independent system that
senses the rate of change of absolute error signal in the first
system, compares that variable with the desired value (not
greater than zero), and converts the result into a process that
changes k at a variable rate and in a variable direction. This
auxiliary system acts to reorganize the other system, the primary
control system. It does not reorganize itself. The other system
does not reorganize itself, either. The auxiliary subsystem
reorganizes the main subsystem. It is not, however, concerned
with anything about the output or input of the main system. All
it is concerned about is one controlled variable, the variable
that is a function of the error signal in the main system. That
is the variable upon which it acts by varying k. As far as the
auxiliary control system is concerned, the main control system is
simply its environment, and the main error signal is the
controlled variable.
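The two-system arrangement just described can be sketched in a few lines of Python. This is only a minimal illustration of the idea: the parameter values, the step sizes, and the random "tumble" rule for changing k are my assumptions, not Bourbon's actual simulation.

```python
import random

def simulate(steps=2000, dt=0.01, seed=1):
    """Sketch of the two-system arrangement: a main control system whose
    output function integrates error at rate k, plus an auxiliary system
    that senses the rate of change of |error| and acts on k."""
    random.seed(seed)
    k = 0.1              # integration rate in the main output function
    dk = 0.05            # current rate and direction of change of k
    out = 0.0            # output quantity of the main system
    ref = 1.0            # main system's reference signal
    prev_abs_err = abs(ref - out)
    for _ in range(steps):
        # --- main control system ---
        err = ref - out                 # comparator
        out += k * err * dt             # output function: integration at rate k
        # --- auxiliary (reorganizing) system ---
        # Its controlled variable is the rate of change of the main
        # system's absolute error; its reference is "not greater than zero."
        abs_err = abs(err)
        d_abs_err = (abs_err - prev_abs_err) / dt
        prev_abs_err = abs_err
        if d_abs_err > 0:               # error growing: pick a new random
            dk = random.uniform(-1, 1) * 0.05   # rate and direction for k
        k = max(k + dk * dt, 0.0)       # act on the main system's parameter
    return k, abs(ref - out)
```

Note that the auxiliary loop never touches the main system's input or output directly; as the text says, the main control system is simply its environment, and the only variable it acts on is k.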

Terms like "self-aware," "self-controlling," and "self-tuning"
create a titillating picture of some sort of bootstrap process.
But when it comes down to modeling such systems, there is no
bootstrap process. It always comes down to one system acting on a
different system.

···

---------------------------------------------------------------
Best,

Bill P.

<Martin Taylor 940221 0945>

Bill Powers (940219.0845 MST)

You don't like my choice of words, it seems, but I see no conflict between
what you say happens and what I was talking about.

(1) Exaggeration in the use of the words "world model." The "world" in
question is the world relevant to the ECS in question, namely the effect
of output on the perception of the ECS. The world is that of the ECS. The
world of the hierarchy is the world relating to all the perceptual signals
of all the ECSs in the hierarchy. So, I think that if you want to complain
about <A real world model of the sort to which I would
give such a high-flown name would tell me, "All the cups but the
one in my hand are behind _that_ cupboard door; the plates are
behind the other one."> then you have to address the same complaint
to the notion that a perceptual signal is like an everyday notion of
perception--that multidimensional, glorious complex that we subjectively
experience.

(2) Self-tuning: You are correct that this is a misleading term. What is
tuned is not what does the tuning. It is, however, a term used in control
engineering. The point is that no signals are used in the tuning that are
not directly available within the thing being tuned. It would perhaps be
a good idea to invent a new term, such as "auto-tuning," to replace the
misleading wording.

The kinds of self-tuning systems I have seen or devised consist
of a basic design in which the controlled quantity, the
perceptual function, the comparator, and the output function are
supplied in advance, with only a few parameters left free. The
"tuning" is concerned primarily with minimizing error under
dynamic conditions, given the basic design, by varying these
parameters and only these.

Right. That's what I'm talking about.

I have yet to see a self-tuning system
that can decide it is perceiving the wrong variable, or producing
the wrong kind of output.

I thought that's what reorganization was about. That's "auto-tuning" of
the hierarchy. It's different from the local "auto-tuning" we were
discussing.

Terms like "self-aware," "self-controlling," and "self-tuning"
create a titillating picture of some sort of bootstrap process.
But when it comes down to modeling such systems, there is no
bootstrap process. It always comes down to one system acting on a
different system.

Yes and no. See above. It depends on what you see as "the system." If
"the system" includes the reorganizing subsystem, then it is bootstrapped.
I have not seen a proposal for an external entity that changes the nature
or the parameters of the reorganizing system.

Martin