Systems Theory; DME; World models

[From Bill Powers (940221.0930 MST)]

Cliff Joslyn (940220) --

I think Rick Marken deserved your protest. Systems Theory is a
broad term, covering a lot of things both good and bad. It's also
a discipline that includes all levels of quality, from the
superficial dilettante to the responsible and careful thinker. To
judge systems theory by its worst proponents would be like
judging PCT by certain personality theorists who claim to
represent it in print.

You and Martin Taylor are examples of systems theorists who like
to flex their muscles at the higher levels of abstraction. While
I would not hesitate to take issue with either of you about
misuses of abstraction (and plan to do so), I have great respect
for both of you as intelligent and creative thinkers who
understand PCT and have done a service to us all in making it
intelligible to others in other fields.


Jim Dundon (940220) --

...the magnitude of the perceptual signal is the
information it carries

Magnitude compared to what?

Bob Clark (940219.1750 EST) --

I offered the name, "Decision Making Entity," as a label for a
concept composed of a set of events I perceive within myself.

It's difficult to decide how to classify "events I perceive
within myself." They could be simply the operation of the learned
hierarchy, or something added to those operations by an aware
entity. Many of the functions you give, by implication, to the
DME seem to me to belong in the learned hierarchy and have the
character of the higher levels I have proposed.

I said I would like to see the functions of the DME unpacked, and
you said you thought you had done that. But consider:

The primary and defining action of the DME is that of selection
and application of reference signals for one or more levels of
the hierarchy.

What I mean by "unpacking" is analyzing what a system
accomplishes into types of operations needed to accomplish that
result. If the DME can "select" reference signals from memory,
this implies that it must be able to perceive them and do
something (which needs to be specified) that is called
"selection." If it is to "apply" them, it must have some means
for directing them to particular subsystems, which implies in
turn that it must know where the reference inputs to those
subsystems are, as well as what those subsystems accomplish. And
it must be able to operate routing switches. All of this implies
a rather complex entity with inputs, recognition capabilities,
and output capabilities of several kinds.
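
The machinery this paragraph unpacks can be made concrete in a short
sketch. Everything here is an illustrative assumption, not a proposed
model; the point is only how much apparatus even a minimal "select and
apply" cycle implies.

```python
class DME:
    """Hypothetical decision-making entity with the minimum machinery
    the text unpacks: inputs, recognition, and routing outputs."""

    def __init__(self, memory, comparators):
        self.memory = memory            # stored candidate reference signals
        self.comparators = comparators  # reference inputs, keyed by subsystem

    def perceive(self, address):
        # (1) read a memory without letting it act as a live reference
        return self.memory[address]

    def select(self, relevance):
        # (2) the as-yet-unspecified operation called "selection"
        return max(self.memory, key=lambda addr: relevance(self.memory[addr]))

    def apply(self, address, subsystem):
        # (3) routing: the DME must know where each subsystem's
        # reference input is and operate the switch to it
        self.comparators[subsystem] = self.perceive(address)

dme = DME(memory={"grasp": 0.8, "release": 0.0},
          comparators={"hand": None})
dme.apply(dme.select(relevance=lambda signal: signal), "hand")
```

Even this toy version needs an input function, a judgment function, and
a routing function, which is the complexity at issue.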

The recognition capabilities bother me the most, because they
seem to duplicate what is already in the hierarchy. How could a
DME know what the content of a memory location means? It would
need a recognition system just like the one that normally
(automatically) handles that kind of perception.

While my concept of the reorganizing system and its connection to
awareness also has holes in it, I have removed from it anything
like ordinary perception and action. Anything relating to
behavior that is perceived is perceived by an input function in
the learned hierarchy. All thought-like processes, all reasoning
and computation, are part of the learned hierarchy. All that is
left for the reorganizing system to do is to monitor intrinsic
variables and create random changes according to intrinsic error
signals. All that awareness has left to do is to (perhaps)
confine the reorganizing process to specific learned systems
where error is large.

You say:

In order for "selection" to be meaningful, the DME must have
more than one possible alternative available.

But consider what this implies about the capabilities of the DME.
It must be able to perceive memories as they exist in storage,
either without activating them or by addressing and activating
them while simultaneously preventing the resulting signals from
becoming reference signals for lower systems. It must be able to
judge the relevance of each examined memory to accomplishment of
a specific goal at a higher level. It must be able to weigh
possible outcomes and judge them in terms of multiple higher-
level goals. And then it must be able to connect one of the set
of possible reference signals to an appropriate lower-level
comparator or set of comparators. A diagram of the DME would
become very complex.

All of these capabilities, it seems to me, would better be
assigned to the learned hierarchy. Rather than interpreting
decision-making as the operation of some separate entity, it
seems to me more parsimonious to explain decision making as a
normal control process taking place at one or more of the levels
of automatic control. To give the DME the capabilities of
perceiving categories, sequences, programs, and principles is to
duplicate functions that need exist only once in my version of
the hierarchy. In my system, memories are examined by the next
level up, not by a separate entity. This automatically applies
the correct perceptual interpretation to signals from memory,
which would otherwise be hard to explain because all neural
signals are alike. Furthermore, in my model of this process,
perceived memories can be composed of memory signals from many
lower-level systems, the same ones that normally supply multiple
inputs to the same perceptual function during real-time control.
There is no duplication of functions.

If you were to remove from your DME all capabilities having to do
with perception and control of classification, order, logic and
computation, generalizations, and system concepts, the only
perceptual function left would be the generalized one I call
awareness. No "decision-making" would be required -- all
decision-making would then consist of applying learned rules in
learned ways to lower-level experiences. All deviations from
well-learned rules would amount to random reorganization, which
takes no rational considerations into account.

It seems to me that you have packed too many rational
capabilities into your DME, too many capabilities of different
kinds. What you wrap up in a single entity I lay out as a series
of levels of perception and control which can do the same things,
but many other things as well. And you also seem to include in
the DME the capacity for awareness and random reorganization,
thus stuffing my reorganizing system into the same package. While
for purposes of communication there might be advantages in
referring to many of my higher levels plus reorganizing system as
a "decision making entity," the disadvantage of this approach for
purposes of modeling is that those same higher levels can do many
things other than making decisions -- for example, proving
theorems or writing music. A term like "decision-making" is
simply not rich enough in connotations to cover all higher-level
processes without a lot of poetic license.

Martin Taylor (940221.0945) --

(1) Exaggeration in the use of the words "world model." The
"world" in question is the world relevant to the ECS in
question, namely the effect of output on the perception of the
ECS. The world is that of the ECS.

Even a limited world model requires much more than that. A
control system that controls the position of a mass in space
perceives only position (and perhaps velocity). It does not
perceive the mass. An interior model of the external world that
would help in dealing with the mass would have to contain a
simulated force input, an integrator to turn that into simulated
velocity (with an adjustable multiplier for setting the modeled
mass), and a second integrator to turn the velocity into a
perceptual signal simulating position. The two integrators and
their parameters are the world-model. Neither of these
integrations and none of the parameters is represented by the
perceptual signal.
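
The two-integrator world model can be sketched directly. This is a
minimal illustration of the description above; the function name, mass
value, and time step are assumptions.

```python
def simulate_world_model(forces, mass=2.0, dt=0.01):
    """Two integrators in tandem: simulated force is integrated into
    simulated velocity (with an adjustable 1/mass multiplier), and
    velocity is integrated into a signal simulating position."""
    velocity = 0.0
    position = 0.0
    positions = []
    for f in forces:
        velocity += (f / mass) * dt   # first integrator (modeled mass)
        position += velocity * dt     # second integrator
        positions.append(position)
    return positions

# Constant 1 N force on a modeled 2 kg mass for 1 s of simulated time;
# analytically x = F*t^2 / (2m) = 0.25 m, which the model approximates.
trace = simulate_world_model([1.0] * 100, mass=2.0, dt=0.01)
```

Note that neither integrator nor the mass parameter appears in the
perceptual signal itself; only the final position trace does.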

To build a tuning system that would alter the model to make its
behavior like that of the real perceptual signal, someone or
something would have to decide to represent the properties of the
environment as two integrators in tandem, with or without springs
and friction. In engineering applications this is normally done
by the engineer, who provides the appropriate property-
representing operations and leaves their parameters free to be
adjusted by the tuning process. The tuning process does not
select among integration, differentiation, logarithm-taking, or
other possible operations. All it does is adjust parameters in a
model whose form is already given.
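
A tuning process of the kind described, one that adjusts only the
parameters of a model whose form is fixed in advance, might look like
the sketch below. The adjustment rule (sign-based steps that shrink
geometrically) is my assumption; only the two-integrator form comes
from the text.

```python
def model_positions(forces, mass, dt=0.01):
    """Fixed-form world model: two integrators, one free parameter."""
    v = x = 0.0
    xs = []
    for f in forces:
        v += (f / mass) * dt
        x += v * dt
        xs.append(x)
    return xs

def tune_mass(forces, real_positions, mass=1.0, step=0.5, iters=40):
    """Adjust the mass parameter until the model's final position
    matches the real perceptual signal. The tuner never chooses the
    operations (integration vs. differentiation); it only adjusts
    a parameter of the form already given."""
    for _ in range(iters):
        err = real_positions[-1] - model_positions(forces, mass)[-1]
        # the model overshoots when the modeled mass is too small,
        # so raise mass on negative error, lower it on positive
        mass += step if err < 0 else -step
        step *= 0.8                  # shrink the adjustment each pass
    return mass

real = model_positions([1.0] * 100, mass=2.0)   # "real" signal, mass 2 kg
tuned = tune_mass([1.0] * 100, real, mass=1.0)  # recovers roughly 2.0
```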

... if you want to complain about <A real world model of the
sort to which I would give such a high-flown name would tell me,
"All the cups but the one in my hand are behind _that_ cupboard
door; the plates are behind the other one."> then you have to
address the same complaint to the notion that a perceptual
signal is like an everyday notion of perception--that
multidimensional, glorious complex that we subjectively
experience.

No, that's comparing apples and apple strudels. A world model of
cups and cupboard requires building a model with properties like
that of the real world -- namely, that if I remove one cup from
the modeled cupboard and close the door, there is one less cup in
it even if I can't see inside. Internal models do not represent
STATES of the world, but PROPERTIES of the world. Using these
properties, the models convert inputs (the action of the higher
system) into perceptions (states of imagined perceptual
variables). With a good model, the action need not be one that
has occurred before; the modeled perception will still be
correct. If I put a tea bag into the cupboard, my model will
contain a tea bag behind the closed door.
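
The cupboard point can be put in code: the model encodes a property
(contents persist behind a closed door) rather than a list of
previously seen states, so even a never-before-performed action yields
a correct imagined perception. All names here are illustrative.

```python
class Cupboard:
    """Toy world model encoding a PROPERTY of the world: objects
    behind the closed door persist and obey conservation."""

    def __init__(self, contents):
        self.contents = list(contents)   # hidden behind the closed door

    def remove(self, item):
        self.contents.remove(item)       # one less cup, even unseen

    def add(self, item):
        self.contents.append(item)       # works for novel objects too

model = Cupboard(["cup"] * 4 + ["plate"] * 2)
model.remove("cup")      # take one cup out, door closed
model.add("tea bag")     # an action the model has never represented before
# imagined perception: three cups, two plates, and a tea bag inside
```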

To propose that perceptual signals are the elements of subjective
perception is simply to propose an identification: the subjective
perception is perception of a neural signal. The glorious complex
that we subjectively experience is made of individual perceptual
signals, as we can see by disturbing the complex in selective
ways. If we could not disturb it along independent dimensions, we
wouldn't think of it as multidimensional.

(2) Self-tuning: You are correct that this is a misleading term.
What is tuned is not what does the tuning. It is, however, a
term used in control engineering.

Who cares? Even engineers can be illogical.

The point is that no signals are used in the tuning that are not
directly available within the thing being tuned.

That's no different from the situation holding between any
control system and its environment. The signals that are
controlled are directly available from the part of the
environment being controlled. You (and others) have already
proposed an alternative name for one kind of tuning: "control of
error." Such a control system perceives the error in _another_
control system, and acts to bring that error toward a reference
level (normally, but not necessarily, zero). Tuning is just
another control process.
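
A "control of error" arrangement of this kind can be sketched: an
outer system perceives the mean error in an inner proportional control
loop and acts on that loop's gain until the perceived error reaches
its reference level. The structure and numbers below are assumptions
for illustration only.

```python
def run_inner(gain, disturbance, ref=1.0):
    """One run of the inner proportional control loop; returns the
    mean absolute error experienced while opposing the disturbance."""
    output = 0.0
    total = 0.0
    for d in disturbance:
        perception = output + d          # controlled variable
        error = ref - perception
        output += 0.1 * gain * error     # slowing factor keeps it stable
        total += abs(error)
    return total / len(disturbance)

def control_of_error(gain=0.1, target=0.05, tries=30):
    """Outer system: perceives the inner system's error and acts on
    the inner system's gain to bring that error to its reference
    level. Tuning is just another control process."""
    disturbance = [0.5] * 200
    perceived = run_inner(gain, disturbance)
    for _ in range(tries):
        if perceived <= target:
            break
        gain *= 1.2                      # act: raise the inner gain
        perceived = run_inner(gain, disturbance)
    return gain, perceived

gain, residual_error = control_of_error()
```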

It would be perhaps a good idea to invent a new term, such as
"auto-tuning" to replace the misleading wording.

Self, auto, or ipso, the meaning is the same. Why not just drop
the prefix and say "tuning?" What a piano tuner does to the piano
is tune it, using criteria that are not in the piano. We don't
consider the combination of piano tuner and piano as a self-
tuning system. That system comes apart too easily.

I have yet to see a self-tuning system that can decide it is
perceiving the wrong variable, or producing the wrong kind of output.

I thought that's what reorganization was about.

No, the reorganizing system doesn't have any idea what's wrong
with the system it's reorganizing. All it knows is that some
consequence of the system's operation, a consequence that the
reorganizing system does care about, is not at its reference
level. Its action is just as likely to make the affected system
behave worse as behave better. But of course if it behaved worse,
another reorganization would immediately follow.
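
This blind, consequence-driven process can be sketched in code: random
changes are kept while the monitored intrinsic error falls, and a
fresh random change follows whenever it rises. The system never knows
what is wrong, only that its monitored consequence is off-reference.
The error function and parameters below are illustrative assumptions.

```python
import random

def reorganize(evaluate, params, iters=2000, rate=0.05, seed=1):
    """Reorganization sketch: keep moving in the current random
    direction while intrinsic error falls; on any worsening, pick a
    new random direction. No knowledge of the reorganized system."""
    rng = random.Random(seed)
    direction = [rng.uniform(-1, 1) for _ in params]
    best_err = evaluate(params)
    for _ in range(iters):
        candidate = [p + rate * d for p, d in zip(params, direction)]
        err = evaluate(candidate)
        if err < best_err:               # things improved: keep going
            params, best_err = candidate, err
        else:                            # things got worse: reorganize again
            direction = [rng.uniform(-1, 1) for _ in params]
    return params, best_err

# Illustrative intrinsic-error function (an assumption): squared
# distance of two parameters from a "healthy" operating point.
err_fn = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
tuned, err = reorganize(err_fn, [0.0, 0.0])
```

Roughly half the random changes make things worse, yet the monitored
error still falls, which is the point: no rational diagnosis is needed.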

It depends on what you see as "the system." If "the system"
includes the reorganizing subsystem, then it is bootstrapped.

I hoped you wouldn't say that. I consider "a control system" to
be the least semi-permanent functioning unit that can control.
It's not hard to separate a self-tuning "system" into the system
that tunes and the system that gets tuned. Any broader use of the
term system just opens the doors for the lawyers.

I have not seen a proposal for an external entity that changes
the nature or the parameters of the reorganizing system.

I call it DNA, which varies the settings (and nature) of
intrinsic reference levels as a way of coping with evolutionary
selection pressures (on days when I am promoting that model of
evolution).

Best to all,

Bill P.