MMT:DME&OUTPUT-RKC

[Bob Clark (940223.10:30 EST)]

Martin Taylor (940204 18:30 EST)

You bring up an important point with the following:

These sound very close to statements that it is output that is
controlled, rather than input. Is that what you mean?

No, that is not what I mean.

Your question involves an important distinction. In general
conversation and planning, the emphasis (and attention) are mainly on
the "outcome" to be accomplished. An "outcome" is an anticipated set
of perceptual variables. It consists of selected portions of
memories that act as reference levels for lower order systems. An
"outcome" is an "input," not an "output."

The basic Unit Control System has no connection from the Output
Function into the remainder of the System _except_ through the
Environment and thence through the Feedback Function. The Unit
System has exactly two inputs: 1) the Feedback Signal from the
Environment via the Feedback Function; and 2) from recordings
(memories) of past Feedback Signals currently being used as Reference
Signals. These recordings can only be changed by action of some
higher level system and/or the Reorganizing System.
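
To make the two-inputs point concrete, here is a minimal sketch in Python
(the names, gains, and numbers are illustrative assumptions of mine, not
part of the model as stated above):

def run_unit_system(reference, steps=200, dt=0.1, gain=5.0):
    """'reference' stands for a recorded past feedback signal selected
    from memory by a higher-level system or the Reorganizing System."""
    environment = 0.0    # the environmental variable the system acts on
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        disturbance = 2.0                # an outside influence on the environment
        environment += (output + disturbance) * dt
        perception = environment         # Feedback Function (identity here)
        error = reference - perception   # comparator: the only two inputs meet here
        output = gain * error            # Output Function: acts only on the environment
    return perception

print(run_unit_system(reference=10.0))   # ends up close to the reference

Note that the output reaches the rest of the system only by way of the
environment and the Feedback Function; the system never senses its own
output directly.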

A familiar example: One perceives the room to be a bit chilly. (Skin
temperature sensors send feedback signals into the system.) One
considers alternatives (imagined): 1) go into the other room and
change the thermostat setting; 2) call to someone already in the
other room requesting a change in the setting; 3) get a sweater; 4)
turn on an extra heater; etc etc. These are all indirect ways of
controlling skin temperature. Any of them would be expected to
result in a more acceptable skin temperature.

The output of the furnace may be controlled, but only _indirectly_,
by changing the setting of the thermostat.
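
A toy simulation may help show what "indirectly" means here (the numbers
and gains below are my own illustrative assumptions): the person varies
the thermostat setting to control a perception of room temperature, and
the furnace's output simply ends up at whatever value that requires.

def simulate(person_reference=21.0, steps=500, dt=0.1):
    room_temp = 17.0            # perceived as "a bit chilly"
    thermostat_setting = 17.0
    furnace_output = 0.0
    for _ in range(steps):
        # Person's loop: the output is a change in the thermostat setting.
        person_error = person_reference - room_temp
        thermostat_setting += 0.5 * person_error * dt
        # Thermostat/furnace loop: heats while the room is below the setting.
        furnace_output = max(0.0, 2.0 * (thermostat_setting - room_temp))
        # Environment: heating minus loss to the cold outdoors (a disturbance).
        heat_loss = 0.3 * (room_temp - 10.0)
        room_temp += (furnace_output - heat_loss) * dt
    return room_temp, thermostat_setting, furnace_output

print(simulate())   # room temperature converges on the person's reference

The furnace output at the end is an uncommanded by-product: it settles at
whatever rate balances the heat loss, without anyone specifying it.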

As the DME examines these alternatives, additional aspects of the
current situation are included. How far away is the thermostat? Can
the person in the other room hear me? Is getting the sweater too
much trouble? etc etc.

The DME is concerned with three sets of perceptions: 1) imaginary
future situations and events; 2) present time situations and events;
3) recordings of past situations and events. The DME examines and
selects from available recordings in accordance with present time
situations and events.

It is interesting to observe that past, present, and future times are
each important to the operation of the system.

People DO make plans and prepare alternate courses of actions, or
"fall-back positions." I have assumed that this is done by
imagination loops in program-level ECSs, ...

Imagination is essential to the operation of the DME. Indeed, it
appears that _any_ imagination loop must be closed through the DME.
Where else? A great deal of planning involves the program-level
systems, so that the program level is familiar. However, any
operation, or modification, of the program level can only be done by
higher level systems. And those systems are also subject to
modification. The process of planning illustrates the operation of
the DME at any level. It is not restricted to the program level.

In the absence of a DME (or equivalent) a couple of questions need
answers. What are imagination loops? How are the loops formed? How
are they activated? And how are conclusions determined?

Without some equivalent to a DME, an imagination loop at any level is
incomplete.

It seems to me that no matter how you slice it, making decisions has
to involve a perception that incorporates the construct of a
conditional -- an IF-THEN construction.

I agree that some form of IF-THEN construction is unavoidable. A
"computed GO-TO," is equivalent. Yes, it is logically necessary for
the DME to include a form of "conditional." However, such an
operation need not be restricted to the program, or any other
specific, level.

Last year, if I remember correctly, you placed the DME alongside the
main hierarchy so that it could act at all hierarchic levels.

To me, "alongside" is too specific, too geometric. The exact
location of the DME does not restrict its access to the hierarchy.
Since it can select the reference signals for the highest level, it
is, in some sense, "beyond" the highest level.

That idea parallels my notion that a symbolic-logical hierarchy
parallels the "classic" analogue hierarchy rather than being "above"
the analogue hierarchy as it is in the classic set of levels.

I am not acquainted with your "symbolic-logical hierarchy."

You refer to:

a process I have described as "contrast."

I am unfamiliar with this use of "contrast." The term implies a
comparison of two (or more?) sets of perceptions. Could "contrast"
be a measure of the degree or manner in which they fail to match?
This seems a straightforward concept. To the degree that a
comparison between two (or more) sets of perceptions is involved,
this resembles some aspects of the operation of the DME.

Decision. Control of output. Choices. The "I" (who is that, in a
case of multiple personality?). The DME. Program-level perceptual
control. All difficult issues.

Yes, "difficult issues."

I have not discussed the "multiple personality." A discussion of the
relation[s] between the DME and the "I" in "multiple personality"
would require an initial discussion of both "personality" and "mental
illness." These and related topics are very interesting and important.

At this time there seems to be no generally accepted definition of
either topic within PCT.

In the publications that I have on hand, I find no reference to
"personality" either in an index or in a quick scan of the text.
The nearest is in Ed Ford's books, where he includes adjectives and
adverbs that appear to relate to the concept of "personality." But
the concept of "personality," as such, is not found. In addition to
BCP, these references include: Living Control Systems, Living Control
Systems II, American Behavioral Scientist of 1990, Mind Reading, Love
Guaranteed, and Freedom From Stress.

In my post, DME IX, PERSONALITY (Bob Clark 930929.12:46 EST), I have
discussed the general concept of "personality" as included within my
view of the hierarchy. A discussion of your question requires a more
complete discussion of these and related topics.

I think discussion of these very important and interesting topics
should be separated from other subjects.

I do not like to contemplate the possibility of control of output,
and I'd like to be shown how the DME and/or the program-level
control of perception do not involve it.

I have not discussed the operation of the program level. I only note
that it must exist before it can act. How is the program level
formed? That requires operation of the DME.

At the beginning of this post, I have tried to show that the DME has
no "possibility of control of output."

In summary:
There is no internal connection between the output of a control
system and any input. The system has no direct way to perceive its
output.

and

The DME seeks to achieve a future set of perceptions. It does this
by controlling the current operation of the hierarchy. For this
purpose, it uses recorded sets of past perceptions as reference
signals.

This is control of "outcome" not control of "output."

I'll be interested in what you think of all this.

Regards, Bob Clark

[Bill Leach 940223.18:35 EST(EDT)]

Bob Clark (940223.10:30 EST)

You say:

In summary:
There is no internal connection between the output of a control
system and any input. The system has no direct way to perceive its
output.

I don't see how that is actually a requirement for the mind. Indeed, in
the activity of contemplation, outputs must often become perceptions,
references, or both.

-bill

[Bill Leach 940224.07:50 EST(EDT)]

Bob Clark (940223.10:30 EST)
[Bill Leach 940223.18:35 EST(EDT)]

I feel as though I have to comment further. If feedback as I referred to
in my posting [Bill Leach 940223.18:51 EST(EDT)] does exist within the
human control system, then direct use of part of the output signal by the
comparator is not unreasonable.

I think that there are two different cases to consider here. The first
is the "simple" operation which I see as just a motor control issue. The
second is in the realm of the cognitive.

As far as I am concerned, we are so far removed from understanding the
possible operation of the brain algorithmically that it is totally
useless to even discuss something like feedback.

My thinking there leads me to believe that this massive computing ability
is not just a matter of a linearly more powerful parallel processor but
rather that the power is so much greater that we have not yet begun to
think about, much less investigate, the sorts of algorithms that are used.

This is not, at least in my case, even a belief that we won't or can't
determine such things. I'll admit that there could be something 'beyond'
the purely 'mechanical' biological 'machine' that is operative there, but
assuming so without proof is foolish.

Thus, I expect that as the level of computing power available to
researchers continues to increase, and efforts in PCT continue to model
ever more complex human behaviour, these presently unknown algorithms (if
any) will be discovered.

Though we have groups that claim to have modeled the neuron, I believe
that there could be subtleties about neuron behaviour that we are a long
way from understanding. Some of the items that even I can think of are
that the "firing threshold" for neuron outputs can vary for groups of
neurons as a function of their immediate environment. The propagation
delay can vary in the same manner.

Propagation delay differences could easily be a major factor in operation
of the human mind.

In the first case... the simple one (har har), I'm sure that I sound as
vague as I undoubtedly feel about this. We have control systems in the
mechanical world that do have feedback in the sense that I use the term.
We also have computer programs that use output as input (LISP in
particular). Part of the problem with the term feedback as applicable to
even this "simple" situation is referential. When I have a goal, such as
taking a sip of my coffee (as I just did), how many billion "computing"
operations occur between my decision and a swallow of coffee?

Are there control loops that monitor the "impulse rate" to the muscles
directing my hand toward the cup? Do these control loops "change the
gain" of the control loop that has the goal of positioning my hand to the
immediate region of the cup? Or maybe suppress or enhance the nerve
impulses in some other fashion? If so, are these properly called
"feedback" (I think so).

If the above were not bad enough, is it proper to consider the tactile
contact signals with the cup as "goal achievement" for the part of the
control loop that is positioning my hand or is it "feedback"? I don't
think that either view is wrong but discussion can be greatly confused
between two people taking the two views unless each understands the
other.

From what little I know about PCT so far, it appears that PCTers would
rather ignore the term altogether to avoid the ambiguity and just stick
with what the functions accomplish.

<Martin Taylor 940224 12:20>

Bill Leach 940224.07:50

As far as I am concerned, we are so far removed from understanding the
possible operation of the brain algorithmically that it is totally
useless to even discuss something like feedback.

There's no conceptual difference between feedback in a linear control
system, feedback in a far-from-linear system, and feedback in a discrete
system. The fundamental construct of PCT is that ALL perceptions are
values of signals, and that those values can be compared with reference
values and affected by actions. My perception of the probability that
you will walk into my room at 12:32 is a signal with a value (very near
zero), just as is my perception that my room feels moderately warm. I
can affect the perception of the temperature of the room by various means,
all of which involve changing references for the perceptions of muscle
tensions and much else. But I don't want to, because the present temperature
is very close to my reference value for it. In other words, it is
comfortably warm.

I can't do much to affect my perception of the probability that you will
walk into my room at 12:32, it being now 12:30 and my perception of the
probability that you are within 2 minutes of here also being very low.
But I could affect it if the time in question were 12:32 on April 1, and
if I had a lot of money or access to military force, or other means of
affecting you.

Feedback is the effect on perception of the output of the control system,
no matter what kind of perception that might be. "Cognitive" perceptions
are perceptions like any other, but they are probably at pretty high, and
very likely symbolic, levels of the hierarchy.

My thinking there leads me to believe that this massive computing ability
is not just a matter of a linearly more powerful parallel processor but
rather that the power is so much greater that we have not yet begun to
think about, much less investigate, the sorts of algorithms that are used.

When you contemplate this, remember that the perceptual signal at any level
is not a simple function of sensory input, but is a function whose inputs
are the outputs of other complicated functions, down through possibly many
levels of computation, and may well include the results of imagination
occurring in lower levels of the hierarchy. In the classic sense of
"algorithm," as a procedure for obtaining a result, given the set of
data inputs, there can be no algorithm that determines a cognitive
perceptual signal. Algorithms run to completion once their data are
provided (unless they run forever). Brains don't have that luxury.
Disturbances come continuously. Behaviour controls perception continuously.

The processes that transform the inputs to one Elementary Control Unit
into its perceptual signal may be simple. The connections of ECUs into
a hierarchy are not, any more than are the connections within a simple
time-delay neural network. The "algorithm" (process) that converts
sensory input into a high-level perceptual signal incorporates all the
complexity of the network, INCLUDING that it is a control network that
maintains SOME of its perceptual signals near (changing) reference values
in the face of unexpected environmental disturbances.
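
A toy two-level stack of ECUs may make this concrete (the structure,
gains, and disturbance below are my own illustrative assumptions). There
is no terminating "algorithm": the loop simply keeps running, and the
high-level perception stays near its (changing) reference despite a
continuously changing disturbance.

import math

def high_perception(p1, p2):
    # The higher-level perceptual function takes lower-level perceptual
    # signals, not raw sensory input, as its arguments.
    return 0.5 * (p1 + p2)

env1 = env2 = 0.0    # environmental variables sensed by the lower ECUs
ref_low = 0.0        # higher-level output, used as the lower-level reference
dt = 0.01

for step in range(2000):
    t = step * dt
    disturbance = math.sin(5.0 * t)         # never stops changing
    high_ref = 5.0 if t < 10.0 else 8.0     # the reference itself changes
    p1, p2 = env1, env2                     # lower-level perceptions
    # Higher-level ECU: its output sets the references of the lower ECUs.
    ref_low += 2.0 * (high_ref - high_perception(p1, p2)) * dt
    # Lower-level ECUs: high-gain proportional outputs acting on the environment.
    out1 = 20.0 * (ref_low - p1)
    out2 = 20.0 * (ref_low - p2)
    env1 += (out1 + disturbance - env1) * dt
    env2 += (out2 - env2) * dt

print(high_perception(env1, env2), "should be near", high_ref)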

Propagation delay differences could easily be a major factor in operation
of the human mind.

Yes, that's a simple consequence of accepting the basic proposition of PCT,
that behaviour is the control of perception.

Martin

[Bill Leach 940224.20:56 EST(EDT)]

Martin Taylor (940224 12:20)

I agree that the type of control system does not affect the meaning of
the term feedback.

The problem that I have is that in referring to a disturbance as
feedback, I think that one is applying the term to something that is
actually "outside" the control system. I (from training and experience)
have a really difficult time looking at "feedback," the engineering term,
as something outside the control system itself. Now I recognize that this
position of mine may not be right in general, right for human behaviour,
or right for PCT.

It seems that as soon as you leave the "classical" electronics use of the
term, it tends to become so ambiguous as to be useless.

When you contemplate this, remember that the perceptual signal at...

When I contemplate this now and try to integrate what little I have
already fathomed of PCT, I am more amazed at what the human brain does.

I also am convinced that the approach that PCT takes IS the one that
will result in the solutions to the questions (eventually). The "light
bulbs" are going on here so fast that it scares me! Without either
"denigrating" what it is to be human or making "light" of the real
complexity, it is my opinion that PCT provides unbelievably simple
concepts as to just how the human brain might function to accomplish the
incredible tasks that we know that it often does accomplish.

Please forgive me as I wax and wane between zealot and heretic, but I
know that the implications of this approach are only just beginning to
break into my consciousness. Yes, yes! I know, no heretics :-)

-bill