SHANNON & TURING

[from Bob Clark (960109.1112 EST)]

Shannon Williams (951228)

Bob Clark (951227.1736)

What happens .... if a computer is designed which has a
Re-organizing System?

Yes, of course -- but THAT is the point! A man HAS a Re-organizing
System. In addition, an Intrinsic System is needed to "drive" the
Re-organizing System, and a man HAS one -- the machine has not.

In B:CP Bill has recognized the need for -- and the importance of --
such systems. But he proceeds by assuming, without proof, that they
exist.

No one seems to be considering how a machine can be designed to
implement those Systems. The need for such Systems doesn't even seem
to be considered!

A related concept is that of self-perception -- one of the
characteristics of the Re-organizing System. Can a machine recognize
a compliment -- or an insult? A machine does not CARE about anyone's
opinion.

Sincerely, Bob Clark

[Martin Taylor 960109 14:05]

Bob Clark (960109.1112 EST)

In addition, an Intrinsic System is needed to "drive" the
Re-organizing System, and a man HAS one -- the machine has not.

Depends what you call "the machine." If you mean the hardware, it's not clear
that a biological machine has one, either. If you mean a control hierarchy
simulated within a computer, why should it not have a set of intrinsic
variables and a reorganizing system? In a rudimentary sense, even the
Little Baby has both. The "intrinsic variables" are the closeness of the
"fingertip" to the current value of the incoming syntactic stream in the
quasi-phonetic-feature space, and a linear function of the total error and
the square of the derivative of the total error in all the individual ECUs.
The LB reorganizing system is constructed to be able to change the weights
on the sensory inputs to the PIFs, the weights on the reference inputs from
higher ECUs, and the linkage patterns among ECUs. Not all of those have
been tested, but there's no technical reason why they shouldn't be. (But
there's a financial reason). And if they work, why shouldn't they be
implemented in future hardware?
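
A rough Python sketch may make the second of those intrinsic variables,
and a rudimentary reorganizing step, concrete. All the names and
constants here (ECU, intrinsic_error, reorganize, the weighting factors
a and b) are my own inventions for illustration; the actual Little Baby
code surely differs in detail.

import random

class ECU:
    """Elementary Control Unit: perceives a weighted sum of its
    sensory inputs, compares it to a reference, reports the error."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
        self.reference = 0.0

    def error(self, inputs):
        perception = sum(w * x for w, x in zip(self.weights, inputs))
        return self.reference - perception

def intrinsic_error(ecus, inputs, prev_total, dt=1.0, a=1.0, b=1.0):
    """Linear function of the total error plus the square of its
    derivative, summed over all ECUs -- the form described above."""
    total = sum(abs(ecu.error(inputs)) for ecu in ecus)
    d_total = (total - prev_total) / dt
    return a * total + b * d_total ** 2, total

def reorganize(ecus, rate=0.1):
    """One rudimentary reorganizing step: random-walk the weights on
    the sensory inputs.  (The LB reorganizer can also change the
    reference-input weights and the linkages among ECUs.)"""
    for ecu in ecus:
        ecu.weights = [w + random.gauss(0.0, rate) for w in ecu.weights]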

A related concept is that of self-perception -- one of the
characteristics of the Re-organizing System.

In the sense that the "outer world" environment of the reorganizing system
is inside the skin of the organism, you are correct in saying this. But
the next sentence switches to a very specific aspect of self-perception,
an aspect that is probably not related directly to the reorganizing system,
but that has been studied by Robertson and Goldstein:

Can a machine recognize
a compliment -- or an insult? A machine does not CARE about anyone's
opinion.

What you are talking about is disturbances to a self-image reference value.
Self-image may be a perception high in the normal hierarchy, but it is
unlikely to be an intrinsic variable, the control of which is the business
of the reorganizing system.

The word "CARE" implies an emotional effect. In a mundane sense, any control
system cares about the perception it controls. You know that, so you must
mean especially the emotional elements associated with control. Emotion has
been discussed from time to time, but so far as I know, it is not well dealt
with in (at least my) current understanding of the hierarchy. What is it
that differentiates caring, sadness, fright, joy, lostness....? Are they
just perceptions like any other, are they aspects of the working of the
intrinsic system (a conjecture given some support by the idea that they
seem to have some relationship to different states of blood chemistry),
or are there some of each kind?

Once upon a time, I suggested substituting the word "insistence" for "gain"
as a generic term relating to the output function of control loops, since
"gain" properly applies only to continuous linear systems. I would say that
a control system "cares" about its perception proportionately to its
"insistence" on keeping that perception close to its reference value.
In this usage, the two terms appear very like their everyday counterparts.
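
To make that concrete, here is a toy loop of my own (not anyone's
published model): a leaky-integrator control system whose steady-state
perception is disturbance / (1 + insistence), so the more "insistent"
the system, the less of a disturbance it tolerates in the perception
it cares about.

def settle(insistence, disturbance=5.0, reference=0.0,
           steps=2000, slowing=0.01):
    """Leaky-integrator control loop.  'insistence' plays the role
    usually called gain; the slowing factor keeps the discrete loop
    stable even when insistence is high."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance
        error = reference - perception
        output += slowing * (insistence * error - output)
    return output + disturbance

for g in (1.0, 10.0, 100.0):
    print(g, round(settle(g), 3))   # residual error shrinks as 1/(1+g)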

I'm not sure of the point of your posting, but it's always nice to be asked
to consider aspects of PCT that sometimes get pushed aside a bit.

Martin

[Bill Leach 960110.09:48 U.S. Eastern Time Zone]

[Bob Clark (960109.1112 EST)]

In B:CP Bill has recognized the need for -- and the importance of --
such systems. But he proceeds by assuming, without proof, that they
exist.

Of course this depends upon what you mean by "proof". The control
systems people (engineering) have satisfied themselves (rigorously)
that some sort of random reorganizing system is necessary for "AI"
type systems that would be required to operate unattended in space
(for example) under largely unknown conditions.

Some very serious consideration has been given to such machines
(particularly in the "heyday" of the space program). In various fields
we have some experience with the behaviour of machines encountering
"unplanned for" disturbances. In most cases the machine just fails.

The most obvious problem with a machine design that allows for "random"
alteration of the machine's operations is that one has introduced the
capability of making "mistakes" [that is, the randomizing system may
erroneously alter a system function that was working correctly, even as
it allows for possible continuation when disturbances occur for which
no "preplanned" function exists].

CARE

You bring up an interesting and oft-discussed topic. I would suggest
that "compliments/insults" (from the subject's perspective) are the
result of "emotion" that arises when a "sudden" change occurs in a
perception (with respect to one's reference for "quality" of
performance).

This idea of a machine not being able to "feel" may well prove to also
not be true. How we "feel" is probably related to both general and
specific error levels.
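
Purely as an illustration of that conjecture (nothing here is
established PCT; the names and constants are mine), a "feeling" signal
might be sketched as a function of the general error level and the
suddenness of its change:

def feeling(specific_errors, prev_general, dt=1.0, k=1.0):
    """Toy rendering of the conjecture: how one 'feels' depends on
    the general (summed) error level and how suddenly it changed."""
    general = sum(abs(e) for e in specific_errors)
    suddenness = k * (general - prev_general) / dt
    return general, suddenness

# A compliment: performance-quality error drops abruptly.
print(feeling([0.2, 0.1], prev_general=3.0))   # suddenness is -2.7
# An insult: the same perception jumps away from its reference.
print(feeling([2.5, 1.4], prev_general=0.3))   # suddenness is +3.6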

-bill