Greg's summary

[From Bill Powers (921005.0730)]

Greg Williams (920928) --

I don't believe that the following is quite correct:

4. The path of learning/reorganization is a function of (possibly
randomly generated) successive sets of changes in reference signals
and/or input/output functions, each successive set of changes being
made to the result of the previous set of changes, and another set
of changes being made only if certain criteria are not met for
ceasing learning/reorganization.

If reorganization changed a reference signal directly, as opposed to
changing the organization of an output function that generates it, it
would be altering a SIGNAL that is being emitted by a higher output
function (except at the highest current level). But this could mean
only injecting a signal into the same output path already occupied by
the existing signal. It seems to me, therefore, that injection of
arbitrary reference signals by reorganization would be limited to the
highest level of existing control systems. To inject such a signal at
any lower level would simply be to disturb existing control systems
which are ALSO supplying reference signals at that lower level, with
the result that the disturbed systems would alter their own outputs
and cancel the effect of the reorganization-produced change.
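The cancellation argument can be seen in a toy simulation (a sketch of my own, with arbitrary gains and values, not any model from the post): a higher-level integrating system supplies a lower-level reference signal, and a constant signal injected into that same reference path is simply opposed by the higher system's output.

```python
# Toy two-level control hierarchy (my own illustration; all gains and
# reference values are arbitrary assumptions). A signal injected into
# the lower level's reference path is canceled: the higher system,
# which also feeds that path, alters its own output to oppose it.

def simulate(injected, steps=2000, dt=0.01):
    r2 = 10.0   # higher-level reference signal (fixed)
    o2 = 0.0    # higher-level output, which serves as the lower reference
    p1 = 0.0    # lower-level controlled perception
    for _ in range(steps):
        r1 = o2 + injected           # injection shares the reference path
        p1 += 0.5 * (r1 - p1)        # fast lower loop tracks r1
        o2 += 50.0 * (r2 - p1) * dt  # slower integrating higher output
    return p1

print(simulate(injected=0.0))  # settles near 10.0
print(simulate(injected=5.0))  # still near 10.0: the injection is canceled
```

In both runs the lower perception ends up at the higher reference; the injected 5.0 only shifts how much output the higher system emits.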

Changing a signal does not create a permanent change of organization;
it only affects content, not form, and does so only as long as the
signal remains changed. True reorganization, as I have thought of it,
alters parameters, not signals. As a result, of course, signals take
on new relationships to each other.

Hebbian learning makes parameters depend on long-term effects of the
signals passing through the network that is being reorganized; that's
the Hebbian version of reorganization. For this to work in general, it
must be true that there is some "best" way of handling signals. This
assumption shows up in the postulate that the output of a neuron
somehow "strengthens" the effects of input signals that exist at the
same time as the output signal. The implication is that it is best for
the organism that all signals contributing to a given neural output
have the maximum possible effect. This sort of rule, I believe, is an
attempt (of which I approve) to get away from a "teacher" that already
knows how a neural function should be organized. But I don't think it
will actually work. Any system in which there is a preferred kind of
input or output function loses the ability to adapt to environments in
which some other kind of input or output function is required for
successful control. Hebbian learning will not work if what is needed
is an input function with particular positive and negative weights on
its inputs. I don't think it is generally true that an increase of
effect of an input is always better than a decrease or no change.
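The limitation can be illustrated numerically (a toy sketch of my own, with arbitrary parameters; the error-driven "delta rule" comparison is mine, not anything proposed in the post): when the needed input function has a negative weight, a purely strengthening Hebbian rule never finds it, while an error-driven rule does.

```python
# Illustration only (my own construction, arbitrary parameters): a case
# where the needed input function has a negative weight. A rule that only
# strengthens coincident input/output never produces it; an error-driven
# (delta) rule, shown for contrast, does.
import random

random.seed(0)

def hebb_vs_delta(steps=5000, eta=0.01):
    # The needed input function is y = x1 - x2: a NEGATIVE weight on x2.
    wh = [0.1, 0.1]   # pure-Hebbian weights (small positive start)
    wd = [0.0, 0.0]   # error-driven (delta-rule) weights, for contrast
    for _ in range(steps):
        x = [random.random(), random.random()]  # nonnegative firing rates
        target = x[0] - x[1]
        yh = wh[0] * x[0] + wh[1] * x[1]
        yd = wd[0] * x[0] + wd[1] * x[1]
        for i in range(2):
            wh[i] += eta * x[i] * yh             # strengthen coincident I/O
            wd[i] += eta * (target - yd) * x[i]  # reduce signed error
    return wh, wd

wh, wd = hebb_vs_delta()
# wh: both weights only grow; the required negative weight never appears.
# wd: ends near (1, -1), the input function that is actually needed.
```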

The reorganizing effects I imagine act only on parameters, not on
signals.

···

----------------------------------------------------------------

Here it is assumed that memories of environmental disturbances
count as environmental disturbances, so that, for example, the
disturbance of being called a "pig" would, as a memory, continue to
act as a disturbance perhaps for a long time after the sound waves
had dissipated, with similar results as if "pig" were being
repeated over and over again by the disturber.

"Memory" is a somewhat ambiguous term, because it's often used to mean
any effect that makes a signal persist beyond the termination of the
input that produced it. I think you're using it that way here. If a
perceptual function has a long decay time, the perceptual signal will
continue to exist after the lower level signals at the input of the
perceptual function have disappeared. The perceptual signal indicating
the occurrence of "pig" is not itself, as I'm sure you realize, a
word, but only an indication THAT a particular word has occurred.
That's the only sense of "memory" that fits your proposal above.
Memory that requires recording and associative retrieval would not
constitute this sort of memory; in order for it to serve as a
disturbance, the same memory location would have to be addressed again
and again, and the system involving it would have to be operating in
the imagination mode. The retrieved signal is then under control, and
is no longer equivalent to an arbitrary external disturbance.
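That first, decay-based sense of "memory" is easy to picture as a leaky perceptual function (a toy illustration of mine; the decay constant is arbitrary): a briefly present input leaves a perceptual signal that fades only slowly, and so can go on acting as a disturbance well after the input itself stops.

```python
# Toy "leaky" perceptual function (my own illustration; the decay
# constant tau is an arbitrary assumption). A briefly present input
# produces a perceptual signal that persists long after the input ends.

def decay_trace(tau=100.0, steps=400):
    p = 0.0
    trace = []
    for t in range(steps):
        inp = 1.0 if t < 50 else 0.0  # the word is heard only briefly
        p += (inp - p) / tau          # slow leaky integrator
        trace.append(p)
    return trace

trace = decay_trace()
# Shortly after the input ends the signal is still well above zero,
# and it only gradually fades back toward zero.
```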
-------------------------------------------------------------------

6. At any time, the criteria for ceasing learning/reorganization
are functions of reference signals and input/output functions at
that time.

I still have a problem with both aspects of this. In effect, you're
saying that intrinsic or critical reference levels can be set by (a) a
signal in the system to be reorganized, and (b) the FORM of a
function. I think it's incumbent on the proposer to show how a system
that worked like this would be organized.

I have a feeling that to achieve the effect you're thinking of, you
need both a built-in reorganizing system of the type I propose and
some other type of reorganizing system that can accomplish changes in
parameters based on this-lifetime experience. It would help if you
could give examples of the second kind of reorganizing, so we could
judge where it does something that a learned hierarchy can't do.
-------------------------------------------------------------------
I have located a place to get the two photoreproductions done at $11
per page. I'll take the materials in today.
-------------------------------------------------------------------
Best,

Bill P.