reorganization models

[From Bill Powers (950104.0600 MST)]

Martin Taylor (950103.1130) --

     The localized reorganizing system is different from the separate
     reorganizing system, yes. It is an attempted answer to the
     question "Is it absolutely necessary at this stage of understanding
     PCT to assume that there exists a separate system devoted to
     reorganization?" It is necessarily different from the separate
     reorganizing system in that it suggests that the answer to the
     question might be "No, it is not absolutely necessary to assert
     that reorganizing is done by a system separate from the main
     hierarchy."

in B:CP, p. 182:

"This _reorganizing system_ may some day prove to be no more than a
convenient fiction; its functions and properties may some day prove to
be aspects of the same systems that become organized. That possibility
does not reduce the value of isolating these special functions and
thinking about them as if they depended on the operation of some
discrete system."

So my answer to your question was given 21 years ago: no, it is not
necessary, but it is a conceptual convenience to see the properties of
"the" reorganizing system as being different from those of the hierarchy
of control systems "it" brings into being.

------------------------
     I proposed that the outputs of the control systems for the
     intrinsic variables contribute to reference signals in the main
     hierarchy, rather than causing changes in the structure of the main
     hierarchy. This seems to me to be reducing the number of ad-hoc
     processes, not increasing them.

I had commented:

If the higher system affects the lower only by varying the value of
its reference signal (first paragraph), how does it also vary its
linkages and functions and cause new ECU's to grow (second paragraph)?
You need to give each ECU capacities for doing those operations as
well as varying the values of reference signals for lower systems,
don't you?

and your reply was:

     I don't suppose it is worthwhile restating it yet again. But, YES.
     Each ECU in the localized reorganizing scheme is allotted the
     capacities that are allocated to the separate reorganizing system
     in that scheme, but they apply only to its OWN local environment.

You seem to forget your own words within one page of having uttered
them. You want to "simplify" the process of reorganization by
eliminating the capacity to alter the hierarchy structurally, but you
then claim that your model "... is allotted the same capacities that are
allocated to the separate reorganizing system...".

By eliminating structure change as a consequence of reorganization, you
eliminate the fundamental property that I gave it in order to account
for the development of the hierarchy. By making reorganization work only
through adjustments of reference signals, you assume that there are
already places to send reference signals, so the hierarchy must already
be in existence in order for reorganization to work. This assumption, of
course, is what requires you to think of the initial reorganizing
process as equivalent to a highest level of control in the hierarchy
which works initially by sending reference signals directly to the motor
output systems. And having made that assumption, you must then propose
that new ECUs are "inserted" between the highest level and the lowest
(this somehow happening only through sending reference signals to lower
systems which do not yet exist, and without structural changes).

By proposing that reorganization works only through adjustments of
reference signals, you add to the reorganizing process an effect that I
did not put in it. In my proposal, changes in reference signals come
about through structural changes in the systems that are generating the
reference signals. It would make no sense for reorganization to set
reference signals directly, because in general there is no one "right"
reference signal. All reference signals must be adjusted by higher
systems on the basis of current disturbances and higher reference
signals, not be set to particular values. In my system, reorganization
does not produce any particular reference signals; it alters the
_relationship_ between the reference signal in one system and the error
signals in higher systems. That is a structural change, not a change in
a signal.

In short, I proposed a reorganizing system that works _strictly_ through
making structural changes, not through manipulating signals in the
hierarchy. You have proposed one that works exclusively through
manipulating signals in the hierarchy, not through making structural
changes.
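
For concreteness, a minimal sketch (in Python, as an illustration only; the
names, constants, and update rule are my assumptions, not the equations in
B:CP): the reference for a lower system is a weighted sum of the error
outputs of higher systems, and reorganization, paced by intrinsic error,
randomly perturbs those weights in E. coli fashion rather than setting any
reference signal directly.

    import random

    # Sketch only: reorganization as structural change, never signal-setting.
    # A lower system's reference is r = sum_i w[i] * higher_error[i];
    # reorganization alters the weights w (the relationship), not r itself.

    w = [0.5, -0.2, 0.8]      # hypothetical links from higher systems

    def lower_reference(higher_errors, w):
        """Reference produced for a lower system by higher-level outputs."""
        return sum(wi * ei for wi, ei in zip(w, higher_errors))

    def reorganize(w, intrinsic_error, rate=0.1, threshold=0.5):
        """Random structural change, paced by intrinsic error (E. coli style)."""
        if abs(intrinsic_error) > threshold:
            w = [wi + rate * random.gauss(0, 1) for wi in w]
        return w

    higher_errors = [1.0, 0.3, -0.4]
    r = lower_reference(higher_errors, w)    # normal hierarchy operation
    w = reorganize(w, intrinsic_error=0.9)   # the structure changes; r does not
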
--------------------------------

Anyway, all this sounds quite incompatible with the Genetic Algorithm
approach, in which new ECUs are developed not by experience with the
current environment or failure to control in that environment, but by
combining ECUs from the parents.

     Huh!?!

     What kind of dormitive principle is this you are espousing? New
     ECUs developed "by experience with the current environment or
     failure to control in that environment"? How, pray?

When the current environment is such as to cause intrinsic errors, or
renders existing control systems ineffective so that they contain large and
persistent error signals, reorganization starts, altering the
organization of existing control systems and, especially during
development, creating new control systems out of "uncommitted neurons."

The Genetic Algorithm, however, creates new control systems only by
combining the properties of control systems that existed in the parents.
This proposal attributes an entirely different origin to the control
systems in the offspring. It says that the newborn child contains
control systems in a hierarchy that resembles the hierarchy in the
parents, except that the properties of each new control system are some
combination of the properties of the parent's equivalent control
systems. Under the Genetic Algorithm approach, there is no need for a
reorganizing system except perhaps to tune the properties of the
inherited hierarchy of control. The inherited hierarchy contains all the
levels and connections that are in the adult parents, from the very
beginning, complete with the organization of perceptual functions and
output functions as well as the interconnections from level to level.
This is necessary because a single ECU has no meaning in isolation; it
perceives only through lower perceptual processes, and it acts only
through lower control systems, so its properties must be appropriate to
the properties of all the systems below it. If one ECU is inherited,
then necessarily all ECUs below it that are related to it must also be
inherited.

In my proposal, all that is inherited in a human being is a set of
levels of brain function, each one containing the basic materials from
which control systems of a particular type can be constructed, but with
no other pre-organization, or very little. The capacity to adapt
behavior for controlling a given environment is inversely related to the
amount of inherited organization in a system. I have chosen as a
starting point the assumption that _all_ structural organization in the
human brain arises through reorganization and that _none_ is inherited.
I will no doubt have to relax that rather extreme assumption some day,
but the more we can account for without relying on help from genetics,
the less burden we place on genetics to account for the events of a
particular lifetime.

I hope I have made my position clearer.
------------------------------
     Take as the watchword in understanding it: Everything about the
     reorganizing system is the same in the two proposals, EXCEPT its
     distribution or localization.

Then whether it is distributed or localized, the reorganizing system
operates strictly through making structural changes in the brain, and
not through manipulation of the same signals that flow in the hierarchy
of behavioral control systems. The variables it controls are not the
variables that the hierarchy controls, but intrinsic variables at a
level of which the hierarchy knows nothing. If this is what you mean,
then I can accept the above "watchword." Somehow, I doubt that this is
what you mean.
-----------------------------
As to the three types of perceptual function you have been mulling over,
I don't think you have considered how they would work as input functions
in a control system.

     For the curious, the three different kinds of perceptual function I
     have mulled over are: (1) functions that combine incoming values
     with some kind of weighting function, possibly with mutual
     multiplication, and apply a non-linear saturating function to the
     result;

You seem to be forgetting that a control system controls its perceptual
signal, not the inputs to the perceptual function. If the perceptual
signal is being made to track a changing reference signal, then the
inputs to the perceptual function must vary as the inverse perceptual
function of the reference signal (or one of the inverses). A saturating
input function would thus require the inputs to become extremely large
when the reference signal demands a perceptual signal near (or above!)
the saturation point.
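
A rough numerical illustration of that point (my own sketch, assuming a tanh
saturation; this is not anyone's actual model code): if p = tanh(k*x), then
holding p at a reference r requires an input x = atanh(r)/k, which grows
without bound as r approaches the saturation value.

    import math

    # Sketch: saturating perceptual input function p = tanh(k * x).
    # To hold p at reference r, the input must reach x = atanh(r) / k,
    # which explodes as r approaches the saturation value of 1.
    k = 1.0
    for r in (0.5, 0.9, 0.99, 0.999):
        x_needed = math.atanh(r) / k
        print(f"reference {r:6}: required input {x_needed:8.3f}")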

All you're doing here is adopting (loosely) the architecture that others
have proposed for adaptive neural nets. This architecture is designed to
produce categorical outputs on the basis of varying inputs; it is not
designed to create a perceptual signal whose variations are proportional
to variations in some external physical variable. This kind of input
function could not be used to model tracking behavior; its presence in a
control system would lead to behavior that we do not observe.

Your other two types of input functions are even worse-adapted to the
requirements of control, as you would find out if you tried to build a
simulation based on them.

Don't forget that in the original attempts to get a Little Baby going,
your team was unable to make even a single control system operate using
the proposed integrating sigmoid input function until I persuaded them
to try a simple linear input function. That was at least two years ago.
What makes you think the nonlinear saturating input function is going to
work any better now?
---------------------------------------------------------------------
---------------------------------------------------------------------
A general comment based not only on the above interchanges but on papers
I've been receiving from various people:

When an attempt is made to bring together the latest or most influential
thinking to produce a model of behavior, the result -- even not
considering PCT -- impresses me as a mess, not an advance. The problem
is that reinforcement theory, Freudian theory, information theory, goal-
setting theory, genetic theory, quantum theory, neural network theory,
fractal theory, dynamic systems theory, classical conditioning theory,
Gibsonian theory, and all those other theories were never designed to
work together. Each one was developed from specific assumptions (both
covert and overt) and from observations that the others did not take
into account. A composite of all these theories is not an improvement on
any one of them. It is just a confusion of random ideas pulling in
different directions.

When you try to mix PCT with all these other theories, the confusion
just becomes worse. If you stir a Big Mac, fries, and milkshake into a
big bowl with breakfast, tea, and dinner, the result is not an
improvement in the Big Mac lunch but garbage. Even the Big Mac becomes
just more garbage. That is what happens when PCT is combined with all
these older theories: it turns into garbage. It may not have been the
ultimate gourmet repast to begin with, but it is not improved by trying
to combine it with a mishmash of other ideas thrown together at random.

PCT covers a lot of ground and it was constructed to hang together
without internal contradictions. It is based on a fundamental phenomenon
that none of the other theories it keeps getting mixed in with took into
account, and in many cases it offers alternative explanations of
behavior that contradict the explanations offered by other theories. For
essentially every other theoretical explanation, PCT leads to a
_different_ and usually _incompatible_ explanation. You can't just start
plugging other theories into PCT, or vice versa, and expect to get some
magical synergistic (oh, yeah, synergism theory) whole that is greater
than its parts.

I think it may have been Dag Forssell who came up with an analogy:
building a car by using the best features of all the automobile
manufacturers. Combine the wheels of a Mercedes, the carburetor of a
Maserati, the engine of a Rolls Royce, the suspension of a Volkswagen,
the body of a Chevrolet truck, and the electronics of an Infiniti, and
what do you get? A nonfunctional monstrosity.
-----------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 940104 16:30]

Bill Powers (950104.0600 MST)

So my answer to your question was given 21 years ago: no, it is not
necessary, but it is a conceptual convenience to see the properties of
"the" reorganizing system as being different from those of the hierarchy
of control systems "it" brings into being.

Sorry, it was Dennis's question, not mine, and nobody but me seems to have
attempted to answer it, apart from Rick's elegant description of the
standard separate system. I am glad for your reminder of what you wrote in
B:CP, and it would have helped earlier if you had said this to Dennis and
at the same time mentioned that you still hold the same opinion.
Good. At least we are on the same wavelength in some respects, since
what you wrote is much closer to what I think to be true than is the
reification of "the reorganizing system" as a "real" separate entity
that has been the assumption of many of the other posts.

===================
But otherwise, this gets odder and odder.

What makes you say the following?:

You want to "simplify" the process of reorganization by
eliminating the capacity to alter the hierarchy structurally,

Absolutely not. I don't. I DON'T, i dOn'T!!!

The business of reorganization is structural alteration in the hierarchy.

It's THE SAME in both systems. Only the focus of the structural
reorganization is different. In the localized system, the structural
changes happen in the vicinity of ECUs that are failing to control.
With the separate system, THE SAME kinds of change happen "somewhere",
unspecified.

You acknowledged (not for the first time) that you did not have a good
principle for reducing the possibilities for locating the structural
changes, in the separate model. I don't say my proposal is satisfactory
or effective, but it is A proposal. Changes happen in the vicinity of
error.

By making reorganization work only
through adjustments of reference signals, you assume that there are
already places to send reference signals, so the hierarchy must already
be in existence in order for reorganization to work.

I simply cannot imagine where you got the idea, in any of my postings, that
reorganization is in any way related to adjustment of reference signals.
I have never had any such connection in mind--none at all, never.

What I have suggested is the possibility that reorganization is related
to ERROR, and not DIRECTLY to the control of intrinsic variables. In the
localized system proposal, intrinsic variables are controlled like any
perceptual variable, within the same hierarchy. The fact that the physical
corresponding variable (the CEV, except that it is internal) is a chemical
concentration or some such, is irrelevant. Our acts in the outer world
can cause chemical variations which we perceive as colour or taste, etc.,
but that causes no difficulty for us in controlling the relevant perceptual
signals.

Somewhere there are transducers that inter-relate the chemical, physical,
and neural domains. What they are, and where, is independent of the
nature of the control systems in whose paths they occur.

In my proposal, changes in reference signals come
about through structural changes in the systems that are generating the
reference signals. It would make no sense for reorganization to set
reference signals directly, because in general there is no one "right"
reference signal.

Yes, I think I am as clear as you on this point. It would make absolutely
no sense. I have never considered the possibility that it would, beyond
the fleeting moment it takes to know that it wouldn't.

In my system, reorganization
does not produce any particular reference signals; it alters the
_relationship_ between the reference signal in one system and the error
signals in higher systems. That is a structural change, not a change in
a signal.

In the localized system, too.

In short, I proposed a reorganizing system that works _strictly_ through
making structural changes, not through manipulating signals in the
hierarchy. You have proposed one that works exclusively through
manipulating signals in the hierarchy, not through making structural
changes.

Let's go back to square one. Forget the presumption you make that I
cannot mean what I say when I say that the process of reorganization is
exactly the same in the two systems. Try to imagine the system as I
have proposed it, as being exactly the same as your system except for
two facts: (1) Intrinsic variables are controlled directly, through the
same kinds of control systems (functionally, not necessarily physically) as
any other variables; intrinsic variable output does NOT directly induce
reorganization. (2) Error in a control system is not considered to
contribute to a special intrinsic variable, but affects reorganization
directly and locally in the neighbourhood of the ECU experiencing error,
whether that ECU is involved in controlling an intrinsic variable or any
other kind of variable.
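
To make sure I am being read as intended, here is a bare-bones sketch of
point (2) (an illustration only; the smoothing constant, threshold, and
change rule are arbitrary assumptions, not a worked-out proposal): each ECU
monitors its own error, and persistent local error triggers a random
structural change confined to that ECU's own connections.

    import random

    class ECU:
        """Elementary control unit with a purely local reorganization trigger."""
        def __init__(self, n_inputs):
            self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
            self.smoothed_error = 0.0          # leaky integral of |error|

        def step(self, perception, reference):
            error = reference - perception
            self.smoothed_error += 0.05 * (abs(error) - self.smoothed_error)
            # Localized reorganization: only THIS unit's structure changes,
            # and only when ITS OWN error persists.
            if self.smoothed_error > 0.5:
                i = random.randrange(len(self.weights))
                self.weights[i] += random.gauss(0, 0.1)
            return error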

================

I'm not sure you understand the nature of the Genetic Algorithm proposal.

When the current environment is such as to cause intrinsic errors, or
renders existing control system ineffective so they contain large and
persistent error signals, reorganization starts, altering the
organization of existing control systems and, especially during
development, creating new control systems out of "uncommitted neurons."

This is your description of the reorganizing events, but it is not a
description of a mechanism of reorganization. It can apply equally to
the GA proposal, which refers to your last phrase "creating new control
systems out of 'uncommitted neurons'". The GA proposal is a little more
specific, in that it makes some suggestion about what the new control
systems might look like. This makes it perhaps a bit more vulnerable
to criticism, but at least it's a stab at a mechanism, rather than having
the new control systems with just the right properties pulled out of the
neuron-soup.

To repeat: the GA proposal is a mechanism, or at least a tighter description,
for what you say happens. It isn't supposed to be the only possible
mechanism, either. It's one possibility that we intend to check out to
see if it works. If you have other suggestions and we have money, we
might check them out as well. But not if they are "deus ex machina"
proposals that say "put in a new neuron with the right properties for
its level."

Now in regard to your criticism of the GA proposal:

The Genetic Algorithm, however, creates new control systems only by
combining the properties of control systems that existed in the parents.

This is true (apart from the very occasional mutation that is incorporated
in most GA implementations, and quite probably in ours).

This proposal attributes an entirely different origin to the control
systems in the offspring. [As compared to Powers's]

This may be true, but I don't know, since I don't know what your proposal is
for the structure of any given new control system (and I don't now have
easy access to B:CP).

It says that the newborn child contains
control systems in a hierarchy that resembles the hierarchy in the
parents, except that the properties of each new control system are some
combination of the properties of the parent's equivalent control
systems.

Superficially, this is true, but it hides a very important fact, which is
that by such recombinations of properties, systems of very different
power and (apparent) type often emerge. Consider: Mom has a perceptual
input function (forgetting the final saturating nonlinearity), say:

p = a + 3b + c*d

Pop has a PIF

p = c + d + 4f

Baby might well have

p = a + c*(c+d) + 4f,

in which the novelty of c^2 has emerged.
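
The same toy example, run symbolically (a sketch of the recombination idea
only; the particular crossover point is chosen by hand, and sympy is assumed
to be available just to display the expansion):

    from sympy import symbols, expand

    a, b, c, d, f = symbols("a b c d f")

    mom_pif = a + 3*b + c*d          # Mom's perceptual input function
    pop_pif = c + d + 4*f            # Pop's perceptual input function

    # One possible recombination: the child keeps Mom's 'a' and her product
    # term, but the product's second factor is replaced by the chunk (c + d)
    # taken from Pop, and Pop's 4*f term is inherited outright.
    baby_pif = a + c*(c + d) + 4*f

    print(expand(baby_pif))          # a + c**2 + c*d + 4*f: a novel c**2 term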

Under the Genetic Algorithm approach, there is no need for a
reorganizing system except perhaps to tune the properties of the
inherited hierarchy of control.

Why? It would be nice if this were so, but I doubt very much that it
would be. To have just one mechanism would be more parsimonious than to
have the several mechanisms we (both) currently envisage, but I can't
see how it would work.

The inherited hierarchy contains all the
levels and connections that are in the adult parents, from the very
beginning, complete with the organization of perceptual functions and
output functions as well as the interconnections from level to level.
This is necessary because a single ECU has no meaning in isolation; it
perceives only through lower perceptual processes, and it acts only
through lower control systems, so its properties must be appropriate to
the properties of all the systems below it. If one ECU is inherited,
then necessarily all ECUs below it that are related to it must also be
inherited.

In this paragraph, I agree with the premise:

a single ECU has no meaning in isolation; it
perceives only through lower perceptual processes, and it acts only
through lower control systems,

but I do not follow any of the argument that you seem to think follows
from it.

How could it be possible for all the levels and connections to have
existed in the initial hierarchy? As I already described
it, there are (assumed to be by inheritance) certain initial sensory
systems and muscular outputs, and certain intrinsic variables that have
inherited reference values. Certainly all the connections eventually
link to these initially present systems, so in this sense all the
connections are initially present. And as for the functions, the
components are initially present in the initial random population
(is this what you mean by "uncommitted neurons"?), except for novelties
such as I described above, and for possible results of mutations.

How can an ECU have properties that are NOT appropriate to the
systems below it? We assume that the systems below it feed SIGNALS
upward, and that it feeds SIGNALS downward. For the most part, these
signals are assumed to be neural currents, for the sake of argument,
but all that matters is that what one sends out the other can accept as
input. In what manner could an ECU in the neural structure NOT have
properties appropriate to the systems below? And I don't even understand
what you mean in the last sentence: "inherited" by the organism, or
"inherited" in the GA sense of the developing hierarchy? Either way,
it makes no sense to me as an assertion.

In my proposal, all that is inherited in a human being is a set of
levels of brain function, each one containing the basic materials from
which control systems of a particular type can be constructed, but with
no other pre-organization, or very little.

OK. You assume more to be inherited than I do. I imagine (rather than
assume, which is too strong a word) that the levels would develop out of the
GA restructuring. I realize that this is actually quite unlikely, and it
is probable that the actual types of functions in the mature hierarchy
have been evolved over mega-years, but I'd like to avoid forcing that
assumption if it is unnecessary.

I have chosen as a
starting point the assumption that _all_ structural organization in the
human brain arises through reorganization and that _none_ is inherited.

Whoops! I thought you said the opposite in the previous sentence. So we
both choose the same starting point, after all.

I will no doubt have to relax that rather extreme assumption some day,
but the more we can account for without relying on help from genetics,
the less burden we place on genetics to account for the events of a
particular lifetime.

Good. Agreed.

I hope I have made my position clearer.

No. As far as your position goes, you only reinforced what I already
thought it to be. But I hope I have made my position clearer. It isn't
very different. And I'm in no way committed to the localized structure.
If it can be shown to be a non-starter, I won't be heartbroken. But I'd
like it to be evaluated on its merits, not to be replaced by something
entirely different that can be readily dismissed. And I hold no special
brief for Genetic Algorithms as a way of developing novel ECUs and
interconnections. I may even drop GAs before we test them out, but I
don't now expect to do so.

------------------

    Take as the watchword in understanding it: Everything about the
    reorganizing system is the same in the two proposals, EXCEPT its
    distribution or localization.

Then whether it is distributed or localized, the reorganizing system
operates strictly through making structural changes in the brain, and
not through manipulation of the same signals that flow in the hierarchy
of behavioral control systems.

YES!

The variables it controls are not the
variables that the hierarchy controls, but intrinsic variables at a
level of which the hierarchy knows nothing.

That, precisely, is the difference in "distribution or localization." Taking
the "intrinsic variables" out of the reorganizing system's direct action
loop and placing them in the main hierarchy is what allows the localized
structure to be a (possibly) viable alternative reorganization system.
So long as the control systems for the intrinsic variables provide output
that only drives reorganization, the reorganizing system is necessarily
separate, almost by definition.

If this is what you mean,
then I can accept the above "watchword." Somehow, I doubt that this is
what you mean.

50-50.
------------------

You seem to be forgetting that a control system controls its perceptual
signal, not the inputs to the perceptual function. If the perceptual
signal is being made to track a changing reference signal, then the
inputs to the perceptual function must vary as the inverse perceptual
function of the reference signal (or one of the inverses). A saturating
input function would thus require the inputs to become extremely large
when the reference signal demands a perceptual signal near (or above!)
the saturation point.

Is this a criticism? Surely this is what happens. It's one of the
basic possibilities built into the Control Builder functions. But note
that it is not only the perceptual signal that saturates: All the neural
signals, including the reference signal and the output signal, also
saturate. They cannot do otherwise. The question is whether the
reference signal saturates at a level above the perceptual signal
saturation value, and whether output from above can (normally) cause it
to do so. I suspect that it can, given that in emergency situations
people sometimes generate enough force to break their own bones. But
this is abnormal.

All you're doing here is adopting (loosely) the architecture that others
have proposed for adaptive neural nets.

Not exactly. What I'm doing is noticing that the internal structure of
the control hierarchy contains as a sub-part the architecture that has
been proposed for some types of neural net. I'm also noticing that there
is a reason why all neural nets that do anything useful are based on units
with saturating nonlinearities. The reason is that when the nonlinearity
exists, a second level of the network is doing a job that is different
from the first level. This is not true when the neural nodes are linear
summators. The same principle must apply to control networks: what is
controlled at one level must be qualitatively different from what is
controlled at a level below.
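
The underlying point about linear summators is easy to check directly (a
sketch of the general linear-algebra fact, not of any particular hierarchy):
two linear weighting levels compose into a single equivalent weighting,
whereas a saturating function between them produces a mapping that no single
weighting level can reproduce.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))     # first-level weights
    W2 = rng.normal(size=(2, 4))     # second-level weights
    x = rng.normal(size=3)

    # Two LINEAR levels collapse into one equivalent level:
    assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

    # With a saturating nonlinearity in between, the second level is doing
    # a qualitatively different job; no single weight matrix reproduces it.
    y = W2 @ np.tanh(W1 @ x)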

This architecture is designed to
produce categorical outputs on the basis of varying inputs; it is not
designed to create a perceptual signal whose variations are proportional
to variations in some external physical variable.

Some people use them that way, some don't. For my part, I've consistently
argued that it is a terrible misuse of a neural network to make it do
categorical discrimination--for all that they do so rather well. And
categorical discrimination is only one aspect of the things for which
neural networks have been used or proposed.

I see no reason to demand that variations in perceptual signals are
_proportional_ to variations in some external variable. Monotonic, yes,
probably (though you have made proposals from time to time that could
be construed as relaxing even that requirement).

Don't forget that in the original attempts to get a Little Baby going,
your team was unable to make even a single control system operate using
the proposed integrating sigmoid input function until I persuaded them
to try a simple linear input function. That was at least two years ago.
What makes you think the nonlinear saturating input function is going to
work any better now?

Well, I don't know why, but the problem seems to have gone away. I
don't remember it in that form, but I'll take your word for it. I'll
have to look into it more closely. Perhaps other changes around the same time
had compensating effects. We've been using half-sigmoid functions more
or less interchangeably with other functions, so far as I remember, but
I'll re-check (haven't used the Control Builder for some months now). The
biggest problem I remember with getting the Control Builder functioning was
in the interaction of the compute iteration (sample) time with the transport
lag. We got around that by putting a low-pass filter in every signal
path, but the result is not totally satisfactory because the filters
are not sharp enough and when they are put in series, things have to
happen very slowly, which causes problems with the display. But that's
nothing to do with the non-linearity.
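
For anyone who has not run into this, here is a bare sketch of the kind of
interaction meant (assumed parameters throughout; this is not the Control
Builder's code): a sampled loop with a transport lag of a few iterations
rings or diverges unless the loop is slowed, for instance by a first-order
low-pass filter in the signal path, which is exactly why everything then has
to happen very slowly.

    from collections import deque

    def run(filter_gain, lag_steps=4, loop_gain=2.0, steps=200):
        """Sampled control loop with transport lag and a first-order
        low-pass filter (filter_gain=1.0 means no filtering)."""
        delayed = deque([0.0] * lag_steps, maxlen=lag_steps)
        p = p_filtered = 0.0
        reference = 1.0
        for _ in range(steps):
            error = reference - p_filtered
            delayed.append(loop_gain * error)   # output enters the lagged path
            p = delayed[0]                      # it reaches perception late
            p_filtered += filter_gain * (p - p_filtered)   # low-pass filter
        return p_filtered

    print(run(filter_gain=1.0))   # unfiltered: the loop oscillates and diverges
    print(run(filter_gain=0.1))   # filtered: stable, but it settles very slowly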

Your other two types of input functions are even worse-adapted to the
requirements of control, as you would find out if you tried to build a
simulation based on them.

Well, we've used the flip-flop connection. No great problems there. And
Rick's used the If-Then-Else function in his spreadsheet, and that doesn't
seem to cause great difficulties. Admittedly, we've only simulated the
flip-flop control in a two ECU system, and it might well run into problems
with more interconnected ECUs, but I don't see why it should. What problems
do you anticipate? If you suggest a configuration that you think I think
should work, and you think should not, I'll try it out in the Control
Builder (if it isn't too complex).

==================

Philosophy...

When an attempt is made to bring together the latest or most influential
thinking to produce a model of behavior, the result -- even not
considering PCT -- impresses me as a mess, not an advance.

That's been obvious. Let's look at the list:

reinforcement theory,--psychology. Could conflict with PCT.
Freudian theory,--psychology. could conflict with PCT
information theory,--mathematics. Might or might not be useful in analyzing
                      control systems, but could not conflict with PCT.
goal-setting theory,--(sorry, don't know what this is)
genetic theory,--in biological evolution, how could it conflict with PCT?
                 If you mean using genetic notions in developing new ECUs,
                 who knows? At any rate, it can't conflict with PCT, though
                 it might be irrelevant.
quantum theory,--In what context? It's (this century) the basic physics of
                 what any organism is made of. How could it conflict with
                 PCT? But there may be little value in considering it.
neural network theory,--PCT is supposed to be a structure implemented in a
                        neural network. I should have thought that ANY
                        insights developed through studies of real or
                        artificial neural networks might be of potential
                        value to PCT.
fractal theory,--Mathematics. If an appropriate circumstance showed up in
                 PCT, the relevant mathematics should be used. Could never
                 conflict with PCT. I have a suspicion that it might show
                 up some day in connection with reorganization, but who knows?
dynamic systems theory,--of which PCT is a specialization. It is better
                         to use the specialization rather than the general
                         case when the special conditions are known, but
                         what applies in general ALSO applies in particular.
                         Cannot conflict with PCT.
classical conditioning theory,--psychology; can conflict with PCT
Gibsonian theory,--psychology; can conflict with PCT.

...and all those other theories were never designed to work together.

True, and I'm not aware of efforts to make many of them do so--even pairwise.
It is the case that individually some of these have been tried in conjunction
with PCT, sometimes fruitfully, if only to demonstrate deficiencies in
someone's understanding.

Each one was developed from specific assumptions (both
covert and overt) and from observations that the others did not take
into account.

And in different domains of enquiry that in some cases overlap.

When you try to mix PCT with all these other theories, the confusion
just becomes worse.

With "all these other theories" yes. But using the appropriate mathematics
and physics should confuse only those who are ill-acquainted with the
supporting structure in the domain of origin.

If you stir a Big Mac, fries, and milkshake into a
big bowl with breakfast, tea, and dinner, the result is not an
improvement in the Big Mac lunch but garbage.

Bad analogy. Try eating only Big Macs, or only milkshakes, and see whether
you like them better than when you have one of each (though ANYTHING with a
Big Mac would probably seem worse than it would alone :-)

PCT covers a lot of ground and it was constructed to hang together
without internal contradictions.

No question. Perhaps it even succeeds. It surely succeeds better than
any other view of psychology I have come across.

It is based on a fundamental phenomenon
that none of the other theories it keeps getting mixed in with took into
account,

So? Why does that make it automatically inappropriate to take it as
part of "normal science"?

and in many cases it offers alternative explanations of
behavior that contradict the explanations offered by other theories. For
essentially every other theoretical explanation, PCT leads to a
_different_ and usually _incompatible_ explanation.

True. Important. But "_incompatible_" can apply ONLY where there is
actually a counter-explanation _for the same phenomenon_. And that's
not usually what is claimed.

PCT is a grand theory. It integrates phenomena in all areas of human
(and animal) behaviour, which I think no other approach to psychology does.
But it is a theory in a natural world, violating no "normal science"
findings in physics, chemistry, biochemistry, etc. It is subject to the
same mathematics as apply to everything else, and it is hubris to place
it over and in contradiction to all other sciences. I am not aware that
either Newton or Einstein went so far.

Martin

[Bill Leach 950104.20:32 EST(EDT)]

[Martin Taylor 940104 16:30]

Sorry, it was Dennis's question, not mine, and nobody but me seems to
have attempted to answer it, apart from Rick's elegant description ...

I suggest that several of us have attempted to contribute an answer (each
to what was individually perceived to be the actual question).

I kinda like Rick's "simple minded" thinking and will adopt it for now.
Being also a bit of a simple minded fellow, I believe that I am missing
the essential significance of your postings on possible reorganization
system structure.

It seems to me that all we "know" about the reorganization system is:
1) It must function in at least a somewhat random fashion. That is, it
    cannot be fully systematic and still provide for observed behaviour.

2) The functioning of the reorganization system must be, at least in
    part, related to the existence of and possibly the magnitude of
    control system failure.

3) There is no physical evidence from biological research that supports
    any particular scheme.

Good. At least we are on the same wavelength in some respects, since
what you wrote is much closer to what I think to be true than is the
reification of "the reorganizing system" as a "real" separate entity
that has been the assumption of many of the other posts.

And I know that I am pretty "simple" (prolly a lot simpler than Rick) but
the point of talking about "the reorganizing system" as a "real" separate
entity is that, with rare exception (like your recent posts), the only
standpoint from which we discuss such matters is the functional one, and
functionally IT IS.

Functionally, one cannot rationally consider the reorganization system
to be a part of the operating control loops (even if physically it is).
We are probably a LONG LONG way from modeling the possible operation of
the reorganizing system. Hopefully some biological evidence will have
surfaced before the attempts.

Putting in a simpleton plug... Since it has been demonstrated that
specialized cells in the brain release chemicals that clearly affect
synaptic connection thresholds (and do so in a rather imprecise fashion),
it seems reasonable to surmise that the reorganizing system could be such
a network physically.

Also, consider that for the reorganizing system to be an integral part
of each ECU, there would have to be some sort of "building
supervisor" to ensure that each randomly constructed ECU had all of the
required reorg parts too.

... proposed it, as being exactly the same as your system except for two
facts: (1) Intrinsic variables are controlled directly, through the same
kinds of control systems (functionally, not necessarily physically) as
any other variables; intrinsic variable output does NOT directly induce
reorganization.

WHAT?! Of course intrinsic variables are controlled "directly". Where
did you get the idea that they were not? {having just gone back and
reread this, maybe I need to go break out B:CP again myself}

I presume that by "intrinsic variable output" you mean comparator output
or error signal for the intrinsic's ECU. As I have understood from
previous discussions of this topic, as long as the error is small, no
reorganization action; large error - lots of action; in between - probably
a leaky integrator.

(2) Error in a control system is not considered to contribute to a
special intrinsic variable, but affects reorganization directly and
locally in the neighbourhood of the ECU experiencing error, whether that
ECU is involved in controlling an intrinsic variable or any other kind
of variable.

It is interesting that you would say this too... It has been suggested
several times that the reorganization system functions in localized areas
and probably enlarges the affected area as the error persists or
increases.

The idea that any sustained error (above some minimum threshold) probably
"starts" reorganization action has been discussed before. I believe that
general agreement still exists on the idea that errors in the control of
"intrinsics" trigger more rapid and vigorous reorganization system
action.

Genetic Algorithm

...systems might look like. This makes it perhaps a bit more vulnerable
to criticism, but at least it's a stab at a mechanism, rather than
having the new control systems with just the right properties pulled out
of the neuron-soup.

I'd say this makes it more than just "a bit more vulnerable".

As to "just the right properties pulled out ...";

This is a bit tough to swallow, but considering how little is
understood about the control hierarchy for simple systems, it is a little
early to claim that random processes can't do the job.

I am reminded of just how much simple control loops can accomplish as
demonstrated in PCT. Add to that some experience with control loop
failures in some rather complex electronics systems (complex for
electronic systems but brutally simple when compared to biological
systems) that were almost undetectable, and you have the beginning of a
belief that random building of ECUs could work if each individual ECU was
sufficiently limited in its macro effects.

How can an ECU have properties that are NOT appropriate to the
systems below it? We assume that the systems below it feed SIGNALS
upward, and that it feeds SIGNALS downward. For the most part, these
signals are assumed to be neural currents, for the sake of argument, ...

Discounting that the type of comparator could be wrong, your question is
right but only literally. For an inherited system to work without having
to be "built" in situ, the interconnections must be correct. Each level
"up" in the hierarchy involves the combining of multiple lower-level
signals in a specific manner appropriate to the sort of parameter that
the new level deals with. The output signals from this level must then
go to the proper lower-level ECUs, etc.

I think that Bill understated his position on this. It is quite clear
that there is massive preorganization even in a newborn human baby (it
might be a tiny fraction of the organization of an adult but that is
about the only comparison where it could be considered tiny).

The point is that outside of the truly impressive tasks of maintaining
body metabolism, blood flow and pressure, breathing, processing nutrients
and, as any parent knows, massive waste disposal processing, very little
else likely exists, and even these are doubtless "unfinished" in the
newborn.

I suggest that Bill's remarks (in B:CP?) concerning the difference in
maturation rates between animals and humans strongly indicate a truth to
his assertions. The speed with which newborn animals learn to "walk", for
example, suggests (though it is certainly not compelling) that for them it is
more a matter of "setting" proper reference values as opposed to building
and scrapping multiple systems of ECUs.

--------------------

Is this a criticism? Surely this is what happens. It's one of the
basic possibilities built into the Control Builder functions. But note
that it is not only the perceptual signal that saturates: All the neural
signals, including the reference signal and the output signal, also
saturate. They cannot do otherwise. The question is whether the
reference signal saturates at a level above the perceptual signal
saturation value, and whether output from above can (normally) cause it
to do so.

You are going to have to go a bit further than just this assertion to
convince me. Just why does a reference signal have to reach saturation
when the perceptual signal saturates? This makes no sense at all to me.
Even the reverse does not necessarily have to be true. Indeed, even the
error signal does not necessarily have to reach saturation during loss of
control but it is likely that at least this one signal will frequently
reach saturation.

I suspect that it can, given that in emergency situations people
sometimes generate enough force to break their own bones. But this is
abnormal.

Generating sufficient force to break one's own bones is certainly not
proof of a control system going into saturation. Demonstrating that
sufficient musculature force to break bones exists and then showing that
with maximum error present bones did NOT break would be suggestive of a
possible saturation condition.

---------------------

PCT is a grand theory. It integrates phenomena in all areas of human
(and animal) behaviour, which I think no other approach to psychology
does. But it is a theory in a natural world, violating no "normal
science" findings in physics, chemistry, biochemistry, etc. It is
subject to the same mathematics as apply to everything else, and it is
hubris to place it over and in contradiction to all other sciences. I
am not aware that either Newton or Einstein went so far.

First you seem to agree with Bill, then disagree, then agree and then
disagree!?

Martin, you have been arguing and discussing things with Bill PLENTY long
enough to know without a shadow of a doubt that Bill's most significant
single point about PCT is that it IS a physical science.

PCT must be consistent with all of the FINDINGS of all other physical
sciences where overlap occurs but that is "FINDINGS" and not necessarily
conclusions or inferences.

A different look at your list:

reinforcement theory,--psychology. The illusion is explained by PCT. The term
"theory" is inappropriate.

Freudian theory,--psychology. Conflicts with PCT anywhere an unambiguous
claim is made... again "theory" inappropriate.

information theory,--mathematics. Might or might not be useful in analyzing
                      control systems, but could not conflict with PCT.

goal-setting theory,-- If this is what I think it is then again, not
                       really a theory and really not rigorous enough to
                       be called a hypothesis. The basic concept that
                       humans "achieve" goals that are internally
                       established is not in itself in conflict with PCT.

genetic theory,--if it is wrong then it could easily be in conflict.

quantum theory,--No, there isn't any conflict here, but applying quantum
                 theory to explain a PCT topic is stretching credibility.

neural network theory,--Yes, there might be some value here but it won't
                        show up until either a PCTer sees it or the
                        neural net folks learn something about control.

fractal theory,-- This is like QED. At the moment, so what?

dynamic systems theory,--While it is true that as a science it cannot
                         conflict with PCT it is certainly possible for
                         the opinions of its practitioners to do so.

classical conditioning theory,--see reinforcement theory

Gibsonian theory,--see reinforcement theory

The point here being that it is Control Theory that is appropriate to the
study of control systems and specifically PCT to the study of living
control systems. The psychological "theories" are just plain wrong as
they are studying illusions explainable through PCT. The true facts that
such "theories" hit upon are either irrelevant or pure luck.

The hard science theories are not relevant to behavioural studies except
as they provide the foundation for the basic premises of PCT. We are no
doubt a little way from needing detailed information concerning subatomic
particle behaviour or relativistic energy studies in the tracking task.

-bill

[Avery Andrews 950105]
(Leach 950104.2032)

>
>3) There is no physical evidence from biological research that supports
> any particular scheme.

Maybe there's a bit. Gallistel, in `Behavior: A New Synthesis', reports
that birds that use their feet for food-collection can learn to operate
pulleys and other machinery with their feet, to get food, but birds that
don't, can't. And birds just can't be taught to operate food-gathering
machinery by hitting levers with their wings, etc. So there would seem
to be some degree of connection between specific intrinsic reference levels
and motor areas. Plus there's a certain amount known about the level of
behavioral plasticity in various kinds of creatures, e.g. salamanders can't
adapt to having their legs rewired so that the motor commands for going
forward send them backwards, but mammals certainly can (and so, I think I
remember, can reptiles, who generally seem to deploy a higher-tech brand
of neurology than amphibians).

Avery.Andrews@anu.edu.au

[Bill Leach 950105.19:11 EST(EDT)]

[Avery Andrews 950105]

I believe that the excerpted section of mine that you commented upon was
intended to be related to support or non-support for any PARTICULAR form
of reorganization as opposed to the existence of reorganizing systems in
general.

Otherwise, I don't have any problem with the information that you posted
(some of it new to me).

-bill

[Bill Leach 950105.23:34 EST(EDT)]

[Martin Taylor 950105 14:00]

Nice reply, I believe I followed ALL of that one.

No.

Yep! I agree... the jury is not only still out but probably will be for
some time to come.

Dimensionality

I tend to agree with you, though it may be geometric. I don't think that
anyone is suggesting that the reorganization system operates in a fully
random fashion. That is, no matter how "separate" the reorganization
system might or might not be, it is highly improbable that a severe
sustained error concerning an optical control loop would disable
digestive process control as an initial reorganizing step.

In the localized system, the dimensionality issue is finessed, by making
the reorganization rate depend on a local variable (the local error
ONLY), and ...

I agree that this has merit but also suspect that some thought must be
given to how scope widens. In part this is "just a feeling" but it seems
that the reorganizing function could explain the occurrence of some
dysfunction that is actually observed, but only if the function can "do
too much".

What constitutes a reorganization "event" -- ...

I should not have raised the point. Even your suggestion to Bill that
this could be a "hard" spot for local reorganization is not valid. We
have at best very vague ideas as to what physically might be involved in
reorganization. This is a problem equally for any proposed method at
this stage of understanding.

Intrinsics

Yes, I may well have been "running" under a misconception all along about
that one. I suspect that the only reason why no one else might have
noticed before is that what is currently considered to be important to
PCT concerning reorganization is the general points about which we all
seem to agree.

neuron-soup

vulnerable

I don't know what you are saying here: whether you misunderstand Bill P,
whether I misunderstand you, whether you misunderstand me, what you are
finding "tough to swallow", whence you drag in the notion that random
processes can't do the job--all are mysteries to me.

Sorry, I was probably not very explicit with this one. What I am talking
about is that a GA proposal appears to beg the question. The issue is
that no known systematic method exists that can explain learning in an
unpredictable environment.

The issue that you raised concerning dimensionality is certainly
justified. The idea that the process must still contain a measure of
randomness to work at all still exists too.

I guess that one of the ideas that I was trying to emphasize (without doing
a very good job of it) was that we routinely talk about control loops as
single entities and analyze them thusly. All well and proper in the
context in which it is done. However, these control loops typically
consist of many thousands, hundreds of thousands and maybe millions of
individual control loops. Complete failure of hundreds or maybe even
thousands of such loops might not even make a perceptible difference in
an on-going control operation.

It is this idea that possibly many hundreds of neural connections would
have to be changed before any noticeable difference in control properties
could even be detected that prompts me to assert that we would be a bit
premature to conclude that a close tie must exist between where an error
exists and reorganization system action. Logically, one literally must
conclude that there is some relationship.

It "could" just be possible that various regions of the brain have simple
control loops (not necessarily interconnected at all) that release
chemicals into their local region to restore a perception of the level of
another chemical that could be released by "normal" ECUs when they
experience sustained error. This released "restoring" chemical would of
course have some reorganizing effects.

I know only one type of comparAtor: a subtractor. Are you introducing
others into PCT? If so, please trot them out for general analysis and
critique.

By definition, a subtractor comparator is two types of comparator since
the two inputs are functionally different. Since that was only an
example, I can easily offer another... Some inputs and some outputs have
inverters... some don't. Some neural pathways have "and" gates and some
don't.

But that's precisely what reorganization is for, isn't it? One of us
has a total misconception about what reorganization is and does--that's
for sure. To me, reorganization is the changing of the structure of the
hierarchy until the structure supports control of the intrinsic
(physiological) variables.

This is out of context (though that might be my fault). My comment was
in reply to the remarks concerning inherited structure.

Each level "up" in the hierarchy involves the combining of multiple
lower-level signals in a specific manner appropriate to the sort of
parameter that the new level deals with.

"Parameter?" Each level deals with signals. Parameters are part of the
structure definition of the hierarchy. Signals are of the same kind
everywhere in the (neural part of the) hierarchy.

You can't have it both ways! Yes, every neuron deals only with neural
currents (firing rates). No neuron has any "idea" of what its signals
mean or represent. However, when looking at the structure, the signals
do have specific meaning. Thus, subportions of the entire structure have
to be "wired" correctly or else the "meaning" (purpose/function) of
signals will not be correct. (And I presume that you know this)

I think that Bill understated his position on this. It is quite clear
that there is massive preorganization even in a newborn human ...

Well, you and I read Bill P differently. I read him as acknowledging
that what you say is probably true, but that he wants to see how far he
can go by making as few assumptions along those lines as he has to. ...

No, we don't read him differently on this, and I also agree. We pretty
well know, for example, that a newborn does have a great deal of
"preorganization" but we also know that a fertilized egg displays none of
this preorganization. Now as far as I know, humans have been pretty
consistent at producing human babies and not horses, chickens, fish or
trees. Thus, we can pretty well conclude that a fully random structure
generation process is NOT in control. Thus, we admit that there is some
preorganization and go on trying to produce the final product without
actually falling back on such built-ins until no other reasonable solution
is found (even then reserving the right to deny preorganization if
evidence to the contrary shows up later).

Saturation

Ah! Got ya. I did think that what you were saying was that the
reference signal had to saturate if the perceptual signal saturated, and
going back and re-reading the original cleared that up.

I agree that from a purely mechanical standpoint, all signal pathways can
saturate. I suggest that only output signal pathways saturate very
often and even then likely only for extremely short periods of time
(though this implies that a reference for a lower level ECU is saturated
by definition, but it does not imply that the perception cannot equal the
saturated reference). Perception pathways may well be saturated whenever
we experience what we call pain (though again not necessarily).

Also, a saturated reference does not necessarily imply that the condition
propagates "down the chain".

They cannot do otherwise.

If you mean that these pathways saturate because they have an Fmax, then I still agree. If
you mean that the system could not function unless operation to
saturation was a normal operating mode then I disagree.

Saturation may well explain certain characteristics of control failure
but like transport lag and derivative effects it should probably not be
considered until demonstrated that it is relevant to a particular model.

But the possibility is there, that reference signals can get to a level
that cannot be attained by the perceptual signal, no matter how strong
the output. This could be because the world won't permit, or it could
be because the perceptual signal saturates at a level lower than the
reference level, or there might be other reasons.

Taking the "flip-side" of the coin, it is also possible that every time
you try to do something that you are physically not strong enough to do,
there are saturated reference signals somewhere in the control
hierarchy. Again, it does not necessarily follow that the neural currents
to a muscle group are saturated. I believe that for most people just
about every muscle group can generate sufficient force to do harm (though
of course not necessarily anything quite so drastic a break a bone). The
"blame" for such harm could be directed at an inhibiting function failure
(such as might exist for tension perception).
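
To make the distinction concrete, here is a minimal sketch in Python of a
single integrating control loop whose output pathway clips at a maximum
rate. The gains, limits, and step size are invented for the illustration
-- nothing here comes from a measured system. When the reference is
within reach, brief output saturation does no lasting harm; when the
reference is set beyond what the perceptual signal can ever reach, the
error simply persists:

# Toy one-level control loop with a clipped (saturating) output pathway.
# All constants are illustrative assumptions, not measured values.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def run(reference, steps=2000, dt=0.01,
        gain=50.0, out_max=10.0, feedback_gain=1.0):
    # Integrating output; perception is just feedback_gain * output here.
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = feedback_gain * output
        error = reference - perception
        # the output pathway can only carry so much "neural current"
        output += clamp(gain * error, -out_max, out_max) * dt
    return perception

print(run(reference=5.0))    # reachable: perception settles near 5.0
print(run(reference=1e6))    # out of reach in the time given: huge error remains

Whether that matters in practice depends entirely on the particular
numbers, which is exactly why I would want it demonstrated to be relevant
before building it into a model.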

... is on what I perceive as a frequent claim of Bill P that PCT is
incompatible with any known science, at least as soon as that known
science is applied to any aspect of the world relevant to PCT. On the
occasion to which I was responding, he made that claim more explicit
than usual.

Well, here I sure don't "read" Bill P. the same way that you do. Now
maybe Bill P. is a simple guy too. Mechanics explains virtually all
physical interaction data in sufficient detail to render unnecessary any
analysis or "support" from quantum physics.

The dismissal of fractals is equally justified. Until it is DEMONSTRATED
that a simple control process cannot model an observed phenomenon, "pure"
PCT is the ONLY appropriate methodology.

We all conjecture about possible configurations, including even Bill
P.'s comments on how QED suggests that behaviour may not be
deterministic. I don't think that there is necessarily anything wrong
with doing this (including your reasons why it is possible that fractals
just may explain some memory and pattern operations). What is wrong is
asserting that the need exists when it can't be demonstrated that the
need does exist.

There is a pretty "fine line" of difference in perceptions on this
matter. Bill's QED comments were clearly "way out there" in future land
and no one should have understood him to mean that he considered such an
idea directly relevant to current PCT research or thinking.

I suggest that the reorganization discussions are not much closer to
current research either. We actually have to have some repeatable,
verifiable data concerning reorganization before such conjecture can be
much more than conjecture. I started to add "or conversely, build a model
that uses reorganization and correctly models observed behaviour," but
even this is not correct unless the model works where a standard control
system model does not.

I thought it appropriate to object.

Obviously.

... true (so far) in information processing, where in many respects our
computers can't do as much as our brains, despite using processing
elements millions of times faster. So maybe, just maybe, our perceiving
systems used some kind of fractal compression techniques to reduce the
degrees of freedom we ...

Yes, but just maybe analog negative-feedback closed-loop control systems
can do these things with very simple data structures. I would say that we
don't yet know. We know so little about even analog computing, much less
control-system operation, that it is almost silly to discount how much
control-system operation can accomplish.

I am reminded of an old analog computer that I occasionally used
(designed by a nuclear physicist). This computer (an interesting set of
negative-feedback closed-loop control systems, BTW) would model our
reactor core to within our measurement accuracy and do it faster than
real time. The supercomputer often required weeks of continuous operation
to perform the same calculations digitally, while it was not at all
unusual for the analog machine's entire operating period to be less than
a quarter of a second.

As digital computing power keeps increasing, the experience with that
analog computer continually reminds me that for certain types of
operations the digital computer has one hell of a long way to go still.

I believe very strongly that we will all be amazed at how much can be
done with just control loops. Fractal math or other means may prove
useful for understanding information loss or even information-storage
density. Such analysis, however, is probably much like trying to find
information about the disturbance in the perception, as far as the goals
of PCT research are concerned.

... fractious fantasy.

I don't have much... I too am still far too interested in just what the
limits might be for "pure control".

-bill

[From Bill Powers (950106.0730 MST)]

Bill Leach (950105.2334 EST) --

Your comments to Martin Taylor echo my own thoughts pretty closely (as
do some of your direct communications on other subjects). I think we
both feel that there's little to be gained by adding complications to a
simple model whose explanatory possibilities have hardly been glimpsed.
And it's always comforting to me to hear from a fellow analogue computer
user (Wolfgang Zocher being another on CSGnet) who has seen the negative
feedback control systems in the computer itself. Working with my
Philbrick computers taught me most of what I know about the properties
of negative feedback systems and the capabilities of non-symbolic
computation. Only an old analogue-computing type would recognize this as
the solution of a second-order differential equation:

driving function       integrators
-----------------|\             |\            output
         ________| \____________| \___________
        |        | /            | /          |
        |        |/             |/           |
        |                                    |
        |              /|  inverter          |
        |_____________/ |____________________|
                      \ |
                       \|
I sometimes wish I had never mentioned reorganization -- if you look at
Ch. 14, you will see that I was very circumspect about it, mentioning
many possibilities but not claiming to know which was right, or even
that this was the right model. The main reason for including it was that
people always asked "what about learning?" I was mainly trying to show
the difference between a performance model and a learning model, because
psychologists hardly seemed to recognize any difference. I've always
felt that we can't really study learning or reorganization until we have
a model that can characterize performance accurately -- if you can't do
that before and after learning, how can you know what has changed?

It's very hard to persuade people to go back and reexamine all the old
ideas, especially the simple ones, in terms of PCT. But this is what has
to be done to lay the groundwork for studying more complex things like
reorganization. Only experimentation can give us the specific
performance models we need, complete with numerical parameters. Trying
to develop advanced models of reorganization before we even know what
its product is is, in my view, a waste of time. We could come up with
nifty designs that actually do some learning, but have nothing to do
with the ways organisms learn. It's the same problem that cropped up in
AI -- are we trying to _design_ intelligent machines, or model the way
organisms really work? The approach is very different in the two cases.
The first case can be handled with pencil and paper and computer
simulations. The second requires, absolutely requires, experimenting
with real behaving systems. When you build models without comparing them
with real behavior, your design can wander off randomly into never-never
land, with never a sign of the moment that you cross the border between
a workable model and irrelevant fantasy.

···

----------------------------------------------------------------------
Best,

Bill P.