hPCT Learning - Trying Again

Good Morning Bill,

Hope you have a great day. Happy Birthday, Bill. I had breakfast with David Goldstein this morning. His birthday is 9/1/10.

Carter


[From Bill Powers (2010.08.28.0740 MDT)]

Ted Cloak (2010.08.27.1638 MDT) –

TC: The “data”, if I may call it that, is my observations of a kitten growing
into an adolescent cat. I’ve watched Sidi (short for Obsidian) practice
running, jumping, stalking, pouncing – hour after hour, day after day. Each
day (well, week) I observed her getting more and more expert at those tasks.
I’m trying to explain this natural (as opposed to laboratory) learning which
is, of course, common to all vertebrates and probably all animal species.

BP: So am I. Reorganization is my proposed basic method of learning, which I think may be the natural way common to all animals that learn.
However, I’m finding it difficult to communicate just how PCT reorganization works, so this conversation is turning out to be useful. If I keep working on making it clearer, perhaps eventually it will become comprehensible. Evidently you as well as others already have a concept
of learning in mind, and you’re assuming that the reorganization approach works the same way. But it doesn’t.
The main fact is that PCT reorganization does not work by trying out different reference signals or different behaviors. The best way to see it in action (other than watching cats grow up) is to look at the demos in LCS3.
Demo 7-2 shows a simple system with just three controllers and three environmental variables. It starts out with all three control systems sensing variables constructed in three different ways from all three environmental variables. The input weights of the three systems, nine in all, are set at random at the start of a run and remain the same from then on; adding input reorganization is a project for the future. The output functions of the three systems are each connected to all three environmental variables through adjustable weights, which are initially set to zero. In this demo, the three reference signals
vary in a repeating pattern of magnitudes that traces out a Lissajous pattern in three dimensions. This pattern of reference signals never changes. But control is so poor at first that the controlled perceptions change very differently from the way the reference signals change.
The only variables being altered by reorganization are the three output weights in each of the three systems. For each system, there is (in addition to the three weights) a set of three auxiliary numbers that are set at random to values between -1 and 1. These “speed” numbers are added (multiplied by a very small constant like 1/100,000) to the output weights on every iteration of the program. The output weights therefore begin to change at a constant speed, the speeds being proportional to the “speed” numbers. This corresponds to the “swimming” phase of E. coli’s behavior.
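
To make the mechanism concrete, here is a minimal sketch of that "swimming" phase in Python (the LCS III demos themselves are in Delphi; the array shapes and the 1/100,000 step size follow the description above, but everything else is illustrative, not the demo's actual source):

```python
import numpy as np

rng = np.random.default_rng()

N = 3                                    # three control systems, three environmental variables
output_weights = np.zeros((N, N))        # all zero at the start: loop gain is zero
speed = rng.uniform(-1.0, 1.0, (N, N))   # random "speed" numbers between -1 and 1
EPSILON = 1.0 / 100_000                  # the "very small constant"

def swim_step() -> None:
    """One program iteration of the swimming phase: each output weight
    drifts at a constant rate proportional to its speed number."""
    output_weights[:] += EPSILON * speed
```
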
The three control systems start life in a very crude form. The loop gain is zero because all the output weights are zero. Soon after the start, all the output weights are nonzero, so each of the three control systems is trying to affect all three environmental variables. The weights, however, are very small, so the amount of action generated is small at first, and the weights are not adjusted appropriately, so the control is very bad. The control systems interfere with each other because each one affects not only the variable it is supposed to be controlling, but variables in the other two systems as well, and it may not be affecting its own variable enough or in the right direction.
This beginning situation is similar to what is found by neurologists looking at motor systems in neonates. There is a crude general input-output arrangement connecting senses to muscles, so the control systems are sort of sketched in, thanks to evolution. But there are far more connections from sensory to motor nuclei than are needed, and most
of them are not the right connections. As maturation and practice proceed, the number of connections is gradually reduced, a process they call “pruning.” In the end, the wrong connections and superfluous connections are pruned away (I would say, the weights are reduced close to zero), leaving only the right connections. And what doesn’t show in the crude observations possible in living brains is that the remaining weights continue to change so the control systems become more stable and more skillful at keeping errors very small.
So at first, we have the weights changing at some rate, a different rate for each of the three output weights. If the running average of squared error of a control system is decreasing as a result, because control is getting better even though still not very good, that change continues. Eventually, however, the three weights will be as close to the right amounts as possible, and then the changes will start making the
error larger. As soon as that happens, there is a tumble. A reorganizing control system sitting off to one side senses the increase in absolute error, and produces an output that changes the auxiliary “speed” variables to new values between -1 and 1 – at random. This starts the output weights changing, iteration by iteration of the program, in different proportions, so if we plotted the weights in three dimensions, the three-dimensional direction in which the resultant is moving would be different. That change of direction is a tumble.

There is just as much chance that this random change will leave the error still increasing, or increasing faster, as that it will make the error start to decrease; in the former case another tumble will occur immediately. If the tumbles come close together, the weights will not change by very much. Eventually, a tumble will set the weights to changing in a direction that makes the error decrease, and the tumbles will cease. The weights will go on changing in the new proportions as long as the error keeps getting smaller.

Clearly, this principle should make all the weights approach the values they must have for the error in each control system to get as small as it can get. I have found that multiplying the effects of the speed variable by a number proportional to the absolute amount of error produces more efficient convergence, making the speed of change approach zero as the error approaches zero.
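
A sketch of how the tumble rule and the error-proportional speed adjustment might be combined for a single control system (Python again; the running average of squared error follows the description above, but the smoothing constant and the other particulars are assumptions, not values from the demo):

```python
import numpy as np

rng = np.random.default_rng()

EPSILON = 1.0 / 100_000            # very small constant, as above
SMOOTH = 0.01                      # smoothing constant for the running average (assumed)

weights = np.zeros(3)              # one system's three output weights
speed = rng.uniform(-1.0, 1.0, 3)  # current direction of weight change
avg_sq_error = 0.0                 # running average of squared error
prev_avg = float("inf")

def reorganize_step(error: float) -> None:
    """E. coli step: tumble (re-randomize the speed numbers) when the
    averaged error is increasing; otherwise keep 'swimming'."""
    global avg_sq_error, prev_avg, speed
    avg_sq_error += SMOOTH * (error ** 2 - avg_sq_error)
    if avg_sq_error > prev_avg:
        speed = rng.uniform(-1.0, 1.0, 3)   # the tumble
    prev_avg = avg_sq_error
    # scaling the change by |error| makes the speed of reorganization
    # approach zero as the error approaches zero
    weights[:] += EPSILON * abs(error) * speed
```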

It must be understood that all during this process, the reference signals for the three systems in the demo are varying in a fixed pattern that never changes. In other demos, the change in reference signal is made random, and random disturbances are added, and the reorganizing process still converges to the best control possible to achieve by slowly adjusting output weights. The actual behavior patterns, in that case, may never repeat, and the reference values may
never repeat their patterns, either, yet control will continue to improve over time. I have shown all these effects in various demonstrations, though only the simplest of them are in LCS3.

Demo 8-1 shows this same principle with random disturbances and with the reference signals varying in a fixed pattern, for an arm with 14 degrees of freedom. You can see the movements becoming more regular as time goes on, starting with clumsy flailings and ending with a smooth regular Tai Chi exercise pattern. The Tai Chi reference pattern remains exactly the same from beginning to end of the demo, but the ability to control the arm to match it while resisting the effects of disturbances continually improves.

TC: I do think we need to understand better how a control system (CS) learns
how to control its output to obtain/maintain the input demanded by its
reference signal, and I’ve got a suggestion for that, below.

BP: I hope you can see now that reorganization theory provides the explanation you’re looking for, and the demos do exactly what you describe.

TC: But being in a
hierarchy, a CS also needs to learn how to help the CS which addresses it
obtain/maintain the input demanded by that CS’s reference signal.

BP: I disagree. All that a control system in the hierarchy has to learn to do is adjust its outputs to lower systems in a way that keeps its own perceptions matching whatever reference signal it is given, as quickly and accurately as possible. When that is achieved, a still-higher control system can use the lower one as its means of controlling its own, higher-order, perception: adjusting the reference signal in the lower system will quickly and accurately make the perceptual signal in that lower system follow the changing values of the reference signal, and thus provide an input (among many) to the higher perceptual input function that is needed to produce the desired amount of the higher-order perception.
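
The arrangement Bill describes, in which a higher system acts only by setting the lower system's reference signal, can be sketched as two nested loops. This is a toy Python illustration, not code from LCS III; the gains, the leaky environment, and the choice of the higher perception as a simple function of the lower one are all assumptions:

```python
def two_level_demo(steps: int = 5000, dt: float = 0.01,
                   k_hi: float = 2.0, k_lo: float = 20.0,
                   r_hi: float = 1.0, disturbance: float = 0.3) -> float:
    """Higher loop adjusts the lower loop's reference; only the lower
    loop acts on the environmental variable."""
    env = 0.0        # environmental variable affected by the lower system
    r_lo = 0.0       # lower reference signal, set by the higher system
    for _ in range(steps):
        p_lo = env                          # lower-order perception
        p_hi = 2.0 * p_lo                   # higher-order perception (assumed input function)
        r_lo += dt * k_hi * (r_hi - p_hi)   # higher output: adjust the lower reference
        action = k_lo * (r_lo - p_lo)       # lower output: act on the environment
        env += dt * (action + disturbance - env)
    return 2.0 * env                        # ends near r_hi despite the disturbance

print(two_level_demo())   # ~1.0
```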

I’ll leave it at that for now.

Best,

Bill P.

[From Bill Powers (2010.08.29.0930 MDT)]

Martin Taylor 2010.08.28.13.23 --

MMT: Consider the classic example of keeping your car in its lane. The inputs to the control unit that senses where the car is in the lane are all visual, but the output that matters provides a reference value to the unit that controls steering wheel angle. The perception of steering wheel angle does not contribute in any way to the perception of where the car is in its lane.

BP: However, perception of the wheel angle in relationship to the offset of the car from the center of the lane may figure into which way the wheel is turned.

I won't claim I know that all loops are closed. But any open-loop behavior seems very strange to me. The actions I am taking are always part of the experiences I am having, along with their effects. It would be hard NOT to notice the actions, or some direct effect of the actions that can be perceived as a measure of them (like the sideward acceleration due to making the car turn).

There may be deeper reasons for always closing the loop when possible. Reorganization depends on altering parameters of control, which in the demos of LCS3 are coefficients in a polynomial describing how a number of lower-order perceptions depend on variations in a higher-order output signal. These coefficients are varied until control is optimized and the value of the function of all the lower-order perceptual signals involved matches a fixed or changing reference signal.

If the higher-order perceptual signal did not depend on lower-order perceptions in the same subsystems used to control the higher-order variable, I think it might be impossible to make this sort of reorganization work. It still might be that reorganization could be made to work without this kind of feedback, but I don't see how, at the moment.

MMT: Changing the location of the car in its lane is a pure side-effect of changing the steering wheel angle, since only if the car is moving and there happens to be a functional steering linkage between steering wheel angle and road wheel angle does altering the steering wheel angle influence where the car is in its lane, even though control of the steering wheel angle may be perfect while the car is stationary and the steering linkage is cut.

BP: You're tracing causation from the bottom up, whereas I trace it from the top down. Changing the location of the car in its lane is the reason for turning the steering wheel, an effect of turning the steering wheel when the car is moving. We seldom try to steer a stationary car or a car with a broken steering mechanism. It's not an accidental side-effect like making the tires squeal. If turning the wheel made the car turn the wrong way, this discrepancy would be obvious and striking. So the reference signal specifying wheel angle is not just emitted and forgotten. In a true open-loop system there would be no awareness, at the level of relationships, that the wheel was at a different angle from the right one.

MMT: As another example, consider the standard pursuit tracking experiment. The controlled perception is the visual perception of the separation between target and cursor. The output from the unit that controls this perception goes to some set of systems that control joystick movement (either position or velocity -- it doesn't matter which to make the point). Neither the position nor the velocity of the joystick enter into the perception of the separation of the cursor and the target. The effect of the joystick on the cursor is a pure side-effect of control of the joystick position (or velocity).

BP: But the relationship between joystick movements and cursor movements is easily and directly perceived. Rick and I did some experiments in which the relationship of cursor movement to mouse movement was reversed without disturbing the cursor position, at a moment when the cursor velocity was zero. The first indication of the reversal came when the next mouse movement made the cursor move the wrong way. For the first 0.4 seconds, the variable showed a runaway waveform, closely fitting a positive exponential. Then something inside the person reversed and control was restored. Clearly the relationship of the cursor movement to the mouse movement was being monitored. This monitoring was not necessary as long as the experimental conditions remained constant. But it had to be there all the time since there was no way to predict or anticipate when the next reversal would occur. The delay before the correction was approximately constant over many trials and several subjects.
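
The runaway Bill reports is what the loop equations predict: when the environmental feedback function changes sign and the system's own organization does not, negative feedback becomes positive feedback, and the error grows as a positive exponential until something in the controller reverses. A hedged sketch (illustrative gains and timing, not the parameters of the actual experiments):

```python
import numpy as np

def reversal_run(reverse_at: int = 150, steps: int = 300,
                 dt: float = 0.01, gain: float = 5.0) -> np.ndarray:
    """Proportional control of cursor position with a mid-run
    polarity reversal in the environmental feedback path."""
    cursor, target, sign = 0.0, 1.0, +1.0
    error_trace = np.empty(steps)
    for t in range(steps):
        if t == reverse_at:
            sign = -1.0                        # feedback polarity flips
        error = target - cursor
        cursor += dt * gain * sign * error     # action reaches cursor through environment
        error_trace[t] = error
    return error_trace

trace = reversal_run()
# before the reversal the error decays exponentially toward zero;
# after it, the error grows as a positive exponential -- the runaway
# waveform -- until some higher system reverses the sign of the output
```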

MMT: One could easily describe hundreds of such cases, in which the output of a CSA provides a reference value to some CSB, but the perception controlled by that CSB does not contribute to the perception controlled by the CSA. In those cases, CSB control has as a side-effect the ability of CSA to control. I have no idea whether this is more often true than not, but does the fact that it is sometimes true affect the design of your model?

MMT: As a gratuitous aside, I recommend you study carefully Bill's description of E. coli reorganization. I know you were trying to provide a workable model for Bill's insight that the reference value may well not be the signal output by CSA, but instead may be the output of a content-addressable memory addressed by the CSA output, while Bill's E. coli reorganization demos (so far as I know) do not include this memory function. Would your scheme work within E. coli reorganization as Bill described it? For example, if the initial contents of the content-addressable memory were random, with more addresses than were likely to be needed, and the contents were updated with higher probability when reorganization is proceeding slowly, while unused addresses were removed over time from the list (the neonate pruning noted by Bill), would that work?

BP: I think we should be cautious about using the "content-addressable memory" idea. It has never been tested (even in simulation) to see if it's feasible. At lower levels, relationships and below, it's a very clumsy way of controlling continuous variables because the address has to be varied in a quasi-continuous way to produce almost-continuous variations in the reference signal as required by dynamic control. All the examples I thought up in B:CP involved higher-order perceptions where it's at least conceivable that reference signals are varied step-wise and remain fixed for at least a second or two before being changed again. It doesn't seem likely to me that continuous control would involve this mechanism -- if it's actually involved at any level at all. One reason I neglected this idea is that I never got to the point of actually believing it. It was just a proposal.
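
For readers who haven't met the idea, the clumsiness Bill points to can be made concrete with a toy sketch: if reference signals are retrieved from an addressed store, the address itself must be swept quasi-continuously to approximate a continuous reference. Everything here is hypothetical illustration, not a mechanism from B:CP or the demos:

```python
import numpy as np

# hypothetical addressed store of remembered perceptual values
memory = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))

def reference_from_address(addr: float) -> float:
    """Retrieve a reference value by address; the output is a staircase
    unless the address itself varies quasi-continuously."""
    return float(memory[int(round(addr)) % memory.size])

# stepping the address yields step-wise reference values -- perhaps
# adequate at higher levels where a reference holds for a second or
# two, but awkward for fast, continuous dynamic control
staircase = [reference_from_address(a) for a in np.linspace(0.0, 63.0, 200)]
```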

MMT: Just a few questions. Maybe they are off-base, but I don't know the answers to them. And, as an aside to Bill, if I ever get around to modelling them, it won't be soon. I do have someone interested in doing some PCT programming with me, but not until after the New Year.

BP: Good luck -- it will be good to see what is actually required to make this idea work, if that is possible.

Best,

Bill P.

[Martin Taylor 2010.08.29.13.23]

  [From Bill Powers (2010.08.29.0930 MDT)]

  Martin Taylor 2010.08.28.13.23 --
    MMT: Consider the classic example of keeping your car in its lane. The inputs to the control unit that senses where the car is in the lane are all visual, but the output that matters provides a reference value to the unit that controls steering wheel angle. The perception of steering wheel angle does not contribute in any way to the perception of where the car is in its lane.

  BP: However, perception of the wheel angle in relationship to the offset of the car from the center of the lane may figure into which way the wheel is turned.

I don't think I ever look at the steering wheel to determine its angle before I act to make it be more clockwise or less clockwise. To look at the wheel every time I want to drift right or left would distract me from controlling my lane position. As you have often pointed out over the years, one can’t independently control the perceptions of the steering wheel angle and of the car’s position in its lane. Of course, I can observe the wheel position any time I want without trying to control it independently. Most of the time I don’t care to look, except often when I am parking and want to be able to exit the parking space in a particular direction. Then I try to ensure that the steering wheel ends up in an orientation biased toward the direction in which I will want to leave the slot. But when I do that, I’m not controlling for the position of the car in its lane (slot). That has already been fixed.

  BP: I won't claim I know that all loops are closed. But any open-loop behavior seems very strange to me.

To me, too. I don't claim to KNOW anything, but most of the time I act as though what I believe is factually true, and one of the things I believe is the foundational principle of PCT, that all behaviour is the control of perception. So I tend immediately to disbelieve any claim that a particular behaviour is open-loop. Hence, even if some behaviour cannot be immediately identified as serving to control some perception, I tend to assume that there exists some such controlled perception, even though perhaps the behaviour in question might be ineffective in influencing it (as has been my behaviour in attempting to get you to believe the foregoing, or as is a baby’s behaviour in waving its arms and legs if the controlled perception is something to do with locomotion).

  BP: The actions I am taking are always part of the experiences I am having, along with their effects. It would be hard NOT to notice the actions, or some direct effect of the actions that can be perceived as a measure of them (like the sideward acceleration due to making the car turn).

Agreed. Although most of the time we are not consciously aware of most perceptions that are well under control, and it takes considerable training to become able to be aware of some of them, especially low-level ones. We do seem to notice actions we take in controlling perceptions that are not as much under control as we might like.

  BP: There may be deeper reasons for always closing the loop when possible.

I'm not clear what you mean here by "closing the loop". Failure to close the loop would seem to me to mean either aimless random acts or stimulus-response forced actions. The former is within the possibilities allowed by PCT, and is likely to be common in the early stages of reorganization. The latter is not. But what follows seems to suggest that by an unclosed loop you mean something different from either.

In the diagram below, consistent side effects of the actions of the B loop influence the A perception through an environmental pathway. For example, perception B might be the position of a joystick, and perception A the separation between target and cursor in a tracking task. In your language, is the output of the A subsystem “open loop”? Do you consider A to be a controlled variable? In my language, perception A is just as much controlled as is perception B. In my language, nothing in this diagram is “open loop”.

![SideEffectLoop.jpg|402x462](upload://fXfGehO3yvqAdCLnJWp9bHZtEyc.jpeg)
  BP: Reorganization depends on altering parameters of control, which in the demos of LCS3 are coefficients in a polynomial describing how a number of lower-order perceptions depend on variations in a higher-order output signal. These coefficients are varied until control is optimized and the value of the function of all the lower-order perceptual signals involved matches a fixed or changing reference signal.

  If the higher-order perceptual signal did not depend on lower-order perceptions in the same subsystems used to control the higher-order variable, I think it might be impossible to make this sort of reorganization work. It still might be that reorganization could be made to work without this kind of feedback, but I don’t see how, at the moment.

Assuming that here you are talking as though perception A in the diagram is “open loop”, I lose the logic.

You are treating only reorganization of the output weights, are you not? If that is the case, how could it possibly matter where the perceptual input functions at the higher level come from, provided that the output weights go to lower-level control systems whose actions influence the values of the higher-order perceptions? The two seem to me to be quite decoupled. Sometimes, by chance, one or more of the lower systems whose reference values are affected by a particular higher-level system may contribute their controlled perceptions to that particular higher-level perception, but it seems to me that this will happen quite by chance.

I could perhaps find it easier to have an intuition that there was direct reciprocity between the output and input connections between levels if both perceptual and output weights were being reorganized simultaneously, but even then I would argue that what matters for reorganization is whether perceptions are controlled, and that the side-effects of the actions in controlling those particular perceptions are more likely than not to maintain intrinsic variables near their reference values. What controlled perceptions will be maintained through reorganization should depend not on where their inputs come from, but on how the side effects of their control output actions influence the intrinsic variables. So although it may be intuitively more attractive to anticipate feedback reciprocity when both input and output connections are being reorganized together, I don’t find the argument persuasive.

To get a persuasive answer, one would have to try either to analyze the possibilities mathematically or run many reorganization simulations in each of which there are many intrinsic variables that could be influenced by the side effects of the control actions through a complex environment. That’s a rather large project!

Talking of demo models, have you found that in your 14-df arm model there is a bias toward feedback reciprocity?

    MMT: Changing the location of the car in its lane is a pure side-effect of changing the steering wheel angle, since only if the car is moving and there happens to be a functional steering linkage between steering wheel angle and road wheel angle does altering the steering wheel angle influence where the car is in its lane, even though control of the steering wheel angle may be perfect while the car is stationary and the steering linkage is cut.

  BP: You're tracing causation from the bottom up, whereas I trace it from the top down.

That isn't really what I am trying to do. What I am doing is illustrating that controlling the steering wheel angle is decoupled from perceiving the location of the car in its lane. Maybe it’s not an effective demo for you, and I accept that it has loopholes, but I’m not tracing causation in either direction.

  BP: Changing the location of the car in its lane is the reason for turning the steering wheel, an effect of turning the steering wheel when the car is moving. We seldom try to steer a stationary car or a car with a broken steering mechanism.

True. And the fact that there might be other reasons for controlling one’s perception of turning the wheel is a major loophole in my intended demonstration.

  BP: It's not an accidental side-effect like making the tires squeal. If turning the wheel made the car turn the wrong way, this discrepancy would be obvious and striking. So the reference signal specifying wheel angle is not just emitted and forgotten.

No, it's not an _accidental_ side-effect. It is a designed side-effect. The output of the car-in-lane control system is continuously emitted and the steering wheel angle is changed accordingly. Following the side-effects of that control action further along the chain of causality, the output of the steering-wheel-angle control system influences the steering linkage to change the angle of the road wheels, and hence the car’s direction. To be sure, it’s a side effect built into the environment of the steering wheel control system by the car designer, but that makes it no less a side-effect. The angle of the wheels affects the direction of the car, but in a noisy way dependent on the road camber and crosswinds, among other effects, eventually leading to a change in the visual relationships the driver sees through the windshield. The visual relationships are perceived as a change of the position of the car in its lane, but they don’t include any perception of the angle of the steering wheel.

Sometimes a particular steering-wheel angle may correspond to a rightward drift, sometimes to a leftward drift. That doesn’t matter if the output from the car-in-lane control just alters the steering-wheel-angle reference “more” or “less”, rather than trying to set a particular angle.

  BP: In a true open-loop system there would be no awareness, at the level of relationships, that the wheel was at a different angle from the right one.

A) I don't see how you could be so sure of that, even if there happened to be a true “open loop” situation. B) The only relationship here that matters is the one computed in the comparator of the steering wheel angle control unit. If we perceive “error”, that’s where we would get the perception that the steering wheel was at a different angle from the right one. C) The “car-in-lane” control unit has no way of knowing what steering wheel angle is correct, because of the noisy relationship between angle and drift in the lane. All the car-in-lane unit can know is its perception of where the car is in its lane and (at a different level) how fast that “in-lane” position is changing. One or other of those two units will output “more” or “less”, or the functional equivalent, but if it tried to output “92 degrees” it would have a hard time keeping the car in its lane. D) How much of the time are we conscious of the steering-wheel angle, anyway?

    MMT: As another example, consider the standard pursuit tracking experiment. The controlled perception is the visual perception of the separation between target and cursor. The output from the unit that controls this perception goes to some set of systems that control joystick movement (either position or velocity – it doesn’t matter which to make the point). Neither the position nor the velocity of the joystick enter into the perception of the separation of the cursor and the target. The effect of the joystick on the cursor is a pure side-effect of control of the joystick position (or velocity).

  BP: But the relationship between joystick movements and cursor movements is easily and directly perceived.

True. And irrelevant to the fact that the muscular control movements that influence the joystick are not part of the perception of the cursor-target distance. Also, to repeat a comment from the steering wheel example, it is not true that any particular position of the joystick compensates for any particular separation of cursor and target.

  BP: Rick and I did some experiments in which the relationship of cursor movement to mouse movement was reversed without disturbing the cursor position, at a moment when the cursor velocity was zero. The first indication of the reversal came when the next mouse movement made the cursor move the wrong way. For the first 0.4 seconds, the variable showed a runaway waveform, closely fitting a positive exponential. Then something inside the person reversed and control was restored. Clearly the relationship of the cursor movement to the mouse movement was being monitored. This monitoring was not necessary as long as the experimental conditions remained constant. But it had to be there all the time since there was no way to predict or anticipate when the next reversal would occur. The delay before the correction was approximately constant over many trials and several subjects.

Very nice experiments they were, too. How are they relevant to this discussion? As you argued in discussing these experiments, the reversal represents the actions of a higher-level control system that perceives the relationship between joystick and cursor movements. I see no reason to dispute that analysis, but neither do I see what it has to do with the topic under discussion.

  BP: I think we should be cautious about using the “content-addressable memory” idea. It has never been tested (even in simulation) to see if it’s feasible.

Yes, one should be cautious. But I have been thinking about its ramifications, and they are many, most of which I would not suggest on CSGnet because I would expect to be told that since I had not simulated them in a model they effectively did not exist. Your reasons for proposing them in the first place seemed cogent when I reread B:CP, and I see no reason to drop them.

  BP: At lower levels, relationships and below, it's a very clumsy way of controlling continuous variables because the address has to be varied in a quasi-continuous way to produce almost-continuous variations in the reference signal as required by dynamic control.

That's true, but at the same time the mechanism would provide for arbitrary nonlinear continuous relationships, even non-monotonic ones (though it’s hard to see where those might be used, unless it is in an “I give up” situation where the output needed would be higher than the lower control system could control for). I do accept that the brain has to be energy-efficient, and that inefficient circuitry is likely to be removed over evolutionary time, just as synaptic connections are pruned over maturation time in an individual. So I tend to agree that it is less likely that content-addressable memory is much, if ever, used at the lower levels.

  BP: All the examples I thought up in B:CP involved higher-order perceptions where it’s at least conceivable that reference signals are varied step-wise and remain fixed for at least a second or two before being changed again.

That seems very reasonable. I'm not going to argue one way or the other on that point, but my intuition agrees with yours. I might perhaps go so far as to hesitantly suggest it might occur only in category perception, and not at higher levels such as logic or program. In the other thread, I was invoking the mechanism at the category level.

  BP: One reason I neglected this idea is that I never got to the point of actually believing it. It was just a proposal.

One doesn't have to believe a proposal to follow it up and see where accepting it (or denying it) would lead, and see whether the implications account for any observations not hitherto included in the theory.

I realize your preference is not to follow up any lead unless you can convince yourself by modelling that it is at least plausible. My preference is to follow it up and see what it might imply, and if its implications seem important, then put in the analytic effort to see whether it can be shown to be (a) necessary, (b) plausible, (c) implausible, or (d) impossible. Modelling can show only plausibility. Modelling cannot show necessity, implausibility, or impossibility, but analysis has the potential to demonstrate any of the four. It is because the laws of thermodynamics and a disbelief in clairvoyance dictate the necessity of PCT that I believe it so strongly, not because of the success of any models in accounting for the data in particular circumstances.

Martin

[From Rick Marken (2010.08.30.1210)]

Martin Taylor (2010.08.29.13.23)–

  MMT: It is because the laws of thermodynamics and a disbelief in clairvoyance dictate the necessity of PCT that I believe it so strongly, not because of the success of any models in accounting for the data in particular circumstances.

What if a model that is inconsistent with the laws of thermodynamics accounts for data more successfully than one that is consistent with those laws? Who are you going to believe, the laws of thermodynamics or your lying eyes? ;-)

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2010.08.30.23.05]

  [From Rick Marken (2010.08.30.1210)]

  Martin Taylor (2010.08.29.13.23) –

    MMT: It is because the laws of thermodynamics and a disbelief in clairvoyance dictate the necessity of PCT that I believe it so strongly, not because of the success of any models in accounting for the data in particular circumstances.

  RM: What if a model that is inconsistent with the laws of thermodynamics accounts for data more successfully than one that is consistent with those laws? Who are you going to believe, the laws of thermodynamics or your lying eyes? ;-)

I would probably assume that the successful model that is inconsistent with the laws does not describe the process that gives rise to the data, no matter how accurately it accounts for the data. The less successful model also probably does not accurately describe the process, but at least it has a chance to be modified so that it could do so, and the modification would probably then fit the data better.

It's very easy to make models that fit any data you want, if you are able to make the model violate whatever natural law you want. Here’s a sketch of a general model that fits all data from any experiment: The model has a prophetic function and a mathematical function generator. It predicts using its prophetic function what the data will be, and uses its general function generator to create a function that will fit the data as closely as you want when the data are actually collected. The data would be fit better by that model than by any model that conforms to the currently understood laws of physics, but I would not use the excellence of fit as evidence that the successful model describes the processes that resulted in the data from the experiment.

However, that's not the point, is it, as I gather from your smiley. The point is that the need for PCT can be derived from the basic laws of physics with no need for any model fitting. I sketched that derivation in my Editorial for the PCT Special Issue of IJHCS. Even if PCT-based models happened to fit data very badly, I would say they were incorrectly structured or had badly judged parameter values, not that PCT was wrong. For PCT to be wrong would require changes in our understanding of the laws of physics, and even then it probably would not be badly wrong, just as orbital calculations using Newtonian dynamics are not badly wrong even though we now think in terms of General Relativity rather than of action at a distance. The fact that simple PCT-based models often fit the data very well is just gravy, suggesting that maybe real biological systems often use quite simple control structures.

Martin

[From Rick Marken (2010.08.31.0830)]

Martin Taylor (2010.08.30.23.05) –

  Rick Marken (2010.08.30.1210) –

    RM: What if a model that is inconsistent with the laws of thermodynamics accounts for data more successfully than one that is consistent with those laws? Who are you going to believe, the laws of thermodynamics or your lying eyes? ;-)

  MMT: It’s very easy to make models that fit any data you want, if you are able to make the model violate whatever natural law you want.

Actually, you can do that even without violating “natural law”. With enough parameters you can get almost any model to fit any data set. But the solution to this apparent problem is simply to do another experiment to test the model in new circumstances. If the model still fits the data then the model is not rejected. You keep doing this – varying experimental circumstances to see if the same model continues to predict the data – and as long as the model keeps fitting the data then it is not rejected, whether the model violates someone’s notion of “natural law” or not. A completely arbitrary model that predicts one data set will likely be quickly ruled out by a new experiment that is designed to test that model under different circumstances. The model will almost certainly fail to account for the data in the new situation. This is my concept of how science is done, anyway.
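
Rick's point about parameters can be made concrete with a toy example: an over-parameterized curve fit reproduces one data set almost perfectly yet fails the moment the circumstances change. This is an illustrative sketch with made-up data, not a model from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)

# "experiment 1" and a new experiment under changed circumstances
run1 = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
run2 = np.sin(2 * np.pi * t + 0.5) + 0.05 * rng.standard_normal(t.size)

coeffs = np.polyfit(t, run1, deg=12)   # many free parameters
fit = np.polyval(coeffs, t)

print(np.mean((fit - run1) ** 2))      # tiny: the model "fits" run 1
print(np.mean((fit - run2) ** 2))      # large: it fails the new experiment
```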

  MMT: However, that's not the point, is it, as I gather from your smiley. The point is that the need for PCT can be derived from the basic laws of physics with no need for any model fitting.

The point (of the smiley face) was that I disagree with that emphatically.

  MMT: I sketched that derivation in my Editorial for the PCT Special Issue of IJHCS.

Could you give a quick (1 short paragraph) recap of that? I’d be especially interested in knowing whether your derivation shows that S-R theory cannot be derived from the basic laws of physics and is, thus, not needed.

  MMT: Even if PCT-based models happened to fit data very badly, I would say they were incorrectly structured or had badly judged parameter values, not that PCT was wrong.

You talk about this in terms of PCT in general. But what about individual models, like yours and mine, which are both based on PCT? Can we forgo the experiments and just test them by seeing which is most consistent with the laws of thermodynamics?

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Frank Lenk (2010.09.21.0809 CDT)]

Bill – Based on your suggestion that to understand
reorganization we should take the time to fully understand Demo 7-2, I’ve
been looking at the Delphi code supplied on the LCS III disc. Given that I
want to create my own model filled with agents who have this important
capacity, I figured I would need to dive in at this level at some point, and
there is no better time than the present. I have to confess, however, that my
computer programming skills don’t seem up to the task of deciphering it
completely.

Is there a way to get (from you or Bruce Abbot) a more
completely commented version of the procedures “Control” and “Reorganize”
that would help me understand what it is doing, and why, perhaps matching up
your earlier description with the lines of code that accomplish it?

I apologize for the general nature of this request, and I am
hopeful that others who are interested in a detailed understanding of how
control and reorganization can be simulated will find it useful as well. But there
is something missing in my understanding of what the code is doing, and because
it is missing, I am having trouble being more precise about what it is I am not
understanding.

Thank you for your patience with me.

Frank


[From Bill Powers (2010.09.22.0725 MDT)]

Frank Lenk (2010.09.21.0809 CDT) --

I'm busy off the list putting a paper together with some other people so don't have much time right now. But sure, I'll do what you ask -- just remind me again in a couple of weeks if I don't manage to do it in that time.

Best,

Bill P.

[From Frank Lenk (2010.11.02.1104 CDT)]

Bill - you may remember me trying to better understand reorganization by diving into the code for Demo 7-2 in LCS3. Do you think you still might find the time to provide detailed comments or annotation for the procedures "Reorganize" and "Control"? A more precise understanding of how reorganization and control are modeled would be of great help to me (and I hope of at least some value to others on this list).

Frank


[From Bill Powers (2010.11.02.1050 MDT)]

Frank Lenk (2010.11.02.1104 CDT) –

I'll do this by direct communication, Frank, though not right now. CSGnet has become a forum for other kinds of discussions, and while I watch it to see if anything of interest is happening, I'm turning my attention to PCT projects elsewhere.

Best,

Bill P.