CSGnet discussion archive (assorted posts)

[from Avery Andrews 920818]

  (penni sibun 920816)

> i don't really understand
>y'all's rhetorical use of ``mysterious'';

``mysterious'' means mysterious to a behaviorist or an old-fashioned
planning weenie (or, I guess, any random bad guy).

>i think it's conceivable that pct-type control might be more
>interactionist than any of us understands at this point. however,
>part b) is squarely cognitivist: it requires an inside-outside line
>to be drawn, and puts crucial stuff on the inside of the line.

This doesn't fit with my conception of what `cognitivism' is - I would
take cognitivism-in-field-X as being the position that everything worth
understanding in field X can be understood as a process of building
mental representations in the head. This is different from drawing
an inside-outside line & locating certain things inside the line. In
fact, Chapman and Agre seem to do this: Sonja has a clear
inside-outside line, with the visual system & much else located inside
it. What matters, I think, is whether you are supposed to be able to
understand why the stuff inside does anything useful, without also
understanding what's going on outside. This is denied by PCT, and not
implied by my (b).

I actually think it's essential to find ways of understanding internal
structure & relating it to `behavior'; otherwise, since everything after
all does bottom out in physics, why don't we all go to the beach and
wait to buy the book when Stephen Hawking & Co finish figuring
everything out?

Switching topics ...
The driving story illustrates nicely what I at the moment consider to be
the biggest potential problem for PCT (ignorance of how perception works
is of course another big problem, but it afflicts everybody) - it often
seems to happen that one can think of certain abstract & high-level
variables as being maintained (car moving down road = 1; car lying on it
or beside it as a junk heap = 0; reference level = 1) by means of various
relatively concrete and straightforward low-level variables being
maintained, but in between there's a vast zone where all sorts of stuff
might be going on, which, at least initially, seems to have about as much
structure as a plate of spaghetti. E.g. sometimes one is controlling for
`car near middle of road', other times `car in snow rut', other times
`car to right (or is it left??)' of reflectors, & on top of that there's
anticipatory compensation (knowledge of the road), however that works, etc.

A simpler case of this sort of thing is the top-level control of the
beerbug: the whole bug's nervous system can be regarded as controlling
for the bug having a high energy level (low energy levels trigger
`behaviors' that typically wind up having the effect that the energy
level gets raised), but the actual way in which the sub-systems in charge of
wandering, edge-following, and odor-hunting negotiate to achieve this
end is pretty confusing. I certainly can't be sure that PCT ideas will
help in clarifying the workings of this kind of system, though I
consider it worth spending some time to try to find out. What my point (b)
says is that either these negotiations will turn out to be castable in PCT
terms, with some benefit derived from this way of looking at them, or
the model bug will prove to be too dumb to be viable, and the real
bugs that are smart enough to keep themselves alive will have PCT-style
internals. Such is the claim, at any rate. Obviously, the jury has
barely begun to sit ....

(Bill Powers ???)
(since cognitivists aren't trying to understand behavior, I don't think
that

I seem to have mislaid the posting this is a reply to, but as far as
I can make out, sonja's limitation to the 8 joystick directions is in
no way crucial. She would not require deep modifications if she
were supposed to drive, say, a hovercraft sled with thruster and rotator
engines, able to move in any direction (like the spaceship in
Asteroids). The reason these interactive AI gizmos effect control is
that what they were designed to do is achieve interesting results under
circumstances that change rapidly and unpredictably relative to the
amount of time it takes them to do anything significant (e.g. kill
a monster, as opposed to move a pixel to the left). The fact that
the targets move around unpredictably means that there are unpredictable
disturbances in the path from gross output to net result, except that
the disturbances that C&A focus on are high-level, distal ones
(where the object you're heading for actually is) rather than low level
proximal ones (how much torque you get for how much neural current).
The same dog, just barking in a different corner of the yard.

As for CGA HiRes, writing with setcolor(BLACK) doesn't effect erasure,
but the setwritemode(1) trick looks like what I was looking for.

I had already drawn the gloomy conclusion that re-writing was the only
way to erase in Borland graphics, by looking at the NSCK code and seeing
that that seemed to be how Pat & Greg were doing it.

Avery.Andrews@anu.edu.au

(penni sibun 920817.2100)

   [from Avery Andrews 920818]

    >i think it's conceivable that pct-type control might be more
    >interactionist than any of us understands at this point. however,
    >part b) is squarely cognitivist: it requires an inside-outside line
    >to be drawn, and puts crucial stuff on the inside of the line.

   This doesn't fit with my conception of what `cognitivism' is - I would
   take cognitivism-in-field-X as being the position that everything worth
   understanding in field X can be understood as a process of building
   mental representations in the head. This is different from drawing
   an inside-outside line & locating certain things inside the line.

but _in extremis_, which is certainly how i'm arguing, just saying
something's in the head is drawing a line.

  In
   fact, Chapman and Agre seem to do this: Sonja has a clear
   inside-outside line, with the visual system & much else located inside
   it.

well i don't think sonja's implementation really addresses
inside/outside (though it does address other things), and i don't
think c&a think it does either. i don't think anyone knows how to
address this in building frobs right now. sonja's deemphasis on the
central system makes it not completely inconsistent w/ interactionism,
even if c&a succumbed to linedrawing in building it.

cheers.

        --penni

[Avery Andrews 920818:1543]
  (penni sibun 920817.2100)

I guess I don't (yet) see the point of not drawing a line between the
inside & outside of critters. Maybe Sonja is not the best example of
this, because she's not a full-scall critter-in-environment simulation, but
Randy Beer's bug is, and there seems to me to be a clear difference
between the neural circuits & currents on the inside & the locations of
the food-patches, barriers, etc. on the outside. I take Beer's point not
to be that there is no inside-outside distinction, but that the
explanations for behavioral patterns (at the `molar' level, if I
remember my psych. jargon correctly) are often to be found in neither
place exclusively.

Avery.Andrews@anu.edu.au

[Avery Andrews 9108191231]

(penni sibun 920818.1200)

  >maybe someone can explain this to me. when you say ``signal'' or
  >``variable'' or ``percept'', it has connotations to me of a unified
  >thing, like a tone, or a light intensity. but when i look at the
  >road, i am not perceiving something like a tone. if you can explain
  >how all the stuff my eyes take in can be a single unified thing, maybe
  >i won't find it so oversimplified.

With a sonja-like visual system, one might set one marker to tracking
the hood-ornament, another the center line of the road, & let the controlled
perception be the distance between them (or so I guess--having never
built a sonja-like visual system, I don't have full faith in my
intuitions about them).

On the general subject of the supportiveness of the environment, I
suspect that the difference between PCT & c&a-style interactionism
might be more one of rhetorical emphasis than substance. PCT in
fact depends very much on the fact that many (in fact most, by an
overwhelmingly large margin) features of the environment can be
leaned on. A steering subsystem would for example depend on all sorts
of facts about roads and carparts about which it knows nothing, as
well as upon the workings of the lower-level control-systems it
works through. To perform at a minimal level, all that it has to `know'
is the relationship between change in torque-applied-to-the-wheel and change in
relationship-between-hood-ornament-and-centerline-markers (of course,
for high performance, lots more is needed, but the extra knowledge isn't
about car parts, etc.).

One theme of c&a interactionism, as I hear it so far, is that you
don't have to know very much to get by -- keeping your eye on the
situation and following a few simple rules is enough. PCT claims that
you can say a bit (or maybe a lot) more than that, in particular,
something about the general nature of the kinds of rules that will in
fact suffice to get you by -- that on the whole, they tend to be such
as to keep perceptions ( = the output of `registrars') at particular
values, or at least within particular ranges of values. The claim is
that architectures that don't do this won't work. I would take this
claim to be decisively refuted if someone built a robot vehicle that
did stay on the road in an interactionist manner, but just because it
was `the easiest thing to do', without containing anything remotely
like a control system controlling a perception along the lines of
`position-of-vehicle-relative-to-the-road'.

It remains to be seen if this is a `golden thread' that will be the
indispensable key for untangling or building these systems, or if it is just
one of a dozen rules of thumb for doing so. It does so far seem to me
to be a useful organizing principle that is absent from c&a (but for all
I know, this could be only because they think it's so obvious that it
isn't worth mentioning). And it also remains to be seen if, when
applied to complex systems, it will apply cleanly, or instead grow so much
hair that it will get difficult to discern the original beat under the
fuzz.

For example, what about the little positive feedback loop in the
beerbug's feeding controller (based on the supposed wiring of the
infamous Aplysia)? What goes on here is that if the bug is hungry
enough (the energy level is far enough off from its reference level)
and over food, it will start eating, but eating also excites
the feeding controller, so it keeps eating even when its energy level
is no longer low enough to successfully trigger eating. The whole
system can be regarded as (part of) a control system for the maintenance
of energy-level, but the positive feedback loop looks like an odd bit
stuck in.
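
To make the oddity concrete, here is a toy version of such a loop, written
in C. The numbers and names are invented for illustration (this is not
Beer's code); the point is only that the extra excitation contributed by
the act of eating keeps the bug eating well past the energy level at which
hunger alone would have triggered it:

/* Toy sketch of the feeding loop described above; all constants invented. */
#include <stdio.h>

int main(void) {
    double energy = 40.0;            /* current energy level                */
    const double energy_ref = 100.0; /* reference ("satiated") level        */
    const int over_food = 1;         /* assume the bug is sitting on food   */
    int eating = 0;

    for (int t = 0; t < 60; t++) {
        double error = energy_ref - energy;       /* hunger signal          */
        double excitation = 0.05 * error          /* drive from hunger...   */
                          + (eating ? 1.5 : 0.0); /* ...plus positive
                                                     feedback from eating   */
        eating = over_food && (excitation > 2.0); /* hunger alone triggers
                                                     only when energy < 60  */
        if (eating)
            energy += 2.0;                        /* intake raises energy   */
        energy -= 0.2;                            /* metabolic drain        */
        if (t % 5 == 0)
            printf("t=%2d  energy=%6.1f  eating=%d\n", t, energy, eating);
    }
    return 0;
}

With these made-up constants, eating starts once energy falls below 60 but
continues until it climbs past about 90 -- the same sort of hysteresis the
Aplysia-style loop is supposed to produce.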

Avery.Andrews@anu.edu.au

I've been thinking a bit about steering, & here's what I've come up
with so far. The only direct and simple relationship is between
the absolute angular position of the wheel and the curvature of
the car's path, but I think that people pay little, if any, attention
to either of these factors. What probably does matter are such things as:

  a) the difference between where you are and where you want to be
      (transverse to the road), P*

  b) the rate at which the car is drifting across the road, dP*.
      There may be several ways of picking this up (such as
      differentiating P*, or watching the motions of lines on
      the road w.r.t. the car hood, & I'm sure other things as well).

  c) the rotational velocity of the steering wheel.

  d) the angle between the orientation of the car and the tangent
      to the road.

  e) how far the wheel has turned since some time t0.

I'll only consider (a-c), though (d-e) seem like they might play a role
too.

Observe that on a big lane change you don't normally try to get into
your new lane as quickly as possible, but drift transverse to the
road at a fairly steady rate. And also, that when you get close to where
you want to be, the drift rate needs to be smoothly decreased to zero.
So I propose that P* is controlled by means of controlling dP*. So dP*_ref
should be a function of P*, zero when P* is zero, small when P* is small,
but flattening off to a constant fairly quickly under normal
circumstances (I won't speculate about panic swerves).

Then dP* itself gets controlled by controlling the perceived rate
of motion of the steering wheel: when dP* is too big in the rightward
direction, crank the wheel counterclockwise, & vice versa (I suspect that
there are various ways in which the reference rate of wheel-rotation might be
determined, and that it might even be possible to devise experiments to
figure out which one(s) people were actually using). Lower-level
systems take care of summoning up enough torque to get the wheel to
rotate at the right speed (and producing the hand-over-hand motion needed
to effect a large total rotation).

Hence there is a three-level hierarchy of controlled perceptions: relative
position, rate-of-change of position, and rate-of-rotation of the steering
wheel. And of course there'd be some more levels involved in producing
the wheel movements. Monitoring the rate of change of (transverse) position
makes it unnecessary to do the otherwise complex calculations that would be
needed to figure out how fast to turn the steering wheel when, and monitoring
the rate of rotation of the steering wheel makes it unnecessary to predict
how much force needs to be applied to the wheel to get the required curvature
of the path.
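
Just to make the proposal concrete, here is a toy simulation of that
three-level arrangement, in C. Everything in it -- the gains, the clamp on
dP*_ref, the assumption that drift rate is simply proportional to wheel
angle -- is an illustrative guess, not a claim about how drivers are
actually organized:

/* Toy three-level steering hierarchy: transverse position P* is controlled
   by setting a reference for drift rate dP*, which is controlled by setting
   a reference for steering-wheel rotation rate.                            */
#include <stdio.h>

int main(void) {
    double pos = 3.5;          /* transverse offset from desired position (m) */
    double wheel_angle = 0.0;  /* steering-wheel angle (arbitrary units)      */
    double dt = 0.05;          /* simulation step (s)                         */

    for (int t = 0; t <= 400; t++) {
        /* Level 1: position error sets the drift-rate reference,
           flattening off so a big lane change is a steady drift.   */
        double drift_ref = 0.5 * (0.0 - pos);
        if (drift_ref >  1.0) drift_ref =  1.0;
        if (drift_ref < -1.0) drift_ref = -1.0;

        /* Toy environment: drift rate proportional to wheel angle. */
        double drift = 0.4 * wheel_angle;

        /* Level 2: drift-rate error sets the wheel-rotation-rate reference. */
        double rot_ref = 2.0 * (drift_ref - drift);

        /* Level 3 (abstracted): assume lower-level systems deliver enough
           torque that the wheel actually rotates at the reference rate.     */
        wheel_angle += rot_ref * dt;

        pos += drift * dt;
        if (t % 80 == 0)
            printf("t=%5.1fs  pos=%6.2f  drift=%6.2f  wheel=%6.2f\n",
                   t * dt, pos, drift, wheel_angle);
    }
    return 0;
}

Run for a simulated 20 seconds it makes a lane change in the way described
above: a steady drift while the position error is large, tapering off
(with a slight overshoot, given these made-up gains) as the car nears the
target position.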

A further elaboration: it seems plausible that drivers also assess the
curvature of the upcoming road, and anticipate how much turning of the
steering wheel will be needed to keep the car on course (it ought
to be possible to do a video-game experiment to see whether they
actually do this). So there could be an anticipator circuit
assessing the appearance of the road ahead, and producing
reference levels for rate of wheel-turning. The higher-level feedback
loop can then add an additional term to correct for the anticipator's errors.
Having control for a rate-of-turn of the steering wheel means that this
anticipator can be a lot simpler than it would otherwise have to be,
since it doesn't have to guess how much force ought to be applied to
the wheel (though there could be anticipators contributing the force
reference levels, they wouldn't have to be very precise).
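
To show where such an anticipator would slot in (purely illustrative --
anticipated_rate() and its scaling are invented), the feedforward guess
and the feedback correction can simply sum to form the wheel-rotation-rate
reference, so the feedback loop only has to make up for whatever the
anticipator gets wrong:

#include <stdio.h>

/* hypothetical feedforward: guess a wheel-turning rate from the curvature
   of the road ahead */
static double anticipated_rate(double road_curvature_ahead) {
    return 5.0 * road_curvature_ahead;        /* made-up scaling */
}

int main(void) {
    double drift_ref = 0.0, drift = 0.3;      /* example values, as in the
                                                 drift-rate loop sketched above */
    double curvature_ahead = 0.02;            /* a gentle right-hand bend */

    double rot_ref = anticipated_rate(curvature_ahead)   /* feedforward guess   */
                   + 2.0 * (drift_ref - drift);          /* feedback correction */
    printf("wheel-rotation-rate reference: %.2f\n", rot_ref);
    return 0;
}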

Avery.Andrews@anu.edu.au

···


[Avery Andrews 920824.1034]

(Bill Powers 920822.0800)

Thanks for the pointers. Two further observations.

First, I am very sceptical that people normally perceive the wheel
angle. In fact, I once read an article in which you were advised to
put some tape on the wheel at the position that is at the top when the
wheels are pointing straight ahead, in order to make the wheel-angle
information available. If people are being advised to set up visual
aids to make it more available, that suggests that they aren't normally
using it. So my guess is that rate of movement and net
changes in angle (however observed) are what is actually noticed.

Second, while taking a long drive yesterday I noticed something that
doesn't fit into my story as told, which is that on long distance drives
at least, most of my (and my wife's, as far as I could see) steering
movements were quick & small adjustments, whereby the wheel was turned
various distances, but at what seemed subjectively to be a pretty
uniform rate. These motions seemed to be completed before there was
any noticeable change in the heading of the car. I don't yet have
a real story about what's going on here, but I think it involves
perceiving the car-heading to be wrong, ordering up more-or-less enough
path-curvature to change it quickly enough, and then repeating this
to straighten the car out again.

Avery.Andrews@anu.edu.au

[Avery Andrews 920828.0923]
(Bill Powers 920827.0800)

>You say
>
>>the interactionist argument in general deemphasizes individuals
>>and emphasises their interactions.
>
>Does this mean that interactionists believe that interactions can be
>EXPLAINED without reference to the properties of individuals, or that
>they can be DESCRIBED without such reference?

Neither, based on what I've seen. In the present intellectual climate
out there in ScienceLand, it is just a matter of whether you bother
to think about interactions at all. E.g. all PCT-ers are already
Interactionists.

Avery.Andrews@anu.edu.au

[FROM Dennis Delprato (920828)]

I supervise laboratory experiences of approx. 380 first- and second-year
students and 60 third- and fourth-year psychology majors each year. For
this I have worked up numerous labs that have several
administrative requirements. Basically, they are prepared such that
lab instructors can conduct the labs with minimal direction from
me. Each lab has a manual of instructions with equipment, materials,
procedural, and such requirements spelled out for the instructors.
Each also has a handout for the students for them to use in the
lab meeting, for reference, and for lab reports, when required.
I lay out the above by way of seeking assistance from anyone who
might be interested in getting a PCT lab back on line for use here
and anywhere else (I am not interested in copyrighting the thing).
I am interested in getting the lab "up."

As the late great Ross says, "Now, here's the story":

I do no programming, but was able to get the assistance of a
capable student who just about got a particular PCT lab
on line. I based the lab on Rick Marken's "Selection of
Consequences" paper (Psychol. Reports, 1985, 56, pp. 379-383 also
see Marken & Powers (1989, Behav. Neuroscience, 103, 1348-1355)
The lab is entitled "Goal-Seeking with Random Consequences of
Responses."

I require my labs to go as smoothly as possible from beginning
to end. In the case of this lab, this means the student must
have guidance provided from the time the Macintosh is turned on until
they turn it off. No problem. This goal was met. I like flexible
labs, by which I mean variables can be changed. No problem. We have
default values for several variables, but they can be changed.
I like a clear demonstration for students. No problem. The E. coli
problem that Rick devised is a beaut. I made it into a game. I require
data to be collected. No problem. The computer stores data and the
student can print it out at the end. Labs that "involve" students
because they are "interesting" are a plus. No problem here.

Given the lack of problems to this point, just what is "my"
problem? The problem is that the "E. coli" program causes the
Macs to crash in an unpredictable (more or less) fashion. Not
only does the program crash, but other files on the hard drive get
contaminated. Now I greatly admire what my programmer has done to
this point, but working with amateurs can be frustrating. The first
things we looked for were viruses. No matter how hard we looked, we
could find no evidence of viruses. The last hypothesis had something
to do with the fact that the program used Rascal and perhaps a Mac
operating system that has some slight difference from the ones in
the Macs here. The student was at Mich. State when he did the
programming. It could be the Rascal. It could be the operating
systems, ....

Just to fill in on the task, the "EColi Game" is introduced to
the student by directions on the screen that read as follows:

"Your goal is to move the cursor into the target [a box with a size
that can be varied] as fast as possible. There will be a time limit
of five minutes."

"The only means that you have to control the cursor is to click
the mouse button in the Executor Window. Moving the mouse has no
effect on the cursor."

"When the game ends, you time and some other relevant data are
shown."

"If you need to stop the game before five minutes have elapsed,
click the mouse inside the target area." [I recall noting
that some other way is needed for this.]

"Hit any key to continue."

IN ANOTHER VERSION, instead of measuring the time to reach
the target, a run (trial) is set at a particular duration and
the student earns points derived from the amount of time the
cursor stays inside the goal (target, box).

Per Rick Marken's E. coli preparation, responses (mouse
clicks) change the direction of the cursor according to a
random pattern.
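
For anyone wanting to rebuild the task, the contingency as I read it from
the description above is: the cursor moves in a straight line at constant
speed, and each click re-aims it in a randomly chosen direction. A minimal
sketch follows (in C rather than the original Rascal, with a fake click
schedule standing in for real mouse input; all names and constants are
illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    const double PI = 3.14159265358979;
    double x = 0.0, y = 0.0;      /* cursor position                   */
    double heading = 0.0;         /* current direction of travel (rad) */
    const double speed = 1.0;
    srand(42);

    for (int step = 0; step < 20; step++) {
        int clicked = (step % 4 == 0);   /* pretend the student clicks here */
        if (clicked)                     /* a click re-aims the cursor in a
                                            randomly chosen direction       */
            heading = 2.0 * PI * (rand() / (double)RAND_MAX);
        x += speed * cos(heading);       /* between clicks: straight line   */
        y += speed * sin(heading);
        printf("step %2d  clicked=%d  x=%6.2f  y=%6.2f\n", step, clicked, x, y);
    }
    return 0;
}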

Each student can collect a nice amount of data rather quickly
(we have 6 or so Macs). Performance over runs (trials) can
be plotted and even GROUP data (I know, shame) can be analyzed
via repeated-measures AOV. The latter can be used to show
how uninformative the analysis of group data is, in that
wide individual differences are revealed in the data.

Students can even print out a graphic of their cursor's
tracks.

As I have implied, I have been able to have some
computers go through the entire data collection process.
It is just that the crashing is so unpredictable as to render
routine running of the lab unwise.

I suppose I have supplied enough information to anyone
who might be interested. The recent comments on PCT in
the classroom led me to think of this undertaking that I
have let sit for about a year.

Dennis Delprato
Dept. of Psychology
Eastern Mich. Univ.
Ypsilanti, MI 48197 U.S.A.
Psy_Delprato.EMUNIX.EMICH.EDU

···

SUBJECT: PCT in the Classroom

[From Bruce Abbott (950813.1505 EST)]

Bill Powers (950813.0925 MDT) --

RAT 1
Corr, meal size vs intake = -0.87
Output gain = 1.04
Ref = 24.92
Mealsize correlation, real vs model = 0.98
Intake correlation, real vs model = 0.98

RAT 2
Corr, meal size vs intake = -0.91
Output gain = 1.20
Ref = 29.31
Mealsize correlation, real vs model = 0.99
Intake correlation, real vs model = 0.99

RAT 3
Corr, meal size vs intake = -0.47
Output gain = 0.40
Ref = 11.36
Mealsize correlation, real vs model = 0.93
Intake correlation, real vs model = 0.93

Well, if they couldn't vary meal _frequency_ to control intake rate via
lever-pressing, these rats sure did a hellofa job making up for it by
varying meal size! Nice job of modeling, Bill (as usual). Rat 3's
reference looks a tad too low, however (noise in the data?). What happens
if you try, say, 21 g?

Just when things are really looking interesting, I've got to run off to
other places for a while. I should be back Wednesday, Thursday at the latest.

Regards,

Bruce

[From Bill Powers (2008.09.18.1441 MDT)]

Jim Wuwert 2008.09.18.1534EST.

[From Bill Powers (2008.09.18.0743 MDT)]

Jim Wuwert 2008.09.17.1923 EST --

I want to be in line with what I understand about PCT. I want to
live it out and the whole punishment thing seems so Skinnerian. So not
PCT. It’s (punishment) useless from what I have seen, but what other
option do we have? Everything is so rushed; how do we eliminate abortions
without punishing people and have it really work? I wonder if punishment
will really work. If it doesn’t, then what option do I
have?

The good reason for hoping that punishment will have some good
effect is that it would solve the abortion problem much more easily. If
we punish them, then I don’t have to take that much time to deal with it.
I can put them in jail and leave them there for X amount of time. It is
efficient and quick and orderly. If someone does X, then they have to go
to Y. I feel at peace when I know that will happen. If we can put someone
in jail, then I don’t have to take the time to talk them out of it or
develop a new social program. It is simpler.

The good reason for not wanting punishment is that using a
different approach besides jail would involve more time and investment.
If I took the time to talk with people in these situations maybe I could
help them, thus I would be helping to prevent them from getting an
abortion. I could see them as real people with real feelings and real
circumstances rather than viewing them as “those people” who
believe in abortion. Maybe talking to them or using something other than
punishment would help save them from having an abortion, but also it may
help them have a better life. It may have a better long term effect. In
other words, abortions may decrease.

So the conflict here is between using punishment and using some other
means – talking with people or using something other than punishment.
The overall goal is the same either way, isn’t it? Eliminating
abortions.

Just above you bring in something new that makes me think it's a slight
change from the earlier parts; you say “Maybe talking to them or
using something other than punishment would help save them from having an
abortion, but also it may help them have a better life. It may have a
better long term effect. In other words, abortions may decrease.”
When you say “abortions may decrease,” is this different from
“eliminating abortions?” And you mention “I could see them
as real people with real feelings and real circumstances rather than
viewing them as ‘those people’ who believe in abortion.” Is that a
new idea, or something you’ve been thinking about? Any new thoughts about
those ideas now that I’ve picked them out to talk about?

Best,

Bill P.

[From Bill Powers (2003.09.28.1039 MDT)]

Bruce Nevin (2003.09.28.1140 EDT)--

Fascinating thread, Bruce; your observations are always illuminating and
challenging, though you will receive more comment from the challenged than
from the illuminated.

One of my subgoals in life is to do away with disciplines. Disciplines
offer the advantage of knowledge in depth about restricted regions of
experience, and they inevitably end up overgeneralizing by extending
specialized concepts into broader areas where perhaps some other notion
would serve better. The trick, of course, is to know when this is
happening, and I have no secret method. But sometimes things strike me...

>An essential point is that these are social products, and not inventions
>by individuals. To grasp this, think about the fact that language changes.

It strikes me that the term "social product" is essentially meaningless.
It's a term that refers to the fact that new patterns of behavior appear in
groups of people for no apparent reason. Rather than saying we don't know
why this particular pattern appeared in this social context, we attribute
some causal attribute to the society in question such that it is capable of
giving rise to the observed pattern (a dormitive principle). But of course
we never say what that ability is, because we don't know. To see that this
is an empty term, simply ask _how_ the society produced the product. Not
how the individuals produced it, but how the _society_ produced it. There
is, of course, no answer. Society can't actually "do" anything.

Similarly, when we say that "language changes" (or customs or laws or
morals or prices), we're describing the outcome of some unknown process,
not the process itself. But it sounds as if language is something that can
change all by itself, when stated that way. Language changes, causing us to
speak differently. Is that really what we would want to say?

This is also related to the discussion about the origins of language: who
invented each term we use? One answer is that for any given usage, there
had to have been some person who was the first to use it (though parallel
discoveries or inventions can happen). But another answer is that language
evolves from one form into another, so nobody can be said to have invented
it. But does _language_ evolve? Or is it that _people_ evolve new uses of
language?

What stands out for me in all of the above is the question of agency.
Societies, cultures, languages, and so on are not active agents; in
themselves they can't cause anything to happen, any more than the fact that
a traffic light turns green can cause cars to start moving. The behavioral
illusion applies, I think: we see an antecedent and a consequent, and infer
a causal relationship. But the observed relationship is valid only under
certain assumptions about the underlying organization. If a control system
is involved, appearances deceive.

What I'm coming to is this: the missing ingredient in all of these
discussions is the concept of interaction, of a network of
interconnections, in which the final pattern of behavior is not caused at
all, but is simply a characteristic of the individual elements interacting
(1) according to their individual characteristics, and (2) according to the
way they affect each other. Those two factors are precisely equal in their
importance; neither can possibly take precedence over the other. Somebody
might point to a horse and say "pfafflespritz", but within ten minutes, or
essentially simultaneously, someone else will reply by calling it a
"pfaffle," because "pfafflespritz" is too hard and slow to say using a
human mouth, and the first person who said it will probably never say or
hear the full word again, either. The characteristics of other individuals
reflect our actions back onto ourselves after passing through human
filters, and repeated interactions of this sort bring our behaviors into
least-effort agreement with our individual properties. There is no society;
there are only people. Yet what people do is strongly influenced by everyone
else in -- the society.

We can't think of society as something different from the people who make
it up. We can't think of language as something different from the
utterances of the individuals who speak it. And of course we can't explain
behavior that impinges on other people without taking into account the
reflected impingements produced as the other people adjust their actions to
maintain control of their own lives. But we need new tools for handling
this multidimensional, multiordinate, multisystemic process of closed-loop
interactions; the old words and concepts are simply insufficient. The old
disciplines are too narrow.

Best,

Bill P.

[From Bruce Abbott (970901.1215 EST)]

Rick Marken (970831.2100) --

>I want to tell you how nice it is of you to explain to Bill
>how his theory of control (where organisms select the intended
>consequences of their actions) is also a theory of reinforcement
>(where actions are selected by their reinforcing consequences).
>Does control theory also explain the behavior of balls rolling
>down inclined planes?

The explanations I am offering are control explanations for empirical
phenomena which have been given certain labels in EAB. I am not offering
PCT as an explanation of reinforcement _theory_, but of certain phenomena
that reinforcement theory is supposed to explain. I am sorry that you are
having such a hard time telling the difference.

Regards,

Bruce

Subject: Re: Control of behavior: Real and Illusory?

[From Wolfgang Zocher (980907.0950 MESZ)]

From Bill Powers (980906.0426 MDT)

>Wolfgang abandoned those programs mainly because he found that others had
>done the same thing, only in a more advanced form (MATLAB and SCILAB). I've
>looked at SCILAB and it won't run on my computer, so I would have preferred
>the programs Wolfgang was developing.

I think I'll come back to my programming efforts on PCTSim when I have my
own PC. One reason for abandoning the programming of PCTSim was the lack
of a PC - I "only" have a Unix workstation, and the development of a
program which should be used on PCs is rather difficult to do on a Unix
machine.

Wolfgang Zocher

···

-------------------------------------------------------------------
email: zocher@rrzn.uni-hannover.de (office)
       zocher@apollo.han.de (home)
www: http://www.unics.uni-hannover.de/rrzn/zocher
-------------------------------------------------------------------
--- Hiroshima '45 --- Chernobyl '86 --- Windows '95 ---
-------------------------------------------------------------------

[From: Dennis Delprato (920925)]

       RE: Postulates

                  Some Fundamentals of B. F. Skinner's Behaviorism
          (From article with above title, American Psychologist, 1992, 47,
                                November, in press)

                                 Dennis J. Delprato
                            Eastern Michigan University
                                          &
                                  Bryan D. Midgley
                                University of Kansas

       Despite claims of scientific superiority, neither B. F. Skinner nor
       any adherents of his behaviorism have seen fit to explicitly identify
       in one place its fundamental postulates. Indicative of their ties to
       positivism, several votaries of Skinner's brand of psychology served
       as referees of the paper cited above, and all rejected the idea that
       Skinner had any postulates or assumptions at all! Thus, the paper was
       only accepted for publication after the offending term, postulates,
       and, subsequently, the term, assumptions, were struck from its title.

       Why would one who has rejected Skinner's psychology complete a
       laborious undertaking that entailed scrutiny of Skinner's corpus with
       the aim of extracting and, to some extent, systematizing his approach
       to psychology? The most basic answer is because I am of the opinion
       that it is seriously flawed in a most fundamental way--its postulates
       (not necessarily all). Yet extant attempts to bury it are
       unconvincing. For example, perhaps the most-cited attack, Chomsky's,
       does not even stick with confronting Skinner's position, attributing
       Hull's drive reduction theory to Skinner's views, for example. I do
       not deny that Skinner's psychology is a step forward. Indeed,
       frequently those rejecting it move us backward as far as what it means
       to take a scientific (i.e., naturalistic) approach to psychological
       events. My reasoning is that in laying out Skinner's bedrock
       assumptions and generalizations in the most faithful way possible I
       have provided a legitimate target for scientific and scholarly debate.

       My idea for a fair, indeed very sympathetic, presentation of Skinner's
       science came after years of hearing that it represented the hope of
       the future and that those of us who were putting forth field and
       system views were offering nothing new. My decision to develop an
       organized presentation of Skinner's views in such a way that it met
       the approval of prominent Skinnerians derived from what I saw as a
       need to get in print an authoritative version of his most fundamental
       assumptions and generalizations so critics had *something to address.*

       Note that the motivation behind my systematization of Skinner's
       psychology is in no way included in the paper cited above. The paper
       was first submitted to the American Psychologist long before someone
       decided to put out a special issue devoted to Skinner. However, while
       it was in the review process, Skinner died and plans for the special
       issue evolved. What happened was that the submission was then placed
       into the hands of the special issue editor who gave it further
       scrutiny. This all greatly delayed editorial action but turned out to
       give it more legitimacy as a fair representation of Skinner's basic
       views.

       Below I have simply listed each of 11 points (note conflicting
       position on reductionism--VIII. A. & VIII. B.). The points
       (assumptions/generalizations) we derived are "data based" in that each
       is supported by at least two quotations from Skinner's professional
       writing (presented under each in the paper as written). Actually, all
       were supported by considerably more than two quotations; space
       considerations limited presentation of supporting data. In the actual
       paper, the quotations are followed by clarifying text with the major
       aim being to address them from Skinner's point of view. In several
       cases, we brought up different sub-issues (such as position on theory,
       "private events," and fundamental datum) under particular points.

       Why submit these to CSG-L?

       1. Skinner's psychology, it seems, more than any alternative to PCT
            comes up. To me, this is appropriate. Despite the potential for
            feedback control in operant theory, Skinner's steadfast adherence
            to a sophisticated brand of environmental determinism makes his
            views a challenging foil for PCT. It may be useful to have
            Skinner's fundamental assumptions handy.

       2. From what I have observed, those commenting about Skinner's
            psychology here *are* informed. In fact, along with a few who
            follow the interbehavioral literature (such as Noel Smith, Bill
            Verplanck, and Linda (Parrott) Hayes), commentators/critics here
            are the most informed I know of. Thus, I want to emphasize that
            I am *not* offering this posting here as a corrective to any of
            what I identify as misrepresentations of Skinner's views.

       3. For different reasons, commentators on CSG-L address behaviorism.
            At times, there is a tendency to equate Skinner's brand of
            behaviorism with generic or historical behaviorism. One of the
            most common mistakes critics make is failing to distinguish
            between Skinner's behaviorism and others of the *numerous*
            versions of behaviorism we have had, beginning around the end of
            the 19th century. The unadorned listing of Skinner's fundamental
            position below may not communicate this, but his was an atypical
            form of behaviorism. As Greg Williams previously noted, Skinner
            was no simpleton. He was progressive, in contrast, e.g., to so-
            called cognitive-behavioral types who often would have us back at
            Cartesian interactionism. At different times, CSG-L commentators
            have cited Skinner as coming close to a feedback-control
            approach. In my view, he could not get beyond a mere glimmer of
            a truly radically different conception of behavior because his
            assumptive base remained unexamined, hence unaltered, with the
            result of making it impossible to move beyond the operant as he
            viewed it.

       4. Perhaps most importantly, I would like to encourage a thoughtful
            effort to lay out the fundamental postulates of PCT/HCPT. I know
            that, who else, W. T. Powers, has come close to doing this. More
            than anything, the recent interchanges between Greg Williams and
            especially Bill over interpersonal control, influence,
            manipulation, ... led me to conclude that it might be useful for
            their purpose, as well as in general, if the fundamental
            postulates were to be set forth and frequently reiterated. I
            also presume they would not be set in stone. [After I prepared
            all this, a posting from Bill today (920925) suggests he may be
            working on a presentation of the fundamental position. Grand
            idea.]

       So, what follows are some fundamentals of Skinner's approach to
       matters psychological. I realize that they may not be particularly
       interesting or useful without accompanying text, but I'll bet they
       still function as a disturbance. I hope the adjusting behavior
       (behavior-1) will not be pressing SELECT followed by pressing DELETE.

                               I. Purpose of Science:
             The primary purpose of science is prediction and control.

                                  II. Methodology:
         The methodology is functional analysis which relates environmental
              independent variables to behavioral dependent variables.

                                 III. Determinism:
                       Behavior is determined; it is lawful.

                          IV. Locus of Behavioral Control:
              The causes of behavior are localized in the environment.

                             V. Consequential Causality:
           Selection by consequences is the primary causal mode by which
                 environment determines outcomes in living systems.

                                  VI. Materialism:
               Dualism is false. The only world is a physical world.

                          VII. Behavior as Subject Matter:
              The subject matter of psychological science is behavior
                                 and behavior only.

                               VIII. A. Reductionism:
        The subject matter of psychology is reducible (at least to biology).

                             VIII. B. Nonreductionism:
         Behavior cannot be completely explained in terms of biology or any
                          other "lower-level" discipline.

                  IX. Organism as the Locus of Biological Change:
        It is the organism that changes through evolutional and environmental
                     histories, and the changes are biological.

             X. Classification of Behavior into Respondent and Operant:
            There are two major classes of behavior, or more completely,
                   functional relations: respondent and operant.

                     XI. Stimulus Control of Operant Behavior:
          Operant behavior can be brought under the control of antecedent
        stimuli, and description of operant behavior usually requires three
             elementary terms and their functional interrelationships.

                  XII. On The Generality of Behavioral Principles:
       The full complexities of human activity--including language, thinking,
       consciousness, and science--are behavior to which all the above apply.

···

TO: CSG-L

[From Richard Thurman (930902.1630)]

After a very hectic summer I am finally catching up on some of the CSG-L
reading. The following two posts caught my attention:

Rick Marken (930829.2030)

>Bill P. has just recently published a paper on the
>control of what would surely be called "cognitive" variables (but are
>just called "higher level perceptions" in PCT). I don't know the reference
>for the article.

Bill could you supply a reference for the above mentioned paper. If it
is not in print yet could I get a hold of a copy? I am _very_ interested
in the control of "cognitive" variables.

Rick Marken (930831.0830)

>As an answer I will copy the steps
>of "The Test" as they were described in the final (4th) article in Powers'
>Byte series entitled "The Nature of Robots" (Byte, June-Sept, 1979):

I recently finished reading the Byte articles. They are a real treasure
of information and as far as I am concerned the most clearly written
and understandable presentation of PCT I have seen yet. If others on this
list have not read these articles I cant recommend them enough. Definitely
a MUST READ!

I just want to thank Bill P. (and of course all the rest who contribute
to the PCT paradigm) for hanging in there these many years. It seems
that it would be easy to sit back and watch social scientists continue
to flounder but you all have actively put up with paper rejections,
inane criticisms, and (what seems to me to be the most painful aspect
of all) an endless stream of the same questions asked over and over
again. Not to mention answering the continual flow of 'yes buts' which
crop up in nearly every mention of PCT. The patient work you are doing
is greatly appreciated!

Rich

···

--------------------------------------------------
Richard Thurman
Air Force Armstrong Lab
BLDG. 558
Williams AFB AZ. 85240-6457

(602) 988-6561
Internet: Thurman%HRLOT1.Decnet@EIS.Brooks.AF.Mil
or
Thurman@192.207.189.65
---------------------------------------------------

[From Bill Powers (940914.0915 MDT)]
RE: The beak of the finch

I'm having some problems with Darwin's finches. They started when I read
that Boag had proposed an egg-swap experiment, to see whether beak and
body size would be influenced by foster parenting. This is roughly what
I had proposed a few posts ago, except that I had thought that beak size
might be affected by mechanical properties of diet, and wondered whether
feeding the babies on larger and harder seeds might increase their adult
beak size. Some body parts like muscles and fingernails do grow
according to use.

The egg-swap experiment was not done in the Galapagos (too much
disturbance of the natural scheme), but Smith later did one on Mandarte
Island, British Columbia.

p. 68:
     Boag ... found that it is nature, not nurture, that plays the
     larger role in deciding the size of the sparrows and the shape of
     their beaks.

This gave me pause, because it has the sound of substituting a black-
and-white dichotomy for what is really only a preponderance of effect --
the same old problem we have with psychological "facts." In fact this
says that nurture DOES have an effect on body size and beak shape, which
is the question I had raised. Since in some cases a difference in finch
beak size of only 0.5 millimeter (about 3%) makes the difference between
being able to crack a drought-resistant seed (4 out of 5 tries) and not
being able to crack it (0 or 1 out of 5 tries), it is possible that this
difference could be accounted for by nurture.

Moreover, the Smith study was not done with Darwin's finches, but with
sparrows in British Columbia. Here are some more facts:

Smith found that among song sparrows on Mandarte Island, the beaks
varied as much as 10 percent from the mean in only 4 in 10,000 birds.
Grant found that the beak of the cactus sparrow in the Galapagos varied
10 percent from the mean in 4 of 100 birds, 100 times the variance. Says
Weiner, "This is one of the most variable characters ever measured in a
bird." (p. 47)

P. 192:
     The width of the _fortis_ beak in the new generation ... is
     measurably narrower than the beaks of the generation before them -
     down from 8.86 millimeters at the time of the flood to 8.74
     millimeters now"

p. 193:
     In an adaptive landscape ... it can pay to carry a beak 3, 4, or 5
     millimeters away from the tried and true.

p. 68 again:
     Recent studies have shown that even the smallest details of bird
     life, from the exact size of the eggs to the number of eggs and the
     date they are laid, are heritable, too (at least to some degree).
     They are passed down from generation to generation in species after
     species of birds. This seems to be the rule rather than the
     exception in nature, just as Darwin imagined it to be, although not
     all variations in the living world are passed down as faithfully as
     the beak of the finch.

To say that something is heritable "to some degree" is far from saying
that it is inherited, period. It seems to me that different criteria (as
well as birds with immensely different variability) are being used to
judge heritability and variation, even though these are just two words
for the same thing (like frequency and period). A variation of 0.5
millimeter is seen as small when discussing selection effects that go
with cracking tough seeds, but would this also be considered a small
enough variation so that the same change in beak length could be taken
as evidence of faithfully passing down the beak of the finch? Either
this is sufficient variation to let natural selection come into play, or
it is small enough to show that this character is highly heritable and
that this lineage is NOT eliminated by natural selection.

I know what the intent is here: the beak size is passed down faithfully
from parent to offspring in a given lineage, but different birds in the
same generation pass down different beak sizes -- the "variability" is
within a generation but not across generations. This allows natural
selection to remove lineages with too small (or too large) a beak size,
leaving only those lines with the right beak size to propagate. That's
the theory. That's why the emphasis on "faithfully passing down"
characteristics from one generation to the next. But what happens if the
variations from one individual to another in the same generation are
large enough to support the principle of natural selection, but not
large enough to count as failure of a lineage to "survive?" You can't
use the same evidence to prove variation and lack of variation.

In one sense what was observed is perfectly understandable: when times
get tough, only the tough keep going. In some drought years, a minority
survived, and in fact no fledglings at all survived. The birds that
survived were obviously those with beaks big enough to crack the tough
seeds that remained, or with small enough bodies to live on the few
small seeds they could find and crack with their small beaks. Natural
selection obviously does select. There is no mystery here.

But of course natural selection doesn't really "select." The term
"natural" selection was picked by Darwin as an analogue of "artificial"
selection -- breeding of animals by systematic human selection, breeding
with a purpose. Inanimate nature does no selecting with a purpose, and
it's hard to make sense of "selection" without the idea of some
criterion of selection. Purposes reside only in organisms.

As Weiner points out, what does the selecting on the Galapagos Islands
is not primarily drought or rainfall; it is competition for food and
mates. The primary factor is conflict among control systems each trying
to get for themselves what they need. When seeds are plentiful, there is
little conflict and, as we see near the end of the book, hybrid species
begin to appear, slopping over the old boundaries of niches. The
specialization into large-beak, medium-beak, and small-beak birds occurs
only when the abundance of food decreases and birds begin to find
themselves in competition with other birds, with bees, and with other
animals. Competition, as we know, wastes resources, so the birds kill
each other off (indirectly) where they compete, leaving only the birds
controlling for variables that are independent of each other. Of course
the birds within a given niche are in the greatest competition of all,
but if the population is spread among niches this represents a minimum
in competition, especially when seeds fall into distinct size groups.

···

-----------------------------
I would truly like to get rid of this concept of natural selection,
because it implies purpose where there is none, and denies purpose where
it exists. The oddest thing about Weiner's book is the way it discusses
adaptations almost exclusively in purposive language, ascribing the
purposes to the finches. The finches acquire larger beaks to eat the
larger seeds, and so forth. He never says that the weather dries up in
order to eliminate the weaker finches, or that the islands are separated
in order to keep the finch populations separate. He never says that the
seeds with kernels left in them are sparsely mixed with empty hulls in
order to reduce the frequency with which finches will encounter them.
Yet to speak of nature exerting selecting effects on organisms would
require talking exactly that way.
---------------------------

Evolution is the result of interactions among purposive systems and
between purposive systems and the inanimate environment. These
interactions work out according to what the organisms want at many
levels, and according to the nature of the environment on which they
must act to get what they want. Populations rise and fall, forms change,
modes of behavior change. Sometimes populations fall because
reproductive success is too great. Organisms have the capacity to adapt,
to vary their own forms within limits of amount of change per
generation; this capacity shows up at every level of organization. One
result of it is evolution, a continual bias on changes that always works
in the direction of obtaining more complete control over aspects of the
environment that affect the organism. Since we know that organisms are
purposive at every level we have studied, it simply makes no sense to
see evolution as anything but a purposive process.
----------------------------
Well, there's lots more to think about on this subject. Enough for one
try.
----------------------------------------------------------------------
Best,

Bill P.

trial: 389, exp: 3, dist: 1, diff: 1, size: 3000, distrib: u
trial: 390, exp: 8, dist: 2, diff: 5, size: 3000, distrib: g
trial: ..., exp: 10, dist: 3, diff: 3, size: 3000, distrib: u
trial: 392, exp: 0, dist: 4, diff: 6, size: 3000, distrib: j

Subject: Re: The Control Systems Group
t (940916?) says:
n't understand the
surprised? Why am I not going to have a heart attack and
seems that every now and then someone will present truly technical
ormation regarding Controls Systems.
truly technical" information
acticing Control Engineers" bring
the presentation to the level you like -- why don't you control it?
ulk of the material presented here has to do with
cal/psychological issues.
n is, should this forum really be named a "Controls
not. It is a group of people who are interested in
Control Systems). What would you suggest
this forum?

to the implications of
ing ("he did not seem
back from him, because in his last to me
s (for suggesting that he should learn
thing about control theory before criticizing it). He said

     I do not plan to read the 1973 book you cite [BCP], partly,
     because based on what you have said in your letter, it will
le itself is

     invalid. People do not behave to control perceptions but to
values, ie, to live. In short, I believe that your
re _fundamentally_ mistaken. Given this, there is

vior, the control of
ve values". In any event, it should
theory it is not
deliberate ignorance.
Subject: ...and HPCT (from Mary)

philosophy", and have cited numerous other philosophers to -
what? Question or justify PCT as philosophically legitimate? I'm

timate. Do they have the support PCT has of experimental
ence, simulations, and so on? That's what matters in PCT -
similarities to what others have thought, but whether
s. By which is meant
fects in the presumed
happens, as opposed to
en. We've been around the
ics - logical, self-
ind of legitimacy except
actually entering the system
he content of thought - specific principles, for example,
or morality - that isn't really what PCT is about. It's about

vents, or constructing categories. It may well be that a
erstanding of how humans are organized leads to a
s a lot of mileage to be had out of teaching
yone is organized as a control system, as some of
tems have discovered) but it's
content (one's particular
and forget that PCT is
kes having values
ncept and has principles and values
be philosophically this is a Bad
how it can be avoided, or why it
ory, PCT, as you say, has to exist _in_ language, but
make it subordinate to language. I don't know why
to "distinguish PCT from an absolute frame of reference
ludes within itself all existence", or what would happen
e implied if you didn't (do you want to expand on that?). I
in order to handle them with language is only a guess, but Bill
cooked up a suggestive experiment a long time ago in which
understanding a complex pattern of lights in order to turn it off
would finally be understood as simply having to react to an event
- the greatly shortened reaction time indicating a shift to a
lower level of perception of the task.

Do you really think actions speak louder than words? Does the
incomprehensible behavior of a psychotic make sense on its own?
Or does it start to make sense when you (using words) begin to
put it in a theoretical framework (PCT or any other) - begin to
see its purpose, or the purposes it is trying (even if failing)
to achieve?

Incidentally, it is neat that you see psychosis as endless
unsuccessful reorganizing, because Bill has thought that too. But
why is it unsuccessful? Physiological explanations aside (for how
much warped physiology is a consequence, rather than a cause?),
the PCT handle on much psychological difficulty is conflict -
incompatible goals. There are various ways of coping with this
problem without reorganizing, and I think we all get away with
some of them without being too overtly neurotic, and sometimes
(often?) reorganize and resolve them on our own. But neuroses
serious enough to seek help for are coping methods that are not
working, and reorganization is being avoided (because it is too
painful, scary, or whatever). Therapy provides a safe place and
support to undertake it - it being the process of acknowledging
and owning both sides of the conflict, and going up levels to
where the incompatible goals are coming from, which seems to
permit reorganizing the conflict out of existence.

If psychosis is reorganization run wild, then it is different
from neurosis, which I think is a _failure_ to reorganize
(perhaps defending against the possibility of a psychotic
break?). Whether the underlying problem in a psychosis is
conflict also, I wouldn't know. The reorganizing is failing to
resolve the conflict, if conflict is the problem - possibly
because it exists at a higher or more intrinsic level than where
the reorganizing is taking place? Or because control systems are
damaged or defective in some way? Hung up in a positive feedback
loop?

There are very few people looking at mental illness from a PCT
point of view. Any further comments you have would be very
interesting to me (and others too, I'm sure).

Mary P.

[From Bill Powers (940924.1550 MDT)]

Martin Taylor (940923.1200)

RE: multidimensional evolution

I'm trying to set up a C program for doing the evolution thing with an
E. coli control system. I'm having some trouble, which maybe you can
help with.

In your model you define a "fitness space" in which there is a distance
D from a target. The target is defined, apparently, as the position
0,0,...0 in this space. I infer that you compute D as


D = sqrt(sum(g[n]^2))

where g[n] is the value of the n-th gene, with perhaps a scaling factor.

The implication is that to approach maximum fitness (D = 0), all the
gene values must approach zero. Is this correct?

If that is right, we can generalize by defining maximum fitness as a
condition in which the value of each gene approaches a value g*[n]. This
would define a position D* in fitness space as some position other than
at the origin. Then a fitness error could be defined as

e = sqrt(sum((g*[n] - g[n])^2))

Specifying all the g*[n] values is then equivalent to specifying the
target position in your fitness space, and we don't need the dummy
variable D. If all the g*[n] are zero, we have the original case.

You can see where I am going.
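
A minimal C sketch of the fitness-error computation just described
(the function and array names are illustrative only, not taken from
either model):

#include <math.h>

/* Fitness error: Euclidean distance in "fitness space" between the
 * current genome g[] and the target genome g_star[]. If every
 * g_star[n] is zero, this reduces to the original distance D from
 * the origin. */
double fitness_error(const double *g, const double *g_star, int n_genes)
{
    double sum = 0.0;
    for (int n = 0; n < n_genes; n++) {
        double d = g_star[n] - g[n];
        sum += d * d;
    }
    return sqrt(sum);
}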

···

-----------------------
Second problem: survival

I assume that we're talking here about an organism that reproduces by
mitosis. This, of course, implies a doubling of the population for each
generation if there is 100% survival. The population will level off when
all sources of mortality are such as to reduce the survival rate to 50%.

In your model, however, I don't see how you handle the production of
offspring, or how you assure that the steady-state average survival rate
is 50%. If each ancestor produces one offspring, the ancestor must
immediately die if the population is to be constant. It seems to me that
you make the survival rate an arbitrary function of the gene values,
unaffected by the population, and then simply make up the difference
between the number of survivors and an arbitrary total population by
adding copies of individual organisms selected at random. I don't think
that creates the same effect. You're losing a functional relationship.

I think the right way to do this is to make survival depend not only on
the gene values but on the total population, and let both ancestors and
offspring survive so the basic growth rate is 100% per generation. We
should find a total final survival rate of 50% if the model is correctly
constructed. A two-stage process, it seems to me, is required. First,
the reproduction rate must be a function of the intrinsic genetic error,
sqrt(sum((g*[n] - g[n])^2)). The vector g*[n] defines the point of maximum
reproduction rate. We could take this as a measure of the general health
of the organism. Then the survival rate from one generation to the next
must be a decreasing function of the total population. The population
will find its own level instead of being set arbitrarily, and it will
interact with survival rate as it should.
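
One way this two-stage scheme might be sketched in C (the particular
functional forms below -- exponential falloff of reproduction with
genetic error, survival falling off with total population -- are
assumptions for illustration, not anything specified above):

#include <math.h>

/* Stage 1: probability that an individual reproduces this generation,
 * maximal (1.0) when its intrinsic genetic error e is zero and
 * falling off as e grows. ERROR_SCALE is an assumed constant. */
#define ERROR_SCALE 1.0

double reproduction_prob(double e)
{
    return exp(-e / ERROR_SCALE);
}

/* Stage 2: probability of surviving to the next generation, a
 * decreasing function of total population. CAPACITY is an assumed
 * constant; with near-maximal reproduction the population should
 * level off near CAPACITY, where survival is about 50%. */
#define CAPACITY 1000.0

double survival_prob(int population)
{
    return 1.0 / (1.0 + population / CAPACITY);
}
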
---------------------
Last problem: mutation

Without any mutations, we would find the population coming to some final
state with each genome being represented as a percentage of the total
determined mainly by reproduction rate. The genome determines the basic
reproduction rate and the environment determines the survival rate. So
as a check on the model we can start with a random distribution of a few
genomes and see if the final population comes to about the expected
distribution of genomes.

Then we can introduce mutations that occur with an average frequency of
one per n individuals per generation. This should result in a shift in
the distribution, with slower reproducers becoming rarer in proportion
to the total number. The total population might change, too.

Finally, we can introduce the E. coli type of system in which the mean
interval between mutations is varied about its base value in a way
related to the rate of change of intrinsic genetic error (or perhaps e *
de/dt). The result should be a further shift of the population. We can
then start reducing the basic mutation rate, to see if there is a point
where with the control system the organisms evolve quickly to a final
state and without it they evolve more slowly or not at all.
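
A rough C sketch of the E. coli-style variation of mutation timing
(treating "related to the rate of change of intrinsic genetic error"
as a simple linear dependence on the per-generation change in e; the
constants are invented):

/* Mean number of generations until the next mutation for a lineage,
 * varied E. coli style: when the intrinsic genetic error e is falling
 * (de < 0) the interval is lengthened ("keep swimming"); when e is
 * rising it is shortened toward MIN_INTERVAL ("tumble soon").
 * BASE_INTERVAL, MIN_INTERVAL and GAIN are assumed constants. */
#define BASE_INTERVAL 20.0
#define MIN_INTERVAL   1.0
#define GAIN          10.0

double mutation_interval(double e, double e_prev)
{
    double de = e - e_prev;                      /* crude de/dt, one generation */
    double interval = BASE_INTERVAL - GAIN * de; /* falling error -> longer interval */

    if (interval < MIN_INTERVAL)
        interval = MIN_INTERVAL;
    return interval;
}

(Using e * de in place of de would give the e * de/dt variant
mentioned above.)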

What do you think?
----------------------------------------------------------------------
Best,

Bill P.

[From Bruce Abbott (950929.1245 EST)]

i.kurtzer (950928.2245) --

issacithinkyouvereallyshownusallthatourusualmeansofwritingactuallywasteconsi
derabletimeandeffortwhichcouldbebetterspentworkingonthe_content_ofourthinkin
gratherthannigglingdetailslikecapitalizationspellingpunctuationandparagraphi
ng.infactasyoucannodoubttellyouveencouragedmetogoevenfurtherbyeliminatingall
thosewastefulspacessothatmycommunicationsfromnowonwillbedenselypackedwithmyt
houghtsandopinionstothebenefitofall.thanksforshowingushowtoeliminatesuchtime
andspacewastingconventionsfromourwriting!

regardsbruce

From Greg Williams (921014)

Bill Powers (921013.0930)

I do think, though, that it might be possible for acquired criteria
to override the built-in ones. I don't think that possibility is a
problem for either of our viewpoints.

I agree in general, but I'm not sure what operation you mean by the
term "override."

"Override" means that the error relative to an acquired reference signal,
necessary for reorganization to start, is less than the errors relative to
inherited reference signals, necessary for reorganization to start in the
absence of the acquired reference signal.

The control hierarchy is learned specifically as a means of preventing
critical error from becoming large enough to cause significant
reorganization. That's automatic; reorganization simply continues
until the critical error IS prevented from becoming that large. When
the learned control processes work well enough, critical error does
not become large enough to cause their organization to be altered.
That's why they persist.

Sounds good to me.
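
Purely as an illustration of the principle in the paragraph quoted
above (nothing here comes from the posts; the parameter being tweaked
and the constants are invented), a reorganization rule of this kind
might be sketched as:

#include <stdlib.h>

/* Illustrative reorganization step: while critical (intrinsic) error
 * stays above a tolerance, keep making random changes to an acquired
 * parameter (here, an output gain). Once the learned control keeps
 * critical error small, the changes stop and the organization
 * persists. CRITICAL_TOLERANCE and STEP are assumed constants. */
#define CRITICAL_TOLERANCE 0.1
#define STEP               0.05

double reorganize(double gain, double critical_error)
{
    if (critical_error > CRITICAL_TOLERANCE) {
        /* random step in [-STEP, +STEP] */
        double r = (double)rand() / ((double)RAND_MAX + 1.0);
        gain += STEP * (2.0 * r - 1.0);
    }
    return gain;   /* unchanged when critical error is small */
}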

If memory, not reorganization, is involved in a particular instance
of "facilitation" (or, more generally, "purposive influence"), then
that instance is a kind of "rubber-banding," which might not have,
for you, what you call "deep theoretical significance," but
certainly has great practical significance AND scientific
significance, in my opinion.

How is it an example of "rubber-banding"? I don't understand.

If the control structure stays the same (no reorganization), then new
information simply disturbs a controlled variable, resulting in "offsetting"
actions to maintain control of that variable. If you are controlling for
phoning someone and I tell you that the phone number was recently changed, you
control by using your fingers to dial the NEW number -- different actions, same
controlled variable.

If this is the crux of our dispute, then our dispute seems to come
down to a quantitative question: loop gain.

I think the loop gain can range from very low to very high for a swimming
teacher (as one instance), just as the loop gain can range widely for a
subject controlling (in Rick's famous experiment) for keeping a dot near a
certain point on a computer screen when the dot is subject to a random
alteration in the direction of its movement each time the subject presses a
key. I think the situations are analogous.

Don't get me wrong; I'm not saying that people can't TRY to control
others by means like the one you suggest. I'm not even saying that
they don't convince themselves that they ARE controlling others by
such means. But whatever control does exist is mostly in the
imagination. Just consider the looseness and uncertainty in the
scenario you propose.

"Looseness" and "uncertainty" need to be evaluated by looking at whether this
kind of control works virtually all the time or only part of the time or
never. As I look at the "wild," it works quite efficiently virtually all the
time if the controller has an accurate model of something the other wants.

The controller's effects are small, statistical, unreliable, and exceedingly
slow. The loop gain must be close to zero.

Just as in Rick's experiment, the effects needn't be small, unreliable,
exceedingly slow, OR statistical (well, you might use statistics to
economically describe the TRAJECTORIES in both cases, but the controller's
intended outcome, namely, seeing the cursor or other person's actions he/she
wants to see, does NOT need a statistical description, only a criterion (set
by the controller) for meeting/not meeting the desire, which can be met by a
CLASS of POSSIBLE actions). Aside: now that I think about it, there is no need
for the class of possible actions to be finite; it is a CONCEPTUAL class,
i.e., "the infinite class of all possible trajectories of the dot which remain
within an inch of the dot," or "the infinite class of all possible ways of
coming to stay afloat without external support in deep water."

Think how easy it would be for the putative controllee to see the point of
what the controller is doing and simply decide not to cooperate. Always
assuming, of course, that there is no underlying threat of irresistible force
that itself would be the actual means of control.

SEZ PCT: THE CONSTRAINT ON SUCCESSFUL CONTROL OF YOUR PERCEPTIONS WHICH DEPEND
ON ANOTHER'S ACTIONS IS THAT YOU DISTURB (OR CAREFULLY DON'T DISTURB) IN SUCH
A WAY THAT THE OTHER'S ONGOING CONTROL (IF THERE IS NO REORGANIZATION) OR NEW
CONTROL (IF THERE IS REORGANIZATION) RESULTS IN WHAT _BOTH_ PARTIES WANT. To
meet this constraint requires interacting with the other enough (maybe just
statistically, at the population level) to make a model of part of the other's
control structure. If the model is poor, the other might "simply decide not to
cooperate" -- but if the model is good, and the putative controllee sees the
point FOR THE CONTROLLEE of what the controller is doing, then the controllee
WILL cooperate, because cooperation will get the controllee what he/she wants.
(Of course, if deception is involved, or if the controllee is unsophisticated,
the controllee might only BELIEVE that he/she will get what he/she wants and
not get what he/she doesn't want. Folks who understand the very real
possibility of this sort of thing "patronize" others by warning them that what
they (currently) want might not turn out to be best for them.)

IN PRACTICE, I see that this works much of the time: A indeed does
perform actions B wants to see, and often within a short time.

If that is what you see, the only explanation I can think of is that
you have misconstrued what you see. A much simpler explanation is that
A has perceived what all of B's elaborate preparations are aimed at,
and has decided to help B out by doing what B wants. I could see that
as leading quickly and specifically to production of the behavior that
B wants to see.

You've almost got it. Actually, B wants A to "help out" A, which happens
(perhaps unbeknownst to A) to "help out" B, and might (but need not and
usually doesn't!!!) in fact (via 20-20 hindsight) result in what A wouldn't
consider beneficial for him/her.

Of course B might take this to indicate success of his or her method of
control, particularly if it's control of A that B wants.

Yes, and it would be "success" in the sense that B could have made a POOR
model of what A wants, and thus would have failed. But B succeeded, because B
met the constraints for successful control of his/her perceptions which depend
on some of A's actions.

I don't see how the method that you have outlined could be either quick or
specific. Perhaps you have left something out of the description.

I hope the above helps. Perhaps the analogy with Rick's experiment will be
most instructive for you.

Exchange relations are not control of another person. They
specifically avoid the arbitrary influencing of one person's actions to
satisfy the goals of another. One person does not study another simply
to get what is wanted out of the other; that is a control relation.

"Arbitrary" is in the eye of the beholder. I claim that attempted control of
ANYTHING is subject to certain constraints (you can't lift a two-ton rock by
hand, and you can't make a person want what they don't want). I've been saying
all along that I want to look at the CONSTRAINTS ON CONTROL of one's
perceptions which depend on others' actions. IN GENERAL, CONTROL CANNOT BE
ARBITRARY. If what you mean by "control of others" is ARBITRARILY MAKING THEM
ACT AS YOU WISH, then you're NOT talking about what I am talking about (except
in number 4 of my summary -- using force/threat of force) -- and even that
kind of control cannot be ABSOLUTELY arbitrary (sticking a gun to my head
won't result in my picking up a two-ton rock for you, sorry, better shoot).

This is a very different social relationship from one in which people
memorize each others' characteristics, plot and intrigue, manipulate
situations and environments, all so they can get what they want even
if the other person doesn't want to cooperate or doesn't know there is
manipulation going on.

(Common) exchange interactions as well as the (not so common) nasty stuff you
cite here BOTH involve attempts to control one's perceptions which depend on
others' actions, and so should fit somewhere in my summary schema (the four
kinds of "social" control). And PCT explains why exchange is more common than
the nasty stuff.

It's difficult to get what you want or need from other people, because
everyone is defensive about "being controlled."

Some definitely with much higher loop gain than others. But the other's loop
gain regarding "not being controlled" isn't a problem for a controller with a
good model of what the other wants. In types 1-3 of my summary, the controllee
is getting what he/she wants (at the time) for control to be successful.

They're defensive about that because that's what THEY are trying to do; they
want to be the controller, not the controlled.

Some definitely with much higher loop gain than others. The success of types
1-3 control requires that the controllee feel "in control," not "being
controlled."

Today my son Evan was having a problem with his new birthday
present, a radio-controlled truck. He asked me to help him figure
out what was wrong with the transmitter. Some experiments guided by
me showed a weak battery. Next time he'll be able to cure the
malady himself. No, he didn't hold a gun to MY head, either. We
BOTH got to where we wanted to be.

See how easy it is when nobody is trying to figure out how to control
someone else? He asks, you give. The hardest part for you is waiting
to give until he asks.

You seem to equate ALL of control of one's perceptions which depend on
some of another's actions with (mainly) type 4 in my summary and some sub-
types (involving deception and unsophisticated controllees) of type 2. I
believe those types/sub-types are rare relative to the other types/sub-types
in everyday life. Much control is (approximately) symmetric: win-win. I have
yet to be convinced that ANY social interactions involving intention on the
part of at least one of the parties involved do NOT involve one of the four
types of control of one's perceptions which depend on others' actions. Such
interactions include "exchange" as well as "teaching" and "con games."

I believe that an understanding of the constraints we face in attempting
control of types 1 through 3 might help to avoid escalation to type 4.

I think we also judge people by their intentions. You know, "Why are
you being so nice to me today?"

We certainly do. (And not always cynically. "His heart is in the right
place.") But believing it important to CHANGE their intentions is a sure path
to violence. Attempting to see them ACT the way you want need not be -- if you
take their intentions into consideration. Sez PCT.

Best (now why did he say that?),

Greg