A simple form of reorganization

[Hans Blom, 970716f]

(Bill Powers (970716.0757 MDT))

If you want to propose that Aplysia actually has the ability to
perceive "injury" and a reference signal setting the desired degree
of injury, I will not reject that out of hand.

We are forever forbidden, I guess, to know what Aplysia (or anyone or
anything else) makes of its perceptions. "How does it feel to be a
bat?" is a favorite AI theme for endless and always inconclusive
discussions. How does it feel to be an Aplysia? Do fish feel pain?
Anglers say (and believe) no, naturalists yes. Each has a political
(i.e. control) agenda...

Injury is a concept used by humans. We do not -- and cannot -- know
whether it is a concept for an Aplysia. If we decide that Aplysia
experiences injury, it will be based on the _human_ concept of injury
and its experimental operationalization. "This is how I define injury
in my experiment", says the scientist, "so that I can measure it. And
this is how I measure gill withdrawal. And this is how I calculate a
correlation between both measurements". If you don't like his
definition of what constitutes injury (or gill withdrawal, or
correlation), you may propose alternatives. But foremost is that _we
humans_ can measure it.

All I want is for you to recognize just what it is you are proposing
when you say that Aplysia reacts in order to protect its gills from
injury.

We seem to "recognize" quite different pictures. Ambiguity?
Incomplete information? Bias?

Of course when you put your proposal as I suggest, you may decide
that you don't really believe (as I don't) that Aplysia is
sufficiently complex to perceive and control something as abstract
as the general idea of "injury."

Asking Aplysia to understand the human concept of injury is really
too much :-).

It might perceive pain, but I doubt strongly that it could perceive
"damage." And I don't believe that you think it could, either.

Pain and damage (or its threat) seem to be closely connected in the
human vocabulary, so I'm not quite sure which fine point you wish to
make here.

Anyway, what we -- once again -- find is different concepts of what
constitutes a goal. In your view it must be physically present, e.g.
in the form of a wire that carries a voltage. In systems theory and
cybernetics (see for instance The Macroscope) a goal _emerges_, e.g.
from a stationary or dynamic equilibrium between opposing forces,
flows or any other type of "tendency". Such a goal need not have a
direct physical and observable reality, although it may. That's why I
called you stricter.

Greetings,

Hans

[Hans Blom, 970716g]

(Bill Powers (970716.0814 MDT))

I consider habituation to be an adjustment of the perceptual
apparatus that "filters out" and discards predictable perceptions
that are unimportant for how we generate actions.

This is the same mode of explanation that I have been arguing
against in the discussion of Aplysia. You are describing what you
see as plausible consequences of habituation, and giving them causal
force.

No no no. "Causal force" would be the PCT view, not mine. It would
result from a physically present goal (reference level) and the
machinery that realizes it. I'm content with correlational evidence:
imprecisely said, if I see that some quantity remains far stabler
than expected, I posit the presence of a control process. Very much
like The Test...
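
To make that criterion concrete, the following Python toy (entirely my
own illustration: the random-walk disturbance, the gains and the 0.1
variance-ratio threshold are assumptions, not part of The Test as defined
in PCT) disturbs a variable and posits a control process if the variable
stays far stabler than the disturbance alone would allow:

import random

def disturbance(steps):
    """A slowly drifting disturbance (random walk)."""
    d, series = 0.0, []
    for _ in range(steps):
        d += random.gauss(0.0, 0.1)
        series.append(d)
    return series

def observe(disturb, controlled, gain=0.5):
    """Trace of the variable, with or without a controller opposing the disturbance."""
    output, trace = 0.0, []
    for d in disturb:
        x = d + (output if controlled else 0.0)   # variable = disturbance + controller output
        output += gain * (0.0 - x)                # integrate the error against a reference of 0
        trace.append(x)
    return trace

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

dist = disturbance(2000)
ratio = variance(observe(dist, controlled=True)) / variance(dist)
print("variance ratio:", round(ratio, 3), "-> posit control?", ratio < 0.1)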

The only recourse you have then, if you want a scientific model, is
to appeal to some indefinite process like evolution, which can
explain anything we want explained.

It's great that science forever asks the "why" question, of course.
It's a fine way to explore. But sometimes it is quite enough -- for
control purposes! -- to know that something _is_ the case without
knowing _why_ it is the case. Control may even be said to be the means
by which things can function well despite large uncertainties.

Well, that's my personal bias as an engineer. "You produce things", a
theoretical physicist may say to me, "without knowing why they work.
That would be unacceptable to me." Yet he may very well buy those
things...

Greetings,

Hans

[From Bill Powers (970717.0445 MDT)]

Hans Blom, 970716f--

We are forever forbidden, I guess, to know what Aplysia (or anyone or
anything else) makes of its perceptions. ...
Injury is a concept used by humans. We do not -- and cannot -- know
whether it is a concept for an Aplysia.

It isn't the "concept" of injury we need to be concerned with -- it's
whether there is a perceptual signal inside Aplysia that corresponds to
what we perceive as injury, as opposed to pain. Injury is perception of
tissue damage, which is not only a general idea but involves a body image
and the capacity to symbolize it. It seems doubtful to me in the extreme
that 20,000 neurons would be sufficient to support such a capacity.

While we never reach certainty in science, it is possible to make
reasonable inferences about what another organism is perceiving by using
the Test. The Test can't reveal things that are perceived but not
controlled, but it can show us the _kinds_ of things that are perceived,
and narrow them down to the point where we could look for neural correlates.

If we decide that Aplysia
experiences injury, it will be based on the _human_ concept of injury
and its experimental operationalization. "This is how I define injury
in my experiment", says the scientist, "so that I can measure it. And
this is how I measure gill withdrawal. And this is how I calculate a
correlation between both measurements". If you don't like his
definition of what constitutes injury (or gill withdrawal, or
correlation), you may propose alternatives. But foremost is that _we
humans_ can measure it.

Yes, but that kind of anthropomorphism is at least minimized by the
procedures of the Test. The scientist who proceeds as you describe is
making an arbitrary definition of what is being perceived, but is using no
test to see if the organism is really perceiving it. And typically, the
definitions of the stimulus are pretty loose. The scientist would not be
likely to inflict actual tissue damage in order to measure Aplysia's
response; more likely, he would administer some degree of pressure to the
siphon, and verbally classify it as "threat of injury" or something like
that. The "injury" aspect would exist primarily in the scientist's
imagination, not in the data.

All I want is for you to recognize just what it is you are proposing
when you say that Aplysia reacts in order to protect its gills from
injury.

We seem to "recognize" quite different pictures. Ambiguity?
Incomplete information? Bias?

All I'm saying is that the language you use to describe what's going on
inflates the information content of the data by a large factor.

Of course when you put your proposal as I suggest, you may decide
that you don't really believe (as I don't) that Aplysia is
sufficiently complex to perceive and control something as abstract
as the general idea of "injury."

Asking Aplysia to understand the human concept of injury is really
too much :-).

Good, I'm glad you agree.

It might perceive pain, but I doubt strongly that it could perceive
"damage." And I don't believe that you think it could, either.

Pain and damage (or its threat) seem to be closely connected in the
human vocabulary, so I'm not quite sure which fine point you wish to
make here.

The point is that Aplysia probably can't perceive such things, so it is
unlikely to exhibit control behavior concerned with them.

Anyway, what we -- once again -- find is different concepts of what
constitutes a goal. In your view it must be physically present, e.g.
in the form of a wire that carries a voltage.

Yes, exactly. PCT is a mechanistic theory. Whether you agree or not, I am
glad to see you acknowledging this aspect of PCT for the first time that
I am aware of.

In systems theory and
cybernetics (see for instance The Macroscope) a goal _emerges_, e.g.
from a stationary or dynamic equilibrium between opposing forces,
flows or any other type of "tendency". Such a goal need not have a
direct physical and observable reality, although it may. That's why I
called you stricter.

Again, I'm glad to receive the acknowledgement. My underlying aim is to
explain behavior in terms that would still apply if I were not present to
observe and characterize it. In my opinion, systems theory and cybernetics
talk much more about the observer than the observed.

Best,

Bill P.

[From Bill Powers (970717.0517 MDT)]

Hans Blom, 970716g --

You are describing what you
see as plausible consequences of habituation, and giving them causal
force.

No no no. "Causal force" would be the PCT view, not mine. It would
result from a physically present goal (reference level) and the
machinery that realizes it. I'm content with correlational evidence:
imprecisely said, if I see that some quantity remains far stabler
than expected, I posit the presence of a control process. Very much
like The Test...

But if you say that Aplysia covers its gill "to" protect it from injury,
you are using language that most commonly implies that the result explains
the action that produces it. Without control theory, however, it is
impossible to give a complete account of how a consequence can guide the
antecedent actions that lead to it. And without the discipline of control
theory, you can mistake mere outcomes of actions for controlled
consequences of those actions, as is commonly done in biology, psychology,
evolutionary theory, and many other fields. What could be a systematic
testing of hypotheses becomes merely a series of unverified guesses.

The only recourse you have then, if you want a scientific model, is
to appeal to some indefinite process like evolution, which can
explain anything we want explained.

It's great that science forever asks the "why" question, of course.
It's a fine way to explore. But sometimes it is quite enough -- for
control purposes! -- to know that something _is_ the case without
knowing _why_ it is the case. Control may even be said to be the means
by which things can function well despite large uncertainties.

If all you want is to control some action of Aplysia, then I agree that
it's enough to know what it will do when you touch it. Most people don't
ask how the world works; it's sufficient for their purposes to be able to
affect it in the ways they want. If you flip the switch and the light turns
on, what more do you need to know about electricity?

Science, however, at least as I think of it, seeks something more than mere
control; it seeks understanding. Understanding how a system works does
improve one's ability to control it, but understanding itself is also a
goal shared by at least some human beings, whether the understanding is
used for control or not. If that were not true, there could hardly be a
science of astronomy.

Well, that's my personal bias as an engineer. "You produce things", a
theoretical physicist may say to me, "without knowing why they work.
That would be unacceptable to me." Yet he may very well buy those
things...

Fair enough. Nobody says that an engineer is required to be curious about
how things work. But such an engineer obviously has to take on faith the
findings of those who DO study how things work. I'm sure that the
theoretical physicist not only buys the engineer's products, but is
grateful for the time and care put into their construction. And the
engineer, calculating stresses and strains, must at least sometimes give a
thought to those who worked out the theory that led to the equations.

Best,

Bill P.

[Martin Taylor 970717 12:00]

[Hans Blom, 970716d]

Paradoxically, Bruce may suddenly "hear" his clock (or at least
something unusual) if it _stops_ ticking...

More interestingly, one hears the "unheard" noise for a few seconds before
it stops. The implication is that the perceptual correlates of the physical
noise are there, quite normally, in the perceptual hierarchy, but they
are not in consciousness until the "surprise" happens (the stop).

The place of _conscious_ perception is one of the speculative aspects
of PCT. It isn't part of the "standard" theory.

Martin

Mervyn van Kuyen (970803 23:45 CET)

[From Bill Powers (970715.0857 MDT)]

[...]

If you will recall, the model I propose says that there is an intrinsic
control system, with reference signals for each critical variable. These
reference signals (in whatever form they take) are the embodied purposes of
the system itself. On the other hand, "survival" is not (I presume)
represented as an embodied reference signal, so it is not a goal of the
organism.

What bothers me in the system diagram you present in your introduction to
PCT (at your homepage), is that the element that produces the reference
receives no feedback. How does this element know what kind of context the
organism is in?

In my paper "The Neural Servo Model: Mechanism, Strategy and Tactics for
survival" I present a structure that learns to embody reference patterns
in a fully recurrent network. Survival is, in my opinion, definitely an
intrinsic tendency of this network. Isn't survival our ultimate agenda?

Have a nice day,

Mervyn van Kuyen

mervyn@xs4all.nl
www.xs4all.nl/~mervyn


The Neural Servo Model
Mechanism, Strategy and Tactics for Survival

Abstract

In this paper it is suggested that simple feedback control (a) is an
effective means of increasing chances of survival for any organism and
(b) carries a hidden agenda for more complex systems and organisms. An
additional mechanism that exposes this hidden agenda has been explored
and tested in simulations, providing a neural network with a generic
strategy for survival in the real world. This additional mechanism is
suggested to model our system of (mental) arousal.

Keywords: arousal, EEG, neural network, servo control, survival

Introduction: Learning computers?

If computers can learn, what do we need programmers for? Well, computers
can't learn, but what we _can_ do is design programs that can learn.
Designing such a program can be very similar to programming a computer
game for a human player. This is because playing a game is all about
learning. Most games offer a world (that is to some degree similar to the
world we usually live in) and an agent, controlled by the player. The
agent can have a human appearance, but it can also be a helicopter or a
space ship.

Every game allows the agent a degree of control -- for example, control
over the location of the agent. However, control is useless without
knowledge. Where should the agent go next? Which locations are dangerous?
To answer these questions the player has to acquire knowledge. The act of
exerting control is simple (moving the joystick); it is exerting useful
control that has to be learned.

Both people and computer programs can acquire useful control skills by
means of feedback. For most games, this feedback takes a very simple
form. An indication of how well you are playing is often simply the
length of your game. This would be all the feedback needed to train a
neural network as well.

A nice example of a neural network is a crossing with traffic lights and
induction sensors. This is the kind of network that only gives a green
light when you move your car close enough to the traffic light. The
knowledge that this network represents is very limited. The most
important rule is that two crossing roads should not get a green light at
the same time. One solution would be that one traffic light dominates.
This would be practical if this light regulates a very sparse stream of
traffic. In other cases it is more practical to alternate between the two
directions, just like a network without sensors would do.

The point of this exercise is to demonstrate that the structure of a
network can represent knowledge and that the structure can exhibit useful
control. To make such a network learn, one can introduce evolution.
Evolution requires two things: variation and selection. Variation can be
obtained by introducing changes to the structure (knowledge) of the
network. Selection can be obtained by introducing a learning rule, for
example 'keep those changes that reduce the number of sensed cars'.

This learning rule is quite effective, since it would acquire a fair
amount of knowledge, especially in a community without traffic regulation
except 'green is go'! First the system will find our basic rule: don't
give two green lights. Why? The resulting traffic jams and collisions
provide an increase in the number of sensed cars, so that the system will
never preserve connections that do give two green lights at the same
time. It _will_ preserve connections that give green lights in such a
manner that the crossing will become less crowded, implying less
resistance for the flow of traffic. So, we have a learning system that
would be useful in real life!

It is even more practical to simulate such a network, the traffic and the
manipulation (evolution) of the network inside a computer. The traffic
would represent the world (the problem) and the network the agent. This
allows us to train and test the network without causing serious
accidents.

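A minimal version of that variation-and-selection loop might look like
the following Python toy (entirely my own illustration: the 'network' is
just a vector of connection weights, and the sensed-car count is a
stand-in fitness measure rather than the traffic simulation described
here):

import random

def sensed_cars(network):
    """Stand-in fitness measure: pretend traffic flows best when every
    'connection' weight equals 1.0, and anything else piles cars up."""
    return sum(abs(1.0 - w) for w in network)

network = [random.random() for _ in range(8)]       # eight toy 'connections'
for generation in range(1000):
    candidate = list(network)
    candidate[random.randrange(len(candidate))] += random.gauss(0.0, 0.1)  # variation
    if sensed_cars(candidate) < sensed_cars(network):                      # selection
        network = candidate            # keep only changes that reduce sensed traffic

print(round(sensed_cars(network), 3))  # shrinks toward zero as improvements accumulate
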
Now that you are more familiar with the mode of thought associated with
artificial intelligence, I shall continue with the introduction
of a control mechanism that is much simpler than this traffic control
system: the servo.

1 Mechanism for survival: The servo

The first reason why a servo can be suggested to be a fundamental
mechanism for survival is that it is active until it is satisfied. An
example of a servo is a thermostat that controls a central heating
system. You provide a preset, a goal temperature. All the thermostat has
to do is measure the actual temperature and activate the heater if the
actual temperature is lower than the goal temperature:

  heat supply = reality - goal

(negative 'heat supply' could activate a cooling system).
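
In code, such a thermostat loop might look roughly like the following
Python toy (the room model, the outdoor temperature and the constants are
invented for illustration; the sign is written so that a room colder than
the goal switches the heater on, as described above):

def heat_supply(goal, reality):
    # activate the heater if the actual temperature is lower than the goal
    return 1.0 if reality < goal else 0.0

room, goal = 15.0, 20.0                   # actual temperature (reality) and goal
for _ in range(200):
    room += 0.5 * heat_supply(goal, room) - 0.05 * (room - 10.0)  # heating minus loss outdoors
print(round(room, 1))                     # settles at the 20-degree goal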

The question is, who controls the heater? Well, the only thing that
changes is reality itself (as a result of the heat supply). For this
reason, another term for servo control is feedback control. Reality can
control itself by means of structures that exploit feedback.

Any organism advanced enough to establish (genetically or through
experience) preferences or _goal states_ (like having a full stomach) can
benefit from these control structures that automatically initiate actions
(like swimming) until all goal states are satisfied. This is the first
level of explanation that suggests that the servo could well be a
fundamental mechanism for survival.

2 Strategy for survival: The hidden agenda of the servo

The second level of explanation is hidden -- hidden in the sense that to
provide a step-by-step demonstration would take too much time. This is
what the study of complexity is all about: finding out what certain rules
do if applied to large-scale games and vast numbers of moves. In other
words, doing computer simulations and observing the results...

The rule that has been tested during this study is the single rule that
is incorporated in the structure of any servo: _be active until
satisfied_. Another feature of a true servo controller is that its
actions propel reality towards a satisfactory state (no difference
between reality and goal).

When we use a neural network to simulate a servo, that gives us an
important advantage: the goal state of such a 'neural' servo is not
fixed, as it is in an ordinary servo. A neural servo can become satisfied
not only by propelling reality toward its goal state, but also by
propelling its goal state toward reality. This probably sounds quite
abstract, but it describes the most interesting device I have ever seen!
That is because when we train a neural net to become a servo, we are not
limited to just one 'control channel', but we can build a network of
hundreds of servos connected by internal circuits! The only thing we have
to do is enable evolution to search for connections that satisfy the
system.

Initially the system has no goal state 'in mind' for reality. This means
that if the system were able to blow up its sensed environment (if there
were a detonator switch it could connect to), it would do so with great
satisfaction. However, this is not the hidden agenda of the servo I want
to discuss here.

Since a neural servo is probably not able to destroy its environment, it
will make a trade-off between learning effort and control effort. For
example, if a neural servo has to control a central heating system, it
would acquire connections that generate a goal temperature equal to the
actual temperature. The effectiveness of this approach relies on the fact
that each time the actual temperature changes, it will be much easier to
_control_ the room temperature (after the structure has come to represent
a goal temperature) than to _change this knowledge_ embodied by the
network.

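That trade-off can be sketched in a few lines of Python (an illustrative
toy of my own, not the simulated network: 'control_authority' stands for
how much of the error the system can correct by acting, and the 0.3
relearning rate is arbitrary):

def step(goal, reality, control_authority, relearn_rate=0.3):
    """Reduce the error either by acting on reality or by shifting the goal."""
    error = reality - goal
    if control_authority > relearn_rate:
        reality -= control_authority * error   # enough control: push reality toward the goal
    else:
        goal += relearn_rate * error           # too little control: the goal drifts toward reality
    return goal, reality

goal, reality = 20.0, 15.0
for authority in (0.0, 0.0, 0.0, 0.8, 0.8, 0.8):   # a 'child' that gradually gains control
    goal, reality = step(goal, reality, authority)
    print(round(goal, 2), round(reality, 2))

The printout shows the goal drifting toward reality while the system is
powerless, and reality being pulled back toward the learned goal once
control becomes available.
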
In effect, a neural servo tends to keep its environment the way it got
to know it. This means that if the neural servo is a legitimate model for
human intelligence, a child inevitably (given its lack of control over
its environment) has to learn all about its environment, while later on
it will use the control it gains (by learning to communicate and by
developing sensorimotor skills) to preserve or recreate the kind of
environment it has grown up in. If its initial environment has been a
socially and physically healthy one, this tendency provides an effective
strategy for survival (for any social organism).

So far, we have explored the hidden agenda of the servo mechanism merely
by means of a thought experiment. The neural servo seems to be able to
provide a master plan for life: _get to know your environment until you
have gained enough control to preserve (or recreate) it the way you found
it._

3 Tactics for survival: Arousal unlocks the hidden agenda

Compared to the results of the thought experiment, the results emerging
from actual simulations may appear very limited. Their strength, however,
lies mainly in their modeling power for the effects of arousal, not in
the absolute intelligence or control gained by the simulated neural
servo. This additional mechanism, arousal, is suggested to be the key to
the servo's hidden agenda.

The first neural servo that has been simulated could exert no control at
all, while it was exposed to a repetitive series of patterns through an
array of six input channels. The simulated network consisted of six servo
controllers ('thermostats') which could be connected internally by a
maximum of 1200 connections between 36 neural nodes. These neural nodes
generated an outgoing pulse whenever their net input exceeded a certain
threshold.
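
That pulse rule can be read, for example, as a simple threshold unit (the
weights and the 0.5 threshold below are my own illustration, not values
from the simulation):

def node_fires(inputs, weights, threshold=0.5):
    """Emit a pulse (1) when the net input exceeds the threshold, else stay silent (0)."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net > threshold else 0

print(node_fires([1, 0, 1], [0.4, 0.9, 0.3]))   # net input 0.7 -> pulse
print(node_fires([0, 1, 0], [0.4, 0.2, 0.3]))   # net input 0.2 -> silent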

This network, growing by means of evolution (selecting for connections
that satisfy the system), turned out to be unable to learn effectively.
It could progress only until it failed. After any 'bad luck', the system
could not return to its old level of performance by destroying the last
added connections. In order to understand this effect, we'll have to
return to the basic concept of feedback control.

A servo is satisfied only if reality and goal state are equal:

  input = reality - goal

So the task of this neural servo was to generate goal states that
absorbed reality before it got a chance to pour into the network. If it
had a lucky day, the prototype could find a few appropriate goal states
before it made a bad connection. Fortunately, these lucky runs provided
an interesting insight: every latest improvement operated upon
reverberating patterns that had already been transformed by the structure
that was already in place!

Although it is impossible to imagine the effect of hundreds of feedback
loops, delays and logical operations on a reverberating pattern, it is
possible to see why the prototype could not maintain its effectiveness
after a failure. All the operations acting at once simply didn't have the
same effect as when improvements kicked in, one by one, being collected
by evolution!

Solving this problem was easy enough. Connections already had a property
called 'physical strength', which was increased whenever the system
became more satisfied. This way older connections had high strength, new
ones low strength (this allowed evolution to destroy weak connections
whenever the system became less satisfied, so that the system returned to
its original structural state). So, any mechanism that could re-enable
all connections in order of their physical strength would cause the
operations to transform the patterns in the right sequence!

An existing system that could do this job very well is our arousal
system. _One of the things our arousal system does is create fluctuating
electric potentials across wide areas of the brain_. These oscillations
are often assumed to be a mechanism for synchronizing neuronal firing. _I
would suggest that these potentials act as offsets that effectively
disable and re-enable connections in order of their physical strength,
effectively downsizing a network to a more primitive state before
'releasing' more and more recent parts of the network again_.

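One way to picture that suggestion is a global offset sweeping over a
network whose connections each carry a 'physical strength' (the data
structure and numbers below are my own illustration, not the simulation
code):

connections = [
    {"name": "c1", "strength": 0.9},   # old, strong connection
    {"name": "c2", "strength": 0.6},
    {"name": "c3", "strength": 0.2},   # recently added, weak connection
]

def enabled(arousal_offset):
    """Only connections whose strength exceeds the current offset stay active."""
    return [c["name"] for c in connections if c["strength"] > arousal_offset]

# Sweeping the offset from high to low re-enables connections in order of
# strength, so the network grows back from its most primitive state to its
# full, most recent state.
for offset in (0.8, 0.5, 0.1):
    print(offset, enabled(offset))
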
For the simulated neural servo this additional system did the trick: it
learned to absorb repetitive series of four patterns for up to 85%. The
hidden agenda of the servo turned out to be within reach as well:
confronted with two patterns alternating at random intervals, the neural
servo preferred 'flipping' back to the first pattern and learning that
single pattern over learning both patterns while exerting no physical
control. This is about the most basic variation of 'get to know your
environment until you have gained enough control to preserve (or
recreate) it the way you found it'. _So, the 'hidden agenda' is a real
phenomenon, and arousal is all we need to unlock it._

3.1 Tactics for survival: Arousal in action

The electric potentials in the brain (measured with an EEG) are only one
observable aspect of our system of arousal. This system connects many
things:

  Level of arousal:   Low     High
  pulse rate:         slow    fast
  respiration:        slow    fast
  brainwaves:         slow    fast
  heat release:       slow    fast
  digestion:          fast    slow
  growth:             fast    slow

Let's put the model to the test. When we are excited, our arousal is
high. We breathe fast to acquire more oxygen, our heart beats fast in
order to get the oxygen where it is needed: to the muscles. Heat
generated by the muscles is released quickly to avoid overheating.
Digestion and growth are put on hold. What happens in the brain? The
electric oscillations in our brain are small and fast. So, according to
the neural servo model this results in having a nearly complete network
ready and being able to perform many (small) operation sequences per
second. A limitation of this mode of functioning is that large operation
sequences are not done. This means that context switches are very hard to
follow: it is difficult to operate a telephone when you have already
started to panic.

At the opposite side of the spectrum we find 'sleep'. When we sleep, our
arousal is low. We breathe slowly, we hardly consume energy. Heat release
is minimal. Digestion recombines complex molecules in order to keep our
body in good shape. The electric oscillations in our brain are slow and
big. The deeper our sleep, the more primitive is the part of our brain
that still functions: we go back to our childhood. Any signal that we
pick up reverberates through this primitive brain. The lack of reality
causes more and more feedback to occur, until we fall back into an even
deeper sleep. In the morning, when there is a sufficient supply of
reality pouring into our senses, the brain can finally launch a
successful attack, creating a fantasy world that keeps matching reality
until we are fully 'awake': dreaming correctly or correcting reality.

A lack of sleep has the same effects as panicking. We are unable to
refresh our fantasy world, which requires a very long sequence of
operations to build (slow brainwaves). We then become aware of the fact
that we create reality from within: we 'start' hallucinating.

4 Future work

To conclude this paper I will outline, in a very condensed form, some
more issues that could be addressed by this model:

- One question that has remained unanswered so far is whether the neural
servo is a biological reality. Many observations seem to be in favour of
this model. It turns out, for example, that people tend to go back to
their early childhood during deep sleep. This supports the suggestion
that low arousal (sleep) would temporarily downsize our neural networks
to more primitive states.

- Another question could be the role of arousal in communication. For
example, our breathing rate is controlled by our arousal, so it could
well be a means for communicating our moods. Note that a rising tone of
speech relates to increasingly powerful breathing, rising arousal. This
may seem trivial, but why should evolution put any effort into giving
people some universal protocol for communication when it can get
emotional messages across for free?

- Consciousness, finally, could be suggested to 'stand or fall' with the
way arousal allows more or less complete networks to transform the
patterns that travel within them. Remember that new connections do not
add 'new ingredients' but that they are more like food _processors_: they
transform a pattern each time it passes by! For this reason it is likely
that more advanced (recent) connections often inhibit or correct the
transformations of their precursors. This way a picture arises of a
network that can allocate each subset of itself (a more or less primitive
state) time in order to generate a certain pattern: a pattern that should
match reality to the greatest possible extent. According to this model,
the ability to manipulate one's own arousal by any means would extend the
diversity of the patterns that can be grown within one's mind.
It would be a matter of experience, however, whether or not these
manipulations result in sensible patterns: patterns that match reality or
have any other meaning. This is where we might have stumbled upon a
criterion for consciousness. Maybe we lose consciousness only if our
arousal makes such a shift that a very unusual sequence of operations is
performed, resulting in a pattern that is increasingly mismatching
reality?

Copyright (c) 1997 Mervyn van Kuyen -- All rights reserved.

[From Bill Powers (970803.1946 MDT)]

Mervyn van Kuyen (970803 23:45 CET)

What bothers me in the system diagram you present in your introduction to
PCT (at your homepage), is that the element that produces the reference
receives no feedback. How does this element know what kind of context the
organism is in?

The block diagram is just one unit in a much larger organization, which we
refer to as HPCT -- hierarchical perceptual control theory. The reference
signal that is shown would actually be the net output from a number of
higher-level control systems, each of which receives many inputs and
produces many outputs. You might look up my book, "Behavior: the control of
perception" to get an idea of how I think the hierarchy is organized at
various levels.
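
As a rough illustration of that arrangement (a Python toy of my own, not
code from Powers' models; the gains, the 'environment' lag, and the way
the higher perception is built from the lower one are all assumptions),
the output of a higher-level unit can serve directly as the reference
signal for a lower-level unit:

class ControlUnit:
    """One elementary control unit: integrates (reference - perception) into its output."""
    def __init__(self, rate):
        self.rate = rate
        self.output = 0.0

    def step(self, reference, perception):
        self.output += self.rate * (reference - perception)
        return self.output

higher = ControlUnit(rate=0.05)   # slower, more abstract level
lower = ControlUnit(rate=0.3)     # faster level, closer to the muscles

lower_perception = 0.0
higher_perception = 0.0
for _ in range(400):
    lower_ref = higher.step(reference=1.0, perception=higher_perception)
    action = lower.step(reference=lower_ref, perception=lower_perception)
    lower_perception += 0.2 * (action - lower_perception)              # environment lag
    higher_perception += 0.2 * (lower_perception - higher_perception)  # built from lower perceptions

print(round(higher_perception, 2))   # settles near the top-level reference of 1.0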

In the introductory paper on my Web page, I intended only to show the basic
concepts of negative feedback control. If you follow up the links to the
CSG Web page, you will find much more extensive materials.

Your paper is interesting, but I think you will find that at least some of
the basic ideas are already part of Perceptual Control Theory. This is not
to say you have wasted your time: I am always reassured when someone else,
from an independent start, comes up with the same ideas. I agree with you
that control theory gives us a very powerful tool for understanding and
modeling behavior.
It may help you to know that I have been trying to get behavioral
scientists to consider control theory for over 40 years.

Best,

Bill P.