Five Layer Model

[Steve Adey (991015.0945GMT)]

Apologies again for a rather long email, which I hope is
interesting.

As a result of Bill's request to forward to him my original
message on my five layer model ideas, I have had an off-CSGnet
correspondence with him to try to advance my understanding
of HPCT. I am grateful to Bill for spending time answering
my emails, but now Bill feels that he needs to spend more time
on his modelling activities so I have tried to capture the essence
of our emails and copy them to the net to see if anyone else has
some views.

This edited version tries to omit philosophical issues, and some
of the consciousness and non-linear feedback issues that had crept
into our discussion, to focus on the HPCT model issues. Apologies
in advance to Bill if he should think that I have excessively
pruned or edited the emails to render his words out of context.

Note: It is of the nature of email interaction that the sequence
      below is not ordered by time but more by subject.

The key modelling issue seems to me to be this: if I am interested in
unexpected events caused by other organisms/agents in a simulated
world, how many levels are required to model interesting behaviour,
learning and reorganisation that simulates a 'real' organism? Is
it possible with a five layer model, or what should the layers be?

<insert for first email exchange>

===================================================================
Re: Five Level Model (was Conflict and HPCT)
[From Bill Powers (991005.0653 MDT)]

Steve Adey (991004.1400 GMT)--
. . .

Bill:
No, we haven't modeled (simulated) anything above the
relationship level, and the so-called "event" level is missing
below that level. Even in models of configuration control (only
the third level) we have to postulate that there is a
configuration-perceiver; I have no idea how a person can perceive
the "shape" shown in Demo 1 (and especially its variations), so
the control model just has to assume there is a perceptual signal
corresponding to the computer variable that is varying to alter
the shape of the visual image.
-----------------------------------------------------------------

Steve:
I am interested in the fact that you skip level 5 in your model
because it represents an interesting level at which the organism
can perceive events over which it may have no control.

Bill:
Not only events, but intensities, sensations, configurations,
transitions, [events], relationships, and so on. At every level
there are some perceptions that we can't affect by acting on the
external world.

Steve:
When I first found out about PCT, I found the statement "behavior
is the control of perception" as troubling because my interest in
autonomous control was concerned with simulated worlds where
events happen that are outside the agent's control.

Bill:
The thesis is not that all perceptions are under control. It is that
no behavior ever occurs except for the purpose of controlling some
perception.

Steve:
This statement seems to be identical with stating that there is
nothing that we can do that we cannot also perceive. At some level
that is trivially true, but there are many cases where we are very
unsure of its effect and hence of what we will perceive in the future.

Steve:
The attention of the agent has to be
drawn to these events and their significance has to be recognised
and then the agent will act to counter them.

Bill:
Conscious attention is by no means necessary for a control system to
counteract disturbances and bring perceptions to their reference
levels. Most human control systems operate unconsciously most of the
time, although practically any of them can also operate in the
conscious mode. A gust of wind, for example, may push your car to
one side, but the first you know of it consciously is seeing your
hands twisting the wheel against it. Yet you can easily do the same
thing consciously.
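
Note: Bill's gust-of-wind example is a single negative feedback loop
in action. Below is a minimal sketch in Python (this is my own
illustration, not code from any CSG demo; the gain, slowing factor
and disturbance profile are all invented):

    # A single control loop: perception = output + disturbance.
    # All numbers are invented for illustration.

    k_output = 5.0     # loop gain (assumed)
    slowing = 0.1      # output slowing factor, keeps the loop stable
    reference = 0.0    # intended lateral position of the car
    output = 0.0       # effect of the wheel angle on position

    for step in range(200):
        disturbance = 2.0 if 50 <= step < 150 else 0.0   # gust of wind
        perception = output + disturbance                # perceived position
        error = reference - perception                   # comparator
        output += slowing * (k_output * error - output)  # leaky integration
        if step % 25 == 0:
            print(step, round(perception, 3))

    # The output comes to oppose the gust almost exactly, keeping the
    # perceived position near the reference, with no knowledge of where
    # the disturbance came from and no conscious attention anywhere.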

Steve:
I think that there is a significant difference between where I am
running a control system, and where I have to institute one because
I perceive something that I might want to control.

Steve:
Thus my description comes closer to a stimulus-response than
a control of perception. I recognise that lower levels are indeed
doing control of perception but at this event level it is less
obvious.

Bill:
I don't want to defend this level -- I'm beginning to suspect that
it's badly out of sequence -- but let me just define "control of
events". Control is required when events that have just been
produced or have just occurred do not always match the event that
one intended to occur. The only way to correct an error is to repeat
the event, adjusting one or more lower reference signals
(transitions, configurations, sensations, intensities) to
make the new event closer to the intended one. Sometimes repeating
an event is impossible; striking one's last match can't be done
over.
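
Note: a minimal sketch of "control of events" as just defined, in
Python. The scenario (one adjustable lower reference, a fixed
environmental bias) is invented; the point is only that the error is
corrected across repetitions of the event, not within one event.

    import random
    random.seed(1)

    intended_event = 10.0    # the event one intends to produce
    lower_reference = 0.0    # reference handed to a lower-level system
    gain = 0.8               # how strongly error adjusts the reference
    bias = 3.0               # unmodelled property of the environment

    for attempt in range(1, 8):
        # each repetition produces an event depending on the lower
        # reference, the environmental bias, and some variability
        produced = lower_reference + bias + random.gauss(0.0, 0.2)
        error = intended_event - produced
        print(f"attempt {attempt}: event={produced:.2f}  error={error:.2f}")
        # the error cannot be corrected within this event; it adjusts
        # the lower reference for the *next* repetition
        lower_reference += gain * error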

Steve:
I do not expect you to defend a position which was even at the start
only tentative, but I was concerned at how little the upper levels
had appeared to be challenged or tested in almost 30 years. Please
publish your current thoughts in draft on CSGnet so we can understand
your latest views. However for me the issue is not the certainty of
an effect the organism desires but the totally unpredicted effect from
another organism (or nature in general). S-R responses in the lab
fall into this latter category.

Steve:
What I wonder is whether a 5 level model would be feasible
(at the moment only as a thought experiment to derive a
specification/understanding), and whether it should correspond
to something a bit like a fly, which perceives some simple
features of the world and then, when a rate exceeds some level,
interprets it as an event and responds to avoid trouble (e.g.
my hand).

Bill:
Sounds possible.

Steve:
What is less clear is whether the fly could be classed as a
learning system with reorganization or whether everything is
hard-wired.

Bill:
We'd just have to study how flies control things and see if the
parameters of control (in a model matched to the behavior) sometimes
begin varying randomly, eventually settling down to different
values. In principle this can be done; the main difficulties are
practical. But people have done such impossible-seeming experiments
with insects that I wouldn't declare any experiment impossible.

Yesterday I saw the following reference, but have not followed it up:
Heisenberg, M., Heusipp, M. & Wanke, C. (1995) "Structural plasticity
in the Drosophila brain" Journal of Neuroscience, 15, 1951-60.
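
Note: Steve's fly can be sketched as a transition-level (rate)
perception that, past a threshold, is treated as an event and
switches the fly's mode. Everything below -- signals, threshold,
modes -- is invented for illustration.

    def fly_mode(image_sizes, rate_threshold=0.5):
        """Yield (rate, mode) for a stream of looming-image sizes."""
        mode = "feed"
        previous = image_sizes[0]
        for size in image_sizes[1:]:
            rate = size - previous        # crude transition perception
            previous = size
            if rate > rate_threshold:     # rate exceeds some level:
                mode = "escape"           # treat it as an event
            yield rate, mode

    # a hand approaching: the image grows slowly, then fast
    sizes = [1.0, 1.1, 1.2, 1.3, 2.5, 4.0, 6.0]
    for rate, mode in fly_mode(sizes):
        print(f"rate={rate:.1f}  mode={mode}")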

<cut>

Steve:
I think that it is unlikely that _consciousness_ would have evolved
if there were not some special advantages.

Bill:
I agree. I think that consciousness may direct reorganization to
places in the brain where there is actually a problem, rather than
letting it undo organizations that are working well. But I don't
think it's needed to carry out perceptual computations at any level.

Steve:
My worry is that your descriptions of reorganisation have always
emphasised the randomness of it, which may be needed to find some
solutions, but local improvement seems to me a more likely first step.
However, local improvement needs some focusing system.

<cut>

Steve:
There is no doubt that
negative feedback is acting _unconsciously_ at the level of balance
but that says nothing about the higher levels. The presence of the
'program control' 7th level in your model actually ensures that any
organism that has that level present is not limited to simple
negative feedback.

Bill:
That's right, but it's still just a mechanism, like a computer.
Consciousness isn't needed to make it run. Or is it? I don't really
know. We need some experiments on this subject: what difference it
makes whether or not consciousness is present. But so far, I've seen
no proposals about the properties of consciousness that give it any
abilities that a neuronal computer couldn't have.

Steve:
I assume that what I feel is consciousness and I also attribute it
to higher animals like dogs. It must give an evolutionary advantage
or it would never have been refined as it has been in humans.

<cut>

Steve:
If you define the references as purpose then consciousness is
clearly separate, but the account of consciousness in HPCT currently
seems to be inadequate.

Bill:
I agree completely. However, most of the things other people
attributed to consciousness are done in HPCT by automatic neural
control systems. Not that I can say _how_ they're done, but in
principle they should be doable by neural networks. I haven't heard
ANY adequate accounts of consciousness, have you?

Steve:
I am just starting to read about adaptive behaviour systems and found
an interesting thesis at:
http://lucs.fil.lu.se/Staff/Christian.Balkenius/Thesis

<cut>

Steve:
Your basic outline of the levels may well be correct and forms the
interesting part of your theory, along with a suggested mechanism
for memory. To say that the higher levels use "the same laws of
negative feedback" has yet to be demonstrated, since you have no
models for those levels.

Bill:
The hold-up on the models is trying to figure out how, for example,
a sequence-control system could perceive sequences. If we just
postulate an input function, saying that a sequence of lower-level
signals is going on and that there is a sequence-level perceptual
signal representing it, without trying to explain how that
perceptual signal was derived, then we could probably go on to
finish out the control system model. But I like to get the lower
levels completed before trying to guess about higher levels,
and we're hung up at the third level where configurations are
perceived. Nobody has a good object-in-context recognizer that
could, for example, provide a signal continuously representing the
orientation of a cube relative to its surroundings. And that's a
_simple_ problem. I don't think we're ready to start modeling higher
levels. We don't have the right kind of mathematics yet.
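
Note: the "postulated input function" Bill describes can at least be
written down, even though it explains nothing about how a brain
derives the signal. A toy sketch, with an invented target sequence:

    class SequencePerceiver:
        """Emits a sequence-level perceptual signal. How a brain does
        this is the open question -- this function is simply postulated."""

        def __init__(self, target):
            self.target = list(target)   # the sequence to be perceived
            self.position = 0

        def step(self, lower_signal):
            """Return 1.0 when the whole target sequence has occurred."""
            if lower_signal != self.target[self.position]:
                self.position = 0        # broken sequence: start over
            if lower_signal == self.target[self.position]:
                self.position += 1
                if self.position == len(self.target):
                    self.position = 0
                    return 1.0           # sequence-level perceptual signal
            return 0.0

    p = SequencePerceiver(["A", "B", "C"])
    for s in ["A", "B", "A", "B", "C", "D"]:
        print(s, p.step(s))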

Steve:
I understand where you are coming from, but to anyone who has
a background in AI the lower levels seem not very interesting. A conviction
that your low level feedback mechanisms will scale up to higher
levels just sounds unconvincing (even if it turns out to be true).
You may have no choice but to pursue your methodology, but there
are other strategies and hence a gulf which is hard to bridge.
Having read about HPCT and tried to understand it (but no doubt
failed), I will use it as a framework to evaluate other ideas.

<insert for second email exchange>

-----------------------------------------------------------

Bill:
The digital revolution came along just in time to abort the main
threads of progress in our understanding of organisms.
------------------------------------------------------------------

Steve:
The computer revolution rather allowed our attention to switch from
low level issues to higher level cognitive ones. Admittedly some of
the techniques we have developed (like exhaustive search) are not
likely to be those employed by the brain. However I sense that
heuristic search is employed by the brain in that it normally
presents likely candidate solutions to the conscious mind. I know
that you have used your 'observer' (which you seem to equate with
the reorganisation mechanism) as something like consciousness, but
I still do not understand how well your ideas cope with the full
richness of thought, sensation and intentionality as we experience
it.

Bill:
The main bad thing the digital revolution did was to introduce the
perfect stimulus-response device, in which outputs could be
precisely planned and then precisely brought about. The idea that a
natural environment would _unpredictably alter_ the outputs before
their effects occurred was simply missing; thus the need for
continuous control was not seen.

Steve:
It appears that the demonstration of the e-coli algorithm also depends
on the goal remaining predictable. So long as there is an evaluation
function that can be measured, then even if it is a heuristic one it
can be minimised. I would expect better performance from a single
measure than from X-Y; especially if they are on different displays
like the old wartime radars. The existence of an evaluation function
for multiple levels seems to me much more problematic.
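
Note: for readers who have not seen the e-coli demonstration, the
method can be sketched in a few lines: keep moving while the
evaluation function improves, tumble to a random new direction when
it does not. The goal, step size and parameters below are invented.

    import math, random
    random.seed(2)

    goal = (10.0, 10.0)
    x, y = 0.0, 0.0
    angle = random.uniform(0.0, 2.0 * math.pi)
    step_size = 0.3

    def evaluation(x, y):
        """A single measurable evaluation function: distance to goal."""
        return math.hypot(goal[0] - x, goal[1] - y)

    previous = evaluation(x, y)
    for _ in range(800):
        x += step_size * math.cos(angle)
        y += step_size * math.sin(angle)
        current = evaluation(x, y)
        if current >= previous:                         # not improving:
            angle = random.uniform(0.0, 2.0 * math.pi)  # tumble at random
        previous = current

    print(f"final distance to goal: {current:.2f}")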

Bill:
I have no objection to proposing that the brain does digital-like
things. I think that's beyond doubt. But even while it's doing them,
there's a continuous quantitative aspect even to higher-level
behaviors. We talk about getting closer to an agreement, or
approaching an understanding, or gradually forming a plan, or
striving to be more logical. And even in the _purely_ digital realm,
we can talk about, for example, a theorem-prover setting subgoals
for subroutines to seek, the only difference from analog control
systems being that the variables change in steps rather than
smoothly.
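
Note: Bill's last point, that the digital case differs from the
analog one only in that the variables change in steps, can be shown
directly. The subgoal and step rule below are invented.

    reference = 7      # subgoal set by a higher-level process
    perception = 0     # current value of a discrete variable

    while perception != reference:
        error = reference - perception        # same comparator as ever
        # output function: move the variable one step at a time in
        # whichever direction reduces the error
        perception += 1 if error > 0 else -1
        print("perception:", perception)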

Steve:
As always, the question is what we are trying to model and whether the
approximations in the model are appropriate for the purpose that it
will serve. Serial digital machines can simulate parallel analog or
digital machines, but it is not yet clear to me whether parallel
brain machines can simulate serial machines easily because parallel
algorithms normally have to be specially developed.

-------------------------------------------------------------------
RE: real human perception.
Bill:
We're a long way from claiming to cope with wholly naturalistic
experience in all its richness, interactions, and complexity. But
what we can cope with is suggestive enough to hint that any big bets
on other ideas should perhaps be postponed until more data are
obtained.
-----------------------------------------------------------------

Steve:
There seems to be a major issue in the binding problem of
consciousness that is not resolved by any of the proposals that I have
seen (including HPCT).

<cut>

Steve:
I have no problems with your low level models, since they work, but
they do not exhibit learning, which seems to me key to higher
levels.

Bill:
Look on my web page for the Little Man model containing the
"artificial cerebellum". It's an example of one possible kind of
learning.

Listen, this could go on forever. I have other stuff I have to do,
and lacking any natural cutoff point I'd better just quit here.
Nuff said for now.

Steve:
I have sent a reply, since I had already composed it, but do not feel
any necessity to reply unless you have a thought that addresses the
issue of what modelling could/should be done to refine ideas on
reorganisation and learning.

regards Steve

[From Rick Marken (991015.1300)]

Steve Adey (991015.0945GMT)--

The key modelling issue seems to me to be this: if I am interested in
unexpected events caused by other organisms/agents in a simulated
world, how many levels are required to model interesting behaviour,
learning and reorganisation that simulates a 'real' organism? Is
it possible with a five layer model, or what should the layers be?

Could you describe your modeling project in a bit more detail?
It sounds to me like you are trying to model agents controlling
variables in a world where disturbances to those variables
(unexpected events) are caused by other organisms/agents. Is
this right?

By the way, did you get the zipped version of my three demos?

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates mailto: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

from [ Marc Abrams (991015.1604) ]

It's probably a poor way of handling this but I can't figure out a better
way, so I'll go with interjecting my comments among yours and Bill's.

[Steve Adey (991015.0945GMT)]

The key modelling issue seems to me to be this: if I am interested in
unexpected events caused by other organisms/agents in a simulated
world, how many levels are required to model interesting behaviour,
learning and reorganisation that simulates a 'real' organism?

Why is this the case? "Unexpected events" indicates a variable environment.
From a modeling perspective it really doesn't matter how or who makes the
environment unstable. The issue becomes one of "stabilizing" the environment.
The current thinking in AI circles, Robotics, and Adaptive Behavior is to
limit the "problem space" so you can "stabilize" the environment enough to
allow a practical solution to a narrow problem. Some efforts have been made
and are being made with _Reinforcement Learning_ ( NO, this is not Skinnerian
reinforcement :-) ) as a way of dealing with ( i.e. learning and
reorganizing ) in unstable environments. I think it shows some promise in
possibly helping us deal with reorganization. We are a _far_ cry from
simulating a "real" organism. We can and have simulated _aspects_ of
organisms but not the whole enchilada.

Why does it matter how many "levels" are involved? I think you are doing a
great disservice to yourself in limiting your thinking to _any_ number of
levels. I think the most important thing to concern yourself with is
understanding the basic model. We don't really know how any two or three
other levels might interact given Bill's conjectures in Chap. 15 of B:CP.
Sure Rick has done a hierarchy model but it just shows that the
_mechanisms_ for control would apply to more than one level. So if you
throw in memory and reorganization we don't know what happens between any
two levels at this point. Restricting the names of the levels means you must
shoehorn in data to fit an existing structure. It might be right, and it
may not. But why constrain yourself?

Is it possible with a five layer model, or what should the layers be?

Who knows?

<insert for first email exchange>
>>===================================================================
>>Re: Five Level Model (was Conflict and HPCT)
>>[From Bill Powers (991005.0653 MDT)]
>>
>>> Steve Adey (991004.1400 GMT)--
>>> . . .
>>>> Bill:
>>>> No, we haven't modeled (simulated) anything above the
>>>> relationship level, and the so-called "event" level is missing
>>>> below that level. Even in models of configuration control (only
>>>> the third level) we have to postulate that there is a
>>>> configuration-perceiver; I have no idea how a person can perceive
>>>> the "shape" shown in Demo 1 (and especially its variations), so
>>>> the control model just has to assume there is a perceptual signal
>>>> corresponding to the computer variable that is varying to alter
>>>> the shape of the visual image.
>>>>-----------------------------------------------------------------

Steve, this specifically addresses my concern above. A guide yes, useful,
yes. But it is not chiseled in stone. And talking about it won't help
advance it much either. We need to think of ways of _testing_ it.

>>> Steve:
>>> I am interested in the fact that you skip level 5 in your model
>>> because it represents an interesting level at which the organism
>>> can perceive events over which it may have no control.
>
> Bill:
> Not only events, but intensities, sensations, configurations,
> transitions, [events], relationships, and so on. At every level
> there are some perceptions that we can't affect by acting on the
> external world.

You both are talking about different things here. Steve, Bill is talking
about the fact that at _all_ levels there are things that you will not be
able to change by acting on the external world. You are referring to some
"capabilities" (experiencing events) we have.

>>> Steve:
>>> When I first found out about PCT, I found the statement "behavior
>>> is the control of perception" as troubling because my interest in
>>> autonomous control was concerned with simulated worlds where
>>> events happen that are outside the agent's control.
>
> Bill:
> The thesis is not that all perceptions are under control. It is that
> no behavior ever occurs except for the purpose of controlling some
> perception.

Steve, this is a _key_ statement. You need to understand what this means
vis-à-vis your statement. You really need to be clear about this.

Steve:
This statement seems to be identical with stating that there is
nothing that we can do that we cannot also perceive.

Bill is not saying this. We do (act) in part, because of what we perceive.

At some level
that is trivially true, but there are many cases where we are very
unsure of its effect and hence of what we will perceive in the future.

Take a deep breath :-). We are _never_ sure of its effect. That's why it's
part of a negative feedback loop. One that is both continuous and unending.

>>> Steve:
>>> The attention of the agent has to be
>>> drawn to these events and their significance has to be recognised
>>> and then the agent will act to counter them.
>
> Bill:
> Conscious attention is by no means necessary for a control system to
> counteract disturbances and bring perceptions to their reference
> levels. Most human control systems operate unconsciously most of the
> time, although practically any of them can also operate in the
> conscious mode. A gust of wind, for example, may push your car to
> one side, but the first you know of it consciously is seeing your
> hands twisting the wheel against it. Yet you can easily do the same
> thing consciously.

Bill, this might be true but begs the question. How? Steve is dealing with
autonomous AI agents. He would like to have a robot that could identify
environmental disturbances, act to counter them and complete the goal(s) of
the agent. Real easy to talk about this in CT terms but how do you implement
such a system?

Steve:
I think that there is a significant difference between where I am
running a control system, and where I have to institute one because
I perceive something that I might want to control.

Steve, you don't "perceive" things you want or don't want to control.

>>> Steve:
>>> Thus my description comes closer to a stimulus-response than
>>> a control of perception. I recognise that lower levels are indeed
>>> doing control of perception but at this event level it is less
>>> obvious.
>
> Bill:
> I don't want to defend this level -- I'm beginning to suspect that
> it's badly out of sequence -- but let me just define "control of
> events". Control is required when events that have just been
> produced or have just occurred do not always match the event that
> one intended to occur. The only way to correct an error is to repeat
> the event, adjusting one or more lower reference signals
> (transitions, configurations, sensations, intensities) to
> make the new event closer to the intended one. Sometimes repeating
> an event is impossible; striking one's last match can't be done
> over.

Steve:
I do not expect you to defend a position which was even at the start
only tentative, but I was concerned at how little the upper levels
had appeared to be challenged or tested in almost 30 years. Please
publish your current thoughts in draft on CSGnet so we can understand
your latest views. However for me the issue is not the certainty of
an effect the organism desires but the totally unpredicted effect from
another organism (or nature in general). S-R responses in the lab
fall into this latter category.

If you're really interested in doing some work I would suggest a bevy of
reading. Go to the CSG Web site for bibliographic info:
Bill's 4-part article in _Byte_ Magazine; his article in _Science_;
_Volitional Action_, edited by Wayne Hershberger; the ABS Journal issue
edited by Rick Marken; LCS I & LCS II, the compiled papers of Bill Powers;
_Origins of Purpose_ by Powers; and _Mind Readings_ by Rick Marken. All of
these papers are old, but they provide some _fertile_ ground for someone
who has a real desire to explore some of these issues using CT.

This is it for today. I am pooped. I'll come back tomorrow with the rest
:-).

Marc

[From Tim Carey (991016.1045)]

[Steve Adey (991015.0945GMT)]

Bill:
The main bad thing the digital revolution did was to introduce the
perfect stimulus-response device, in which outputs could be
precisely planned and then precisely brought about.

Could someone briefly explain the distinction between analogue and digital
in terms of how these two systems operate differently please? The
distinction is made often on CSG but I'm hopelessly ignorant of such things
and so I am able to understand sentences like the one above on only the most
superficial level.

Cheers,

Tim

[From Bill Powers (991016.0939 MDT)]

Steve Adey (991015.0945GMT)

Good idea to go public with our private conversation, Steve. Others will no
doubt have interesting comments to offer. And this way, Mary won't yell at
me for engaging in private writings that she thinks everyone should see.

Best,

Bill P.

[From Richard Kennaway (991016.2100 BST)]

Bruce Abbott (991016.1205 EST):
[explanation of difference between analogue and digital]

That's all true, but I think there's a bit more to it that Bill's
comment may have been alluding to. A digital computation is absolutely,
precisely, repeatable. The same computation, applied to the same data,
will always yield *exactly* the same result (jokes about "bit rot"
notwithstanding). If I want to find the five billionth digit of pi, all
I have to do is grind through the computation for long enough, and I
will get exactly the right answer, exactly the same answer as anyone
else using any correct algorithm.

This may foster the illusion that we could solve analogue problems as
exactly, if only we knew the starting conditions accurately enough and
had an accurate enough model. Then we could control things through
accurate measurement of initial conditions, modelling, and planning --
hence "modern optimal control theory". But this is not the case: for
most of the variables we want to control, measurement and modelling
sufficiently accurate to be able to make such predictions over a useful
length of time is difficult or impossible. These variables cannot be
controlled unless we observe their current values. And if we do observe
them, then accurate modelling ceases to be necessary.

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.
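
Note: Richard's point can be demonstrated numerically. In the sketch
below (all values invented) the disturbance drifts as a random walk:
an open-loop plan computed from accurately known initial conditions
degrades steadily, while a closed loop that observes the current
value needs no accurate model at all.

    import random
    random.seed(3)

    reference = 5.0
    steps = 200

    # the disturbance drifts; only its starting value (0) is knowable
    # in advance, however accurately we measure initial conditions
    disturbance, d = [], 0.0
    for _ in range(steps):
        d += random.gauss(0.0, 0.2)
        disturbance.append(d)

    # open loop: output planned once from the model (disturbance = 0),
    # so the controlled value becomes reference + actual disturbance
    open_loop_error = [abs(dt) for dt in disturbance]

    # closed loop: observe the variable each step, adjust the output
    output, closed_loop_error = 0.0, []
    for dt in disturbance:
        value = output + dt                  # observe the current value
        error = reference - value
        closed_loop_error.append(abs(error))
        output += 0.5 * error                # integrating output function

    print("mean error, open loop:  ", round(sum(open_loop_error) / steps, 3))
    print("mean error, closed loop:", round(sum(closed_loop_error) / steps, 3))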

[From Tim Carey (991017.0720)]

[From Bruce Abbott (991016.1205 EST)]

Thanks heaps Bruce that was exactly what I wanted.

_Now_, however, I have another query. Are analogue computers still around?
Were they/are they used in the same way that digital computers are used?
Could they be used for word processing etc., or did their ability to handle
continuous variables mean they were used for a specific purpose?

Cheers,

Tim

[From Bruce Abbott (991016.1845 EST)]

Tim Carey (991017.0720) --

Bruce Abbott (991016.1205 EST)

Thanks heaps Bruce that was exactly what I wanted.

_Now_, however, I have another query. Are analogue computers still around?
Were they/are they used in the same way that digital computers are used?
Could they be used for word processing etc., or did their ability to handle
continuous variables mean they were used for a specific purpose?

I'm sure Bill could give you a better answer to this than I could, as he has
had actual experience using analogue computers and I haven't. But as Bill
may still be busy, I'll take a stab at an answer. Analogue computers are
still around -- for some purposes they are still better than digital ones.
When the U.S. Navy decided to refurbish one of the old WWII battleships,
they considered replacing the old mechanical (analogue) computers used to
aim the big guns, but discovered much to everyone's surprise that they were
still as good for the job as any electronic replacement.

Analogue computers would not be used for word processing and similar
applications, as they are not well suited for doing logical operations such
as matching a given discrete output (printing the letter "k") to a given
discrete input (typing the "k" key). For those uses a digital system is far
superior. Implementing a "program" on an electronic analogue computer
involves (or at least used to involve) literally _wiring up_ (via patch
cords) a circuit made up of analogue devices (voltage sources, capacitors,
resistors, etc.) that generate the analogue signals and perform analogue
operations on them (e.g., integration). Each such circuit provided a single
simulation of the physical system being modeled; after you were done with it
you had to rewire the computer.

Many years ago I looked into using a digital computer to record events in an
operant conditioning experiment. The temporal spacing of the events
depended on a rat's behavior during an experimental run. Later those events
were to be "played back" to the animal at the same temporal spacings.
Personal computers were just becoming available in the marketplace, were
quite expensive ($10K), and had limited memory. Just storing the numbers
generated during the run exceeded the available memory capacity. Rather
than buying the computer, I ended up accomplishing the same thing by
recording the events as tones on an ordinary reel-to-reel tape recorder and
using the tones during playback to trigger the events. On the tape the time
between events was represented in analogue fashion by the distance along the
tape between recorded tone signals. The whole system cost less than $400
and performed the job admirably. However, once I finished the experiment, I
had no further use for the system. The same would not have been true of the
digital computer.

Regards,

Bruce

[From Tim Carey (991017.1140)]

[From Bruce Abbott (991016.1845 EST)]

Thanks Bruce.

[From: Steve Adey (991018.1200GMT)]

[Rick Marken (991015.1300)]

Could you describe your modeling project in a bit more detail.
It sounds to me like you are trying to model agents controlling
variables in a world where disturbances to those variables
(unexpected events) are caused by other organisms/agents. Is
this right?

The background to my interest is having worked on an AI system to
produce intelligent agents within a military training simulator
to give more 'intelligent' behaviour to multiple vehicles that
would normally be on programmed behaviours or under the control
of instructors. It is very difficult to produce flexible enough
programs to control these vehicles to attain the training
objectives when the students have a wide range of options. The
result is that instructors end up controlling the vehicles
manually with little support and cannot easily address their
monitoring objectives. However that is just my background, and
I have not yet formulated sufficient understanding of what HPCT
could do in any of my current areas of interest (application of
advanced software techniques to real-world problems). I was
trying to formulate an HPCT modelling requirement that would
address the issues that I can relate to most easily in the
simplest way possible so that I learn either a new technique
that works or come to understand what limits currently have
to be placed on the HPCT model. My idea of a fly was just
an attempt to produce a candidate organism to experience
events that were outside its direct control, i.e. dissimilar
from organisms at a similar level trying to independently
control a shared variable.

By the way, did you get the zipped version of my three demos?

Thanks Rick, I got it but due to the way things are set up here
at work I could not get it to run and have not yet taken it home.
Unfortunately the Strategy course for my distance learning MBA
looms at the start of November so any modelling is likely to
be limited to thought experiments or refining requirements
for the next six months.

Thanks for your help and response
Steve Adey

[From: Steve Adey (991018.1230GMT)]

Marc Abrams (991015.1604)

Why is this the case? "Unexpected events" indicates a variable
environment. From a modeling perspective it really doesn't matter
how or who makes the environment unstable. The issue becomes one of
"stablizing" the environment.

"Stablizing" the environment is key because in the case of a
significant event it seems to require more than negative feedback
as it takes the organism into a new operating regime. The fly switches
from feeding mode to escape mode in response to my swat (perhaps I
have expressed it to anthropomorphically but you get the idea).

The current thinking in AI circles, Robotics, and Adaptive Behavior
is to limit the "problem space" so you can "stabilize" the
environment enough to allow a practical solution to a narrow
problem. Some efforts have been made and are being made with
_Reinforcement Learning_ ( NO, this is not Skinnerian reinforcement
:-) ) as a way of dealing with ( i.e. learning and reorganizing )
in unstable environments. I think it shows some promise in
possibly helping us deal with reorganization. We are a _far_ cry
from simulating a "real" organism. We can and have simulated
_aspects_ of organisms but not the whole enchilada.

My model was not intended to suggest a real fly but just some aspects
that are appropriate to thinking about several key levels. It is not
'real' and we might require it to act in ways that flies may not act
(e.g. memory and learning).

Why does it matter how many "levels" are involved? I think you are
doing a great disservice to yourself in limiting your thinking to
_any_ number of levels. I think the most important thing to concern
yourself with is understanding the basic model. We don't really know
how any two or three other levels might interact given Bill's
conjectures in Chap. 15 of B:CP. Sure Rick has done a hierarchy
model but it just shows that the _mechanisms_ for control would
apply to more than one level. So if you throw in memory and
reorganization we don't know what happens between any two levels at
this point. Restricting the names of the levels means you must
shoehorn in data to fit an existing structure. It might be right,
and it may not. But why constrain yourself?

My suggestion was to try to keep the complexity down by postulating
a minimum set of levels that might be required in a model to expose
the issues in which I am interested. I have no data, hence my request
for possible S-R work with 'simple' organisms that might be useful
for comparison. The constraints are my understanding and a
sufficiently well defined specification so that it can be modelled in
software.

Bill:
<cut>

I have no idea how a person can
perceive the "shape" shown in Demo 1 (and especially its
variations), so the control model just has to assume there
is a perceptual signal corresponding to the computer variable
that is varying to alter the shape of the visual image.
-------------------------------------------------------------

Rick:

Steve, this specifically addresses my concern above. A guide yes,
useful, yes. But it is not chiseled in stone. And talking about it
won't help advance it much either. We need to think of ways of
_testing_ it.

I was going to explicitly stay away from problems of vision by
taking a very simplistic view of what a fly perceives.

Steve:
I am interested in the fact that you skip level 5 in your model
because it represents an interesting level at which the organism
can perceive events over which it may have no control.

Bill:
Not only events, but intensities, sensations, configurations,
transitions, [events], relationships, and so on. At every level
there are some perceptions that we can't affect by acting on the
external world.

Rick:
You both are talking about different things here. Steve, Bill is
talking about the fact that at _all_ levels there are things that
you will not be able to change by acting on the external world. You
are referring to some "capabilities" ( experiencing events ) we have

I am interested in the class of events to which an organism must
respond that are outside its control or occur unexpectedly
(e.g. a predator attacking a prey organism). These do not seem to be
control of perception but response to perception. I am not sure what
you mean by "capabilities" (experiencing events), can you expand?

Steve:
When I first found out about PCT, I found the statement
"behavior is the control of perception" as troubling because my
interest in autonomous control was concerned with simulated
worlds where events happen that are outside the agent's control.

Bill:
The thesis is not that all perceptions are under control. It is
that no behavior ever occurs except for the purpose of controlling
some perception.

Rick:
Steve, this is a _key_ statement. You need to understand what this
means vis-à-vis your statement. You really need to be clear about
this.

Perhaps I should have said responding to some perception because it
seems to me that events are outside our control.

Steve:
This statement seems to be identical with stating that there is
nothing that we can do that we cannot also perceive.

Bill is not saying this. We do (act) in part, because of what we
perceive.

The disturbance happens to the world, which affects our perception
and hence our behaviour - but Bill does not like this formulation.
For me he seems correct at the lower levels, if they can handle
the disturbance, but some disturbances are significant events.
I am interested in why this picture is wrong at the event level, as
it may bring the fly's perception of danger back under 'control'.


+--------------------------------------------------------------------+
|                               world                                 |
|                                                                     |
|  +-----+                   +----------------------------------+     |
|  |other|  disturbance      |               fly                |     |
|  |agent| ---------> stimulus   perceive                       |     |
|  +-----+           variable | ---------> comparator <-------  |     |
|                             |    event       |     intrinsic? |     |
|                             |                v     reference  |     |
|                    response | <--------- behaviour            |     |
|                             |              action             |     |
|                             +----------------------------------+    |
|                                                                     |
+--------------------------------------------------------------------+

I always contrast the simple description of the driver controlling
his speed and direction with a more complex picture of a top racing
driver controlling for maximum speed round the track and through
the back-markers. There he is operating on the edge of adhesion
through the corner and sensing for the event of loss of control.
The course is opportunistic, based on slower drivers' reactions,
and not a simple following of an ideal line.

<cut>

Bill, this might be true but begs the question. How? Steve is
dealing with autonomous AI agents. He would like to have a robot
that could identify environmental disturbances, act to counter them
and complete the goal(s) of the agent. Real easy to talk about this
in CT terms but how do you implement such a system?

Bang on target Rick!

Steve:
I think that there is a significant difference between where I am
running a control system, and where I have to institute one because
I perceive something that I might want to control.

Steve, you don't "perceive" things you want or don't want to control.

Rick, the issue is the significance of events. We perceive so many
things, but only some of them are significant events that give rise
to action. How do we decide that an event has occurred that is
significant for some plan/program we are carrying out, before it
stops the plan working? However, this is talking at a level way above
a five level model.

If you're really interested in doing some work I would suggest a bevy
of reading. Go to the CSG Web site for bibliographic info:
Bill's 4-part article in _Byte_ Magazine; his article in _Science_;
_Volitional Action_, edited by Wayne Hershberger; the ABS Journal
issue edited by Rick Marken; LCS I & LCS II, the compiled papers of
Bill Powers; _Origins of Purpose_ by Powers; and _Mind Readings_ by
Rick Marken. All of these papers are old, but they provide some
_fertile_ ground for someone who has a real desire to explore some of
these issues using CT.

I have seen references to all these, but since it is all cost and time
to cross the pond to England, I was reluctant to buy anything before
I was sure whether they tackle the issues at the level that I am
interested in. What I was looking for was a single article or book
that covers HPCT and it seems to be lacking. Teasing out the ideas
from a vast literature is beyond the time I have available, given the
MBA. Too much seems to be uncertain (speculation) to yet be ready
for serious modelling, which was the original challenge.

This is it for today. I am pooped. I'll come back tomorrow with the
rest :-).

I am off to Rome for 4 days to help integrate an Air Traffic Control
system so no more from me till next week. I have the digests printed
out so I will read them on the flight and in the hotel.

regards Steve

from [ Marc Abrams (991018.1104) ]

[From: Steve Adey (991018.1230GMT)]

Steve, these statements were made by me, not Rick.

Me:

<cut>
> Bill, this might be true but begs the question. How? Steve is
> dealing with autonomous AI agents. He would like to have a robot
> that could identify environmental disturbances, act to counter them
> and complete the goal(s) of the agent. Real easy to talk about this
> in CT terms but how do you implement such a system?

Bang on target Rick!

>> Steve:
>> I think that there is a significant difference between where I am
>> running a control system, and where I have to institute one because
>> I perceive something that I might want to control.

Me again. :-)

> Steve, you don't "perceive" things you want or don't want to control.

Rick, the issue is the significance of events. We perceive so many
things, but only some of them are significant events that give rise
to action. How do we decide that an event has occurred that is
significant for some plan/program we are carrying out, before it
stops the plan working? However, this is talking at a level way above
a five level model.

Me:

> If you're really interested in doing some work I would suggest a bevy
> of reading. Go to the CSG Web site for bibliographic info:
> Bill's 4-part article in _Byte_ Magazine; his article in _Science_;
> _Volitional Action_, edited by Wayne Hershberger; the ABS Journal
> issue edited by Rick Marken; LCS I & LCS II, the compiled papers of
> Bill Powers; _Origins of Purpose_ by Powers; and _Mind Readings_ by
> Rick Marken. All of these papers are old, but they provide some
> _fertile_ ground for someone who has a real desire to explore some
> of these issues using CT.

I have seen references to all these, but since it is all cost and time
to cross the pond to England, I was reluctant to buy anything before
I was sure whether they tackle the issues at the level that I am
interested in. What I was looking for was a single article or book
that covers HPCT and it seems to be lacking. Teasing out the ideas
from a vast literature is beyond the time I have available, given the
MBA. Too much seems to be uncertain (speculation) to yet be ready
for serious modelling, which was the original challenge.

Unfortunately it seems that way. I hope you find what you're looking for.
I am not sure you will be able to do anything like it without a great deal
of "putting things together". That is in terms of both time and money.

Marc

[From Bill Powers (991019.0536 MDT)]

Bruce Abbott (991016.1205 EST)--

A nice explanation of digital and analog variables. I'd like to expand on
it a bit, particularly with regard to digital and analog _computations_ and
_measurements_, my thoughts being inspired by your reference to addition.

Analog computations are literally carried out by analogy, without an
intermediate step of conversion of physical quantities into numbers and
then conversion of the numerical results back into physical quantities. But
digital computations require that extra step.

For example, suppose I want to know the total depth of water in two
beakers, both being cylinders and having equal diameters. I could measure
the depths by lowering a ruler marked in metric units into them, seeing
where the wetted length ended, and writing down both readings -- say I find
that the depths are 71 and 92 millimeters. Then I could use methods for
manipulating numbers that I learned in school. Add 1 and 2 to get 3. Write
down 3. Add 7 and 9 to get 16; write the 6 to the left of the three and
carry the 1. Write the 1 to the left of the 6, since there's nothing to add
it to. Read 163 millimeters. Then go back to the ruler, find the mark for
163 millimeters, put a finger on it, and observe the total length from the
end of the ruler to where the finger is. That is the total depth that would
be seen if the water from one beaker were added to the water in the other
beaker.

The same computation can be done by analog means, using two sticks with no
calibration marks on them. Lower each stick into a beaker and pull it back
out. Each stick will have a wet segment. Place the two wet segments end to
end, and observe the total length from the end of the first stick to the
end of the second stick's wet segment, the total length being equal to the
depth of water that would be seen if one beaker were emptied into the other.

Note that you never have to assign any numbers to the depths or lengths.
You're working directly with lengths, and the answer you want is given
directly as an observable length. Length is directly and quantitatively
analogous to depth, so this is an analog computation in which lengths on
sticks are used to compute the sum of depths of water in beakers.

Also, note a fact that is easily forgotten. When you use the digital
method, in principle you could express the answer to as many decimal places
as you like: the depths could be written as 70.9903 mm and 92.0097 mm.
Their sum would be exactly 163.0000 mm. So I could say that the depth of
water resulting when you empty one beaker into the other would be 163.0000
mm, couldn't I? Digital methods clearly allow much greater precision than
analog methods, don't they?

Well, actually, no. The problem is that you must first convert an analog
observation (the length of a wetted segment) into a numerical
representation, and the accuracy with which you do this is limited to the
accuracy of the analog observation, no matter how many decimal places you
use. And after you've found the sum, you have to translate the numerical
sum into an analog position on a ruler again. So the use of even
infinite-precision digital representations will not give you any
improvement in accuracy over the analog method. The basic limit is how
closely you can observe the position of the end of a wetted segment, and
the next limit is how accurately you can interpolate that position between
the marks on the ruler.

Now consider how a neural comparator might operate. A comparator receives
two input neural signals and emits an output neural signal, the error
signal, representing the difference in magnitudes between them.

One way to design a comparator would be to assume two converters that could
turn the magnitudes of the input signals into binary numbers, then a
digital subtractor which could subtract one binary number from the other to
produce a number representing the error, and finally a reverse converter
that could convert the binary error into the magnitude of a neural signal
again. These conversions and computations would be done very rapidly and
repetitively. That's apparently how a lot of people think of neural
computations.

Another way, the way it's really done at least in some neural comparators,
is to use analog methods only. Inside the comparator neuron is an
electrical potential. This potential has an average value roughly equal to
the total frequency of excitatory impulses reaching the cell _minus_ the
total frequency of inhibitory impulses reaching the same cell. And the
potential inside the cell determines the frequency of impulses that leave
the cell as its output. So the frequency of the error signal is always
about proportional to the frequency of the excitatory signal minus the
frequency of the inhibitory signal, without any numerical calculations ever
being involved.

Of course in a neural analog comparator, the error signal is zero if the
inhibitory signal is greater than the excitatory signal. But a second
comparator can reverse the inputs in terms of their physical meaning, and
produce the error signal when the first signal is greater than the second
signal. The second error signal would be connected to have effects opposite
to those of the first error signal.
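
Note: the pair of one-way comparators Bill describes is easy to
sketch. The firing rates below are arbitrary units and the numbers
are invented.

    def comparator(excitatory, inhibitory):
        """One-way analog comparator: a firing rate cannot go negative."""
        return max(0.0, excitatory - inhibitory)

    def error_pair(reference, perception):
        """Two opposed comparators cover both directions of error."""
        increase = comparator(reference, perception)  # perception too low
        decrease = comparator(perception, reference)  # perception too high
        return increase, decrease

    for reference, perception in [(10.0, 4.0), (10.0, 10.0), (10.0, 13.0)]:
        print(reference, perception, error_pair(reference, perception))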

When "modern control theorists" talk about computing the neural signals
required to produce an end-result in the environment, they not only assume
that there's a computer that can do the computations with as much accuracy
and speed as required (both very high), but that the data on which the
computations are based and the conversion from the computational results to
the actual states of physical variables are just as accurate. Of course
those input and output conversions involve _analog_ variables, so no matter
how accurate the intervening computations are, the overall accuracy is only
that of the conversion from and into analog terms. The extra accuracy of
the supposed digital operations is of very little use.

Best,

Bill P.

[From Bill Powers (991019.0720 MDT)]

Tim Carey (991017.0720)--

_Now_, however, I have another query. Are analogue computers still around?
Were they/are they used in the same way that digital computers are used?
Could they be used for word processing etc., or did their ability to handle
continuous variables mean they were used for a specific purpose?

Sure, they're still around. Wolfgang Zocher has a working one at home. He
uses it to check the performance of his simulations of analog computers on
digital computers.

Actually, even digital computers are made of analog devices: transistors.
When you connect transistors together in the right ways, they can _imitate_
a digital machine in which only 1s and 0s exist. In a real computer, a 0 is
any voltage less than about 0.5 volts, and a 1 is any voltage above about
2.5 volts. You can easily fool a TTL logic gate into acting as an ordinary
analog amplifier by adding a few external feedback components, because it's
basically an analog device.

In HPCT, digital operations occur at levels from categories upward -- but
they're not exclusively digital. Even categories have "degrees of
categoriness." If you start changing the details of some object like an
elephant, the object becomes less and less like the ideal object of its
type, in a way more typical of an analog variable than a digital one. So I
think the underlying machinery is analog in nature, with digital operations
being piggybacked on the analog processes.
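
Note: "degrees of categoriness" can be sketched as an analog signal
pushed through a squashing function -- nearly digital at the extremes,
continuous in between. The similarity scale and sharpness below are
invented for illustration.

    import math

    def category_signal(similarity, sharpness=10.0, threshold=0.5):
        """Analog in, nearly-binary out: a logistic squashing function."""
        return 1.0 / (1.0 + math.exp(-sharpness * (similarity - threshold)))

    # as an "elephant" loses its defining details, the category signal
    # degrades continuously rather than switching cleanly off
    for similarity in [0.9, 0.7, 0.55, 0.45, 0.3, 0.1]:
        print(similarity, round(category_signal(similarity), 3))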

Best,

Bill P.