[Steve Adey (991015.0945 GMT)]
Apologies again for a rather long email, which I hope is
interesting.
As a result of Bill's request to forward to him my original
message on my five layer model ideas, I have had an off-CSGnet
correspondence with him to try to advance my understanding
of HPCT. I am grateful to Bill for spending time answering
my emails, but now Bill feels that he needs to spend more time
on his modelling activities, so I have tried to capture the essence
of our emails and copy them to the net to see if anyone else has
some views.
This edited version tries to omit philosophical issues, and some
of the consciousness and non-linear feedback issues that had crept
into our discussion, to focus on the HPCT model issues. Apologies
in advance to Bill, if he should think that I have excessively
pruned or edited the emails and so rendered his words out of context.
Note: It is in the nature of email interaction that the sequence
below is ordered not by time but more by subject.
The key modelling issue, it seems to me, given my interest in
unexpected events caused by other organisms/agents in a simulated
world, is this: how many levels are required to model interesting
behaviour, learning and reorganisation that simulates a 'real'
organism? Is it possible with a five layer model, and what should
the layers be?
<insert for first email exchange>
===================================================================
Re: Five Level Model (was Conflict and HPCT)
[From Bill Powers (991005.0653 MDT)]
Steve Adey (991004.1400 GMT)--
. . .
Bill:
No, we haven't modeled (simulated) anything above the
relationship level, and the so-called "event" level is missing
below that level. Even in models of configuration control (only
the third level) we have to postulate that there is a
configuration-perceiver; I have no idea how a person can perceive
the "shape" shown in Demo 1 (and especially its variations), so
the control model just has to assume there is a perceptual signal
corresponding to the computer variable that is varying to alter
the shape of the visual image.
-----------------------------------------------------------------
Steve:
I am interested in the fact that you skip level 5 in your model
because it represents an interesting level at which the organism
can perceive events over which it may have no control.
Bill:
Not only events, but intensities, sensations, configurations,
transitions, [events], relationships, and so on. At every level
there are some perceptions that we can't affect by acting on the
external world.
Steve:
When I first found out about PCT, I found the statement "behavior
is the control of perception" troubling, because my interest in
autonomous control was concerned with simulated worlds where
events happen that are outside the agent's control.
Bill:
The thesis is not that all perceptions are under control. It is that
no behavior ever occurs except for the purpose of controlling some
perception.
Steve:
This statement seems to be identical to stating that there is
nothing that we can do that we cannot also perceive. At some level
that is trivially true, but there are many cases where we are very
unsure of an action's effect and hence of what we will perceive in
the future.
Steve:
The attention of the agent has to be
drawn to these events, their significance has to be recognised,
and then the agent will act to counter them.
Bill:
Conscious attention is by no means necessary for a control system to
counteract disturbances and bring perceptions to their reference
levels. Most human control systems operate unconsciously most of the
time, although practically any of them can also operate in the
conscious mode. A gust of wind, for example, may push your car to
one side, but the first you know of it consciously is seeing your
hands twisting the wheel against it. Yet you can easily do the same
thing consciously.
Steve:
I think that there is a significant difference between the case
where I am already running a control system and the case where I
have to institute one because I perceive something that I might
want to control.
Steve:
Thus my description comes closer to stimulus-response than to
control of perception. I recognise that lower levels are indeed
doing control of perception, but at this event level it is less
obvious.
Bill:
I don't want to defend this level -- I'm beginning to suspect that
it's badly out of sequence -- but let me just define "control of
events". Control is required when events that have just been
produced or have just occurred do not always match the event that
one intended to occur. The only way to correct an error is to repeat
the event, adjusting one or more lower reference signals
(transitions, configurations, sensations, intensities) to
make the new event closer to the intended one. Sometimes repeating
an event is impossible; striking one's last match can't be done
over.
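As I read this paragraph, the event level controls by repetition
rather than by continuous adjustment, so a sketch of it looks like
a discrete retry loop (my own construction; produce_event and all
its numbers are hypothetical stand-ins for the lower levels and the
environment):

import random

def produce_event(lower_reference):
    # Hypothetical environment: the produced event differs from the
    # lower-level reference by an unknown offset plus noise.
    return lower_reference - 2.0 + random.gauss(0.0, 0.1)

intended = 10.0
lower_reference = 10.0
for attempt in range(20):
    produced = produce_event(lower_reference)
    error = intended - produced
    if abs(error) < 0.2:                # close enough to the intended event
        break
    lower_reference += 0.5 * error      # adjust the lower reference, retry
print(attempt + 1, round(produced, 2))

Each pass produces a whole new event; nothing corrects the event
already produced, which is exactly the point about the last match.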
Steve:
I do not expect you to defend a position which was, even at the
start, only tentative, but I was concerned at how little the upper
levels appear to have been challenged or tested in almost 30 years.
Please publish your current thoughts in draft on CSGnet so we can
understand your latest views. However, for me the issue is not the
certainty of an effect the organism desires but the totally
unpredicted effect from another organism (or nature in general).
S-R experiments in the lab fall into this latter category.
Steve:
What I wonder is whether a 5 level model would be feasible
(at the moment only as a thought experiment, to derive a
specification/understanding), and whether it should correspond
to something a bit like a fly, which perceives some simple
features of the world and then, when a rate exceeds some level,
interprets it as an event and responds to avoid trouble (e.g.
my hand).
Bill:
Sounds possible.
Steve:
What is less clear is whether the fly could be classed as a
learning system with reorganization or whether everything is
hard-wired.
Bill:
We'd just have to study how flies control things and see if the
parameters of control (in a model matched to the behavior) sometimes
begin varying randomly, eventually settling down to different
values. In principle this can be done; the main difficulties are
practical. But people have done such impossible-seeming experiments
with insects that I wouldn't declare any experiment impossible.
Yesterday I saw the following reference, but have not followed it up:
Heisenberg, M., Heusipp, M. & Wanke, C. (1995) "Structural plasticity
in the Drosophila brain" Journal of Neuroscience, 15, 1951-60.
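The kind of test Bill describes could first be tried in simulation.
Here is my own sketch, assuming an E. coli style of reorganization
(all parameters invented): the loop's gain takes random steps
whenever the accumulated squared error fails to improve, and
eventually settles at a more workable value.

import random

def run_trial(gain, dt=0.01, steps=200):
    # One control run; total squared error plays the role of the
    # intrinsic error that drives reorganization in this sketch.
    position, total = 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - position              # fixed reference of 1.0
        total += error * error
        position += gain * error * dt
    return total

gain = 0.5                                  # deliberately sluggish at first
step = random.choice((-5.0, 5.0))
last_cost = run_trial(gain)
for _ in range(200):
    gain = max(0.1, gain + step)
    cost = run_trial(gain)
    if cost >= last_cost:                   # no improvement: "tumble" to
        step = random.choice((-5.0, 5.0))   # a new random direction
    last_cost = cost
print(round(gain, 1))                       # a higher, better-performing gain

Matching such a model to real fly behaviour is of course the hard,
practical part that Bill mentions.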
<cut>
Steve:
I think that it is unlikely that _consciousness_ would have evolved
if there were not some special advantages.
Bill:
I agree. I think that consciousness may direct reorganization to
places in the brain where there is actually a problem, rather than
letting it undo organizations that are working well. But I don't
think it's needed to carry out perceptual computations at any level.
Steve:
My worry is that your descriptions of reorganisation have always
emphasised the randomness of it, which may be needed to find some
solutions, but local improvement seems to me a more likely first
step. However, local improvement needs some focusing system.
<cut>
Steve:
There is no doubt that
negative feedback is acting _unconsciously_ at the level of balance,
but that says nothing about the higher levels. The presence of the
'program control' 7th level in your model actually ensures that any
organism that has that level present is not limited to simple
negative feedback.
Bill:
That's right, but it's still just a mechanism, like a computer.
Consciousness isn't needed to make it run. Or is it? I don't really
know. We need some experiments on this subject: what difference it
makes whether or not consciousness is present. But so far, I've seen
no proposals about the properties of consciousness that give it any
abilities that a neuronal computer couldn't have.
Steve:
I assume that what I feel is consciousness, and I also attribute it
to higher animals like dogs. It must give an evolutionary advantage
or it would never have been refined as it has been in humans.
<cut>
Steve:
If you define the references as purpose then consciousness is
clearly separate, but the account of consciousness in HPCT currently
seems to be inadequate.
Bill:
I agree completely. However, most of the things other people
attributed to consciousness are done in HPCT by automatic neural
control systems. Not that I can say _how_ they're done, but in
principle they should be doable by neural networks. I haven't heard
ANY adequate accounts of consciousness, have you?
Steve:
I am just starting to read about adaptive behaviour systems and found
an interesting thesis at:
http://lucs.fil.lu.se/Staff/Christian.Balkenius/Thesis
<cut>
Steve:
Your basic outline of the levels may well be correct and forms the
interesting part of your theory, along with a suggested mechanism
for memory. To say that the higher levels use "the same laws of
negative feedback" has yet to be demonstrated, since you have no
models for those levels.
Bill:
The hold-up on the models is trying to figure out how, for example,
a sequence-control system could perceive sequences. If we just
postulate an input function, saying that a sequence of lower-level
signals is going on and that there is a sequence-level perceptual
signal representing it, without trying to explain how that
perceptual signal was derived, then we could probably go on to
finish out the control system model. But I like to get the lower
levels completed before trying to guess about higher levels,
and we're hung up at the third level where configurations are
perceived. Nobody has a good object-in-context recognizer that
could, for example, provide a signal continuously representing the
orientation of a cube relative to its surroundings. And that's a
_simple_ problem. I don't think we're ready to start modeling higher
levels. We don't have the right kind of mathematics yet.
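Taking Bill at his word, one can simply postulate a sequence-level
input function without explaining how neurons would compute it. A
minimal sketch of such a postulated function (entirely my own
illustration) emits a perceptual signal only when the lower-level
signals have just occurred in the expected order:

def make_sequence_perceiver(expected):
    state = {"index": 0}          # how far into the sequence we are
    def perceive(symbol):
        if symbol == expected[state["index"]]:
            state["index"] += 1
            if state["index"] == len(expected):
                state["index"] = 0
                return 1.0        # the whole sequence just completed
        else:
            # order broken: restart (the symbol may begin a new attempt)
            state["index"] = 1 if symbol == expected[0] else 0
        return 0.0
    return perceive

perceive = make_sequence_perceiver(["A", "B", "C"])
print([perceive(s) for s in "ABCBC"])   # -> [0.0, 0.0, 1.0, 0.0, 0.0]

This dodges exactly the question Bill raises: how such a detector
could be built from real neurons, and how the harder configuration
level below it could be built at all.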
Steve:
I understand where you are coming from, but to anyone who has
a background in AI these low-level problems seem not very
interesting. A conviction that your low level feedback mechanisms
will scale up to higher levels just sounds unconvincing (even if it
turns out to be true). You may have no choice but to pursue your
methodology, but there are other strategies, and hence a gulf which
is hard to bridge. Having read about HPCT and tried to understand
it (but no doubt failed), I will use it as a framework to evaluate
other ideas.
<insert for second email exchange>
-----------------------------------------------------------
Bill:
The digital revolution came along just in time to abort the main
threads of progress in our understanding of organisms.
------------------------------------------------------------------
Steve:
The computer revolution rather allowed our attention to switch from
low level issues to higher level cognitive ones. Admittedly some of
the techniques we have developed (like exhaustive search) are not
likely to be those employed by the brain. However I sense that
heuristic search is employed by the brain in that it normally
presents likely candidate solutions to the conscious mind. I know
that you have used your 'observer' (which you seem to equate with
the reorganisation mechanism) as something like consciousness, but
I still do not understand how well your ideas cope with the full
richness of thought, sensation and intentionality as we experience
it.
Bill:
The main bad thing the digital revolution did was to introduce the
perfect stimulus-response device, in which outputs could be
precisely planned and then precisely brought about. The idea that a
natural environment would _unpredictably alter_ the outputs before
their effects occurred was simply missing; thus the need for
continuous control was not seen.
Steve:
It appears that the demonstration of the e-coli algorithm also
depends on the goal remaining predictable. So long as there is an
evaluation function that can be measured, then even if it is a
heuristic one it can be minimised. I would expect better performance
from a single measure than from separate X and Y measures, especially
if they are on different displays like the old wartime radars. The
existence of an evaluation function for multiple levels seems to me
much more problematic.
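For readers who have not seen the demonstration, here is my
reconstruction of the E. coli strategy (not Bill's actual code; the
goal position and step sizes are invented). The agent senses only
one scalar evaluation, keeps its heading while that evaluation
improves, and "tumbles" to a random new heading when it worsens:

import math, random

goal = (7.0, -3.0)
x, y = 0.0, 0.0
heading = random.uniform(0.0, 2.0 * math.pi)
last_eval = math.hypot(goal[0] - x, goal[1] - y)

for step in range(5000):
    x += 0.05 * math.cos(heading)
    y += 0.05 * math.sin(heading)
    evaluation = math.hypot(goal[0] - x, goal[1] - y)  # the single measure
    if evaluation > last_eval:                         # getting worse:
        heading = random.uniform(0.0, 2.0 * math.pi)   # tumble
    last_eval = evaluation
    if evaluation < 0.1:                               # close enough to goal
        break
print(step, round(evaluation, 2))

Note that nothing here predicts the goal; the method only needs the
single evaluation to keep being measurable, which is just my point
above.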
Bill:
I have no objection to proposing that the brain does digital-like
things. I think that's beyond doubt. But even while it's doing them,
there's a continuous quantitative aspect even to higher-level
behaviors. We talk about getting closer to an agreement, or
approaching an understanding, or gradually forming a plan, or
striving to be more logical. And even in the _purely_ digital realm,
we can talk about, for example, a theorem-prover setting subgoals
for subroutines to seek, the only difference from analog control
systems being that the variables change in steps rather than
smoothly.
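Bill's comparison can be made concrete: a higher-level system that
changes a lower loop's reference in discrete steps behaves like a
planner setting subgoals. A tiny sketch (my own, with invented
numbers) in which only the reference jumps while the lower loop
tracks it smoothly:

subgoals = [2.0, 5.0, 3.0]          # stepwise references from "above"
position, dt, gain = 0.0, 0.01, 20.0
trace = []
for reference in subgoals:          # higher level: discrete subgoals
    for _ in range(300):            # lower level: continuous tracking
        error = reference - position
        position += gain * error * dt
    trace.append(round(position, 2))
print(trace)                        # -> [2.0, 5.0, 3.0]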
Steve:
As always, the question is what we are trying to model and whether
the approximations in the model are appropriate for the purpose that
it will serve. Serial digital machines can simulate parallel analog
or digital machines, but it is not yet clear to me whether parallel
brain machines can easily simulate serial machines, because parallel
algorithms normally have to be specially developed.
-------------------------------------------------------------------
RE: real human perception.
Bill:
We're a long way from claiming to cope with wholly naturalistic
experience in all its richness, interactions, and complexity. But
what we can cope with is suggestive enough to hint that any big bets
on other ideas should perhaps be postponed until more data are
obtained.
-----------------------------------------------------------------
Steve:
There seems to be a major issue in the binding problem of
consciousness that is not resolved by any of the proposals that I
have seen (including HPCT).
<cut>
Steve:
I have no problems with your low level models, since they work, but
they do not exhibit learning, which seems to me key to higher
levels.
Bill:
Look on my web page for the Little Man model containing the
"artificial cerebellum". It's an example of one possible kind of
learning.

Listen, this could go on forever. I have other stuff I have to do,
and lacking any natural cutoff point I'd better just quit here.
Nuff said for now.
Steve:
I have sent a reply, since I had already composed it, but do not
feel the necessity to respond unless you have a thought that
addresses the issue of what modelling could/should be done to refine
ideas on reorganisation and learning.
Regards, Steve