[from Avery Andrews 920818]
(penni sibun 920816)
> i don't really understand
>y'all's rhetorical use of ``mysterious'';
``mysterious'' means mysterious to a behaviorist or an old-fashioned
planning weenie (or, I guess, any random bad guy).
>i think it's conceivable that pct-type control might be more
>interactionist than any of us understands at this point. however,
>part b) is squarely cognitivist: it requires an inside-outside line
>to be drawn, and puts crucial stuff on the inside of the line.
This doesn't fit with my conception of what `cognitivism' is - I would
take cognitivism-in-field-X as being the position that everything worth
understanding in field X can be understood as a process of building
mental representations in the head. This is different from drawing
an inside-outside line & locating certain things inside the line. In
fact, Chapman and Agre seem to do this: Sonja has a clear
inside-outside line, with the visual system & much else located inside
it. What matters, I think, is whether you are supposed to be able to
understand why the stuff inside does anything useful, without also
understanding what's going on outside. This is denied by PCT, and not
implied by my (b).
I actually think it's essential to find ways of understanding internal
structure & relating it to `behavior', since otherwise, because
everything after all does bottom out in physics, why don't we all go
to the beach and wait to buy the book when Stephen Hawking & Co finish
figuring everything out?
Switching topics ...
The driving story illustrates nicely what I at the moment consider to be
the biggest potential problem for PCT (ignorance of how perception works
is of course another big problem, but it afflicts everybody). It often
seems to happen that one can think of certain abstract & high-level
variables as being maintained (car moving down road = 1; car lying on it
or beside it as a junk heap = 0; reference level = 1) by means of various
relatively concrete and straightforward low-level variables being
maintained, but in between there's a vast zone where all sorts of stuff
might be going on, which, at least initially, seems to have about as much
structure as a plate of spaghetti. E.g. sometimes one is controlling for
`car near middle of road', other times `car in snow rut', other times
`car to right (or is it left??) of reflectors', & on top of that there's
anticipatory compensation (knowledge of the road), however that works, etc.
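To make the spaghetti zone concrete, here's a toy C sketch (all names,
gains and numbers are mine, purely illustrative, not anybody's actual
model): one low-level loop controls lateral position in the canonical
PCT way, while something higher up swaps which cue is being controlled,
i.e. re-sets the reference, mid-drive.

    #include <stdio.h>

    typedef struct {
        double reference;   /* desired perceptual value */
        double perception;  /* current perceptual value */
        double gain;        /* loop gain */
        double output;      /* integrated output (steering) */
    } Loop;

    /* One canonical PCT step: act on the error, never on the
       disturbance itself, which the loop never senses directly. */
    static void iterate(Loop *l, double disturbance)
    {
        double error = l->reference - l->perception;
        l->output += l->gain * error;
        l->perception = l->output + disturbance;  /* environment */
    }

    int main(void)
    {
        Loop steer = { 0.0, 0.0, 0.2, 0.0 };   /* lateral position */
        int t;

        for (t = 0; t < 100; t++) {
            double disturbance = (t < 50) ? 0.5 : -1.0;  /* wind, camber */

            /* The spaghetti zone: something higher up swaps the cue,
               re-setting the reference of the same low-level loop. */
            steer.reference = (t < 50) ? 0.0   /* `middle of road' */
                                       : 0.3;  /* `snow rut' */

            iterate(&steer, disturbance);
        }
        printf("final error = %f\n", steer.reference - steer.perception);
        return 0;
    }

The loop keeps the error near zero under each regime; what the sketch
leaves entirely open is the interesting part, namely what decides the
swaps.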
A simpler case of this sort of thing is the top-level control of the
beerbug: the whole bug's nervous system can be regarded as controlling
for the bug having a high energy level (low energy levels trigger
`behaviors' that typically wind up having the effect that the energy
level gets raised), but the actual way in which the sub-systems in charge of
wandering, edge-following, and odor-hunting negotiate to achieve this
end is pretty confusing. I certainly can't be sure that PCT ideas will
help in clarifying the workings of this kind of system, though I
consider it worth spending some time to try to find out. What my point (b)
says is that either these negotiations will turn out to be castable in PCT
terms, with some benefit derived from this way of looking at them, or
the model bug will prove to be too dumb to be viable, and the real
bugs that are smart enough to keep themselves alive will have PCT-style
internals. Such is the claim, at any rate. Obviously, the jury has
barely begun to sit ....
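For what it's worth, here is the kind of thing I have in mind, as a toy
C sketch (the numbers and the arbitration rule are pure invention, not
Beer's): the top level controls perceived energy, and its `output' is
just a bias on which behavior wins the negotiation.

    #include <stdio.h>

    enum Behavior { WANDER, EDGE_FOLLOW, ODOR_HUNT };

    int main(void)
    {
        double energy = 1.0;
        double energy_ref = 1.0;   /* top-level reference */
        int t;

        for (t = 0; t < 20; t++) {
            double error = energy_ref - energy;   /* "hunger" */
            enum Behavior b;

            /* Crude stand-in for the negotiation: a big error biases
               the competition toward food-finding behaviors. */
            if (error > 0.5)      b = ODOR_HUNT;
            else if (error > 0.2) b = EDGE_FOLLOW;
            else                  b = WANDER;

            energy -= 0.1;                       /* metabolic drain */
            if (b == ODOR_HUNT) energy += 0.4;   /* found & ate food */

            printf("t=%2d behavior=%d energy=%.2f\n", t, (int)b, energy);
        }
        return 0;
    }

Whether anything like this thresholding story survives contact with the
actual subsystems is exactly the open question.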
(Bill Powers ???)
(since cognitivists aren't trying to understand behavior, I don't think
that ...)
I seem to have mislaid the posting this is a reply to, but as far as
I can make out, Sonja's limitation to the 8 joystick directions is in
no way crucial. She would not require deep modifications if she
were supposed to drive, say, a hovercraft sled with thruster and rotator
engines, able to move in any direction (like the spaceship in
Asteroids). The reason these interactive AI gizmos effect control is
that what they were designed to do is achieve interesting results under
circumstances that change rapidly and unpredictably relative to the
amount of time it takes them to do anything significant (e.g. kill
a monster, as opposed to move a pixel to the left). The fact that
the targets move around unpredictably means that there are unpredictable
disturbances in the path from gross output to net result, except that
the disturbances that C&A focus on are high-level, distal ones
(where the object you're heading for actually is) rather than low level
proximal ones (how much torque you get for how much neural current).
The same dog, just barking in a different corner of the yard.
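To see why the distal/proximal distinction doesn't change the loop,
here's a toy C sketch (constants invented): a craft with continuous
thrust, Asteroids-style, nulling the perceived offset to a drifting
target. No 8-direction joystick, same error-driven loop.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 0.0, y = 0.0;      /* craft position */
        double tx = 10.0, ty = 5.0;   /* target: the distal, high-level
                                         disturbance, drifting around */
        int t;

        for (t = 0; t < 200; t++) {
            double ex, ey;

            tx += 0.05 * cos(t * 0.3);   /* unpredictable wandering */
            ty += 0.05 * sin(t * 0.2);

            /* Control the perceived offset toward a reference of 0:
               thrust along the error vector, any direction allowed. */
            ex = tx - x;
            ey = ty - y;
            x += 0.1 * ex;
            y += 0.1 * ey;
        }
        printf("final offset = (%.3f, %.3f)\n", tx - x, ty - y);
        return 0;
    }

Quantize the thrust to 8 directions and the craft tracks a little more
raggedly, but nothing about the organization changes.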
As for CGA HiRes, writing with setcolor(BLACK) doesn't effect erasure,
but the setwritemode(1) trick looks like what I was looking for.
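For anyone else fighting BGI, here's the trick as I understand it
(Turbo C; XOR_PUT is 1 in graphics.h): drawing the same line twice in
XOR mode erases it. Caveat: setwritemode only affects the line-drawing
routines (line, lineto, linerel, rectangle, drawpoly), so for other
shapes re-writing may still be the only way.

    #include <graphics.h>
    #include <conio.h>

    int main(void)
    {
        int gdriver = CGA, gmode = CGAHI;   /* 640x200 HiRes, 2 colors */

        initgraph(&gdriver, &gmode, "");    /* assumes CGA.BGI in cwd */
        setwritemode(XOR_PUT);              /* XOR_PUT == 1 */

        line(50, 50, 250, 150);             /* draw */
        getch();
        line(50, 50, 250, 150);             /* same XOR line: erased */
        getch();

        closegraph();
        return 0;
    }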
I had already drawn the gloomy conclusion that re-writing was the only
way to erase in Borland graphics, by looking at the NSCK code and seeing
that that seemed to be how Pat & Greg were doing it.
Avery.Andrews@anu.edu.au