[Avery Andrews 940913.1342]
(Bruce Buchanan 940911.22:30 EDT)
> The ascription of veridicality or not to the
> distal end of the neural causal chain would be, as I see it, a key
> function of the representational mechanisms. The notion that
> representations are only self-referential I still see as problematic, but,
> of course, the problem may only be with the way I see it!
Hmm. I've got some problems with this. It seems to me that the point
of the PCT view is not to say that representations refer to themselves,
but to say that representation is not one of the functions of perceptual
signals. Representation as we know it in everyday life is a kind of
relationship between perceptions: for example, perceptions of phonemes
or letters on the one hand, and dogs, cookies, etc. on the other.
Conventional cog sci takes the standpoint of a robot-designer who has
a format for her perceptions of the world, and a format for her perceptions of
the internal states of her robot, and wants to produce arguments that
the robot will work, in part by demonstrating that the world-states and
robot-states will covary in a certain way (the near-edge-of-table circuit
will be firing when the robot is near the edge of the table). But
living systems were not designed by people, and, unlike robots, they do
not come into existence subject to the acceptance by funding bodies of
arguments that they have a reasonable chance of working (well, nowadays,
maybe some of them do, but this is a very recent development!).
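To make the designer's covariation argument concrete, here's a minimal
sketch in Python (the threshold, the sensor noise, and all the names are
made up for illustration, not taken from any actual robot design):

import random

EDGE_THRESHOLD = 0.1  # metres; a hypothetical design parameter

def near_edge_circuit(sensed_distance):
    # 'Fires' (returns True) when the sensed distance to the edge is small.
    return sensed_distance < EDGE_THRESHOLD

for true_distance in [1.0, 0.5, 0.2, 0.08, 0.03]:
    sensed = true_distance + random.gauss(0, 0.01)  # imperfect sensor
    print(f"distance={true_distance:.2f} m  firing={near_edge_circuit(sensed)}")

The point of the demonstration, for the designer, is that the circuit
fires just when the robot really is near the edge; that covariation is
what licenses calling the circuit's state a 'representation' of the
world-state.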
So representationality is not a *fundamental* issue for PCT, though it
may be useful as a tool for thinking about how certain PIFs (perceptual
input functions) can help in controlling the outputs of others (this PIF
helps you attain the perception 'full glass of milk and no mess on the
table' because it
'represents' the thickness of the stream coming out of the carton &
thereby the rate at which the glass is filling and the amount of
turbulence there is likely to be in it). But this is part of an
attempted story about why a certain control system works, not about what
it is, which is what PCT is about. And representationality
may not be an adequate tool for understanding living systems, and it
might also turn out not to be the only or the best way to think about
robots either.
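To illustrate what I mean by a control system just working, here's a
bare-bones sketch of a one-level PCT-style loop in Python (the gain and
the toy pouring physics are my own inventions, purely for illustration):

reference = 0.9   # desired perception: glass nearly full
gain = 2.0        # output gain (arbitrary toy value)
dt = 0.1
fill = 0.0        # environment: actual fraction of glass filled

for step in range(40):
    perception = fill                   # PIF: trivially, the fill level
    error = reference - perception
    pour_rate = max(gain * error, 0.0)  # output: tilt the carton
    fill += pour_rate * dt              # environmental feedback path

print(f"final perception = {fill:.3f} (reference {reference})")

The loop keeps its perceptual signal near the reference, full stop;
whether we then say the signal 'represents' the fill level is a gloss
we add when telling a story about why the loop works, which is the
distinction I was trying to draw above.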
All this may seem contradictory to some of the other stuff I've said,
but I don't think so: since 'representation' is naively a relation
between two different kinds of perceptions, it's a pretty suspect move
to extend it to the relationship between perceptions and the 'Dinge
an sich' (things-in-themselves) in the external world that these
perceptions are causally connected to. Possibly a useful move for
certain purposes, but not
something to bank too heavily on, I would have thought.
Avery.Andrews@anu.edu.au