Re: Phenomena Phirst (was Re: Are you hungry?)
[Martin Taylor 2004.10.28.17.48]
From [Marc Abrams (2004.10.28.1529)]
[Martin Taylor 2004.10.28.14.44]
[From Bruce Gregory (2004.1028.1428)]
Rick Marken (2004.10.28.1050)
This sounds like a few different questions are being asked and ultimately all will go unanswered on CSGnet. Bruce Gregory is asking very plainly, and is trying to get a PCT explanation for how our ‘perceptions’ are constructed.
All perceptions are not controlled.
I think you mean “not all perceptions are controlled”.
I’ll continue on that basis.
The question Bruce is asking is whether or not perceptual construction is _also_ a controlled process.
Isn’t he asking whether it can be? And if it can, how that
construction would fit within the model? At least that’s how I
understood part of his contribution to the thread.
PCT cannot currently answer that question, although the hierarchy gives the illusion that it can.
Actually, I think a strict hierarchy does give a clear answer:
the construction of a perception is not controllable, although the
value (magnitude) of a perceptual signal is.
For the construction of a perception to be controllable, the
output of some elementary control unit would have to act on the
perceptual input function of a different ECU. The hierarchy has no
such connection. Nor does it have a connection that allows the output
of one ECU to serve directly as an input to a different ECU, though
the “imagination loop” allows the output of an ECU to
connect back to its own input.
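The connectivity constraint can be sketched in code. This is entirely my own illustration, not anything from Bill’s writings; the class and method names are invented for the purpose:

```python
# Hypothetical minimal sketch of a strict-HPCT elementary control
# unit (ECU). Names and structure are illustrative, not canonical.

class ECU:
    def __init__(self, gain=1.0, reference=0.0):
        self.gain = gain            # output gain
        self.reference = reference  # set only by higher-level outputs

    def perceive(self, lower_signals):
        # Perceptual input function: a fixed construction (here, a
        # plain sum). In the strict hierarchy no other ECU's output
        # acts on this function, so the construction itself is not
        # controllable -- only the value of the signal it emits.
        return sum(lower_signals)

    def output(self, lower_signals):
        # Output is proportional to error, and is routed to
        # lower-level references, never to another ECU's
        # perceptual input function.
        error = self.reference - self.perceive(lower_signals)
        return self.gain * error
```

With a reference of 5 and lower-level signals summing to 3, the unit emits gain × 2 toward lower-level references; nothing in the structure lets that output reach into another unit’s `perceive`.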
This is problematic for PCT because ‘perceptions’ are not formed or constructed. They just are. They are ‘signals’. They can and do take on any form we so choose them to take on. From ‘seeing’ a dog, to hearing an airplane, to tasting a muffin.
All of which, in “classical” PCT (H or otherwise), have the same form – the magnitude of something, which might be manifest as a neural impulse rate, the concentration of some chemical, a voltage, or whatever is appropriate to the hierarchy under examination.
Of course Bill has stipulated the hierarchy as a way of ‘perceptions’ being constructed, but this has no basis in fact or any research that has been done to date.
This is partially correct, insofar as the inputs to any ECU that
does not connect directly to sensor systems or to passive transforms
of sensor signals must come from somewhere. In the classical
hierarchy, that “somewhere” is the result of “lower
level” processing. But PCT (H or otherwise) makes no assertions
as to the kind of pattern recognition or time- and context-sensitive
functions that might perform this processing in any one perceptual
input element.
What the “H” in HPCT does (in “strict” HPCT at least) is assert that the different perceptual functions do not connect to each other in loops in which the output of one perceptual function feeds back to itself through processes in other perceptual functions.
There’s a more relaxed form of HPCT that does allow for the existence of such recursive loops, but only among perceptual functions at the same level. I think that at least this much recursion is needed to allow for the perception of categories (it allows for the existence of “flip-flops”, and, perhaps not incidentally, I think it also allows for the “labelling” kind of “identifying” – the picture and the argument are at http://www.mmtaylor.net/PCT/Mutuality/flip-flop.2.html).
The problem with analyzing structures in which the connections
among perceptual functions include loops is that such systems can
easily go chaotic. In fact, without proof other than that evolved
systems tend to do this, I suspect that if such loops do exist, they
will adapt or evolve to a state that is formally on “the edge of
chaos”. Such states are at once robust and sensitive, which is
really what you would like a perceiving system to be.
Removing the “H” from HPCT leads one into all sorts of
speculative realms. The most one can say from neurophysiology, I
think, is that the system as a whole is modular. That’s at least
compatible with “H”, but it’s compatible with more complex
structures, too.
The interesting question Bruce asked was about controlling (for)
the non-null value of a particular perception. In his examples and
those adduced by Rick, the perception in question was always(?) an
association of some kind – association of a name with a face, or of a
context with a name, or things like that. Could a person control for
perceiving such an association to have a positive magnitude? (I think
that’s a kind of PCT-language way to ask his question).
In strict HPCT, the answer to the question is, I think, “No,
that’s not a controllable perception”.
But it is within the capabilities of structures that have been
the subject of experiments by the PCT old hands. I’m not sure, but I
think it was Tom Bourbon who studied the effect of allowing control by
one ECU of the gain of another ECU. Now, in a flip-flop connection
(permitted in relaxed HPCT), changing the gain is a way of changing
the behaviour of the circuit. If the elements have high gain, in the
circuit shown at the cited URL, an association (a labelling) is
forced. If not, it is merely facilitated, and incoming data can affect
the result. So one could imagine this kind of a circuit among
perceptual input functions, with the gains of the interconnections
controlled by other ECUs.
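A toy simulation can illustrate the gain effect. This is my own construction, not Tom Bourbon’s actual experiment and not the circuit at the cited URL: two mutually inhibiting same-level perceptual units, with the cross-connection gain as the quantity another ECU might control:

```python
# Toy flip-flop: two same-level perceptual units with leaky dynamics
# and mutual inhibition. The inhibition strength ("gain") is the
# parameter that, in a Bourbon-type connection, could be set by the
# output of another ECU. Entirely illustrative.

def flip_flop(input_a, input_b, gain, steps=500, rate=0.1):
    a = b = 0.0
    for _ in range(steps):
        # Each unit relaxes toward its own input minus the
        # gain-weighted activity of its rival (rectified at zero).
        a += rate * (-a + max(0.0, input_a - gain * b))
        b += rate * (-b + max(0.0, input_b - gain * a))
    return a, b
```

With inputs 1.0 and 0.8, a cross-gain of 1.5 drives the weaker unit to zero – a forced association, winner-take-all – while a gain of 0.3 leaves both units active, so incoming data can still affect the result: the association is merely facilitated.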
That kind of control, though, would not account for the kind of searching for the “correct” labelling that we experience (“I know that face – now where did I see him?”).
Marc’s contention is that PCT cannot account for this kind of
experience. I would agree that strict HPCT cannot. But I think that
PCT can deal with searching in the outer environment, and if we allow
for a “Bourbon-type” connection within a complex PCT
structure, it looks to me as though PCT ought to be able to deal with
searching for associations, as well.
Don’t ask me to prove what I say in this last paragraph or to
model it, though. As matters stand, it’s just my opinion, and
therefore more faith than science. But that doesn’t often stand in the
way of claims made here, does it!
Martin