[Martin Taylor 940829 10:15]
Avery Andrews 940827.1613
Since Martin Taylor seems to be posting (contrary to the threats
of a few weeks ago), I'll ask a few more questions about layered
protocol nodes. Could the choice of, say, English vs. French
be a choice between different protocol nodes? How about
speaking French versus writing it, or fingerspelling it,
or typing vs. cursive handwriting?
The easiest way I find (now) to think about LP nodes is to treat them as
if each one is a cluster of PCT ECUs that have much in common in their
lower-level support. The analogue of the "reference signal" to an ECU is
the "primal message" to an LP node--the state that the message originator
wants to perceive the recipient to be in, or, in the recipient, the state
the recipient is in when the recipient perceives the originator to be
satisfied (in a cooperative dialogue). Notice that the recipient may not
even perceive the primal message, according to this formulation, but in
a restricted situation in which the originator "wants the recipient to
understand," the primal message state includes that the recipient perceive
something the originator wants the recipient to perceive.
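To make the parallel concrete, here is a minimal sketch (in Python), purely
for illustration; the class names, the equality test, and the subtraction
standing in for an error signal are simplifications of mine, not formal
parts of either PCT or LP theory:

    # Illustrative only: simplified stand-ins for an ECU and an LP node.

    class ECU:
        """PCT elementary control unit: output is driven by the error
        between the reference signal and the current perception."""
        def __init__(self, reference):
            self.reference = reference

        def output(self, perception):
            return self.reference - perception   # zero error -> no output

    class LPNode:
        """LP node: the primal message plays the role the reference signal
        plays in an ECU--the state the originator wants to perceive the
        recipient to be in."""
        def __init__(self, primal_message):
            self.primal_message = primal_message

        def output(self, perceived_recipient_state):
            if perceived_recipient_state == self.primal_message:
                return None                       # nothing to correct
            return ("virtual message toward", self.primal_message)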
If you take this analogy (which I think is more than an analogy) between
LP nodes and PCT ECU clusters, then the answers to the rest of your question(s)
fall out automatically.
English vs. French. The "top-level" primal message is that the originator
wants to perceive the recipient to be in some state. If this state is the
current perceived state, the LP (or simple PCT) model says that the message
sent will be NULL. If not, then the node (ECU cluster) will have output
(in LP theory, will send a virtual message at that level). How that output
is expressed is a matter for the lower levels that support those virtual
messages. It may be expressed in gestures, in language, in non-linguistic
sounds, or even, possibly, in scents (pheromones?). That depends on all the
same factors as in normal PCT--how the system is currently reorganized,
what the contextual perceptions are that enter into all the supporting
PIFs, and so forth.
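A rough sketch of that decision, with invented names ("supports" and the
send_via stand-in are assumptions for illustration, not claims about the
theory):

    # Rough sketch; 'supports' and 'send_via' are invented for illustration.

    def send_via(support, virtual_message):
        # Stand-in for the lower-level ECUs that turn the virtual message
        # into gestures, words, non-linguistic sounds, scents, ...
        return (support, virtual_message)

    def node_output(primal_message, perceived_state, supports):
        """NULL when the perceived state already matches the primal
        message; otherwise hand a virtual message to whatever lower-level
        support currently backs this node."""
        if perceived_state == primal_message:
            return None                           # NULL: no message sent
        virtual = ("bring recipient toward", primal_message)
        return [send_via(s, virtual) for s in supports]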
So English output and French output represent different nodes, with possibly
some overlap among the ECUs that form part of the cluster. If a person
is imperfectly bilingual, the support below the language level may be the
same for both, if the person uses, for example, English phonology or syntax
with French words. In a perfect bilingual, the support may be in distinct
nodes all the way down to the levels at which articulation enters.
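One way to picture the two cases, with sets standing in for the ECU
clusters (the level names are placeholders of mine, not a claim about
where the levels actually sit):

    # Placeholder level names; only the overlap contrast matters.

    imperfect_bilingual = {
        "English": {"English words", "English syntax", "English phonology"},
        "French":  {"French words",  "English syntax", "English phonology"},
    }
    perfect_bilingual = {
        "English": {"English words", "English syntax", "English phonology"},
        "French":  {"French words",  "French syntax",  "French phonology"},
    }
    # Shared support for the imperfect bilingual:
    #   imperfect_bilingual["English"] & imperfect_bilingual["French"]
    #   -> {"English syntax", "English phonology"}
    # Distinct support, down to where articulation enters, for the perfect one:
    #   perfect_bilingual["English"] & perfect_bilingual["French"] -> set()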
In the case of speaking and writing, the answer is the same, but the "choice"
is more likely to depend on contextual perceptual signals that contribute to
the perceived state--is the recipient present? is the recipient deaf? ...
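Sketched the same way, with just the two contextual perceptions from the
example (again purely illustrative):

    # Illustrative only; the perceptions and channels are only those above.

    def choose_channel(context):
        """Contextual perceptual signals select among lower-level supports."""
        if not context.get("recipient_present"):
            return "writing"
        if context.get("recipient_deaf"):
            return "signing or writing"
        return "speech"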
Does this make sense to you?
Martin