Language variables

In linguistics, the Test is typically performed by repeating an experimental utterance while making a substitution for some identified part of it.

The locus classicus for the methodology of descriptive linguistics is Zellig Harris's Methods in Structural Linguistics. By these methods, language perceptions are identified and tested to verify that they are indeed the perceptions whose control constitutes the given language. By these methods, the controlled variables are paired with written or iconic representations, by means of which the linguist describes the language under consideration and by which another person can learn it, given appropriate prior training in the conventions of linguistics and in the relevant physiological, acoustic, and socio-cultural phenomena.

I wrote an early exposition of the kinds of variables that constitute languages as my contribution to the Festschrift for Bill Powers. A more recent exposition begins on p. 378 of my chapter in LCS IV.

It is often the case that control of a given variable appears to be poor. In general, each variable is controlled by setting references for a plurality of lower-level perceptions, not all of which need to contribute to its input function for the higher-level variable to be perceived and controlled. This ‘redundancy’ is functionally important in a noisy environment. When other inputs suffice for the higher-level variable to be perceived and controlled, the higher-level system may reduce the strength of the reference signal for the given variable, that is, the specification of how much of that variable must be perceived.
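
To make this concrete, here is a minimal sketch in Python (hypothetical signals and weights, not drawn from any particular analysis): the higher-level input function is a weighted sum of whatever lower-level perceptual signals are currently available, so losing one contribution degrades but does not abolish the perception, and the system above can lower its reference rather than lose control.

```python
# Minimal sketch of a redundant input function (hypothetical names, weights,
# and values). The higher-level perception is a weighted sum of lower-level
# perceptual signals; if one contribution drops out, the others still yield
# a usable perception, and the system above can lower the reference it sends,
# demanding less of the variable rather than losing control of it.

def input_function(signals, weights):
    """Weighted sum of the lower-level perceptual signals currently available."""
    return sum(w * s for w, s in zip(weights, signals))

weights = [0.4, 0.3, 0.3]

p_full = input_function([1.0, 0.8, 0.9], weights)      # all three contributions present
p_reduced = input_function([1.0, 0.0, 0.9], weights)   # one contribution lost to noise

reference_full, reference_reduced = 0.9, 0.6           # the higher system may ask for less

print(f"perception with all inputs: {p_full:.2f}, with one input missing: {p_reduced:.2f}")
```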

To illustrate, consider the pronunciation of the word to. Pronounced with high gain, as in ‘he wandered to and fro’, there is a clear burst of high-pitched sound after the t, the dorsum of the tongue is lifted toward the velum with the lips protruded, producing a syllable homophonous with too and two, with first and second formants at about 320 Hz and 800 Hz (for a typical man; all frequency ranges are shifted higher by the shorter vocal tract length of women and children). In ‘He wanted to go’ the release of the t is just as audible, but the tongue is lowered and the lips are in a lax, neutral position, producing first and second formants centered at something like 600 Hz and 1200 Hz. In ‘I want to go’, the word to may be pronounced the first way in very emphatic speech, may be reduced to the second pronunciation in less emphatic speech, and typically is reduced even further to what we can write conventionally as “I wanna go”, where the t is entirely omitted after the n and the tongue is in a lax central position, producing formants centered at something like 500 Hz and 1300 Hz. This is reduction of gain rather than loss of control.
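
For anyone who wants to put these three degrees of reduction into a simulation, here are the approximate formant targets just described, restated as data (values in Hz for a typical man, as given above; the labels are mine):

```python
# Approximate first and second formant targets for the three pronunciations
# of 'to' described above (typical man, values in Hz, as given in the text).
TO_PRONUNCIATIONS = {
    "full, as in 'he wandered to and fro'": {"F1": 320, "F2": 800},   # homophonous with 'too', 'two'
    "reduced, as in 'he wanted to go'":     {"F1": 600, "F2": 1200},
    "further reduced, as in 'I wanna go'":  {"F1": 500, "F2": 1300},
}
```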

A second reason that control of a given language variable commonly appears to be poor is conflict between control of one variable and control, either concurrent or sequential, of another. Limiting ourselves still to variables controlled for pronunciation of words, tongue twisters and spoonerisms are readily accessible examples of the latter. The tongue and other articulators are in position to produce acoustic effect A but must be moved to different positions as means of controlling acoustic effect B. We cannot ascribe these mispronunciations, nor their patterned and predictable character, to lag time in changing the lower-level references, because the same people can and do pronounce them correctly, often immediately afterward. The tongue cannot move instantly from one reference position to another. Physical properties of the oral cavity and of the musculature are in effect a disturbance to control of B when the tongue starts from control of A. Furthermore, if B is controlled with equal or higher gain, that can interfere with control of the prior segment A in an anticipatory way. (There is a very large literature on all of this.)
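
Here is a toy illustration of that anticipatory interference, not a model of real articulators: a single controlled ‘articulator position’ with sluggish first-order dynamics is given a reference for segment A and then, before A’s position has been reached, a reference for segment B. All the numbers are arbitrary.

```python
# Toy sketch: a sluggish articulator chasing successive references (arbitrary units).
dt = 0.01
gain = 3.0            # proportional loop gain
lag = 1.0             # time constant of the sluggish plant
ref_A, ref_B = 1.0, -0.5
switch_step = 25      # B's reference takes over before A's position is reached
steps = 400

pos, trace = 0.0, []
for t in range(steps):
    ref = ref_A if t < switch_step else ref_B
    output = gain * (ref - pos)          # control output driving the articulator
    pos += (output - pos) * dt / lag     # first-order lag: the position cannot jump
    trace.append(pos)

print(f"closest approach to A's reference ({ref_A}): {max(trace[:switch_step]):.2f}")
print(f"where the position settles under B's reference ({ref_B}): {trace[-1]:.2f}")
```

With the early switch, the position never gets near A’s reference; delay the switch and it does. That is the pattern the physical lag imposes, quite apart from whether the references themselves are correct.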

Pronunciation is controlled concurrently in two sensory modalities. We control the sound, yes, but we do so by adjusting the reference values for how it ‘feels’ to produce the given sound. We cannot correct the sound in real time: by the time it is audible it’s too late, and the only way to change it is to repeat the utterance.

On pp. 383-386 of my chapter in LCS IV is a diagram for a PCT account of experimental work done by Katseff et al. in the phonology lab at UC Berkeley. I did not perform the Test in that instance, not having the specialized hardware on loan from the Otolaryngology Department at USF as they did, but I did propose it on CSGnet in 1991-1992. They disturbed the acoustic result of pronouncing head so that it sounded to the subjects more like hid. The subjects resisted the disturbance by pronouncing it more like had. (See the formant data for ɪ, ɛ, æ in the table above.)

However, subjects did not completely resist the disturbance, because the same higher-level system controlling a perception of head receives input not only from the acoustic control loop but also from the articulatory control loop. (You can verify this for yourself by reading this sentence while holding your breath, silently moving your lips and tongue. You may find yourself articulating especially ‘emphatically’, something the higher-level systems do, for example, in a noisy environment.) As the acoustic control loop changes the references toward had, the articulatory input is a disturbance to perceiving head. In the resulting conflict, neither loop was able to control completely. Control of articulation is a means of controlling the sound, and other than in this extremely artificial experimental situation the two control loops do not come into conflict. Katseff et al. came to the same conclusion, that control of articulation ‘somehow’ interfered so that the disturbance of the sound was not completely ‘compensated’, but they did not have HPCT to explain why this is so and how it works.

The articulatory perceptions A are still within range for V2, but as they go farther up, the signal for perceiving V1 gets stronger. The sound perceptions S are still within range for V2, but as they go farther down, the signal for perceiving V3 gets stronger.

The V1, V2, and V3 input functions are at the next level up. They provide input for perceiving words, two levels up. Intending to perceive head, the subject controls both A and S perceptions so as to maximize the strength of the intended V2 signal at the phoneme level and to minimize the strengths of the V1 and V3 signals. The disturbance (not shown here) is what moves A and S away from the intended V2 value of the sound and from the usual articulatory feel of pronouncing V2. If the experiment were prolonged, the reference for articulation would adapt, the new articulation would come to feel like pronouncing V2, and resistance to the disturbance would be more complete. There the analogy to prismatic glasses might break down, because the glasses affect the entire visual field, whereas this equipment disturbs only a limited part of the acoustic space of speech sounds.
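
Here is a minimal computational sketch of this two-input account (my own illustrative numbers and equal weights, not Katseff et al.’s data or stimuli): one word-level system perceives head as a weighted combination of the acoustic result and the articulatory feel; the apparatus shifts the acoustic feedback downward toward hid, and the compensation comes out only partial because the articulatory input pulls the other way.

```python
# Sketch of partial compensation when one higher-level system has two inputs.
# All numbers are illustrative, not experimental values.
dt = 0.01
k = 5.0                       # integration rate of the word-level output
w_sound, w_feel = 0.5, 0.5    # assumed equal weights on the two inputs
r_word = 600.0                # intended F1-like value for 'head' (illustrative Hz)
shift = -100.0                # apparatus shifts the heard formant downward, toward 'hid'

articulation = r_word         # start at the habitual articulation
for _ in range(5000):
    p_sound = articulation + shift             # what the speaker hears (disturbed)
    p_feel = articulation                      # what the articulation feels like
    p_word = w_sound * p_sound + w_feel * p_feel
    error = r_word - p_word
    articulation += k * error * dt             # adjust articulation to reduce the error

print(f"articulation settles at {articulation:.1f} Hz (habitual value {r_word:.1f})")
print(f"heard formant settles at {articulation + shift:.1f} Hz: "
      f"the {abs(shift):.0f} Hz disturbance is only partly opposed")
```

With equal weights the articulation shifts toward had by half the imposed disturbance, which is the qualitative pattern of partial compensation; changing the relative weights changes the proportion, and letting the articulatory reference adapt over time would make the compensation more complete, as suggested above.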

Dear Bruce,

(Sorry to be posting this here, I tried private-messaging but it did not work)

My name is Carmen Amo Alonso and I am a PhD student in Control Theory at Caltech. I am very interested in applying control theory to linguistics, and it seems that PCT is the way to go. I am excited to provide a mathematical framework for these ideas using standard control theory and to test them out computationally. I have read your book and these posts, and would love to discuss this further and in more detail. Please feel free to message me if you would like to discuss this any further! Very excited about this!

No problem with you posting here, Carmen, and welcome to the IAPCT Discourse forum. I’ll be glad to talk further outside the forum if you wish, but I hope you will have more to say here in the future. You can email

  • Bruce Nevin <bnhpct@gmail.com>
    I try to keep PCT-related email there. We can also use zoom or phone if appropriate.

Simulation and testing go beyond my computational chops (and evidently beyond my willingness to devote time to building programming skills), so your engagement is very important and exciting.

‘Standard control theory’ often incorporates complications that have not been necessary for modeling the behavior of living things, and that in some cases are not plausible. In many cases the set point is determined from outside the system, whereas the reference in a living thing is in general not accessible from outside. The ‘controller’ is sometimes conceived as containing a model of relevant aspects of the environment, or that plus a model of an observer. A crucial principle of PCT is adopting the point of view of the control system and its purposes. Control engineering typically takes the point of view of an observer or analyst, in which the control system is understood as serving purposes posited from that external point of view. Unwitting equivocation across points of view (organism, observer, experimenter, analyst, designer) has sometimes confused discussion.
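
For comparison, here is the bare-bones kind of loop PCT works with (illustrative numbers, not anyone’s published model): the reference enters from a higher-level system inside the organism rather than as an externally imposed set point, and there is no internal model of the environment, just input, comparison, and a leaky-integrator output acting through the environment.

```python
# Bare-bones PCT-style loop (illustrative numbers). The reference would come
# from a higher-level system; it is shown here as a constant for simplicity.
def environment(output, disturbance):
    """The controlled quantity is affected jointly by the system's output and a disturbance."""
    return output + disturbance

reference = 10.0              # in the organism, set by a higher-level system
gain, slowing, dt = 50.0, 1.0, 0.01
output = 0.0
perception = 0.0

for step in range(2000):
    disturbance = 5.0 if step > 1000 else 0.0        # arbitrary disturbance halfway through
    controlled_quantity = environment(output, disturbance)
    perception = controlled_quantity                  # input function: identity, for simplicity
    error = reference - perception
    output += (gain * error - output) * dt / slowing  # leaky-integrator output function

print(f"perception {perception:.2f} stays near reference {reference:.2f} despite the disturbance")
```

Whether such a loop is stable depends on the gain, the slowing factor, and the time step; that is where control engineering’s preoccupation with stability becomes directly relevant to simulation.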

A 1999 post by Mary Powers is pertinent. Mary reiterated the obvious fact that stability is a prerequisite for survival, and that the control systems that have evolved as and within living organisms, sans pathology, patently are stable. It is possible that the means of achieving stability in engineered systems mimic what has evolved, and in any case the deep preoccupation of control engineering with stability is necessary for a simulation.

I’m in the middle of a deep dive into the neurophysiology of the cerebellar system. Bill published a first essay at modeling this evolved system in Chapter 9 of Behavior: The Control of Perception (familiarly called B:CP). Knowledge of the neuroanatomy and of the details of functioning has advanced greatly since the publications of (Nobel Prize winner) Eccles and his colleague Ito in the late 1960s, but understanding lags, in part due to myopic attention to one or another part of this distributed system, in part because it is clouded by presuppositions. A deep dive certainly does mean being in over my head. I hope to distill this into a tight presentation (30 minutes) that more capable minds can pursue aright.

Bruce sent me an e-mail pointing to his discussion with you. It’s rather long, but he thought you might be interested in my PCT take on the initiation and maturation of language, as discussed in Part 6 of my book-in-progress (Chapters II.11 to II.15).

There’s nothing explicitly computational there, since I have tried to keep the book as free of mathematics as I could (not always possible). But I think the progression from newborn baby to mature communication is clear enough, and since some of it is built around a hypothetical family of artificial characters with well-defined perceptual and action abilities and limitations, it might be amenable to computational development and testing.
A lot of it, or at least the underlying thinking, is based on several papers written over the years since about 1984 on “Layered Protocol Theory”, which in about 1992 I discovered to be a special case of PCT involving two interacting players, either or both of whom might be human or silicon.

You can find the book at https://www.iapct.org/publications/books/powers-of-perceptual-control/. If you would like to discuss anything about this (or other) topic, I can be reached off-line at mmt-csg@mmtaylor.net.

Hi Martin,

Thanks so much for your message! I am certainly interested in a control-theoretic perspective on language acquisition. I am in the process of getting up to date with a lot of the PCT material (all of it new to me!), so I will make sure to read this part. From what you said, it seems that mathematical formalisms are not explicitly discussed but can be “easily” postulated and tested out computationally? I figure that the ideas presented in your chapter are mostly consistent with experiments on language development in children. Is there a particular aspect or an immediate next step that you think we can try to model and compare with experimental results? I believe this will (hopefully) make the case more compelling for people unfamiliar with PCT.

We can continue this conversation offline if you prefer. I would not want to clog up this discourse topic with all my questions if they are not relevant for this community. My email is camoalon@caltech.edu.

Thanks so much!

P.S. Thank you @bnhpct for the email!