Consider the ancient statement “you never step into the same river twice”. What does that mean, since I “know” that I could step into “Sandy River” on this year’s holiday and again into “Sandy River” on next year’s holiday?
When I go there the second time, for sure the water in the river will not be the same water as was flowing past my feet the first time, and I will not be the same as I was, if only because my stomach will hold the remnants of a different meal. But I perceive myself to be the same “me” and the river to be the same, albeit with perhaps a slightly eroded riverbank. “Me” and the “Sandy River” are labels for categories of slightly different instances, slightly different in details that contribute to my perceptions of myself and the river, but do not differ enough to cause me to perceive that the perceptions a year apart are of different entities.
The same is true of every controlled perception. The individual inputs to a perceptual function may change slightly from occurrence to occurrence, but the output of the perceptual function may not. So what does the output represent? In “classical” PCT it represents a magnitude of some perception, such as the brightness of a light, a separation between cursor and target in a tracking task, or something like that. It is a scalar value, whereas the set of all the perceptual values at that “level” together forms a vector.
If a perceptual function is a collector of “similar” input patterns, it is a category function, and its output is the degree of similarity between the pattern characteristic of the category to which it is tuned and the input to which it is currently exposed. Different kinds of categories define the various levels of the perceptual control hierarchy.
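The vector-in, scalar-out idea can be sketched in a few lines. This is only an illustration, not a claim about neural implementation: the prototype vector and the cosine-similarity measure are my assumptions, standing in for whatever tuned pattern and match measure a real perceptual function embodies.

```python
# Toy sketch: a perceptual function as a category recognizer.
# The prototype vector and the cosine-similarity measure are
# illustrative assumptions, not part of any PCT specification.
import math

def perceptual_function(inputs, prototype):
    """Map a vector of lower-level inputs to a scalar perceptual value:
    the degree to which the input pattern matches the tuned category."""
    dot = sum(i * p for i, p in zip(inputs, prototype))
    norm = (math.sqrt(sum(i * i for i in inputs))
            * math.sqrt(sum(p * p for p in prototype)))
    return dot / norm if norm else 0.0

edge_prototype = [1.0, -1.0, 0.0]  # tuned to a "bright-left" edge pattern

# A near-instance of the category yields a value near 1; a uniform
# field, which belongs to no edge category, yields a value near 0.
print(perceptual_function([0.9, -1.1, 0.1], edge_prototype))  # close match
print(perceptual_function([1.0, 1.0, 1.0], edge_prototype))   # no match
```

The point of the sketch is only that slightly different instances (here, noisy versions of the prototype) map to almost the same scalar output, which is what lets “the same” perception survive changing details.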
I don’t remember whether I ever discussed here the importance of lateral inhibition in perceptual control. It doesn’t exist in the plain vanilla version described by Powers because, as he said, although he was aware that it happens, he did not see how it fit into the simple eleven-level control hierarchy, or how it would improve the fit to actual nature. But even without treating perceptual functions inherently as category recognizers, lateral inhibition that forms flip-flops and polyflops to distinguish among category instances seems necessary (PPC I.9). It may be even more necessary in the development of basic perceptual functions, all potentially exposed to every one of the millions of inputs from sensors (in humans, at least).
There are trillions of trillions of possible patterns of inputs from these sensors, but very few close repetitions of patterns that involve dozens or hundreds of sensors. The visual categories identified by Hubel and Wiesel, for which they received the Nobel Prize, are detectors of categories such as “on-centre-off-surround” or “edge at a particular angle moving at a particular rate”. The outputs of these detectors are the basic inputs to low-level visual category recognizers, whose outputs may be controllable perceptions.
All of this is, of course, PCT heresy, but the recognition seems inevitable that perceptual functions, each of which produces scalar output from vector inputs, have to be category recognizers whose output perceptual values represent how closely their input pattern seems to belong to that category.
Hi Martin, I can’t really see any heresy here, except:
the use of the term ‘detect’ which implies reality testing against a factual reality rather than the construction of input functions to compare with past perceptions and the capacity, at a higher level, or within an intrinsic system, for constructing these input functions in order to reduce error; of course there is an actual reality but from experience going down the road of reality testing takes us away from PCT. I’m sure that wasn’t your intention and just the use of language…
the broadening of categorisation beyond the category level in the hierarchy. I would say that controlling for the categorisation of input is exactly what you describe, but not the controlling of some other function of a set of categories upstream (once categorised) nor the controlling of the inputs below that converge to contribute to the function that specifies that category at a higher level. I would say that the specification of a controlled variable - that does not need to be accessible to a symbolic label in order to be controlled - is not ‘categorisation’ as such. But PCT does need to pay closer attention to how the functions that specify controlled variables are developed and neurally implemented and I imagine that lateral inhibition, amongst a range of other weighted functional ‘building blocks’ is one of them. Your work will be a new Bible for PCT research going forward… Remind me of its publication status?
As for the term ‘detect’, Powers uses it quite extensively in B:CP, especially with the lowest level of perception. According to the glossary: “detect: A is said to be detected when a device creates a signal B such that B is an analog of A.” This probably means that a device (input function or sense organ) detects the strength of some external effect when it creates an analogical signal. In the definition of perception: “perception: A perceptual signal (inside a system) that is a continuous analog of a state of affairs outside the system.” Does it follow from this that if the external effect is somehow following a pattern (e.g. changing regularly), then the input function detects that pattern when it creates a continuous analog?
Of course all input functions and even sense organs have been developed and “constructed” in the evolutionary and life histories, but are they still developed to detect something?
When I think of the terms ‘detect’ and ‘construct’, they seem to form a dimension: in the lower levels perception is more like detection, and in the higher levels more like construction, but the higher levels never construct anything ex nihilo. They construct from something detected in the lower, and especially the lowest, levels. (Conversely, even the detection in the lowest levels is a kind of construction, because it must convert the external effect into a quite different form and medium.)
Hi Eetu, I think what you’ve said all makes sense. I think the fact that Bill went out of his way to explain what might be occurring in PCT terms when a signal ‘is said’ to be detected tells us that it is necessary to deconstruct these terms. My guess is that this is necessary to prevent them being used literally or without reference to their role in the control of input. I’m definitely not saying that perception arises out of nothing because that would miss out half the closed loop!
Yes, categorizing is what every perceptual input function does, in the sense of being “a collector of ‘similar’ input patterns”.
No. Heresy would be to set aside well established neurological and physiological findings as not relevant to PCT. Continue in that direction and PCT becomes irrelevant.
Lateral inhibition sharpens perception of edges in the field of intensity perceptions. This is an example of the higher-level perception having a categorial relation to its lower-level inputs. Even at this level ambiguity can be resolved one way and then another (a catastrophe cusp) depending on strengths of lower-level inputs. I understand that this is evident in changing perceptions of where edges lie as a configuration with textured surface or with light and shadow is rotated, translated, etc. and that resolution of ambiguous or ‘vague’ input depends not only on strengths of lower-level inputs but also on input requirements yet higher, whence imagination ‘filling in’ missing detail.
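The edge-sharpening effect of lateral inhibition can be shown with a toy one-dimensional intensity field. The inhibition coefficient and the field values here are arbitrary illustrations, not physiological parameters:

```python
# Minimal sketch of lateral inhibition over a 1-D field of intensity
# perceptions: each unit's output is its own input minus a fraction of
# its neighbours' inputs. The coefficient k = 0.4 is arbitrary.
def lateral_inhibition(field, k=0.4):
    out = []
    for i, x in enumerate(field):
        left = field[i - 1] if i > 0 else x    # replicate values at the borders
        right = field[i + 1] if i < len(field) - 1 else x
        out.append(x - k * (left + right) / 2)
    return out

step = [1, 1, 1, 5, 5, 5]  # an edge in the intensity field
print(lateral_inhibition(step))
```

Away from the edge the outputs flatten to 0.6 and 3.0, while the two units flanking the edge overshoot to −0.2 and 3.8: the contrast at the edge is exaggerated relative to the interior, the familiar Mach-band sharpening.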
But Bill proposed a Category level of perceptions intervening between lower-level perceptions with gradual values and higher-level perceptions with binary values.
Reading what he wrote and conversing with him about this my understanding is that he thought this was necessary in order to account for symbols and the appearance that language, logic, mathematics, etc. require that we control symbols as representations of other perceptions. A word, for him, was one among diverse inputs to a category perception (and could be input to more than one). He tacitly assumed that the relationship of a Program level to levels below would look something like the relationship between program code and the switching of bit values in a digital computer. I think it’s time for that digital machine metaphor to give up the ghost.
One immediate problem, for me, has been that I can find no subjective corollary of the instantiation of variables in a program, or of the existence of abstract variables except in a computer program or in an exercise in symbolic logic, algebra, etc. “For all x there exists a y”, etc. has a desiccated artificiality that attempts to ‘capture’ the fluid embrace of categorial ‘characteristics’ in a perceptual input function or associative invocation of a perception as memory or imagination. An icon or symbol ‘stands for’ or represents something, but in an algebraic or logical expression with uninstantiated variables x, y, etc. those variables represent nothing in particular.
At one stage, Martin, you were thinking of category perceptions as being ‘beside’ the perceptual hierarchy, operating at every level. This might be conceptually useful in a block diagram. Bill’s block diagram in Figure 15.3 in B:CP has a box labeled ‘MEMORY’ just above the memory/imagination switch, reference signal, and comparator, and neuroscience affirms that memory is stored at every synapse. Just as memory is distributed everywhere, so also categorizing occurs at every input function; both are pervasive, rather than localized in some module or level.
There is another basis for categorizing in the branching of a perceptual signal to diverse higher-level systems that might control it, and the contraparallel branching of error output to diverse lower-level reference input functions. In principle, one could do an associative cluster map for comparators on two levels showing all the branches from above to reference inputs below and all the branching of perceptual signals from below to above. A subset of comparators above and one below with many interconnections in common (as well as interconnections outside those subsets) would establish a kind of categorial kinship at the higher level (perceptions you control by these means) and at the lower (perceptions by means of which you control this or that or all of this subset).
In an algebraic or logical expression, the semantics of a function with variables x, y in its argument limits what those variables can represent, denote, or refer to. Analogously, a family of control systems whose sets of inputs and outputs intersect may limit what is plausible or ‘expectable’ in a novel situation…
We have no model of associative memory in PCT that I am aware of. A possible contributor to associative memory is the imagination connection, copying reference input to perceptual input in absence of perceptual input from below. Maybe that’s all there is to it, and the paleomammalian brain ever alert to challenges and opportunities keeps stirring the pot, strengthening those associative links through the cingulate gyrus and hypothalamus. Remove immediate need for motor control in the external environment and shut down those energy-hungry higher levels of control, then dream.
What does stand off to the side of the perceptual hierarchy is language, which subsists only as perceptions in the perceptual hierarchy, but among those controlled perceptions it is structured in ways that are not the same as the structure of the perceptual hierarchy. Are the structures of language perceived? Are they emergent in a self-organizing social process? Both? I think both, but not every user of language is consciously concerned with them so the question of their perceptual status is clouded. Certainly different schools of linguistics have different ways of representing the structures of languages, further obscuring matters.
Alluding and connoting are functions of associative memory in which one perception (an icon, symbol, word, etc.) evokes another from memory. Alluding and connoting are associative relationships that icons and symbols as well as words and constructs in language have to other perceptions (“other” meaning non-iconic, non-symbolic, or nonlinguistic, respectively). I’d have to brush up Peirce’s parsing of semiotics to be sure, but I think that denoting and referring are possible only to language and its more disciplined children (or perhaps younger siblings) logic and mathematics. Language, logic, and mathematics are complex perceptual constructs that no individual comprehends or employs in their entirety. Gregory Sampson says think of language as a tool, but not a tool like a screwdriver or a backhoe, but rather a tool like a steamboat understood and operated by many people who each grasp and use some but not all aspects. Wittgenstein and de Saussure used the analogy of games.
Analogies and metaphors are also processes of evocation from memory. They trade on categorial resemblances. Analogies and metaphors are not limited to language, and are more fundamental than signs, symbols, and language. Signs rest on analogy, symbols on metaphor, language uses both. There is strong evidence (discussion e.g. here) that sign language preceded and facilitated the development of spoken language. Many young parents now are using sign language to communicate with their prelinguistic infants, with beneficial consequences (see e.g. here, and references in this senior study).
If I had more time I’d make this shorter and more coherent, but other needs beckon.
All of this seems to me to be close to self-evident, but it is not in the common description of the canonical control loop, in which the perceptual variable is taken to be how much of this scalar environmental property is there, and in which the output influence on the environment alters how much of it there is.
The problem I see in describing PCT to a novice is that everything in the perceptual control hierarchy is built on this “How much” conception. Informationally, the issue is how to partition the variation in the sensors’ moment-by-moment outputs between metron and logon content (MacKay’s terms for metrical “how much” information and structural “what” information). The classical control hierarchy is pure metron, whereas describing perceptual functions as pure category detectors implies that the information is pure logon. Neither can be true taken to the extreme; some blend must be the case. “How much” and “What” must both be applicable.
Bill acknowledged that lateral inhibition existed, but said he ignored it because he could not see how it would be used. In PPC, I discuss its necessity and several of its effects in Chapter I.9, including everything that makes a “category level” both unnecessary and misleading. I use the effects of lateral inhibition throughout PPC after that chapter, and especially after I introduce the stabilities of autocatalytic loops and networks at the start of Volume II.
As for abstract variables, I can see them at many levels, such as the form of a perfect circle, the strength of the democratic urge in a political party, the enjoyment of a social party and the effort having been employed by the hostess of that party, not to mention the concept of “hostess”. How about the frequently discussed “taste of lemonade”, the roughness of a texture, or the variety of springtime green? Abstract variables seem to me to be more readily found, at least in conscious experience, than are concrete variables that can be located in physical space.
Abstraction vs. generalization. A category generalizes over its members. An ideal exemplar of a category, taken to be the ‘Real’ identity of that category is an abstraction. These are the two ways that philosophers and psychologists have taken for defining categories: more or less close to the ideal exemplar vs. generalizing over common attributes, centripetal vs. centrifugal. The flip-flop model embraces inputs centripetally.
Abstract variables in this sense vs. uninstantiated (uninterpreted) variables within an algebraic or logical expression. Computer programs are applied mathematical logic. It is this kind of variable that I was referring to, and their distinction from categorizing in the perceptual hierarchy or putative category perceptions is an important part of why the computer metaphor is wrong.
∀(x) human(x) ≡ mortal(x)
The x repeated here is not an abstraction, it is a metalinguistic index asserting that the argument of the universal quantifier ∀, of the predicate human, and of the predicate mortal is ‘the same’.
The predicates human and mortal can each be claimed to be the name of a category. By this route, the name of a category can come to be called ‘an abstraction’. That is probably the basis for the assumption that a category is best defined by approximation to an ideal exemplar. We do like to attribute Reality to our talk.
Variables like « a political party, the strength of the democratic urge in same, a social party, the enjoyment of same, the effort having been employed by the hostess of same, the concept of “hostess”» are perceptual variables that we can talk about and claim to experience, and another person can say ‘I know what you mean’ (or not), but difficult to point to and mutually affirm experiencing ‘the same’ perception. Variables like « the “taste of lemonade”, the roughness of a texture, the variety of springtime green » are perceptual variables more amenable to ostensive demonstration and mutual affirmation of intersubjective agreement.
That’s why we think that levels of the perceptual hierarchy differ in abstraction. But even an intensity reported into the perceptual hierarchy at the periphery of the nervous system is an abstraction in this sense, and incidentally, because of its extreme localization, is impossible for two individuals simultaneously to experience other than by generalization over perceived properties of the shared environment.
The levels of abstraction that bothered Korzybski are linguistic. A noun that is derived from a verb, adjective, etc. is more abstract than a ‘concrete’ noun. The flight of the bird is more abstract than the bird.
We seem to have different ideas about the meaning to us of the word “abstraction”. Apart from its ability to provide a ground for continuing intellectual dispute, which I suspect neither of us is controlling for, I see little value in arguing about what concepts are and are not abstract.
A better question, because resolvable in principle, may be where you started, in the perception of algebraic operations, and how they might fit into the hierarchy, and of language in general, of which I see algebra as a form.
Here we come into the distinction between consciously experienced, everyday, “perceptions” and the non-conscious controlled variables of the classic hierarchy. Algebra is “logical”, “rational”, and clear-cut. Perceptions reorganized into the control hierarchy are not. They are categories with no intrinsically clear boundaries, even after sharpening by lateral inhibition. It is hard (for me) to find lateral inhibition (not logical inconsistency or alternation) anywhere in rational logical thought of the kind on which most mathematics is based.
Most clearly, although its frequently used formulae may become like common words in a language in which one is fluent, and be used in communication as components of more complex structures, nevertheless, the performance of algebra does what the non-conscious hierarchy does not do — produce novel relationships. Under what (to me) inconceivable circumstances would one learn to perceive as a category e^(sqrt(-1)*PI) = -1? But with the right basic assumptions of the underlying mathematics (all rationally produced), it becomes an obvious truth.
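The identity mentioned above can at least be checked numerically, which makes the point that it is a consequence of the underlying assumptions rather than a category anyone learns to perceive directly. A minimal check using Python’s complex-math library:

```python
# Numeric check of Euler's identity e^(sqrt(-1)*pi) = -1,
# the formula discussed above.
import cmath

value = cmath.exp(cmath.sqrt(-1) * cmath.pi)
print(value)  # -1 plus a tiny imaginary residue from floating-point rounding
assert abs(value - (-1)) < 1e-12
```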
So far as I can see, none of this has anything to do with “abstraction” unless “abstraction” includes that “x” in an equation can be considered as an abstraction of the right kind of number with the right topological group properties. But it does have to do with conscious thinking, as opposed to control of perceptions in a reorganized hierarchy. The relation between conscious thinking and reorganization of the perceptual control hierarchy is a matter I’m trying to approach from a different angle in the new and evolving “Narrative Thought” chapter II.10 of PPC.
Yes. Mathematical and logical formulae are always ‘read out’ in ordinary language (whether or not some of the more brief expressions might be recognized at sight).
With thanks to my friend Tom Ryckman (his 1986 dissertation), here’s Émile Borel, speaking in 1908 of some then-current problems of the ‘transfinite’ in mathematics:
«Je ne m’égarerai pas en discussions métaphysiques sur le sens du mot ‘indéfiniment’: que l’emploi de ce mot soulève des difficultés pour les philosophes, c’est un fait sans importance pour les mathématiciens: il leur suffit de savoir qu’ils s’entendent parfaitement entre eux sans craindre aucune ambiguïté. Lorsqu’un de nous dit qu’il considère la suite naturelle des nombres entiers, chacun comprend, et est assuré de comprendre la même chose que son voisin; c’est évidemment là le seul critérium possible de la validité d’un langage, celui auquel on est toujours forcé de revenir. Car les prétendus systèmes entièrement logiques reposent toujours sur le postulat de l’existence de la langue vulgaire; ce langage commun à des millions d’hommes, et avec lequel ils s’entendent à peu près entre eux, nous est donné comme un fait, qui impliquerait un grand nombre de cercles vicieux, s’il fallait le créer ex nihilo.»
Borel, E. (1928). Leçons sur la théorie des fonctions. Troisième ed. Paris, Gauthier-Villars et Cie.
“I will not stray into metaphysical discussions on the meaning of the word ‘indefinitely’: that the use of this word raises difficulties for philosophers is a fact of no importance for mathematicians: it is enough for them to know that they understand each other perfectly, without fear of any ambiguity. When one of us says that he is considering the natural sequence of the whole numbers, everyone understands, and is assured of understanding the same thing as his neighbour; that is obviously the only possible criterion of the validity of a language, the one to which we are always forced to return. For the so-called entirely logical systems always rest on the postulate of the existence of ordinary language; this language common to millions of people, with which they understand each other more or less, is given to us as a fact, which would involve a great many vicious circles if it had to be created ex nihilo.”
Thanks for the quote, but I am not sure I agree that mathematical formulae are always or even usually read out in ordinary language, except possibly when communicating verbally with a mathematician of whose background knowledge the talker is uncertain. In written communication, the formulae would normally be used untranslated.
:::::::::::::::::::::::::::::
Going back to the thread title, one aspect of recognizing that perceptual functions define categories when they are sharpened by lateral inhibition is that our (my) long held assumption that the perceptual hierarchy builds only from the bottom up becomes untenable. Categories can be refined as well as used as building blocks.
Eetu pointed this out to me, and he may want to rejoin this thread. I initially opposed his idea of building down, but he persuaded me, and I recognized that I had used exactly that principle in the well reviewed but little noted “Psychology of Reading” (by my wife and me, Academic Press 1983, PDF available on request) in what I called “Three-Phased Learning” — learning gross patterns such as frequent words, polygrams and phrases, then learning that they are built from common smaller units that are reused in the same way in other words. In pretty well all languages I know of, the re-usable smaller units would be syllables, and in some the syllables can be formed from frequently used morphemes, letters, or phonemes.
But the baby learning to talk knows nothing of this breakdown, not least because s/he hasn’t learned enough language units to allow commonalities to emerge from the statistical noise. Most people who grow up hating mathematics do so (I surmise) in part because they don’t perceive the categorical relations among processes (I guess that’s formalized as group theory) and don’t need them for everyday communication. Maybe they also did not like their early math teachers?
Anyway, the refinement of categories downward, as well as their combination upward, allows the control hierarchy to grow from some middle both up and down. I see an analogy with the recognition that negative integers imply the existence of processes in the environment such as the taking on of repayable debt. Category refinement likewise depends on differences in the actions that best suit control of the resulting perceptual values. You can lead a cow out of a field, but you can’t do that with a negative cow (one your neighbour owes you).
MT: “Eetu pointed this out to me, and he may want to rejoin this thread.”
Thanks Martin for invitation. I will share now one ambiguous and very unfinished idea which I think could be somehow connected.
Think about the brain and nervous system of a newborn baby. We may easily think that only the lowest hierarchical level is working, as genetically inherited. If so, what do the other neurons do? Are they just waiting as a passive reserve until the reorganization system takes some of them into use as second-level systems, and then later others as third-level systems, and so on? If I am right (correct me please if not), all the neurons in the baby’s brain are very richly connected to each other, forming a dense network – much denser than in an adult’s brain. This could mean that most of the time, whatever the baby perceives/controls, the whole (or most of the) neural network is taking part in the activity.
So there is already a “hierarchy” from bottom to top – with one overall goal: to live my life – but it is homogeneous and fuzzy and cannot yet do much in the new environment. The newborn has some genetically innate dispositions or reflexes, but mainly its nervous network has developed – grown and reorganized – in the womb’s quite undisturbed environment, according perhaps to just two principles: maximize internal connections and minimize internal conflicts.
The new and more hostile environment causes both intrinsic and extrinsic (perceptual control) errors and thus starts a new kind of reorganization, which is sometimes described as reduction of complexity. The continuous and more or less homogeneous network starts to tear into more independent parts, vertically and possibly also horizontally. (I am a bit sceptical about the fixed levels of hierarchy and like to think of it more as just “configurations, and configurations of configurations, all the way up”.) There are many more disconnections than connections. (Possibly these disconnections can be partially explained by lateral inhibition and anti-Hebbian learning.) The reorganization processes can proceed in different directions, like the forest-fire front lines as Martin somewhere metaphorically describes them. (Here is a practical paradox: every new structure is bound to cause internal conflicts in the whole system, causing distress – cf. Plooij’s results.)
Just as a child first learns words and only later to divide them into syllables and letters, a baby first learns large and clumsy movements and positions and only later refines the fine motor skills, which can then be used also in other movements and positions. So learning new perceptions can happen both upwards and downwards – but if we accept my suggestion of the (possibly in a way virtual) highest goal as something like “live my life”, then it will always happen in the middle, proceeding more or less both top-down and bottom-up.
BN: “Some systems intermediate in the hierarchy may be genetically innate, as well as at lowest levels.”
Genetically innate intermediate systems are very possible, and innate reflexes like the Moro and grasping reflexes are intermediate. But for me more important is that first some higher-level system can be learned (more) bottom-up, and then later new systems develop below it in a top-down way.
MT: “Most clearly, although its frequently used formulae may become like common words in a language in which one is fluent, and be used in communication as components of more complex structures, nevertheless, the performance of algebra does what the non-conscious hierarchy does not do — produce novel relationships.”
Also the use of ordinary language produces novel relationships, like in a way also music and other arts. The most important signifying units of languages are sentences and still larger wholes like paragraphs, chapters and books – narratives in a word. We know that the meaning of a word depends on its context in the signifying whole (and also on the pragmatic context). So how is understanding of the longer than a word expressions? Is there is sequence-level perceptual system for every possible expressions?
Sorry, I was too busy to send the message so the last sentences became a mess. There should read something like:
So how is it possible to understand expressions that are longer than a word? Must there be a separate sequence-level perceptual function for every possible expression – an infinite number of them? Must we create a new input function for every novel relationship of words that we read or hear? That would make understanding very slow – as it is for me in English, but not so much in Finnish. The difficulty with a foreign language seems, though, to be with the words and not so much with the relationships between them.
A timely comment, as I am currently working on “narrative” and reorganization, and you point out that just as in the “classical” perceptual control hierarchy there is a hierarchy of conscious narrative structures, at least in English and (I presume) Finnish.
In my analysis, a “narrative fragment” is a basic unit, just as a perceptual function is in the control hierarchy. In the non-conscious control hierarchy, what corresponds to a conscious narrative fragment is an event (I do not use that word as a descriptor of the Powers “event level”). An “event” consists of a perceived change in the value of at least one perceptual variable, either because of some environmentally caused disturbance or because of an action by the perceiver. A “narrative fragment” is the conscious experience of an event in the non-conscious hierarchy. Language doesn’t matter unless you want to communicate with someone else (or form a structure that you might later use for communication).
Right now, I am working on linking narrative hierarchic structures to reorganization of the non-conscious control hierarchy.
When logic leads into absurdity, look for false premises or (in this case) additional premises which have not been considered.
Language has its own structure which is maintained collectively.
For any given word, there is a field of other words about which it may be said and a different field of words that may be said about it. Mathematically, these are operators with arguments which may in turn be operators. They fall into classes: N with zero argument requirement (primitive ‘concrete’ nouns are not asserted about any other word), words that require one or more N words in their argument (e.g. On ‘fly’, Onn ‘eat’, Onnn ‘give…to’, perhaps Onnnn ‘push … from … to’), those that require one or more O in their argument (e.g. Oo ‘slow’, Ooo ‘cause’), and those that require N plus O in their argument (e.g. Ono ‘expect’, Oon ‘surprise’, Onoo ‘imply, infer, deduce’).
As each operator is asserted of its argument word(s), there are alternate ways to linearize them relative to one another. In the new context a repeated word may be reduced in form, even to zero phonemic content, and there are other conditions in which reductions may take place with no loss of information. It is in these reductions (as well as in the particular forms of vocabulary) that the idiosyncratic differences between languages arise. The operator-argument system has been found in every language that has been examined from this point of view.
Each operator with its arguments (and their arguments, down to N) is an assertion which has its associations with non-verbal perceptions. As more operator-argument relations are included, the field of plausibly associated nonverbal perceptions is reduced.
The assumption that each word separately has its nonverbal associations is, I think, incorrect. To take the limiting case, when words of the N class are associated with nonverbal perceptions apart from being in the argument of any operator, it is somewhat exceptional and, I think, comes with some feeling of abstractness.
That abstractness is a function of potential operators over the given N being controlled in imagination. Within each class there are classifier words (the IS-A relationship of early AI and ‘knowledge representation’ systems). Those that state taxonomies of N are most familiar; those among operators O are less so (in those reductions that have the appearance of nouns: ‘flight is an action’, and so on). The classifier words state overtly the clustering of operators whose argument sets intersect, and of words whose operator sets intersect. The abstractness of reference of an N word without any explicit operator over it lies in an implicit invocation of the clustering of operators that can be asserted of it, and further of their co-arguments and alternative arguments, and yet further of the ranges of operators over them. A lot of associative memory is structured on this basis.
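The clustering mentioned here can be illustrated with a small sketch (the words and operator sets below are hypothetical data of my own): N words group together when the sets of operators that can be asserted of them overlap heavily, which is the implicit basis a classifier word makes overt.

```python
# Hypothetical operator sets: for each N word, the operators that
# can plausibly be asserted of it.
operators_over = {
    "cat":   {"sleep", "eat", "purr"},
    "dog":   {"sleep", "eat", "bark"},
    "stone": {"fall", "erode"},
}

def overlap(a: str, b: str) -> float:
    """Jaccard overlap of the operator sets of two N words."""
    sa, sb = operators_over[a], operators_over[b]
    return len(sa & sb) / len(sa | sb)

# 'cat' and 'dog' share operators, so they cluster together under a
# classifier ('cat is an animal', 'dog is an animal'); 'stone' does not.
assert overlap("cat", "dog") > overlap("cat", "stone")
```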
[This has been revised and somewhat expanded since first posted.]
This is interesting and important. Your reply is quite dense. Could you suggest some more extensive introduction?
I understand that this is the same question that Aristotle approached in his theory of predication.
But I do not think this answers my question: *To understand (and use) any linguistic expression of the “Onnn” or “Ono” or whatever form, do we need to have or create a respective input function for just this expression?*
For example, let’s assume that we have input functions for the words “cat”, “dog”, “sit”, “on”, and “mat”. Do we in addition have (to have) a hierarchically higher input function for the expression “cat sits on mat” and a different one for the expression “dog sits on mat”?
It resembles predicate logic, Aristotle’s ‘prior analytics’. However, logic is a discipline imposed on language, permitting only forms of argumentation that guarantee the truth of conclusions if the premises are true. (One false premise, and logic can’t say whether the conclusions are true or false.) Others in the same social environment as Aristotle and his teacher Plato developed rhetoric, forms of argumentation that are concerned with persuasion rather than truth. The mechanisms of logic are also cranked backward to disclose false premises for empirical investigation, which gets into the third discipline, Aristotle’s ‘posterior analytics’, and epistemology. There can be no doubt that all three branches have historic and prehistoric antecedents all over the world.
Logic has an a priori character due to its metalinguistic basis, whereas this result about language and information was arrived at empirically from analysis of the data of language. It accounts for the forms of language and their informational character. It is a theory of language and information on a mathematical basis which has been tested on some forty languages. Grammar and a theory of information says nothing about whether the information of an utterance is true or false. A false statement embodies false information; it’s still information. The analysis of argumentation in science is still on the docket for this approach.
Computational work by Stephen Johnson in large corpora of texts has shown that the appearance of the word classes is emergent from networks of word dependencies, similar to the association networks and communication networks of sociological analysis. The word classes are useful as a descriptive tool, but for that very reason they are likely to be misleading, and talk of perceptual input functions for them is especially misleading.
I do not believe that there is a category level with, for example, a category Onnn which takes perceptual inputs from the words give…to, take…from, look…for, and so forth. Each word has its argument requirement. That they can be described as members of these classes N, Oo, etc. is a descriptive fact, not an antecedent cause. The appearance of these classes arises from the associations of nonverbal perceptions with the word dependencies in discourses. There seems to be a circular argument here, until you factor in cross-generational time. For every infant, the language they learn is a pre-existing fact of their environment. The question of first cause then goes back to the evolution of language from pre-language. If that’s a concern I’ll scan some relevant pages.
When a given word is recognized, some words are more strongly associated with it than others are as arguments under the word (or as operator over it). The strongest operator-argument associations with contiguous or nearby words are those that are controlled, in a ‘pandaemonium’ process implemented by flip-flop cross-connections. The recurrence of words and word-pairings, which is what makes discourse coherent, strengthens some signals at the expense of others, and this strengthening of fewer and fewer alternatives is cumulative during the course of hearing or reading in a given topic. This is sketched in my chapter in the Handbook.
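The cumulative strengthening described above can be sketched in a toy form (this is my own illustration, not the Handbook model): each recurrence of a word pairing in a discourse boosts that association while competing signals decay, so fewer and fewer alternatives survive as the topic continues.

```python
# Toy sketch of cumulative association strengthening: recurring
# word pairings win at the expense of the alternatives.
from collections import defaultdict

def strengthen(discourse_pairs, decay=0.9):
    """Accumulate association strengths over a stream of word pairs."""
    strength = defaultdict(float)
    for pair in discourse_pairs:
        for key in strength:
            strength[key] *= decay      # competing signals fade
        strength[pair] += 1.0           # the recurring pairing is boosted
    return dict(strength)

pairs = [("bird", "fly"), ("bird", "fly"), ("plane", "fly"), ("bird", "fly")]
s = strengthen(pairs)
# the repeated pairing dominates the one-off alternative
assert s[("bird", "fly")] > s[("plane", "fly")]
```

The decay-plus-boost loop is a crude stand-in for the mutual inhibition of the flip-flop cross-connections; the point is only that coherence emerges from recurrence, not from a fixed class hierarchy.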
No. Each word has its input requirements. The appearance of hierarchy is emergent from their use, and from associations established over a lifetime of use. In each language we have socially established conventions for reducing and streamlining a construction of word dependencies, eliminating or compressing repetitions and things that need not be said because they are obvious. It is here that almost all of what we call ‘grammar’ appears, the conjugations, declensions, paradigms, pronouns, subordinate clauses, attributive phrases, and so forth. This may obscure the word dependencies, and may introduce ambiguities, but the dependencies are all recoverable. As the reductions are all regular, they may each be undone, restoring the dependencies in a regular way.
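The claim that reductions are regular and therefore reversible can be illustrated with a toy case (my own example, far simpler than any real grammar): in a coordination “X A and X B”, the repeated word X may be reduced to zero, and because the rule is regular the full dependency is recoverable.

```python
# Toy sketch of one regular, invertible reduction: zeroing a
# subject repeated immediately after 'and'.

def reduce_repeat(words):
    """X A and X B -> X A and B; returns (reduced form, zeroed word)."""
    if "and" in words:
        i = words.index("and")
        if i + 1 < len(words) and words[i + 1] == words[0]:
            return words[:i + 1] + words[i + 2:], words[0]
    return words, None

def undo(words, zeroed):
    """Restore the zeroed repetition in a regular way."""
    if zeroed is None:
        return words
    i = words.index("and")
    return words[:i + 1] + [zeroed] + words[i + 1:]

full = ["cats", "eat", "and", "cats", "sleep"]
reduced, zeroed = reduce_repeat(full)
assert reduced == ["cats", "eat", "and", "sleep"]
assert undo(reduced, zeroed) == full   # the dependency is recoverable
```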
My chapter is linked above. You might glance at each of these superficially at first to see which of them seems most accessible to you: