Bruce Nevin (2017.10.24.16:33 ET)–
But there is some “naturalistic” evidence that people come equipped with certain types of built-in perceptual functions that are used as the basis of learning new perceptions of that type. Perhaps the most obvious example is language, where people learn to perceive and control the words and grammatical structure of the language into which they are born – a language with very different words and grammar from the others that the kid might have had to learn. Chomsky, I believe, suggested that this was because all humans come with a mental capacity – called a language acquisition device (LAD) – that allows them to learn the particular language of their group. In PCT, this LAD is the built-in sequence (word) and program (grammar) type perceptual functions.
BN: I have been inveighing against Chomsky’s LAD and its congeners for 26 years on this forum. Universal Grammar, innate language acquisition device, and kindred notions are neither necessary nor sufficient in an account of grammar, and too powerful to support a competent theory–akin to saying “God did it”. For a glimpse of the absurdities in Chomsky’s views, consider pp. 4-5 of the review at
http://brooklynrail.org/2016/09/field-notes/understanding-the-labyrinth-noam-chomskys-science-and-politics
BN: All that is biologically necessary is the cognitive capacity to control complex dependencies among different kinds of perceptions (some of them those constituting language), and the physiological capacity to control a sufficiently large system of phonological contrasts
RM: That’s basically what I said: in order to be able to control complex dependencies (as in grammars) among different types of perceptions you have to be able to perceive these complex dependencies; that is, in order to control programs, like grammars, you have to be able to perceive programs.
RM: I believe that the neural architectures that make it possible to perceive programs are built in by evolution. I don’t believe it is possible to develop these architectures in the 3 or 4 years that it takes for kids to learn to talk. So I agree with Chomsky that humans almost certainly come equipped with an innate ability to do language. He calls this innate ability a language acquisition device (LAD), but this says little more than that humans have an innate ability to learn language.
RM: PCT is a bit more specific about what it is that is innate in humans that makes it possible for them to learn language. PCT says it’s the ability to perceive programs – particular networks of contingencies between lower level perceptions. That’s why I say that the ability to perceive (and therefore control) programs is the PCT equivalent of the LAD. The PCT version is a bit more specific about what makes it possible for humans to do (control) language and why even our cousin primates can’t do it (not very well, anyway).
RM: I believe PCT would say that the evolutionary brain change that made complex language communication possible was the development of the brain structures that made it possible for humans to perceive (and thus control) programs. Of course, the ability to perceive programs makes it possible to perceive and control any program, not just language. So I believe that once these program perception structures evolved, humans became capable of not only doing language – any language – but also complex activities that require the ability to control for the occurrence of a particular network of contingencies, such as controlling for the logistics (a network of contingencies) involved in hunting in bands or building skyscrapers.
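RM: Just to make “network of contingencies” a bit more concrete, here is a toy illustration (a sketch in Python; the particular contingencies and names are invented, not a model of any real grammar): a program perceiver reports 1 as long as the lower-level perceptions it is watching keep satisfying an if-then structure, whatever the particular inputs happen to be.

# Toy illustration of a "program" perception: a network of contingencies
# among lower-level perceptions. The contingencies here are invented.

def program_perceiver(events):
    """Return 1.0 if successive events follow the contingency network:
    if 'greeting' occurs, 'reply' must follow; otherwise 'topic' must follow."""
    for current, following in zip(events, events[1:]):
        expected = "reply" if current == "greeting" else "topic"
        if following != expected:
            return 0.0        # the program was violated
    return 1.0                # the program is perceived to be running

print(program_perceiver(["greeting", "reply", "topic"]))   # 1.0
print(program_perceiver(["greeting", "topic"]))            # 0.0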
Best
Rick
BN (continuing): (differences of sound/articulation that make a difference between one word and another). Higher primates may have the former, but they lack the latter. No one has yet discerned regularities in the utterances of cetaceans that might function as words. I emphasized the phrase “biologically necessary” above because language is a product (and a means) of collective control over many generations of people living in communities.
As to Bill’s “juice” example in B:CP, our alphabetic habits prejudice us. The requirements for written records, even for phonemic writing, are not the same as the requirements for speech. It is not at all clear that the perceptions that are controlled as means of uttering a word like “juice” correspond to letters or to segmental phonemes, and there is a considerable body of work contradicting that presumption.
/Bruce
–
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away.�
                --Antoine de Saint-Exupery
On Thu, Oct 12, 2017 at 5:33 PM, Richard Marken rsmarken@gmail.com wrote:
[From Rick Marken (2017.10.12.1730)]
Rupert Young (2017.10.11 10.00)
RY: Are we talking about the same thing, how would an agent learn to
control a new perception?
RM: I think we learn to control new perceptions, not new types of perceptions. For example, we learn to control new word perceptions, where the specific words we learn depend on the language community into which we are born. What I believe is built in by evolution are the perceptual functions that let us learn to perceive and, thus, control these new word-type perceptions.
RM: An example of the type of perceptual function that might be built in to perceive words is shown in Figure 11.3 in B:CP. In that figure the circuit that implements the perceptual function produces a specific word perception (“juice”). What I think is built in is the sequence of reverberation loops that make it possible to perceive any new word. What would be learned is which phoneme-type input perceptions should go into such a sequence-perceiving network to produce a new word perception.
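RM: Roughly, what I have in mind is something like this (a toy sketch in Python, not the neural circuit in Figure 11.3; the class, the phoneme labels, and the word coding are only illustrative). The chain of sequence-detecting stages is given; the only thing learned is which lower-level (phoneme) perceptions feed the stages.

# Toy sketch of a fixed sequence-detecting perceptual function.
# The staged structure is built in; which phoneme perceptions feed it
# is what gets learned (labels and coding are illustrative only).

class SequenceDetector:
    def __init__(self, expected_inputs):
        self.expected = expected_inputs   # learned phoneme labels, in order
        self.stage = 0                    # which stage of the chain is active

    def step(self, phoneme):
        """Return 1.0 when the whole sequence has just been perceived."""
        if phoneme == self.expected[self.stage]:
            self.stage += 1
            if self.stage == len(self.expected):
                self.stage = 0
                return 1.0                # word perception signal
        else:
            self.stage = 0                # a wrong input resets the chain
        return 0.0

juice = SequenceDetector(["j", "u", "s"])       # "learned" inputs for "juice"
for p in ["j", "u", "s"]:
    signal = juice.step(p)
print(signal)                                   # 1.0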
RY: Are you aware of any specific research to
support this?
RM: Not really. The notion that the world of experience is made up of a hierarchy of different types of perceptual variables is, as far as I can tell, unique to the PCT model of behavior. But there is some “naturalistic” evidence that people come equipped with certain types of built-in perceptual functions that are used as the basis of learning new perceptions of that type. Perhaps the most obvious example is language, where people learn to perceive and control the words and grammatical structure of the language into which they are born – a language with very different words and grammar from the others that the kid might have had to learn. Chomsky, I believe, suggested that this was because all humans come with a mental capacity – called a language acquisition device (LAD) – that allows them to learn the particular language of their group. In PCT, this LAD is the built-in sequence (word) and program (grammar) type perceptual functions.
RY: Are "types" really anything more than a useful way for
an observer to classify perceptions (akin to “races”), which are
just forms of the general principle of output of perceptual
functions?
RM: Perhaps. But they are a central hypothesis of the PCT model, a hypothesis that people might go out and start testing if they could get over their inclination to study controlling as though it were the behavior of a sequential-state S-R device. I have done some research testing the notion of hierarchical levels of different types of perceptual variables. Some of it is described in the attached paper; you can demonstrate it to yourself in this demo: http://www.mindreadings.com/ControlDemo/Hierarchy.html
RY: Anyway, it would still be necessary to learn specific perceptual
functions within those types wouldn’t it?
RM: I think what would mainly have to be learned is what inputs go into the function. I just can’t believe that a function that produces, say, the perception of a grammatical sentence in some language, can be constructed from scratch in a few years. I think the functional connections are there (as in the sequence-detecting function in Figure 11.3 of B:CP); what must be learned are the inputs to the functions.
RY: For example, is a baby
born with the perceptual function which provides the ability to
perceive the word “rambunctious”?
RM: Yes, I think so.
RY: With usual reorganisation, changes to the system parameters move
the system closer to being able to control more efficiently
(intrinsic error reduces). But what would be changed in this case?
If the parameter being changed is the address, then the change could
result in a different memory being accessed that has nothing to do
with what was being controlled. So how would reorganisation work in
this case?
RM: Actually, if you look at the diagram of the memory addressing system proposed in B:CP you will see that what is being addressed is the reference signal to a lower level system that is part of the means used by the higher level system – the one sending the address signal – to control its perception. So reorganizing the way the reference for the lower level system is addressed is functionally equivalent to reorganizing the parameters of the output function of a control system as the means of getting it to control better.
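RM: To make the functional equivalence a little more concrete, here is a rough sketch (Python; the variable names, numbers, and address rule are mine, not Bill's, and are only illustrative): the higher system's output acts as an address that retrieves a stored reference value for the lower system, so changing how the address is generated changes how the lower system is used, just as changing output-function parameters would.

# Rough sketch of memory addressing of a lower-level reference signal.
# Stored reference values, the address rule, and all numbers are invented.

memory = {0: 0.0, 1: 5.0, 2: 10.0}     # stored lower-level reference values

def higher_level(perception, reference, gain=1.0):
    error = reference - perception
    address = min(len(memory) - 1, max(0, round(gain * error)))
    return memory[address]              # the addressed value becomes the
                                        # lower system's reference signal

lower_reference = higher_level(perception=3.0, reference=5.0)
print(lower_reference)                  # 10.0 with these illustrative numbers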
RY: As I understand Bill’s arm reorg system it is about the
strength of the output of functions. It starts off with random
weights (gains) on output connections to 14 lower systems. Through
reorganisation the strengths of those weights change with 13
reducing relative to the one that has effects that result in better
control.
RM: I believe that in that model the 14 weights were the weights of an impulse response function that constitutes the output function of the control system being reorganized. The values of all 14 weights were varied randomly based on the size of the error in the control system. The result was a nice negative-exponential-shaped impulse response function that was continuously convolved with the error signal to produce the output that produces the best control (lowest time-varying error). Reinforcement (strengthening) would not have worked and was not involved.
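RM: In outline, that reorganization step looks something like this (a Python sketch of the E. coli-style scheme as I understand it, not Bill's actual code; all names, sizes, and constants are illustrative):

import random

# Sketch of E. coli-style reorganization of the 14 output-function weights:
# every weight drifts by a small amount each step, and the drift directions
# are re-randomized ("tumble") whenever control error stops improving.

weights = [random.uniform(-1, 1) for _ in range(14)]     # impulse response
deltas = [random.uniform(-0.1, 0.1) for _ in range(14)]
last_error = float("inf")

def reorganize(current_error):
    global last_error, deltas
    if current_error >= last_error:                      # no improvement
        deltas = [random.uniform(-0.1, 0.1) for _ in range(14)]   # tumble
    for i in range(14):
        weights[i] += deltas[i]                          # keep drifting
    last_error = current_error

def output(error_history):
    # convolve the (reorganized) impulse response with the recent error signal
    recent = list(reversed(error_history[-14:]))
    return sum(w * e for w, e in zip(weights, recent))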
RY: Therein lies the rub! I don't see that this can be done manually
except for some basic functions, so some form of learning would be
required.
RM: The “givens” are presumably the different types of perceptual variables that would be found, via PCT research, to be the types that are controlled by humans. Powers hypothesized what these “given” types would be in B:CP. Of course, once these types are identified it’s going to be a difficult job trying to figure out how to build the perceptual functions that implement them.
RY: I'm also not convinced that "type" is a meaningful term
except to the observer. After all what makes one function different
from another apart from the variable it is controlling?
RM: OK, so you are not convinced of the correctness of the PCT model of behavior as the control of a hierarchy of different types of perceptual variables. And you shouldn’t be, since it is still almost pure speculation and has hardly been tested at all. And it won’t get tested until a lot more researchers quit studying control systems as S-R devices and start studying them as what PCT says they are: perceptual control systems.
RY: I did that a couple of years ago here, https://www.youtube.com/watch?v=QF7K6Lhx5C8
RM: That’s terrific. Could you send me the code for that when you get a chance?
RY: But that is on the output side. I think similar learning is also
required on the input side, to learn perceptual functions.
RM: I agree that we learn to perceive things; I just believe that the structures that allow us to learn the new perceptions are given. But I think you can build a simple demonstration of perceptual learning using what may be one of the simplest perceptual function “givens”: a weighted linear combination of inputs. So how about this: build a perceptual function for the balancing robot that is a linear combination of two inputs, orientation to gravity (the gyro sensor) and orientation to visual upright (if you can get it). Have the system reorganize the weights of this perceptual function until control of this perception is as good as you can get it. See whether the result is that the perception becomes all gyro, all visual, or some proportional combination of both.
RY: Do you
see any reason why that is not practical within living systems,
given that we have years of development available to us to build up
these perceptual functions? To learn something new we first need to
learn to perceive it before (or at the same time) we can improve the
performance.
RM: Of course, we learn new perceptions; I just don’t think we learn the perceptual functions that create these perceptions. The simple perceptual function I suggested is the kind of “given” I am imagining: a weighted sum like p = a*g + b*v. So the learning involves varying the weights, a and b, of the existing perceptual function. I think the existence of a functional architecture like this is even more essential to learning things like programs (grammar) and principles (of good writing style, for example). I think it’s unlikely that the nervous system could develop, through random trial and error, the neural architecture for a perceptual function that would perceive, for example, the degree to which you have “control of the center” in chess in the few years it takes for a reasonably bright child to learn to perceive this principle.
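RM: For the gyro/visual example, the reorganization of a and b might look something like this in outline (a Python sketch; the control_error() function is a made-up stand-in for actually running the balancing controller and measuring its error):

import random

# Sketch: reorganize the weights a, b of the perceptual function
# p = a*g + b*v by E. coli-style trial and error. control_error() below
# is an invented stand-in for running the real controller.

def control_error(a, b):
    return (a - 0.7) ** 2 + (b - 0.3) ** 2   # invented "best" weighting

a, b = random.uniform(0, 1), random.uniform(0, 1)
da, db = 0.05, 0.05
last_error = float("inf")

for _ in range(1000):
    error = control_error(a, b)
    if error >= last_error:                  # not improving: pick new direction
        da = random.uniform(-0.05, 0.05)
        db = random.uniform(-0.05, 0.05)
    last_error = error
    a, b = a + da, b + db

print(a, b)   # should drift toward whatever weighting controls best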
Best
RM: I think the question of how perceptual functions are
learned has to be preceded by research aimed at
determining whether perceptual functions are learned. I
now incline toward the idea that they are not; that the
types of perceptual functions we have are built up by
evolution.
RM: So what has to be learned is how to vary
actions appropriately in order to produce intended
results. So some version of a reorganization model, which
varies the parameters of control functions rather than the
strength of particular outputs produced by these
functions, and does so as the means of improving a control
system’s ability to control a perceptual variable, seems
like the best approach to control system learning.
RM: I would also suggest that the first thing to do
when building a PCT-based robot is to figure out the
“givens” of the system – the types of perceptual
variables to control and the hierarchical arrangement of
these variables.
RM: Another possibility is just to try building a
reorganization system into a simple robot that continuously
tunes up its existing control systems. This would be a
good exercise in developing a reorganization system that
works in a system dealing with the real world and not just
a software model of that world, as in the arm demo in LCS3.
Rick
–
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away.�
                --Antoine de Saint-Exupery