End of behaviourism (was Re: Weight-transpose control model)

[Martin Taylor 2008.10.18.10.32]

[From Bill Powers (2008.10.18.0242 MDT)]

Note also that while the perceptual signals will approach the reference signals, the outputs and environmental variables will not show any orderly relationships. It occurs to me that this demonstration may be the final disproof of the principle of behaviorism, because the regularities in these collections of control systems are not detectable through observing their behavior -- their outputs.

Much as I applaud the intention, I don't think you can make this argument. Your "stimuli" and "responses" are more akin to the individual sensory nerves and muscle fibres than they are to the stimuli and responses of behaviourist psychologists, which are very complex functions of those peripheral manifestations.

"Behaviour" might be "pressing a button" in response to "Seeing a familiar face", not "twitching this muscle fibre" in response to "that optical nerve fibre firing". I don't think the muscle fibre output vector is likely to have much obvious relationship to the input nerve fibre vector, whereas an experimental subject may well press a button obviously consistently in response to seeing a picture of a familiar face.

I think you already have enough evidence to disprove "the principle of behaviourism". This study doesn't add to it, so far as I can see.

Martin

[From Bill Powers (2008.10.18.0943 MDT)]
[Martin Taylor 2008.10.18.10.32]--

Much as I applaud the intention, I don't think you can make this
argument. Your "stimuli" and "responses" are more akin to the
individual sensory nerves and muscle fibres than they are to the
stimuli and responses of behaviourist psychologists, which are very
complex functions of those peripheral manifestations.

Nothing in this demonstration designates the hierarchical level being
modeled, but that's not the point. The main point is that we are
seeing a collection of control systems successfully matching their
perceptual signals to N independently adjustable reference signals,
when there is no regular pattern of immediate environmental effects
of outputs, or of the outputs themselves, that can be identified by
an observer as "the behavior" of any of the control systems.

This is not the "behavioral illusion" in which a regularity in the
output seems to be lawfully related to a set of disturbances. There
are no disturbances acting in this demo (because it runs too slowly
to show any interesting real-time effects). The effect we see here is
entirely different.

There are N environmental variables and N control systems. Each
control system perceives an aspect of the collection of environmental
variables, a particular weighted sum selected at random. Each control
system affects its own perceptual signal by acting on all ten
environmental variables at once, through a set of weights that is the
transpose of the input matrix of weights (10 systems x 10 weights).
Every environment variable, therefore, is being affected by all 10
control systems simultaneously, and its state is not determined by
any one of them.
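The setup just described can be sketched in a few lines of code. This is my own reconstruction in Python, not Bill's original program; the variable names, seed, and gain choice are all mine:

```python
import numpy as np

# N control systems, each perceiving a random weighted sum of N
# environmental variables, and each acting on ALL of those variables
# through the transpose of the input weight matrix.
rng = np.random.default_rng(0)
N = 10
W = rng.normal(size=(N, N))    # row i: input weights of control system i

r = rng.normal(size=N)         # N independently adjustable reference signals
o = np.zeros(N)                # integrated output of each control system
# Gain chosen small enough for stability (1 / largest eigenvalue of W W^T).
gain = 1.0 / np.linalg.eigvalsh(W @ W.T).max()

errors = []
for step in range(100_000):
    x = W.T @ o                # every environmental variable is moved by all N outputs
    p = W @ x                  # every perception is a weighted sum of all N variables
    o += gain * (r - p)        # integral control of the error
    if step % 20_000 == 0:
        errors.append(np.linalg.norm(r - p))

print(errors)                  # perceptual error decreases monotonically
```

With the transpose coupling, W times its transpose is symmetric and positive definite, which is why all N perceptions can settle at their references simultaneously even though no environmental variable is determined by any one system.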

The purpose of this experiment was to see if using the transpose
would always permit independent control by all the systems, for any
number of systems (well, up to 500) and any random collection of N
input weights for each system. That has been verified. In the
appendix of the new book, Richard Kennaway verifies it by direct
analytical mathematics, which is conceptually superior to my way and
gives the same answer, luckily for me.
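The analytical result can be checked numerically as well. The sketch below is my own (not Kennaway's derivation): at equilibrium the integrating outputs settle where p = (W W^T) o = r, and W W^T is symmetric positive definite for any full-rank random W, so that equation has a solution for ANY reference vector:

```python
import numpy as np

# For several system counts N, draw random input weights and references,
# solve for the equilibrium outputs, and confirm the perceptions land
# exactly on the references -- independent control is always possible.
rng = np.random.default_rng(1)
worst_residual = 0.0
for N in (5, 10, 50, 200):
    W = rng.normal(size=(N, N))
    r = rng.normal(size=N)
    o = np.linalg.solve(W @ W.T, r)     # equilibrium outputs
    p = W @ (W.T @ o)                   # perceptions at equilibrium
    worst_residual = max(worst_residual, np.max(np.abs(p - r)))
print(worst_residual)                   # effectively zero for every N tried
```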

But a side-issue is the fact that it is impossible for an observer to
see what any control system in this set of N systems is doing. This
is because each environmental variable is affected by ALL N SYSTEMS.
In order to see what is being controlled, the observer would have to
perceive the N variables through the same set of N N-tuples of
weights that the control systems are using. Only then would it be
seen that N different aspects of the collection of variables are
being controlled. Since the input weights are selected at random, the
chances of any observer perceiving through the same weights are
approximately zero.
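Those "approximately zero" chances can be given a rough number. For independently chosen random weight vectors in N dimensions, the expected overlap (absolute cosine) between the observer's guessed perceptual function and the one actually in use falls off like sqrt(2 / (pi * N)). A small illustration of my own, not part of the demo:

```python
import numpy as np

# Average overlap between a control system's actual input weights and an
# observer's independently chosen random weights, in N = 500 dimensions.
rng = np.random.default_rng(2)
N, trials = 500, 2000
overlaps = []
for _ in range(trials):
    a = rng.normal(size=N)          # the system's actual input weights
    b = rng.normal(size=N)          # the observer's guessed weights
    overlaps.append(abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
mean_overlap = float(np.mean(overlaps))
print(mean_overlap)                 # about 0.036 for N = 500
```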

It is possible that the N control processes could be unraveled using
the proper experimental approach. But that approach would not involve
letting any observer assume that his own idiosyncratic perceptions
simply "pick up" (a la Gibson) the variables that are actually under
control, or the actions that are actually effective in doing the
controlling. Behaviorism is founded on that kind of naive realism,
and this experiment shows why that can't be valid.

Best,

Bill P.

No virus found in this outgoing message.
Checked by AVG - http://www.avg.com
Version: 8.0.173 / Virus Database: 270.8.1/1731 - Release Date: 10/17/2008 7:01 PM

[From Rick Marken (2008.10.18.0950)]

Martin Taylor (2008.10.18.10.32)--

Bill Powers (2008.10.18.0242 MDT)

Note also that while the perceptual signals will approach the reference
signals, the outputs and environmental variables will not show any orderly
relationships. It occurs to me that this demonstration may be the final
disproof of the principle of behaviorism, because the regularities in these
collections of control systems are not detectable through observing their
behavior -- their outputs.

Much as I applaud the intention, I don't think you can make this argument.

It may not be the best disproof of behaviorism but it certainly shows
why psychologists get such noisy results in their experiments and have
to resort to averaging over subjects to see if it can be concluded
whether an experimental manipulation had any effect at all. I think I
will use a version of this demo to make precisely this point in a
paper I am writing about the failure of the General Linear Model of
statistics (which is basically behaviorism) as shown by the fact that
experimental (environmental) variables rarely pick up more than a
small amount (say, 30%) of the variance in the behavior under study.

This new paper will be an extension of a paper I just had accepted for
publication (it's called "You say you had a revolution: Methodological
foundations of closed-loop psychology") in the _Review of General
Psychology_ (NB. Bill and Gary Cziko, persistence pays off, though I
still think it should have appeared in _American Psychologist_). I
think Bill's point about the control demo will be very useful for this
new paper.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2008.10.18.0943 MDT)]
[Martin Taylor 2008.10.18.10.32]--

From some other comments I've received, I see that I forgot to
mention that there is no reorganization in the weight-transpose
model. Control is slow when you get to more than 10 or 20 systems
simply because the poor CPU is being time-shared and the number of
elements in the matrices goes up as the square of the number of systems.

Best,

Bill P.


[Martin Taylor 2008.10.18.12.59] to Bill Powers

As often happens when we seem to disagree, either I'm not getting your point or you aren't getting mine. I do, however, agree with Rick [Rick Marken (2008.10.18.0950)], if he inserts "often" or "usually" before "have to resort":

It may not be the best disproof of behaviorism but it certainly shows
why psychologists get such noisy results in their experiments and have
to resort to averaging over subjects to see if it can be concluded
whether an experimental manipulation had any effect at all.

[From Bill Powers (2008.10.18.0943 MDT)]
[Martin Taylor 2008.10.18.10.32]--

Much as I applaud the intention, I don't think you can make this argument. Your "stimuli" and "responses" are more akin to the individual sensory nerves and muscle fibres than they are to the stimuli and responses of behaviourist psychologists, which are very complex functions of those peripheral manifestations.

Nothing in this demonstration designates the hierarchical level being modeled, but that's not the point. The main point is that we are seeing a collection of control systems successfully matching their perceptual signals to N independently adjustable reference signals, when there is no regular pattern of immediate environmental effects of outputs, or of the outputs themselves, that can be identified by an observer as "the behavior" of any of the control systems.

That's also my starting point.

This is not the "behavioral illusion" in which a regularity in the output seems to be lawfully related to a set of disturbances. There are no disturbances acting in this demo (because it runs too slowly to show any interesting real-time effects). The effect we see here is entirely different.

Yes, I understand.

There are N environmental variables and N control systems. Each control system perceives an aspect of the collection of environmental variables, a particular weighted sum selected at random. Each control system affects its own perceptual signal by acting on all ten environmental variables at once, through a set of weights that is the transpose of the input matrix of weights (10 systems x 10 weights). Every environment variable, therefore, is being affected by all 10 control systems simultaneously, and its state is not determined by any one of them.

Right.

The purpose of this experiment was to see if using the transpose would always permit independent control by all the systems, for any number of systems (well, up to 500) and any random collection of N input weights for each system. That has been verified. In the appendix of the new book, Richard Kennaway verifies it by direct analytical mathematics, which is conceptually superior to my way and gives the same answer, luckily for me.

Nice to know that theory and practice agree, as they should, since the simulation exactly represents the maths, or vice-versa. Another useful illustration is the demonstration that convergence becomes very slow when the matrix nears degeneracy. That one could be very useful in discussions of conflict, and could be extended to show how matters change when the number of reference values N is reduced with respect to the number of environmental variables V affected by the N outputs and affecting the N inputs.
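Martin's degeneracy point can be made concrete. The per-step error shrink factor of the transpose-controlled loop (with gain set to 1/lambda_max) is 1 - lambda_min/lambda_max of W W^T, so a near-degenerate W -- two systems with almost identical perceptual functions, i.e. near-conflict -- drives that factor toward 1 and convergence crawls. A rough numerical illustration of my own construction, not from the demo:

```python
import numpy as np

def shrink_factor(W):
    # Per-step error shrink factor; the closer to 1, the slower convergence.
    lam = np.linalg.eigvalsh(W @ W.T)   # eigenvalues, ascending
    return 1.0 - lam[0] / lam[-1]

rng = np.random.default_rng(3)
N = 10
W_good = rng.normal(size=(N, N))                  # generic random weights
W_bad = W_good.copy()
W_bad[1] = W_bad[0] + 1e-5 * rng.normal(size=N)   # two nearly parallel perceptual functions

print(shrink_factor(W_good), shrink_factor(W_bad))
```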

But a side-issue is the fact that it is impossible for an observer to see what any control system in this set of N systems is doing.

That word "impossible" is the point I think needs to be established. That's what I was trying to get at.

This is because each environmental variable is affected by ALL N SYSTEMS. In order to see what is being controlled, the observer would have to perceive the N variables through the same set of N N-tuples of weights that the control systems are using. Only then would it be seen that N different aspects of the collection of variables are being controlled. Since the input weights are selected at random, the chances of any observer perceiving through the same weights are approximately zero.

True in the simulation case. Not true when the experimental subject and the observer have developed through the same (approximately) evolutionary reorganization, and have personally reorganized in a similar cultural environment, and when the experimenter and subject both believe the subject is trying to do what the experimenter asks. In that case, if the subject is seen by the observer to be "pressing that particular button", there's a high probability that the subject actually has an intention to press that particular button. It is, however, true that the observer cannot determine how the different muscle twitches of the subject are affecting the different tactile and optical nerve impulses at the subject's input.

Even when the subject and observer don't have a common evolutionary background (such as a human and a pigeon), nevertheless it is not unreasonable to assert that if the pigeon pecks at a particular button consistently when a light flashes red but not when the light flashes green, the pigeon not only can tell the difference, but is induced to peck at that button (response) because the light flashed red (stimulus). The human wouldn't use the same muscles, but would do the same thing (press the button on "red"). The myriad low-level controls would be of no concern to that behaviourist psychologist. The arguments against the S-R explanation can't rest on the size of the low-level vectors of input and output.

In your simulation, the lowest-level reference values are randomly chosen, and the observer has no option but to try an independent random choice. As you say, the likelihood of the two being highly correlated is vanishingly small. In a real experiment with real humans, that is not usually the case. The low-level reference values are selected to help control some higher-level perceptions, and that higher-level vector of controlled perceptions has far fewer independent values than does the vector of muscle twitches or of nerve firings.

Rick's observation is particularly pertinent here.

It is possible that the N control processes could be unraveled using the proper experimental approach. But that approach would not involve letting any observer assume that his own ideosyncratic perceptions simply "pick up" (a la Gibson) the variables are are actually under control, or the actions that are actually effective in doing the controlling. Behaviorism is founded on that kind of naive realism, and this experiment shows why that can't be valid.

I'm arguing that "can't be valid" does not follow from this experiment, because for the most part observers are likely to be accurate in assuming that if they see someone performing an action that they would perform for the inferred purpose, then the person has that purpose. If I see someone ringing a doorbell, I am likely to be correct in assuming that they want someone to open the door, even if I have no idea why they were pressing the bell-push with their left big toe. Someone brought up in central New Guinea would not be able to make that jump, which is, after all, not logical, but only probabilistic and based on the common results of reorganization in the bell-ringer and me.

The New Guinea native might be able to make a lot of observations of people ringing doorbells, and discover that on a high proportion of times someone would open the door. The data would be noisy (as Rick says), but the correlation would nevertheless be valid. Perhaps the New Guinea native might even infer that someone who rings a doorbell wants the door to be opened, but that could only be a consequence of believing the statistical correlation to be meaningful. Substitute a telepathic alien for the New Guinea observer, and that being might conclude that the imminent appearance of someone at the door induced nearby people to ring the bell (though the alien would have to conclude that there must also be other reasons, since many bell rings happen when nobody is about to come to the door).
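The "noisy but valid" correlation the observer would tally can be simulated with hypothetical numbers (the probabilities below are purely illustrative, chosen by me): any single observation is unpredictable, yet averaging over many of them yields a solid correlation between bell-ringing and door-opening.

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 10_000
rang = rng.random(trials) < 0.5        # was the bell rung on this occasion?
p_open = np.where(rang, 0.7, 0.05)     # assumed chance someone opens the door
opened = rng.random(trials) < p_open   # did someone open the door?

r = np.corrcoef(rang, opened)[0, 1]    # correlation across all observations
print(round(r, 2))
```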

My point is that your random set of reference values is not uncorrelated with the set of reference values that a behaviourist experimenter might infer, because of the common evolutionary and cultural reorganization they both share. In real life, the low-level reference values are not random in that sense. Neither is the behaviourist experimenter interested (usually) in muscle twitches and nerve firings. The interest is in higher-level perceptions and actions (and the concept of higher and lower level perceptions and actions is as old as experimental psychology, older than behaviourism).

As I say, I'm probably missing your point.

Martin

[From Bill Powers (2008.10.18.1444 MDT)]

[Martin Taylor 2008.10.18.12.59] --

But a side-issue is the fact that it is impossible for an observer
to see what any control system in this set of N systems is doing.

That word "impossible" is the point I think needs to be established.
That's what I was trying to get at.

Since the input weights are selected at random, the chances of any
observer perceiving through the same weights are approximately zero.

True in the simulation case. Not true when the experimental subject
and the observer have developed through the same (approximately)
evolutionary reorganization, and have personally reorganized in a
similar cultural environment, and when the experimenter and subject
both believe the subject is trying to do what the experimenter asks.

What you say is true because of all those "if"s. I'd grant you your
conclusion provisionally, but the claim isn't settled until you've
gone back and cleaned up all those dangling assumptions. I don't know
how similar one person's evolutionary history is to that of another,
or how similar the reorganizations of two people in the same culture
are (say, Barack Obama and John McCain), or whether both experimenter
and subject agree on what the conditions of the situation are. When
you can tell me the right answers to those questions, we can agree on
the conclusion. Until then, it's just a not-very-informed guess and
I'd demand pretty good odds before making any bets on it (and also
would demand that I have a large number of chances to bet, so the
odds mean something).

In that case, if the subject is seen by the observer to be
"pressing that particular button", there's a high probability that
the subject actually has an intention to press that particular
button. It is, however, true that the observer cannot determine
how the different muscle twitches of the subject are affecting
the different tactile and optical nerve impulses at the subject's input.

Well, you'rt seeing the resulgt of my pressingf pargifcutlar
buttoons. What was muy intention?

What the weight-transpose model does is show us a possibility that
has not been taken into consideration. When this possibility is
considered from within the collection of control systems, it tells us
that even though we may experience successful control of a large
number of variables, we have no way to determine whether ANY of those
variables has objective existence. Considering the same possibility
from the standpoint of an external observer, we have no way of
determining whether the variables we see in our environments that
don't look as if they're under control are concealing multiple
functions of those variables that are being controlled by the other
person. If all you could see of the action in the weight-transpose
model were the outputs and the environmental variables, you would
never know that anywhere from five to 500 variables were being
controlled right before your eyes.

Even when the subject and observer don't have a common evolutionary
background (such as a human and a pigeon), nevertheless it is not
unreasonable to assert that if the pigeon pecks at a particular
button consistently when a light flashes red but not when the light
flashes green, the pigeon not only can tell the difference, but is
induced to peck at that button (response) because the light flashed
red (stimulus).

"Peck", "button", "red", "green", "flash", "induce", and "because"
are human perceptions. It is quite unreasonable to assume that such
things exist in the perceptions of a pigeon. Convenient, yes. Reasonable, no.

The human wouldn't use the same muscles, but would do the same thing
(press the button on "red"). The myriad low-level controls would be
of no concern to that behaviourist psychologist. The arguments
against the S-R explanation can't rest on the size of the low-level
vectors of input and output.

But they can rest on the impossibility of the behaviorist's being
able to perceive things being controlled by a pigeon. Once in a
while, we human beings may get a low-probability glimmer of what
matters in a pigeon's world, but the rest of the time, we have no
idea why a pigeon turns right instead of left, coos rather than
clucking, or crosses the road instead of flying away. We are as blind
to the pigeon's world of controlled perceptions as we are to the
controlled perceptions of the weight-transpose model (if we have to
judge only from watching its behavior and its effects on
environmental variables, as behaviorists are supposed to do). Is it
true that because a behaviorist sees nothing being controlled by a
pigeon or a person, nothing is in fact being controlled?

It is possible that the N control processes could be unraveled
using the proper experimental approach. But that approach would not
involve letting any observer assume that his own idiosyncratic
perceptions simply "pick up" (a la Gibson) the variables that are
actually under control, or the actions that are actually effective
in doing the controlling. Behaviorism is founded on that kind of
naive realism, and this experiment shows why that can't be valid.

I'm arguing that "can't be valid" does not follow from this
experiment, because for the most part observers are likely to be
accurate in assuming that if they see someone performing an action
that they would perform for the inferred purpose, then the person
has that purpose. If I see someone ringing a doorbell, I am likely
to be correct in assuming that they want someone to open the door,
even if I have no idea why they were pressing the bell-push with
their left big toe. Someone brought up in central New Guinea would
not be able to make that jump, which is, after all, not logical, but
only probabilistic and based on the common results of reorganization
in the bell-ringer and me.

What is "likely" is not an argument, but a statement of faith. One
prefers to think that one's assumptions about inferred purposes are
likely to be right. Well, who wouldn't? But what if I prefer to
assume that there is a lot more to behavior than we have suspected on
the basis of a too-simple, too-homocentric view of reality? Does
that mean that we both now have to use my assumption? What are the
rules here? Does the person who states his assumptions first win?

I'd rather find some other basis for exploring the nature of
perception, control, and reality. I'm waiting for the compelling
argument, the argument I can't resist, that I can see no way out of.
We can't find arguments like that by picking premises and scenarios
that lead to the conclusions we want, even though this is a favorite
mode of argument among philosophers.

My point is that your random set of reference values is not
uncorrelated with the set of reference values that a behaviourist
experimenter might infer, because of the common evolutionary and
cultural reorganization they both share.

"Not uncorrelated with" is to me the same as saying "for all
practical purposes, irrelevant to." If the correlation of A with B is
0.001, you can say that A is not uncorrelated with B. Heck, you can
say that, even if the correlation turns out to be zero. Zero is a
point on the scale of correlations. All you're saying is that the
experimenter will be able to determine some reference values in the
subject because the experimenter will be able to determine some
reference values in the subject. Saying the same thing twice doesn't
make it true. How do you know they share evolutionary and cultural
reorganizations? Because if they didn't, it wouldn't be true that the
experimenter is inferring correctly, and since he is inferring
correctly they must share common cultural and evolutionary
reorganizations. What other basis can you have for assuming that
reorganizations can be shared at all?

Best,

Bill P.


[Martin Taylor 2008.10.18.23.09]

[From Bill Powers (2008.10.18.1444 MDT)]

[Martin Taylor 2008.10.18.12.59] --

But a side-issue is the fact that it is impossible for an observer to see what any control system in this set of N systems is doing.

That word "impossible" is the point I think needs to be established. That's what I was trying to get at.

Since the input weights are selected at random, the chances of any observer perceiving through the same weights are approximately zero.

True in the simulation case. Not true when the experimental subject and the observer have developed through the same (approximately) evolutionary reorganization, and have personally reorganized in a similar cultural environment, and when the experimenter and subject both believe the subject is trying to do what the experimenter asks.

What you say is true because of all those "if"s. I'd grant you your conclusion provisionally, but the claim isn't settled until you've gone back and cleaned up all those dangling assumptions. I don't know how similar one person's evolutionary history is to that of another, or how similar the reorganizations of two people in the same culture are (say, Barack Obama and John McCain), or whether both experimenter and subject agree on what the conditions of the situation are.

No, you don't KNOW. And that's been my point all along.

I just object to what ought to be probabilistic statements being asserted as certainties or impossibilities. You take me to task for not asserting certainties and impossibilities, and for making probabilistic statements. I think that without logical proof based on certainty of starting point, probabilistic statements are all you can make.

If you had said something like "It would be very unlikely that an observer would be able to determine all the perceptual input functions by observing the outputs and environmental variables when the references are randomly chosen" then I wouldn't have put my oar in. But you said "It occurs to me that this demonstration may be the final disproof of the principle of behaviorism, because the regularities in these collections of control systems are not detectable through observing their behavior -- their outputs." I thought that to be stronger than the evidence warrants, for reasons I stated.

Of course the experimenter or observer does not KNOW what is going on in the real-real world. Nor, assuming the real world is more or less as we perceive it, can the observer KNOW that someone pushing the doorbell button intends the doorbell to ring, or for someone to open the door. It's just a pretty good bet, based, as I said, on evolutionary and cultural similarity between observer and observed.

As for the similarity of one person's evolutionary history to another's, we probably have 4.5 billion years of common ancestry, and no more than one or two hundred thousand years of different ancestry, no matter where in the world we live. That doesn't leave a lot of room for evolutionary variation, compared to the overall similarity. Personal reorganization is likely to have a lot less in common within that commonality of evolutionary similarity, but if two people grow up in the same cultural milieu, they are quite likely to have a lot in common, at least in respect of control of social variables, such as language and conventions of interaction.

In that case, if the subject is seen by the observer to be "pressing that particular button", there's a high probability that the subject actually has an intention to press that particular button. It is, however, true that the observer cannot determine how the different muscle twitches of the subject are affecting the different tactile and optical nerve impulses at the subject's input.

Well, you'rt seeing the resulgt of my pressingf pargifcutlar buttoons. What was muy intention?

I don't KNOW, but I'd make a guess that it was to suggest to me that you wished to show that if I had been observing you I would have thought you were typing unusually badly for some purpose. I would further have assumed the purpose was to make some point in the argument. What that point might have been, I cannot guess. Nor could I guess which muscles you were using in what sequence, which was the real point of the bit you quoted.

On the other hand, if I had observed you downing half a bottle of whiskey before typing that line, I would probably assume a different intention, and would probably also assume that your control systems had become a little out of whack. Again, that kind of inference depends on a certain degree of commonality, or at least on having observed or heard of the effects of overconsumption of alcohol.

Even when the subject and observer don't have a common evolutionary background (such as a human and a pigeon), nevertheless it is not unreasonable to assert that if the pigeon pecks at a particular button consistently when a light flashes red but not when the light flashes green, the pigeon not only can tell the difference, but is induced to peck at that button (response) because the light flashed red (stimulus).

"Peck", "button", "red", "green", "flash", "induce", and "because" are human perceptions. It is quite unreasonable to assume that such things exist in the perceptions of a pigeon. Convenient, yes. Reasonable, no.

It is equally unreasonable to assume that the same perceptions exist in the perceptions of another human. Not "unreasonable" but "equally unreasonable", which to me means "usefully reasonable". Nevertheless, the assertion "that if the pigeon pecks at a particular button consistently when a light flashes red but not when the light flashes green, the pigeon not only can tell the difference, but is induced to peck at that button (response) because the light flashed red (stimulus)" is eminently reasonable, whatever the qualia might be of the pigeon's perception. We are talking about the observer's perception and what the observer says about it, not about what the pigeon's consciousness contains.

I'm arguing that "can't be valid" does not follow from this experiment, because for the most part observers are likely to be accurate in assuming that if they see someone performing an action that they would perform for the inferred purpose, then the person has that purpose. If I see someone ringing a doorbell, I am likely to be correct in assuming that they want someone to open the door, even if I have no idea why they were pressing the bell-push with their left big toe. Someone brought up in central New Guinea would not be able to make that jump, which is, after all, not logical, but only probabilistic and based on the common results of reorganization in the bell-ringer and me.

What is "likely" is not an argument, but a statement of faith.

Oh, and the assertion of "can't be valid" is not? Are you asserting that I am indeed NOT likely to be correct if I assume that someone I see ringing a doorbell is doing so because he wants someone to open the door? It's a probabilistic statement, not a statement of faith. Statements of what is "likely" are hardly ever statements of faith, whereas assertions of "what must be" are often statements of faith.

I'd rather find some other basis for exploring the nature of perception, control, and reality. I'm waiting for the compelling argument, the argument I can't resist, that I can see no way out of. We can't find arguments like that by picking premises and scenarios that lead to the conclusions we want, even though this is a favorite mode of argument among philosophers.

Me, too. All I'm saying is that you haven't found it in this simple simulation. So I don't know why you seem to be the pot calling my kettle black. You can believe all you want in your own logic that says you have found the perfect argument, but that neither makes it so, nor makes it convincing to others. You often say that you most distrust your own logic when it leads to a conclusion you want to be true. But not this time, apparently.

My point is that your random set of reference values is not uncorrelated with the set of reference values that a behaviourist experimenter might infer, because of the common evolutionary and cultural reorganization they both share.

"Not uncorrelated with" is to me the same as saying "for all practical purposes, irrelevant to." If the correlation of A with B is 0.001, you can say that A is not uncorrelated with B. Heck, you can say that, even if the correlation turns out to be zero. Zero is a point on the scale of correlations.

I do wish you wouldn't resort so readily to sophistry. You actually aren't saying anything relevant to the argument. You know very well that the implication from my preceding argument is that the correlation may well be far from irrelevant, even though the conclusion drawn by the experimenter may often be wrong.

All you're saying is that the experimenter will be able to determine some reference values in the subject because the experimenter will be able to determine some reference values in the subject. Saying the same thing twice doesn't make it true.

Nor does saying something is impossible make it so. Even if you buttress the argument by pointing out that I do not say things are impossible that you would like to be so. In any case, that's far from what I said. The argument is that IF (but not "only if") the experimenter has a similar perceptual organization to the subject, then the experimenter might well be able to discern regularity in a situation analogous to your simulation.

How do you know they share evolutionary and cultural reorganizations? Because if they didn't, it wouldn't be true that the experimenter is inferring correctly, and since he is inferring correctly, they must share common cultural and evolutionary reorganizations. What other basis can you have for assuming that reorganizations can be shared at all?

That people (and other animals) appear able to communicate is one. I expect that when you reflect a little you will be able to think of many, many others.

For the argument to succeed, the experimenter doesn't have to infer correctly. All that matters is that the inferences succeed in conforming to observations sufficiently often to get the kind of noisy data that satisfies behaviourist experimenters. In other words, the pattern of outputs at a low level results in something the experimenter can call a "button press" sufficiently often when the pattern of inputs results in something the experimenter calls a "red light".

Martin

PS. I've been breaking my self-imposed vow of silence because CSGnet interests me more than what I ought to be doing. In this case, I thought I could make a quick and simple interjection, and never considered the probability of getting into a long discussion.

If you choose to believe that your simulation makes an irrefutable argument, make the argument in a place where it might do some good. Maybe I'm wrong, and you do have an irrefutable argument. I'm not convinced, but then it isn't me that has to be convinced of the correctness of PCT and the wrongness of S-R type behaviourism. So I'm now going to try to avoid temptation and be quiet again. I have only 6 days to complete the work before I go to present it.

[From Bill Powers (2008.10.19.0722 MDT)]

Martin Taylor 2008.10.18.23.09 --

I do wish you wouldn't resort so readily to sophistry.

I will try to do it less readily.

I didn't realize that the red-flag term was "impossible" rather than
the subject-matter I was talking about. Would "Has an extremely low
probability of success" have been better? If so, consider the text amended.

Martin, I don't have any final conclusions here. I'm working toward
something and haven't got there yet. The weight-transpose demo does
away with behaviorism in a way that's almost trivial -- there are
many other ways to do it with PCT, because behaviorism tries to
explain behavior without invoking any models, any "intervening
variables." In a variable environment with unpredictable disturbances
and inaccurate physiological functions (i.e., the real environment
and real organisms), you simply can't account for behavior on the
basis of what you can observe happening outside an organism. The
nearest you can get is a set of low correlations that are little
better than coin-tossing except when applied to large populations --
and then only a little better.

But behaviorism was never a serious problem. The problem I'm trying to
approach is that of what we can know and how much we can rely on it.
Good old epistemology.

The weight-transpose model establishes a very interesting fact. It is
possible to control perceptual patterns by acting on the outside
world even when the perceptual input functions are organized at
random. All that is necessary is to construct an output function that
is the transpose of the input function.
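[The claim above can be checked numerically. The following is a minimal sketch, not Bill's actual demo code: the dimensions, gains, and iteration count are illustrative choices. N systems perceive the environment variables v through a random weight matrix W, and act back on v through W's transpose; because W·Wᵀ is positive definite, the errors decay even though neither the weights nor the resulting values of v show any obvious pattern.]

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 30                      # N control systems, M environment variables (arbitrary)
W = rng.normal(size=(N, M))        # randomly organized perceptual input function
r = rng.normal(size=N)             # arbitrary reference signals
v = np.zeros(M)                    # environment variables
gain = 0.01                        # small enough for stability at these dimensions

for _ in range(5000):
    p = W @ v                      # perceptual signals
    e = r - p                      # error signals
    v += gain * (W.T @ e)          # act on the world through the transpose of W

final_error = np.max(np.abs(r - W @ v))
print(final_error)                 # perceptions approach references...
print(np.round(v[:5], 2))          # ...while v itself shows no orderly relationship to r
```

[The second printout makes Bill's earlier point: the environmental variables settle at whatever values happen to satisfy the perceptions, with no regularity visible from outside.]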

In other experiments (the ArmControlReorg is one that you have a copy
of), it is shown that something like the required output function can
be created by the E. coli (random-walk) principle of reorganization:
the necessary output function can become organized without any
knowledge of the form of the input function or the environment. Not
only that, but the output function can be organized when the only
variations in the signals and variables are caused by random
disturbances of the perceptions in the N systems -- and then, when
variations in the reference signals commence, control will be as good
as if practice had happened with those reference signal variations
present. So what is learned by a system like this is not a behavior
or behavior pattern, but a control system. And this learning can be
done strictly by observing error signals. No knowledge of the
environment is required.
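[The E. coli principle described above can also be sketched. This is a simplification, not the ArmControlReorg code: it scores candidate output weights over short control episodes rather than reorganizing continuously online, and all names and constants are illustrative. The learner always takes a step in its current direction in weight space and "tumbles" to a new random direction whenever the error worsens; it never inspects W or the environment.]

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
W = rng.normal(size=(N, N))          # fixed input function, unknown to the learner
r = rng.normal(size=N)

def performance(O):
    """Total squared error over a short control run using output weights O."""
    v = np.zeros(N)
    total = 0.0
    for _ in range(200):
        e = r - W @ v
        total += float(e @ e)
        if total > 1e9:              # clearly unstable; abandon this run early
            break
        v += 0.05 * (O @ e)
    return total

O = np.zeros((N, N))                 # output weights start unorganized
direction = rng.normal(size=(N, N))
direction /= np.linalg.norm(direction)
step = 0.02

init = performance(O)                # error with no control at all
prev = best = init
for _ in range(2000):
    O = O + step * direction         # always take the step ("swim")
    val = performance(O)
    best = min(best, val)
    if val >= prev:                  # error got worse: new random direction ("tumble")
        direction = rng.normal(size=(N, N))
        direction /= np.linalg.norm(direction)
    prev = val

print(best / init)                   # fraction of the uncontrolled error remaining
```

[What the random walk converges on is not a behavior pattern but a working output function, discovered purely from the consequences for error.]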

This happens in an unstructured environment, an environment made of
variables that don't interact with each other outside the control
systems. The next question, which I haven't yet tackled, is "What
happens when there are external laws that make some external
variables depend on others?" I assume that this will introduce
constraints that mean not all possible input functions will work. If
random-walk reorganization still works, it will have to work around
the constraints in the external world, in effect taking them into
account. How will the effects of the external constraints show up in
the final organization? Will it become necessary to add
reorganization of the input matrix, and if so, why? I'm still trying
to see how to organize that kind of experiment.

Farther off in the distance is the question of what will happen when
we introduce another independent set of control systems into the same
environment: another organism that works the same way by working on
the same external variables.

Somewhere in there, I think we are getting to the point where we
might think of letting our programs start interacting with the real
world, through sensors and actuators. I have an A/D converter that
can sense 16 voltages, and another converter that can detect an array
of several million light intensities in three wavelength bands. I
have some servos -- three or four is all -- that my computer can use
to apply forces to things. This is a pretty feeble start, but we'll
see how far it can go, some day.

I know this is pretty boring for anyone thinking in terms of robots
or organisms, but I'm trying to build a causeway across the river by
laying down stones we can walk on. I'll worry about what's on the
other side of the river when I can get there and look around. I hope
it's the right river.

Best,

Bill P.


[Martin Taylor 2008.10.19.16.02]

Aaargh!! I was intending to be quiet for the next month or so, till I return from the UK. But...

[From Bill Powers (2008.10.19.0722 MDT)]

Martin Taylor 2008.10.18.23.09 --

The weight-transpose demo does away with behaviorism in a way that's almost trivial -- there are many other ways to do it with PCT,

Yes, there are, and they are (so far as I can tell) more defensible.

But behaviorism was never a serious problem. The problem I'm trying to approach is that of what we can know and how much we can rely on it. Good old epistemology.

The weight-transpose model establishes a very interesting fact. It is possible to control perceptual patterns by acting on the outside world even when the perceptual input functions are organized at random. All that is necessary is to construct an output function that is the transpose of the input function.

If I understand your simulation aright, the environment is a unitary transform. In it there is nothing to discover, and the reference values are irrelevant to it. A more complex environment might not be so easily controlled with a simple transpose.
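[Martin's caveat is easy to illustrate. In this hypothetical variant, not taken from either demo, a coupling matrix Env sits between the outputs and the environment variables. With the trivial environment Env would be the identity and the transpose controller works as before; with a sign-inverting coupling, the same transpose output function drives the error up instead of down.]

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10
W = rng.normal(size=(N, N))          # random perceptual input function
Env = -np.eye(N)                     # hypothetical non-trivial environment coupling
r = rng.normal(size=N)
v = np.zeros(N)

for _ in range(200):
    e = r - W @ v
    v += 0.01 * (Env @ (W.T @ e))    # transpose outputs, filtered by the environment

final = np.linalg.norm(r - W @ v)
print(final > np.linalg.norm(r))     # True: the error has grown, not shrunk
```

[A more structured environment need not be this hostile, but the example shows that the transpose alone carries no knowledge of what stands between output and input.]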

In other experiments (the ArmControlReorg is one that you have a copy of), it is shown that something like the required output function can be created by the E. coli (random-walk) principle of reorganization: the necessary output function can become organized without any knowledge of the form of the input function or the environment. Not only that, but the output function can be organized when the only variations in the signals and variables are caused by random disturbances of the perceptions in the N systems -- and then, when variations in the reference signals commence, control will be as good as if practice had happened with those reference signal variations present. So what is learned by a system like this is not a behavior or behavior pattern, but a control system. And this learning can be done strictly by observing error signals. No knowledge of the environment is required.

This happens in an unstructured environment, an environment made of variables that don't interact with each other outside the control systems. The next question, which I haven't yet tackled, is "What happens when there are external laws that make some external variables depend on others?" I assume that this will introduce constraints that mean not all possible input functions will work. If random-walk reorganization still works, it will have to work around the constraints in the external world, in effect taking them into account. How will the effects of the external constraints show up in the final organization? Will it become necessary to add reorganization of the input matrix, and if so, why? I'm still trying to see how to organize that kind of experiment.

Before reading this, I had written a paragraph saying much the same about what comes next. I had also written that at least some of the interactions should have the form of pseudo-objects in the environment. A pseudo-object has the property that an output that affects it will thereby affect a selected subset of the input variables (i.e. when a pseudo-object moves, its elements move, too). Two pseudo-objects should not be able to pass through each other. Something akin to gravity would be helpful, as would a "ground object" through which pseudo-objects cannot pass under the influence of gravity, since we reorganize in a world of consistent physical laws.

Two or three decades ago, there was a series of articles (maybe two or three) in which someone simulated the development of organization in the retina exposed to naturalistic scenes with naturalistic movements. He found that a randomly organized initial system developed the same kinds of structures as are found in real retinae (on-centre-off-surround, oriented line, and so forth). I think the learning principle was Hebbian, but I don't remember the details. I think the journal might have been Kybernetica, or maybe Acta Psychologica. Maybe someone reading this could provide a reference.

Farther off in the distance is the question of what will happen when we introduce another independent set of control systems into the same environment: another organism that works the same way by working on the same external variables.

Somewhere in there, I think we are getting to the point where we might think of letting our programs start interacting with the real world, through sensors and actuators.

I'm surprised you put that in the more distant future. I'd have thought that it would be an immediate possibility.

I have an A/D converter that can sense 16 voltages, and another converter that can detect an array of several million light intensities in three wavelength bands. I have some servos -- three or four is all -- that my computer can use to apply forces to things. This is a pretty feeble start, but we'll see how far it can go, some day.

Have you thought of buying a Lego Mindstorms kit? It might be pretty nifty for that kind of work, and it gives you a cheap but full version of LabVIEW. I bought an earlier version two or three years ago because of the LabVIEW, but I did play with making a couple of their demo robots (wheeled carts that can sense a little and do a little). I was planning on trying some PCT-structured operations but never got around to it.

I know this is pretty boring for anyone thinking in terms of robots or organisms,

Not in the least.

but I'm trying to build a causeway across the river by laying down stones we can walk on. I'll worry about what's on the other side of the river when I can get there and look around. I hope it's the right river.

I rather think you are at a passable ford. The stones would be nice for building a solid bridge, but I rather suspect that the bridge can be built from both ends.

Now I really(?) will sign off for a month.

Martin