actions and beliefs

[From Rick Marken (2010.02.06.1130)]

Bruce Gregory (2010.02.06.1711 UT) --

Rick Marken (2010.02.06.0901)--

Bruce Gregory (2010.02.06.1552)--

If the stone's behavior was intentional, I think we could construct an HPCT
model to describe it.

How do you know it's not?

Good point. I'll start working on a PCT model right now. I'll let you know if
I encounter any problems.

You apparently didn't get my point, which was that there is a way to
tell that the stone's behavior is _not_ intentional. And that way has
nothing to do with whether or not you have problems developing a PCT
model of the stone's behavior. It's possible to develop a PCT model
of both intentional and unintentional behaviors. But you can save
a lot of time and effort by first testing to determine whether or not
the behavior is intentional. This is done using some version of the
test for the controlled variable. I will leave it as an exercise for
you to think of how you might test to determine whether or not the
behavior of a stone falling to the ground is intentional (hint:
William James knew how to do it).

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bruce Gregory (2010.02.06.1945 UT)]

[From Rick Marken (2010.02.06.1130)]


You apparently didn't get my point, which was that there is a way to
tell that the stone's behavior is not intentional. And that way has
nothing to do with whether or not you have problems developing a PCT
model of the stone's behavior. It's possible to develop a PCT model
of both intentional and unintentional behaviors. But you can save
a lot of time and effort by first testing to determine whether or not
the behavior is intentional. This is done using some version of the
test for the controlled variable. I will leave it as an exercise for
you to think of how you might test to determine whether or not the
behavior of a stone falling to the ground is intentional (hint:
William James knew how to do it).

In fact, I did test for the controlled variable. I conjectured that the rock was controlling for the perception that it was accelerating at 32 ft/sec/sec. I then introduced an obstacle (a table) to see if the rock would attempt to restore its preferred rate of acceleration. Unfortunately, the rock had no way to propel itself to the edge of the table. As a result it reorganized and set its reference level to 0 ft/sec/sec.

Bruce

[From Bruce Gregory (2010.02.06.2000 UT)]

[From Bill Powers (2010.02.06.0850 MST)]

Bruce Gregory (2010.02.06.1408 UT) --

BP: So pleasure is the amount of dopamine released? It certainly doesn't feel like that, does it? Not that I know what dopamine feels like.

BG: So vision is the result of photons falling on the retina? It certainly doesn't feel like that, does it? Not that I know what neural signals arising in the retina feel like. Come on Bill, I'm sure you can do better than that.

BP: Yeah, I guess I can. Does injecting dopamine anywhere in the brain, say the forebrain or the cerebellum, feel like pleasure, or does it have to be in a particular place? Light, after all, doesn't affect the brain much unless it lands on the retina. Does dopamine cause pleasure when injected in any place where dopamine neurotransmitters are found, or in only some of those places?

BG: It is difficult to locate subjects who are willing to let you inject dopamine into their brains at random locations. The situation may improve if the unemployment rate remains high for any length of time.

BP earlier: What is it in the brain that is "expecting" reward, and what does "expecting" mean in terms of a brain model?

BG: Do you know what it feels like to expect that a drink of water will refresh you on a hot day? The brain predicts the reward associated with some action. Based on this prediction it initiates an action to achieve that reward. (In PCT, a reference level is set and a control circuit carries out the action.)

BP: I thought you were talking about some other brain model, and just from what you wrote above, I'd say you are. Can you sketch a diagram of a system like the one you describe, showing how the prediction is made, and what happens after that to initiate the action that is required for achieving the reward? How is that action arrived at? If I understand what you're proposing, it's quite a popular model in neuroscience, involving analysis of sensory information, prediction of outcomes, and planning actions. It's about the only model I've seen used in that field so far. It's not PCT.

BG: The brain remembers what it perceived in the past and what actions it initiated. There is some evidence that these memories reactivate the premotor cortex and that the motor outputs are inhibited. What sort of evidence? Victims of certain brain disorders compulsively pick up tools they encounter and attempt to use them. Compulsive behaviors in general suggest that recalling a perception can initiate the same motor activities (washing hands, checking to see that the door is locked) that accompanied the earlier perception. (Compulsive behaviors tend to involve fear, so it is conjectured that the amygdala plays a role in these repetitive behaviors.)

An expectation then is a reactivation of pathways that led to a reward in the past. The motor actions are then initiated (presumably they are controlled by negative feedback) unless they are inhibited by the prefrontal cortex.

"Expecting" that a drink of water will be refreshing is called "imagining" in PCT, and it entails internal generation of perceptual signals rather than deriving them from sets of lower-order perceptions. If you plan to get a drink of water, it's in order to obtain that imagined perception, only for real. The actions that will be needed to get that experience can't be predicted with any accuracy; you'll do what's needed when you actually start to get the drink. Who took the water glass again? How long is my daughter going to be washing her hair in that sink? Who's hogging the bathroom? I have to leave for work -- maybe I'd just better get it there. That's PCT. We plan perceptions -- outcomes -- not actions.
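[Editorial note: the imagination-mode idea above can be sketched in a few lines of Python. This is an illustration only, not a published PCT implementation; the function and value names are made up.]

```python
# Imagination mode in PCT, minimally: the perceptual signal is switched
# from lower-order sensory input to a replayed memory. Illustrative only.

def perceptual_signal(sensed, remembered, imagining):
    """Return the perception: derived from input, or replayed from memory."""
    return remembered if imagining else sensed

# Imagining the refreshing drink: the perception comes from memory alone.
imagined = perceptual_signal(sensed=0.0, remembered=0.9, imagining=True)

# Actually drinking: the same perception, "only for real", from the senses.
real = perceptual_signal(sensed=0.9, remembered=0.9, imagining=False)
```

The point of the switch is that the same perceptual variable can be satisfied either way; planning then means selecting the imagined perception as a reference, to be obtained "for real" later.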

BG: I agree that the actions cannot be predicted with any accuracy. I don't think anyone believes they can. But what causes the internal perceptual signals to be generated? How does the brain decide to generate those particular internal perceptual signals rather than others?

BG: I agree. My point is that the feeling doesn't play any role in the action. A non-feeling HPCT system works exactly the way a feeling HPCT system works. If I am wrong, please tell me.

BP: You're wrong. An HPCT model with feelings would include controlling for the physiological sensations resulting from acting. For example, we could say that a robot can sense the charge on its battery, so when it has to act very strenuously, it will feel worn out and hungry. It will need to rest -- reduce its level of activity in general -- and eat some electricity. That's actually been built into some robots, though not by me. I might be able to design one that seeks the aid of a human. If it can't find one, it will feel distressed, and emit whatever signals it has learned or was designed to emit for summoning humans. You can put things like that either in a robotic way or in a human way, but the organization is the same. A change in the goal-perception requires appropriate changes in the state of the physiological system, which are sensed. That's what emotion is, in PCT.
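[Editorial note: the battery-sensing robot can be sketched as two interacting loops, where the sensed charge adjusts the reference for activity. A minimal illustration with made-up gains and units; it is not Bill's actual design.]

```python
# Two-loop sketch: a "physiological" loop senses battery charge and lowers
# the activity reference when charge falls; a motor loop tracks that
# reference, and acting drains the battery. All parameters are arbitrary.

def step(charge, activity_ref, k_phys=0.5, k_motor=0.8, charge_ref=1.0):
    # Physiological loop: low charge produces an error that lowers the
    # activity reference -- the robot "feels worn out" and rests.
    phys_error = charge_ref - charge
    activity_ref = max(0.0, activity_ref - k_phys * phys_error)
    # Motor loop: activity tracks its (possibly reduced) reference.
    activity = k_motor * activity_ref
    # Acting strenuously drains the battery.
    charge = max(0.0, charge - 0.1 * activity)
    return charge, activity_ref, activity

charge, ref = 1.0, 1.0
for _ in range(20):
    charge, ref, activity = step(charge, ref)
# As charge falls, the activity reference falls with it: rest follows effort.
```

The felt "tiredness" here is just the sensed physiological state entering the hierarchy as another controlled variable, which is the organizational point being made above.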

O.K. So there is a hierarchical control system that controls the physiological perceptions generated by action. Is this the same hierarchical control system that controls external perceptions, or a different one? If the latter, how do the two hierarchies interact?

BG earlier: Reorganization is a fundamental feature of the model, emotion, as far as I can tell is not. The model incorporates conflict, but conflict occurs whether or not you are aware of it. Is that not so?

BP earlier: You're still writing with no knowledge of what I have written about emotion. Stop guessing and read it. Nothing you're saying has any relationship to it.

BG: I am sorry Bill, but I read the attached paper (thanks). It is very clear and, as far as I can tell, completely consistent with what I have been saying about the model. If I am mistaken, there must be studies where the physiology underlying emotions plays a role in the predictions made by the model. Are there such studies?

BP: No. I haven't programmed emotions into any models yet. The design I have so far seems workable, but I can't prove it is. What's the status of the other theories you propose or prefer? And what's all this insistence that the model has to make predictions? I haven't demonstrated any models that make predictions.

BG: I'm one of those old-fashioned types who only trust models that make predictions. You can always build a model that fits a data set. For example, the intentional design model does a great job of explaining what we already know. Unfortunately it does not predict anything, which means that no conceivable new data can disprove it. Karl Popper taught me that is a no-no.

BG: If not, I stand by my claim that emotions play no role in the predictions of HPCT. This is not a criticism of HPCT, which works perfectly well without emotions. In my view your "theory of emotions" is a story. It's nice, but it isn't necessary.

BP: Emotions play no role in the demonstrated models of HPCT, because no emotions have been included in those models, not because they couldn't be included. Including emotions would result if you included sensing the physical state of the system along with sensing other controlled variables. I don't anticipate any problems with giving a model emotions. I've just been working on other things. There's a lot of unfinished business in PCT and I'm only one person. I'm not the only one who could do it. I'm just not as interested in emotions as other people seem to be, not that I don't have pretty ordinary emotions.

BG: HPCT is purely a control model. I could tell a story in which a thermostat is frustrated when there is a persisting difference between its reference level and the temperature of the room. But that would not improve the prediction of the model (the thermostat will run the furnace continually until the latter runs out of oil, at which point the thermostat will still leave the switch to the furnace "on").
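[Editorial note: Bruce's thermostat story can be run as a simulation. A minimal sketch, assuming a bang-bang thermostat and a room that loses heat as fast as the furnace supplies it, so the error persists; all values are illustrative.]

```python
# A bang-bang thermostat with finite fuel. Net heating is zero (losses
# cancel the furnace output), so the error never closes: the furnace
# runs until the oil is gone, and the switch stays "on" afterward.

def run_thermostat(room_temp, setpoint, oil, heat_per_tick=0.0, ticks=100):
    switch = False
    for _ in range(ticks):
        switch = room_temp < setpoint       # the thermostat only compares
        if switch and oil > 0:
            oil -= 1                        # burn oil
            room_temp += heat_per_tick      # net heating (zero here)
    return switch, oil

switch, oil = run_thermostat(room_temp=15.0, setpoint=20.0, oil=50, ticks=100)
# The oil runs out and the switch is still on, as the story predicts.
```

Attributing "frustration" to this loop adds nothing to the trajectory the simulation produces, which is exactly the point Bruce is making; Bill's reply below is that a one-level system was never claimed to be more than that.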

BP: In PCT no form of prediction is needed, though once in a while it can help, and in certain types of system (like automatic aircraft landing systems) a prediction of future states is itself the controlled variable: the action is changed to keep the prediction in a particular state, such as the aircraft's touchdown point on the runway.

If you design a dumb one-level system, what you will get is a dumb one-level system. Why not design a smart thermostat, if that's what you want?

BG: my point was not that the thermostat is dumb but rather that there is no need to invoke the idea of feelings. Now that you have explained that the system is monitoring and controlling its internal state I can see how emotions play a role in the HPCT model.

BP: That's a non-sequitur. The feelings of emotions are side-effects, just as the position of your elbow is a side-effect of reaching for something. But the changes in somatic state that the feelings report are necessary to provide the appropriate physiological backing for the motor control systems. The attachment should make this clearer.

BG: Again, does the physiology play a role in the predictions? If not, the model works without reference to the physiology. I could be wrong, of course. There could be such models that I am simply unaware of.

BP: Bruce, you can't conclude that because I haven't explicitly put some controlled variable into the model, it couldn't be done. You haven't even tried to imagine how it could be done, perhaps because you don't want it done. Is that it? Is the problem that you don't want emotions reduced to something a machine could have? Do I have to drop everything else and do it for you? Hmm. I hadn't thought that perhaps you're sneaky enough to be needling me until you get me to do it. Well, it's not going to work. Probably not.

BG: No, I expect Rick will get around to that one day.

BG: To be more accurate, I don't believe that you can understand behavior using nothing but a hierarchy of control systems.

BP: Thanks, that's right in line with my definition of belief. Beliefs are always about something imagined, not something in actual perception. You're imagining, for reasons I can only guess at, that we can't understand behavior using nothing but a hierarchy of control systems. I imagine that we can, and have been trying to do it for some time. What are you trying to do, other than telling me I shouldn't be doing this?

BG: I am trying to describe a system that does not involve a potentially unlimited number of higher levels to establish goals. Since infants learn to do things, I am reluctant to postulate that they are controlling at the level of principles and above. Call me old-fashioned. But I now see that you and I agree that there is a highest level. (Although my highest level is not quite as high as yours.)

BG: The model has a built-in "out" that makes it unfalsifiable. The highest level in the hierarchy proposed to model a behavior has a reference level. What is the source of this reference level? A still higher level.

BP: I guess you've been away every time the subject of the top level has come up, which has been often. Infinite regress, of course, always threatens. But you don't have to worry about that here, because above the top level in the human hierarchy there is nothing but a bone called the skull. There's no room for any more. Since there is a highest level (whether or not I've found it), some explanation has to be found for the reference inputs to the highest-level comparators, an explanation other than a signal from a higher system.

Actually, this same consideration holds from conception onward. There is always a highest level of organization that has become active at any given point in life. Frans Plooij has studied the way the levels of control come into being in both chimpanzee and human babies. They apparently are well described, and in the right sequence, by my proposed levels of control. I haven't seen his latest findings on the top levels, but most of the others, from sequences on down, are well observed. At least the proposed levels fit what is observed (Hetty Plooij, Frans' late wife and collaborator, called the fit "uncanny"). The Plooijs accumulated their chimpanzee data before hearing of PCT.

BG: That's great. I suspect that the chimpanzees' hierarchy cuts off well before principles.

When a level is the highest one that has currently become organized, where do its reference levels come from? First, we have to realize that zero is an admissible setting for a reference signal: it means that the system should keep the perception in question at zero. So the absence of a reference signal tells the system to avoid having the associated perception, which is what we observe at first in babies and young children. But "fear of strangers" and other such avoidances go away when whatever level is involved begins to work better and the baby can recognize objects, tastes, sounds, movements, daddy, rules, and so on.

BG: Perhaps my higher levels have reference signals set at zero and that is why I am unaware of them.

There are also reference signals that might be inherited -- think of the bower bird with its compulsive desire to see a fancy nest being built. It clearly couldn't inherit the movements needed to build such a nest; it has to learn how to control its perceptions to make them match the inherited blueprint.

BG: Agreed.

Reference signals might come from reorganization when there is no higher-order source working yet. They might be established at random, just to see what will happen. I'm sure you could add to the list of possibilities, if you cared to.

BG: I'll be glad to. They might come from memory. In fact, I'd go so far as to suggest that this is one reason we have such an elaborate memory system. Evolution did not select for it because it had nothing better to do, I should think.

BG: I am not saying that this model does not describe behavior; I am simply saying that I think there are other ways to set the highest level in a working hierarchy. I am not sure why you find this so objectionable. You have often said that the models developed so far do not test your conjectures about the higher levels.

BP: You're conflating two ideas. I haven't tested conjectures about higher levels (though others have) because I don't know how to simulate them. I've barely got to the level of relationships, and skipped events to get there. As my remarks above should make clear, I have never said that the highest reference levels have to be set by higher control systems and have conjectured at length, though evidently not sufficient length, about other possibilities. I have never found the idea of other sources of reference signals objectionable. What did I say that made you think that? Or is it just that you think I'm too dumb to realize that there isn't any system higher than the highest level of systems?

BG: Can you tell me who has tested conjectures about higher levels? That would be very interesting. I'm sorry if I misjudged you. My concern is that HPCT seems to be controlled by principles or something above principles. I find human behavior not quite as lofty as you do. (But see above. My reference levels may be set to zero.)

BG: If you want my proposal, here it is. The organism looks at its environment and attaches "reward labels" to what it sees. These labels are based on its prior experience. The system then controls the perception associated with the highest expected reward. This oversimplified model obviously needs development and expansion to account for the "delayed gratification" mechanisms associated with the prefrontal cortex.

BP: Up to a point I agree. The way I have frequently put this is to say that certain perceptions (once the necessary input function has become organized) are sought because achieving them corrects intrinsic error (I trust you remember how that is connected with reorganization in PCT). Those perceptions are remembered and selected as reference signals, telling the system "Have that perception again." The organism will then do, or learn to do, whatever is needed to create that perception when intrinsic (and perhaps other) errors occur again.

BG: Great, we don't differ then.

If an outside observer has control of something that the organism needs in order to achieve that perception, that observer can create contingencies in the environment such that whatever is needed will be provided as a "reward" only when the organism exhibits movements or behaviors that the observer wants to see. The observer, ignorant of the underlying control processes, interprets the reward as if it is causing the behavior to occur, not realizing that the behavior is being produced by the organism as a means of controlling the level of the rewarding thing it perceives.

BG: I couldn't agree more. The organism's goals determine what it finds rewarding, not the environment.

I don't think the system chooses the "highest expected reward." It's just trying to get some specific thing it already wants. If it doesn't already want the thing the observer offers, it won't try to get it. A reward is just a controlled variable.

BG: Since the organism wants many things at the same time, it must have a way of establishing priorities. That is the only point I was trying to make.

As to delayed gratification, I think that way of putting it is based on a misinterpretation. Of course we sometimes delay getting something now in order to get something we want more later; for example, I delay going through a door until I have completed opening it. There are certain sequences that a wise person controls because they work better than other sequences.

BG: There are a host of experiments on delayed gratification in young children. Most of them simply involve something like getting two cookies if you wait five minutes as opposed to one cookie if you don't wait. What is interesting about these studies is that the ability to delay gratification correlates highly with success in school and later life.

But the time delay is an irrelevant side-effect. What we should be paying attention to is levels of perception (which I get the impression you don't believe in).

BG: I am not sure what I said that led you to conclude this. I do think perceptions at the level of principles and above are pretty hazy, but there is no question that we can and do perceive sequences and patterns.

We need to control some things as a means of controlling others. While we're children, not too sure of how causation works, we may try to have our cake and eat it too, but soon learn that this doesn't work; we can do one of those things but not both.

BG: Not all children learn this, unfortunately. You will find this hard to believe, I am sure, but there are adults who believe they can reduce taxes and deficits at the same time.

As we grow up we learn about higher and higher levels of the world (as it seems), and among the things we learn are strategies and principles, which organize sequences and categories and logic and such stuff. We learn that there are things to strive for that can be done only if we select certain goals at the lower levels to avoid conflicts; don't spend that money now because it's going to pay for going to college. This isn't a moral issue or a character issue or a duty or a way of being responsible, it's just a matter of distinguishing between lower-order and higher-order goals. Naturally, if you put one goal aside now in order to achieve a higher one, the result is a delay in correcting an error, but that's only because the higher-order goal is going to be achieved later. If it could be achieved right now, why wait? Just to enjoy the internal conflict? There's no special virtue in delaying gratification; sometimes it's kind of stupid to do that. Some people seem to do it just to tantalize themselves, or prove they are good sensible conservatives or Puritans.

BG: As I said, this ability proves to be important for success at many levels. Some people never achieve it and pay a high price for their inability to delay gratification.

BG: Can an HPCT model account for the same behaviors as this "model"? I'm sure it can. Whatever perception the organism controls has a reference level established by a higher level in the system. You are committed to what I call a "pure control" model. I am simply suggesting a "hybrid control" model in which the outside world helps to establish the goals that an organism pursues. I don't believe that this suggestion is nearly as radical as you seem convinced that it is.

BP: I think it would be very nice of the outside world to help us establish goals, but I don't think it can do that. How can it know what goals would fit in with all the other goals we have? More to the point, how can it reach inside a brain and set the value of a reference signal? Perhaps I'm missing something here. How can anything outside the skin help establish a goal in the brain? Oh, wait, I forgot about the bower bird -- heredity can do that with some basic built-in goals. But then it's not just "helping", it's just establishing the goal. Most inherited goals, which we see as infantile reflexes, are soon reorganized away.

BG: Really? I would hate to think that goals related to survival would be reorganized away. The external world "reaches inside the brain" via memory. In this way it helps establish goals and therefore helps set the value of reference signals. I doubt that children decide "what they want to be when they grow up" as a result of random reorganization. But I could be wrong.

BP earlier: Why don't you try a PCT analysis of the bidding experiment? That would be much more interesting than having me making guesses. I could make up stories about why the results came out as they did, but I don't know how I'd test them. How did you test yours?

BG: As you can see, I adopted Rick's liquidity model. The authors differ, but I don't think we need to pursue this further. There is always a deus ex machina in the form of a higher level that sets the topmost reference level.

BP: Is that what you think? I don't.

BG: Fine, we agree.

Bruce

[From Bruce Gregory (2010.02.06.2004 UT)]

[From Bill Powers (2010.02.06.1113 MST)]

Bruce Gregory (2010.02.06.1425 UT) –

BG: I’d be curious to know what you would say about the behavior of the 9/11 hijackers. Clearly they were controlling their perceptions of flying into a World Trade Center tower. I would not think this action was based on anything that might be labeled as “hypothetical, tentative, probabilistic, unreliable or unproven.” Was it knowledge? Or was it something else?

It was belief. Some of them thought that they would wake up in Paradise with 72 virgins to ravish. Most of the others, probably, imagined that they were doing the work of Allah. They all had what you call stories, which are imagined things, not reports of observations. I think those stories involve beliefs about things that are hypothetical, tentative, and so forth. Flying into the World Trade Center towers was a means to an end. The end was something they imagined would happen as a result. In fact that’s an excellent example, because you can only imagine what will happen after you’re dead.

BG: My point was that most people would not call these beliefs tentative or provisional. Misguided, yes. Tentative, no. I doubt they could change them on short notice.

Bruce

[From Rick Marken (2010.02.06.1500)]

Bruce Gregory (2010.02.06.1945 UT)--

Rick Marken (2010.02.06.1130)--

I will leave it as an exercise for
you to think of how you might test to determine whether or not the
behavior of a stone falling to the ground is intentional (hint:
William James knew how to do it).

In fact, I did test for the controlled variable. I conjectured that the rock
was controlling for the perception that it was accelerating at 32
ft/sec/sec. I then introduced an obstacle (a table) to see if the rock would
attempt to restore its preferred rate of acceleration. Unfortunately the
rock had no way to propel itself to the edge of the table. As a result
it reorganized and set its reference level to 0 ft/sec/sec.

Well, it's nice to have the old Bruce Gregory back;-)

Your test is on the right track, except that you applied only one
abrupt disturbance, which seemed to be effective, indicating no
control. But, as you note, there is a possible alternative
explanation: the disturbance only appeared to be effective because,
coincident with the application of the disturbance, the stone's
reference for acceleration went from 32 ft/sec/sec to 0 ft/sec/sec.
Unlikely, but still a possibility. That's why an important part of the
test is to apply many disturbances or, if possible, continuously
varying disturbances, while monitoring the hypothetical controlled
variable to see if it varies in concert with these disturbances.

If the hypothetical controlled variable does vary along with the
disturbances, that increases your confidence that the system is
not purposive. If the hypothetical controlled variable does not vary
along with the disturbances -- or if there is only a very weak
relationship between the variance in the hypothetical controlled
variable and the disturbances applied to it -- it increases your
confidence that the system is purposive. For example, you could drop
the stone into beakers containing solutions of different viscosities
and calculate the stone's acceleration (the hypothetical controlled
variable) in each. If the acceleration in each solution is always
proportional to the viscosity then there is more evidence that the
stone is not controlling acceleration.
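[Editorial note: the procedure Rick describes can be simulated directly: apply a continuously varying disturbance to a variable and check whether it covaries with the disturbance (no control) or stays near a reference (control). A minimal sketch using an integrating controller; the gain, time step, and disturbance waveform are arbitrary assumptions.]

```python
# The test for the controlled variable: compare a stone-like (uncontrolled)
# variable with a purposive (controlled) one under the same disturbance.
import math

def run(controlled, gain=5.0, dt=0.1, steps=500, ref=0.0):
    v, out, trace, dists = 0.0, 0.0, [], []
    for i in range(steps):
        d = math.sin(0.05 * i)             # continuously varying disturbance
        v = out + d                        # variable = action + disturbance
        if controlled:
            out += gain * (ref - v) * dt   # action opposes the disturbance
        trace.append(v)
        dists.append(d)
    return trace, dists

def covary(xs, ys):
    """Sample covariance between two equal-length traces."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

stone_like, d1 = run(controlled=False)    # variable simply tracks d
purposive, d2 = run(controlled=True)      # variable resists d
# covary(stone_like, d1) is large; covary(purposive, d2) is near zero.
```

A strong covariation between disturbance and variable argues against control of that variable; a weak one, with the variable held near a fixed value despite the disturbance, argues for it, which is the logic of the viscosity experiment above.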

Of course, you could argue (as I'm sure you would) that the stone is
changing its reference for acceleration each time it goes into a
beaker of solution of different viscosity, consistently lowering the
reference in the high viscosity liquid and raising it in the low
viscosity liquid. But that would just show that you are an intentional
agent that resists disturbances to your opinion of PCT;-)

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.02.06.1555 MST)]

Bruce Gregory (2010.02.06.2000 UT) --

Bruce, I get a strange impression that you're reading my posts through a short slot in a big sheet of paper, so you can see only part of one sentence at a time and immediately forget what came before it. And if I ask you what seems a hard question, you ignore it entirely. What's going on here? I don't want to indulge in one of our flaming arguments again, but it truly seems to me that you've dug into a position and are firmly determined not to be budged from it, using any debating strategy that comes to mind, fair or foul. Consider the following exchange:

> [From Bill Powers (2010.02.06.0850 MST)]
>
> Bruce Gregory (2010.02.06.1408 UT) --
>
>>> BP: So pleasure is the amount of dopamine released? It certainly doesn't feel like that, does it? Not that I know what dopamine feels like.
>>
>> BG: So vision is the result of photons falling on the retina? It certainly doesn't feel like that, does it? Not that I know what neural signals arising in the retina feel like. Come on Bill, I'm sure you can do better than that.
>
> BP: Yeah, I guess I can. Does injecting dopamine anywhere in the brain, say the forebrain or the cerebellum, feel like pleasure, or does it have to be in a particular place? Light, after all, doesn't affect the brain much unless it lands on the retina. Does dopamine cause pleasure when injected in any place where dopamine neurotransmitters are found, or in only some of those places?

BG: It is difficult to locate subjects who are willing to let you inject dopamine into their brains at random locations. The situation may improve if the unemployment rate remains high for any length of time.

That is clearly dodging the question by trying to turn it into a bitter sort of joke. There's an obvious contradiction here that you would have seen yourself if you weren't intent on -- I don't know on what. One proper answer to my question would be something like "I don't think anyone knows the answer to that." And because that answer is true, it's not known whether it's the dopamine that causes pleasure, or only a small volume of the many places in the brain where the dopaminergic synapses are located. Neuroscientists leap to conclusions across vast gulfs of missing data, a phrase I have used before and keep seeing more justification for using. You cite them completely uncritically.

Here's another example: cutting off debate by simply declaring a position and reciting some conventional nonsense without half thinking about it:

> BP: I thought you were talking about some other brain model, and just from what you wrote above, I'd say you are. Can you sketch a diagram of a system like the one you describe, showing how the prediction is made, and what happens after that to initiate the action that is required for achieving the reward? How is that action arrived at? If I understand what you're proposing, it's quite a popular model in neuroscience, involving analysis of sensory information, prediction of outcomes, and planning actions. It's about the only model I've seen used in that field so far. It's not PCT.

BG: The brain remembers what it perceived in the past and what actions it initiated. There is some evidence that these memories reactivate the premotor cortex and that the motor outputs are inhibited. What sort of evidence? Victims of certain brain disorders compulsively pick up tools they encounter and attempt to use them. Compulsive behaviors in general suggest that recalling a perception can initiate the same motor activities (washing hands, checking to see that the door is locked) that accompanied the earlier perception. (Compulsive behaviors tend to involve fear, so it is conjectured that the amygdala plays a role in these repetitive behaviors.)

That is really lousy evidence full of random guesses and giant leaps to unjustified conclusions. Nobody knows what any specific behavior has to do with parts of the brain because every specific behavior is just one level in a whole hierarchy of levels. The motor activities you mention -- picking up tools, washing hands, checking the door -- are not motor activities, but consequences, controlled results of motor activities. If you understand PCT you already know that, but you seem able to ignore what you know whenever it is convenient.

Oh, hell, I'm so mad about this post of yours that I just don't want to go on with this, and I won't. Escalation ends NOW.

Bill P.

[From Bruce Gregory (2010.02.06.0054 UT)]

[From Bill Powers (2010.02.06.1555 MST)]

Bruce Gregory (2010.02.06.2000 UT) --

Bruce, I get a strange impression that you're reading my posts through a short slot in a big sheet of paper, so you can see only part of one sentence at a time and immediately forget what came before it. And if I ask you what seems a hard question you ignore it entirely. What's going on here? I don't want to indulge in one of our flaming arguments again, but it truly seems to me that you've dug into a position and are firmly determined not to be budged from it, using any debating strategy that comes to mind, fair or foul. Consider the following exchange:

BG: I will refrain from pointing out the questions I've asked you that you seem to have ignored. Difficult as it may be to believe, I am not arguing with you; I am simply trying to understand your position, which is considerably more developed than mine. I do not question your data or your models. You attribute motives to me that I simply do not understand. If anything your description of me seems to apply at least as accurately to yourself.

> [From Bill Powers (2010.02.06.0850 MST)]
>
> Bruce Gregory (2010.02.06.1408 UT) --
>
>>> BP: So pleasure is the amount of dopamine released? It certainly doesn't feel like that, does it? Not that I know what dopamine feels like.
>>
>> BG: So vision is the result of photons falling on the retina? It certainly doesn't feel like that, does it? Not that I know what neural signals arising in the retina feel like. Come on, Bill, I'm sure you can do better than that.
>
> BP: Yeah, I guess I can. Does injecting dopamine anywhere in the brain, say the forebrain or the cerebellum, feel like pleasure, or does it have to be in a particular place? Light, after all, doesn't affect the brain much unless it lands on the retina. Does dopamine cause pleasure when injected in any place where dopamine neurotransmitters are found, or in only some of those places?

BG: It is difficult to locate subjects who are willing to let you inject dopamine into their brains at random locations. The situation may improve if the unemployment rate remains high for any length of time.

That is clearly dodging the question by trying to turn it into a bitter sort of joke. There's an obvious contradiction here that you would have seen yourself if you weren't intent on -- I don't know on what. One proper answer to my question would be something like "I don't think anyone knows the answer to that." And because that answer is true, it's not known whether it's dopamine itself that causes pleasure, or only dopamine acting at some small subset of the many places in the brain where dopaminergic synapses are located. Neuroscientists leap to conclusions across vast gulfs of missing data, a phrase I have used before and keep seeing more justification for using. You cite them completely uncritically.

BG: I'm not surprised that you keep seeing more justification for your beliefs about neuroscientists. The phenomenon is called confirmation bias. It is a lot easier to identify in others than it is to identify in ourselves. Your ire is appropriate when it comes to the failure of neuroscientists to realize the importance of negative feedback control. Unfortunately, your frustration leads you to condemn a great body of research that you seem to know little about.

Here's another example: cutting off debate by simply declaring a position and reciting some conventional nonsense without half thinking about it:

> BP: I thought you were talking about some other brain model, and just from what you wrote above, I'd say you are. Can you sketch a diagram of a system like the one you describe, showing how the prediction is made, and what happens after that to initiate the action that is required for achieving the reward? How is that action arrived at? If I understand what you're proposing, it's quite a popular model in neuroscience, involving analysis of sensory information, prediction of outcomes, and planning actions. It's about the only model I've seen used in that field so far. It's not PCT.

BG: The brain remembers what it perceived in the past and what actions it initiated. There is some evidence that these memories reactivate the premotor cortex and that the motor outputs are inhibited. What sort of evidence? Victims of certain brain disorders compulsively pick up tools they encounter and attempt to use them. Compulsive behaviors in general suggest that recalling a perception can initiate the same motor activities (washing hands, checking to see that the door is locked) that accompanied the earlier perception. (Compulsive behaviors tend to involve fear, so it is conjectured that the amygdala plays a role in these repetitive behaviors.)

That is really lousy evidence full of random guesses and giant leaps to unjustified conclusions. Nobody knows what any specific behavior has to do with parts of the brain because every specific behavior is just one level in a whole hierarchy of levels. The motor activities you mention -- picking up tools, washing hands, checking the door -- are not motor activities, but consequences, controlled results of motor activities. If you understand PCT you already know that, but you seem able to ignore what you know whenever it is convenient.

BG: I thought I had said that the brain initiates motor activity which is controlled by negative feedback (PCT). Apparently you don't like this description. You have a very special way of talking in which what everyone else calls motor activities you say are not motor activities. If that terminology disturbs you, please tell me how you would prefer me to express the idea (unless you believe that the idea is wrong). As for random guesses and giant leaps to unjustified conclusions, I learned those tricks from a master. How can you say that nobody knows what any specific behavior is? You dismiss a vast body of research. Why? Because it conflicts with your model. We know there is a primary motor cortex. We know where it is and we know its connections to the frontal lobes and to the cerebellum in considerable detail. You don't need to be familiar with this data, but to dismiss it without so much as a by-your-leave makes you sound like a global warming denier. (This is the coldest winter in years. Washington D.C. is under two feet of snow. These climate scientists don't know their ass from a hole in the ground.)

Oh, hell, I'm so mad about this post of yours that I just don't want to go on with this, and I won't. Escalation ends NOW.

O.K. I have no idea what you are so riled up about, but I will honor your wishes.

Bruce

[From Bill Powers (2010.02.06.1650 MST)]

Bruce Abbott (2010.02.06.1100 EST) –

BP earlier: I wouldn’t use those
common-language terms or say “because” as you say Skinner would
have done when it’s a non-sequitur. OK, the delivery has followed presses
in the past. What does that have to do with pressing the lever this time?
Could it be that the rat has learned what action to produce in order to
create a given perception?

BA: Do you really mean “action”?

BP: Let me see. Do I? In this case I think I do, although even though the
kind of action is pretty much determined, the amount needed (which could
include none or some in the opposite direction) is unpredictable. But
perhaps I should have said something more like “which perception to
vary as a means of causing food to be delivered."

But I was focused primarily on the “non-sequitur” part. An
event occurs somewhere and is experienced. What does that have to do with
pressing the lever this time? Trying to work out an answer to that
question is the basic problem for understanding behavior, isn’t it? A
large chunk of an animal’s internal organization lies between the
external event in the past and the present behavior, starting with the
sensory nerves and proceeding through some number of levels of
organization, with memory getting in there, as well as reorganization. To
say merely that the lever is pressed “because” pressing it
produced food in the past packs and hides too much in that one word,
“because.” And anyway, we know that in general the action must
be varied, not repeated, to get the same result as before. Sometimes
repeating the action works, especially in simplified circumstances as in
a laboratory, but most of the time it doesn’t. It’s the surprisingly
appropriate variations we really have to account for, not the repetitions
that work only under special circumstances. If we account for the first,
that accounts for the second, too. The reverse is not true.

BP earlier: That’s how we would
replace the “because” statements in PCT-compatible language.
There’s nothing about past events that can affect present behavior in the
slightest, unless there was some change in the physical system to alter
the relationship of actions to perceptions. Events don’t cause anything;
they just happen.

BA: PCT and reinforcement theory agree that past events affect present
behavior.

BP: Dammit, no they don’t. PCT says that is the wrong interpretation.
It’s true that after a past event, behavior may have changed, but that’s
not an indication that the event affected the behavior. In fact, the past
event may have disturbed something that was under control, and it was
that perturbation that resulted in opposing effects from behavior – but
any event that disturbed the same variable in the same way would have
resulted in the same opposing behavior, because it’s the change in the
controlled variable, not the disturbance, that is sensed and
opposed.
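The distinction can be made concrete with a small simulation (my own sketch with invented parameters; nothing here comes from the post): the model senses only the controlled variable, never the disturbance itself, yet its output settles at a value that opposes whatever disturbs that variable.

```python
# Hypothetical one-level control loop (illustrative parameters only).
# The system senses cv, compares it to a reference, and integrates the
# error; it has no input that carries the disturbance d itself.

def run(disturbances, reference=0.0, gain=50.0, slowing=0.01):
    output = cv = 0.0
    for d in disturbances:
        cv = output + d                   # environment adds the disturbance
        error = reference - cv            # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output
    return output, cv

output, cv = run([5.0] * 1000)
# output settles near -4.9, cancelling the disturbance; cv stays near 0
```

Any event that pushed cv by the same amount would evoke the same opposing output, because only the change in cv is sensed and opposed.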

BA: They do so by affecting the
organism’s present organization. In PCT, past behavior that has failed to
correct error in a controlled variable fails to slow reorganization. Past
behavior that has succeeded in correcting that error does slow
reorganization, more-or-less freezing in the current, relatively
successful organization.

BP: Wait a minute. Now I realize that when you say “changes
behavior” you don’t mean changing the amount or direction of a
behavior, but altering the manner in which the system acts when the
environment changes in some way – changing to a different form of
behavior. You’re talking about reorganization.

But now you seem to be saying that past events affect the organism’s
present organization. I don’t agree with that, either. Past events
stimulate the sensory endings, and direct physical effects like cuts and
bruises aside, that is all they do. They can’t reach inside the brain and
alter the organization, can they? It’s the internal reorganizing
processes that detect the state of some variable and institute
reorganizations as a result of detected error, and that, as you say,
“freeze” the organization that was present when, for any reason
whatsoever, the detected error is corrected. If the same error occurs
again because of a change in the environment, reorganization might come
up with a completely different behavior, because all that matters is that
the error gets corrected, not how it gets corrected.

BA: In reinforcement theory,
behavior that produces certain types of events changes the internal
organization of the organism, so that under similar conditions such
behavior is repeated.

BP: Yes, and that is why that theory is wrong. It is not behavior that is
repeated, but certain consequences, regardless of what behavior is
required to produce them.

BA: Either way, our current
organization is a function of certain past events, including perceptible
effects of our own behavior.

BP: I hope you really concentrate on this idea and see what’s wrong with
it. I’ve been trying to get this point across for so many years that I’ve
lost count. Repeating behavior does not, NOT, normally cause the same
consequences to repeat. That happens only under special circumstances.
Normally what has to be done is to vary the behavior in just the way
needed to generate the same consequence as before. And that is what
organisms can do. Only a negative feedback control system can do that. A
stimulus-response system can’t. A system that is reinforced for repeating
a behavior can’t. There is no one “perceptible effect” of any
given behavior. The environment is always there, varying in itself and in
our relationship to it. That means that to produce the same effect twice,
behavior must alter in just the way required to cancel out any changes in
the environmental influence on the effect. If you repeat the behavior,
the environmental influences will cause a different effect to appear.
That isn’t what we observe. We observe that the same effect repeats in
spite of the environmental changes, because the behavior changes just the
way it has to in order to preserve the same NET effect.
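That point can be illustrated with a short simulation (mine, not anything from the discussion; all parameters are invented): drive a simple control loop with a slowly wandering disturbance, and the output varies as the disturbance's mirror image while the controlled variable barely moves.

```python
import math

# Illustrative sketch: the "same consequence" (cv held near its
# reference) repeats only because the action varies, step by step,
# to cancel a continuously changing environmental disturbance.

def track(steps=2000, reference=10.0, gain=100.0, slowing=0.01):
    output, history = 0.0, []
    for t in range(steps):
        d = 4.0 * math.sin(t / 50.0)      # slowly varying disturbance
        cv = output + d
        error = reference - cv
        output += slowing * (gain * error - output)
        history.append((output, cv))
    return history

history = track()
# after settling, cv hovers near 9.9 (a small steady-state error from
# the finite gain) while output swings through a range of about 8 units
```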

BP earlier: Terms like
expectation are essentially useless to us unless you can express the same
meaning in PCT terms.

BA: Is it useless to ask whether the psychological phenomena such terms
refer to are real? If they are judged to be real, do they not need to be
explained? Is it then useless to ask whether they can be accounted for
within PCT?

BP: All right, tell me what the phenomenon is that is indicated by the
word “expectation”. Expectation is the name of a phenomenon.
What phenomenon? What do you experience that you call
“expecting” something? If you can tell me that, we can look for
a PCT description of it, without having to go through the word
“expectation” on the way.

BP: What is in the model is a
perceptual input function, a comparator with a reference level, and an
output function. In a hierarchical model there are many of these things,
connected in a specific way. Unless you can connect
“expectation” to something in this model, you’d be better off
finding out what the term indicates, and starting at that level. Just
saying “expectation” doesn’t explain anything.

O.K., what you refer to as the model above is only that portion of the
system that does not loop through the environment. You’re free to do
that, of course, but for what purpose?

Because that is the only part that is in the organism. What is outside it
is the ENVIRONMENTAL feedback function. The model of the environment is
easy to construct because we can do that without the organism
present.

To deny me the point I was
making? The models we construct always include the effect of the system’s
output on the controlled variable. The model system doesn’t need to
“expect” what effect its behavior will have on the controlled variable,
its behavior just has a certain effect, which was built into the system
by the programmer. The system doesn’t need to speculate about how its
behavior might affect a certain perception, and is never surprised on
those occasions when something unexpected happens instead. Rick’s models
to date haven’t needed to include expectation because the expectation
part of the model takes place inside Rick: He expects the model’s actions
to closely follow the actions of the participant during the experimental
run, and if the results violate that expectation, he revises the model.
The final, successful version has the correct relationships built into
it.

BP: If the model includes reorganization, it doesn’t need to know
anything about the feedback function, not even its sign. Because we start
the weightings at zero and make parameter changes small, starting in the
positive-feedback direction makes the errors increase, so reorganization
comes into play immediately, before fatal runaway can happen. Only
negative feedback will make the errors smaller, and that is what the
final result is. In the ArmControlReorg demo that you’re very familiar
with, it often happens that there are moments of instability, but the
reorganizing process never seems to have any problem getting through
them.
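Here is a toy version of that idea (my own caricature; Powers' actual demos use E. coli-style random parameter changes, while this one simply flips the output sign): the controller starts with the wrong sign, positive feedback makes the error grow, and sustained error growth triggers a reorganizing change that restores negative feedback before runaway gets far.

```python
# Toy "reorganization" sketch (invented parameters, simplified rule):
# the controller starts with the wrong output sign, so feedback is
# positive and error grows; five consecutive error increases trigger
# a reorganizing sign flip, after which control settles.

def reorganize(env_sign=-1.0, reference=10.0, disturbance=3.0,
               gain=20.0, slowing=0.05, steps=400):
    sign, output, prev_error, grew, flips = 1.0, 0.0, None, 0, 0
    for _ in range(steps):
        cv = env_sign * output + disturbance
        error = reference - cv
        if prev_error is not None and abs(error) > abs(prev_error):
            grew += 1
            if grew >= 5:                 # error keeps growing: reorganize
                sign, grew, flips = -sign, 0, flips + 1
        else:
            grew = 0
        prev_error = error
        output += slowing * (sign * gain * error - output)
    return abs(error), flips

final_error, flips = reorganize()
# one sign flip occurs early, and the final error is small (about 0.33)
```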

When reorganization is not included in the model, then of course the
designer has to specify the environmental feedback function.

BA previously: So what would
distinguish a system that developed expectations from one that did not?
Perhaps a crucial test would be to observe what the system did if the
expectation were violated.

BP also previously: I wouldn’t start there, because I wouldn’t know how
to tell if there were an expectation at all. Maybe systems don’t ever
develop expectations – how would you know? The only way to find out what
you’re talking about is to settle down and look at something you expect,
and take the experience apart into its components.

BA: You’ve made a prediction,
based on the information you have at hand (including relevant past
experience), about when the train will arrive. I don’t know that you have
necessarily imagined the train arriving on time, although of course you
might. That’s one way of expressing the prediction. You might also
express it in words. It’s a perception, one way or the other, of a
relationship between the clock and the train’s arrival, although not a
controlled perception.

BP: That’s not what I had in mind. How about taking an actual
circumstance you remember, or finding one going on right now, and looking
at the experience you call “expecting” something? What would
you say you are “expecting” right now? How does that activity
appear in your conscious experience? Is the expected thing happening, or
is it being imagined? Maybe you’d have to wait until you’re clearly
expecting something to occur, but the point is to observe what the
phenomenon of expecting is while it’s happening. There’s no need to
theorize about it if you observe it.

BA previously: If you suddenly reversed the relationship between mouse and cursor movements, a system without an expectation would simply continue to act as it did before, and control would simply fail.

BP: There’s a partial definition of expectation. What is the expectation,
such that when it’s missing, control would fail?

The recognition by the system that its actions are not having the
required effect on the CV.

BP: All right, I’ll change the question: what form does this
“recognition” take?

BA previously: A system that
“expected” the cursor to move as before (based on previous
experience) would find its expectation violated and presumably take
action to sort out the problem.

BP: So how does it know that the expectation has been violated? Is the
expected result present in some form, being perceived or at least having
an effect? Is the perception of the actual action being compared with the
expected action?

BP: Is that how control systems
change their behavior to counteract errors? If you venture a guess as to
how this expectation thing results in taking action, and what kind of
action it would take, and what the problem is that needs to be sorted
out, you would have a useful model, perhaps, in which the term
expectation wouldn’t even appear.

You’re referring to reorganization, of course. I covered that possibility
below.

BP: No, I thought I was referring to ordinary control.

I’m beginning to suspect that the phrase “change their
behavior” is being used one way by you and a different way by me. I
mean, for example, changing 10 pounds of pull to 15 pounds of pull, or to
-10 pounds of pull meaning 10 pounds of push. I mean changing the same
behavior to different states. You seem, here and there, to mean changing
from one category of behavior, like pushing and pulling, to a different
one like twisting. Is that why my reference appeared to involve
reorganization?

BA previously: Although this
seems like a simple enough test for expectation, one might have
difficulty distinguishing between true expectation and reorganization. As
in the case of expectancy, in reorganization the violation of the usual
relation between mouse movement and cursor movement would bring about a
change in the system’s organization; if successful, reorganization would
restore the negative feedback relation and control over the CV would recover.

Well, we won’t know if any of that is relevant until you describe the
phenomenon we’re referring to with the word “expect”. To test
whether expectation is occurring you first have to say what phenomenon
you’re talking about. Pretend I’m the man from Mars who has no idea what
the word expect, or any of its synonyms, means. Just look at what you
experience when you’re expecting something, and describe what you find.
I’m not asking for generalizations, just observations.

BA: Did you ask your
participants what thoughts they had when they first encountered the
reversal? How does reorganization target only the system whose control
has broken down? Attention seems to have something to do with that, but
it’s still an undeveloped aspect of the PCT model.

BP: Since I was one of the participants I don’t have to ask. When I was
trying to prepare for an expected reversal, I controlled much worse, with
lots of false compensations. I did the best when I just carried out the
tracking and reacted calmly but quickly when I found the cursor running
away, reversing my own system. This happened quickly enough and
consistently enough that Rick and I agreed that this must have been an
example of control through changing parameters in a lower system (the
sign of the output function gain, perhaps), rather than changing a
reference signal. I don’t know what I actually did to produce the
compensating internal reversal. A typical reversal episode started with a
very good positive-exponential runaway, followed by an abrupt return of
the tracking error to a low value. The change wasn’t random, so that
would rule out reorganization as we now think of it.

Interestingly enough, even though I am a highly experienced tracker, when
I try very long runs in a simple tracking task, spontaneous reversals
show up, which have the same form as when the program reverses the sign
of the effect of the mouse – only of course, both reversals originated
in me. The spontaneous reversals are disconcerting; it takes a
perceptible time to figure out that something is wrong and fix it, which
implies a higher-order process. And I don’t sense how I fix it. I just
do.

BA previously: Expectation may
be a high-level process involved in planning actions, drawing on
means-end relations learned during previous experience, worked out
logically, or perhaps communicated to us by others. (“You want to
get to the bank? Take Third Street to Maple and turn left.” You then
follow those directions because you expect that they will take you to the
bank.)

BP: That’s more like it. I would say you follow the directions as the
only means you know of getting to the bank, and in the background are
hoping that you’re remembering them right or they were given right. There
might be some sense of expectancy, but I don’t know how that would change
if the destination is a bank or a grocery store. A little more work and
we could just drop the term expectancy, except as a description of a
side-effect of doing all this.

BA: What is the point of having a sense of expectancy, if it plays no
role in behavior?

I didn’t say that. I said that the word expectancy (or the phrase
“sense of expectancy”) refers to a phenomenon and that we need
to examine the phenomenon, not the word, to see what it is and represent
it in PCT terms. If we understand what the phenomenon is and can describe
it directly, we can stop just alluding to it with the word
“expect.” We still haven’t done that.

BP: Did you really say
“planning actions”?

BA: Did you really say “actions”? (See my comment near the beginning of
this post) (;->

That’s still problematic for me. Following the directions given is an
action of a higher-level system,

Au contraire, following the directions given is a consequence of some as
yet unnamed action. In order for following directions to happen, what
must the behaving system be able to do?

but carrying out that
action is done by setting references for a set of controlled variables,
which ultimately are carried out via a complex set of variable means. In
the past I’ve tried to distinguish between behavioral acts, which are
controlled performances, from actions, which are the variable means by
which such acts are produced. Drawing a circle is a behavioral act,
carried out by variable means.

The distinction between "act" and "action" is too confusing, and when it was first invented it had nothing to do with controlled variables. It's much clearer to call controlled variables controlled variables. Look at the mess Skinner made of this idea with his concept of the operant.

BA previously: Expectation seems
less likely to be involved in habitual activities, although then we do
behave “as if” we had them.

BP: The “as if” part is in the observer’s imagination. Throw it
out.

BA: In the case described, that’s my point: it’s unnecessary. This seems
to be a place where Bruce Gregory’s “stories we tell ourselves” comes
into play.

BP: We wouldn’t need to disbelieve the claim of an expectation if we knew
what the phenomenon was well enough to propose a model of it. All I’m
saying is that we don’t need the “as if” part. If an
expectation exists, it does; if not, it doesn’t. Same idea as testing for
a controlled variable. If the variable proves to be under control, it’s a
controlled variable. Otherwise it isn’t.

BA: But then there are those
other cases, where real expectation may be involved. Let’s not throw the
baby out with the bathwater, even if it’s a rather unwelcome
baby.

BP: If you’ll just try doing what I’m suggesting there will be no need to
throw the poor baby away. To what does the term “real
expectation” refer? I’m not doubting that you can find it, or them,
I’m only saying you haven’t done that yet. When you do it, I think you
will find out what part of PCT it corresponds to. I don’t want to suggest
what part that is.

Best,

Bill P.

P.S. This might also help. When I say A affects or influences B, I am
proposing that there is a direct causal link from A to B. Both terms mean
that there are other influences acting at the same time, so we can’t
predict the outcome without knowing what all the other influences are. If
there are no other influences, we say A determines B. And when we say A
controls B, we mean still something different.