[From Bill Powers (2010.02.06.0850 MST)]
Bruce Gregory (2010.02.06.1408 UT) --
BP: So pleasure is the amount of dopamine released? It certainly doesn't feel like that, does it? Not that I know what dopamine feels like.
BG: So vision is the result of photons falling on the retina? It certainly doesn't feel like that, does it? Not that I know what neural signals arising in the retina feel like. Come on, Bill, I'm sure you can do better than that.
BP: Yeah, I guess I can. Does injecting dopamine anywhere in the brain, say the forebrain or the cerebellum, feel like pleasure, or does it have to be in a particular place? Light, after all, doesn't affect the brain much unless it lands on the retina. Does dopamine cause pleasure when injected in any place where dopamine neurotransmitters are found, or in only some of those places?
BP earlier: What is it in the brain that is "expecting" reward, and what does "expecting" mean in terms of a brain model?
BG: Do you know what it feels like to expect that the drink of water will refresh you on a hot day? The brain predicts the reward associated with some action. Based on this prediction it initiates an action to achieve that reward. (In PCT, a reference level is set and a control circuit carries out the action.)
BP: I thought you were talking about some other brain model, and just from what you wrote above, I'd say you are. Can you sketch a diagram of a system like the one you describe, showing how the prediction is made, and what happens after that to initiate the action that is required for achieving the reward? How is that action arrived at? If I understand what you're proposing, it's quite a popular model in neuroscience, involving analysis of sensory information, prediction of outcomes, and planning actions. It's about the only model I've seen used in that field so far. It's not PCT.
"Expecting" that a drink of water will be refreshing is called "imagining" in PCT, and it entails internal generation of perceptual signals rather than deriving them from sets of lower-order perceptions. If you plan to get a drink of water, it's in order to obtain that imagined perception, only for real. The actions that will be needed to get that experience can't be predicted with any accuracy; you'll do what's needed when you actually start to get the drink. Who took the water glass again? How long is my daughter going to be washing her hair in that sink? Who's hogging the bathroom? I have to leave for work -- maybe I'd just better get it there. That's PCT. We plan perceptions; outcomes, not actions.
BG: I agree. My point is that the feeling doesn't play any role in the action. A non-feeling HPCT system works exactly the way a feeling HPCT system works. If I am wrong, please tell me.
BP: You're wrong. An HPCT model with feelings would include controlling for the physiological sensations resulting from acting. For example, we could say that a robot can sense the charge on its battery, so when it has to act very strenuously, it will feel worn out and hungry. It will need to rest -- reduce its level of activity in general -- and eat some electricity. That's actually been built into some robots, though not by me. I might be able to design one that seeks the aid of a human. If it can't find one, it will feel distressed, and emit whatever signals it has learned or was designed to emit for summoning humans. You can put things like that in either a robotic way or a human way, but the organization is the same. A change in the goal-perception requires appropriate changes in the state of the physiological system, which are sensed. That's what emotion is, in PCT.
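A toy sketch of that idea -- not a published HPCT model; the names, thresholds, and numbers are invented -- in which a robot controls both task progress and battery charge. A large battery error is what it would "feel" as tiredness, and it is also what redirects behavior toward recharging.

def step(task_progress, battery, task_ref=100.0, battery_ref=0.8):
    battery_error = battery_ref - battery          # sensed "tiredness"
    task_error = task_ref - task_progress
    if battery_error > 0.3:
        # Physiological error dominates: rest and "eat some electricity".
        action = "recharge"
        battery = min(1.0, battery + 0.2)
    else:
        # Enough reserve: work on the task, which costs energy.
        action = "work"
        task_progress += min(task_error, 5.0)
        battery = max(0.0, battery - 0.05)
    return action, task_progress, battery

if __name__ == "__main__":
    progress, charge = 0.0, 1.0
    for t in range(40):
        action, progress, charge = step(progress, charge)
        if t % 10 == 0:
            print(f"t={t:2d}  action={action:8s}  progress={progress:5.1f}  charge={charge:.2f}")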
BG earlier: Reorganization is a fundamental feature of the model, emotion, as far as I can tell is not. The model incorporates conflict, but conflict occurs whether or not you are aware of it. Is that not so?
BP earlier: You're still writing with no knowledge of what I have written about emotion. Stop guessing and read it. Nothing you're saying has any relationship to it.
BG: I am sorry, Bill, but I read the attached paper (thanks). It is very clear and, as far as I can tell, completely consistent with what I have been saying about the model. If I am mistaken, there must be studies where the physiology underlying emotions plays a role in the predictions made by the model. Are there such studies?
BP: No. I haven't programmed emotions into any models yet. The design I have so far seems workable, but I can't prove it is. What's the status of the other theories you propose or prefer? And what's all this insistence that the model has to make predictions? I haven't demonstrated any models that make predictions.
BG: If not, I stand by my claim that emotions play no role in the predictions of HPCT. This is not a criticism of HPCT, which works perfectly well without emotions. In my view your "theory of emotions" is a story. It's nice, but it isn't necessary.
BP: Emotions play no role in the demonstrated models of HPCT, because no emotions have been included in those models, not because they couldn't be included. Emotions would follow if you included sensing the physical state of the system along with sensing other controlled variables. I don't anticipate any problems with giving a model emotions. I've just been working on other things. There's a lot of unfinished business in PCT and I'm only one person. I'm not the only one who could do it. I'm just not as interested in emotions as other people seem to be, not that I don't have pretty ordinary emotions.
BG: HPCT is purely a control model. I could tell a story in which a thermostat is frustrated when there is a persisting difference between its reference level and the temperature of the room. But that would not improve the prediction of the model (the thermostat will run the furnace continually until the latter runs out of oil, at which point the thermostat will still leave the switch to the furnace "on").
BP: In PCT no form of prediction is needed, though once in a while it can help, and in certain types of system (like automatic aircraft landing systems) a prediction of future states is itself the controlled variable: the action is changed to keep the prediction in a particular state, such as the aircraft's touchdown point on the runway.
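A minimal sketch of a loop whose controlled perception is itself a prediction, in the spirit of the landing example; the dynamics, gains, and numbers are invented and are not taken from any real autoland system.

def simulate(touchdown_ref=3000.0, dt=0.1, steps=800):
    x, altitude = 0.0, 300.0           # along-track position and altitude (m)
    vx, sink_rate = 60.0, 3.0          # ground speed and descent rate (m/s)
    for _ in range(steps):
        if altitude <= 0.0:
            break
        time_to_ground = altitude / sink_rate
        predicted_touchdown = x + vx * time_to_ground   # the controlled perception
        error = touchdown_ref - predicted_touchdown
        sink_rate += 0.001 * -error * dt    # act on descent rate to move the prediction
        sink_rate = min(max(sink_rate, 1.0), 10.0)
        x += vx * dt
        altitude -= sink_rate * dt
    return x

if __name__ == "__main__":
    print(f"touchdown at x ~ {simulate():.0f} m (reference was 3000 m)")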
If you design a dumb one-level system, what you will get is a dumb one-level system. Why not design a smart thermostat, if that's what you want?
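For what it's worth, here is a toy sketch of the difference between a dumb one-level thermostat and a slightly smarter one that notices when its control is failing (say, because the furnace has run out of oil) and escalates. The escalation rule is invented for illustration.

def dumb_thermostat(temp, reference=20.0):
    # One-level bang-bang control: leave the furnace switched on whenever it is
    # cold, forever, whether or not the temperature ever responds.
    return temp < reference

class SmarterThermostat:
    def __init__(self, reference=20.0, patience=50):
        self.reference = reference
        self.patience = patience    # steps of uncorrected error it will tolerate
        self.stuck_for = 0

    def act(self, temp):
        error = self.reference - temp
        furnace_on = error > 0
        # A second, higher-level check: is the error persisting despite action?
        self.stuck_for = self.stuck_for + 1 if furnace_on else 0
        if self.stuck_for > self.patience:
            return furnace_on, "call for service"   # something else has to change
        return furnace_on, "ok"

if __name__ == "__main__":
    t = SmarterThermostat()
    for _ in range(60):                 # the temperature never rises: furnace is dead
        furnace_on, status = t.act(15.0)
    print(furnace_on, status)           # True, call for service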
That's a non-sequitur. The feelings of emotions are side-effects, just as the position of your elbow is a side-effect of reaching for something. But the changes in somatic state that the feelings report are necessary to provide the appropriate physiological backing for the motor control systems. The attachment should make this clearer.
BG: Again, does the physiology play a role in the predictions? If not, the model works without reference to the physiology. I could be wrong, of course. There could be such models that I am simply unaware of.
BP: Bruce, you can't conclude that because I haven't explicitly put some controlled variable into the model, it couldn't be done. You haven't even tried to imagine how it could be done, perhaps because you don't want it done. Is that it? Is the problem that you don't want emotions reduced to something a machine could have? Do I have to drop everything else and do it for you? Hmm. I hadn't thought that perhaps you're sneaky enough to be needling me until you get me to do it. Well, it's not going to work. Probably not.
BG: To be more accurate, I don't believe that you can understand behavior using nothing but a hierarchy of control systems.
BP: Thanks, that's right in line with my definition of belief. Beliefs are always about something imagined, not something in actual perception. You're imagining, for reasons I can only guess at, that we can't understand behavior using nothing but a hierarchy of control systems. I imagine that we can, and have been trying to do it for some time. What are you trying to do, other than telling me I shouldn't be doing this?
BG: The model has a built-in "out" that makes it unfalsifiable. The highest level in the hierarchy proposed to model a behavior has a reference level. What is the source of this reference level? A still higher level.
BP: I guess you've been away every time the subject of the top level has come up, which has been often. Infinite regress, of course, always threatens. But you don't have to worry about that here, because above the top level in the human hierarchy there is nothing but a bone called the skull. There's no room for any more. Since there is a highest level (whether or not I've found it), some explanation has to be found for the reference inputs to the highest-level comparators, an explanation other than a signal from a higher system.
Actually, this same consideration holds from conception onward. There is always a highest level of organization that has become active at any given point in life. Frans Plooij has studied the way the levels of control come into being in both chimpanzee and human babies. They apparently are well-described, and in the right sequence, by my proposed levels of control. I haven't seen his latest findings on the top levels, but most of the others, from sequences on down, are well-observed. At least the proposed levels fit what is observed (Hetty Plooij, Frans' late wife and collaborator, called the fit "uncanny"). The Plooijs accumulated their chimpanzee data before hearing of PCT.
When a level is the highest one that has currently become organized, where do its reference levels come from? First, we have to realize that zero is an admissible setting for a reference signal: it means that the system should keep the perception in question at zero. So the absence of a reference signal tells the system to avoid having the associated perception, which is what we observe at first in babies and young children. But "fear of strangers" and other such avoidances go away when whatever level is involved begins to work better and the baby can recognize objects, tastes, sounds, movements, daddy, rules, and so on.
There are also reference signals that might be inherited -- think of the bower bird with its compulsive desire to see a fancy nest being built. It clearly couldn't inherit the movements needed to build such a nest; it has to learn how to control its perceptions to make them match the inherited blueprint.
Reference signals might come from reorganization when there is no higher-order source working yet. They might be established at random, just to see what will happen. I'm sure you could add to the list of possibilities, if you cared to.
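A toy sketch of that last possibility: the topmost reference is adjusted by E. coli-style reorganization, keeping random changes that reduce "intrinsic error" (here a made-up function standing in for whatever physiological criterion reorganization monitors). All details are illustrative.

import random

def intrinsic_error(reference, best_for_organism=37.0):
    # Stand-in for whatever physiological criterion reorganization monitors.
    return (reference - best_for_organism) ** 2

def reorganize(steps=2000, step_size=0.5):
    reference = 0.0
    direction = random.choice([-1.0, 1.0])
    previous_error = intrinsic_error(reference)
    for _ in range(steps):
        reference += direction * step_size
        error = intrinsic_error(reference)
        if error >= previous_error:
            # No improvement: "tumble" to a new, randomly chosen direction.
            direction = random.choice([-1.0, 1.0])
        previous_error = error
    return reference

if __name__ == "__main__":
    print(f"topmost reference settles near {reorganize():.1f}; no higher system set it")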
BG: I am not saying that this model does not describe behavior; I am simply saying that I think there are other ways to set the highest level in a working hierarchy. I am not sure why you find this so objectionable. You have often said that the models developed so far do not test your conjectures about the higher levels.
BP: You're conflating two ideas. I haven't tested conjectures about higher levels (though others have) because I don't know how to simulate them. I've barely got to the level of relationships, and skipped events to get there. As my remarks above should make clear, I have never said that the highest reference levels have to be set by higher control systems and have conjectured at length, though evidently not sufficient length, about other possibilities. I have never found the idea of other sources of reference signals objectionable. What did I say that made you think that? Or is it just that you think I'm too dumb to realize that there isn't any system higher than the highest level of systems?
BG: If you want my proposal, here it is. The organism looks at its environment and attaches "reward labels" to what it sees. These labels are based on its prior experience. The system then controls the perception associated with the highest expected reward. This oversimplified model obviously needs development and expansion to account for the "delayed gratification" mechanisms associated with the prefrontal cortex.
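A minimal sketch of the proposal as stated, with the labels and values invented as examples:

def choose_goal(expected_reward):
    # expected_reward: perceivable outcomes mapped to reward estimates learned
    # from prior experience.
    return max(expected_reward, key=expected_reward.get)

if __name__ == "__main__":
    learned = {"drink water": 0.9, "keep working": 0.4, "take a nap": 0.6}
    print(f"control the perception: {choose_goal(learned)!r}")  # the chosen reference, in PCT terms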
BP: Up to a point I agree. The way I have frequently put this is to say that certain perceptions (once the necessary input function has become organized) are sought because achieving them corrects intrinsic error (I trust you remember how that is connected with reorganization in PCT). Those perceptions are remembered and selected as reference signals, telling the system "Have that perception again." The organism will then do, or learn to do, whatever is needed to create that perception when intrinsic (and perhaps other) errors occur again.
If an outside observer has control of something that the organism needs in order to achieve that perception, that observer can create contingencies in the environment such that whatever is needed will be provided as a "reward" only when the organism exhibits movements or behaviors that the observer wants to see. The observer, ignorant of the underlying control processes, interprets the reward as if it is causing the behavior to occur, not realizing that the behavior is being produced by the organism as a means of controlling the level of the rewarding thing it perceives.
I don't think the system chooses the "highest expected reward." It's just trying to get some specific thing it already wants. If it doesn't already want the thing the observer offers, it won't try to get it. A reward is just a controlled variable.
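A toy sketch of that point: the organism below controls its perceived food intake, the observer has arranged that food arrives only when a lever is pressed, and the pressing then looks, to the observer, as if it were caused by the reward. All quantities are invented.

def run(steps=50, food_ref=10.0):
    food = 0.0          # the organism's perceived food supply (its controlled variable)
    presses = 0
    for _ in range(steps):
        error = food_ref - food
        if error > 0:
            presses += 1            # whatever action the contingency happens to require
            food += 1.0             # the observer's rule: one pellet per press
        food -= 0.1                 # food is used up over time
    return food, presses

if __name__ == "__main__":
    food, presses = run()
    print(f"perceived food ~ {food:.1f} (reference 10.0) after {presses} lever presses")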
As to delayed gratification, I think that way of putting it is based on a misinterpretation. Of course we sometimes delay getting something now in order to get something we want more later; for example, I delay going through a door until I have completed opening it. There are certain sequences that a wise person controls because they work better than other sequences.
But the time delay is an irrelevant side-effect. What we should be paying attention to is levels of perception (which I get the impression you don't believe in). We need to control some things as a means of controlling others. While we're children, not too sure of how causation works, we may try to have our cake and eat it too, but soon learn that this doesn't work; we can do one of those things but not both. As we grow up we learn about higher and higher levels of the world (as it seems), and among the things we learn are strategies and principles, which organize sequences and categories and logic and such stuff. We learn that there are things to strive for that can be done only if we select certain goals at the lower levels to avoid conflicts; don't spend that money now because it's going to pay for going to college. This isn't a moral issue or a character issue or a duty or a way of being responsible, it's just a matter of distinguishing between lower-order and higher-order goals. Naturally, if you put one goal aside now in order to achieve a higher one, the result is a delay in correcting an error, but that's only because the higher-order goal is going to be achieved later. If it could be achieved right now, why wait? Just to enjoy the internal conflict? There's no special virtue in delaying gratification; sometimes it's kind of stupid to do that. Some people seem to do it just to tantalize themselves, or prove they are good sensible conservatives or Puritans.
BG: Can an HPCT model account for the same behaviors as this "model"? I'm sure it can. Whatever perception the organism controls has a reference level established by a higher level in the system. You are committed to what I call a "pure control" model. I am simply suggesting a "hybrid control" model in which the outside world helps to establish the goals that an organism pursues. I don't believe that this suggestion is nearly as radical as you seem convinced that it is.
BP: I think it would be very nice of the outside world to help us establish goals, but I don't think it can do that. How can it know what goals would fit in with all the other goals we have? More to the point, how can it reach inside a brain and set the value of a reference signal? Perhaps I'm missing something here. How can anything outside the skin help establish a goal in the brain? Oh, wait, I forgot about the bower bird -- heredity can do that with some basic built-in goals. But then it's not just "helping", it's just establishing the goal. Most inherited goals, which we see as infantile reflexes, are soon reorganized away.
BP earlier: Why don't you try a PCT analysis of the bidding experiment? That would be much more interesting than having me making guesses. I could make up stories about why the results came out as they did, but I don't know how I'd test them. How did you test yours?
BG: As you can see, I adopted Rick's liquidity model. The authors differ, but I don't think we need to pursue this further. There is always a deus ex machina in the form of a higher level that sets the topmost reference level.
BP: Is that what you think? I don't.
Best,
Bill P.