[From Bruce Abbott (980802.1000 EST)]
Bill Powers (980801.111p MDT) --
Bruce Abbott (980801.1200 EST)
Since the alternative condition can only be present or not present, the
"degree to which it is present" is either 0% or 100%.
True, but that's not what I said. I said "degree to which the rat _keeps_
the alternative present," which is not either 0% or 100%.
"Keeps", then, means "on the average?" OK. Forget it.
"On the average?" "Forget it?" You're talking through your hat, Bill.
It's just a measure of the quality of control, and it's not an average. If
a variable is under control, it will be kept near its reference value,
despite disturbances acting on the variable. In the case of a dichotomous
controlled variable, the variable is either at reference or it is not.
Percentage of time spent at the reference value reflects how well the system
is controlling, especially since, in the absence of control action, the
imposed disturbances keep that percentage close to zero. Furthermore, all
measures of rate I have seen you employ depend on taking a time sample and
computing change over that delta t. The percentage measure under discussion
can be computed in exactly the same way, at any point within the session.
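To make the computation concrete, here is a minimal sketch of the measure
(the window size and the sample data are my own illustrative assumptions,
nothing taken from the actual experiment):

# Percent of time a dichotomous controlled variable spends at its
# reference state, computed over successive windows -- the same
# take-a-sample-over-delta-t logic used for rate measures.
def percent_at_reference(samples, window):
    # samples: one 0/1 flag per time step (1 = at the reference state)
    return [100.0 * sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

samples = [1] * 54 + [0] * 6 + [1] * 57 + [0] * 3   # made-up session
print(percent_at_reference(samples, 60))            # -> [90.0, 95.0]

Values near 100% index good control; without control action the imposed
disturbances would drive every window toward 0%.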
How do _you_ propose to measure the effectiveness of control over a
dichotomous variable? Do you have some magical measure of control
performance up your sleeve that does not involve taking samples over time?
I'd like to hear about it.
That's what I had in mind. Another, correlated measure, would be the
average error, which would decrease with "willingness." Latency to press
the lever and reinstate the alternative condition is another measure; the
latency determines how much time is spent in the imposed condition before it
is replaced by the alternative.
Yes, and this could be modeled as a control system with an integrating
output function.
Sure. So error in some system is cumulating more rapidly when the animal
presses rapidly than when it takes its time. Now we have to identify the
control system in which the error cumulates and its controlled variable.
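For concreteness, the kind of model in question takes only a few lines (a
sketch; the perception below is a stand-in for exactly the controlled
variable we have yet to identify, and the gain, time step, and disturbance
values are arbitrary):

# Control loop with an integrating output function: persistent error
# cumulates in the output, so larger error means more vigorous action.
dt, gain = 0.1, 5.0
reference, disturbance = 1.0, -0.5
output, perception = 0.0, 0.0
for step in range(100):
    error = reference - perception
    output += gain * error * dt         # the integration
    perception = output + disturbance   # stand-in environment function
print(round(perception, 3))             # settles at the reference, 1.0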
Task difficulty does not vary across the manipulations, as I pointed out in
my post. What is "perceptual uncertainty"?
How do you know that the task doesn't get harder for the rat when the
signal's relationship to the shocks becomes more ambiguous? The increase in
delay before switching to the signaled condition may well reflect the
difficulty the rat is having in perceiving that the signaled condition
exists or doesn't exist. Why do you just dismiss this possibility?
The animal receives extensive training under the signaled and unsignaled
conditions, each of which is associated with a different correlated stimulus
(e.g., houselight on, houselight off). It remains easy for the rat to tell
which condition it is in across manipulations of the characteristics of the
signaled condition, and the action required to maintain the signaled state
does not change.
Perceptual uncertainty is the inability to perceive clearly whether a given
signal is a sign that a shock is about to occur; in other words, the
perceptual signal is small compared with the noise because the relationship
of signal to shock is unreliable.
I haven't dismissed this possibility. It could well be a reason why the
signaled condition becomes less desirable to the rat as the signal-shock
relationship departs from the nominal one.
But you still can't say how the rat is experiencing all this. It is
unlikely that the experiences could be cast in terms meaningful to a human
being.
I can't say what any other _human being_ is experiencing, nor can you. I was
trying to present the proposal in familiar terms, as a way to get the
concepts across.
You call the presses of the lever "responses." To what events are these
actions "responses?" We can easily verify that they are actions: we can see
the rat producing them. But how would we verify that each action is a
response to something?
Sorry, I intended no such implication. How about "discrete actions"?
Is this a report of a drastic change of policy on your part, or are you
just humoring me?
Drastic change in policy? What on earth are you talking about? I'm not
humoring you. You were quite right to correct my usage here.
Yes, I understand that point. That's why I think that the situation is more
complex -- increasing the time spent in the signaled condition when the
circumstances there are made less desirable will not make the experience
there any better.
A bad cup of coffee cannot be improved by increasing how
much of it you drink. Organisms whose systems were built to behave in this
way would soon perish.
You declared that there would be no good effect of increasing the time
spent in the signaled condition, and then went on to deliver an analogy
based on that assumption. But if the assumption is wrong, the analogy is
irrelevant.
Of course. And if the assumption is right, the analogy is valid.
Suppose that being in the signaled condition enables the rat to defend
itself against most of the effects of shocks. Then the more time that is
spent in the non-signaled condition, the more shocks will occur that the
rat can't defend against. So increasing the time in the signaled condition
will definitely have an improving effect on what the rat experiences.
The only problem with that analysis is that the data do not support it.
That suggests to me that it is time to examine other alternatives.
But let's take a good look at my coffee example above. How does PCT deal
with this well-known result?
I'd like to try it, but things here are starting to get a bit hectic again
(deadlines and all), so it might be a while before I can get started.
You don't want to do it. OK, I'll put it on my list.
I didn't realize that your invitation was a demand for immediate action.
But go ahead, put it on your list. It will be interesting to see how you are
able to develop a model of a proposal you do not understand, and which you
do not believe I can translate into code.
I don't believe that the rat is deliberating its options; the actual process
as I envision it is considerably simpler, but I wanted to state the case in
a way that we human beings could relate to.
Why, if human beings don't do it this way either? What you're doing is
called anthropomorphizing. And in this case, you're doing so in terms of an
illusion many people have about how often they actually make any decisions.
That illusion being what?
What's relevant are the
proposed underlying processes -- the various perceptions associated with the
signaled and unsignaled conditions yield evaluations of "good" and "bad" as
you suggested in a recent post and in B:CP. The organism is organized so as
to prefer "better" over "worse," and will spawn appropriate control systems
so as to obtain the former over the latter.
No. New control systems are "spawned" to correct intrinsic error. Are you
using "spawned" in the Unix sense, or the salmon sense? Or the reorganizing
sense? They're all different.
I believe that we have a general capacity to bring control systems into
being on the fly, as needed, at levels of control above those required for
controlling limb positions, velocities, accelerations, and so on (those are
hardwired).
You can make anything sound like a choice situation just by picking the
right words. That doesn't mean that there's actually any choice being made.
No. Everything is a choice situation.
I can't interpret that comment. Do you mean "No, it doesn't mean that
there's actually any choice being made"? Or do you mean, "No, I disagree,
and I assert that everything is a choice situation"? Or did you leave out a
"not"?
Option 2.
So during training, the driver is presented with the choice of turning the
wheel left or right to turn the car left or right, and learns to choose one
of them? That, of course, is not how reorganization works, so you're
proposing a new model of learning, perhaps by some system that already
knows how to carry out the operation you call "choosing." Perhaps you had
better describe your model of how a system "chooses."
You've chosen an example that would involve very little if any deliberation
of alternatives. It's a case where a random choice might be best,
especially given the time constraints. If that choice leads to a worsening
of conditions, that leaves turning the wheel the other way as the only
reasonable alternative. But even a random choice is still a choice -- a
selection of one course of action over other possible ones. It may not
require any deliberation at all, especially after the performance has become
habitual.
Why must a decision have been made at some point? Why couldn't the system
have picked one course of action at random, ignoring all the ones not picked?
Picking from among alternatives is choice. I think you're reading too much
into my use of the phrase "making a decision." It need not involve any
deliberative process, although it may.
That's gobbledegook. What is "controlling more strongly"? If you mean loop
gain, say so. If you mean higher reference level, say that. If you mean
higher gain in the input function, say that. If you mean more powerful
output function, say that. Controlling "more strongly" means nothing.
Loop gain.
So are you saying that the higher system senses the relative loop gains of
the two or more lower systems involved, and selects the lower system with
the higher loop gain? How does it do that? Is this an example of "and then
a miracle occurs"? How can the system distinguish between a lower system
with a high loop gain and another lower system with the same loop gain but
a higher reference signal? And since the higher system _contributes to_ the
reference signal in the lower system, what keeps it from increasing or
decreasing the "strength" of control in the lower systems? And if the
higher system can adjust the loop gain of the lower systems, what "decides"
which lower system will be given the higher gain?
Excellent questions. Some while back I wrote a computer program (and
published it on CSGnet) in which an "E. coli" bacterium found its way to a
nutrient source by means of a biased random walk (reorienting its direction
of travel at random, but doing so more often when nutrients were decreasing
than when they were increasing, as a result of its motion through the
nutrient solution). A second-level system varied the gain of this system
according to the amount of nutrient currently stored by the bacterium. At a
certain level of stored nutrient, the sign of the gain reversed and the
bacterium would then avoid rather than seek the greatest concentration of
nutrient. That's the sort of model I have in mind.
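A stripped-down sketch of that program, for anyone who missed it (the
constants and the nutrient field here are illustrative, and where the
published version varied the gain continuously with stored nutrient, this
one only flips its sign):

# Two-level "E. coli" model: a biased random walk (tumble more often
# when nutrient is falling) whose gain is set -- and ultimately
# reversed -- by a second-level system monitoring stored nutrient.
import math, random

def nutrient(x, y):                  # concentration peaks at the origin
    return 100.0 / (1.0 + x * x + y * y)

x, y = 10.0, 10.0
heading = random.uniform(0, 2 * math.pi)
stored, SATIATION = 0.0, 50.0
last = nutrient(x, y)
for step in range(5000):
    x += math.cos(heading); y += math.sin(heading)
    now = nutrient(x, y)
    stored += 0.01 * now                          # nutrient uptake
    gain = 1.0 if stored < SATIATION else -1.0    # second-level system
    # First level: tumble often when gain * (change in nutrient) < 0,
    # i.e., seek the source while hungry, avoid it once satiated.
    if random.random() < (0.5 if gain * (now - last) < 0 else 0.05):
        heading = random.uniform(0, 2 * math.pi)
    last = now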
That an organism finds something "attractive" does not mean that the object
has the property of attractiveness. I am asserting no such model as you
suggest.
Then why use language that _does_ have that meaning to most people? It
makes a great deal of practical difference whether we assume that the
environment does the attracting, or whether desirability is in the brain of
the beholder. Those who put attractiveness in the environment include
rapists who say "She was asking for it." So this is not a trivial issue.
It's a word everyone understands. What do you suggest as an adequate
replacement?
That implies a control system that is sensing the degree of relative
attraction and controlling for -- what? Is there any "attraction" to sense
out there in the first place? How does the organism perceive that one
alternative is more favorable than the other? Favorable in terms of what
perceptual variables and reference levels?
See your own discussions of "pleasure/pain" or "good/bad."
All right. There is nothing about "choosing" in those discussions, or
comparison of "better" and "worse." You're trying to force outmoded,
old-fashioned concepts onto PCT. And I am resisting.
Of course there is. The reorganizing system selects (chooses) alternatives
at random; those that lead to "better" get retained; those that don't, don't.
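In outline that is all the process amounts to (a sketch, with a stand-in
for whatever intrinsic-error signal defines "better"):

# Reorganization as random selection-with-retention: perturb at random,
# keep the change if intrinsic error falls, otherwise discard it.
import random

def intrinsic_error(params):          # stand-in for "worse" vs. "better"
    return sum(p * p for p in params)

params = [random.uniform(-5, 5) for _ in range(3)]
best = intrinsic_error(params)
for trial in range(1000):
    candidate = [p + random.gauss(0, 0.5) for p in params]
    err = intrinsic_error(candidate)
    if err < best:                    # "better" gets retained
        params, best = candidate, err # those that don't, don't
print(round(best, 4))                 # error shrinks toward zero

There is no deliberation anywhere in it, yet at every step one alternative
is selected over others.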
And controlling for them how? By
varying the gain of lower systems? Would that really work?
No, by controlling for those options that yield the best outcome in terms of
the organism's perceptions of what feels best.
So you think that the organism is always comparing alternatives, evaluating
their relative advantages and disadvantages, and choosing which action to
take by imagining which outcome would feel best? I don't think that. You'll
have to prove it to me.
That's not quite it (the process need not be conscious or deliberative); the
organism is simply so organized as to control for those alternatives, among
a possible set, which, by certain criteria built into the organism, generally
lead to a perception of "better" over "worse."
My proposal is only a
first suggestion, based on the fact that higher gain yields more vigorous
action at a given level of error.
That is the wrong way to analyze it. Higher gain yields lower error and
essentially the same degree of action, at a given setting of the reference
signal, with a given disturbance magnitude, and with a given environmental
feedback function. Do the algebra. Prove it to yourself.
Geez, Bill, have you forgotten the context of my analysis? You're talking
about the end-state, when error has been minimized, and I'm talking about
the disturbed state, in which, at higher gain, a given level of error will
produce stronger action. There is nothing "wrong" with my analysis.
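The algebra, for the record, makes both points at once. For a simple
proportional loop (perception p = o + d, output o = G*e, error e = r - p;
the numbers are arbitrary):

# Steady state: e = (r - d)/(1 + G) and o = G*(r - d)/(1 + G).
r, d = 1.0, -1.0
for G in (10.0, 100.0):
    e = (r - d) / (1.0 + G)
    print(G, round(e, 4), round(G * e, 4))
# -> 10.0  0.1818  1.8182
# -> 100.0 0.0198  1.9802

At equilibrium, tenfold gain cuts the error about tenfold while the output
barely changes -- your end-state point. But at any _given_ momentary error
e, the action o = G*e is tenfold stronger at the higher gain -- the
disturbed state I was describing.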
You know this, Bruce, when you're thinking with your PCT-aware persona. In
this whole argument, you're acting as if you've never heard anything of PCT
but the words.
Blarney. See above.
Do you realize what an elaborate system you're proposing here?
Yes.
You're just falling back on
all the tired old psychological concepts, the very concepts that drove me
away from wanting to be a psychologist, probably before you were born.
Enlighten me. What concepts are you talking about?
On
the one hand you express a desire to be known as a PCT-savvy scientist, but
on the other you keep saying things that show a failure to have
internalized the concepts of negative feedback control.
On what grounds do you make this claim? What things did I say that "show a
failure to have internalized the concepts of negative feedback control"?
Specific examples, please.
Regards,
Bruce