[From Bill Powers (940530.1840 MDT)]
Martin Taylor (940530.1700) --
Welcome back! I hope you're all rested and ready to go.
My impression is that people work according to a subjective
cost-benefit analysis (which translates directly into error
[MMT] Sorry, I don't see the direct translation here. Could
The actions used in controlling x relative to reference level x*
cause disturbances of other controlled variables. For example, the
effort used to control x might reduce the energy supply or increase
lactic acid concentration. The error induced in other control
systems that maintain energy supply at some positive level or lactic
acid concentration at a low level is a "cost" of controlling x.
In general, controlling one variable entails actions that disturb
other controlled variables. The resulting error is the cost. If a
side-effect of action doesn't disturb any variable that is under
control, there is no cost.
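The cost-as-induced-error idea can be put in a small simulation. This is a minimal sketch with purely illustrative gains, coupling, and time step (nothing calibrated to a real organism): the output used to control x acts as a disturbance on a second controlled variable y, and the accumulated error in the y system is the "cost" of controlling x.

```python
# Sketch: the output that controls x also disturbs a second controlled
# variable y; the error induced in the y system is the "cost" of
# controlling x. Gains, coupling, and time step are illustrative.

def simulate(coupling, steps=2000, dt=0.01, gain=50.0):
    x, y = 0.0, 1.0              # controlled variables
    x_ref, y_ref = 1.0, 1.0      # reference levels
    cost = 0.0                   # accumulated squared error in y
    for _ in range(steps):
        o_x = gain * (x_ref - x)     # output acting on x
        o_y = gain * (y_ref - y)     # output acting on y
        x += dt * (o_x - x)
        # side-effect: effort spent controlling x disturbs y
        y += dt * (o_y - y - coupling * abs(o_x))
        cost += (y_ref - y) ** 2 * dt
    return cost

cost_coupled = simulate(coupling=0.3)   # controlling x induces y-error
cost_isolated = simulate(coupling=0.0)  # no side-effect on y
```

With the coupling present, the y system carries extra error for as long as x is being controlled; with the coupling removed, that cost disappears.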
This [unclogging drains] sounds like the classical prisoner's
dilemma situation. If everyone cooperates, everyone wins, but
a "defector" wins more. But if everyone "defects" everyone
loses. There have been various studies of this situation in
simulated evolutionary games.
It's not a classical prisoner's dilemma situation. That classical
situation is highly simplified. It does not include retaliation by
means outside the immediate interaction, and a host of other real-
life considerations. The "studies" of which you speak are just
let's-pretend mathematical games with no relationship to real life.
"Control" of the other by disturbing a CEV related to a
perception the other is thought to control will work. "Force"
applies only when the controller and the controlee have
different references for the same perceptions relating to the
same CEV. What happens in the "sign here or see what we can do
to your kid" scenario is that a conflict is set up between ECSs
within the controlee. There's no force involved.
The force is there; it just isn't obvious in the surface
relationship. If there were no force involved, the person being
coerced into signing would just say "No, thank you. I will take my
kid and leave now." In order to carry out the scenario, it is
necessary to have total physical control over the individual being
coerced so that all the simple means of control are ruled out. There
would be no conflict if the person were free to leave and send his
child to safety. It ALWAYS comes down to brute force. When you just
think about the abstract relationships, you tend to ignore the fact
that the people involved are strapped down.
The result of this kind of upbringing is that most people
don't know why they want to do right and avoid doing wrong: it
just comes down to how they feel about it, to a matter of
For the individual, yes. But [what] do we mean by society's moral imperatives?
Society's moral imperatives are the imperatives that individuals
accept as their own reference levels and attempt to get others to
accept. Some members of a society can force acceptance of moral
imperatives by others (at least the appearance of acceptance) by
holding the threat of physical force over them. In a relatively free
society, where the threat of physical force behind moral imperatives
is relatively low, people make up their own minds, with the result
that a lot of different moral imperatives can be found within the same society.
Hence we tend to preferentially minimize conflict with those
with whom we associate most: historically, family members,
neighbours, working associates, our "tribe." This minimization
tends to set up social conventions that are relatively rigid
within tribes (including especially religious groups), but that
differ between tribes, and thus leads to an increased
likelihood of conflict across tribal boundaries as compared to
what might have been the situation if each individual were an
isolated non-tribal entity.
Well said. "Rigid" conventions, of course, imply strong reactions to
their violation. Physical force again.
It feels like "conscience" or "the way God told us to behave,"
but it boils down to mutual reorganization leading to long-term
stable states within interacting groups of people.
Another source of "conscience" is the operation of control systems
above the level where one is customarily aware. Of course the
etiology of those systems may be something like what you describe.
It is quite possible that, as Rick says, a global summed
error can be experienced as an emotion ...
I prefer the explanation that an emotion is a set of perceptions
that result from the error signals putting the body into a state of
preparedness to act (or to avoid acting). In PCT, error signals are
not themselves perceived; what we feel are the perceptions of the
bodily state they produce.
It [imagining playing cricket] didn't work very well. I was
very bad when it came to the real thing. It's one thing to
image the muscle feelings, but it's another to act when the
muscles have atrophied over months of disuse!
Interesting point. The control systems may still be organized
correctly, especially given some rehearsal in imagination, but the
physical limits of control have narrowed. When you leap for the
ball, everything works perfectly except that the muscles can't
deliver the amount of output required by the error signal.
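The point can be sketched as a loop with an output-stage limit standing in for muscle strength; the gains and limits are illustrative assumptions, not measurements. The same intact organization leaves a large error when the limit is too low to deliver what the error signal demands.

```python
# Sketch: the loop's organization is intact, but the output stage
# (the "muscles") saturates at a limit. With a generous limit the
# reference is nearly reached; with an atrophied limit the identical
# loop leaves a large error. Gains and limits are illustrative.

def residual_error(output_limit, steps=1000, dt=0.01, gain=100.0, ref=5.0):
    p = 0.0                              # perceptual signal
    for _ in range(steps):
        demanded = gain * (ref - p)      # output demanded by the error
        o = max(-output_limit, min(output_limit, demanded))
        p += dt * (o - p)
    return ref - p

strong = residual_error(output_limit=1000.0)   # ample muscle: tiny error
atrophied = residual_error(output_limit=2.0)   # weak muscle: large error
```

Nothing about the comparator or perceptual function changes between the two runs; only the deliverable output does.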
This change of receiver's state is the information received
from the message, but the sender never can be sure either of
the initial or the final state of the receiver, and hence can
never know whether the message "sent" (i.e. intended) is the
one that was received.
A problem that plagues all people trying to explain their theories
to each other. A lot of language is required to get a little
information across -- and even then you can never know whether it
actually got across.
Think of adaptation as referring to the reorganizing system,
perhaps. It is presumably not optimum from an evolutionary
point of view to obtain the best control of every controllable
perception, since that would both waste a lot of resources and
generate a lot of internal conflicts (also wasting resources).
Having a dead zone of "don't care" error can be a good way of
avoiding a lot of problems.
I'm just reporting what two texts have to say about adaptive
control. They agree that adaptation is just another control problem,
a nonlinear one since it works by altering system parameters. At
bottom we're talking about the same thing. I just think that the
formal approach is vastly overcomplicated, especially considering
what it has been able to accomplish (other than keeping people happy
who love complex mathematics).
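Martin's "don't care" dead zone above can be sketched in a small simulation (the tolerance, gain, and disturbance size here are illustrative assumptions): error smaller than the tolerance produces no output, which degrades control a little but saves output effort.

```python
# Sketch of a "don't care" dead zone: error within the tolerance
# produces no output. Control is somewhat worse, but the effort
# (integrated output magnitude) drops. All values are illustrative.

def dead_zone_loop(tolerance, steps=2000, dt=0.01, gain=20.0,
                   ref=1.0, d=0.05):
    p, effort = 0.0, 0.0
    for _ in range(steps):
        e = ref - p
        o = gain * e if abs(e) > tolerance else 0.0
        effort += abs(o) * dt        # crude measure of resources spent
        p += dt * (o + d - p)        # d: a small constant disturbance
    return abs(ref - p), effort

err_tight, effort_tight = dead_zone_loop(tolerance=0.0)
err_loose, effort_loose = dead_zone_loop(tolerance=0.2)
```

The loose system tolerates a residual error about the size of its dead zone while spending noticeably less output effort than the tight one.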
You can't perceive your degree of uncertainty about whether the
last full moon was last Tuesday, or whether it is now raining
in New York, or whether your future great-grandchildren will
To be uncertain about something you have to want to know about it. I
am not uncertain about whether it's raining in New York because I
don't care whether it's raining in New York, one way or the other.
The fact that something is unpredictable doesn't mean that anyone is
uncertain about it. While I do admit to feeling uncertain in some
circumstances, most of the time I am neither certain nor uncertain;
certainty is not a dimension being controlled, in most cases.
My problem can be stated as "given f, how closely will p be
likely to match r, given some statistical character of d." In
this, I take "f" to subsume all the different functions that
appear in a full diagram of the control loop (perceptual,
output, and feedback functions, at a minimum).
That is how I intended "f" to be understood. If you knew exactly
what d was going to do and had a good deterministic model, you could
state exactly how closely p would track r. So, as I said, it all comes
down to the ability to predict d (and r). Given a deterministic
model and prior knowledge of d, the only uncertainty in r - p is
intrinsic system noise. If you want to predict behavior, you have to
be able to predict the disturbance and the reference signal. I am
not interested in that kind of prediction of behavior. I only want
to predict _relationships_ between disturbances and behavior, which
is much easier and doesn't depend on predicting disturbances.
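That disturbance-behavior relationship can be shown in a minimal loop simulation (the gain is an illustrative assumption): with good control, the output very nearly mirrors the disturbance, o ≈ -d, for any slowly varying d, even though nothing in the loop predicts d.

```python
import math

# Sketch: with good control, the relationship o ~ -d holds for any
# slowly varying disturbance, so the disturbance-output relationship
# is predictable without predicting d itself. Gain is illustrative.

def run(disturbance, steps=5000, dt=0.001, gain=200.0, ref=0.0):
    p, o = 0.0, 0.0
    mismatch = 0.0
    for n in range(steps):
        d = disturbance(n * dt)
        o += dt * gain * (ref - p)   # integrating output function
        p = o + d                    # environment: perception = o + d
        if n > steps // 2:           # look only after settling
            mismatch = max(mismatch, abs(o + d))
    return mismatch

# the output mirrors a disturbance the loop was never designed around:
worst = run(lambda t: math.sin(2 * math.pi * t))
```

The residual mismatch between o and -d stays small relative to the disturbance amplitude: the relationship is predictable even though d's waveform is not.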
You cannot design a control system to handle disturbances of
arbitrary statistical character, and neither can evolution.
You, and evolution, design for adequate control for
disturbances of the kind of character you expect to need to handle.
This is not correct. If I design a control system that will
counteract a step disturbance without overshoot in as little time as
possible, given the properties of the system components, it will
"handle" all disturbances it could possibly handle as well as can be
done. The design of a control system does not require guessing what
kinds of disturbances will appear. And yes, I have read what is said
about this subject in the two texts on adaptive control. I think
they're approaching the problem the wrong way.
There is very little difference between a control system optimized
to resist disturbances and one optimized to make a controlled
variable track reference signals. Most of the alleged difference
shows up in engineering models specifically designed so there is a
difference, which strikes me as a poor design. If a control system
is designed to control as well as possible given a step disturbance,
it will automatically be optimized for all disturbances containing
only lower frequencies, regardless of their waveshapes.
You have mentioned optimizing a control system to resist a
disturbance within a given frequency band. I will bet you that the
same physical components optimized for the most rapid possible
resistance to a step disturbance will do just as well for the
limited-band disturbance, and will naturally do a lot better for all
other waveforms of disturbance. You're working under some
assumptions that I don't think are true.
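The bet can be tested in a small simulation. The single gain here stands in for "the same physical components," fixed once and never retuned to any frequency band; all numbers are illustrative. The loop's peak error against a sinusoidal disturbance falls as the frequency falls, and every band-limited disturbance is handled better than the step's worst moment.

```python
import math

# Sketch of the bet: fix the loop's one parameter once, then feed it
# a step and several band-limited (sinusoidal) disturbances. Lower
# frequency means smaller peak error, with no per-band tuning.

def peak_error(disturbance, steps=20000, dt=0.0005, gain=100.0):
    p, o = 0.0, 0.0
    worst = 0.0
    for n in range(steps):
        d = disturbance(n * dt)
        o += dt * gain * (0.0 - p)   # integrating output, reference = 0
        p = o + d
        if n > steps // 2:           # skip the start-up transient
            worst = max(worst, abs(p))
    return worst

step_peak = peak_error(lambda t: 1.0 if t > 6.0 else 0.0)
sine_peaks = [peak_error(lambda t, f=f: math.sin(2 * math.pi * f * t))
              for f in (0.5, 2.0, 8.0)]
```

The peak errors rise monotonically with disturbance frequency and all stay below the step's peak error, which is what the step-design argument predicts.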
The materials and possibilities of organizing them available on
Earth would have permitted evolution to design an organism that
would survive a 10-ton meteorite hit, but what would have been
I doubt this contention most strenuously. Evolution can not
accomplish anything whatsoever that you happen to imagine. It has
neither foresight nor hindsight and it has no way to wonder what
would have happened if it had done something differently a million
years before. It isn't an agency. Nor is it omnipotent.
How do you "make sure" that the perceptual signal accurately
represents anything? What you are controlling is the
perceptual signal, but you do it through an external variable
that at high levels is very complex, with lots of uncertainty.
I was speaking of designing an artificial system, where you can use
physical measuring instruments to determine the state of a physical
variable, and also to determine the state of a perceptual signal
representing it, and make sure the representation is accurate.
And as you know, I disagree about high-level perceptions containing
"lots of uncertainty." Uncertainty in perception goes with extreme
conditions, which are far from the most common kind. High-level
perceptions do not reflect the complexity of the world they
represent. They are simple, like all perceptual signals.
At low levels, you can design eyes with 1-metre pupils to
provide accurate position descriptions in dim illumination, but
you sacrifice the ability to escape tigers in thick brush. So
you (God, evolution) design 5 mm pupils that are usually good
enough, in bright light, and can be easily carried around. You
can't have perfect perceptual representations of the external
world. And that's part of what the IT argument is about. How
good is "good enough?"
That's a rhetorical device, not a possibility. Evolution can't
change its mind and go back to redesign previous forms so as to
allow for some radical change in present forms. It always builds on
what already exists, and since that isn't planned, evolution
probably misses many very advantageous designs. We're not talking
about an optimized system here; just one that works as well as it
works, neither better nor worse. I don't believe that evolution
"optimizes" anything. Natural selection works with a meat cleaver,
not a scalpel.
... it was assumed in 1972 that fixed alerting criteria were
operating in parallel for far more perceptual signals than
could at any moment be actively controlled or monitored.
If there are alerting criteria, what is compared against them? There
must be perceptual signals, mustn't there? Isn't generating a
perceptual signal "monitoring?" The result of comparison must be
some indication of an error. But error signals, in the PCT model,
are not perceived. Are you sure you aren't just describing control
systems that are working without conscious attention?
It's an interesting design, but it seems more theory-driven than
phenomenon-driven. In mathematical fact, you can have as many
different perceptual signals coexisting as you like, as long as you
don't try to control them. But in a real system, how many alerting
signals must we in fact account for? Three? Three million? It makes
a great deal of difference, and it is a point that mathematics alone
cannot settle.
I don't want to get back into a lot of arguments about
mathematically conceivable universes. There is only one actual
universe, and I would rather concentrate on that one.