Bomb in the Hierarchy Simulation

[From Bjorn Simonsen (2005.03.11,11:35 EST)]

From Bill Powers (2005.03.10.1158 MST)

You asked Martin. May I give a comment?

Have you had any thoughts about how positive feedback
can exist in a system that controls logical variables?

I guess I am special. I am not particularly engaged with positive feedback. And
I have a problem with positive feedback in logical variables.

I think of logical systems as if the output functions are _on_ for many
different error values and _off_ for just as many other values. And I
think there are more and more such values the higher up in the hierarchy we
come.

Just an aphorism.

I think positive feedback can exist in a system that controls logical
variables when I am in the imagining mode. And I will give you an example.
Place yourself on a board, 1.0 m x 5 m x 0.15 m, and balance from one end to the
other. Explain how you control your perceptions.
Get some help and place yourself on the same board 300 meters above the ground
and balance from one end to the other. Explain how you control your
perceptions. I guess the same forces act on you in both places.

bjorn

[From Bill Powers (2005.03.11.0620 MST)]

Martin Taylor 2005.03.11.00.30 --

With a flat gain of less than 1, in your example you would have the recurring impulse decaying toward zero, as I said.

Yes, a leaky integrator, as I said. I think we must always think in terms of leaky integrators rather than algebraic multipliers, since all physical processes take time to happen (except when physicists start exercising their imaginations). There is no time in algebra.
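
To make the distinction concrete, here is a minimal numerical sketch of a recurring impulse under a flat gain of less than 1 beside a leaky integrator decaying on its own; the step size, leak rate, and gain are illustrative assumptions, not values from any model discussed here:

```python
# Sketch: a recurring impulse fed around a loop with flat gain < 1 decays
# step by step, while a leaky integrator does the same thing in continuous
# time. Parameter values are arbitrary illustrations.

dt = 0.01          # integration step, seconds (assumed)
leak = 0.2         # leak rate of the integrator (assumed)
gain = 0.8         # flat loop gain, less than 1

# Flat gain: each pass around the loop multiplies the impulse by 0.8.
impulse = 1.0
flat_history = []
for _ in range(20):
    impulse *= gain
    flat_history.append(impulse)

# Leaky integrator: the output decays continuously when no new input arrives.
out = 1.0
leaky_history = []
for _ in range(500):
    out += dt * (-leak * out)   # d(out)/dt = -leak * out
    leaky_history.append(out)

print(flat_history[-1], leaky_history[-1])   # both decay toward zero
```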

Best,

Bill P.

[Martin Taylor 2005.03.11.09.00]

[From Bill Powers (2005.03.11.0620 MST)]

Martin Taylor 2005.03.11.00.30 --

With a flat gain of less than 1, in your example you would have the recurring impulse decaying toward zero, as I said.

Yes, a leaky integrator, as I said. I think we must always think in terms of leaky integrators rather than algebraic multipliers, since all physical processes take time to happen (except when physicists start exercising their imaginations). There is no time in algebra.

I don't think you and I have any difficulty or disagreement with this. The only reason I stepped in was because I thought maybe Bjorn had a problem.

The reason I thought that was because of his mention of the threshold of unity gain for the explosion to occur. We know that any control loop with an integrator output function will have a gain greater than unity at a sufficiently low frequency, so to talk about such a control loop would not have helped Bjorn to understand.

I accept that the example is irrelevant to a discussion of control loops as they are used in PCT, but I'm not sure it's irrelevant in trying to bridge the gap between your understanding and Bjorn's. It depends on where Bjorn is coming from whether it's helpful or confusing.

I'll bow out of this now.

···

--------------
To Bjorn,

On the matter of two-way control, many of our muscle systems work in opposing pairs, don't they? Why shouldn't it be the same thing for the internal control processes that need to control around plus and minus reference values?

I take it that you have no problem with positive and negative errors when the reference value is far from zero?

Martin

[From Bill Powers (2005.03.11.0620 MST)]

Bjorn Simonsen (2005.03.11,11:00 EST)–

Pardon me for my ignorance. A new world opened for me when Norbert Wiener, among others, opened my eyes to purposive behavior, and I discovered a galaxy when I joined CSG in 97-98. Since then I, with my simple-mindedness, have thought that organisms just controlled one-way perceptual systems.

Consider a perception of aiming a rifle at a target. You want the sights
to be lined up on the center of the target. If the sights are pointed at
some angle to the right of the target, there is (shall we say) a positive
error, and this error leads you to swing the gun to the left. If the
sights are pointed at an angle to the left of the target, in the same
coordinate system this is a negative error and it leads you to swing the
gun to the right, which is the opposite, or negative, of the previous
direction of movement. So we can represent this as a single bidirectional
(two-way) control system in which perceptions (angle from target to
direction of aiming) can be positive or negative, and in which actions
can also be positive (swing to the right) or negative (swing to the
left).

You can also set positive or negative reference conditions for the angle
between the sighting direction and the target. If you decide to
“lead” a target moving to the right, you want a reference
signal that specifies a positive angle between the direction of the
sights and the direction to the target, so the reference signal is
positive. It will be matched by a perceived angle of the aiming point to
the right of the target (positive). If the target is moving to the left,
you want a negative reference signal – that is, a signal that specifies
a negative angle, an angle to the left, between the aiming direction and
the target direction. Of course this requires, in the nervous system, two
reference signals, both inherently positive of course, which signify
either positive or negative angles in external space. I “leave it as
an exercise for the reader” to draw a diagram of how this two-way
(meaning positive and negative acting) control system would have to be
implemented using neurons and nothing but inherently positive
signals.
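
Since the diagram itself is left as an exercise, here is one possible sketch (an illustration only, not necessarily the intended answer) of a two-way controller assembled from two one-way channels whose error signals, like firing rates, cannot go negative; the function names and gain are assumptions:

```python
# Sketch: a two-way (positive/negative acting) control system built from two
# one-way comparators whose error signals, like neural firing rates, can
# never go negative. Names and gains are illustrative assumptions.

def one_way_error(excite, inhibit):
    """Firing-rate-like error: excess of excitation over inhibition, floored at 0."""
    return max(0.0, excite - inhibit)

def two_way_output(p, r, gain=1.0):
    """Combine a 'swing right' channel and a 'swing left' channel into one action."""
    err_pos = one_way_error(r, p)   # active when p < r: act in the positive direction
    err_neg = one_way_error(p, r)   # active when p > r: act in the negative direction
    return gain * (err_pos - err_neg)

# Example: aiming angle p relative to reference angle r.
print(two_way_output(p=10.0, r=0.0))   # -10.0: swing left
print(two_way_output(p=-5.0, r=0.0))   #   5.0: swing right
```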

Let me make the most of this opportunity and talk about Renshaw cells. I have tried to understand how a Renshaw cell works beyond Wooldridge's (1963) "'Renshaw cells' are apparently specialized to emit inhibitory substance at the end of the outgoing impulse-conducting fiber", but I was no wiser until last year (if what I think today is correct). It is the different transmitters in the synapses that bias how the target cell eventually responds to an incoming message. This biasing of neuronal signaling is known as neuromodulation, and the biasing will, among other things, enhance or blunt the actual signal.

The input to a cell from a Renshaw cell is a message. It has the direct
effect of lowering the frequency of firing of the receiving cell. To a
first approximation, its effects on firing rate subtract from the effects
of ordinary excitatory inputs. But it’s not quite that simple.

The effect of any incoming signal depends on where on the cell body it
synapses (these are my general impressions, not authoritative facts). If
it synapses on a dendrite, its effects on the output firing rate are like
addition or subtraction. If it synapses near the place where the axon
leaves the cell, the effects are more like gain changes – a multiplier
(amplifier) or divider (attenuator) applied to all outgoing signals
caused by other inputs. The term for a gain-changing effect is
“modulation.” As you say, it is the nature of the
neurotransmitter emitted at the synapse that determines whether the
effects are addition or subtraction, amplification or attenuation.
Renshaw cells, as I understand them, are specialized to emit inhibitory
neurotransmitters.

Let me express your words with mine.

Note that a "negative error" simply means an error signal that indicates that r - p is negative (in other words, p is greater than r). That requires that p be excitatory and r be inhibitory.

A negative error simply means a blunted effect on the recipient cell.

The problem here is that of distinguishing (for positive signals) between modulation and addition. If a is the effect of input 1 and b is the effect of input 2, the modulation case with a being excitatory is represented by

output = a * b (b excitatory) or a/b (b inhibitory).

while the addition case is represented by

output = a + b (b excitatory) or a - b (b inhibitory).

In both of these cases, we can say that an inhibitory “b”
signal “reduces” or “inhibits” the effect of the
“a” signal. But we can say that only because those terms are so
vague. There is a big difference between subtraction and division. A
variation in the “a” signal will appear as an equal variation
in the output if the effect of “b” is subtractive. However, if
“b” is a divisor, then the same change in the “a”
signal will appear as a smaller change in the output if “b”
gets larger.
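
A small numerical sketch of this difference, with arbitrary numbers:

```python
# Sketch: how a change in the "a" signal shows up in the output when "b" is
# subtractive (addition case) versus when "b" is a divisor (modulation case).

def additive(a, b):
    return a - b        # inhibitory b, addition case

def divisive(a, b):
    return a / b        # inhibitory b, modulation case

a1, a2 = 10.0, 12.0     # a variation of 2 in the "a" signal

for b in (2.0, 4.0):
    print("b =", b,
          "subtractive change:", additive(a2, b) - additive(a1, b),   # always 2
          "divisive change:", divisive(a2, b) - divisive(a1, b))      # shrinks as b grows
```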

Let me again express your words with mine.

When we say that both p and r are positive signals, we think of the frequency of action potentials. This must be a positive number (or zero).

All signals are positive signals in the sense that their magnitudes are
expressed as a frequency of firing, and frequencies can’t go negative.
However, the effects of the signal on the output frequency can be
either positive or negative, depending on the kind of neurotransmitter
that is involved. The expression “blunted effect” is not
precise enough. Are you talking about modulation or addition?

If p > r, the error signal is negative and it will have a blunted effect on the Output function. And when the Output function has a blunted effect, the muscles will relax according to the r, which asks for a lower p.

I recommend using mathematical or technical terms rather than imprecise
words from ordinary language. If p > r, in the diagram I drew
yesterday, the “Actuator for negative errors” receives the
(inherently positive) error signal, and it causes negative-going actions
(swinging the gun to the left). It is the excess of excitation over
inhibition, or p - r (sic), that produces the error signal. If p alone
would produce 100 impulses per second from the comparator, and the effect
of r is to produce -90 impulses per second, then the error signal is 10
impulses per second. That’s a naive way of putting it, but close
enough.

If p < r, the opposite of the above case, then there would be no
signal in the line labeled “Actuator for negative errors”, and
the signal in the other error line would be equal to r - p.

I have problems with your next section (I am sorry),

If this system is part of a negative feedback loop, positive feedback will happen if all the plus and minus signs in the diagram are interchanged. Then runaway can happen in either direction: toward more negative or more positive errors.

With my words:

If we have a negative feedback loop in a simulation, the p will approach r. If we in the same feedback loop change the effect of p from minus to plus and change the effect of r from plus to minus, then the effect on the output function will be positive (|p| > |r|). And this positive error will have an enhanced effect on the output function. The muscles will be tightened still more. Now the p will be even greater, and we have positive feedback.

Yes.

This was in a simulation.

How can this happen in a living organism? The transmitters (proteins) don't change, do they? How can the plus and minus signs in a living organism change?

Look at Rick Marken’s simulation, in which the sign of the feedback
function reverses without warning. This is done at an instant when the
output is crossing zero, so there is no sudden jump in the cursor
position to warn you. The cursor simply starts moving opposite to the way
the mouse moves instead of the same way. That is sufficient to create
positive feedback. No change in the controlling nervous system is
required.

The result is just what you would expect: an exponential runaway
condition. A little error leads to greater error which leads to even
greater error. But after about half a second, the controller suddenly
regains control. How can this happen? Only if something INSIDE the
controller has reversed sign, to compensate for the external reversal
which still exists. So clearly it is possible for a sign somewhere in the
control system to reverse. Again, I leave it as an exercise for the
reader to show how such a reversal could be produced by a neural signal
from a controller that controls for negative feedback. Just assume that a
neural switch can be implemented by a signal that biases a neuron into or
out of a state where an input can cause an output. Just a simple circuit
design problem.
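
The following is a rough sketch of the kind of simulation described here, not Rick Marken's actual demo code; the gains, the reversal time, and the growth detector are all assumptions:

```python
# Sketch only (not Rick Marken's actual demo): a tracking loop whose
# environmental feedback sign reverses partway through. The error starts to
# run away, and a crude internal "neural switch" flips the output sign when
# the error keeps growing, restoring negative feedback. All values assumed.

dt, gain, slowing = 0.01, 50.0, 0.1
r = 0.0                            # reference for the cursor position
o, env_sign, inner_sign = 0.0, 1.0, 1.0
prev_abs_err, grow_time = 0.0, 0.0

for step in range(2000):
    if step == 500:
        env_sign = -1.0            # the external sign reversal
    p = env_sign * o + 0.5         # cursor = feedback effect + constant disturbance
    err = r - p
    # detector: if |error| has kept growing for 0.3 s, flip the internal sign
    grow_time = grow_time + dt if abs(err) > prev_abs_err else 0.0
    if grow_time > 0.3:
        inner_sign, grow_time = -inner_sign, 0.0
    prev_abs_err = abs(err)
    o += dt * slowing * (gain * inner_sign * err - o)   # leaky-integrator output
    if step in (0, 499, 520, 540, 600, 1000):
        print(step, "p =", round(p, 3))   # control, runaway, then recovery
```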

I can imagine that illness and/or injury in the brain will result in positive feedback in a functioning control system. But there are limits to how much muscles and glands can tighten or secrete. Near these limits I guess reorganization will eliminate the positive feedback loops. (?)

I think you’re overlooking the fact that environmental effects can
reverse direction. Consider focusing a camera. If the image is blurred,
which way should you twist the lens to make it sharper? That depends on
whether the current focal distance is too great or too small: both will
produce blurring of the image. You have to be able to try a direction of
twist, and if it makes the blurring worse, reverse it to make the image
sharper. Similarly for manually tuning an AM radio. Which way should you
turn the tuning knob to make the sound get louder? That depends on
whether the current frequency setting is too high or too low: either will
reduce the sound volume. Obviously we have to have the ability to reverse
the relationships inside the control system in such cases – quickly and
frequently.
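
Here is a toy sketch of that "try a direction, and reverse it if things get worse" strategy; the blur function, starting point, and step sizes are made up purely for illustration:

```python
# Sketch: one-dimensional "focusing". Blur depends on the squared distance of
# the lens setting from an unknown best focus, so the useful twist direction
# reverses depending on which side of best focus we are on. The controller
# keeps twisting one way and reverses whenever the blur gets worse.

best_focus = 3.7          # unknown to the controller (illustrative value)

def blur(setting):
    return (setting - best_focus) ** 2

setting, step = 0.0, 0.5  # initial lens setting and twist size (assumed)
direction = +1.0
prev = blur(setting)

for _ in range(40):
    setting += direction * step
    now = blur(setting)
    if now > prev:        # that twist made it worse: reverse direction
        direction = -direction
        step *= 0.5       # and twist more gently (simple refinement)
    prev = now

print(round(setting, 3), round(blur(setting), 5))   # close to best_focus
```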

I think it’s also obvious that introducing such ideas into a theory that
people seem to have trouble grasping in its simplest form would have been
foolish.

Best,

Bill P.

[From Bill Powers (2005.03.11.0806 MST)]

Martin Taylor 2005.03.11.09.00--

I don't think you and I have any difficulty or disagreement with this. The only reason I stepped in was because I thought maybe Bjorn had a problem.

I agree.

Best,

Bill P.

[From Bjorn Simonsen (2005.03.11,20:15 EST)]

[From Bill Powers (2005.03.11.0806 MST)]

Martin Taylor 2005.03.11.09.00--

I don't think you and I have any difficulty or disagreement with this. The only reason I stepped in was because I thought maybe Bjorn had a problem.

I agree.

Lucky the fellow with problems. He will take a huge step when the problem goes away.

It reminds me of the time I was a college teacher. Every May, the students had their problems with the end-of-year examination. Most of them took a huge step toward the University, and we teachers had to start teaching at the same level we started at every year, two months later.

By the way, thank you for your comments. I encountered the concepts of "one-way and two-way systems" for the first time on this list, and next week I think I will put a two-way system into my balance simulation. But most of all I appreciated reading Bill's "Quantitative Analysis of Purposive Systems: ..." I think his Quasi-Static Analysis is a nice way to give some people a foundation before I explain PCT to them. And the four Blunders are still Blunders among many people in 2005.

[From Rick Marken (2005.03.11.1630)]

Bjorn Simonsen (2005.03.11,20:15 EST)

By the way, thank you for your comments. I encountered the concepts of "one-way and two-way systems" for the first time on this list, and next week I think I will put a two-way system into my balance simulation. But most of all I appreciated reading Bill's "Quantitative Analysis of Purposive Systems: ..." I think his Quasi-Static Analysis is a nice way to give some people a foundation before I explain PCT to them. And the four Blunders are still Blunders among many people in 2005.

It was that paper, even more than B:CP, that launched my career as a control
theorist. I was already duplicating the experiments described in that paper
(which was published in 1978) using BASIC and game paddles on an Apple II.
This was before the 1979 Byte series came out. So it must have been in early
1979 that my career in PCT began. I guess I've really only been at this for
about 25 years ;-)

Thanks for reminding me, Bjorn. Obviously, I appreciated the "Quantitative Analysis of Purposive Systems" paper, too!

Best

Rick

···

---
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bjorn Simonsen (2005.03.12,12:25 EST)]

From Rick Marken (2005.03.11.1630)
Thanks for reminding me, Bjorn. Obviously, I appreciated the "Quantitative Analysis of Purposive Systems" paper, too!

You are originally a cognitive psychologist. I haven't studied cognitive psychology well enough to explain the difference between PCT and cognitive psychology. My limited knowledge tells me that:

  1. Cognitive psychology is an S-R theory where the S are cognitions and the R are the responses.
  2. Cognitive psychology is a science that covers research domains like memory, attention, perception, knowledge, reasoning, creativity and problem solving. (Point 2 is copied from Wikipedia.)
  3. New cognitions are learned.
  4. Cognitive psychology doesn't allow for disturbances, except for learning new cognitions. I don't know how they explain learning. Do they explain learning the way behaviorists do?

I will not explain how I understand PCT.

My first question is whether my knowledge of cognitive psychology is correct (I know much more can be said about cognitive psychology, but I think that is inessential in this context).

In "Quantitative Analysis of Purpose Systems: .." Bill classified System -
Environment Relationships from the equations below.

  qo=f(qi)
  qi= g(qo) + h(qd)
  g(qo) = qi* + (UV/(1-UV)* h(qd)
  qi = qi* + h(qd)/(1-UV)

Here he analyzed the effect of the function g being zero. Then there is no feedback, and he concluded that qo = f(qi) = f[h(qd)]. This is an open-loop case and corresponds to the classical S-R model of behavior. Bill discusses aspects of behavior explained in this way and concludes with "In order to show that a given organism should be modeled as such a system, it is necessary to establish that the organism's own behavior has no effect on proximal stimuli in the supposed causal chain." And he said: "I believe that this condition is, in any normal circumstance, impossible to meet."
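
A minimal simulation sketch of that contrast, using simple linear choices for f, g, and h that are assumptions made only for illustration (they are not the functions from Bill's paper):

```python
# Sketch: the organism equations qo = f(qi) and qi = g(qo) + h(qd), run once
# with a working feedback function g and once with g identically zero (the
# open-loop, S-R-like case). The closed loop holds qi near qi* = 2; the open
# loop leaves qi at the mercy of the disturbance and drives qo from it.
import math

def simulate(feedback_gain, steps=3000, dt=0.01):
    qi_ref = 2.0                       # qi*, the reference level
    qo, slowing, gain = 0.0, 0.1, 100.0
    qi = 0.0
    for t in range(steps):
        qd = math.sin(t * dt)          # a slowly varying disturbance, h(qd) = qd
        qi = feedback_gain * qo + qd   # qi = g(qo) + h(qd)
        err = qi_ref - qi
        qo += dt * slowing * (gain * err - qo)   # output function as a leaky integrator
    return round(qi, 3), round(qo, 3)

print("closed loop (g nonzero): qi, qo =", simulate(feedback_gain=1.0))
print("open loop   (g = 0):     qi, qo =", simulate(feedback_gain=0.0))
```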

Now to my second question: Bill didn't analyze the equations above in cognitive-psychology terms. Is it possible to do that?

Is it reasonable to say that the special value of qi*, defined as the value
of qi when there is no net disturbance as an actual variable (a cognition),
and that h(qd) = 0, and that the function g also is zero? What more? Or is
it enough to say the same as above: " In order to show that a given
organism should be modeled as such a system, it is necessary to establish
that the organism's own behavior has no effect on proximal stimuli in the
supposed causal chain and that is normally impossible to meet."?

bjorn

[From Rick Marken (2005.03.12.1845)]

Bjorn Simonsen (2005.03.12,12:25 EST)

You are originally a cognitive psychologist. I haven't studied cognitive psychology well enough to explain the difference between PCT and cognitive psychology. My limited knowledge tells me that:

  1. Cognitive psychology is an S-R theory where the S are cognitions and the R are the responses.
  2. Cognitive psychology is a science that covers research domains like memory, attention, perception, knowledge, reasoning, creativity and problem solving. (Point 2 is copied from Wikipedia.)
  3. New cognitions are learned.
  4. Cognitive psychology doesn't allow for disturbances, except for learning new cognitions. I don't know how they explain learning. Do they explain learning the way behaviorists do?
...
My first question is whether my knowledge of cognitive psychology is correct

It sounds correct to me.

In "Quantitative Analysis of Purpose Systems: .." Bill classified System -
Environment Relationships from the equations below.

  qo=f(qi)
  qi= g(qo) + h(qd)
  g(qo) = qi* + (UV/(1-UV)* h(qd)
  qi = qi* + h(qd)/(1-UV)

...

Now to my second question: Bill didn't analyze the equations above in cognitive-psychology terms. Is it possible to do that?

Sure. I think the only thing needed to make the analysis "cognitive" is to recognize that certain controlled variables (qi's) are perceptions in the realm that is typically called "cognitive", in particular perceptions that have to do with knowledge, reasoning and problem solving. A "knowledge" perception might be a perception of the presumed _relationship_ between models and reality. A "reasoning" perception might be a perception of a logical relationship, like "if A implies B and B implies C then A implies C". A "problem solving" perception might be a "principle" like "control of the center" in chess. Reaction time is also an important way of studying cognition in cognitive psychology. Time is not part of the static analysis above. But reaction time is captured by dynamic models of control, in terms of system "slowing" and transport lag.
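
As a rough sketch of that last point, the loop below includes a transport lag and an integrating (slowed) output, so the corrective response to a step disturbance only shows up after a reaction-time-like delay; the lag length and gain are assumed values, not fitted ones:

```python
# Sketch: a step disturbance hits a controlled variable; with a transport lag
# in the loop and a slowed (integrating) output function, the corrective
# response only becomes visible after a reaction-time-like delay.
from collections import deque

dt = 0.005                          # 5 ms time step (assumed)
lag_steps = 40                      # roughly 200 ms transport lag (assumed)
gain_i = 3.0                        # integration rate of the output, 1/s (assumed)
r, o = 0.0, 0.0
lag_line = deque([0.0] * lag_steps, maxlen=lag_steps)

for step in range(600):
    d = 1.0 if step >= 100 else 0.0      # step disturbance at t = 0.5 s
    p = o + d                            # controlled quantity
    lag_line.append(r - p)               # error enters the transport lag...
    err = lag_line[0]                    # ...and emerges roughly 200 ms later
    o += dt * gain_i * err               # integrating (slowed) output function
    if step % 60 == 0:
        print(round(step * dt, 2), "s  p =", round(p, 3))
```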

Is it reasonable to say that the special value of qi*, defined as the value
of qi when there is no net disturbance as an actual variable (a cognition),
and that h(qd) = 0, and that the function g also is zero?

I don't understand this. As you note, qi* is the reference specification for the state of a perceptual variable. When you act to bring a "cognitive" perception, qi, to the reference state, qi*, your controlling would probably be seen as cognition rather than "perceptual - motor" behavior.

Cognition is just a word that points to the control of certain types of perceptions and to certain types of control -- such as that which occurs when we control imagined perceptions, as in reasoning and memory. All of these cognitive phenomena are handled by the PCT model as described in B:CP. It took me a while to realize that PCT did handle cognitive phenomena. I had to get used to the idea that cognitions, such as principles and system concepts -- which seem more like "thoughts" than perceptions -- are, indeed, perceptions. To see an example of control of "cognitive" as well as "non-cognitive" perceptions, try my "Hierarchical Perception and Control" demo at:

http://www.mindreadings.com/ControlDemo/HP.html

Here you can control two "perceptual" perceptions -- of configuration and motion -- and a more cognitive perception -- of a sequence. The sequence perception seems more like a thought: to control it you will probably say the sequence to yourself as it occurs -- small - medium - large - small - medium ... -- but the fact that you can control this "thought" (keep the sequence happening) shows, I think, that it is a perception, in the PCT sense.

Best regards

Rick


Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bjorn Simonsen (2005.03.13,18:28 EST)]

From Rick Marken (2005.03.12.1845)
>Is it reasonable to say that the special value of qi*, defined as the
>value of qi when there is no net disturbance as an actual variable (a
>cognition), and that h(qd) = 0, and that the function g also is zero?

I don't understand this. As you note, qi* is the reference
specification for the state of a perceptual variable. When you act to
bring a "cognitive" perception, qi, to the reference state, qi*, your
controlling would probably be seen as cognition rather than "perceptual
- motor" behavior.

I understand you don't understand this. I should not have written "..., and that the function g also is zero".

What I meant to say was: We have

1 qo = f(qi)
2 qi = g(qo) + h(qd)
3 g(qo) = qi* + (UV/(1-UV))*h(qd)
4 qi = qi* + h(qd)/(1-UV)

These equations hold for all behaving systems in relation to an environment.
If h(qd) = 0, then we get

4 qi = qi*
3 g(qo) = qi*
2 qi = g(qo)
1 qo = f(qi) = f(qi*)

This is the same as "we do what our cognitions tell us to do".

bjorn

[From Rick Marken (2005.03.13.1100)]

Bjorn Simonsen (2005.03.13,18:28 EST)

I understand you don't understand this. I should not have written "..., and that the function g also is zero".

Yes. That was confusing. Eliminating it helps a lot. It was confusing because, while the _value_ of a function can be 0, a _function_ itself can't be 0. A function is a mathematical operation on the argument of the function (qo, in this case). For example, the function g(qo) could be 0*qo, in which case the value of g(qo) is 0 for all values of qo. If the function g(qo) were qo + 0, then the value of g(qo) would be qo for all values of qo. Of course, there are many other possible forms of the function g(qo): qo^2, qo^3, e^qo, qo+k, etc. All are operations on qo.

What I meant to say was: We have

1 qo = f(qi)
2 qi = g(qo) + h(qd)
3 g(qo) = qi* + (UV/(1-UV))*h(qd)
4 qi = qi* + h(qd)/(1-UV)

These equations hold for all behaving systems in relation to an environment.
If h(qd) = 0, then we get

4 qi = qi*
3 g(qo) = qi*
2 qi = g(qo)
1 qo = f(qi) = f(qi*)

This is the same as "we do what our cognitions tell us to do".

Yes. Now I see. This is basically the point I was making in my "Blind men and the elephant" paper, which is reprinted in _More Mind Readings_. My point was precisely the one made in your last equation 1. If disturbances are minimal (h(qd) = 0) then variations in what people do (qo) will have no obvious environmental correlate; they will appear to be the result of mental processes -- as indeed, they are. Variations in qo are a function of variations in qi*, which is a mental variable: a varying intention. So the cognitive view is that cognitive behavior consists of observable behaviors (qo) that are caused by internal mental processes (qi*). What PCT brings to this picture is the realization that it is really perceptual input variables (qi), not behaviors (qo), that are "caused" to be in certain states by internal mental activities (variations in qi*).
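
A small sketch of the point in equation 1: with the disturbance removed, the output mainly tracks the varying intention qi*, and with a disturbance present it mainly mirrors (and opposes) the disturbance; the functions and numbers are illustrative assumptions:

```python
# Sketch: the same simple loop run with and without a disturbance. With
# h(qd) = 0 the output qo just follows the varying intention qi*; with a
# disturbance present, qo spends most of its variation opposing qd instead.
import math

def run(disturbance_on, steps=4000):
    qo = 0.0
    gap = 0.0
    for t in range(steps):
        qi_ref = 1.0 + math.sin(0.002 * t)                 # varying intention qi*
        qd = 0.8 * math.sin(0.01 * t) if disturbance_on else 0.0
        qi = qo + qd                                       # g(qo) = qo, h(qd) = qd
        qo += 0.05 * (qi_ref - qi)                         # simple integrating output
        gap += abs(qo - qi_ref)
    return gap / steps                                     # mean |qo - qi*|

print("mean |qo - qi*| with h(qd) = 0:    ", round(run(False), 3))
print("mean |qo - qi*| with a disturbance:", round(run(True), 3))
```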

Best regards

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bjorn Simonsen (2005.03.14, 08:05 EST)]

From Rick Marken (2005.03.13.1100)
Yes. That was confusing. Eliminating it helps a lot. It was confusing because, while the _value_ of a function can be 0, a _function_ itself can't be 0.

We should maybe not be so confused. We have both been taught by Bill. And in his "Quantitative Analysis of Purposive Systems: ..", section Type Z: Zero Loop Gain, he wrote: "If the product UV is zero because the function f is zero, there is no behaving system. If it is zero because function g is zero, there is no feedback and the simultaneous solution of the equations becomes: qo = f(qi) = f[h(qd)]."

I understand Bill to mean that there is in principle no g function when he says that the g function is zero. Of course he doesn't treat a function as a factor. Maybe my first sentence is wrong.

[From Rick Marken (2005.03.14.1800)]

Bjorn Simonsen (2005.03.14, 08:05 EST)--

Rick Marken (2005.03.13.1100)

Yes. That was confusing. Eliminating it helps a lot. It was confusing because, while the _value_ of a function can be 0, a _function_ itself can't be 0.

We should maybe not be so confused. We have both been taught by Bill. And in his "Quantitative Analysis of Purposive Systems: ..", section Type Z: Zero Loop Gain, he wrote: "If the product UV is zero because the function f is zero, there is no behaving system. If it is zero because function g is zero, there is no feedback and the simultaneous solution of the equations becomes: qo = f(qi) = f[h(qd)]."

I understand Bill to mean that there is in principle no g function when he says that the g function is zero. Of course he doesn't treat a function as a factor. Maybe my first sentence is wrong.

OK. Maybe the problem was that you asked whether the g going to zero had
something to do with cognitive psychology. Actually, here's what you asked:

Bjorn Simonsen (2005.03.13,18:28 EST)

>Is it reasonable to say that the special value of qi*, defined as the
>value of qi when there is no net disturbance as an actual variable (a
>cognition), and that h(qd) = 0, and that the function g also is zero?

The function g is the feedback function. If g goes to zero there is no
effect of the organism's actions on the controlled variable, qi, so the
organism can't control qi. Cognitive _control_ requires that g be a non-zero
function so that the cognitive perceptual variable, qi, can be controlled.

Imagination, however, is a kind of cognition where the loop is closed inside the system. So there is no g when control is done in imagination. Perhaps this is what you mean? Controlling in imagination (what is loosely called "thinking") is cognition where g (the function connecting system outputs to controlled variables via the environment) is virtually zero (it is actually non-existent).

Is that something like what you were getting at?

Best

Rick

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bjorn Simonsen (2005.03.15,10:30 EST)]

From Rick Marken (2005.03.14.1800)

Cognitive _control_ requires that g be a non-zero
function so that the cognitive perceptual variable, qi, can be controlled.

Yes

Imagination, however, is a kind of cognition where the loop is closed
inside the system. So there is no g when control is done in imagination.
Perhaps this is what you mean?

No

Maybe the problem was that you asked whether the g going to zero had
something to do with cognitive psychology.

Yes.

My first mail, where I expressed that g = 0, was meant to be a mathematical account of cognitive psychology. This was as wrong as it could be. Let me describe something of what I think here and now.

g is a general algebraic function describing the physical connection from qo
to qi. This is the feedback path, which is missing when g(qo) is identically
zero.
If it is zero, there is no feedback. Then we have an open loop case that
corresponds to the classical cause-effect model of behavior.

If we are to describe cognitive psychology in the same mathematical way, I think as you do:

1. If disturbances are minimal (h(qd) = 0) then variations in what
people do (qo) will have no obvious environmental correlate; they will
appear to be the result of mental processes -- as indeed, they are.
Variations in qo are a function of variations in qi*, which is a mental
variable: a varying intention. So the cognitive view is that cognitive
behavior consists of observable behaviors (qo) that are caused by
internal mental processes (qi*).

Controlling in imagination (what is loosely
called "thinking") is cognition where g (the function connecting system
outputs to controlled variables via the environment) is virtually zero (it
is actually non-existent)

I don't agree with what I see. But I agree with what you think. And I think
you wish to say:
Controlling in imagination (what is loosely called "thinking") is cognition
where g _at the first level _ (the function connecting system outputs to
controlled variables via the _first level environment_) is virtually zero
(it is actually non-existent). Am I right?

At the other levels that are closed inside the first level I think as you
say in the first sentence in this mail:

Cognitive _control_ requires that g be a non-zero
function so that the cognitive perceptual variable, qi, can be controlled.

I don't know if it is correct to be categorical here, because there are many, many examples where controlling in imagination results in g ≠ 0 also at the first level (maybe most examples).
Let me exemplify:
1. I sit in my chair at home and think about where to drive from home to a place G. This is an example where g is virtually zero at the first level. But it happens, when I think "In the road interchange where A-street meets B-street, turn left", that I really turn my head left. Then g can't be zero in all loops.

2. I stand at the arrival platform waiting for my wife, who has been away a week. When I think "Soon I will see her. It will be nice to see her back again", I am in imagination mode. But I don't think all loops are closed inside the system. My heart beats quicker, and there is a smile on my lips.

I think there are a lot of examples where some loops have qo ≠ 0 at the first level when we say we are in the imagination mode.

3. I sit in my car and drive from home to place G.
This is a normal example of control of perceptions.

My main point is that we very often are in the control mode when we say we
are in the imagination mode.
And my main, main point is that we always are in control mode and
imagination mode when we say we are in control mode. What do you say about
that?

I will end this mail with some mathematical expressions describing what happens when we are in the imagination mode. We all know the expression: "The worst accidents are those we imagine". I experienced such an example last night. My daughter was to drive from A to B (a long distance on snowy roads). She was to phone when she arrived at B. She didn't phone, and of course I imagined the worst.

I will then repeat an example I have mentioned earlier (last time a week ago, to Bill):

Place yourself on a board on the floor, 1.0 m x 5 m x 0.15 m, and balance from one end to the other. You do this very well.
Get some help and place yourself on the same board 300 meters above the ground and balance from one end to the other. _I guess the same forces act on you in both places_.
I bet a dollar that you fall down. The down-falling force is your imagination.

How can we express this strong "force" mathematically? Why do we imagine worse things than what actually happens? Here are the equations:
  qo = f(qi)
  qi = g(qo) + h(qd)
  g(qo) = qi* + (UV/(1-UV))*h(qd)
  qi = qi* + h(qd)/(1-UV)

If you can help me here, I will next time describe the wonderful effect of imagining positive thoughts when we place ourselves in a normal situation (on a board on the floor).

bjorn

[From Rick Marken (2005.03.16.0940)]

Bjorn Simonsen (2005.03.15,10:30 EST)--

Rick Marken (2005.03.14.1800)

Controlling in imagination (what is loosely
called "thinking") is cognition where g (the function connecting system
outputs to controlled variables via the environment) is virtually zero (it
is actually non-existent)

I don't agree with what I see. But I agree with what you think. And I think
you wish to say:
Controlling in imagination (what is loosely called "thinking") is cognition
where g _at the first level _ (the function connecting system outputs to
controlled variables via the _first level environment_) is virtually zero
(it is actually non-existent). Am I right?

I think g only exists at the first level. It's the physical laws that connect you to your perceptual experience. When you imagine, there is no g because you are simply playing a reference signal right back into a perceptual channel: you perceive (in imagination) exactly what you want, regardless of the level of this perception in the hierarchy. There are no constraints on my ability to produce this perception. There are no disturbances, h(qd), to resist; there are no physical laws, g(qo), to connect me to the perception I want. If I want to fly (have a reference to sail over my backyard) I simply fly (in my imagination).
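
A toy sketch of that difference, with the imagination case reduced to copying the reference straight into the perceptual channel; the structure and numbers are illustrative assumptions:

```python
# Sketch: the same system in control mode (perception comes back through an
# environmental feedback function g, with a disturbance) and in imagination
# mode (the reference is copied straight back into the perceptual channel,
# so there is no g and no disturbance to resist).

def control_mode(reference, disturbance, steps=200):
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance      # g(qo) = qo, plus h(qd)
        output += 0.1 * (reference - perception)
    return perception

def imagination_mode(reference):
    perception = reference                     # reference played back as perception
    return perception                          # nothing to overcome: no g, no h(qd)

print("control mode:    ", round(control_mode(reference=5.0, disturbance=2.0), 3))
print("imagination mode:", imagination_mode(reference=5.0))
```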

Cognitive _control_ requires that g be a non-zero
function so that the cognitive perceptual variable, qi, can be controlled.

I don't know if it is correct to be categorical here, because there are many, many examples where controlling in imagination results in g ≠ 0 also at the first level (maybe most examples).
Let me exemplify:
1. I sit in my chair at home and think about where to drive from home to a place G. This is an example where g is virtually zero at the first level. But it happens, when I think "In the road interchange where A-street meets B-street, turn left", that I really turn my head left. Then g can't be zero in all loops.

Sometimes when we imagine (as in dreaming) we seem to slip out of imagination mode at the lowest levels and actually act on the environment to control the imagined perception. In this case there is an environmental feedback path, g, but it still doesn't really connect to the perception controlled in imagination. We experience this when we "jerk" out of a dream because our lower level outputs (muscle tensions) are acting to prevent an imagined fall. The jerk is our efforts to prevent a fall that is happening only in imagination.

2. I stand at the arrival platform waiting for my wife, who has been away a week. When I think "Soon I will see her. It will be nice to see her back again", I am in imagination mode. But I don't think all loops are closed inside the system. My heart beats quicker, and there is a smile on my lips.

These are emotional side effects of preparation for control. They are not a reflection of the existence of a feedback function, g, in imagination.

My main point is that we very often are in the control mode when we say we
are in the imagination mode.
And my main, main point is that we always are in control mode and
imagination mode when we say we are in control mode. What do you say about
that?

I think some systems are in control mode while others are in imagination mode. But systems in imagination mode are not in control mode. It seems to me that, when it comes to controlling a particular perception, I am either controlling it or thinking about controlling it (imagining it). I can't do both at the same time.

How can we express this strong "force" mathematically? Why do we imagine worse things than what actually happens? Here are the equations:
  qo = f(qi)
  qi = g(qo) + h(qd)
  g(qo) = qi* + (UV/(1-UV))*h(qd)
  qi = qi* + h(qd)/(1-UV)

If you can help me here, I will next time describe the wonderful effect of imagining positive thoughts when we place ourselves in a normal situation (on a board on the floor).

I don't see, right off hand, any way for those equations to explain why we typically imagine worse than what actually happened. I think we do this simply because we know all the horrible things that can happen in certain situations, like when our daughter has not arrived home on time. It turns out that the horrible possibilities (kidnapping, accident, natural disaster, etc) are much less likely than the non-horrible things, so when it turns out that our daughter was delayed for non-horrible reasons, we laugh at ourselves for stressing ourselves out by imagining the worst. But sometimes, tragically, the worst that was imagined is actually what happened. I think that's all that's going on.

Best

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bjorn Simonsen (2005.03.17, 14:20 EST)]

From Rick Marken (2005.03.16.0940)
I think some systems are in control mode while others are in imagination mode. But systems in imagination mode are not in control mode. It seems to me that, when it comes to controlling a particular perception, I am either controlling it or thinking about controlling it (imagining it). I can't do both at the same time.

Let me tell you a story. In my youth I was a somnambulist, and I could have written a book with many stories. One Easter I went skiing in the mountains, and one night I slept in a sleeping bag on a living room floor together with 20-30 other people. We were all asleep when I walked across the floor, opened a window and shouted out into the night: "NN, come over here. Here is plenty of room." (NN was my employer at that time.) I "saw" him coming on his skis a hundred meters away. Of course I couldn't see him. It was a dark night. I was dreaming and, as you say, imagining. But I don't think my walking and my shouting were side effects. Is this an example where I imagined something and controlled other perceptions at the same time?

Neither did you comment on my:

Place yourself on a board on the floor, 1.0 m x 5 m x 0.15 m, and balance from one end to the other. You do this very well.
Get some help and place yourself on the same board 300 meters above the ground and balance from one end to the other. _I guess the same forces act on you in both places_.
I bet a dollar that you fall down. The down-falling force is your imagination.

An imagined force does not exist. "Why did you fall down?"

bjorn

[From Bill Powers (2005.03.17.0745 MST)]

Bjorn Simonsen (2005.03.17, 14:20 EST)--

Let me tell you a story. In my youth I was a somnambulist, and I could have written a book with many stories. One Easter I went skiing in the mountains, and one night I slept in a sleeping bag on a living room floor together with 20-30 other people. We were all asleep when I walked across the floor, opened a window and shouted out into the night: "NN, come over here. Here is plenty of room." (NN was my employer at that time.) I "saw" him coming on his skis a hundred meters away. Of course I couldn't see him. It was a dark night. I was dreaming and, as you say, imagining. But I don't think my walking and my shouting were side effects. Is this an example where I imagined something and controlled other perceptions at the same time?

Clearly, the control systems for walking, opening windows, and shouting were not operating in the imagination mode, while those for controlling visual perceptions (and the rest of the imagined scenario) were in the imagination mode.

Best,

Bill P.


[Martin Taylor 2005.03.17.10.27]

[From Bill Powers (2005.03.17.0745 MST)]

Bjorn Simonsen (2005.03.17, 14:20 EST)--

Let me tell you a story. In my youth I was a somnambulist, and I could have written a book with many stories. One Easter I went skiing in the mountains, and one night I slept in a sleeping bag on a living room floor together with 20-30 other people. We were all asleep when I walked across the floor, opened a window and shouted out into the night: "NN, come over here. Here is plenty of room." (NN was my employer at that time.) I "saw" him coming on his skis a hundred meters away. Of course I couldn't see him. It was a dark night. I was dreaming and, as you say, imagining. But I don't think my walking and my shouting were side effects. Is this an example where I imagined something and controlled other perceptions at the same time?

Clearly, the control systems for walking, opening windows, and shouting were not operating in the imagination mode, while those for controlling visual perceptions (and the rest of the imagined scenario) were in the imagination mode.

Rick has suggested that in imagination mode, a reference signal is tied straight back to the appropriate perceptual input. That's how I have understood the convention. But we know even less about what actually happens in imagination than we do about perception and action through the real world (assuming one to exist:-).

Bjorn's dreaming imagination had NN acting in a world that had very much the dynamic physical characteristics of the normal world. This is often, though not always, the case in an imagined world. I can easily imagine an elephant flying by flapping his ears -- as the Disney people obviously did half a century ago -- but if I want to make a plan for future activity (like planning a holiday, as I am now doing for next month), I must imagine everything happening in a world with the feedback characteristics as close to those of the real world as I can manage.

In that "quasi-real" world, the dynamics may be faster (I can imagine a multi-hour flight in a few seconds), but the consequences of my imagined actions on my imagined perceptions have to be reasonably like what would happen in the real world. There must be gain around imagined perceptual loops, whether the match to the real world is close or completely wrong. But the effects are caused differently, not by time delays but by forced (imagined) mapping of the temporal effects of real-world delays onto the rapid imagined feedback.

What I'm suggesting here is that the direct reference-to-perception connection may exist sometimes, but it subjectively seems as if it would have to be a special case. In the more general case, some kind of imagined world offers feedback paths that affect imagined perceptions.

The stability of loops whose feedback paths go through the real world is greatly affected by the delays in the effects of one's actions on one's perceptions. Imagined worlds have much smaller delays, and this allows for loops to be stable when their partners in the real world would not be stable. Whether one is imagining that all one's plans will work, or that everything has gone horribly wrong with the child that borrowed the car for the first time, the imagined world can be (not must be) a more stable support for perceptual control than is the real world.

It's a problem that has to be faced by all planners, especially in an adversarial situation. "The best-laid plans of mice and men gang aft agley." Why so? Perhaps because the inherently greater stability of a fast feedback loop allows more precise control of perception than does the real world.
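
A rough numerical sketch of that stability point: the same loop gain that runs away with a long "real-world" feedback delay settles quietly when the feedback is nearly instantaneous; the delays and gain are assumptions chosen only to illustrate the contrast:

```python
# Sketch: identical control loops, one with a long "real-world" transport
# delay in its feedback path and one with a nearly instantaneous "imagined"
# feedback path. With the same gain, the delayed loop oscillates and runs
# away while the fast loop settles.
from collections import deque

def run_loop(delay_steps, gain=0.4, steps=200):
    output, reference = 0.0, 1.0
    line = deque([0.0] * max(delay_steps, 1), maxlen=max(delay_steps, 1))
    peak = 0.0
    for _ in range(steps):
        line.append(output)                 # output travels along the feedback path
        perception = line[0]                # ...and arrives delay_steps later
        output += gain * (reference - perception)
        peak = max(peak, abs(reference - perception))
    return round(perception, 3), round(peak, 3)   # final perception, worst |error|

print("imagined (delay 1 step):   ", run_loop(delay_steps=1))
print("real world (delay 8 steps):", run_loop(delay_steps=8))
```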

Musings, only.

Back to work!

Martin

[From Bill Powers (2005.03.17.1535 MST)]

Martin Taylor 2005.03.17.10.27 --

Rick has suggested that in imagination mode, a reference signal is tied straight back to the appropriate perceptual input. That's how I have understood the convention. But we know even less about what actually happens in imagination than we do about perception and action through the real world (assuming one to exist:-).

Let's not put the cart previous to the horse. The main things we know about imagination come from experience, not the theory. We know exactly what actually happens in our own imaginations; all we don't know is how the system has to be organized to make it happen that way. Bjorn describes an experience involving a mixture of real control actions and imagined ones. So any model of imagination has to provide a proposed mechanism for making that happen, and any proposal that says it can't happen is wrong.

Best,

Bill P.


[From Bjorn Simonsen (2005.03.18,10:55 EST)]

From Bill Powers (2005.03.17.1535 MST)

So any model of imagination has to provide a proposed
mechanism for making that happen, and any proposal
that says it can't happen is wrong.

When Rick says:

But systems in imagination mode are not in control mode. It seems to me that, when it comes to controlling a particular perception, I am either controlling it or thinking about controlling it (imagining it). I can't do both at the same time.

, I think he indicates that one system cannot both be in imagination mode
and in control mode.

But as you say some systems may be in imagination mode and other systems may
be in control mode. That explains somnambulism.