Hierarchical MCT model

[From Bill Powers (970313.0400 MST)]

Now that we're starting to get somewhere with the MCT model, I've started
thinking about how it would look in a hierarchical form like that of the PCT
model. While the HPCT model is still just an approximation to the real
thing, leaving out interactions among different perceptual functions, the
basic principle of controlling simple scalar quantities seems promising as a
step toward the best model of a living system. I won't speak about
engineering models; maybe the one-complex-system approach will turn out to
surpass the way living systems control; maybe not.

In the light of our theodolite example and my experience with the Little Man
model, there seems to be a basic principle of hierarchical control: get the
highest derivative of the system (that matters) under control first.

Suppose we have a mass with damping, and a force is applied to it to bring
the velocity from 0 to v. The force required to maintain a specific velocity
is proportional to the damping, so v(final) = k*f. The length of time
required to reach the final velocity depends on the mass of the object being
moved, which we can take as fixed. The most direct way to achieve the new
velocity is to set the force to the required amount, and then wait for the
velocity to reach asymptote.
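This open-loop case can be sketched numerically. A minimal simulation (the mass, damping, and force values here are assumed for illustration, not taken from the text): with m*dv/dt = f - b*v, the final velocity is f/b and the approach has time constant m/b, regardless of the force chosen.

```python
def simulate_open_loop(m=1.0, b=1.0, f=2.0, dt=0.001, t_end=5.0):
    """Euler integration of m*dv/dt = f - b*v, starting from rest."""
    v, t, trace = 0.0, 0.0, []
    while t < t_end:
        v += dt * (f - b * v) / m
        t += dt
        trace.append((t, v))
    return trace

trace = simulate_open_loop()
v_final = trace[-1][1]                     # approaches f/b = 2.0
# at t = tau = m/b = 1 s the velocity is ~63% of its final value
v_at_tau = min(trace, key=lambda p: abs(p[0] - 1.0))[1]
print(v_final, v_at_tau)
```

Raising f makes the final velocity higher but does not shorten the wait: the time to asymptote is fixed by m/b.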

Of course the velocity could be increased much faster if the force could be
made many times what is needed for the desired velocity, but, as this
velocity was approached, the force could be adjusted downward until it
became just enough to sustain the new velocity. This could be done by
dynamic filtering of the signal producing the force, but a much simpler way
is to use negative feedback.

By using a velocity reference signal indicating the desired velocity, and a
velocity sensor indicating the actual velocity, the system can produce an
initially large error signal that produces a large force. This force could
be far larger, hundreds of times larger, than needed to produce the desired
velocity. The mass would accelerate rapidly, as if toward a very high velocity.

However, as the mass accelerates, its velocity would increase toward the
reference velocity and the error signal would decrease. The force would thus
also decrease. The result would be that just as the mass reached the desired
velocity, the force would have declined to just the value needed to sustain
that velocity. In fact, the velocity would rise exponentially toward the
desired limit, with a time constant equal to the open-loop
time constant divided by the amplification factor in the loop. If the
open-loop time constant were 1 second and the loop gain were 100, the time
constant with the control system acting would be 0.01 second.
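The gain-divides-the-time-constant effect can be checked with a small simulation. The open-loop time constant of 1 s and loop gain of 100 follow the text; the specific mass and damping values (m = 1, b = 1) are assumed:

```python
def simulate_closed_loop(m=1.0, b=1.0, G=100.0, r=1.0, dt=1e-4, t_end=0.1):
    """Euler integration of m*dv/dt = G*(r - v) - b*v, starting from rest."""
    v, t, trace = 0.0, 0.0, []
    while t < t_end:
        f = G * (r - v)             # large initial error -> large force
        v += dt * (f - b * v) / m
        t += dt
        trace.append((t, v))
    return trace

trace = simulate_closed_loop()
# at t = 0.01 s (the predicted closed-loop time constant, tau/G),
# the velocity has already covered ~63% of the way to the reference
v_at_tau = min(trace, key=lambda p: abs(p[0] - 0.01))[1]
print(v_at_tau)
```

Note the initial force is G = 100 times the force needed to sustain the final velocity, and it declines automatically as the error shrinks, just as described above.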

The effect would be, relative to the signal that is specifying the new
velocity, just as if the mass of the object had been divided by the loop gain.

Now suppose it turns out that, as in a muscle, the applied force can't be
changed instantly. When a driving signal appears instantaneously, let's say
that the force rises toward a value proportional to the driving signal,
along a curve that to a first approximation can be represented by a
decelerating exponential. We now have a higher derivative to consider: the
rate of change of acceleration, or the third derivative of position. The
presence of this higher derivative will create problems for the velocity
control system; in fact, the maximum permissible loop gain of the velocity
control system will be lower, because there will be a tendency to
oscillation. The driving force can no longer be changed instantly. To allow
a higher gain in the velocity control system, we must somehow reduce the
time constant of the force response to the driving signal.

This can be done in exactly the same way. Instead of the driving signal
producing force directly, it becomes the reference signal of a control
system that can also sense the force being generated. If this system has a
high gain, the rise-time of the force in response to the driving signal can
be made much faster -- faster by a factor roughly equal to the loop gain of
this new control system. If the open-loop time constant for force changes is
50 milliseconds,
and the loop gain is 100, the closed-loop time constant will be 0.5
milliseconds.

Without the force-control system, the velocity control system would have
some maximum loop gain that would permit stable operation. With this force
control system having a loop gain of 100, the velocity-control system could
have a gain 100 times larger and still remain stable.
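The two-level cascade can be sketched the same way. The 50 ms actuator lag and the force-loop gain of 100 follow the text; the outer velocity gain, mass, and damping are assumed values chosen for illustration:

```python
def simulate_cascade(m=1.0, b=1.0, tau_f=0.05, Gf=100.0, Gv=50.0,
                     v_ref=1.0, dt=1e-5, t_end=0.2):
    """Inner loop: lagged actuator df/dt = (u - f)/tau_f, u = Gf*(f_ref - f).
       Outer loop: f_ref = Gv*(v_ref - v); mass m*dv/dt = f - b*v."""
    v = f = t = 0.0
    while t < t_end:
        f_ref = Gv * (v_ref - v)   # outer (velocity) error -> force reference
        u = Gf * (f_ref - f)       # inner (force) error -> actuator drive
        f += dt * (u - f) / tau_f  # actuator responds with its 50 ms lag
        v += dt * (f - b * v) / m  # mass with damping
        t += dt
    return v

print(simulate_cascade())          # settles near the velocity reference
```

Because the inner loop shrinks the effective actuator lag to a fraction of a millisecond, the outer loop can run at a gain that would cause ringing if it drove the lagged actuator directly.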

The limit on this process is set by delays in the lowest control system, and
by noise in the sensor and comparator.

The spinal reflexes are organized in exactly this way. The lowest level of
control is the tendon reflex, which directly measures the force generated by
a muscle. The lag between a change in muscle force and a change in the
tendon signal is set physically by the speed of sound in muscle and tendon,
and by the refractory period of the sensors. Altogether it must be one or
two milliseconds. Most of the lag is in signal-transmission, and amounts to
perhaps 5 milliseconds. The muscle time-constant, measured open-loop, is
around 50 milliseconds for a contraction, so in principle the tendon reflex
could reduce it to 5 or 10 milliseconds. With many force-sensors acting in
parallel in the same loop, the delay could be considerably shorter than
that, because some sensors would always be at the end of their refractory
periods.

The second level of control involves the phasic part of the muscle spindle
response. Again, the physical lag time here is only a millisecond or two,
since the phasic component is created by a lag between the muscle stretch
and the response of the interior part of the spindle (embedded in the
muscle) to a stretch at its ends. The velocity control system can have a
relatively high gain, too, given the force control system.

The third level of control involves the tonic part of the stretch reflex,
which gives approximate muscle length control and thus joint angle control.
This control system experiences a limb with an apparent mass that is only a
fraction of the actual mass, and operates via muscles that have an apparent
response time that is far shorter than the open-loop response time. The
result is that a joint angle can be changed from one stable value to another
very different angle in only about 150 to 200 milliseconds. These speeds are
achieved by momentary muscle tensions close to the maximum that can be
produced without physical damage. The maximum speed is high enough that the
eye can't follow the movement; it's a blur.

It seems, therefore, that getting control of the highest significant
derivative is a basic principle of hierarchical control. Note that this
greatly simplifies the problem of stabilization, because once we have
reached the highest derivative, the associated control loop becomes a simple
first-order lag system, which is inherently stable. This is undoubtedly why,
in the Little Man Version 2, no special dynamic filtering was needed to
attain stability. The required filtering is inherent in the hierarchical
arrangement.

Finally, the MCT model. When we have reduced each control system to a simple
one-dimensional system with a first-order lag, what kind of "model of the
environment" would be involved? It would be either a simple constant of
proportionality (if the lag is in the output function as for the force
control system) or a single integration (if the lag is in the environment as
for the velocity control system). In either case, adjusting the system for
optimality involves changing just one parameter: the constant of
proportionality or the rate of integration. The required "world model" has
been reduced to the absolute minimum.

This means that adjusting any given control system for optimal performance
involves changing only one parameter, the output gain. If the output gain at
the lowest level, controlling the highest derivative, is set for stable
control, then each higher system's maximum permissible gain is determined,
and each higher system can also be optimized by adjusting its output gain.
It is impossible to do better than that.
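This one-parameter tuning can be illustrated by sweeping the output gain of a velocity loop that drives the lagged actuator directly, with no inner force loop; all numerical values here are assumed. Past a certain gain the step response starts to ring, which is the "maximum permissible gain" mentioned earlier:

```python
def step_overshoot(Gv, m=1.0, b=1.0, tau_f=0.05, r=1.0, dt=2e-4, t_end=2.0):
    """Fractional overshoot of the velocity step response at gain Gv."""
    v = f = t = 0.0
    peak = 0.0
    while t < t_end:
        f += dt * (Gv * (r - v) - f) / tau_f  # lagged force response
        v += dt * (f - b * v) / m             # mass with damping
        peak = max(peak, v)
        t += dt
    v_final = Gv * r / (Gv + b)               # steady-state velocity
    return peak / v_final - 1.0

# keep the largest gain whose step response overshoots by less than 5%
usable = [G for G in range(1, 51) if step_overshoot(float(G)) < 0.05]
print(max(usable))
```

Adjusting this single parameter is the whole optimization: everything else about the "world model" is a fixed proportionality or integration.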

In human systems, there does not appear to be any model of the environment
at the lower levels of control. When the inputs are lost at these levels,
performance simply collapses, and it recovers very slowly if at all. In
fact, recovery from injuries to the afferent paths seems to go at the rate
of nerve regeneration, which is both slow and limited in the adult. There is
no hint of control that uses an internal simulation of the external feedback
path.

At higher levels, it does appear that the imagination connection comes into
play, in cognitive planning most obviously, but perhaps also at other levels
in the higher ranges. However, the HPCT model still treats each control
system at the higher levels as a one-dimensional scalar system, so for any
one system, the "internal simulation" is still just a constant of
proportionality or a simple integration, with the output function being of
the complementary type. Complex behavior involves many of these elementary
control systems operating at once, so the overall effect can be that of a
complex simulation of the environment -- when many systems are operating in
the imagination mode at the same time.

The most obvious difference between the MCT and PCT models is the normal
mode of operation. In the PCT model, the normal mode is the one in which the
perceptual signal arises directly from lower-level perceptions or sensors.
In the MCT model, the normal mode is the one in which the perceptual signal
is derived from the internal simulation. In fact, the MCT model never uses
the real-time information from lower systems or sensors _for control_. That
information is used only to optimize the internal simulation. This means
that the ONLY way the MCT model can deal with disturbances is by calculating
them and using the calculated disturbance in the simulation. With the
hierarchical version, this may not be such a disadvantage, since the
disturbance, too, is a simple scalar quantity. However, the PCT model, which
needs no internal representation of the disturbance, can operate without
this calculation, and so is simpler. Or to put it differently: it is always
better, when possible, to control the real-time perception than to control
the simulated one. When that is not possible, there are advantages to being
able to control a simulated perception.

This discussion represents, of course, only the beginnings of trying to
develop a hierarchical model. When as much work by as many people as have
been involved in other approaches has been done, we will know much more
about the feasibility of these suggestions.

Best,

Bill P.

[from Jeff Vancouver 970408.1700 EST]

As a control system, I suffer from a horrible case of lag. Anyway, to
respond to some posts of some time ago...

[From Bill Powers (970313.0400 MST)]

Now that we're starting to get somewhere with the MCT model, I've started
thinking about how it would look in a hierarchical form like that of the PCT
model. ....

In human systems, there does not appear to be any model of the environment
at the lower levels of control. When the inputs are lost at these levels,
performance simply collapses, and it recovers very slowly if at all....

At higher levels, it does appear that the imagination connection comes into
play, in cognitive planning most obviously, but perhaps also at other levels
in the higher ranges. However, the HPCT model still treats each control
system at the higher levels as a one-dimensional scalar system, so for any
one system, the "internal simulation" is still just a constant of
proportionality or a simple integration, with the output function being of
the complementary type. Complex behavior involves many of these elementary
control systems operating at once, so the overall effect can be that of a
complex simulation of the environment -- when many systems are operating in
the imagination mode at the same time.

This latter paragraph represents the idea that I have been trying to
pursue, but somehow the previous paragraph seems to be the only response I
get from the net. Mary Powers (970325) notes that anticipation, planning
etc. are examples of imagination mode and that some think that is all
there is. I think one should be careful not to mistake the view that "that
is all there is" for "this is an area that interests me and where I will
focus my research."

Because I focus my research there, I do not usually concern myself with
the test for a controlled variable. That means, depending on his mood,
that Rick thinks I am not doing PCT research. He is correct that I often
do IV-DV work. An example is the Vancouver experiment I suggested some
time back. The IV was input on, input off (eyes closed in my version).
The DV was average error (distance between target and cursor). The
research was not trying to assert that input caused lack of error, but it
was useful for testing the merit of PCT and the possibility of modeling
by the human control system. (It was interesting that we could not come
to any consensus on the interpretation of the results, and I was
distressed that no one wanted to follow up on trying to publish the
study. Different reference signals I guess).

Also, I wanted to endorse Bruce Abbott's post (970317.1720 EST) on the
difference between prediction and understanding in astrology and
psychology. My only reservation is that I do think the many psychologists
are trying to develop understandings, that they are making progress, and
that not surprisingly, that progress is often compatible with PCT. Many,
in fact, use the negative feedback loop as a key component.

Finally, I wanted to respond to a post by Bill P. (I lost the reference)
on the difference between applied and basic research. Bill draws the
common but mistaken conclusion that applied researchers are not testing
theories. Each application is a test of a theory. Issues related to
implementation and the goals of the researcher often make it a poor test
or an undocumented test. However many applied researchers, like myself,
attempt to carefully construct studies to test implications of a theory.
To get published in our best journals, attempts must be made to measure
or manipulate key constructs that the theory suggests as the mechanisms.

Oh yeah, one more thing. I and others try to find higher-order reference
signals that are universal, or at least well-represented in a population.
For example, I am studying impression management - the desire to control
the perceptions others have of you. People differ in the level of control
they desire, the ways they do it, and the input functions they use to
determine how well they are doing it. But as social animals, it is
reasonable to believe most of us attempt to do it. Given that, it can be
useful in the classroom to appeal to that desire, or to avoid having students
experience errors in it, depending on what one wants to do. In other words, we might
help teachers do their work better without a real time assessment of all
the variables each student controls.

Ok, I think I have caught up, for this month.

Later

Jeff

[From Rick Marken (970409.0820 PDT)]

Jeff Vancouver (970408.1700 EST) --

Because I focus my research there [control of higher level variables,
imagination?], I do not usually concern myself with the test for a
controlled variable.

Why not? I don't see how you can study control without _always_ "concerning
yourself" with the variables people are controlling, even if some of those
variables are controlled in imagination.

That means, depending on his mood, that Rick thinks I am not doing PCT
research.

Whether I am happy or sad, I know that you have never done (at least, you
have never published) any PCT research:-)

He is correct that I often do IV-DV work.

It's the _conventional_ IV-DV work that is the problem. All research involves
manipulating variables (manipulated variables are IVs by definition) and
looking for concomitant variation in other variables (which can be called
DVs). In The Test, the IV is a disturbance and the "DV" is the (hypothesized)
controlled variable. If the IV produces far less variation in the DV than
expected, then the DV is likely to be under control.
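The logic of the Test described here can be sketched in a few lines. The disturbance waveform and gain below are assumed; the point is only the variance comparison between the disturbance (IV) and the hypothesized controlled variable (DV):

```python
import math

dt, k = 0.01, 20.0                  # time step (s), subject's output gain
o, c_trace, d_trace = 0.0, [], []
for i in range(5000):
    t = i * dt
    d = math.sin(0.7 * t) + 0.5 * math.sin(2.3 * t + 1.0)  # IV: disturbance
    c = o + d                       # DV: hypothesized controlled variable
    o += dt * k * (0.0 - c)         # output integrates the error (ref = 0)
    c_trace.append(c)
    d_trace.append(d)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# under control, the DV varies far less than the disturbance alone would make it
print(rms(c_trace), rms(d_trace))
```

If the hypothesized variable were not under control, its variation would roughly track the disturbance; the large reduction is the signature the Test looks for.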

An example is the Vancouver experiment I suggested some time back...The
IV was input on, input off (eyes closed in my version). The DV was average
error (distance between target and cursor).

The Vancouver experiment was a perfectly good example of PCT research.
What you were doing was monitoring the behavior of a controlled variable
(distance between cursor and target) with and without a perception of
that variable available. This is a standard aspect of doing The Test (see
Powers' 1979 Byte article). What you found was that control with eyes
closed was not as bad as expected. This suggests that the visual variable
(distance between target and cursor) is not the variable (or not the
_only_ variable) the subject is controlling in this situation. The next
step would have been to figure out what variable(s) the subject _is_
controlling.

(It was interesting that we could not come to any consensus on the
interpretation of the results, and I was distressed that no one wanted to
follow up on trying to publish the study. Different reference signals I
guess).

You're the one who proposed the study. Why didn't _you_ follow up and try to
publish it? Are you suggesting that others (like myself, perhaps) are
the only ones responsible for doing and publishing PCT research? In fact,
I do have different reference signals than you; I've been doing (and
publishing) PCT research for over 15 years -- many of those studies covering
the same ground as the "Vancouver experiment". I currently have a paper on
PCT research under consideration for publication and I have developed quite
a few Java based Web experiments (a new one on hierarchical control should
be ready in a week or so); and I do all this in my spare time because I am
no longer a professor (like YOU) whose main job is to DO research. So please
don't tell me about your "distress" about no one following up on your little
experiment. I can't control your damn perceptions for you:-)

Oh yeah, one more thing. I and others try to find higher-order reference
signals that are universal

How in the world can you determine higher order reference signals (I presume
you mean the perceptual variables that correspond to these reference signals)
if you are not (as you say above) testing for controlled variables?

Best

Rick

[from Jeff Vancouver 970410.14:30 EST]

[From Rick Marken (970409.0820 PDT)]

Jeff Vancouver (970408.1700 EST) --

> Because I focus my research there [control of higher level variables,
> imagination?], I do not usually concern myself with the test for a
> controlled variable.

Why not? I don't see how you can study control without _always_ "concerning
yourself" with the variables people are controlling, even if some of those
variables are controlled in imagination.

Because I assign a task (like tracking) where the reference signal, and
somewhat the perception, is obvious. However, the results often do not
suggest that the perception is obvious. (See below.) Nonetheless, this
is common to some of your experiments too.

>That means, depending on his mood, that Rick thinks I am not doing PCT
>research.

Whether I am happy or sad, I know that you have never done (at least, you
have never published) any PCT research:-)

I will send you one accepted paper that I consider PCT research. I have
not sent you this paper because I think you will hate it. I have
published two theoretical papers (one in behavioral science, one in
Psychological Bulletin) and you did not like either. Nonetheless, it is
true that as far as any perception that you could have, I have not
published any PCT research. I will see what your input function makes of
my in-press paper (if my model of you is correct, you will not like it).

>He is correct that I often do IV-DV work.

It's the _conventional_ IV-DV work that is the problem. All research involves
manipulating variables (manipulated variables are IVs by definition) and
looking for concomitant variation in other variables (which can be called
DVs). In The Test, the IV is a disturbance and the "DV" is the (hypothesized)
controlled variable. If the IV produces far less variation in the DV than
expected, then the DV is likely to be under control.

Whoa! Where did this word _conventional_ come from? So there is "good"
IV-DV research and "bad" IV-DV research. From past posts I think you mean
research where the researchers infer causation when the IV relates to the
DV. My point has long been that much of the psychological literature is
not of this type. Consider the Kahneman & Tversky work on
"non-rationality" in human decision making. K&T do not say that, because
people are more likely to choose a surgery presented as having a 95%
survival rate than one presented as having a 5% death rate, the
presentation causes the choice -- only that something interesting is
going on when the two ways of presenting the data relate to choice.
(That they use a between-person design is because of the contamination
that would occur with a within-person design; that their explanations
fall short of the kind you or I might be looking for does not, for me,
diminish the finding.) The original purpose was to demonstrate the error
in the economists' assumptions about humans. It did that.

> An example is the Vancouver experiment I suggested some time back...The
> IV was input on, input off (eyes closed in my version). The DV was average
> error (distance between target and cursor).

The Vancouver experiment was a perfectly good example of PCT research.
What you were doing was monitoring the behavior of a controlled variable
(distance between cursor and target) with and without a perception of
that variable available. This is a standard aspect of doing The Test (see
Powers' 1979 Byte article). What you found was that control with eyes
closed was not as bad as expected. This suggests that the visual variable
(distance between target and cursor) is not the variable (or not the
_only_ variable) the subject is controlling in this situation. The next
step would have been to figure out what variable(s) the subject _is_
controlling.

First let me say that I think this is the first time I have seen you
describe the findings the way I would, that closed eyes was not as bad as
expected (although I would substitute random for expected). Second, I
would say that the variable that the subject is controlling is mouse
movement as perceived kinetically. I don't think that is the interesting
question. The interesting question is what is the reference signal (or
better, where is it coming from) for the mouse movement control system.
Third, that I thought of this experiment I attribute to my training in
psychology and research methods, not to my acquaintance with PCT.
(Obviously the reason for the experiment was my acquaintance with both).

Fourth, you make a big deal out of the idea that conventional or
traditional or psychological or whatever IV-DV research looks for variance
in the DV that corresponds with variance in the IV, but that PCT research
"knows" the DV should _not_ vary as the IV varies. I have two things to
say about that. One, a conventional psychology researcher could argue and
hypothesize no relationship between the IV and DV because of the negative
feedback loop and I have seen them do that. But more importantly, your
argument always assumes the DV is the environmental variable. As we can
see in the Vancouver experiment, the DV is an indirect measure of error.
The general form of this paradigm is that error increases as control
decreases. Control is operationalized differently (attempts to manipulate
gain, input, output blocks, lag, etc.). In other words, there is a whole
host of IV-DV experiments that are doable within PCT, and yes, I think
subparts of the TEST are exactly that, IV-DV. Of course it is the
combination of the subtests that leads to the conclusions, but that is true
of "conventional" research.

Bottom line here Dr. "I wrote a research textbook so I should know what I
am talking about" Marken, stop blanketly condemning IV-DV research :-)

>(It was interesting that we could not come to any consensus on the
> interpretation of the results, and I was distressed that no one wanted to
> follow up on trying to publish the study. Different reference signals I
> guess).

You're the one who proposed the study. Why didn't _you_ follow up and try to
publish it?

Ouch, this is somewhat true and I have thought about it. Sometime around
Christmas I posted a request for collaboration on the study and got no
response. I felt (and still feel) that I could not publish this study by
myself for several reasons. Certainly at that time I was not in any
position to work on it as I was looking for a job. Now that I have one,
I have been trying to catch up on the backlog, which as far as this list
is concerned, often means deleting vast numbers of posts (this has to do
with the number, not the quality of the posts).

The other reasons relate to my understanding. My areas of publication
have not included the more micro-cognitive types of issues that we were
dealing with in that experiment. You might not believe this, Rick, but
you are much more a micro-cognitive type psychologist than I.
Furthermore, the issue is positioning the study. The study arose to
provide evidence that we can model as a way to control when input is lost.
To a non-PCTer, the issue might seem trivial -- of course we model. But
what we mean by modeling and/or what the findings from that study mean is
somewhat controversial, which I think can come across and make the study
interesting to psychologists. Having multiple perspectives represented on
the paper would help me identify where people are led astray and reduce
the likelihood of "strawman" arguments (which I think are legitimate
concerns). Further, we might want some other studies that build a case
and give more complete understanding. Also, I could not do the
programming so I relied on Bill. But the close-your-eyes compromise
between the design I sought and the program Bill wrote seems like it would
not fly in a good psychology journal. Not enough control. Finally, I could not
understand Bill's pure PCT "clamp down" explanation of the findings.
Thus, along the lines of follow-up studies to increase understanding, I
think the players in this debate ought to agree ahead of time on a study
(or set of studies) of which the results would resolve the differences.
It is difficult for one control system to attempt to resolve conflicting
errors when the errors are not occurring within his or her system. I was
somewhat hoping that the Bill and Hans theodolite model would be along
these lines. I have not tried to follow that, though, hoping to await a
friendly conclusion while avoiding the acrimonious process. Perhaps I am
waiting for Godot. Anyway, I am still looking for collaborators, but now
I have a little more time to work on it.

Are you suggesting that others (like myself, perhaps) are
the only ones responsible for doing and publishing PCT research? In fact,
I do have different reference signals than you; I've been doing (and
publishing) PCT research for over 15 years -- many of those studies covering
the same ground as the "Vancouver experiment".

Yet another reason: to get published, research must be novel. Is it?

I currently have a paper on
PCT research under consideration for publication

I am carrying it around in my briefcase, hope to get to it soon. Mine,
by the way, was rejected.

and I have developed quite
a few Java based Web experiments (a new one on hierarchical control should
be ready in a week or so)

Are these experiments or demonstrations? Cool either way, but for
different reasons.

; and I do all this in my spare time because I am
no longer a professor (like YOU) whose main job is to DO research.

So, I have been meaning to ask, but of course it is none of my business,
but what do you do for a living?

>Oh yeah, one more thing. I and others try to find higher-order reference
>signals that are universal

How in the world can you determine higher order reference signals (I presume
you mean the perceptual variables that correspond to these reference signals)
if you are not (as you say above) testing for controlled variables?

In some of my work I am doing variations of the TEST. However, at the
higher levels, there are many problems with the TEST. Runkel has talked
about some of these. Some issues have been talked about on the net
(ethics, practicality). Others are related to the issue of the
imagination mode (what I think of as the operation of models). But how
can I talk about them if my descriptions always cause(?) errors in your
perceptions? So instead of the test I often do something worse, I rely on
a preponderance of evidence from the psychological literature to argue
their possible existence. I could be wrong about them, and if I am that
is one among many possible reasons some of my studies do not work out. It
is messy working at the higher levels. Thanks for your support :-)

Later

Jeff

[From Rick Marken (970411.0900 PDT)]

Me:

I don't see how you can study control without _always_ "concerning
yourself" with the variables people are controlling

Jeff Vancouver (970410.14:30 EST) --

Because I assign a task (like tracking) where the reference
signal, and somewhat the perception, is obvious.

I don't understand this at all. The reference and perceptual signals are
theoretical entities; how can they be "obvious"? The controlled
variable (the presumed correlate of the perceptual signal) is not
obvious either. Do you monitor the state of a presumed controlled
variable (and disturbances to that variable) throughout all you
experiments? Is that why the reference and perceptual signals are
"obvious"?

Me:

It's the _conventional_ IV-DV work that is the problem.

Jeff:

Whoa! Where did this word _conventional_ come from?

"IV" and "DV" are just words that describe the role of variables in
an experiment; an IV is a variable that is manipulated; a DV is a
variable that is observed while (or after) manipulation of the IV.

In "conventional" behavioral research the IV is typically some aspect of
the subject's environment; the DV is some measure of the subject's
"behavior". The goal of this research is to determine whether or not the IV
has an "effect" on the DV (statistical inference -- t tests,
ANOVA, etc -- is used to decide the chances of being in error when
concluding that the IV does have an effect on the DV).

I now call this kind of research "conventional IV-DV research" because
people (like Bruce Abbott) have pointed out that PCT research could also
be described as "IV-DV"; in PCT, the IV is a disturbance and the DV is a
measure of the hypothesized controlled variable. People like Bruce use
the same words ("IV-DV") to describe conventional and PCT research in
the hopes of convincing themselves and others that these two kinds of
research are essentially equivalent (same name = same phenomenon) and
that psychologists have, thus, been doing PCT research (since it is the
same as conventional research) all along. This sort of linguistic
legerdemain may be emotionally satisfying to the user but it doesn't
help others (who are willing to learn PCT) learn how to study control.

So there is "good" IV-DV research and "bad" IV-DV research.

All research is "good" (it's a lot better than arm chair speculation).
But there is IV-DV research (the PCT kind) that can tell you what
organisms control and there is IV-DV research (the "conventional" kind)
that can't.

Consider the Kahneman & Tversky stuff on "non-rationality" in human
decision making

Give me a reference to a research report of theirs and I will be happy
to evaluate it from a PCT perspective.

I would say that the variable that the subject is controlling is
mouse movement as perceived kine[sthe]tically. I don't think
that is the interesting question. The interesting question is
what is the reference signal (or better, where is it coming from)
for the mouse movement control system.

I presume, then, that you also think it's more interesting to know
the source of all the water on Europa before you know that there _is_
water on Europa? You're just trying to talk the talk, Jeff. Why not
learn to walk the walk?

you make a big deal out of the idea that conventional or
traditional or psychological or whatever IV-DV research looks
for variance in the DV that corresponds with variance in the IV,
but that PCT research "knows" the DV should _not_ vary as the
IV varies.

No, Jeff. When you do The Test you have to know how the DV (the
hypothesized controlled variable) would behave if it were NOT
under control. In PCT research we _guess_ at a controlled variable
and then see if, indeed, it does vary far less than expected (based
on a physical analysis of the situation).
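The logic of the Test described above can be sketched in simulation. Everything here (the disturbance, the loop gain, the threshold) is my illustrative assumption, not Marken's actual procedure: we guess a controlled variable, apply a known disturbance, and compare how much the variable varies with and without control acting.

```python
import math

def run(disturbance, controlled=True, gain=50.0, dt=0.01, steps=2000):
    """Simulate cv = output + disturbance with an integrating output."""
    reference = 0.0   # guessed reference value for the controlled variable
    output = 0.0      # system output that opposes the disturbance
    trace = []
    for n in range(steps):
        cv = output + disturbance(n * dt)  # environmental variable
        error = reference - cv
        if controlled:
            output += gain * error * dt    # integrate error into output
        trace.append(cv)
    return trace

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

dist = math.sin  # a smooth, known disturbance

with_control = run(dist, controlled=True)
no_control = run(dist, controlled=False)

# Under control, the hypothesized controlled variable varies far less
# than the physical analysis (cv = disturbance when nothing opposes it)
# would predict:
print(rms(with_control) < 0.2 * rms(no_control))  # prints True
```

The comparison in the last line is the heart of the Test: the expected variation comes from the physical analysis (the uncontrolled run), and control is inferred from how much less the variable actually varies.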

a conventional psychology researcher could argue and hypothesize
no relationship between the IV and DV because of the negative
feedback loop and I have seen them do that.

I have too and I give an example (in the paper you have in your
briefcase) of some researchers who did just that. But good examples like
the one I cite in that paper are VERY few and far between. If
you know of other examples of conventional research that is
essentially a Test for the Controlled Variable then I'd REALLY like to
know about it.

your argument always assumes the DV is the environmental variable.

In any research the DV MUST be a variable in the environment of
the RESEARCHER. In that sense, yes, the DV is always an environmental
variable.

As we can see in the Vancouver experiment, the DV is an indirect
measure of error.

That's your theoretical interpretation of your measurement. What you
are measuring is an environmental variable (for both the researcher AND
the subject) -- the pixel distance between two lines.

Bottom line here Dr. "I wrote a research textbook so I should
know what I am talking about" Marken, stop blanketly condemning
IV-DV research :-)

I'm not "condemning" it; I'm just pointing out that _conventional_
IV-DV methods tell you nothing about control because they tell you
nothing about which variables are under control. That's not
condemnation: it's revelation (hallelujah;-))

The study arose to provide evidence that we can model as a way to
control when input is lost.

Right. And it was looking pretty much like we DON'T.

To a non-PCTer, the issue might seem trivial -- of course we model.

But we don't; we control perceptions; many of them. But you can
only convince yourself of this if you do the modeling. I also think you
might like to try my "integral control" demo. I guess I'll
develop that and put it on the web to show that mental modeling has
nothing to do with what is happening when we "control in the
blind".

Me:

I have developed quite a few Java based Web experiments

Jeff:

Are these experiments or demonstrations?

They were experiments. But the results are so consistent that I call
them demonstrations now because they demonstrate aspects of the
phenomenon of control.

what do you do for a living?

Space systems engineering.

In some of my work I am doing variations of the TEST. However,
at the higher levels, there are many problems with the TEST.

Actually, I think it might be easier to test for control of higher level
variables than for control of lower level ones. I'd like to hear about
your work on testing for control of higher level variables.
What problems have you run into?

how can I talk about them if my descriptions always cause(?)
errors in your perceptions?

Are you controlling to reduce my errors or yours? You should be able to
talk about these experiments easily if doing so causes no error for you.

It is messy working at the higher levels.

It seems very neat to me;-)

Best

Rick