Testing and Experimenting

[From Rick Marken (990412.0930)]

CHUCK TUCKER (920925C) --

Your instructions suggest another nice way to illustrate the
difference between the PCT approach to research (which I
will call "Testing") and the conventional approach to research
(which I will call "Experimenting"). Your instructions are
like a PCT research methods text; they explain how to do PCT
research (Testing). I think one of the most significant parts of
these instructions defines the _goal_ of PCT research (Testing):

2. Your task is to discover what P has in mind without
asking any questions or using any verbal communication at all.

The goal of Testing is to determine what _perception_ the participant
(P) wants ("has in mind"). In other words, the goal is to determine
the perceptual variable the participant is controlling.

The equivalent instructions for doing conventional research
(Experimenting) might read something like this:

2. Your task is to discover how changes in the coins affect
P's behavior.

You would learn that the way to do Experimenting is to change
the coins in some way (manipulate an independent variable), under
controlled conditions, and look for changes in some measure of
P's behavior (measure a dependent variable). So you might define
the independent variable as the vertical position of coin 1
(see coin layout in diagram below) and the dependent variable
as lateral movement of coin 4.


------
Starting coin layout:

1 2

3 4
-------

Let's say you find that vertical movements of coin 1 are,
indeed, associated with lateral movements of coin 4; when
you move coin 1 up P moves coin 4 to the right; when you move
coin 1 down P moves coin 4 to the left. So you have achieved
the goal of Experimenting; you have discovered how variations
in an IV (the position of coin 1) affect a DV (movements of
coin 4).

Note that when you achieve the goal of Experimenting (finding
IV-DV relationships) you have _not_ achieved the goal of
Testing. All you know from Experimenting is the relationship
between a disturbance and an action. This discovery is consistent
with _many_ possible controlled variables. You could do more
Experimenting -- finding more disturbance-action relationships --
and the results of such Experimenting would certainly narrow the
possibilities regarding the variable that P might be controlling.
But since this Experimenting is not driven by hypotheses about
what the controlled variable might be (guesses about what pattern
P "has in mind") it is bound to be very inefficient, ad hoc and
still not definitive (for example, the results of many ad hoc
Experiments may be consistent with the hypothesis that P is
controlling for a "square" but, without Testing, it is still
possible that P has something else in mind, such as a
"parallelogram"). Moreover, the results of Experimenting rarely
provide data about the state of possible controlled variables.
For example, the typical result of Experimenting is data regarding
the relationship between the IV and DV (vertical coin 1 position
and lateral coin 4 movement in this case); there is no information
about the state of a potential controlled variable (such as the
pattern of all four coins) after the manipulation.
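
A minimal simulation sketch of this point (assuming, purely for
illustration, that P controls something like p = y1 - x4, the vertical
offset of coin 1 minus the lateral offset of coin 4, with a simple
proportional loop; many other controlled variables would generate the
same data):

# Sketch only: the hypothesized perception, gain and reference below are
# assumptions for illustration, not a claim about what P "has in mind".
def run_trial(y1, steps=200, gain=0.2, reference=0.0):
    x4 = 0.0                      # P's action: lateral position of coin 4
    for _ in range(steps):
        p = y1 - x4               # hypothesized controlled perception
        error = reference - p
        x4 -= gain * error        # act so as to push p toward the reference
    return x4

for y1 in (-2.0, -1.0, 0.0, 1.0, 2.0):    # experimenter manipulates the IV
    print("IV y1 =", y1, " -> DV x4 =", round(run_trial(y1), 2))

# The experimenter sees a perfectly lawful IV-DV effect (x4 tracks y1),
# yet nothing in that relationship says WHICH function of the coin
# positions P is actually controlling.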

Using different instructions to teach Testing and Experimenting
could be an excellent way to teach students (and other social/
behavioral scientists) the difference between PCT and conventional
research in terms of their aims and methods.

Best

Rick
---
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (990412.1258 MDT)]

Rick Marken (990412.0930) --

Let's say you find that vertical movements of coin 1 are,
indeed, associated with lateral movements of coin 4; when
you move coin 1 up P moves coin 4 to the right; when you move
coin 1 down P moves coin 4 to the left. So you have achieved
the goal of Experimenting; you have discovered how variations
in an IV (the position of coin 1) affect a DV (movements of
coin 4).

Note that when you achieve the goal of Experimenting (finding
IV-DV relationships) you have _not_ achieved the goal of
Testing. All you know from Experimenting is the relationship
between a disturbance and an action.

This is a really pretty way of contrasting conventional research with
testing for controlled variables. Well said.

Best,

Bill P.

[From Rick Marken (990412.2100)]

Bill Powers (990412.1258 MDT) --

This is a really pretty way of contrasting conventional research
with testing for controlled variables. Well said.

Thank you! Thank you!

Best

Rick


---
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[from Jeff Vancouver 990413.1100 EST]

[From Rick Marken (990412.0930)]

Why would the experimentalist be interested in "discovering" the
relationship between the vertical movement of 1 and the horizontal movement
of 4? I think there are different answers to that question. Each
potential answer must be tested for each experimenter (and perhaps for each
experiment) using the test. Otherwise, you run the risk (assuming you
care) of appearing a hypocrite.

Sincerely,

Jeff

[From Rick Marken (990413.0850)]

Jeff Vancouver (990413.1100 EST) --

Why would the experimentalist be interested in "discovering"
the relationship between the vertical movement of 1 and the
horizontal movement of 4?

Why would a behaviorist be interested in discovering the
relationship between response rate and reinforcement schedule?
Why would a cognitivist be interested in discovering the
relationship between target/background similarity and search
rate? Why would a social psychologist be interested in discovering
the relationship between number of bystanders and the proportion
who help in an emergency? Indeed, why would an experimentalist
be interested in discovering the relationship between any variables?
The answer (in psychology) is: because they have either noticed
such a relationship informally and want to test to see if it
actually exists or their vague verbal theories of behavior suggest
that such a relationship should exist.

What's important is that the (psychological) experimentalist
is _not_ doing research aimed at discovering what variable(s)
the participant is _controlling_.

I think there are different answers to that question. Each
potential answer must be tested for each experimenter (and
perhaps for each experiment) using the test. Otherwise, you
run the risk (assuming you care) of appearing a hypocrite.

I don't understand this. Are you saying that a person who
does research aimed at the discovery of controlled variables
(Testing) rather than research aimed at the discovery of IV-DV
relationships (Experimenting) would appear hypocritical?

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Gregory (990413.1207 EDT)]

Rick Marken (990413.0850)

What's important is that the (psychological) experimentalist
is _not_ doing research aimed at discovering what variable(s)
the participant is _controlling_.

The problem, as far as I can tell, is with the underlying model. I find
in education _everyone_, no matter how different they think their
approach might be, has a "cognitive processing" and therefore
control-of-output model. This approach leads to an extremely mushy
description and "theory" of learning. It is difficult, if not
impossible, to have a useful discussion with these folks, because their
models are often unstated and never testable. The greatest favor
educators could do themselves would be to try on a control-of-input
model. This will never happen because most of them don't know they even
_have_ a model.

Bruce Gregory

[from Jeff Vancouver 990413.1355 est]

[From Rick Marken (990413.0850)]

Jeff Vancouver (990413.1100 EST) --

Why would the experimentalist be interested in "discovering"
the relationship between the vertical movement of 1 and the
horizontal movement of 4?

Why would a behaviorist be interested in discovering the
relationship between response rate and reinforcement schedule?
Why would a cognitivist be interested in discovering the
relationship between target/background similarity and search
rate? Why would a social psychologist be interested in discovering
the relationship between number of bystanders and the proportion
who help in an emergency? Indeed, why would an experimentalist
be interested in discovering the relationship between any variables?
The answer (in psychology) is: because they have either noticed
such a relationship informally and want to test to see if it
actually exists or their vague verbal theories of behavior suggest
that such a relationship should exist.

The experimentalist is generally not interested in confirming formally a
relationship observed informally (correlations often are, though). Latane
et al. were interested in determining if their explanation (which you
usually call "verbal theory") is correct. Their tests are incomplete.
They know that. For many, they have little or nothing to do with
controlling variables.

What's important is that the (psychological) experimentalist
is _not_ doing research aimed at discovering what variable(s)
the participant is _controlling_.

All experimentalists?

I think there are different answers to that question. Each
potential answer must be tested for each experimenter (and
perhaps for each experiment) using the test. Otherwise, you
run the risk (assuming you care) of appearing a hypocrite.

I don't understand this. Are you saying that a person who
does research aimed at the discovery of controlled variables
(Testing) rather than research aimed at the discovery of IV-DV
relationships (Experimenting) would appear hypocritical?

My point, dear Rick, is that you are only guessing at what they (or any one
of them) are doing. Unless you conduct the test, you cannot say that they
are doing x, or not doing y. That is the hypocrisy.

No doubt most are not controlling to find controlled variables. But I
still think that more are doing so than you think, but that they are doing
so on variables that are difficult to model or operationalize
quantitatively. Suppose you were testing for maintaining a sense of
self-worth, how would you do that? (You need not answer unless you have an
operational, ethically doable study).

Sincerely,

Jeff

[From Rick Marken (990413.1320)]

Jeff Vancouver (990413.1355 est) --

Latane et al. were interested in determining if their
explanation (which you usually call "verbal theory") is
correct. Their tests are incomplete. They know that.

I have no idea whether their tests were "complete" or
"incomplete"; all I know is that their tests had nothing to
do with trying to discover any of the variables a person
controls.

For many, they have little or nothing to do with controlling
variables.

Yes. My point exactly. Conventional tests have nothing to do
with determining controlled variables. This is because
conventional researchers wouldn't know a controlled variable
if it came up and shocked them on the paws.

Me:

What's important is that the (psychological) experimentalist
is _not_ doing research aimed at discovering what variable(s)
the participant is _controlling_.

Jeff:

All experimentalists?

Yes (with the exception of Bill P. and myself, of course;-)

My point, dear Rick, is that you are only guessing at what
they (or any one of them) are doing.

Not guessing, really. I am watching what they do (control). I
have never seen a published paper in the social/behavioral/life
science literature that describes a systematic attempt to
determine a controlled variable. And I have never seen anyone
defend (control for) this (Testing) approach to doing research.

Unless you conduct the test, you cannot say that they are doing
x, or not doing y. That is the hypocrisy.

But I have been Testing. My efforts to get psychologists to
stop controlling for doing IV-DV research and start testing
for controlled variables have been treated as a disturbance and
quite effectively resisted. Every psychologist I have met so
far has protected his/her perception of how to do psychological
research from _all_ my efforts to get them to do it differently.

No doubt most are not controlling to find controlled variables.

So far, it looks like "most" = "all".

But I still think that more are doing so than you think

That certainly may be; but I haven't met them. They don't
publish. Where are they?

but that they are doing so on variables that are difficult to
model or operationalize quantitatively.

What does this have to do with it? If they're testing for
controlled variables (whether those variables are easily
quantifiable or not) you could tell. Believe me. It's easy!
All you have to do is _look_ to see if they are doing what
you do when you do the Coin Game; and see if they resist your
efforts to get them to do their research differently.

Suppose you were testing for maintaining a sense of self-worth,
how would you do that?

The same way I would test to see what a person is controlling
in the Coin Game. "Self worth" is just a noise that describes
one possible dimension of variation of a perceptual variable
(like the "shape" of the coins). Once you know what perceptual
variable "self-worth" refers to then you apply disturbances
and see if that perception is protected from those disturbances
(and not protected from disturbances that should have no effect).
If (as is likely, since "self worth" is your first guess about
the perceptual variable under control) P does not protect the
variable from all disturbances, then you reject the hypothesis
that P is controlling whatever you were calling "self worth" and
you switch to a new hypothesis (call it what you will but, whatever
you call it, it would have to be a _new_ perceptual variable -- like
"number of coins on the table" instead of "shape of coins" in the
Coin Game -- and one that is consistent with all your previous
Testing).
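
That procedure can be sketched in rough code (the helper functions and
the stability criterion here are assumptions for illustration, not a
fixed recipe):

def appears_controlled(candidate_cv, observe, disturb, trials=20, threshold=0.2):
    # candidate_cv: a function of the observable situation ("square",
    #   "parallelogram", "self-worth" however operationalized, ...)
    # observe():  returns the current observable situation (hypothetical helper)
    # disturb():  applies a disturbance and returns the value the candidate
    #   would take if P did NOT resist it (hypothetical helper)
    reference_value = candidate_cv(observe())
    expected_shift = observed_shift = 0.0
    for _ in range(trials):
        unresisted_value = disturb()
        expected_shift += abs(unresisted_value - reference_value)
        observed_shift += abs(candidate_cv(observe()) - reference_value)
    # Controlled: the candidate moved far less than it "should" have moved.
    return observed_shift < threshold * expected_shift

# A candidate that fails is rejected; the next hypothesis must be a new
# perceptual variable that is consistent with all the earlier results.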

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Rick Marken (990413.1345)]

Me:

What's important is that the (psychological) experimentalist
is _not_ doing research aimed at discovering what variable(s)
the participant is _controlling_.

Bruce Gregory (990413.1207 EDT) --

The problem, as far as I can tell, is with the underlying model.
I find in education _everyone_, no matter how different they
think their approach might be, has a "cognitive processing" and
therefore control-of-output model.

Exactly correct!! All the different models in education -- like
all the different models in psychology -- are really just _one_
model: cause-effect. There are no controlled variables in
cause-effect models so one doesn't go looking for these variables
in one's research; one goes looking for the variables that cause
(IVs) behavioral effects (DVs).

The PCT model shows what a controlled variable is and PCT
research shows that controlled variables are _real_ -- as real
as independent variables and dependent variables. PCT shows that
the behavior of living systems is very likely organized around
the control of perceptual input variables (controlled variables).
This means that Testing should always be the first method of
choice when studying the behavior of living systems. If Testing
reveals no evidence of a controlled variable then it's safe to
go ahead and study the behavior as an open loop process, using
conventional experimental methods.

Best

Rick


---
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bill Powers (990413.1521 MDT)]

Jeff Vancouver (990413.1355 est) --

No doubt most are not controlling to find controlled variables. But I
still think that more are doing so than you think, but that they are doing
so on variables that are difficult to model or operationalize
quantitatively. Suppose you were testing for maintaining a sense of
self-worth, how would you do that? (You need not answer unless you have an
operational, ethically doable study).

Well, I don't need to meet that criterion. Are you suggesting, Jeff, that
if someone studies variables that are difficult to model or
"operationalize" (?), their findings are somehow given more worth than if
they were easy to obtain, or were obtained at greater cost in time and
resources? If someone is testing for a sense of self-worth, but uses poor
models and bad data, is that supposed to give us more confidence that there
really is such a thing as a sense of self-worth? I don't think that
difficulties in obtaining them confer any increased value on purported
facts; quite the opposite. If a person insists on studying poorly-defined
phenomena with inadequate tools, it is nobody else's fault that the results
are unreliable and unconvincing.

Best,

Bill P.

[from Jeff Vancouver 990414.0850 EST]

[From Bill Powers (990413.1521 MDT)]

Jeff Vancouver (990413.1355 est) --

No doubt most are not controlling to find controlled variables. But I
still think that more are doing so than you think, but that they are doing
so on variables that are difficult to model or operationalize
quantitatively. Suppose you were testing for maintaining a sense of
self-worth, how would you do that? (You need not answer unless you have an
operational, ethically doable study).

Well, I don't need to meet that criterion. Are you suggesting, Jeff, that
if someone studies variables that are difficult to model or
"operationalize" (?), their findings are somehow given more worth than if
they were easy to obtain, or were obtained at greater cost in time and
resources? If someone is testing for a sense of self-worth, but uses poor
models and bad data, is that supposed to give us more confidence that there
really is such a thing as a sense of self-worth? I don't think that
difficulties in obtaining them confer any increased value on purported
facts; quite the opposite. If a person insists on studying poorly-defined
phenomena with inadequate tools, it is nobody else's fault that the results
are unreliable and unconvincing.

That would be an odd interpretation of what I had in mind. My point, and I
have made it before, is that many of the hypothesized CV's that many
psychologists are interested in are very difficult to test. They are
either intrinsic, hence they are operating on the internal environment, or
they are higher-order, hence they are constructed from the output of many
input functions which may include a heavy memory influence. Both
conditions make applying the test difficult because the link between the
perception and the CV (what some external observer can monitor) is not
likely isomorphic. In other words, there are likely an array of qi's, of
which an external observer may have the ability to measure only some of them.

What I am suggesting is that one can take one of two positions about these
types of ECU's. One, don't study them, and two, live with the inability to
study them well but study them nonetheless. I am not suggesting that the
confidence one can have in interpretations of tests of the ECU's of the
sort I am talking about equals the confidence one can have in the
interpretations of tests applied to ECU's with qi's observable to all. I
am suggesting
that you are applying an unfair standard.

Also, what I find particularly troubling about the anti-psychologist
sentiment is that many psychologists are dealing with issues that relate,
but are not directly about control. For example, I think many
psychologists are studying the nature of memory (how does the brain store
information; how it is accessed, etc.). I think others are studying the
qi's that form perceptions. I think that many would say: yes, it is a
closed loop and we control; now I want to study this segment of a
particular loop. I have made this argument before, and without data I am
whistling in the wind (the point of the original message in this thread is
that Rick does not have good data either despite his belief that he does).
So let's move on.

I have a more important question for you: how do I make that last Vensim
model I sent to you insensitive to time units? Even more important, how do
I close all the loops I have in the model (I am going to take a crack at
that latter question today)?
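
(A general sketch, in Python rather than Vensim, of what "insensitive to
time units" usually comes down to: every state update is written as a
rate multiplied by dt, so the trajectory over a fixed span of model time
barely changes when the time step changes. The numbers below are
arbitrary assumptions.)

def simulate(dt, t_end=10.0, gain=5.0, reference=1.0):
    perception, output, t = 0.0, 0.0, 0.0
    while t < t_end:
        error = reference - perception
        output += gain * error * dt    # rate of change times dt
        perception = output            # trivial environment link
        t += dt
    return perception

for dt in (0.1, 0.01, 0.001):
    print("dt =", dt, " final perception =", round(simulate(dt), 4))

# All three runs end near 1.0; an update written without the "* dt"
# would give a different answer every time the time step changed.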

Sincerely,

Jeff


[From Bill Powers (990414.0910 MDT)]

Jeff Vancouver (990414.0850 EST) --

My point, and I
have made it before, is that many of the hypothesized CV's that many
psychologists are interested in are very difficult to test. They are
either intrinsic, hence they are operating on the internal environment, or
they are higher-order, hence they are constructed from the output of many
input functions which may include a heavy memory influence. Both
conditions make applying the test difficult because the link between the
perception and the CV (what some external observer can monitor) is not
likely isomorphic. In other words, there are likely an array of qi's, of
which an external observer may have the ability to measure only some of them.

It seems to me that you're assuming the existence of the very "hypothesized
CVs" in which you say psychologists are interested. But how can they be
interested in something the existence of which isn't even proven? What
you're talking about is like the concept of "intelligence," which is so
completely a figment of the imagination that psychologists can define it
only as a score on a certain type of test. There may be a very good reason
that many of these "difficult to test" variables remain obscure: they don't
actually exist, except in the psychologists' imagination.

What I am suggesting is that one can take one of two positions about these
types of ECU's. One, don't study them, and two, live with the inability to
study them well but study them nonetheless.

The former is by far the less likely to generate spurious knowledge and
superstition. Archeologists have the right approach. If the dig reveals
problems for which we lack adequate methodologies, fill up the hole
(carefully) and wait for knowledge and skill to improve.

I am suggesting
that you are applying an unfair standard.

I am applying no standard that I don't apply to myself, or that most physical
scientists don't apply to each other. What is unfair about that? I suspect
that I know what you're going to say: that psychologists deal with a more
difficult subject than physicists or chemists are faced with. I would
HEARTILY dispute that claim -- not the claim the living systems are
complex, but with the claim that psychologists have the tools for dealing
with them. Physical scientists regularly deal with phenomena hundreds of
times as complex as anything psychologists can handle. The claim that
living systems are more complex than nonliving ones is not borne out by the
methods applied to the living systems by most psychologists, which are
extremely elementary and would be inadequate for an understanding of even
simple nonliving systems. There is nothing in psychological methods that
could explain the behavior of even a spinning top, or a control system. Do
psychologists know how to analyze any system as complex as an automobile
manufacturing plant, or a power plant? By claiming that their subject is
far more complex than the subjects studied in the physical sciences or
engineering, psychologists by implication lay claim to a level of expertise
they do not possess. If living systems are as much more complex than
nonliving ones as some people claim, then psychologists ought to trade
places with physicists, chemists, and engineers, whose methods are far
better suited to the exploration of complex systems (there are areas where
that is exactly what is happening). Of course that would leave
psychologists with nothing to do, as their methods are too simple even for
understanding "simple" physical systems.

Best,

Bill P.

[From Rick Marken (990414.0850)]

Jeff Vancouver (990414.0850 EST) --

My point, and I have made it before, is that many of the
hypothesized CV's that many psychologists are interested in
are very difficult to test.

So what?

Both conditions make applying the test difficult because the link
between the perception and the CV (what some external observer can
monitor) is not likely isomorphic.

The CV is _always_ the observer's _perception_ of the perceptual
aspect of the environment that the subject controls. CV's must be
"isomorphic" to controlled perceptions (p's). The observer and
participant may describe the CV and p differently, even for "simple"
CV's (p's) like the pattern made by the coins in the Coin Game.
This is the "N" vs "Z" phenomenon. Can't you see that whatever
"self-worth" is (or whatever you want to call one of these
complex perceptions you think people control) it is a perception
that both you (observer) and the subject can have? If you can't
perceive (or measure in some way) whatever you think "self-worth"
is then you can't determine whether or not it's a controlled
variable.

In other words, there are likely an array of qi's, of which an
external observer may have the ability to measure only some of
them.

If you can't perceive (measure) a qi then you can't tell whether
it's controlled or not. Such variables simply don't exist as
far as you, PCT or anyone is concerned.

What I am suggesting is that one can take one of two positions
about these types of ECU's. One, don't study them, and two, live
with the inability to study them well but study them nonetheless.

There is actually one other position: three, study them properly
using the Test. This is the position I take.

You seem to be saying that conventional psychological research on
"self-worth" is excusably lousy because "self-worth" is so hard
to measure. I am saying that conventional psychological research
on "self-worth" is inexcusably lousy for the same reason all
conventional psychological research is inexcusably lousy; the
research doesn't study behavior using the process I call "Testing";
it doesn't involve the Test for controlled variables.

You can't hide your reluctance to Test for controlled variables by
saying that the controlled variables you want to study are just
tooo complex. At least, not from me;-)

Also, what I find particularly troubling about the anti-psychologist
sentiment is that many psychologists are dealing with issues that
relate, but are not directly about control.

No one is "anti-psychologist"; we're anti the "cause-effect"
assumption made by all psychologists. This assumption is so
deep that psychologists don't even know that it _is_ an assumption;
and a _testable_ one at that.

For example, I think many psychologists are studying the nature
of memory

What makes you think memory has nothing to do with control? I
think it has everything to do with control. What are you doing
when you try to remember a phone number? Seems to me like you
are trying to produce a pre-selected _imagined_ perception, ie.
control.

(the point of the original message in this thread is that Rick
does not have good data either despite his belief that he does).

What are you talking about? What data are you talking about?
What's wrong with it?

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[from Jeff Vancouver 990414.1335 EST]

[From Rick Marken (990414.0850)]

The CV is _always_ the observer's _perception_ of the perceptual
aspect of the environment that the subject controls.

So you cannot hypothesize that the system controls for, say, the total
error in the system, because you could not possibly measure total error.

Or, suppose I hypothesized that people control for not making a fool of
themselves. How do I test this hypothesis? How do I know, for instance,
that singing badly in public does not, for that person, feed into a
perception of foolishness? I cannot easily get him/her to sing badly if
he/she is a good singer. And even if I could manipulate singing quality,
and they do not sing when it comes out badly, how do I know that it is
making a fool of oneself that I am actually manipulating? Maybe it is the
psychologist in me, but I am not really very interested in whether person X
controls for not singing badly; I am very interested in whether person X,
and even more whether most people, control for not making a fool of
themselves.

CV's must be
"isomorphic" to controlled perceptions (p's). The observer and
participant may describe the CV and p differently, even for "simple"
CV's (p's) like the pattern made by the coins in the Coin Game.
This is the "N" vs "Z" phenomenon. Can't you see that whatever
"self-worth" is (or whatever you want to call one of these
complex perceptions you think people control) it is a perception
that both you (observer) and the subject can have? If you can't
perceive (or measure in some way) whatever you think "self-worth"
is then you can't determine whether or not it's a controlled
variable.

In other words, there are likely an array of qi's, of which an
external observer may have the ability to measure only some of
them.

If you can't perceive (measure) a qi then you can't tell whether
it's controlled or not. Such variables simply don't exist as
far as you, PCT or anyone is concerned.

But as far as HPCT is concerned, it exists? To say it does not exist
because I cannot at this time measure it is not a reasonable position for
a scientist, I believe.

What I am suggesting is that one can take one of two positions
about these types of ECU's. One, don't study them, and two, live
with the inability to study them well but study them nonetheless.

There is actually one other position: three, study them properly
using the Test. This is the position I take.

You just stated above you take position 1.

You seem to be saying that conventional psychological research on
"self-worth" is excusably lousy because "self-worth" is so hard
to measure. I am saying that conventional psychological research
on "self-worth" is inexcusably lousy for the same reason all
conventional psychological research is inexcusably lousy; the
research doesn't study behavior using the process I call "Testing";
it doesn't involve the Test for controlled variables.

That is a tautological argument.

You can't hide your reluctance to Test for controlled variables by
saying that the controlled variables you want to study are just
tooo complex. At least, not from me;-)

I am testing for controlled variables.

Also, what I find particularly troubling about the anti-psychologist
sentiment is that many psychologists are dealing with issues that
relate, but are not directly about control.

No one is "anti-psychologist"; we're anti the "cause-effect"
assumption made by all psychologists. This assumption is so
deep that psychologists don't even know that it _is_ an assumption;
and a _testable_ one at that.

For example, I think many psychologists are studying the nature
of memory

What makes you think memory has nothing to do with control? I
think it has everything to do with control. What are you doing
when you try to remember a phone number? Seems to me like you
are trying to produce a pre-selected _imagined_ perception, ie.
control.

My first statement above says "relate." Relate does not equal "nothing to do."

Your last statement above seems to contradict your earlier statements. How
can you measure the CV from which the imagined perception arises? It is an
excellent example of what I am talking about.

(the point of the original message in this thread is that Rick
does not have good data either despite his belief that he does).

What are you talking about? What data are you talking about?
What's wrong with it?

You claim to have tested conventional psychologists. Given that you cannot
possibly have tested all of them (I do think you have tested a few of
them), I do not know how you can make blanket statements. Further, given
that I am one, who you have presumably tested, and who must have failed,
then you must call what I have done with SimNurse to not be a test of the
controlled variable (which in earlier posts you have). Martin Taylor and
Phil Runkel seem to be other exceptions. If I knew who all the
psychologists on the list were, I could probably name some more.

Oh never mind, if they understand control, then they are, by definition,
not conventional psychologists. This is merely a set problem.

I don't want to talk about this any more. But since both of us control for
getting the last word in, let this paragraph make clear to everyone I am
signing off the argument before Rick's last word.

Sincerely,

Jeff

[from Jeff Vancouver 990414.1415 EST]

[From Bill Powers (990414.0910 MDT)]

It seems to me that you're assuming the existence of the very "hypothesized
CVs" in which you say psychologists are interested. But how can they be
interested in something the existence of which isn't even proven?

What else would they be interested in? If it is proven, science need no
longer be applied to study it. I thought the first step in the test was
"hypothesize a variable."

What I am suggesting is that one can take one of two positions about these
types of ECU's. One, don't study them, and two, live with the inability to
study them well but study them nonetheless.

The former is by far the less likely to generate spurious knowledge and
superstition. Archeologists have the right approach. If the dig reveals
problems for which we lack adequate methodologies, fill up the hole
(carefully) and wait for knowledge and skill to improve.

So I should ignore everything that you have written which warns of its
hypothetical nature. That's a lot of good thinking to ignore (and bury).

I am suggesting
that you are applying an unfair standard.

I am applying no standard that I don't apply to myself, or that most physical
scientists don't apply to each other. What is unfair about that? I suspect
that I know what you're going to say: that psychologists deal with a more
difficult subject than physicists or chemists are faced with. I would
HEARTILY dispute that claim -- not the claim that living systems are
complex, but the claim that psychologists have the tools for dealing
with them.

Psychologists' tools are crude. And it is not the complexity so much
(although that is part of it), it is the measurement problem (which
physicists have as well).

Physical scientists regularly deal with phenomena hundreds of
times as complex as anything psychologists can handle. The claim that
living systems are more complex than nonliving ones is not borne out by the
methods applied to the living systems by most psychologists, which are
extremely elementary and would be inadequate for an understanding of even
simple nonliving systems. There is nothing in psychological methods that
could explain the behavior of even a spinning top, or a control system. Do
psychologists know how to analyze any system as complex as an automobile
manufacturing plant, or a power plant? By claiming that their subject is
far more complex than the subjects studied in the physical sciences or
engineering, psychologists by implication lay claim to a level of expertise
they do not possess. If living systems are as much more complex than
nonliving ones as some people claim, then psychologists ought to trade
places with physicists, chemists, and engineers, whose methods are far
better suited to the exploration of complex systems (there are areas where
that is exactly what is happening). Of course that would leave
psychologists with nothing to do, as their methods are too simple even for
understanding "simple" physical systems.

I have little disagreement with this (except maybe the 100 times as
complex). So what is light, a particle or a wave? Anyway, by unfair
standards I mean that it would be unfair to say to a particle physicist
that if she cannot measure it with a ruler, the gluon does not exist. Oh
yeah, what is gravity?

Again, as with Rick, these are useless discussions when we have so much in
common. Let's drop it.

Sincerely,

Jeff

[From Tim Carey (990415.0555)]

[from Jeff Vancouver 990414.0850 EST]

Also, what I find particularly troubling about the anti-psychologist
sentiment is that many psychologists are dealing with issues that relate,
but are not directly about control. For example, I think many
psychologists are studying the nature of memory (how does the brain store
information; how it is accessed, etc.).

Jeff I wish I was in contact with the kinds of psychologists you are talking
about. I am in the second year of a PhD in clinical psychology and I can
assure you that none of the information I've been introduced to so far has
had anything to do with control of perceptual input. In fact, just yesterday
I sat through a neuropsych lecture. For about half of the lecture we dealt
with "Brain structures controlling memory" and we looked at various "models"
concerning the "Neural basis of working memory" and the "Neural basis of
long term memory". I learned that working memory functioned through the
"central executive" which was comprised of "cognitive problem solving" and
"social cognition affect" these then divided into the "visual sketchpad"
and the "phonological system". I could go on but you've probably got the
idea.

Not only is there no notion of controlled perceptual input in my patch of
the world, there is also no notion of the kind of mathematical modelling
that Bill talks about. I think if someone ever actually tried to build a
working model of "memory" they would quickly find that
terms like "visual sketchpad" are meaningless.

I guess at this embryonic stage of my career I consider _how_ psychologists
are currently doing research to be more problematic than _what_ they are
doing research about. At the moment in psychology people can throw together
some boxes and arrows and call their ideas a model. The thing I love about
PCT is that, for me, it introduced a new standard of what modelling was all
about.

I think if more psychologists understood _how_ mathematical modelling is
done (and I'd like to be one of them one day ;-)) then they would quickly
discover that _what_ they are currently proposing is mostly nonsense.

Cheers,

Tim

Tim Carey wrote:

[From Tim Carey (990415.0555)]


Tim
If you have the time, have a look at my page on intelligent systems. They
describe a theory that has been partially tested (successfully) in
artificial intelligent systems.
http://www.anice.net.ar/intsyst/

They are "psychology" insofar as they describe some fundamental
processes of the mind. And these processes work.
What I still need is to expand and add perceptual control theory.
All I use so far of PCT is the "objective" the system tries to reach.
Walt

[from Jeff Vancouver (990414.1705 EST)]

[From Tim Carey (990415.0555)]

Jeff I wish I was in contact with the kinds of psychologists you are talking
about. I am in the second year of a PhD in clinical psychology and I can
assure you that none of the information I've been introduced to so far has
had anything to do with control of perceptual input. In fact, just yesterday
I sat through a neuropsych lecture. For about half of the lecture we dealt
with "Brain structures controlling memory" and we looked at various "models"
concerning the "Neural basis of working memory" and the "Neural basis of
long term memory". I learned that working memory functioned through the
"central executive" which was comprised of "cognitive problem solving" and
"social cognition affect" these then divided into the "visual sketchpad"
and the "phonological system". I could go on but you've probably got the
idea.

Did you question? The presentation of material in classrooms (and
textbooks) is watered down. Unfortunately, those are the sources of
information about the state of much of the field for all but those
conducting research in those areas. Hence, it appears much less
sophisticated than it may be (on the other hand, you may get into the
research and find it as hopelessly inept as it first appears). Particularly
when teaching clinicians and other applied psychologists (like my field,
I/O), the presentation (and hence understanding) of the deeper processes is
missing. What I have found, though, is that you will find examples of
closed-loop thinking if you look deep enough, particularly in cognitive
psychology. Sterman, Kleinmuntz, D. Ford, J. G. Miller, and several others
come to mind. Now many of these people are ignored by others on this list
because they carried one or more "errors" from the TOTE model or Wiener
with them. They probably did and they probably are errors (despite my
scare quotes). But sometimes, like when Bill interacted with G. P.
Richardson, they found a sympathetic ear. Indeed, Rick's "Dance" was
published in Psychological Methods -- Psychology's flagship methods
journal. A fact he cannot seem to explain.

There is no doubt that most of what is out there is crap. Every graduate
student in every program I have known has wondered how so much of it is
published, whether they have had the luck of being exposed to PCT or not.
Do some research of your own. That will give you some perspective.

Sincerely,

Jeff

1. If a control system is controlling an environmental variable well, the
variable will not appear to be varying. It will just hover around some
desired value. 'Experimenting' classical psychologists will mistakenly
assume that since the value is close to constant, it is not a variable.
But the subject will exhibit "behaviors" to maintain that value. The
'experimenter' will not see the connection between the apparent constant
and the "behaviors" being used to maintain the value of the variable
constant.

The 'experimenter' will note that some other variables are changing. The
'experimenter' will be looking for correlations between the "behavior"
and the varying values. According to PCT, the varying values are of some
things or some variables that the subject is simply not attempting to
control, or may be beyond the subject's control. Any correlation between
the "behavior" of the subject and the varying value is coincidental, at
best.
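
A tiny numerical sketch of this asymmetry (gain, slowing factor and
disturbance are arbitrary assumptions; any simple control loop behaves
the same way):

import random, statistics

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

d = a = 0.0
d_hist, a_hist, cv_hist = [], [], []
for _ in range(5000):
    d += random.gauss(0.0, 0.05)          # slowly drifting disturbance
    cv = d + a                            # controlled environmental variable
    a += 0.01 * (50.0 * (0.0 - cv) - a)   # output: high gain, slowed
    d_hist.append(d); a_hist.append(a); cv_hist.append(cv)

print("sd(disturbance) =", round(statistics.pstdev(d_hist), 3))
print("sd(CV)          =", round(statistics.pstdev(cv_hist), 3))  # tiny: looks constant
print("corr(d, action) =", round(corr(d_hist, a_hist), 3))        # close to -1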

2. In looking for controlled variables described by terms like "self
worth", it seems to me that what a 'tester' has to do is decide if the
description is of an observable "behavior". If what classical psychology
is exploring is indeed a "behavior", what they are looking at is the
output of a living control system. The 'experimenter' will try and
analyse this output, which the control system is varying to achieve
constant ends. Sometimes they'll get lucky, sometimes they won't. The
perception of humans as very complex and difficult to understand will be
confirmed. ;-) Meanwhile, the 'tester' will try to figure out, and
test for, what goals regarding which variable(s) might lead to these
"behaviors" being used by a living control system to reduce some error
signals.

Sincerely,
Hank Folson

704 ELVIRA AVE. REDONDO BEACH CA 90277
Phone: 310-540-1552 Fax: 310-316-8202 Web Site: www.henryjames.com

[From Tim Carey (990415.1450)]

[from Jeff Vancouver (990414.1705 EST)]

Did you question? The presentation of material in classrooms (and
textbooks) is watered down.

If you have to water down material to PhD students who are training to be
researchers as well as clinicians, whom do you ever present the
unwatered-down version to?

I've questioned heaps in my lectures and I receive answers like "Yes we know
that in reality this isn't the way it works but we accept that for the sake
of the model it happens that way"; and "Many of the aspects of this model
are unspecified as yet"

missing. What I have found, though, is that you will find examples of
closed-loop thinking if you look deep enough, particularly in cognitive
psychology.

Yep, I agree. Much of it looks superficially like closed loop thinking. I
guess I'm just a bit more narrow minded than you.

Cheers,

Tim