Are there any FACTUAL problems?

[From Rick Marken (951107.1340)]

Bruce Abbott (951107.1035 EST) --

It's not so much that they [psychological researchers] have lacked rigor,
but that they have been using the wrong approach.

Correct!

they [psychological researchers] turn on the TV, and they attempt to learn
something about it by fiddling with the knobs and testing various hypotheses
about its internal organization.

The problem, of course, is that people are not like TVs. The input to a
person is controlled; the input to a TV is not. You can't fiddle with the
knobs (inputs) on people because people control their own "knobs".

I maintain that there is nothing fundamentally unscientific or nonrigorous
about this approach

There is nothing necessarily unscientific or nonrigorous about this approach
- - except that it ignores the FACT that the experimenter cannot control the
input to a living system as he can the input to a non-living system (like a
TV); the input to a living system is determined by BOTH the experimenter AND
the system itself. This is fact, not theory. Because conventional
"scientific" psychology ignores this fact about living systems and continues
to study these systems as though they were TV sets, conventional scientific
psychology can be considered unscientific.

This kind of research can lead to progress of a certain type. To the extent
that the empirically-established relationships hold, one can make practical
use of them. Is the picture too dark? Try turning this little knob
clockwise.

This is true for a TV, but not for a living system. Empirically established
relationships between inputs and outputs mean something quite different than
what you think they mean if the system under study is a control system (like
a person) rather than a lineal causal system (like a TV).

it is the kind of science one does when one does not yet have the proper
foundation on which to begin building the underlying mechanism. It is top-
down rather than bottom-up research.

It is the kind of science one does when one doesn't know the FACTS of
life. It is the kind of science one does when one does not have a proper
FACTUAL, not theoretical, foundation.

Your claim is that conventional scientific psychology has problems because it
doesn't have a solid theoretical foundation; it doesn't have a good idea of
the nature of the "circuit diagram" of the TV, so to speak. In fact, the
problem is much simpler, and much more fundamental, than that.

The problem is that the relationship between living systems and their
environment is like this:

s -->p---S-->r
     |       |       (1)
     <--------

and not like this:

s -->p---S-->r (2)

In these diagrams, s is the environment (the "stimuli" or "independent
variables"), p is the representation of the environment at the sensory
surface of the system (S) and r is the output of the system (the "responses"
or "dependent variables"). Both of there diagrams represent observable,
FACTUAL characteristics of systems; they are NOT theories.

Diagram (1) is a factual description of the relationship between a living
system and its environment (s). The input to this system (p) always depends
SIMULTANEOUSLY and continuously on the state of the environment (s) and on
the state of its own output (r). The diagram shows that what is currently
happening at your own sensory surface (p) depends on what is going on in the
environment (s) AND on what you happen to be doing right now (r); what you
are seeing depends on what's in your office (s) and on where your eyes and
head happen to be pointed (r); what you feel on the tips of your fingers
depends on the kind of keyboard you're using (s) and on what your fingers are
doing (r).

Diagram (2) is a factual description of the relationship between a non-
living system (like a TV) and its environment (s). The input to this system
(p) depends ONLY on the state of the environment (s). So an outsider can
manipulate what is happening at the sensory surface of the system by
manipulating s. This is what is done when you turn the knob on the TV; the
torque of the hand influences an input variable (position of the knob) that
determines (via the system) a response (such as the brightness of the
picture); the response of the system has no effect on the state of the input
(knob angle).
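
A minimal numerical sketch of the difference (Python; the gains, time step,
and the simple integrating output rule are illustrative assumptions, not part
of the diagrams themselves):

    # Sketch: how the input p is determined in an open-loop system (diagram 2)
    # versus a closed-loop system (diagram 1).  Parameter values are arbitrary.

    def open_loop(s_values, gain=2.0):
        """Diagram (2): p depends only on s; r has no effect back on p."""
        history = []
        for s in s_values:
            p = s                    # input is set entirely by the environment
            r = gain * p             # output is just a function of the input
            history.append((s, p, r))
        return history

    def closed_loop(s_values, gain=5.0, reference=0.0, dt=0.1):
        """Diagram (1): p depends simultaneously on s and on the system's own r."""
        history = []
        r = 0.0
        for s in s_values:
            p = s + r                            # input depends on s AND on r
            r += gain * (reference - p) * dt     # output acts to keep p near reference
            history.append((s, p, r))
        return history

    if __name__ == "__main__":
        s_series = [0.0] * 20 + [5.0] * 80       # the environment (s) steps from 0 to 5
        print("open loop, final (s, p, r):  ", open_loop(s_series)[-1])
        print("closed loop, final (s, p, r):", closed_loop(s_series)[-1])
        # Open loop: p ends at 5.0 -- whoever sets s sets p.
        # Closed loop: p ends near 0.0 -- the system's own r (about -5) cancels
        # most of the effect of s on p, so s alone no longer determines p.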

Clearly, conventional psychological research methods assume that you are
dealing with a system like the one in diagram (2). Indeed, this is precisely
how you described the approach of conventional behavioral scientists:

they attempt to learn something about it [the person/TV] by fiddling with
the knobs

This is also the way the conventional approach is described in behavioral
science methods texts (like yours and mine); this method WOULD work (and tell
you something about the internal organization of the system, S) if the
relationship between living systems and their environment were as described
in diagram (2).

The FACT of the matter, however, is that the relationship between living
systems and their environment is as described in diagram (1). PCT shows
what this FACT means for conventional methodology; it means (unfortunately)
that empirical IV-DV (s-r) relationships tell us nothing (zero, nada, zilch)
about the internal organization of the system (S) but plenty about the
environmental laws relating r to p (again, check out Powers' (1978) Psych
Review paper).
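
A small simulation sketch of that result (the environmental feedback gain,
reference level, and the two organism gains below are illustrative
assumptions; only the shape of the outcome matters):

    # Sketch of the point about IV-DV relationships in a closed loop (cf.
    # Powers, 1978): the observed relation between a disturbance (IV) and the
    # system's output (DV) reflects the environmental link between output and
    # input, not the organization of the organism.

    def steady_state_output(disturbance, organism_gain, env_gain=2.0,
                            reference=10.0, dt=0.01, steps=2000):
        """Run a simple integrating control system to steady state."""
        output = 0.0
        for _ in range(steps):
            cv = env_gain * output + disturbance   # environmental law: p = g*r + s
            error = reference - cv
            output += organism_gain * error * dt   # organism: integrate the error
        return output

    if __name__ == "__main__":
        disturbances = [-4.0, -2.0, 0.0, 2.0, 4.0]     # the "IV"
        for organism_gain in (5.0, 50.0):              # two very different organisms
            outputs = [steady_state_output(d, organism_gain) for d in disturbances]
            slope = (outputs[-1] - outputs[0]) / (disturbances[-1] - disturbances[0])
            print(f"organism gain {organism_gain:5.1f}: observed IV-DV slope = {slope:.3f}")
        # Both organisms yield a slope of about -0.5, i.e. -1/env_gain: the
        # IV-DV relationship recovers the environment's feedback function and
        # says nothing about the organism's own gain.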

Whether you agree with my analysis or not, at least you should now follow
why I take the position I do about the scientific merits of psychological
research.

Yes, I think I understood why you take the position you do about the merits
of conventional psychological research. It seems to me, however, that you
have not understood (or wanted to deal with) my position regarding the merits
of conventional psychological research. I have tried to make that position as
clear as possible in this post: conventional behavioral research is aimed at
understanding a system that is in an open loop relationship with its
environment (diagram 2); but living systems are actually (and demonstrably)
in a closed loop relationship with their environment (diagram 1).

I wish you would tell me what you think of this argument because I can't see
how anyone could continue to argue for the value of conventional behavioral
research methods (and the data derived from them) once the implications of
diagram 2 have been understood.

The whole premise of conventional behavioral methods is that variation in the
environment (s), the independent variable, leads to changes in perception
(p) that lead to changes in responses (r) based on these perceptions. But
diagram 1 shows that the environment is not the only cause of variations in
the perceptions of living systems; the responses of the organism itself are
another cause of these variations.

From the point of view of conventional methodology, responses of a closed-loop
system (diagram 1) must be considered a continuous and inevitable
"confounding" variable, rendering this methodology useless for determining
IV-DV relationships. Moreover, if a perception is under control then the
environment has almost NO effect at all on the perception that leads to
changes in responses (r). The foundation of conventional methodology
collapses; time to start testing for controlled variables.

Bruce Abbott (951107.0940EST) --

I, too, have read _The Society of Mind_ (I have my own copy), and found
Minsky's ideas in many ways congenial to PCT. I wonder if Bill P. or
Rick or any of the other "old timers" in CSG have read it: I'd like to hear
their views of it.

The congeniality is probably superficial. I didn't get the impression that
Minsky understood the nature of purposeful behavior; he certainly didn't seem
to get the idea that behavior is the control of perception, an idea that
seems rather central to PCT.

Rick

[From Bruce Abbott (951107.2025 EST)]

Rick Marken (951107.1340) --

Bruce Abbott (951107.1035 EST)

It would be very nice indeed if we could pursue one debate at a time. The
issue before us was, I believe, whether the research and theoretical efforts
of "conventional psychology" constitute "science" or "mush," to use Dag's
contemptuous metaphor.

Now it seems to me that the approach and methods used by a discipline
determine whether it should merit the designation of "science," not the
current state of its theoretical development. Furthermore, I am arguing that the
methods that can be employed at any given time depend on the current state
of knowledge.

they [psychological researchers] turn on the TV, and they attempt to learn
something about it by fiddling with the knobs and testing various hypotheses
about its internal organization.

The problem, of course, is that people are not like TVs. The input to a
person is controlled; the input to a TV is not. You can't fiddle with the
knobs (inputs) on people because people control their own "knobs".

This is true only to an extent. I seem to be doing a pretty good job of
tweaking your knobs, if I do say so myself. And by so doing I've learned
quite a bit about Rick Marken behavior. For example, it was no surprise to
me at all that you used my post to launch a discussion of open-loop versus
closed-loop analysis, a topic that actually has nothing to do with the issue
at hand. But this is another issue, so I will take it up (again) some other
time.

There is nothing necessarily unscientific or nonrigorous about this approach
- - except that it ignores the FACT that the experimenter cannot control the
input to a living system as he can the input to a non-living system (like a
TV); the input to a living system is determined by BOTH the experimenter AND
the system itself. This is fact, not theory. Because conventional
"scientific" psychology ignores this fact about living systems and continues
to study these systems as though they were TV sets, conventional scientific
psychology can be considered unscientific.

This "fact" didn't seem to be much in evidence until fairly recently as I
recall (1948? 1960? 1973?), and for some strange reason, on CSG-L it is
referred to as a THEORY (perceptual control theory, if memory serves). So,
if I may summarize your position, what determines whether a discipline can
be considered scientific or not is whether it has developed the correct
theory or conception. It is not whether it uses "scientific" procedures of
observation and logic, but whether it has guessed right.

This kind of research can lead to progress of a certain type. To the extent
that the empirically-established relationships hold, one can make practical
use of them. Is the picture too dark? Try turning this little knob
clockwise.

This is true for a TV, but not for a living system. Empirically established
relationships between inputs and outputs mean something quite different than
what you think they mean if the system under study is a control system (like
a person) rather than a lineal causal system (like a TV).

It seems to me that when you are trying to understand a system whose
principles are unknown to you, there are only three things you can do. You
can observe it in operation to see what it normally does and under what
conditions, you can poke it to see how it responds, or you can throw up your
hands.

Your claim is that conventional scientific psychology has problems because it
doesn't have a solid theoretical foundation; it doesn't have a good idea of
the nature of the "circuit diagram" of the TV, so to speak. In fact, the
problem is much simpler, and much more fundamental, than that.

O.K., it's NOT that psychology doesn't have a good idea of the nature of the
circuit diagram.

The problem is that the relationship between living systems and their
environment is like this:

[What followed was a description of two different circuit diagrams: open
loop and closed loop.]

Clearly, conventional psychological research methods assume that you are
dealing with a system like the one in diagram (2). Indeed, this is precisely
how you described the approach of conventional behavioral scientists:

So, the real problem is that psychology doesn't have a good idea of the
nature of the circuit diagram.

they attempt to learn something about it [the person/TV] by fiddling with
the knobs

Without a proper conception of how the beast works, there is not much else
you CAN do. Even now, our conception is murky. What does the Test for the
controlled variable tell you about how memories are stored and accessed?
How objects are differentiated and recognized? What else is there to do,
but manipulate variables, observe the result, and try to understand what may
be going on inside the brain to produce these relationships?

The FACT of the matter, however, is that the relationship between living
systems and their environment is as described in diagram (1). PCT shows
what this FACT means for conventional methodology; it means (unfortunately)
that empirical IV-DV (s-r) relationships tell us nothing (zero, nada, zilch)
about the internal organization of the system (S) but plenty about the
environmental laws relating r to p (again, check out Powers' (1978) Psych
Review paper).

Here we go again, making an identity of S-R and IV-DV. IV-DV is a method of
looking for causal relationships (whether direct or indirect); S-R is a
particular lineal cause-effect model of behavior. The Test is IV-DV: diddle
the knob, observe the effect. You are committing a logical blunder, holding
that what is true of the instance is true of the category. In the case of
the Test, you push and observe whether the system pushes back. This is
IV-DV methodology, pure and simple, and your assertion that empirical IV-DV
relationships tell us nothing about the internal organization of the system
is either false or constitutes an assertion that the Test for the controlled
variable cannot work. I maintain that it is false. I suppose this means
you take the other position.

I wish you would tell me what you think of this argument because I can't see
how anyone could continue to argue for the value of conventional behavioral
research methods (and the data derived from them) once the implications of
diagram 2 have been understood.

We've already had that discussion but I don't believe you heard anything I
said. I doubt if you could repeat the argument I have made in that regard.
But at this time I am trying to address another debate, and your efforts to
deflect the conversation back to this one are only serving to confuse the
issue. The issue before us is whether the approach to understanding mind
and behavior employed by psychology and ethology during the past 100 years
qualifies as a truly scientific effort or is, as Dag asserts, only "mush." I
have argued that the approach adopted by psychology was for a long time the
only one available, given the state of ignorance about possible mechanisms
from which to construct a bottom-up approach. Nothing you have said in your
reply addresses that argument.

Regards,

Bruce

[From Tom Bourbon (951108.0001 CST)]

[From Rick Marken (951107.1340)]

Bruce Abbott (951107.1035 EST) -- (Replying to Bill Powers, "At Mary's bequest")

It's not so much that they [psychological researchers] have lacked rigor,
but that they have been using the wrong approach.

Correct!

And Rick continued his reply.

I read Bruce's comments a short time ago and began thinking of a reply,
then I discovered that Rick had said everything I might say.

The major problem with traditional behavioral science is not that
traditional scientists have lacked rigor, or motivation, or intelligence,
or any of a dozen or more other attributes. The major problem is that
traditional scientists have treated living things like objects whose
behavior can be explained with a lineal model of cause and effect. Living
things are not cause-effect devices. They have no knobs to tweak.
Research methods that follow the knob-tweaking metaphor will never disclose
the fact that living things control some of their own experiences, hence,
those methods will never provide data about one of the defining features of
life.

The "knob-tweaks" (discrete values of the independent variables -- IVs) in
traditional methods can only provide disturbances to variables that affect
perceptions that are controlled by the subjects in traditional experiments.
In such a situation, what the traditional experimenter describes as a
"response" (some value of the dependent variable -- DV) is in fact an
action, by the subject, to control the perception that is disturbed by the
supposed IV. An infinite number of environmental operations, and
combinations of operations, might disturb any given controlled perception,
but a scientist who exclusively uses knob-tweaks would not know that. It
is conceivable that a traditional scientist might devote an entire career
to identifying ever more environmental operations that lead to the DVs the
scientist prizes above all others. During such a career, a scientist might
demonstrate rigor, ingenuity, intelligence, perseverance, cleverness, even
brilliance, and more, but if the research program is built on knob-tweaks,
the scientist will not know that, first and foremost, living things are
perceptual controllers.

Tom

[From Rick Marken (951108.0800)]

Me:

You can't fiddle with the knobs (inputs) on people because people
control their own "knobs".

Bruce Abbott (951107.2025 EST) --

This is true only to an extent. I seem to be doing a pretty good job of
tweaking your knobs, if I do say so myself... For example, it was no
surprise to me at all that you used my post to launch a discussion of
open-loop versus closed-loop analysis

You either misunderstand my analogy or you don't understand how control
works. I'll assume the former and explain the analogy.

I was comparing the position of a knob on a TV to the state of an input that
a person controls. You can turn the TV knob at will because the system
produces no action in opposition to your torque "disturbance". You can't
"turn" a variable that a person is controlling because the person acts
to oppose the effects of the "disturbance".

Your post was, indeed, a disturbance to one of my "knobs" (controlled
variables); the brilliant post you saw in reply was not a "tweak" of my knob
(a variation in the state of the variable I am controlling); it was my
action that prevented your "tweak" from having any effect on the variable I
am controlling (a variable that I would call "an accurate representation of
the nature of purposive behavior").

Me:

There is nothing necessarily unscientific or nonrigorous about this
[conventional] approach - - except that it ignores the FACT that the
experimenter cannot control the input to a living system

Bruce:

This "fact" didn't seem to be much in evidence until fairly recently as I
recall (1948? 1960? 1973?)

That's true. It was not a clearly described fact until Bill Powers pointed it
out in the mid 1960s. But now that it is a well understood fact,
psychologists who persist in ignoring it (99.9% of them) can be fairly called
"unscientific".

and for some strange reason, on CSG-L is referred to as a THEORY
(perceptual control theory, if memory serves).

For the last ten plus years I have been arguing (in the first chapter of
"Mind Readings", for example) that Powers' genius was the "discovery" of the
FACT that behavior IS control. The theory of control existed well before
Powers wrote BCP. In BCP, Powers showed the proper way to apply control
theory to behavior by recognizing that behavior must be viewed as controlled
results of action. Indeed, the title of Powers' book (Behavior: The control
of perception) is a statement of fact, not theory. Behavior IS the control of
perception. Control theory explains how this works.

It is unfortunate that when people find out about the existence of perceptual
control theory they think they are discovering a new "theory of behavior".
These people assume, of course, that PCT is an alternative theory of
"behavior" (as that term is conventionally understood -- "behavior" is caused
output). Such people (like you, for example) react with confusion and anger
when it becomes evident that PCT is not about what they think of as
"behavior". I think this problem could be reduced if Powers' work were
publicized as "a demonstration of the fact of purposeful behavior" rather
than as "a new theory of behavior".

So, if I may summarize your position, what determines whether a discipline
can be considered scientific or not is whether it has developed the correct
theory or conception.

I hope you can see now that this isn't even close. What makes a discipline
(like psychology) scientific or not is whether it uses ALL the facts in
evidence as a basis for its judgements regarding the best theory to account
for those facts. Conventional psychology has not included the facts of
control in its consideration of the best theory of "behavior".

[What followed was a description of two different circuit diagrams: open
loop and closed loop.]

They were precisely NOT circuit diagrams. Look at them more closely; they
are diagrams of observable relationships between system variables
(perceptions, responses) and environmental variables. They were not
guesses about circuits inside the system; they were descriptions of
observable relationships between system and environmental variables.

So, the real problem is that psychology doesn't have a good idea of the
nature of the circuit diagram.

Missed the point completely (and, I suspect, purposefully). No, the real
problem is that psychology doesn't have a good idea of the nature of the
observable, FACTUAL relationship between a living organism and its
environment. Psychology (well, psychologists) has a FACTUAL (not a
theoretical) problem. The problem is that ALL input to ALL living organisms
ALWAYS depends on what is happening in the environment AND on what the
organism is doing; what is occurring at the sensory surface of a living
organism is NOT an independent variable; this is a FACT. Organisms actually
PREVENT independent (environmental) variables from having an effect on
certain sensory variables -- the ones that are controlled.

IV-DV is a method of looking for causal relationships (whether direct or
indirect)

Right. And when you are dealing with a closed loop system it makes no sense
to look for causal relationships (you can only see these relationships when
you BREAK the loop). What psychologists should be looking for are _controlled
variables_ -- quite a different thing.

The Test is IV-DV

Yes. A variable is manipulated (IV) and you look for LACK of effect on
another variable (DV). But the goal and the process of The Test is not like
the goal OR the process of conventional IV-DV research. In the latter (as
you noted) the goal is to detect causal relationships and the process is to
manipulate the IV (under controlled conditions) and look for concomitant
variation in the DV. In the former, the goal is to detect a controlled
variable and the process is to guess what that variable (DV) might be,
manipulate an IV that is _expected_ to have an effect on the DV if there is
NO control and look for LACK of effect on the DV; if there is an effect,
then try a new definition of the DV and start over.
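
A rough sketch of that procedure (Python; the simulated subject, the two
candidate variables, and the tolerance are assumptions invented for the
illustration, not part of The Test itself):

    # Sketch of The Test as described above: guess a controlled variable,
    # apply a disturbance expected to change it if it is NOT controlled, and
    # look for a LACK of effect; if the variable is affected, revise the guess.

    def subject(cursor, target, reference=0.0, gain=5.0, dt=0.1):
        """Assumed subject: acts to keep (cursor - target) at the reference."""
        perception = cursor - target
        return cursor + gain * (reference - perception) * dt

    def run_trial(disturbance, steps=200):
        """Hold a disturbance on the target and let the subject act."""
        cursor, target = 0.0, 0.0
        for _ in range(steps):
            target = disturbance            # the IV: experimenter moves the target
            cursor = subject(cursor, target)
        return cursor, target

    def the_test(candidate, disturbances=(-5.0, 0.0, 5.0), tolerance=0.5):
        """A candidate passes if the disturbances leave it (nearly) unaffected."""
        values = [round(candidate(*run_trial(d)), 3) for d in disturbances]
        return max(values) - min(values) < tolerance, values

    if __name__ == "__main__":
        hypotheses = {
            "cursor position":        lambda cursor, target: cursor,
            "cursor-target distance": lambda cursor, target: cursor - target,
        }
        for name, candidate in hypotheses.items():
            passed, values = the_test(candidate)
            verdict = "possibly controlled" if passed else "affected; revise the guess"
            print(f"{name:25s} {values} -> {verdict}")
        # "cursor position" varies right along with the disturbance, so it
        # fails; "cursor-target distance" is protected from the disturbance,
        # so it survives as a candidate controlled variable.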

In the case of the Test, you push and observe whether the system pushes
back. This is IV-DV methodology, pure and simple,

It is NOT IV-DV methodology if you look to see whether the system "pushes
back". In order to know whether or not the system is "pushing back", you must
have guessed that both the IV and the DV have effects on the same variable --
the hypothetical controlled variable. "Pushing back" implies that the
MUTUAL effects of variables (IV and DV) on another variable (CV --
controlled variable) are equal and opposite. You can't know there is "pushing
back" unless you are monitoring a possible controlled variable. As I said
before, conventional IV-DV methodology is useless for studying control
systems because you don't monitor a suspected controlled variable.

In conventional IV-DV methodology, you "push" with the IV but all you do is
look to see if there is a concommitant change in the value of the DV. If you
"push" a person with different types of contextual letters (IV) in a visual
search task and find a concomitant change in search time (DV) then that's
your finding -- and the implication is that variations in the context letters
cause changes in search rate. You don't conclude that search rate "pushes
back" against variations in contextual letters, do you? Indeed, the fact that
a variable might be controlled is never even considered in such studies; and
it's virtually impossible to tell what the controlled variable might be in
such a study. You don't learn about controlled variables from conventional
IV-DV experiments.

and your assertion that empirical IV-DV relationships tell us nothing
about the internal organization of the system is either false or constitutes
an assertion that the Test for the controlled variable cannot work. I
maintain that it is false. I suppose this means you take the other
position.

Yes. I emphatically take the other position: empirical IV-DV relationships
tell us nothing about the internal organization of the system. Nothing.
Again, this is easily demonstrated; maybe Gary Cziko will stop pondering
evolution long enough to take the time to explain his demonstration of this
for you. For now, just take my word for it (or, again -- take Powers' word
for it from his 1978 Psych Review paper): when you do IV-DV research on a
control system, the relationship between IV (disturbance) and DV (typically
system output) is the inverse of the feedback relationship between output and
controlled variable.

I have argued that the approach adopted by psychology was for a long
time the only one available, given the state of ignorance about possible
mechanisms from which to construct a bottom-up approach.

It was not ignorance of possible _mechanisms_ that was the problem; it was
ignorance of the FACTUAL existence of a closed loop relationship between
organisms and their environment -- a relationship that exists right before
their eyes -- that led psychology to adopt the useless approach to the study
of living systems to which it still desperately clings. PCT's message to
psychologists about the conventional "experimental method" is the same as
God's message to Abraham as he held the knife over Isaac out there on Highway
61 -- drop it! (Of course, PCT had the good sense not to tell psychologists
to pick it up in the first place).

Rick

<[Bill Leach 951108.06:17 U.S. Eastern Time Zone]

[Bruce Abbott (951107.2025 EST)]

As seems to so often be the case (at least for me), disturbances to your
CEVs caused by Rick (and vice versa) eventually result in a control
action that I believe I understand -- as is the case for the
referenced message.

First, I want to critically comment upon:

This is true only to an extent. I seem to be doing a pretty good job
of tweaking your knobs, if I do say so myself. And by so doing I've
learned quite a bit about Rick Marken behavior. For example, it was no
surprise to me at all that you used my post to launch a discussion of
open-loop versus closed-loop analysis, a topic that actually has nothing
to do with the issue at hand. But this is another issue, so I will take
it up (again) some other time.

While the "for example" is doubtlessly true there is nothing here that
suggests that there is any truth in the preceding sentence. I can mix
LOX and Alcohol repeatedly, observing the effects of the ensuing rapid
oxidation but that does not mean that I know anything whatsoever about
what is going on or why.

... theory or conception. It is not whether it uses "scientific"
procedures of observation and logic, but whether it has guessed right.

Of course, this is not the case. What is at issue is how such
"scientists" handle observed errors and how extensively they attempt to
test the predictive limits of their theory.

An example of what I am talking about, I think, is the difference between
PCT and HPCT. PCT is theory -- testable and verifiable. HPCT is
hypothesis -- plausible, reasonable but not yet testable in a rigorous
sense.

It seems to me that when you are trying to understand a system whose
principles are unknown to you, there are only three things you can do.
You can observe it in operation to see what it normally does and under
what conditions, you can poke it to see how it responds, or you can
throw up your hands.

Not quite; there is one other. Well, not really: this other is just a
manifestation of the last choice, but people delude themselves into
believing that it is not. "You can reason about how the system works and
when you have an internally consistent 'theory' you are 'done'."

This is the sort of "science" that brought us such well established
principles as "heavier objects fall faster than do light objects", "an
object held while in lateral motion will immediately cease such motion
and fall vertically when released".

It seems that psychology has "bought" only part of the principles of the
"hard" sciences. The reason that the physical sciences have made such
astounding progress is not _just_ that they try to model with generative
models. It is vital to such science that exceptions and errors are
viewed as the "gateway" to new knowledge and understanding.

In the physical sciences, exceptions must be studied and understood.
Even that the "wave theory" of light provides 100% accurate predictive
results under specified conditions is not enough. Since it has been
demonstrated that they theory's predictive power is in error under some
circumstances (themselves very accurately known) then it is accepted that
we have a serious fundamental flaw in our understanding.

I don't know enough about Thorndike and his time to comment with a high
degree of objectivity. I presume that at the time, Thorndike was much
like the physical scientists... "Let's do physical experiments and see
what happens, as opposed to just talking about it".

If that is true, then in that respect one should conclude that Thorndike
was a scientist. The problem is not attributable to just one person
anyway. The lack of scientific rigor in the behavioural sciences rests
solely on the idea that it is not possible to achieve the modeling
accuracy of the other physical sciences. With this idea "in hand,"
behavioural scientists congratulate each other whenever their
"predictions" exceed what is predicted by random variation (I know -- I
am exaggerating).

At some point, to be a science, someone has to question the fundamental
assumptions of behavioural science; this is not done, though it is oft
claimed.

Physics continuously has challenges made against its most basic
premises. Some of the challenges are probably absurd. Though far from
perfect, physics generally accepts such challenges and attempts to test
them. The entire history of the hard sciences is one of investigating
the exceptions or if there were none, trying to find some.

That does not strike me as an attitude employed anywhere in the
behavioural sciences except right here in PCT. In the behavioural
sciences, exceptions are the rule and researchers seem to be busy trying
to dismiss or explain away exceptions.

(Since this will likely come up anyway...)
I do believe that there are people within EAB that will eventually
further EAB as a science. These are the people that are asking the basic
questions about EAB... the people that are quite uncomfortable about the
exceptions and the current "explanations" for these exceptions.

This is of course overly simplistic. Tycho Brahe was a scientist even
though it was Kepler who deduced the theory. Science is not as easily
reducible as my statements might sound. Theories must be challenged and
tested; this is science when done honestly. However, data must be
acquired and those that observe carefully are (or at least can be)
scientists.

It is much easier to talk about what is not science:

It is NOT science when experiment after experiment provides exceptions to
the conclusions and the proponents of the theory continue to maintain
that the theory is valid. (Though admittedly, this involves opinion)

It is NOT science when the explanation is nothing more than an identity.

It is NOT science if general principles deduced from the theory do NOT
hold in conditions where the theory predicts that they should and the
proponents cannot alter the theory in a generative manner. The duality
of light is a good example from the physical sciences. The wave and
particle theories of light are both very useful and very accurate when
properly applied, but they are known to be a failure as far as the "theory of
light" is concerned.

For me, a problem is that I think that EABers in general are scientists
in that they do work quite hard at defining questions and then attempt
carefully to quantitatively measure experimental results. I think that
much of their work is seriously misguided, but their methods are at the
level of scientific rigor. Their "error" as scientists is the belief
that exacting prediction is not possible. As long as this belief exists,
it is not likely that EAB will ever recognize that there is a
fundamental error in their assumptions about the nature of what they are
studying.

Most (all?) other behavioural scientists suffer from the further problem
that they don't believe in generative models at all, nor do they
believe in rigorous experimental process. Oh, their mathematics might be
rigorous, but their data collection methods and experimental methods
(when experimental methods are used at all) are not.

-bill