WORDS & CONFUSION-RKC

From Bob Clark (931208.1410 EST)

From time to time I find difficulties arising from familiar words
being used in unfamiliar ways. Every specialty has its own lingo,
often modifying its own technical words to form a "short-hand." In
addition, words/concepts that are effective and useful in one field
are borrowed by another field -- but with changed significance. This
can be very confusing indeed. Examples:

DYNAMICS AND DYNAMICISTS
I have been thoroughly baffled by the discussion of "dynamics" and
"dynamicists." To me, these words are related to second order partial
differential equations, systems thereof, methods for their solution,
application to functions of complex variables, etc. How this is
related to behavior, from any standpoint, was a puzzle.

"WAS" because I looked it up in my dictionary: "dynamic psychology,
any approach to psychology which emphasizes drives and motives as
determinants of behavior." This seems to have nothing whatever to do
with physics or "dynamics" as physicists and engineers use the term.
Given this definition, dynamic psychology can be discussed from a PCT
standpoint. But it would have greatly simplified the discussion if
"drives and motives" had been the topics under discussion.

If some other definition of "dynamic psychology" is intended, please
provide it.

ENERGY
This is one of my pet peeves. In physics this has a very specific
meaning: "capability of doing work." And "work" is the "product of a
force and the component of a displacement parallel to the force." It
is a property of the situation -- it is not "in" anything, nor is it
any "where." Indeed, its magnitude depends, in part, on the reference
system chosen for description of the situation. This is not
"reference level" in the PCT sense. Energy can exist (in the
computational sense of "exist") in various "forms," and can be
transformed (sometimes) from one form to another. This is an
exceptionally useful and powerful concept in working with physical
entities.
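
For concreteness, here is the definition in a few lines of Python (a
sketch only; the numbers are arbitrary):

    import math

    # Work = force times the component of displacement parallel to it.
    force = 10.0                 # newtons
    displacement = 3.0           # metres
    angle = math.radians(60.0)   # angle between force and displacement

    work = force * displacement * math.cos(angle)   # joules
    print(f"work done = {work:.1f} J")              # 15.0 J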

ENERGY AND VORTICES
I have been surprised at recent discussion of vortices in terms of
negative feedback and "energy flows." A vortex is a form of motion
that can occur in a fluid under certain conditions. If linear
movement of a fluid is interfered with by its surroundings, some
energy is transferred to rotational motion -- a vortex. This can
occur for the fluid in a pipe, or the banks of a stream, or around an
airfoil. This is a dissipative effect: energy is removed from the
stream. The vortex has acquired rotational energy and angular
momentum. The vortices rub against their surroundings, converting
their energy to heat. This involves nothing beyond classical
physics. There is no feedback involved with vortices.

ENERGY AND MOTORS
Hans Blom reports being "stumped" by a DC motor question. If it
responds to added friction by increasing torque, is this not a case
of negative feedback? Of course not. Go look up DC motors. It is a
case of balance of forces. The force of friction is overcome by the
increased difference between the applied EMF and the back EMF. The
increased friction reduces the speed of the motor resulting in
reduction of the back EMF. Balance of forces. No feedback.
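
For anyone who does go look it up, the balance can be written out in a
few lines (a Python sketch with invented motor constants; any
permanent-magnet DC motor obeys the same algebra):

    V  = 12.0    # applied EMF, volts
    R  = 1.0     # armature resistance, ohms
    Kt = 0.05    # torque constant, newton-metres per ampere
    Ke = 0.05    # back-EMF constant, volts per (radian/second)

    def steady_state(T_load):
        """Speed, current, back EMF once motor torque equals load torque."""
        I = T_load / Kt          # current that supplies the needed torque
        w = (V - I * R) / Ke     # speed at which EMFs and IR drop balance
        return w, I, Ke * w

    for T_load in (0.1, 0.2):    # friction torque, then friction doubled
        w, I, back_emf = steady_state(T_load)
        print(f"load {T_load} N*m: speed {w:5.0f} rad/s, "
              f"current {I:.0f} A, back EMF {back_emf:.1f} V")

More friction yields a lower speed, hence less back EMF, hence more
current and more torque -- an algebraic balance, no loop anywhere.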

ENERGY AND AMPLIFIERS
In electronics, driving forces ("voltages") have been emphasized
rather than energy or power. When designing any electronic device,
it is necessary to consider the inevitable power requirements. But
it is the EMF that is of interest, that does the job. Energy is
required throughout the loop, but the output function may require
additional energy to convert the magnitude of the error signal to a
proportionate (approximately) muscle force. In this sense, the
magnitude of the error signal is amplified and converted to muscle
force. All this requires energy. In addition, further energy may be
needed to actually move the environmental variable toward the
position required by the reference signal. Although energy is
necessary, the system can only be understood in terms of the
relations among the signals, EMFs, forces, etc., that occur within the
system. With systems that include physical materials, you can't get
away from the requirements of classical physics. Whether a system is
considered a "power amplifier" or not depends on whether power is an
important aspect of its design. All amplifiers require power, but
only relatively small amounts are needed by some of them. Control
systems generally require only the relatively small amounts of power
needed to transmit signals. More power is needed only when muscles
are involved.
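
A rough piece of power bookkeeping (a sketch; all magnitudes are
invented) makes the point that the signal path needs almost no power
compared with a muscle-like output stage:

    error_signal_volts = 0.01    # a small error signal
    signal_impedance   = 1e6     # ohms: high-impedance signal path
    p_signal = error_signal_volts**2 / signal_impedance

    gain = 1000.0                # output function: error -> drive level
    drive_volts = gain * error_signal_volts
    load_impedance = 10.0        # ohms: low-impedance, muscle-like load
    p_output = drive_volts**2 / load_impedance

    print(f"signal power ~ {p_signal:.0e} W")   # ~1e-10 W
    print(f"output power ~ {p_output:.0e} W")   # ~1e+01 W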

BALANCE OF FORCES
I am repeatedly troubled by attempts to apply negative feedback
concepts to cases of balance of forces, ordinary energy
relationships, and other "ball-in-the-bowl" situations. Recently,
feedback has even been suggested for geological and cosmic events!
Amazing!

ENTROPY - DEFINITION
This word has been used increasingly in recent posts. It is an
appealing term, usually misunderstood. It is a subtopic of
thermodynamics and is defined in terms of energy and temperature. It
applies to random movements of large numbers of identical,
independent particles. It has a simple mathematical form: dS = dQ/T.
There is a well-known proof that its change is never negative, that
is, entropy can never decrease in a _closed system_. A "closed system"
has boundaries which energy cannot pass across. If dS is integrated,
the constant of integration is indeterminate. "Levels," if any, can
only exist as a consequence of other conditions.
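
The standard textbook illustration, in numbers (a sketch with
arbitrary values): heat passing from a hot reservoir to a cold one
always increases the total entropy.

    Q  = 100.0   # joules transferred
    Th = 400.0   # kelvin, hot source
    Tc = 300.0   # kelvin, cold sink

    dS_total = (-Q / Th) + (Q / Tc)   # source loses, sink gains more
    print(f"total entropy change = {dS_total:+.3f} J/K")   # +0.083 J/K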

ENTROPY & ORGANIZATION
Entropy is often said to pertain to degree of "organization," or
"disorganization." In some situations, that seems to apply, but the
problem is that there is no independent _physical_ definition of
"organization." Thus, "entropy" becomes conceived as a measure of
"organization." But this only applies to physical systems, and has
nothing to do with organizations of people.

ENTROPY & INFORMATION
Likewise, entropy is sometimes applied to "information." The
mathematical form for changes in information is the same, but there
is no _physical_ definition of "information." As information
theorists define their terms and draw their conclusions, I now have
no problem -- since I got a copy of Weaver's Mathematical Theory of
Communication, presenting an interpretation of Shannon's work. (Bill
Cunningham suggested I look it up.) But I find no connection
whatsoever between Entropy and Information or Information Theory.

INFORMATION -- DEFINITION
My dictionary also includes a special definition of "information"
that differs from the usual one and agrees with the Information
Theorists. Five definitions are listed initially, all pertaining to
"knowledge" and its transfer. Next is: "6. (in communication
theory) an indication of the number of possible choices of messages,
expressible as the value of some monotonic function of the number of
choices of messages, usually log to the base 2." For this to apply,
the list of possible messages must pre-exist in the receiver and
correspond exactly to the sender's list. Any message sent, perhaps
by an automatic device responding to a disturbance, that is not on
the receiver's list carries no information -- it is meaningless
noise. And the amount of information received is determined by the
"number of possible choices of messages" available to the receiver.

From the standpoint of the message sent vs the message received, the
"information" received can be very different from the "information"
sent. The notes above suggest some examples of verbal messages
received that differ from those sent. Information Theory is very
useful in its own place, but doesn't seem to have any application to
HPCT.


-----------------------------

I hope these comments help clarify some of the discussions on the Net.

Regards, Bob Clark

[Avery Andrews 931209.1107]
(Bob Clark (931208.1410 EST))

>
>If some other definition of "dynamic psychology" is intended, please
>provide it.
>

`The dynamicists' are a group of people, inspired by Gibson, and ideologically
dominated by Michael Turvey, who emphasize the role of dynamics in
explaining activity. I don't see anything wrong with what they actually
do in a positive sense (e.g. they do have reasonable-looking arguments
that properties of the circulatory system are part of the explanation
for certain properties of rhythmic movements), but they annoy PCT-ers
by

  a) distorting the nature of control systems

  b) claiming that they've explained things by producing `equations of
       constraint', without presenting any proposals about why these
       constraints actually hold.

I suppose this is a pretty obscure name for them, but the alternatives
aren't much better, I think. Some of Rick's papers contain references
to their earlier work, Gavan Lintern's postings to some later work.

Avery.Andrews@anu.edu.au

[Martin Taylor 931208 18:30]
(Bob Clark 931208.1410)

> I hope these comments help clarify some of the discussions on the Net.

Far from it. I think they muddy the waters considerably, unlike most
of your other admirable contributions. Sorry to say so, but that's the
way it looks.

Throughout the discussions, I and I believe other contributors have been
using "dynamics," "energy," "entropy," and such terms as if we were physicists.
Or, at least, trying to do so.

> I have been thoroughly baffled by the discussion of "dynamics" and
> "dynamicists." To me, these words are related to second order partial
> differential equations, systems thereof, methods for their solution,
> application to functions of complex variables, etc.

Right. But why just "second order"?

> How this is
> related to behavior, from any standpoint, was a puzzle.

They relate to behaviour because control systems do, and control systems
are dynamical systems.
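
A minimal sketch of the point (Python, with invented constants): even
the simplest control loop is a differential equation in time, i.e. a
dynamical system.

    # dx/dt = gain*(ref - x) + disturbance, integrated by Euler's method.
    ref, gain, dt = 10.0, 5.0, 0.01
    x = 0.0                                  # the controlled variable
    for step in range(1000):
        disturbance = -4.0 if step >= 500 else 0.0
        error = ref - x                      # perception vs. reference
        x += dt * (gain * error + disturbance)
        if step % 250 == 249:
            print(f"t={(step + 1) * dt:4.1f}s  x={x:6.3f}")
    # x settles at 10, then at 10 - 4/5 = 9.2 after the disturbance.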

> ENERGY
> This is one of my pet peeves. In physics this has a very specific
> meaning: "capability of doing work."

Exactly so. And that capability is related to the entropy as well.
As you say, there has to be a reference level. If the work is being
done by a transfer of energy from a high-temperature source to a low
temperature sink, the temperature difference is what matters. In other
circumstances the energy flow may be in other forms.

> This [energy, MMT] is an
> exceptionally useful and powerful concept in working with physical
> entities.

Such as control systems, which are physical entities.

> I have been surprised at recent discussion of vortices in terms of
> negative feedback and "energy flows." A vortex is a form of motion
> that can occur in a fluid under certain conditions. If linear
> movement of a fluid is interfered with by its surroundings, some
> energy is transferred to rotational motion -- a vortex. This can
> occur for the fluid in a pipe, or the banks of a stream, or around an
> airfoil. This is a dissipative effect: energy is removed from the
> stream.

It certainly IS dissipative, but a sustained vortex, such as one seen
when water goes down a drain at the same rate as water is supplied
upstream, does not vanish. It is a stable structure, at least it maintains
its structure against mild disturbances. It is not at all like the
case of a ball-in-a-bowl, in which a disturbance supplies energy to
the ball, to be released (into heat) when the ball drops down to its
previous position. In a self-organized structure, a disturbance may
take energy from the structure or add energy to it. Either way, the
structure, using the energy of the main flow, returns to its original
state. It is a (possibly very high gain) negative feedback system.
You can't have stable structures in a non-dissipative system, because
such a system cannot incorporate negative feedback.

> This involves nothing beyond classical physics.

True.

> ENERGY AND AMPLIFIERS
> When designing any electronic device,
> it is necessary to consider the inevitable power requirements. But
> it is the EMF that is of interest, that does the job.

And impedance. And that implies power. If the source has insufficient
power, given the load impedance, the device cannot achieve the desired
gain.

> Although energy is
> necessary, the system can only be understood in terms of the
> relations among the signals, EMFs, forces, etc., that occur within the
> system.

You need all of these and more to understand the system. But you can't
build a system that violates thermodynamic laws.

> All amplifiers require power, but
> only relatively small amounts are needed by some of them. Control
> systems generally require only the relatively small amounts of power
> needed to transmit signals. More power is needed only when muscles
> are involved.

Quite so. The effective impedance of a control system whose load is
another control system (as in most of the hierarchy) depends largely on the
parameters of the load (control) system. As a rule, the higher system needs
very little power. But it needs some. That's why the brain is quite a
hot part of the body, isn't it, and why one can examine the general
location of the brain involved in different kinds of processing by
monitoring where the glucose is used?

> ENTROPY - DEFINITION
> This word has been used increasingly in recent posts. It is an
> appealing term, usually misunderstood. It is a subtopic of
> thermodynamics and is defined in terms of energy and temperature. It
> applies to random movements of large numbers of identical,
> independent particles. It has a simple mathematical form: dS = dQ/T.

Some textbooks of thermodynamics do define entropy in this form, yes.
But I think you might enjoy an article in the September 1993 issue of
"Physics Today" (a journal with many fascinating articles over the
years). It is called "Boltzmann's Entropy and Time's Arrow," by
Joel Lebowitz, on pp 32-38. The form you give is a consequence of
taking the Gibbs formulation of entropy, with which Boltzmann's form
agrees in the case of large ensembles. Lebowitz argues persuasively
that Boltzmann's approach, which applies to any micro-ensemble, is
correct even now, after 100 years. Look it up. It's a nicely written
article.

> But I find no connection
> whatsoever between Entropy and Information or Information Theory.

That's always been a point of discussion among physicists and information
theorists. Some say there is, some that there isn't. You aren't alone
in not finding a connection. But if you look at the article I just
mentioned, you might be tempted more into thinking that a useful
connection does exist.

> My dictionary also includes a special definition of "information"
> that differs from the usual one and agrees with the Information
> Theorists. Five definitions are listed initially, all pertaining to
> "knowledge" and its transfer. Next is: "6. (in communication
> theory) an indication of the number of possible choices of messages,
> expressible as the value of some monotonic function of the number of
> choices of messages, usually log to the base 2." For this to apply,
> the list of possible messages must pre-exist in the receiver and
> correspond exactly to the sender's list. Any message sent, perhaps
> by an automatic device responding to a disturbance, that is not on
> the receiver's list carries no information -- it is meaningless
> noise. And the amount of information received is determined by the
> "number of possible choices of messages" available to the receiver.

Apart from the inevitable simplifications you get when you look physical
or mathematical stuff up in a dictionary tuned for language testing,
that's not bad. (Two oversimplifications ought to be noted: the
_number_ of choices is irrelevant -- what counts is the probability
distribution over the choices -- and the number of choices may well
be infinite -- information is defined over continuous as well as
discrete probability distributions.)
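
In numbers (a sketch): two receivers with the same four-message list
but different probability distributions gain different average
information per message.

    import math

    def entropy_bits(p):
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]
    skewed  = [0.97, 0.01, 0.01, 0.01]

    print(f"uniform list: {entropy_bits(uniform):.2f} bits per message")
    print(f"skewed list : {entropy_bits(skewed):.2f} bits per message")
    # 2.00 bits vs. about 0.24 bits -- same number of choices.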

What's good about your description is that it centres on the receiver.
("Correspond exactly with the sender's list" is wrong. The sender's
list is irrelevant to the information gained by the receiver.)

The information provided by a message (NOT "in" the message) is a
reduction of the receiver's uncertainty about something. No outside
observer can determine how big that reduction actually is, unless the
receiver's probability distribution can be observed before and after
the receiver receives the signal. In fact, the information in a signal
may be negative for some receiver, if it is more uncertain after than
before receiving the signal. The "same" signal could have a large positive
information value for a different receiver.

What an outside observer CAN determine is the maximum information the
signal could have provided to the receiver, if the outside observer
knows appropriate parameters of the signal source and of the receiver,
such as bandwidth and resolution.
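
For instance (a sketch with invented figures, along Hartley's
classical line of argument): bandwidth limits the number of
independent samples per second, and resolution limits the bits per
sample.

    import math

    bandwidth_hz = 3000.0   # signal bandwidth B
    levels = 16             # distinguishable amplitude levels (resolution)

    # Nyquist: at most 2B independent samples per second, each carrying
    # at most log2(levels) bits -- an upper bound, whatever the receiver.
    max_bits_per_sec = 2 * bandwidth_hz * math.log2(levels)
    print(f"upper bound: {max_bits_per_sec:.0f} bits/s")   # 24000 bits/s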

> Information Theory is very
> useful in its own place, but doesn't seem to have any application to
> HPCT.

I seem to have heard that somewhere before, and I still don't believe it.

Martin

From Tom Bourbon [931209.1320]

> From Bob Clark (931208.1410 EST)
>
> From time to time I find difficulties arising from familiar words
> being used in unfamiliar ways. Every specialty has its own lingo,
> often modifying its own technical words to form a "short-hand." In
> addition, words/concepts that are effective and useful in one field
> are borrowed by another field -- but with changed significance. This
> can be very confusing indeed. Examples:
>
> DYNAMICS AND DYNAMICISTS
> I have been thoroughly baffled by the discussion of "dynamics" and
> "dynamicists." To me, these words are related to second order partial
> differential equations, systems thereof, methods for their solution,
> application to functions of complex variables, etc. How this is
> related to behavior, from any standpoint, was a puzzle.
>
> "WAS" because I looked it up in my dictionary: "dynamic psychology,
> any approach to psychology which emphasizes drives and motives as
> determinants of behavior." This seems to have nothing whatever to do
> with physics or "dynamics" as physicists and engineers use the term.
> Given this definition, dynamic psychology can be discussed from a PCT
> standpoint. But it would have greatly simplified the discussion if
> "drives and motives" had been the topics under discussion.
>
> If some other definition of "dynamic psychology" is intended, please
> provide it.

Avery and Martin have already replied to you, Bob. As my reply, I want to
add the following material, which I had prepared a couple of days ago in
anticipation of an opportunity like the one presented by your post. (This
is not a case of precognition, or of feedforward. :-) The topic of
"dynamics" seems to be in the air these days.)


===================================================

     Some behavioral scientists characterize living things as
self-organizing systems, nonequilibrium systems, or dynamical
systems, or some combination of those. In what terms do they
describe behavior? Do they provide generative models for
behavior? Do they acknowledge, describe and explain the
phenomenon of control? When they speak of attractors and
dynamics, do they use the terms descriptively, or instead are the
terms meant to be explanatory?

     The recent flurry of discussion on csg-l about dynamical
systems analysis (DSA) left me wondering if I accurately
remembered what I had read of the DSA literature. To refresh my
thoughts on this subject, I went to my file of articles by DSA
authors, reached in without peeking, and pulled out what will
pass as a representative case. Anyone interested in verifying or
refuting the adequacy of the sample article may read it, or may
select any number of others by any means they desire. The
article is:

P.G. Zanone & J.A.S. Kelso (1992). Evolution of behavioral
attractors with learning: Nonequilibrium phase transitions.
Journal of Experimental Psychology: Human Perception and
Performance, 18, 403-421.

(I am pleased with "the luck of the draw." I remember this as a
good example of the genre.)

The abstract:

"Learning a bimanual coordination task (synchronization to a
visually specified phasing relation) was studied as a dynamical
process over 5 days of practicing a required phasing pattern.
Systematic probes of the attractor layout of the 5 S's
coordination dynamics (expressed through a collective variable,
relative phase) were conducted before, during, and after
practice. Depending on the relationship between the initial
coordination dynamics (so-called *intrinsic dynamics*) and the
pattern to be learned (termed *behavioral information*, which
acts as an attractor of the coordination dynamics toward the
required phasing), qualitative changes in the phase diagram
occurred with learning, accompanied by quantitative evidence for
loss of stability (*phase transitions*). Such effects persisted
beyond 1 week. The nature of change due to learning (e.g.,
abrupt vs. gradual) is shown to arise from the cooperative or
competitive interplay between behavioral information and the
intrinsic dynamics" (p. 403).

=============================================
     I will summarize their method, placing special emphasis on
statistical procedures and results. Those results provide the
raw material -- the behavioral phenomenon or phenomena the
authors will explain with DSA. They asked people to perform a
task requiring movements of the two index fingers, with the
movement of each finger cued by one of two lights whose onsets
could occur at various relative phases. At various points in
time, the authors sampled the actual phasing of finger movements.
They calculated "relative phasing (RP)," the average relationship
of actual phasing to target phasing, averaged across specified
intervals of time. They did not sample behavior continuously, at
least not with a short sampling interval. In their analyses,
they emphasized changes, across time, in (a) the average within-
trial mean RP (averaged across all subjects) and (b) the average
within-trial SD of the RP (again, averaged across all subjects).

In this long article, everything the authors say is aimed at
explaining the behavior of individuals, but their arguments all
rest on means of means of means, or on means of means of standard
deviations. Does that raise strong doubts in anyone other than
me? If not, it should. Figure 4 and Table 1 (both on p. 409)
show the overall results. The figure shows means of means of ...
more or less doing the right things across trials and across
days: the mean of the mean RP (averaged across all subjects)
approximates the target values; the mean of the mean standard
deviation of RPs more or less decreases across days. Table 1
shows the results for individual subjects. It does not
unambiguously support the aggregated data in Figure 4, or the
description in the text: Only two of the five subjects had
decreases in mean RP and mean SD across all five days; the other
three had slight to major deviations from the pattern the authors
say characterizes human performance on this task.
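
To see how easily that can happen, consider a toy example (invented
numbers, not the authors' data): the group mean shows a tidy downward
trend that three of the five "subjects" do not follow.

    subjects = {
        "S1": [40, 30, 20, 12,  8],   # improves steadily
        "S2": [38, 28, 18, 10,  6],   # improves steadily
        "S3": [20, 25, 18, 26, 19],   # no clear trend
        "S4": [15, 35, 12, 30, 14],   # no clear trend
        "S5": [30, 10, 33,  9, 31],   # oscillates
    }
    group_mean = [sum(scores[d] for scores in subjects.values()) / 5.0
                  for d in range(5)]
    print([round(m, 1) for m in group_mean])
    # [28.6, 25.6, 20.2, 17.4, 15.6] -- a tidy "learning curve"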

Please do not assume that I exaggerate the deviation of
individual subjects from the idealized aggregates the authors
submit to DSA interpretation. They report that in "a regression
analysis performed on the mean RP and SD, collapsed across
subjects, R-squared = .342, p < .01, and R-squared = .291, p <
.01, respectively." The authors did not report the coefficient
of alienation (k, where k-squared = 1 - R-squared), for those R-
squares, but I calculated them. (PCTers often call k the
coefficient of uselessness of a correlation.) For mean Relative
Phasing, k = .81; for mean Standard Deviation, k = .84. Either
of the R-squares is pretty useless; the relationships between RP
and time, and between SD and time, are extremely weak. Given
either R-squared, if the authors tried to predict specific values
of RP, or of the SD of RP, across trials, the standard error of
the predictions would be > 80% of the standard error they would
have knowing only the overall mean values of RP, or of the SD of
RP -- their data don't tell them very much about the kind of
performance they want to explain.
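
The arithmetic, for anyone who wants to check it:

    import math

    # Coefficient of alienation: k = sqrt(1 - R-squared).
    for label, r_squared in (("mean RP", 0.342), ("mean SD", 0.291)):
        k = math.sqrt(1.0 - r_squared)
        print(f"{label}: R-squared = {r_squared}, k = {k:.2f}")
    # mean RP: k = 0.81;  mean SD: k = 0.84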

I am certain a few people on this net would argue that, given
data such as these, there is no behavioral phenomenon for the
authors to interpret with DSA. But the data clearly meet the
criteria accepted in modern behavioral science and the authors
did use DSA to interpret the data.

The authors speak, many times, about the relationships between
"intrinsic dynamics" (which seem to be patterns of movement a
subject makes before you teach something new) and "behavioral
information" (a new pattern you require the subject to learn).
Their ideas are summarized in this passage: "Learning, in the
dynamic pattern framework, is the process by which environmental
behavioral information defining a pattern to be learned becomes
memorized behavioral information. A coordination pattern is
learned to the extent that the intrinsic dynamics are modified in
the direction of the to-be-learned pattern. Once learning is
achieved, the memorized pattern constitutes an attractor of the
behavioral pattern dynamics (references omitted here)" (p. 404).

I cannot help but read that passage as saying, "learning is the process by
which learning occurs. The extent to which learning has occurred is the
extent to which it has occurred. Once learning has occurred, it has
occurred."

The article is rich in remarks that the authors are using, "the
language of self-organization," and "the image of attractor
layout," and "the language of nonlinear, dissipative dynamical
systems." They speak of observing "cooperation and competition
between behavioral information and intrinsic dynamics." The
intrinsic dynamics have a "potential" (V); environmental
information has "strength" (c); sometimes the two "forces" (V and
c) "cooperate," other times they "compete" and "pull" the result
away from the required relative phase; the "potential" can be
deformed; and so on. "As this new relative phase is learned, the
influence of the initially bistable dynamics attracting the
system to in-phase or anti-phase patterns dwindles, because of
the progressively overwhelming attraction by the pattern being
memorized . . . ." The "... strength of memorized information
increases." (All from pp. 405-6.) Of course, all of these ideas
might be purely descriptive and metaphorical. That would be no
problem. But I have a continuing uneasy feeling when I read the
article. It seems to be about more than description; the authors
intend it to be about *mechanisms*. They say so, clearly, on
page 406:

"3. What mechanisms and principles govern changes due to
learning? Whether some tasks are learned more easily than others
(e.g., in terms of rate of learning and performance efficiency)
depends on the extent to which behavioral information cooperates
or competes with the intrinsic dynamics." Mechanism, not
description. Also, the authors' discussion of individual
subjects includes the idea that they might have arrived at their
respective final dynamics through different mechanisms, meaning
through different combinations of cooperation and competition
between what they did before learning and what they were required
to learn, all explained in dynamical terms.
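
For readers who want the formalism behind the words: the bistable
"intrinsic dynamics" in this literature are conventionally written as
the Haken-Kelso-Bunz potential. A sketch of it (my reconstruction for
illustration, not code from the article):

    import math

    # V(phi) = -a*cos(phi) - b*cos(2*phi): minima at relative phase 0
    # (in-phase) and pi (anti-phase) when b is large enough relative to a.
    a, b = 1.0, 1.0

    def V(phi):
        return -a * math.cos(phi) - b * math.cos(2.0 * phi)

    for deg in (0, 90, 180):
        print(f"V({deg:3d} deg) = {V(math.radians(deg)):+.2f}")
    # V(0) = -2.00 (deepest), V(90) = +1.00 (hump), V(180) = +0.00
    # (local minimum): the "bistable" layout the authors describe.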

===================================

To the degree that the present article represents the genre (I
believe it serves quite well), my recollection of the DSA
literature **in behavioral-cognitive science** was accurate.
Dynamical systems analysis is popular in many fields of study and
it might play important roles in some of those fields. On the
other hand, I believe my doubts are justified when I see DSA used
to explain behavioral "phenomena" in the form of weak
correlations between scores aggregated many times across several
subjects. One thing is certain: When authors claim that
attractors and other features of dynamical systems play the roles
of causal mechanisms in behavior, they often invoke some
interesting causal "forces," "powers" and "processes."

If the article reviewed here is any clue, it seems fair to say
that DSA people in behavioral science have goals different from
those of PCTers. I have no quarrel with that -- how could I? But it
does seem fair for PCTers to request strong empirical evidence from anyone
who says DSA, **as it is typically applied in behavioral-cognitive
science**, subsumes PCT.

Until later,

Tom

> But it
> does seem fair for PCTers to request strong empirical evidence from anyone
> who says DSA, **as it is typically applied in behavioral-cognitive
> science**, subsumes PCT.

I agree with this carefully worded statement, while at the same time
asserting that all control systems are dynamical systems. Clearly the
authors you quote have very little empirical evidence that there is
ANY dynamical system per se (mechanism) involved in the process
they're studying. Dynamics requires STATE DETERMINISM (for a GIVEN
input), and their systems are all over the place.

Chaos is a red herring: chaotic systems are dynamical systems, but
don't LOOK like them FROM THE OUTSIDE. Determining the presence of a
mechanism as the source of a random process (chaos) is VERY DIFFICULT.
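
One toy example of that difficulty (a sketch): the logistic map is
completely deterministic, yet its output passes for noise from the
outside.

    # x(n+1) = 4*x(n)*(1 - x(n)): a deterministic rule, chaotic regime.
    x = 0.123                     # any seed strictly between 0 and 1
    samples = []
    for _ in range(10):
        x = 4.0 * x * (1.0 - x)
        samples.append(round(x, 3))
    print(samples)                # looks random; there is no randomness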

So, yes, the evidence you've supplied supports the hypothesis that DSA
**as it is typically applied in behavioral-cognitive science**, is a
metaphor.

O----------------------------------------------------------------------------->

Cliff Joslyn, Cybernetician at Large, 327 Spring St #2 Portland ME 04102 USA
Systems Science, SUNY Binghamton NASA Goddard Space Flight Center
cjoslyn@bingsuns.cc.binghamton.edu joslyn@kong.gsfc.nasa.gov

V All the world is biscuit shaped. . .

[Martin Taylor 931210 12:10]
(Tom Bourbon 931209.1320)

In reply to Bob Clark:

> I want to
> add the following material, which I had prepared a couple of days ago in
> anticipation of an opportunity like the one presented by your post. (This
> is not a case of precognition, or of feedforward. :-) The topic of
> "dynamics" seems to be in the air these days.)

Thanks, Tom. Now I have a much better appreciation of what it is that
raises your antennae when you hear the word "dynamics." I dissociate what I
have been talking about from the kind of thing you present. It's not
the same objective or even domain of thought, despite perhaps using
methods with the same mathematical background. As Bill P says, having
a common mathematical background doesn't give things equal validity.

I guess I have to be more careful about what I presuppose in respect of
how people will interpret stuff. You have explained some reactions I
have up to now considered quite bizarre.

Martin