IT, dynamics, PCT

[Martin Taylor 931221 14:00]
(Tom Bourbon 931216.1544)

OK, Tom. You win. Here is a brief and probably incomprehensible outline
of where I see linkages between, firstly information theory and dynamic
analysis, and secondly between informational dynamic analysis and control
theory. I sincerely hope you are right that it is worthwhile for me to
do this in public rather than to you directly.

In particular, I could not follow this comment of yours about IT and DSA:

Martin:

Indeed, information theory is appropriately applied
in dynamical analysis, attractors being manifestations of the destruction
of information about prior states.

My non-understanding extended to the subsequent sentence (quoted below),
which I did not include in my reply. In this sentence, I believe you were
indicating that, like attractors, control systems also destroy, or at least
reduce, information about prior states. You said:

Control systems are more appropriate
to locally divergent systems, in that they, too, reduce the information
available about prior states, whereas the divergences actually increase
it.

Once more, I confess my inability to follow your reasoning, which is in part
due to my problem with disentangling the several agents, actions and objects
of actions that are declared or implied in your passage. I am disappointed
that you have elected to withdraw from the discussion of these topics on the
net.

I was disappointed, too, but you well know the reasons. Nevertheless, with
some trepidation I will try to expand on this one aspect of the topic.

The viewpoint is that of the external observer, not an internal point in
any possible control system. Words like "destroy" that imply an agent
are metaphorical.

First, some baseline information, already noted in several postings. An "attractor,"
as I use the term, is part of a description of a dynamic. A dynamic is
the total set of possible orbits that a physical system could follow in a
phase space if it were undisturbed from outside. The phase space is a space
of as many dimensions as it takes to make a complete description of the
present state of the system. If the system is strictly mechanical, the
phase space consists of enough dimensions to describe all the locations
of all the parts of the system and all their momenta, angular or linear.
For a single ball in a bowl, for example, where the ball is constrained
not to leave the surface of the bowl (unless it falls out of the bowl),
the phase space has two location dimensions and two momentum dimensions.

An orbit of the ball-in-a-bowl could experimentally be traced out by
starting the ball at some location with some momentum, and seeing where
the ball goes, plotting its location and momentum over time. The set of
all possible orbits is the dynamic. (If you change the steepness of the
sides of the bowl parametrically and find the dynamic for all steepness
values, you get a "superdynamic.") Any orbit in the dynamic will end either
with the ball out of the bowl (outside the interesting region of the phase
space) or with the ball at zero momentum at the bottom of the bowl. Many
orbits will have the ball at the centre of the bowl at some point or points,
but with non-zero momentum. However, all of them will wind up at the same
place in the phase space, zero momentum at the middle of the bowl. That
point in phase space is an attractor, and the set of points in the phase
space that are on orbits leading to the attractor define the basin of
attraction for that attractor.
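To make the picture concrete, here is a minimal numerical sketch -- my own
construction, not anything from the discussion above. It takes one
position-momentum cross-section of the ball-in-a-bowl dynamic and models
it as a damped oscillator; the function names and constants are
illustrative assumptions. Every orbit started in the basin winds up at
the point attractor (zero position, zero momentum):

```python
# Toy 2-D cross-section of the ball-in-a-bowl dynamic: one position
# dimension and one momentum dimension, modelled as a damped oscillator.

def step(x, p, dt=0.01, k=1.0, c=0.3, m=1.0):
    """One Euler step: restoring force -k*x, friction -c*(p/m)."""
    force = -k * x - c * (p / m)
    return x + (p / m) * dt, p + force * dt

def run_orbit(x0, p0, steps=20000):
    """Trace an orbit from (x0, p0) and return its final phase-space point."""
    x, p = x0, p0
    for _ in range(steps):
        x, p = step(x, p)
    return x, p

# Different starting points, same destination in phase space.
for x0, p0 in [(1.0, 0.0), (-0.5, 0.8), (0.2, -1.0)]:
    xf, pf = run_orbit(x0, p0)
    print(round(xf, 3), round(pf, 3))  # each orbit ends near (0, 0)
```

Plotting (x, p) over time for each start would trace the converging
orbits described above; printing only the endpoints shows the attractor.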

The orbits that lead the ball to fall out of the bowl are in some other
basin of attraction for which the attractor is effectively "at infinity."
There is an orbit or set of orbits in which the ball just does not fall
out of the bowl, but doesn't fall to the middle either. When the ball
stops, it is perched on the rim. These orbits define a "repellor," a
boundary between basins of attraction. A physical system is extremely
unlikely to be found very near a repellor, but very likely to be found
near an attractor.

The observer can obtain information about the ball by taking a measurement.
That is to say, the observer can locate the position of the ball in phase
space at any particular moment. Before taking the measurement, the observer
may have had some idea of where the ball was, meaning that the observer
had some probability distribution over the phase space for where it was.
Perhaps that probability distribution was uniform (there exists a ball in
the bowl), perhaps not. The probability distribution defines a specific
uncertainty, according to Shannon's formulae, determined to within a
constant if the distribution is continuous. After the measurement, the
observer has a different probability distribution. Measurements are not
exact, though the perception related to one may have a specific value.
So after the measurement, the observer has a different uncertainty than
before it, specified except for the same constant. The difference between
these two uncertainties is the information provided by the measurement.
The measurement provides information "about the state of the ball;" all
information is "about" something (which is why I always have a bit of a
problem when people say that semantic information is special).
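The uncertainty bookkeeping can be sketched in a few lines. This is an
illustrative discretization of my own (the cell counts and distributions
are assumptions, not anything from the post): divide the phase space into
cells, compute Shannon uncertainty before and after a measurement, and
take the difference as the information the measurement provided:

```python
# Uncertainty, in bits, of the observer's probability distribution over
# discrete phase-space cells, before and after a measurement.

import math

def uncertainty_bits(p):
    """Shannon uncertainty H = -sum p_i * log2(p_i) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Before: uniform over 16 cells ("there exists a ball in the bowl").
prior = [1 / 16] * 16
# After a coarse measurement: the ball is in one of 2 adjacent cells.
posterior = [0.5, 0.5] + [0.0] * 14

info = uncertainty_bits(prior) - uncertainty_bits(posterior)
print(info)  # 4 bits before, 1 bit after: the measurement provided 3 bits
```

The "constant" mentioned above for continuous distributions cancels in
the subtraction, which is why the information difference is well defined
even when the individual uncertainties are not.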

There's no way I can draw a dynamic in ASCII (even a 2-D cross section
of it) so you have to try to visualize. Remember, all the orbits wind
up (!) at the attractor--zero momentum at the centre of the bowl. That
means that they get closer to one another as time goes on. If you make
a measurement that is less than infinitely precise, the corresponding
probability distribution covers some finite area of the phase space. To
simplify the visualization, think of this area as a box extended in the
location and momentum dimensions.

Knowing the dynamic, the observer could say where any precisely specified
point in the box would be or would have been at any moment in the past or
the future (remember, the dynamic allows no external disturbance, which
would move the ball to a different orbit; we will introduce disturbances
later). Orbits converge toward the attractor and diverge away from it.
If the less-than-infinitely-precise measurement happened to find the ball
very near the attractor, the measurement would tell the observer almost
nothing about which orbit the ball was on, or about where in the phase
space it might have been at some specific earlier time. But the uncertainty
of a measurement that found the ball far from the attractor would cover
fewer orbits and would allow much better postdiction.

What this says is that a measurement near the attractor provides less
information about the past than does a measurement far from the attractor.
The attractor is in that sense a destroyer of information. If all roads
lead to Rome, where did someone in Rome come from? Someone in Naples
probably came from further south, someone in Milan from further north.
When they get to Rome, you can't guess.
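The all-roads-lead-to-Rome effect can be shown numerically. Here is a toy
model of my own (the contracting map, grid of starting points, and
measurement precision are all assumptions made for illustration): orbits
of x -> 0.5*x converge on the attractor at 0, and a fixed-precision
measurement made after convergence is consistent with almost every
starting point, while the same measurement made early pins the start down:

```python
# Contracting map x -> 0.5 * x, attractor at 0. How many candidate
# starting points are consistent with one finite-precision measurement?

def orbit_value(x0, t):
    """State after t steps of the map x -> 0.5 * x."""
    return x0 * (0.5 ** t)

def consistent_starts(x_measured, t, precision, candidates):
    """Starting points whose orbit passes within `precision` of the reading."""
    return [x0 for x0 in candidates
            if abs(orbit_value(x0, t) - x_measured) <= precision]

candidates = [i / 50 for i in range(-50, 51)]   # 101 possible starts in [-1, 1]
true_start = 0.8

early = consistent_starts(orbit_value(true_start, 1), 1, 0.015, candidates)
late = consistent_starts(orbit_value(true_start, 12), 12, 0.015, candidates)
print(len(early), len(late))  # few candidates early; all 101 near the attractor
```

Near the attractor the measurement box covers every orbit, so postdiction
fails; far from it the box covers few orbits, and postdiction is good.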

The second part of your question should be readily answered by inverting
the above reasoning:

... I believe you were
indicating that, like attractors, control systems also destroy, or at least
reduce, information about prior states. You said:

Control systems are more appropriate
to locally divergent systems, in that they, too, reduce the information
available about prior states, whereas the divergences actually increase
it.

Firstly, why does the divergence of orbits increase the information
available about prior states? If you make a measurement now, the
uncertainty distribution covers a certain volume of the phase space.
If the orbits are diverging, the area covered by the orbits that pass
through the uncertainty box will increase in future time (you don't
know where they will go) whereas in the past the area will be reduced
(you know where they came from).
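The asymmetry is easy to quantify in a toy divergent dynamic of my own
choosing (the doubling map and interval width are illustrative
assumptions): an uncertainty interval measured now widens when projected
into the future and narrows when projected into the past, so divergence
increases what a present measurement says about prior states:

```python
# Locally divergent dynamic x -> 2 * x: project an uncertainty interval
# measured now into the future (it widens) and into the past (it narrows).

def project(width_now, steps, rate=2.0):
    """Width of the uncertainty interval `steps` steps away (negative = past)."""
    return width_now * (rate ** steps)

width_now = 0.1
future = project(width_now, 5)    # 0.1 * 32 = 3.2      (prediction degrades)
past = project(width_now, -5)     # 0.1 / 32 = 0.003125 (postdiction improves)
print(future, past)
```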

A control system stabilizes, meaning that with the control system active,
the effective (controlled) dynamic has parallel orbits in the region of
phase space that contains the physical variable. That means that for
effective control, continuing measurement by the perceptual process must
gather information as rapidly as the basic divergence makes it available.

Now add a disturbing variable to the situation. In the absence of control,
a disturbance may be defined as something that moves the physical variable
from one orbit to another in the phase space. In other words, its effect
is inherently unpredictable, because prediction is based on orbit-following
within the dynamic, and disturbance moves the physical system from one
orbit to another. (One might incorporate the disturbing variable within
a larger phase space of description, but then it would no longer qualify
as a disturbing variable, besides requiring more one-dimensional perceptual
functions to define the expanded phase space).

One can look at the effect of a disturbance as causing each orbit in the
phase space to fan out from a measurement, in both directions of time.
This means that no matter how precisely a measurement is made at time t1,
prediction at t2 becomes worse and worse as t2 departs further and further
from t1 in either the past or the future. In this, disturbances differ
from the behaviour of the simple dynamic, in which either (a) things are
informationally stable or (b) one of prediction and postdiction is possible.
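A toy model of my own makes the fanning-out visible (the random-walk
disturbance and its step size are assumptions for illustration only): the
accumulated disturbance away from the measurement time grows with the
number of steps, symmetrically in past and future, unlike the undisturbed
dynamic, where one direction can be reconstructed exactly:

```python
# Disturbance modelled as a Gaussian random walk added to a known orbit.
# The spread of possible states grows with distance from the measurement
# time t1 in *either* direction of time.

import random

def spread_at(offset, trials=2000, step_sd=1.0):
    """Sample std. dev. of the accumulated disturbance |offset| steps from t1."""
    random.seed(0)                      # fixed seed for reproducibility
    finals = []
    for _ in range(trials):
        finals.append(sum(random.gauss(0, step_sd) for _ in range(abs(offset))))
    mean = sum(finals) / trials
    var = sum((f - mean) ** 2 for f in finals) / trials
    return var ** 0.5

# Spread grows like sqrt(|t2 - t1|): roughly 1, 2, 4 for 1, 4, 16 steps.
print(round(spread_at(1), 1), round(spread_at(4), 1), round(spread_at(16), 1))
```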

Control in the presence of a disturbance is like that in the diverging
dynamic. The information lost to the disturbance must be regained by
the continuing observations, if stability is to be retained (the orbits
local to the physical variable maintain effective parallelism within the
accuracy of measurement). The actions of the control system cause this
effect in what we might call the "controlled dynamic"--the dynamic when
the control system is operative.

When the control system is active, the local orbits may be parallel, but
orbits for the physical variable that would result in perceptual signals
further from the reference level are convergent. The controlled dynamic
contains an attractor, which is the orbit defined by the reference level.
Of course, this attractor is not something that the ECS observes, it having
only a perceptual signal that corresponds to the current value of one
parameter of the physical entity. The attractor is only detectable by
an outside observer--the analyst, who might deliberately impose disturbances
(apply the Test) to determine the nature of the dynamic and see whether
it has the characteristics of a controlled dynamic: orbits that converge
not to a point attractor or a limit cycle, but to a noisy set of interweaving
"orbits" that are essentially parallel over some region of the phase space.
The noisiness represents the lower of two rates, that at which the perceptual
input can measure the physical variable, and that at which the control
actions can influence it.
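For readers who want the loop itself in front of them, here is a minimal
sketch of such a controlled dynamic. It is my own toy construction; the
gain, step size, and constant disturbance are illustrative assumptions,
not anything specified above. Running it with the control active and
inactive is the kind of comparison the Test relies on:

```python
# Minimal negative-feedback loop: with control active (gain > 0) the
# controlled quantity stays near the reference despite a disturbance;
# with it inactive (gain = 0) the disturbance carries the quantity away.

def run(reference, disturbance, gain, steps=2000, dt=0.01):
    qi = 0.0                          # controlled (input) quantity
    for t in range(steps):
        perception = qi               # perceptual signal = current value
        error = reference - perception
        output = gain * error         # control action; gain = 0 switches it off
        qi += (output + disturbance(t)) * dt
    return qi

disturbance = lambda t: 5.0           # constant push on the variable

controlled = run(reference=3.0, disturbance=disturbance, gain=50.0)
uncontrolled = run(reference=3.0, disturbance=disturbance, gain=0.0)
print(round(controlled, 2), round(uncontrolled, 2))  # near 3 vs. far away
```

With finite gain the controlled run settles a little off the reference
(here at 3.1 rather than 3.0), while the uncontrolled run drifts without
bound: in the controlled dynamic, orbits are held effectively parallel
near the reference orbit.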

I realize that I picked up the tarbaby, in deference to Brer Tom's cajoling.
I don't want to spend long hours trying to find Texas solvent to get rid of
it. I may respond to direct questions, if I can do so without taking too
much time and net bandwidth, but I will make every effort to avoid getting
into a prolonged discussion that seems likely to lead nowhere. In contrast
to the vortex discussion, I do think this is important to PCT, but I realize
a lot of people don't, and also that the backgrounds of many who might think
it important are different from mine in ways that make public communication
very difficult to do usefully.

I have no illusions that the foregoing has clarified matters, but I have
tried to make it responsive to Tom's questions.

Martin

Tom Bourbon [931221.1727]

[Martin Taylor 931221 14:00]
(Tom Bourbon 931216.1544)

OK, Tom. You win. Here is a brief and probably incomprehensible outline
of where I see linkages between, firstly information theory and dynamic
analysis, and secondly between informational dynamic analysis and control
theory. I sincerely hope you are right that it is worthwhile for me to
do this in public rather than to you directly.

Brer Martin! Thanks for picking up the tarbaby once more and sending a
careful, thorough post, which is far more intelligible than the material I
have been reading by Turvey, Kelso and others. I just found your post and
I am already late getting out of the office. I will reply tomorrow, if
circumstances allow -- the holiday is rushing toward us and we have the
added excitement and confusion of our daughter telling us she is engaged.
Not a very stable time. Can a repellor be far away?

Until later,

Tom

From Tom Bourbon [931223.0906]

[Martin Taylor 931221 14:00]
(Tom Bourbon 931216.1544)

OK, Tom. You win. Here is a brief and probably incomprehensible outline
of where I see linkages between, firstly information theory and dynamic
analysis, and secondly between informational dynamic analysis and control
theory. I sincerely hope you are right that it is worthwhile for me to
do this in public rather than to you directly.

I think the reactions to this post justify a conclusion that your time was
well spent. You have been telling us (asserting) for a long time that there
are significant differences between the way(s) you think dynamic analysis
relates to PCT, and the ways some writers (Turvey, Kelso, and others) use
dynamic analysis to explain behavioral phenomena. You did not identify
or explain those differences as you saw them, and your assertions-as-
disclaimers did not make the differences clear, hence your frequent
bewilderment when people on the net reacted as though you had posted in the
Turvey-Kelso style. Your present post makes the differences clear -- at
least a sufficient number of them to allay many protests like those you
encountered in the past. Worthwhile?

For any people lurking on this net who do not understand why PCT modelers
have reacted so strongly and negatively to the introduction of ideas about
dynamic analysis (dynamical systems analysis -- DSA), I suggest a reading of
several of the articles I cited in recent posts (by Kelso, Turvey and
associates). After you finish them, read Martin's post again, at least the
portions that deal with DSA. I believe several important differences will
be obvious. (And Martin, clarity of style is *not* the least of those
differences.)

You also introduced some of your ideas about the "bigger" scheme of things,
in which you try to relate DSA, PCT and information theory (IT) -- this in
direct reply to my equally direct challenge when I posted the summary of
Kelso's article on Fitts's Law (J. Exp. Psychol: General, 1992, vol. 12,
260-261). Kelso had discussed his ideas about the DSA-IT connection and I
quoted his remarks in a flagrant attempt to "draw you out" on the subject of
the DSA-IT-PCT connection. You took the challenge -- you grasped the
tarbaby. Now we have a hint of the direction you intend to go with your
proof that PCT follows, *necessarily*, from IT. What is not yet clear is how
you will demonstrate the *necessity* in the relationship, but your present
post should serve to hold some of the PCT hounds at bay while you work out
that detail. ;-))

I have no illusions that the foregoing has clarified matters, but I have
tried to make it responsive to Tom's questions.

You did that, Brer Martin. Thanks.

Until later,

Tom