[From Bill Powers (950320.1906 MST)]
Just got back from Boulder to find a number of interesting messages, as
well as an uninteresting squabble or two.
Bruce Abbott (950318.1305 EST)--
Your new model for SDTEST3 is exactly what I had in mind. I don't think
we need to worry right now about how the target is selected; there are
so many possibilities that picking any one right now would be pointless
(and untestable). Let's just say, as you did, that the output of the
logic-level control system selects a target. Your diagram looks just
like one I was scribbling before leaving for Boulder. I think you were
reading my mind.
The tracking (Level 1) system should contain a perceptual delay; we've
shown that such a delay can slightly improve the model's fit. I do this
(following a suggestion by Martin Taylor) by defining a 64-element
array, with an input pointer and an output pointer. The array is made
circular by ANDing the decremented value of the input pointer with 63.
The output pointer is computed by adding the delay to the input pointer,
also modulo 64. The computed perceptual signal is entered via the input
pointer, and the delayed perceptual signal is extracted with the output
pointer. The value of 64 should cover much more than the maximum delay
ever seen with continuous tracking. If that explanation is opaque I'll
post the code segment.
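In the meantime, here is a minimal sketch of that delay line in Python. The class and method names are my own invention; only the pointer arithmetic follows the description above (input pointer decremented and ANDed with 63, output pointer = input pointer + delay, also masked to 64).

```python
class DelayLine:
    """64-element circular buffer for a delayed perceptual signal.

    New samples enter at a decrementing input pointer; the sample
    delayed by `delay` steps sits `delay` slots ahead of it.
    """
    SIZE = 64           # power of two, so ANDing with 63 wraps the pointer
    MASK = SIZE - 1

    def __init__(self, delay):
        assert 0 <= delay < self.SIZE
        self.delay = delay
        self.buf = [0.0] * self.SIZE
        self.inptr = 0

    def step(self, sample):
        # decrement the input pointer, wrapping with AND
        self.inptr = (self.inptr - 1) & self.MASK
        self.buf[self.inptr] = sample
        # output pointer is the input pointer plus the delay, modulo 64
        outptr = (self.inptr + self.delay) & self.MASK
        return self.buf[outptr]
```

With a delay of 3, feeding in 1, 2, 3, 4, 5 returns 0, 0, 0, 1, 2: each sample comes back out three steps after it went in, and 64 slots leave plenty of headroom over any delay seen in continuous tracking.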
Once the Level 1 perceptual delay is found using the non-excluded data
segments, and WITH a disturbance present, you can proceed to find the
delay in the level 2 system as you've been doing.
The problem with matching the model under conditions of no disturbance
is inherent: you can't evaluate the parameters of a control system
without a known disturbance. However, what you can do is to use the
parameters found with the disturbance active to predict the same
person's behavior with the disturbance missing (or, of course, with some
other pattern of disturbances, zero disturbance being one of the
possibilities). This should be done after sufficient practice to
minimize learning effects.
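The procedure can be sketched as follows, with an invented one-level integrating control loop standing in for the fitted model (the function name, the k value, and the zero reference are all illustrative, not from any actual fit):

```python
def run_model(k, disturbance, dt=0.01):
    """Simulate a one-level control loop: the output integrates the
    error, and the controlled cursor is output plus disturbance."""
    output, trace = 0.0, []
    for d in disturbance:
        cursor = output + d          # controlled variable
        error = 0.0 - cursor         # reference level is zero
        output += k * error * dt     # integrating output function
        trace.append(cursor)
    return trace

k_fitted = 50.0                      # pretend this came from the
                                     # disturbance-present fit
pred_no_dist = run_model(k_fitted, [0.0] * 100)   # zero disturbance
pred_new_dist = run_model(k_fitted, [1.0] * 100)  # a new pattern
```

The point is that nothing is re-fitted: the same k drives the prediction under whatever disturbance pattern you choose, including none at all, and the prediction can then be compared with the participant's actual run.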
In your second post you describe running the model in parallel with the
person's data from a run. We've done this a lot, and have experienced
the same fascination in seeing the model behaving in synchrony with the
participant. That's how we know we're having fun now.
The next step, of course, is to change the disturbance pattern, or set
the disturbance to zero, and run a _prediction_ of the participant's
behavior using the parameters determined with a disturbance present.
Then do the new run with the participant using the same disturbance or
lack thereof, and play back the model's prediction along with the real
data. You can see how well the prediction works from ordinary numerical
analysis, of course, but the playback method is a hell of a lot more fun.
Your problem with going back and forth between levels to optimize both
delays is that the convergence is not guaranteed by that method -- in
any version of Newton's method (which this is) it's possible to get
endless loops. One solution is to set k and kc to some small number and,
given k, compute kc. Then change kc from the initial small number only
_part of the way_ (like half) to the computed new number before plugging
that in to compute k. Instinct tells me to do this part-way change only
with one number, but it might work if you do it with both. Of course a
half-way change may be too large...
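Here is one way the part-way update might look in Python. The two one-parameter fits are stood in for by a toy coupled pair (the function names and coefficients are invented); the only point being illustrated is the damping, i.e. moving each parameter only a fraction of the way toward its newly computed value.

```python
def best_kc_given(k):
    """Stand-in for the one-parameter fit of kc with k held fixed."""
    return 2.0 - 0.9 * k

def best_k_given(kc):
    """Stand-in for the one-parameter fit of k with kc held fixed."""
    return 2.0 - 0.9 * kc

def fit(step=0.5, tol=1e-6, max_iter=200):
    k = kc = 0.01                    # start both at a small number
    for _ in range(max_iter):
        # move kc only part of the way toward its computed optimum
        kc_new = best_kc_given(k)
        kc += step * (kc_new - kc)
        # then do the same for k, using the updated kc
        k_new = best_k_given(kc)
        k += step * (k_new - k)
        if abs(kc_new - kc) < tol and abs(k_new - k) < tol:
            break
    return k, kc
```

With step=1.0 this is plain alternation; shrinking step trades speed for stability, which is exactly the half-way-may-be-too-large worry above.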
It should suffice for our purposes to determine the Level 1 model first,
and then the Level 2 model. The parameters we get will show variations
from test to retest, so k-values to three significant figures are
probably not meaningful. The control model isn't THAT good! One thing we
need to determine is the variation in the derived parameters over a
series of tests with an asymptotically performing participant.
Are you taking any demos to the BAAM meeting? Good luck with your
presentation.
Martin Taylor (950317 Begorrah 11:00)--
If the complementary category were a real phenomenon in perception,
then every time we recognized a category we would also have to
recognize that it is NOT _each other category we are capable of
perceiving_. I don't think that happens.
And neither do I. (Although I leave open the possibility that it
does, as your proposed circuits would allow).
My proposed circuits? To what are you referring? The only category-
recognizing circuit I can recall proposing is a simple "or" of a number
of different inputs from lower levels. How does that allow for a
perceptual signal standing for the negation of every other category?
In a post on the 16th to Bill Leach you said
Here and in a lot of the rest of your posting, you seem to equate
the notion of category with "X is present" versus "X is not
present." That's not the kind of choice involved in what I was
talking about. I was dealing with "X or Y is present--which is it"
extended to many possibilities for "X or Y."
This is a different model of perception from the one I've adopted for
HPCT, which is "one perception, one perceptual function." Your version
is basically the conventional one that treats category perception as a
box receiving multiple inputs and capable of producing multiple outputs
(or different codings of a single output), one for each possible
"recognition" of a category. The question "which one is it?" is based on
the premise that there can be only one "right" answer, and your cross-
connected flip-flop scheme is designed to assure that there will be only
one.
However, the same result can be achieved by letting all category-
recognizers respond in parallel to the degree that the inputs exemplify
each category, and applying the logical processes to the result. If the
rule is adopted at the logic level that there can be only one valid
category signal in some group of signals, then the logic can take care
of suppressing or ignoring all signals but the biggest one. That takes
care of the flip-flop effect. However, that arrangement also permits
other ways of dealing with simultaneous category signals, including
allowing a given experience to be classified in several different ways
at once, a point I was trying to make with my example of the dog that
looks like a pig. We can perceive "dogness" and "pigness" simultaneously
in the same object in the same visual field; the one does not preclude
the other. The only thing that would make us insist that "it" must be
either a dog or a pig but not both is the assumption of Aristotelian
logic that a thing is either A or not-A. Not to mention the naive-
realist assumption about perception itself, namely that for every
distinct perception there must be a distinct "it" to be perceived.
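The arrangement I have in mind can be sketched in a few lines of Python (all the names and numbers here are invented for illustration): every category recognizer produces a graded signal in parallel, and it is a rule applied at the logic level, not wiring at the category level, that decides what becomes of them.

```python
def category_signals(features, prototypes):
    """Each recognizer responds to the degree the input matches it
    (here, a crude overlap score; any graded function would do)."""
    return {name: sum(min(f, p) for f, p in zip(features, proto))
            for name, proto in prototypes.items()}

def winner_take_all(signals):
    """One possible logic-level rule: keep only the biggest signal."""
    best = max(signals, key=signals.get)
    return {best: signals[best]}

def all_above(signals, threshold):
    """Another rule: a thing may belong to several categories at once."""
    return {n: s for n, s in signals.items() if s >= threshold}

prototypes = {"dog": (1.0, 0.8, 0.1), "pig": (0.6, 0.9, 0.9)}
sig = category_signals((0.9, 0.85, 0.5), prototypes)  # a piggish dog
```

Both rules operate on the same parallel category signals: winner_take_all reproduces the flip-flop effect, while all_above lets the dog that looks like a pig be both at once. The choice of rule lives at the logic level, where it can be changed.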
Aristotelian logic sometimes works, but not always. It
doesn't work with a peanut-butter and jelly sandwich, which does not
have to be either peanut butter or jelly. The faces-vases perceptual
switch could work by flip-flops or by logic, but it may also be caused
by the fact that the same visual processing machinery must be used for
each case and can't be used for both at once. Or it may simply be that
the perception, being ambiguous, requires an imagined input that
supplies the missing assumption about which is to be figure and which is
to be ground. A similar effect occurs in size/distance perception, where
the size signal depends on the assumed distance, or vice versa. So there
are lots of possibilities other than the flip-flop circuit, all creating
the same end effect.
Your own examples support my version of category perception:
The sound was a "b" or a "d"; the colour was a "red" or a "green"
or a "blue" or a "yellow;" the candidate was "democratic" or
"authoritarian" or "populist." In such cases, the colour could be
a little bit red with some yellow, but if one has to decide whether
the light is red or yellow, then if it is yellow, it is NOT red.
The critical phrase is "if one has to decide." The decision (to pick the
last example) rests on the premise that a light can't be both yellow and
red. Given that premise (unspoken but present in your argument) it
follows logically (not categorically) that the light must be called one
or the other. However, people continually violate this rule: they say
that the light is a yellowish red, or a reddish yellow. They recognize
the presence of _both_ categories, and find a way to refer to them both.
Only in the world of pure logic does the light have to be yellow OR red
and never both: yellow XOR red.
Behind the Aristotelian premise is a sort of theory of essences, an idea
that the world itself falls into logical and mutually-exclusive
categories. If a particular object IS a dog, then it IS NOT anything
else. Either it possesses the dog-essence or it possesses some other
essence; it cannot both possess the dog-essence and not possess it.
But in PCT there is no dog-essence; there is only perception. And unless
adopted for some reason, there is no general rule that a given
perception of a given element of experience must be classified in one way
only, or that only one classification can exist at a time.
In summary, I think that your flip-flop proposal is collapsing two
levels into one: logic and category perception. Your suggestions about
the various kinds and signs of cross-connections describe what would
happen as a result, but the same things would happen if the category
signals were all present at the same time, and the sorting out was done
at the logical level, in ways depending completely on what rules were
put into effect. It is not necessary for those rules to be hard-wired
into the category level. Why not leave the category level free to
produce as many category signals as there are categorizing input
functions, and allow any rules at all to be applied to the signals?
Or better yet, why not let the organization of the model follow the
observations as they come up, instead of trying to guess once and for
all how the whole thing works?
Bruce Buchanan (950318.21:00 EST)--
Bruce Abbott (950319.1000 EST) --
Bruce Nevin (950317 09:38:17 EST) --
RE: What's in a word?
Hmm. Do I detect a pattern here, Bruce?
The interesting thing about "cybernetics" is that the part of the word
that gave it the flavor of something important was not the "cyber" part,
but the "-etics." As in electronics, physics, economics, esthetics, and
so on. I don't think many people recognized "cyber" as anything in
particular; only the connoisseurs of control theory got that extra
little jolt from learning that the root was the Greek word for
"steersman." We could probably use any crisp and euphonious morpheme
followed by etics or just ics or istics or such endings. Loopics?
Circulonics? Cybonomy? Guidology? Purponics? Purpistics? Purposophy?
Intentics? Resistics? Or maybe one of those long German compounds on the
pattern of "disturbance-resisting-outcome-determining-goal-selecting-...".
But that sort of thing just gives us a generic name for what we do, and
doesn't supply a name for the phenomenon that I'm still calling the X
phenomenon.
So we have engovern, contrain, and something relating to "pilot."
Engovern has its points, but I don't think the "en" will succeed in
wresting the customary meanings away from "govern." Contrain will appeal
to jazz freaks, but it suggests entrainment, which unfortunately is a
phenomenon by which linked periodic or oscillatory processes tend to
come into synchronism with each other, and it has nothing to do with X.
"Pilot" is in the right airspace, but people don't understand how pilots
do X, either. So I'm not sold yet.
Bill Leach (950315.18:39 EST) says
since there is a complete science and engineering discipline that
already use most of our terms properly, it seem to me that it is an
error to change.
I long to agree. Dag Forssell reprinted my observations on why I think
we ought to stick with "control" and insist that control is without
exception the X phenomenon, and show that others who use the term
differently are using it incorrectly or sloppily even in their own
terms. But every time I suggest something like that, the fur rises on
all the backs and I get told that I'm trying to legislate word-usage and
that everybody has a right to use any word any way they want to.
Of course if we make up and carefully define our own word, other people
will start using it wrong anyway and ignore our definitions, with the
same arguments about verbal imperialism, arrogance, and so forth.
People, as Mary observed, just like to groove on the sounds of words,
and they don't pay too much attention to what their inventors were
trying to mean. Look at "positive feedback." It hasn't done us much good
to explain that this term is usually used with exactly the opposite of
its original meaning. Hell, a past president of the American Society for
Cybernetics told me that he taught his cybernetics classes that positive
feedback means "Just keep on doing what you're doing." I have witnesses.
Bruce N. says
However, if it is a new and unfamiliar word, readers and hearers at
least start off with the hint that there might be something new to
learn here, and that they don't already know what the word means.
The problem is that there are lots of very bright people who just HATE
to be told that there's something they don't understand, particularly if
it's related to their own fields of knowledge. Toss a new term at them
and they grab it and start conjecturing about what it could mean,
inventing a plausible story for themselves until they're satisfied that
they knew it all along -- or if not that, that they now have a better
understanding of it than its originators. Then they publish. What's
worse, as soon as they publish their version of the new idea, this acts
as a challenge to their buddies in the same clubs, and the buddies start
improving on the story. Pretty soon they have a whole thing made up, and
start citing each other in a sort of intellectual -- uh, I guess the
term is too crude to use here, although I'll give you a hint: it starts
with "circle-" and has to do with mutual gratification.
Here's the problem. If a person is willing to consider a new term and to
try to understand the process to which it is meant to refer, that person
can hear a definition of "control" and quickly come to see why it is a
good general definition. You won't have to tell this person that this
usage is specific to the way organisms work; that will be obvious. This
person won't start talking about other usages, or about how many systems
he or she has designed without feedback, or all the other ways that
"control" could be used. This person will just nod and say "I see." No
But there are other people who see nothing but a challenge to their own
expertise in these definitions. They don't want to admit that they've
never considered it that way, because that would mean there is something
they don't know. What they want is to tell you that you have it all
wrong, that control doesn't work that way at all. After that they will
tell you about all sorts of other kinds of systems they are expert in
and know lots and lots about, and how they interpret human behavior (if
they do). The last thing they want is to sit quietly and learn
something. For these people, I am convinced, there is no good word for
the X phenomenon that will bring them around. After a while you get the
idea: you can't teach anyone anything that the person doesn't want to
learn.
So the temptation is strong to go along with Bill Leach, say what we
mean by control, and to hell with the "problem." It's not our problem;
we understand each other, or are pretty close.
On the other hand, it's 2315 now, and I got up at 0400, and Mary and I
drove for 7 hours to get home. Maybe I will have more energy for
contemplating the difficulties tomorrow.
Best to all,