[From Bill Powers (921218.1500)]
Dennis Delprato (921218) --
RE: Feedback is too delayed.
Dennis, would you be willing to become a repository for citations
from the literature containing misstatements about feedback
control, PCT, etc.? I really think that an article in a major
psychological journal on this subject would give us a base from
which to launch more introductory articles. If you would be
willing to write such an article, great -- but let's start
collecting the information. I suppose we could include statements
from reviews, although they're hard to cite.
---------------------------------------------------------------
Bruce Nevin (921218.1324) --
That Latin saying developed into a most interesting and relevant
discussion. It is surely true that our most profound problems in
introducing PCT come from those who think they already have a
grasp of what feedback and control are about. You'll remember
that a year or so ago we had a participant on the net who was a
"real control-system engineer." He obviously understood control
systems -- but he absolutely could not accept the statement that
control systems control their inputs! He eventually bade us
farewell, saying in a perfectly friendly way that he just
couldn't go along with this strange way of looking at control
systems, but good luck to us.
One of the indications that PCT is an important new idea is the
way people get enthusiastic about it until it impinges on their
own life's work. They can see how it makes sense in other
people's fields, and even in their own fields when it doesn't
conflict with their own work. But there's the nitty-gritty point
where they have to compare it against ideas they have spent years
developing and justifying, and often defending against attack,
and there progress bogs down. I don't suppose that there's a
single person who now goes under the label of perceptual control
theorist who hasn't run up against this wall. It's not a pleasant
experience, although when they manage to get past the barrier I
think most people would grudgingly admit that the struggle was
worth it. If we could figure out a way to make this transition
easier we would all have an easier time of it. But it's a purely
personal battle that each one has to wage alone.
----------------------------------------------------------------
John Gabriel? Charlynne Clayton? Anyhow (921218) --
AU BOOKMAN, LAWRENCE ALAN.
TI A TWO-TIER MODEL OF SEMANTIC MEMORY FOR TEXT
COMPREHENSION.
Semantic memory consists of two tiers: a relational tier that
represents the underlying structure of our cognitive world
expressed as a set of dependency relationships between
concepts, and an analog semantic feature (ASF) tier that represents the
common or shared knowledge about the concepts in
the relational tier, expressed as a set of statistical
associations.
The concept of a higher-level discrete world that (re-)
represents a lower-level analog world is consistent with my
definitions of levels in the hierarchy; you draw the line at
about the relationship or category level. But confining the
higher levels to "relationships" is too restrictive for me, and I
am not in sympathy with treating the lower levels of perception
as statistical (except in the sense that there is always a
signal-to-noise ratio, pretty high for most perceptions).
AU KNIGHT, KEVIN CRAWFORD.
TI INTEGRATING KNOWLEDGE ACQUISITION AND LANGUAGE
ACQUISITION.
Very large knowledge bases (KB's) constitute an important step
for artificial intelligence and will have significant effects
on the field of natural language processing. This thesis
addresses the problem of effectively acquiring two large bodies
of formalized knowledge: knowledge about the world (a KB), and
knowledge about words (a lexicon).
This treatment of the "knowledge base" leaves out the nonverbal
knowledge base, which to me is indispensable in bringing order
into any verbal knowledge base. If you treat a dictionary as a
kind of knowledge base, and start trying to find out what some
term like "inteligence" means, you end up going in small circles
among a few basic terms, none of which have any meaning unless
you already know the meaning experientially. Enlarging the verbal
knowledge base doesn't help with this problem. You can check the
basic lexicon against a user's "intuitions," meaning experiences,
but you can't get those intuitions into the computer's knowledge
base. The only way to do that would be to give the computer human
senses.
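Just to make the circularity visible, here is a toy sketch in Python
(the entries are invented; a real dictionary is bigger but behaves
the same way):

toy_dictionary = {
    "intelligence": "the capacity for understanding",
    "understanding": "the power of comprehension",
    "comprehension": "the act of understanding; intelligence",
}

def chase(term, depth=6):
    # Follow definitions from word to word. With nothing but verbal
    # entries, the chain never bottoms out; it just circles.
    trail = []
    for _ in range(depth):
        trail.append(term)
        definition = toy_dictionary.get(term, "")
        links = [w.strip(";,.") for w in definition.split()
                 if w.strip(";,.") in toy_dictionary]
        if not links:
            break
        term = links[0]
    return trail

print(chase("intelligence"))
# ['intelligence', 'understanding', 'comprehension',
#  'understanding', 'comprehension', 'understanding']

Nothing in that loop ever touches an experience; adding a million
more entries would only make the circles bigger.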
--------------------------------------------------------------
Bruce Nevin (921218.1350) --
Don't we get something like discrete states in the latch
mechanism for steps in events and sequences? And indeed for
category perceptions and on up?
Yes, something like that. The event level is a sort of bastard
level. I've never found it very useful for explaining the control
of anything, except just to point at events that people seem to
control. This level began life as the sequence level, and then
suffered attrition as first the transition level and then the
level now called sequences were peeled away and relocated. Pure sequence
perception, in which ONLY ordering is important, is relatively
easy to model. Transition is relatively easy to model. But the
idea of a space-time pattern remains too vague for my comfort. I
don't know what to do about this but to wait until someone
extracts yet another more clear level from it, and perhaps leaves
nothing behind at all. Follow the bouncing ball.
Transition, which is basically derivatives, clearly doesn't
require latching. Pure sequence perception, in which only
ordering matters, clearly does require it. I don't know what that
leaves for the "event" level except doom.
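To make the contrast concrete, here is a rough sketch in Python (the
tokens and names are invented; this is an illustration, not a model):

def transition(samples, dt):
    # Transition perception as a derivative: only the difference
    # between successive samples matters, so nothing is latched.
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

class SequencePerceiver:
    # Pure sequence perception: the signal goes to 1.0 only when the
    # expected tokens arrive in the expected order. The index is the
    # latch -- discrete state carried between inputs.
    def __init__(self, expected):
        self.expected = expected
        self.index = 0

    def step(self, token):
        if token == self.expected[self.index]:
            self.index += 1
            if self.index == len(self.expected):
                self.index = 0
                return 1.0  # whole sequence just completed
        else:
            self.index = 1 if token == self.expected[0] else 0
        return 0.0

p = SequencePerceiver(["do", "re", "mi"])
print([p.step(t) for t in ["do", "mi", "do", "re", "mi"]])
# [0.0, 0.0, 0.0, 0.0, 1.0]

The derivative needs no memory beyond the last sample; the sequence
perceiver is nothing but latched state.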
It seems to me that, as the strength of some category
perception grows, it becomes easier (more acceptable?) to fill
in missing category-attribute perceptions by imagination. At
some threshold, or seeming threshold, where imagined
perceptions are integrated with real-time perceptions, it is as
though the exemplar of the category is perceived as fully or
truly present, whereas before there were only unsupported signs
or symptoms that fostered a belief, readiness, or expectation.
Another phenomenon of categories, well known in experimental
psychology, is the hysteresis effect. As a figure changes shape,
say between a rectangle and an ellipse, there is a point where a
person switches from one label to the other. When the change is
carried out in the reverse direction, the switch-point is delayed
so it occurs well into the region where the other label was used
when the change was going the other way. This is explained by
calling it "perseveration." The person who exhibits this
phenomenon has perseverosia. I suppose that curing it would call
for taking an antiperseverant.
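The effect is easy to render in program form -- a crude sketch in
Python, with thresholds invented for illustration:

def make_labeler(low=0.4, high=0.6):
    # 'shape' runs from 0.0 (pure rectangle) to 1.0 (pure ellipse).
    # The switch-point depends on the direction of change.
    state = {"label": "rectangle"}
    def label(shape):
        if state["label"] == "rectangle" and shape > high:
            state["label"] = "ellipse"
        elif state["label"] == "ellipse" and shape < low:
            state["label"] = "rectangle"
        return state["label"]
    return label

up = [round(x * 0.1, 1) for x in range(11)]   # 0.0 through 1.0
labeler = make_labeler()
print([labeler(s) for s in up])               # flips to 'ellipse' past 0.6
print([labeler(s) for s in reversed(up)])     # holds 'ellipse' down to 0.4

Sweeping upward, the label changes past 0.6; sweeping back down, it
hangs on until below 0.4, well inside the region that was still
"rectangle" on the way up.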
If you had to dispose of the "event" label completely and
substitute another name, what would you call it? What seems to be
the central phenomenon in the observation that
speakers and hearers perceive words (morphemes) as (probably
event-level) sequences of discrete tokens, where the types are
sound contrasts established by social convention for their
speech community.
?
--------------------------------------------------------------
Martin Taylor (921218.1530)--
I have a problem with the specific detail of Bill's proposal.
Not with the principle.
So do I. Same problems. That's why I won't seriously propose this
arrangement until such problems are worked out.
The basic idea is that the upper level controls by sending its
output signal into a model, which provides the perceptions that
that level controls. In general, this means that copies of the
output signal will NOT produce the correct real-world results
when also sent to lower-level systems, because the model will be
in some respects wrong. But if the lower-level error signals are
also included in the upper-level perception, they will cause
errors in the upper-level system, changing the outputs of the
upper system until those outputs do have the proper effects.
If now the error information can be used as the basis for slow
changes in the model, eventually the model will converge to a
form such that there are no lower-level error signals, and
controlling the model provides outputs that do produce the wanted
effects in the world.
Well, that's the basic spec for the system; now all we have to do
is find a way to make a simulation work like this. In doing so,
I'm sure that we will find that the spec itself is confused.
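For what it's worth, here is the most stripped-down scalar rendering I
can manage of the spec as stated, in Python. Every number, the
one-variable "world," and the adaptation rule are invented for
illustration; this is not the working version I mention below.

world_gain = 2.0    # actual effect of output on the environment
model_gain = 1.0    # the internal model starts out wrong
ref = 10.0          # upper-level reference
output = 0.0
slow = 0.01         # adaptation rate for the model

for step in range(2000):
    modeled = model_gain * output        # perception supplied by the model
    actual = world_gain * output         # what the copied output really does
    lower_error = actual - modeled       # stands in for lower-level error
    perception = modeled + lower_error   # error info included in perception
    output += 0.05 * (ref - perception)  # upper system corrects its output
    model_gain += slow * lower_error * output  # model slowly converges

print(round(model_gain, 2))           # -> 2.0: model now matches the world
print(round(world_gain * output, 2))  # -> 10.0: real effect matches ref

Including the lower-level error in the perception lets the upper
system control the real consequence even while the model is wrong,
and the slow adaptation drives that error toward zero.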
So don't take any of this very seriously. This is just the germ
of an idea. I have one way of doing it that works, and it doesn't
work the way I just described. Remind me of it some day when
there's a lull and I'll post it; for reference, I call it the
"Artificial Cerebellum" (just to preserve a proper air of
modesty).
-------------------------------------------------------------
Tom Bourbon (921218.1438 CST) --
In the past, both of us wondered how, specifically, Shannon's
ideas, or any of the major concepts from information theory,
would improve any of the quantitative predictions we make with
our simple PCT models.
This is the right question about information theory -- not "does
it apply?" but "what does it add?" The basic problem I see is
that information theory would apply equally well to an S-R model
or a plan-then-execute cognitive model -- there's nothing unique
about control theory as a place to apply it. Information theory
says nothing about closed loops or their properties OTHER THAN
what it has to say about information-carrying capacity of the
various signal paths. All working models have signal paths of
some sort, but only the control-system model puts those paths
together in a way that results in control. And in a control
model, the signals in the various paths normally carry far less
information than the theoretical limits allow. If an auditory
channel theoretically could represent a maximum frequency of 20
kHz, according to information theory, what do the theoretical
limits matter when the system is controlling for a voice tone of
middle C? The most you would get from information theory would be
a prediction of the minimum amount of error to be expected. But
information theory doesn't tell you there will BE an error
signal.
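One way to put a number on that point, in Python (all values
invented): compare how much the disturbance varies with how much the
controlled perception varies.

import random

random.seed(1)
ref, output, d = 0.0, 0.0, 0.0
dist, perc = [], []
for _ in range(10000):
    d = 0.99 * d + random.gauss(0.0, 1.0)  # slowly drifting disturbance
    p = output + d                         # perception = output + disturbance
    output += 0.5 * (ref - p)              # integrating output function
    dist.append(d)
    perc.append(p)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# For roughly Gaussian signals, entropy grows with log(variance), so
# this ratio is a crude proxy for how much less the controlled signal
# "says" than the channel would allow it to say.
print(var(dist) / var(perc))  # a large ratio: control, not capacity

The channel is perfectly able to carry everything the disturbance
does; the working loop simply has no use for most of that capacity.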
I think that the problem here is that information theory can't
distinguish between a model that works and one that doesn't, or
between a model of a real behaving system and a model of a
totally imaginary system. It's like the law of
forces; all the forces balance in every bridge: bridges that
carry traffic and bridges that collapse. Conservation of energy
and momentum, in Newtonian mechanics, applies equally well to the
spacecraft that gets to Mars and the one that misses it by a
million miles. In all these cases, something has to be added to
get a workable system. And I don't think that this something
comes from the abstract principles involved, however convincingly
one can prove that they apply.
---------------------------------------------------------------
Best to all,
Bill P.