[From Bill Powers (950227.0740 MST)]
Bruce Buchanan (950226.0030 EST)--
> Scientists speak of models where literary critics speak of
> metaphors.
It's not as simple as that. The term "model" can be taken in even more
ways than "control" can. A model in the sense we use it in PCT is
ideally a simulation of the real system: each variable and each function
in the model is supposed to be in one-to-one correspondence with a
variable or function in the real system. In other words, we would expect
that some day a dissection of the real system would reveal components
and connections that do exactly the same things that we draw in our
models and simulate in our computers.
Realistically, of course, we would also expect to find that a dissection
would prove us wrong in many ways -- but the point is that it _could_
prove us wrong. The basic point here is not the correctness of the
model, but the idea of modeling that underlies it: a commitment to a
specific proposal about how the system works inside. The point is not
what so many people seem to think is modeling: to state the model so
generally that no matter what is eventually found inside the system, you
can interpret what is found so as to make your model look right. That is
quite a different game. That's the game of trying to make right
statements before the evidence is in and more importantly before anybody
else does. You win that game by being able to say, when the results are
finally known, "Ahah, I said _exactly the same thing_ 20 years ago."
This is a way of making yourself look brilliant without actually having
to know much of anything.
> As I see it, the distinctions between models and metaphors are not
> absolute. In each case our perceptions fasten on partial
> resemblances to other things. We note similarities and differences,
> elaborate descriptions in terms of necessary and sufficient
> conditions, and try to categorize experience.
This is specifically NOT what PCT is. PCT is not a machine analogy. It
is not based on the similarity between human behavior and the behavior
of artificial control systems. While the basic principles of PCT were
first recognized by control system engineers, they could have been
discovered in any other field, including behavioral science, had people
in those fields not already thought they understood how behavior works.
What is behind PCT is _systems analysis_, in which a system is
represented in terms of observable variables and observable functions
relating them, and then analyzed mathematically to see all the
interactions among the variables that follow from the observed
organization. A complete analysis of simple control behaviors can be
developed from scratch without reference to similarities to any other
kind of system or behavior.
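To show what that from-scratch analysis looks like in its simplest static form, here is a sketch using generic symbols (r for the reference, p for the perceived input quantity, e for error, G for output gain, f for the environmental feedback factor, d for disturbance); the symbols are conventional illustrations, not notation taken from this post:

```latex
e = r - p           % comparator: subtraction only
o = G e             % output function (static approximation)
p = f o + d         % environment: feedback effect of output plus disturbance
% Eliminating e and o and solving for the steady state:
p = \frac{fG\,r + d}{1 + fG} \approx r \qquad \text{for } fG \gg 1
```

With a large loop gain fG, the input quantity tracks the reference and the disturbance's effect is attenuated by the factor 1/(1 + fG); that algebra, not any analogy, is what "control" refers to here.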
> It seems to me that basic to neural function and a control system
> model is the logic of a feedback of information (from a comparator)
> which occurs _after_ the first cycle of action.
In a control loop there is no "first cycle" of action. All components in
a control loop are active simultaneously -- not nearly simultaneously,
but literally simultaneously. The entire loop is normally turned on and
active all of the time, whether or not any discernible disturbances or
other events are happening. There are signals present all around the
loop all of the time, as well as outputs into the environment and inputs
from it. Speaking poetically, if you could put your ear up against it,
you could hear it humming. If something changes in the environment, what
we call "a disturbance," the whole loop responds with a change in the
signal levels entering and leaving all the functions, and the system-
environment relationships come to a new equilibrium. The same happens if
the current setting of the reference signal changes to a new setting, if
the sensitivity of the output apparatus gradually shifts, or if the
environmental connection between output and input shows a drift in its
parameters.
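That simultaneously-active, continuously-running character shows up even in the crudest discrete simulation. The sketch below is mine, not code from this post, and every gain and value in it is an arbitrary illustration:

```python
# Minimal integrating control loop: every variable is updated on every
# tick, whether or not a disturbance is present. The gains and the step
# disturbance are arbitrary illustrative values.

def run_loop(steps, reference, disturbance):
    p = 0.0   # perceptual signal (input quantity)
    o = 0.0   # output quantity
    history = []
    for t in range(steps):
        d = disturbance(t)
        e = reference - p     # comparator: pure subtraction
        o += 0.1 * e          # integrating output function
        p = o + d             # environment link: input = output + disturbance
        history.append(p)
    return history

# No disturbance for 100 ticks, then a step disturbance of -5:
trace = run_loop(200, reference=10.0,
                 disturbance=lambda t: 0.0 if t < 100 else -5.0)
print(round(trace[99], 2))    # near 10: loop settled at the reference
print(round(trace[199], 2))   # near 10 again: output changed to oppose d
```

When the disturbance arrives at tick 100, the perceived quantity dips and the whole loop re-equilibrates: the output rises by just enough to cancel the disturbance, and p returns to the neighborhood of the reference without any "first cycle" being distinguishable.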
> Well, as you say, effects in nature are never really retroactive,
> i.e. they cannot act backwards in time. Nevertheless the control
> system is capturing an event and relating it very specifically to
> other event(s) later in time, or is doing so dynamically over time.
> (I am aware my description may be less than perfect but I hope it
> may be sufficient to convey the key notions in principle, which I
> think are such as to support specific elaboration.)
Unless there is a specific model behind what you are saying, no, this is
not sufficient to convey key notions in principle. What, specifically,
is the operation you call "capturing an event?" You say the system
"relates" the event "very specifically" to other events later in time,
but saying "very specifically" is to avoid being specific, and saying it
"relates" the event is to avoid saying what the relationship is.
Naturally all events are related to other events "dynamically over
time," but the whole question is, WHAT ARE the dynamical relationships
over time? To say that THERE ARE dynamical relationships is to say
nothing we couldn't say about any system. The whole point of modeling as
done in PCT is to stick your neck out and propose specific relationships
among specific variables with specific dynamical characteristics.
> Now, as I understand it, without this specific mechanism of the
> control loop no basis for memory could exist - however the data may
> be later manipulated in terms of neural impulses and structural
> alterations in distant systems.
As I understand it, the specific mechanism of the control loop has
nothing to do with memory, although it may use (specifically, as a
reference signal) information derived (somehow) from memory. When you
refer to "this specific mechanism of the control loop" do you KNOW what
this specific mechanism is? What its organization is? What makes it act
as it does? How the signals and functions are related to each other? I
would guess not, because there is nothing in these signals and functions
that suggests any "memory" process to me.
> So, to me, this seems to be not a metaphor but rather a model of
> the relationships, in their simplest form, which alone can account
> for memory, and by logical extension the capacities for sustained
> reflective thought which are involved in imagined and projected
> futures, as well as in composing messages on the internet.
> (Obviously I am trying to compress a lot here ...
You would be better off in this discussion to expand rather than
compress. I don't see anything in your words that can "account" for
memory except your assertion that something does this.
> In part I also am asking for a thoughtful reader to fill in enough
> blanks to ascertain any plausibility.
That's your job, not the reader's. If you can't make a plausible case
yourself, how do you know what interpretation the reader is going to put
on your words? While we can't ever reach absolute agreement, we can get
a lot closer than this.
> Moreover, as I see it, without the comparator function, which
> involves an evaluation of some kind, no _purposive_ action could
> occur - and this almost by definition - so that no hierarchical
> control systems or questions of goals or values could arise.
The same can be said about any component of a control system, including
the external part of the loop. Why fasten on one function when there are
others that are just as critical? What you say is true, but trivially
true; it's like saying that without axles, a car can't work, so axles
are obviously the critical component of a car.
And a comparator does not do an "evaluation of some kind." In the PCT
model it _subtracts_ one signal from another, and that is all.
"Evaluation" can mean all kinds of processes, including but not limited
to what a comparator does.
> My impression is that there is no really scientific study of values
> because no objective conceptual foundation has been identified.
> Perhaps there may be clues to what is required, which involve
> specific causal mechanisms, i.e. are not merely poetic metaphors,
> in the basic control system model.
If your agenda in learning about PCT is to discover some scientific and
objective foundations for specific human values, I think you're doomed
to disappointment. PCT tells us what values ARE, and it can say
something about relationships among values (what happens, for example,
when values at the same level conflict), but it will never tell us WHAT
VALUES WE SHOULD ADOPT. You can construct proposed systems of values
that might be feasible in terms of avoiding conflict and other PCT-type
considerations, but any system of values that doesn't violate basic
principles is as much grist for the PCT mill as any other. Is there some
specific set of values for which you are seeking validation?
>> There is no such thing as literal foresight; there can only be
>> present-time estimations, forecasts, or imaginings about what has
>> not yet happened. So this, too, must be a metaphor.
> You perceived or assumed the meaning to be metaphorical. I intended
> its referents to be scientific, with the qualifications that
> entails. (I am not sure why you assumed otherwise.)
> What I meant by the "neurophysiology of foresight" are the brain
> mechanisms involved in anticipatory imaginings, specific mechanisms
> studied among other ways by their absence in brain-damaged
> patients (cf. Damasio, Descartes' Error).
But the mechanisms involved in anticipatory imaginings are not the
anticipatory imaginings themselves; those are the _products_ of the
alluded-to mechanisms. We do not study these mechanisms by studying the
absence of their products; that absence only tells us to look for a
specific mechanism that is no longer working. The specific mechanism of
imagination, as proposed in PCT, is a re-routing of the output of a
system at a given level back to its own input rather than the output
serving as a reference signal for lower control systems. "Anticipatory
imagining" is not one phenomenon, but two: imagining, and anticipating.
Either can happen without the other. I can imagine an aardvark wearing a
striped suit without anticipating that I will see one, or anticipate
where an approaching baseball will land without imagining its landing.
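The proposed re-routing can be put schematically. Everything in this sketch (the function name, the numbers, the bare two-way switch) is my illustration of the switching idea, not an implementation from this post:

```python
# Toy sketch of the PCT "imagination connection": a higher system's
# output either (a) goes down as a reference signal for a lower system,
# or (b) is switched back into the higher system's own perceptual input.
# All names and values are illustrative only.

def higher_system(output_signal, lower_perception, imagining):
    if imagining:
        # imagination mode: output re-routed to the system's own input;
        # the lower loop receives no reference signal
        perception = output_signal
        lower_reference = None
    else:
        # normal mode: output acts as the lower system's reference
        perception = lower_perception
        lower_reference = output_signal
    return perception, lower_reference

p, ref = higher_system(3.0, lower_perception=1.5, imagining=True)
print(p, ref)    # the system "perceives" its own output; lower loop idle
p, ref = higher_system(3.0, lower_perception=1.5, imagining=False)
print(p, ref)    # output passed down as a reference for the lower loop
```

The point of the switch is that imagining and acting use the same higher-level machinery; only the routing of the output differs, which is why either can occur without the other.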
In fact, Damasio's observations suggest that imagination is still
working in these patients, because they can rationally discuss
_hypothetical_ ethical situations. What seems to be missing is the
connection to the lower-level reference inputs, because these patients
don't seem to _live_ what they can discuss hypothetically.
----------------------------------------------------------------------
Best,
Bill P.