Martin's theories, Understanding PCT

[From Rick Marken (961017.1500)]

Martin Taylor (961017 14:50) --

Kepler did not provide a mechanism; I did, in some detail, repeated on
several occasions over the years, and Rick knows it. The mechanism is a
consequence of reorganization in interacting control systems.

Reorganization IS a mechanism; it is the mechanism that controls the quality
of controlling done by control systems. Reorganization is a control
mechanism. Do you propose another mechanism -- one that "is a consequence of
reorganization" -- to explain the behavior of interacting control systems?
If so, that's great.

I was critical of your approach to theory because it seemed to me that you
were using what you said was an _analytical tool_ ("attractor theory") as a
_mechanism_ to explain aspects of social interaction. For example, in an
earlier post you said:

Attractor theory, if you want to call it that (I prefer "dynamics") argues
that these three are the only possibilities for interacting systems.

It sounded like you were making a prediction about a possible observation
(what might be observed in interacting systems) based on a method of
describing (attractor theory) those observations. It sounded to me like you
were saying something as peculiar as "Laplace theory argues that there are
only three possibilities for the shape of the orbit of Mars". (Of course,
Laplace "theory" could describe the orbit, but you'd need Newton's theory to
predict it.)

I don't understand how attractor theory can "argue" or predict anything.
Reorganization theory does predict behavior (such as the behavior of subjects
in the "E. coli" experiment). This behavior can be _described_ (possibly in
very interesting and suggestive ways) by attractor (or Laplace) theory; but
it certainly can't be predicted by those theories.

You seem to have difficulty making it clear (to me, anyway) that the
"theories" you like (attractor, information, etc.) are analytic devices (like
algebra and calculus) rather than theoretical mechanisms. I'd be interested
in hearing whether anyone else has this problem with your posts. Maybe it's
just me.

Me:

you have to be able to perceive that a particular configuration is a
refrigerator rather than a stove or window.

Bill Benzon (961017) --

And just how do you do that? ...How do we recognize it?

That is a question of how the perceptual functions work. I presume perceptual
functions work pretty much the way Bill Powers said they work in B:CP.
Physical signals are converted by sensory receptors into intensity
perceptions; intensity perceptions are converted by first order perceptual
functions into sensation perceptions; sensation and intensity perceptions are
converted by second order perceptual functions into configuration
perceptions, etc. Near the top of this hierarchy, category level perceptual
functions convert lower level perceptions into class perceptions --
perceptual signals indicating the degree to which the input is in the class
"refrigerator", "stove", etc.

I don't have any idea how the perceptual functions actually work; I presume
that the perceptual functions themselves are neural networks that take lower
level perceptual signals as inputs and produce a single perceptual signal as
an output. The magnitude of this output signal represents the degree to which
a particular perceptual variable is represented in the states of the input
perceptual signals. If the perceptual function computes "refrigerator", the
magnitude of the output indicates the degree to which the input signals are
a "refrigerator".

I'm standing still, not moving. I'm looking right at the refrigerator.
What motoric force am I exerting on that refrigerator?

None. Which just means you ain't gonna get the door open;-)

there is experimental evidence that we can recognize many things with a
single glance. Where is the motoric output in single-glance recognition?

You don't need motoric output to _perceive_. You need motoric output to
_control_. We can obviously perceive without controlling. (We can also
control without acting, by the way; this is called "imagination" or
"thinking". We are not really controlling. But we creating the (imaginary)
perceptions we want on demand. This is all in B:CP by the way).

PCT has no accounts of visual recognition (or aural recognition, or haptic
recognition, or olfactory recognition or taste recognition). That means in
some sense it is a blind theory.

I really think you should re-read B:CP, especially the chapters describing
the levels. Bill makes some pretty clear proposals regarding the nature of
the perceptual functions that "recognize" ("compute" is a better word
because "recognize" implies that there is something out there to be
"recognized") sensations and sequences. I got my PhD in perceptual
psychology; I think PCT provides a very satisfying conceptual model of the
perceptual "recognition" functions, one that is based on the best evidence we
have from neurophysiology, psychophysics and machine recognition systems.

you have no account of how the visual system knows that the situation has
been obtained.

I just can't understand where you get this impression. Have you really read
B:CP?

And if you are interpreting the motor circuits as servo-loops maybe you
could get somewhere by doing the same for the similar visual circuits?

Both motor (efferent) and visual (afferent) neuronal connections are needed
to make a servo circuit. I can't figure out what you are talking about. What
is a "motor circuit"?

Me:

[your statement that] "I want a model where the inputs and outputs are
sensory" [suggests] that your understanding of PCT is _way_ different than
mine.

Ye:

And here I thought I was suggesting a new arena where you (or someone) can
apply PCT. It's one thing for you to object when I tell you something like
an existing PCT account doesn't float. I may think your objection is wrong,
but I understand it. I tend to be like that myself. Who? me wrong? You
gotta' be kidding. But to be antsy over the possibility of going boldly
into new territory, sheesh!

I must not be understanding what you mean by "I want a model where the inputs
and outputs are sensory". This doesn't sound like new territory; it sounds
like nonsense. In PCT, "sensory" means "signals caused by events in the
outside world". Inputs are the cause of sensory signals; outputs (eg. muscle
tensions) are caused by error signals. You seem to be saying that outputs
should also be the cause of sensory signals. In PCT, they are -- indirectly
(muscle tensions cause effects that are sensed). But I don't see how an
output can be sensory in the PCT sense -- i.e., carry signals caused by events
in the outside world. If outputs were sensory then they would be inputs.

I can do a little more to make the idea intelligible. But not right this
minute. Gotta go.

I'll be here;-)

Best

Rick

Rick Marken (961017.1500) sez:

Bill Benzon (961017) --

And just how do you do that? ...How do we recognize it?

That is a question of how the perceptual functions work. I presume perceptual
functions work pretty much the way Bill Powers said they work in B:CP.
Physical signals are converted by sensory receptors into intensity
perceptions; intensity perceptions are converted by first order perceptual
functions into sensation perceptions; sensation and intensity perceptions are
converted by second order perceptual functions into configuration
perceptions, etc.

These are just words, Rick, words. I know the words, I really do. And I
also know that lots and lots of people write computer programs to simulate
how the visual system might work and their problems are not trivial.

Near the top of this hierarchy, category level perceptual
functions convert lower level perceptions into class perceptions --
perceptual signals indicating the degree to which the input is in the class
"refrigerator", "stove", etc.

Again, "category level" is a lable on a box (or boxes) in a diagram. What
is the mechanism in that box actually doing? How does it determine the
degree to which something is in a class? And, when you use the word
"class" are you using it in a technical set-theoretic sense is just in an
ordinary language sense? I ask this question because there is considerable
disbute (and some experimental evidence) on whether on not it is reasonable
to talk of the mind as positing classes in the set-theoretical sense.

Now, this category level is more recent than B:CP and I haven't read about
it, but believe me, I have thought a great deal about "categories" and how
they relate to a servostack. I've got some questions: Is this category
level connected only to the level immediately below (and the level above),
or do its connections jump levels so that it is, e.g., directly connected
to sensations, configurations, etc.? If so, why? That is, why "violate"
what had been a basic feature of HPCT's initial architecture?

Note that when Hays and I worked on this long ago we decided that, to deal
with language semantics, we needed a degree (the technical term we chose)
which was orthogonal to the stack and directly connected to each order
except the intensity order.

You don't need motoric output to _perceive_. You need motoric output to
_control_. We can obviously perceive without controlling. (We can also
control without acting, by the way; this is called "imagination" or
"thinking". We are not really controlling. But we creating the (imaginary)
perceptions we want on demand. This is all in B:CP by the way).

Wouldn't you know it, I'm about to tell you that I'm aware of imagination
and thinking from B:CP. And to get it going B:CP posits some switch-gear
between the stack and the memory units which don't look like anything in
the nervous system. That's OK for abstract modeling purposes. But when it
comes time to nail this stuff down to the nervous system you gotta' go back
to the drawing board.

And, my current line on perception, whether visual, auditory, tactile,
whatever, is that the physical mechanism which is actually doing this
stuff, the brain, has neurons running from the periphery to the center, and
neurons running from the center to the periphery. What are those
center-to-periphery neurons doing in a perceptual system? This is where
folks start talking of filters and feedforward. I think they should be
talking about reference signals and output functions.

Yes. I know what I just said. REFERENCE SIGNALS and OUTPUT FUNCTIONS
within a purely sensory system. The process by which (visual, auditory,
tactile) sensations, configurations, etc. are constructed may not be a
matter of just operating on the input and then sending output up to the
next level. Signals are going "downstream" as well.

I know this is not orthodox HPCT. And I am not saying it because I
misinterpreted B:CP. I'm saying it because the neuroanatomy forces me to
think about what those downward signals are doing in a purely sensory
process.

I really think you should re-read B:CP, especially the chapters describing
the levels. Bill makes some pretty clear proposals regarding the nature of
the perceptual functions that "recognize" ("compute" is a better word
because "recognize" implies that there is something out there to be
"recognized") sensations and sequences.

But "pretty clear" isn't clear enough for Bill to file patents and get rich
in the artificial vision business. Now I don't pretend to have anything that
meets that standard either, but I'm not inclined to be satisfied with a
model that isn't at least within shouting distance of that standard.

I got my PhD in perceptual
psychology; I think PCT provides a very satisfying conceptual model of the
perceptual "recognition" functions, one that is based on the best evidence we
have from neurophysiology, psychophysics and machine recognition systems.

I got my PhD in literary theory so I'm not qualified to say anything about
the nervous system, perception, etc.

Both motor (efferent) and visual (afferent) neuronal connections are needed
to make a servo circuit.

My suggestion about servos entirely within the visual (or auditory) system
may be flat-out wrong. But that is the suggestion I'm making. I'm not
confusing a motor signal with a visual signal or any such thing.

I must not be understanding what you mean by "I want a model where the inputs
and outputs are sensory".

Right, you don't understand.

This doesn't sound like new territory; it sounds
like nonsense.

So you think I'm talking nonsense. From my point of view that means you
can't help me. That's neither here nor there.

In PCT, "sensory" means "signals caused by events in the
outside world". Inputs are the cause of sensory signals; outputs (eg. muscle
tensions) are caused by error signals. You seem to be saying that outputs
should also be the cause of sensory signals.

Surely you know that people undergoing sensory deprivation will start
hallucinating after a while. Why? What's the source of this "outward
pressure" within the sensory systems? Why do these systems insist on
inventing perceptions of things which aren't there? Is this just
imagination (in the B:CP sense) at work? But imagination in that sense is
under the imaginer's control. These hallucinations are not.

I don't have a good answer to these questions. But I have a sense that the
sensory systems are dynamic physical systems which are stable over long
periods only when they have external inputs (photons in the retina, air
molecules colliding with the eardrum) from which they are actively
constructing perceptions. When there is no such input, those downward
connections start going haywire.

I'll be here;-)

Part of a line from a great Beatles tune.

best,

bill b


********************************************************
William L. Benzon 518.272.4733
161 2nd Street bbenzon@global2000.net
Troy, NY 12180 http://www.newsavanna.com/wlb/
USA
********************************************************
What color would you be if you didn't know what you was?
That's what color I am.
********************************************************

[Martin Taylor 961018 12:00]

Rick Marken (961017.1500)

Reorganization IS a mechanism; it is the mechanism that controls the quality
of controlling done by control systems. Reorganization is a control
mechanism. Do you propose another mechanism -- one that "is a consequence of
reorganization" -- to explain the behavior of interacting control systems?
If so, that's great.

Oh, ye of little memory. Is it so long since the last restatement of the
mechanism (three days)? Do you not remember Bill P's arguments for why
control systems at one level in a hierarchy come to have nearly orthogonal
perceptual input functions?

No, there's no _new_ mechanism. There's only the old one, the same one
Bill used so long ago, but applied to control hierarchies that interact
through the outer environment (the "real world") rather than to ECUs within
a level of one hierarchy.

To assist your memory, here it is yet again. Remember that ECUs
within a level know nothing about one another, and if error levels are
persistently high or increasing, reorganization is more likely than if
error levels are low or decreasing. If two ECUs are controlling non-orthogonal
perceptions, the actions of one are likely to affect the other's controlled
CEV, acting as a disturbance. That means higher error levels than if the two
ECUs' actions don't disturb each other, and faster reorganization than will
occur if the two ECUs don't disturb one another. This means that the
two ECUs will "learn" to avoid disturbing one another. And that applies
to any two ECUs, whether they are within a level of one person or are
within two different people.
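
For the concrete-example lovers, here is the above as a toy Python
simulation (every constant is an arbitrary illustration, and the
reorganization rule is the crude E. coli one: tumble when error has grown).
Two ECUs share a two-dimensional environment; each perceives the projection
of the environment onto its own perceptual vector and acts along that
vector, so non-orthogonal vectors disturb each other. In a typical run the
mean error falls and the two vectors drift toward orthogonality:

    import math, random

    random.seed(1)

    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)

    v = [unit((1.0, 0.2)), unit((1.0, -0.2))]  # nearly parallel at first
    x = [0.0, 0.0]                             # shared environment
    gain, window = 0.1, 200
    prev_err, win_err = [float("inf")] * 2, [0.0] * 2

    for t in range(1, 40001):
        # wandering references stand in for an ever-changing world
        r = [math.sin(t / 150.0), math.sin(t / 190.0 + 1.0)]
        e = []
        for i in (0, 1):
            p = v[i][0] * x[0] + v[i][1] * x[1]
            e.append(r[i] - p)
            win_err[i] += e[i] ** 2
        for i in (0, 1):   # each ECU acts along its own vector
            x[0] += gain * e[i] * v[i][0]
            x[1] += gain * e[i] * v[i][1]
        if t % window == 0:
            for i in (0, 1):
                # E. coli reorganization: if error grew, "tumble" the
                # perceptual input function in a random direction
                if win_err[i] >= prev_err[i]:
                    v[i] = unit((v[i][0] + random.gauss(0.0, 0.3),
                                 v[i][1] + random.gauss(0.0, 0.3)))
                prev_err[i], win_err[i] = win_err[i], 0.0
            if t % (window * 40) == 0:
                cos = abs(v[0][0] * v[1][0] + v[0][1] * v[1][1])
                err = (prev_err[0] + prev_err[1]) / (2 * window)
                print(f"t={t:6d}  mean sq error={err:.4f}"
                      f"  |cos angle|={cos:.3f}")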

ECUs within a person organize themselves into a set of conventions, which
would be orthogonal if the external environment provided an undifferentiated
set of environmental feedback functions. Of course it doesn't, and so there
are some "conventions" more common than others. It is more common to have
low-level perceptions based exclusively on light, or exclusively on sound,
than it is to have low-level perceptions based on a mixture of light and
sound.

When the ECUs in question belong to different people, their interactions
through the real world likewise may result in mutual disturbance. If
similar interactions are often repeated, the ECUs will tend to reorganize
faster than they would if there were no mutual disturbance. Similar
interactions can occur not only between person A and person B, but also
between person A and persons C, D, E..., if B, C, D, E are similarly
organized (as, for example, if they observe similar social conventions).
If that is the case, person A will tend to reorganize so as to minimize
the mutual disturbance between any of his/her ECUs and those of "people
like B, C, D, E...". In other words, A will tend to take on what an outside
observer would call an acceptable social role, within the society defined
by having B, C, D, E... as its members.

Now, if B, C, D, E... are all different, there may not be any acceptable
social roles for A. Interactions will tend to be mutually disturbing, and
A will be likely to undergo rapid reorganization, no matter who he/she
interacts with. And so will B, C, D, E... At some point, either the
disturbances will overwhelm one or another (they die), or reorganization
reduces the level of error when some pair or small group interact. Social
conventions begin to be defined (by which I do NOT mean that they are
linguistically described, or in any way prescribed). Eventually, more
people reorganize so that they can interact with little mutual disturbance
with members of the small group, and the group itself continues reorganizing
so that the errors in each individual ECU in each person tend to be reduced.
The social conventions become confirmed, and social roles begin to be
developed. (It is not true that if A and B both observe the same social
conventions, they behave the same.) Conflict may be reduced if A tells B
what to do, provided that B takes on a reference to perceive him/herself
as pleasing A (perhaps because A feeds or pays B, or refrains from beating
B). A takes on a master role, B a servant. But they have only reorganized
to reduce the mutual disturbances each might cause to the other by their
actions. It is an outside observer who describes their different social roles.

Is this a new mechanism? Or is it what must happen if we believe reorganization
works as Powers suggests it may?

Does it describe how social conventions are attractors of the dynamics of
interacting reorganizing systems?


---------------

Attractor theory, if you want to call it that (I prefer "dynamics") argues
that these three are the only possibilities for interacting systems.

It sounded like you were making a prediction about a possible observation
(what might be observed in interacting systems) based on a method of
describing (attractor theory) those observations. It sounded to me like you
were saying something as peculiar as "Laplace theory argues that there are
only three possibilities for the shape of the orbit of Mars".

It's quite a bit less peculiar than that. In fact, I am describing the set
of possible observations for a set of interacting systems left undisturbed
to interact with each other for infinite time. Observations of real systems
will be more complex, since in most cases they are neither undisturbed nor
left alone long enough to reach the attractor.

As an exercise--I don't know who invented it, but it is mentioned in
Gödel, Escher, Bach--try finding the attractors for the sentence:

This sentence has X1 A's, X2 B's, ..., X26 Z's.

under the operation that you start with some random numbers for X1, X2...
written out as words (e.g. ...has thirteen A's, three B's,...) and each
time you count (taking uppercase and lowercase as the same), you substitute
the number you counted into the Xn for each letter.

Quite quickly you either reach a sentence that is true (a point attractor)
or a sequence of sentences that repeat after N recounts, where N might
be as small as 2 or very large (a cyclic attractor). The sentence has a
lot of attractors, and you might be able to find a few of them with a
short Hypercard program. Or read GEB.
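
If HyperCard isn't handy, a short Python sketch will do the same job (the
number-spelling helper is my own quick hack, good up to the hundreds, which
is plenty here). It starts from random counts and recounts until it meets a
state it has seen before:

    import random

    UNITS = ["zero", "one", "two", "three", "four", "five", "six",
             "seven", "eight", "nine", "ten", "eleven", "twelve",
             "thirteen", "fourteen", "fifteen", "sixteen", "seventeen",
             "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
            "seventy", "eighty", "ninety"]

    def spell(n):
        # spell out a number in English words (enough for this exercise)
        if n < 20:
            return UNITS[n]
        if n < 100:
            t, u = divmod(n, 10)
            return TENS[t] + ("-" + UNITS[u] if u else "")
        h, rest = divmod(n, 100)
        return UNITS[h] + " hundred" + (" " + spell(rest) if rest else "")

    LETTERS = "abcdefghijklmnopqrstuvwxyz"

    def sentence(counts):
        body = ", ".join(spell(counts[c]) + " " + c.upper() + "'s"
                         for c in LETTERS)
        return "This sentence has " + body + "."

    def recount(counts):
        s = sentence(counts).lower()
        return {c: s.count(c) for c in LETTERS}

    random.seed(0)
    counts = {c: random.randint(1, 30) for c in LETTERS}
    seen = {}
    for step in range(100000):
        key = tuple(counts[c] for c in LETTERS)
        if key in seen:
            period = step - seen[key]
            if period == 1:
                print("point attractor (a true sentence) at step",
                      seen[key])
            else:
                print("cyclic attractor, period", period,
                      "entered by step", seen[key])
            break
        seen[key] = step
        counts = recount(counts)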

I don't understand how attractor theory can "argue" or predict anything.

No, I've noticed over the years that you like concrete examples and images
much more than you like abstractions (Benzon calls these different "ranks"
of thought, but I'm not sure whether I agree).

The process is rather like that of Georg Cantor proving that there are
different kinds of infinity. In the case of attractors, there are limits
on the possible dynamics, some possible and some impossible in real
physical systems. But the physicality of the system really is unimportant
to the mathematics of the dynamics. It's not a bit like Kepler asserting
that the "real" planets move in orbits whose sizes are determined by the
Platonic solids. It is more like Plato (or whoever) asserting that there
are 5 (or however many there are) regular solids in 3D, and NO MORE, no
matter how hard sculptors try to make more.

Reorganization theory does predict behavior (such as the behavior of subjects
in the "E. coli" experiment). This behavior can be _described_ (possibly in
very interesting and suggestive ways) by attractor (or Laplace) theory; but
it certainly can't be predicted by those theories.

If one starts by postulating a mechanism that may lead to an attractor,
then one can assert that the attractor will be of one of the three kinds.
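
To make that concrete with the E. coli case itself, here is a toy Python
version of the strategy (constants arbitrary). The only available action is
a tumble to a random new heading, taken whenever the controlled perception
-- distance to the target -- is getting worse. One can assert before
running it, from the structure of the mechanism alone, that the target is a
point attractor; the run merely illustrates it:

    import math, random

    random.seed(2)

    target = (5.0, 5.0)
    pos = [0.0, 0.0]
    heading = random.uniform(0.0, 2.0 * math.pi)
    step = 0.05
    prev = math.hypot(target[0] - pos[0], target[1] - pos[1])

    for t in range(20001):
        pos[0] += step * math.cos(heading)
        pos[1] += step * math.sin(heading)
        dist = math.hypot(target[0] - pos[0], target[1] - pos[1])
        if dist > prev:   # error increasing: tumble to a random heading
            heading = random.uniform(0.0, 2.0 * math.pi)
        prev = dist
        if t % 4000 == 0:
            print(f"t={t:5d}  distance to target={dist:.3f}")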

Martin

Martin Taylor 961018 12:00

No, I've noticed over the years that you like concrete examples and images
much more than you like abstractions (Benzon calls these different "ranks"
of thought, but I'm not sure whether I agree).

Abstractions can be of various ranks. It isn't necessarily the case that an
image or an example is of a lower or different rank from abstract
mathematics. Euclidean geometry is rank 2; Newtonian calculus is
rank 3. The sort of mathematics in which one finds objects like attractor
basins is probably rank 4 math. One can create a graphic image of such a
phase space (and of course, such images abound) and that image can be
viewed by someone who knows nothing of mathematics. It's just a pretty (or
ugly, or indifferent) visual pattern. To interpret the image as a
mathematical object requires the appropriate rank 4 math, or some
concoction of rank 2 or rank 3 math and a dash of physics and appropriate
linguistic hand-waving, as the case may be.

Then we have those curious diagrams Feynman developed to do quantum
mechanics. The diagram is just a diagram. But the interpretation involves
rank 4 physics.

BP likes to implement PCT models in computer programs. Those programs are
rank 4 constructions.


********************************************************
William L. Benzon 518.272.4733
161 2nd Street bbenzon@global2000.net
Troy, NY 12180 http://www.newsavanna.com/wlb/
USA
********************************************************
What color would you be if you didn't know what you was?
That's what color I am.
********************************************************