[Martin Taylor 940623 17:30]
Jeff Vancouver (940622)
I am afraid I did not understand your post regarding the 12 kinds of
reorganization. I think the point I am missing relates to the difference
between reorganization and where attentional processes are focused in the
hierarchy. I thought from reading BCP and ABS that they were the same?
I've e-mailed you separately my posting of 921005 11:30 in which the
12 possible modes of reorganization are tabulated.
Reorganization changes the way the hierarchy works. I originally
thought that Bill P. used "reorganization" to refer only to structural
changes in the network such as which lower-level ECSs connect to which
higher-level ECSs, or the generation of new ECSs. However, in
a posting on the topic, Bill mentioned that he had studied changes in
perceptual input functions, using the e-coli technique. Those changes
can, in principle, be smooth, and need not be random.
Attentional processes, whatever they might be, deal with the situation
of the moment, in a hierarchy at some specific state of reorganization.
Questions:
1) If reorganization is a change in the input function (e.g., creating new
meanings), the output function (e.g., creating new or using different
reference signals for a lower-level loop), or both, can reorganization
also affect gain? Lag?
In principle, reorganization can change any structural aspect of the
network. What aspects are ACTUALLY changed in learning is a matter for
study.
2) Is the process of deciding a course of action creating a new, or using
a different, reference signal?
The tenor of this question suggests the control of output, whereas PCT
is based on the idea that what is controlled is perception. Output can
be controlled only insofar as the output is reconnected to some perceptual
input function, and even then it is only the perception of that output
that is being controlled.
That having been said, "deciding a course of action" can be rephrased as
"deciding reference levels for lower-level perceptions" and when it is
phrased like that, the answer becomes self-evident. There is no "new
or different" reference signal, but the values of existing reference
signals are affected.
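To make that concrete, here is a toy sketch in Python (my own
illustration, with hypothetical names and arbitrary numbers, not
anyone's published model):

    # A higher-level system "decides a course of action" only by setting
    # the values carried by EXISTING reference signals of lower systems.
    def higher_output(error_high, gain=2.0):
        # the higher system's output function (hypothetical gain)
        return gain * error_high

    # That output IS the lower-level reference; no new signal is created,
    # only the value of the existing one changes.
    lower_reference = higher_output(error_high=0.5)   # -> 1.0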
================Alerting experiment=========
Marken [Marken 940616.1400] was correct in asserting I would recognize
your proposed alerting experiment. And of course I am much more
sympathetic with your position than Marken's. But, there are some
questions about what you hope to find and/or think you are doing.
First, you think that subjects will not begin monitoring (and have a
reference signal) for beeps as opposed to off target.
I don't think the subjects will monitor the beeps at all. They might,
and I suppose I have to consider the possibility as a contaminating
influence on interpreting the results of the study.
You say that the
beep is only imaginary until it is heard, therefore subjects will not be
controlling for a different perception. But, once it is heard, or even if
it is in imagination, how do you know it does not become a controlled
perception?
A controlled perception must have a mechanism whereby the controller can
affect the perception. One can certainly control the perception of an
imaginary beep, and decide to imagine hearing one or not. In the experiment,
however, the subject cannot affect the beep except by controlling the
thing that the beep warns about. The nature of the experiment is supposed
to be such that the subject cannot always control all of the elements.
The beep, having happened, goes away independently of any action on the
part of the subject, which is why I think it fair to say it is not a
controllable perception.
But the real question is the phenomenon you are trying to examine.
Perhaps you have addressed this in previous posts, but what I think of
when you say "alerting" is the allocation of attentional (or some other)
resources to some part of the loop.
Quite the opposite. Firstly, the alerting perception is specifically NOT
part of the loop of any controlled perception, just as the beep is not
part of the perception of the line-location that is to be controlled.
Secondly, the whole notion of alerting is to avoid the requirement of
allocating attentional resources. Attention comes into play only on the
occasion of an alerting event having happened. Think of the visual
periphery, where a small movement in an otherwise stable environment
is likely to draw your attention. Pilots wearing night-vision gear can't
do that, and have to monitor the whole visual area (and get into lots
of accidents).
To be grossly general, the resources are either physical (eyes, hands,
etc.), which is what I think you are talking about when you say output
dfs, or cognitive, which is what I mean by attentional.
A degree of freedom is a variable that can be changed independently of
others in a system of variables. If you have variables X and Y, you
have at most two df, which can be expressed in many ways. If you change
X, you can change Y, but if you set X and Y, you can't independently set
X+Y or X-Y. Alternately, you can set the sum and difference independently,
but you can't then set either X or Y as well.
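A minimal sketch of that point in Python (the numbers are arbitrary,
chosen only for illustration):

    # Two underlying degrees of freedom: X and Y.
    X, Y = 3.0, 5.0

    # Once X and Y are set, the sum and difference are determined:
    s, d = X + Y, X - Y                # 8.0 and -2.0; no freedom left here

    # Alternatively, set the sum and difference independently...
    s, d = 10.0, 4.0
    # ...and X and Y are then determined in turn:
    X, Y = (s + d) / 2, (s - d) / 2    # 7.0 and 3.0

Either way there are exactly two independent quantities; the choice of
basis changes which pair you set, not how many you can set.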
In the case of human systems, you can move your knee joint independently
of your elbow joint, so there are two degrees of freedom. In making my
count of around 125 df for humans, I was quite arbitrary. I said something
like "Each finger has 3 joints--15 per hand, that's 30" (quite ignoring the
fact that not many people can straighten the middle joint while bending
the tip). So the count went something like fingers-30, toes-20, wrists
3 each (left-right, up-down, and twist) for 6, elbows 2 each for 4, shoulders
3 each for 6, hips 2 each, knees 3 each, ankles 3 each, neck 3, back 5,
making about 90 for the hard joints. I then added around 15 each for the
soft musculature of face and speech mechanisms, and 5 for stomach muscles
and the like, totalling 125. I think I was very generous, in that lots
of the imputed movements cannot be performed independently, or fast. But
the number is so small compared to the possibilities for independent
activity level changes in the sensory apparatus that the difference between
30 and 300 output degrees of freedom would be quite inconsequential.
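For anyone who wants to check the tally, here is the arithmetic as a
sketch (the per-part counts are just the rough guesses quoted above,
not anatomical fact):

    hard_joints = {
        "fingers": 30, "toes": 20, "wrists": 6, "elbows": 4,
        "shoulders": 6, "hips": 4, "knees": 6, "ankles": 6,
        "neck": 3, "back": 5,
    }
    soft = {"face": 15, "speech": 15, "stomach etc.": 5}

    hard_total = sum(hard_joints.values())    # 90 for the hard joints
    total = hard_total + sum(soft.values())   # 125 in all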
There is no argument IN PRINCIPLE about the availability of degrees of
freedom in the working of the brain (other than the number of nerve fibres!).
If my guess is right about attention, that what is attended are those
perceptions to which or from which control is to be given or taken,
then the output df limit the number of attentional df that can usefully
be deployed. The limit is not clean, but is on the order of twice the
output df. However, since it is most unlikely that at any level of the
hierarchy ALL controlled perceptions are candidates for momentarily
relinquishing control, the actual number of df used for attention is
probably substantially smaller. I wouldn't know how to go about estimating
it, though.
Apparently
the input function is continuous - never off or deprived of resources.
However, it could be deprived of input and requires resources to get that
input (I need to go to the next room to see how my baby is doing).
This is what I have called the "search mode": a perception that "should be"
controlled cannot be effectively controlled for lack of data. Other
perceptions are controlled in such a way as to remedy that lack. You
control perceptions relating to your location in order to acquire data
(other than your imagination) on the state of your baby.
This would then be an issue of other (supporting) loops receiving
resources.
Exactly.
=========================
Jeff Vancouver (940622)
I get the impression that PCT likes to think
of actions as unintended. I certainly think they can be. But I also
think they can be intended as well.
The point is, I think, missed. PCT thinks of actions as output. Output
occurs because certain error signals exist in certain ECSs that have
non-zero output gain. Output signals go where reorganization has
connected them--eventually, possibly, to muscles. Outputs are not
"unintended," because they are the consequence of reference signals
differing from perceptual signals, and those reference signals are the
"intention" of those control systems. High-level outputs ARE the intentions
of lower-level control systems. But look at what is intended. It is not
an output or an action, but the state of a perception. The action
happens as a result of the intention, but it is not the objective of
the intention.
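To make that loop concrete, here is a minimal one-ECS sketch in Python
(my own toy illustration, not anyone's published model; the gain and
slowing values are arbitrary):

    class ECS:
        """A toy elementary control system: output driven by error."""
        def __init__(self, gain=100.0, slowing=0.01):
            # slowing * (gain + 1) < 2 keeps this discrete loop stable
            self.gain, self.slowing = gain, slowing
            self.output = 0.0

        def step(self, reference, perception):
            error = reference - perception      # the "intention" not yet met
            # leaky-integrator output function: output chases gain * error
            self.output += self.slowing * (self.gain * error - self.output)
            return self.output

The output is whatever the error and the output function make it; the
reference signal, not the output, carries the intention.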
But are you, Tom,
telling me that PCT does not include intended behaviors in
its model?
I can't speak for Tom. But you should be aware that in PCT-speak,
"behaviour" is not the same as "action." Behaviour is that which is done
to satisfy an intention--i.e. to bring a perception to its reference level.
Action is that which affects the outer world. Action is often observable
by outsiders, whereas behaviour often is not, because usually behaviour
very closely opposes some external disturbance. The balance is like
an isometric exercise. You can't see the effort the exerciser is putting
in, because nothing moves. The actions you CAN see are often just
side-effects of behaviour, and have nothing to do with the person's
intentions. But the intended behaviour is what is important to the
person behaving. Not the actions.
I know the net well enough to know that there are no
behaviors beyond PCT's description, so it must be the intentions that
disturb. Why do intentions disturb?
This is an unclear question. Intentions come into control systems from
above. They are the reference levels for the controlled perceptions.
Disturbances affect the physical outer world that is sensed and eventually
transformed into perception. Changes in either can affect the perceptual
signal, since changes in reference levels induce error until the actions
have changed the physical situation to its new "intended" state, and
disturbances change the physical situation until the actions have restored
it. It is not normal to say that "intentions disturb," though both changes
in intention and changes in disturbance can result in changes in perception,
transient in the case of disturbance, sustained in the case of intention.
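Running the toy ECS sketched earlier makes the difference visible
(again an illustration only; the "environment" here is simply
perception = output + disturbance):

    ecs = ECS()                                # the toy class sketched above
    perception = 0.0
    for t in range(200):
        reference   = 2.0 if t >= 50 else 0.0  # a change of intention at t=50
        disturbance = 5.0 if t >= 100 else 0.0 # a disturbance step at t=100
        output = ecs.step(reference, perception)
        perception = output + disturbance
    # The disturbance produces only a transient deviation of perception
    # from its reference; the change of intention produces a sustained
    # change in perception, which settles near the new reference value.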
=====================
Bill Leach 940622.23:43 EST(EDT)
I still understand "reorganization" as a process that takes place when
errors exist under sustained conditions. That is, reorganization may
"start" immediately when an error exists but its intensity probably does
not become significant until the error exists for a period of time.
Thus, other methods, including direct cognitive thought processes, may
eliminate the error without changes induced by reorganization.
This may well not be "the PCT line" but I believe that it is at least
close.
Nobody can state "the PCT line" authoritatively. There is the opinion of
the inventor of an idea, but the proof of any PCT construct is (a) whether
it would actually work, and (b) whether it works the way experiment shows
humans to work. Bill Powers' construct "reorganization" has passed
some tests under (a), none under (b) so far as I am aware. It can work.
As best I understand Bill's approach (I describe it only because he is in Wales
and can't speak for himself; otherwise I'd simply put forward my own notions,
which differ a little from his), there is a set of "intrinsic variables" that
are defined genetically and that cannot be sensed by the ordinary sensory
apparatus. They reflect internal states which we might loosely relate
to the momentary health of the organism (CO2 partial pressure, body
temperature, blood acid balance, stuff like that might well be related
to some intrinsic variables). These intrinsic variables have genetically
set reference levels, and it is important that they be kept within
physiologically safe and appropriate bounds (not necessarily constant
over time). Since these variables cannot be sensed directly, and no
action on the outer world can directly affect them (note "directly"),
they cannot be controlled by the normal hierarchy of ECSs.
If the hierarchy cannot control the perceptions of intrinsic variables,
how, then, can they be kept within physiologically appropriate bounds?
The key is in that word "directly." Actions in the outer world can
affect intrinsic variables indirectly. Eating affects blood sugar,
exercise affects CO2, and lots else besides. So, if the hierarchy
can control some suites of perceptions that relate (even if not causally)
to the states of intrinsic variables, then by controlling these perceptions
at appropriate reference levels, the intrinsic variables will probably
be controlled.
As an extreme example--well, maybe not so extreme--I control a perception
of typing this message as part of the control of my intrinsic variable
"blood sugar" (supposing that to be an intrinsic variable). How? I see
typing this as a way of perceiving myself to be better informed on the
subject I write about, and as a way of helping other people to be better
informed so that the ideas will return and support me in my acquisition
of a perception of having enough money (salary) to acquire a perception
of having food...Get the idea?
There's nothing in the "fun" of writing this message that relates directly
to blood sugar, but in the long run, it has been the control of blood sugar
among other things that has developed the hierarchy that I use in writing it.
Now, reorganization. The reorganizing system is seen as a control hierarchy
that acts not on the outer world, but on the main sensory perceptual
control hierarchy that does act on the outer world. If the main hierarchy
happens to be controlling its perceptions in a way that maintains the
intrinsic variables at satisfactory levels, the reorganizing system need
do nothing. It is a control system in which the perceptual and reference
variables are equal. But if the intrinsic variables are far from their
appropriate values (according to genetics and the current situation--
sometimes high adrenalin is a good thing, but not usually), then the
reorganizing system has error in some of its control systems, resulting
in output that affects the main hierarchy.
How does the output affect the main hierarchy? Bill P's contention is
that it has no way of relating the structure of the main hierarchy to
the intrinsic variables that are in error, and that therefore the only
effective way to make changes is randomly. But he is quite willing to
accept that the random changes might well affect specific parts of the
main hierarchy, and that lower levels are likely to be more resistant
to reorganization than are higher levels. I have the feeling that this
aspect is not securely developed.
I've deliberately kept mentioning "intrinsic variables" as related to
physiological states. But there is a derived intrinsic variable. If the
main hierarchy exists only so that the organism survives until at least
it propagates its genes, it must be able to control its perceptions. A
hierarchy that fails to control is a hierarchy that is not controlling
useful perceptions! So there must be an intrinsic variable associated
with error in the main hierarchy. Much error, sustained error, and most
particularly increasing error, all are signals of control problems, whether
in a single ECS or averaged over the whole hierarchy or regions of the
hierarchy. In my discussions of reorganization, I have tended to concentrate
on error as the core of reorganization. But it isn't. Reduction of
error is only a necessary stage in the reorganization of the main
perceptual control hierarchy. It permits the hierarchy to control those
perceptions that, by chance or by cause, are related to the variables
required for survival of the genes.
To quote again:
reorganization may
"start" immediately when an error exist but it's intensity probably does
not become significant until the error exists for a period of time.
If I understand Bill's concept properly, the "intensity" of reorganization
is like a Poisson rate. It is the probability that a reorganizing "event"
will occur in any specific small instant of time. The nature of the
reorganizing event might be the change in sign of an output link, or
a change of weight in a perceptual input function, or any of the 12
possibilities, or something else we haven't thought about. The rate
of reorganization is a function of the level of error in the intrinsic
variables--which include some function of the error in the main hierarchy.
One function that we have considered for relating error in the main
hierarchy to rate of reorganization is
E = sum over ECSs (a * e^2 + b * d(e^2)/dt)
where E is the quantity "perceived" by the reorganizing system, e is the
value of the error signal in some ECS of the main hierarchy, and a and b
are arbitrary constants. The sum is over all the ECSs in the part of the
main hierarchy we are considering for reorganization.
Whether the reorganization rate is a linear or nonlinear function of E,
or even whether the reference level for E might be non-zero (some sustained
error being desirable in the main hierarchy), is not specified. In models,
all these things can be tried, but in my personal opinion, the results
will not readily be comparable to any data we would be able to get on
human learning.
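A toy version of that scheme, as I read it, might look like this in
Python (a sketch under my own assumptions; the constants, and the choice
of "event" as re-randomizing one weight, are arbitrary):

    import math, random

    def reorg_rate(errors, d_sq_errors, a=1.0, b=0.5, k=0.01):
        # E = sum over ECSs of (a * e^2 + b * d(e^2)/dt), as in the text;
        # d_sq_errors holds d(e^2)/dt for each ECS
        E = sum(a * e * e + b * de for e, de in zip(errors, d_sq_errors))
        return k * max(E, 0.0)        # Poisson rate of reorganizing events

    def maybe_reorganize(weights, rate, dt=0.1):
        # probability of at least one event in a small interval dt
        if random.random() < 1.0 - math.exp(-rate * dt):
            i = random.randrange(len(weights))
            weights[i] = random.uniform(-1.0, 1.0)   # one random change
        return weights

The point of the Poisson formulation is that nothing happens on any
particular schedule: high intrinsic error only makes a random change
more probable per unit time.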
=====================
Bill Leach 940622.23:59 EST(EDT)
I am fast becoming convinced
that PCT IS RIGHT. I and even the "priests" of PCT may well be wrong in
their individual and collective interpretation of what PCT may be
"telling us" with regard to any particular instance of human behaviour,
but the theory itself is not likely wrong.
My position, too.
Uncertainty was covered in a typically devastating fashion by Bill in
one of his postings. PCT does not have a problem with "uncertainty".
"Uncertainty" is an absolute concept but WILL always be evaluated as a
perception (or set of perceptions). It just has to be that way as there
is no way that a person can actually KNOW the "real truth" of a
situation.
I think you misunderstand the nature of both claim and counterclaim, but
I wouldn't like to get into the discussion when the "other side" is absent.
As I see it, "uncertainty" is not a special situation for PCT.
I'd hope that everyone would see it that way.
PCT has no "boundries". It is intended to be an explaination for all
behaviour regardless of intent or circumstances. With sufficient
knowledge, PCT should be able to explain "normal" behaviour, the most
extreme "deviant" behaviour and everything in between.
"Sufficient knowledge" means knowledge of the reorganizations that have
happened in an individual (i.e. the actual structure of the control
hierarchy), the gains and nonlinearities of the ECSs... It is a
parameterization of a VERY high-dimensional system. In my long-ago
class on curve-fitting, the professor had an aphorism to the effect that
with four parameters you can fit an elephant, and with five you can make
its trunk wave. Enough parameters make any fit to data dubious. What is
good about the high-quality fits that PCT models achieve is that there are
(usually) only one or two control systems in question, with only one or
two parameters each, and there are thousands of independent data points
to be fit. When you need to parameterize a complex hierarchy, the fits
will be less impressive.
I see PCT as a framework theory, like Quantum Electrodynamics in physics.
In principle, you can use QED to predict the breaking stress for bending
in a carbon-steel rod. In practice, you can use it to describe single
not-too-complex molecules, or more complex ones if you add some constraints.
In principle, PCT applies to all situations involving living organisms,
but in practice it may be more useful to use some situation-specific
approach, just as in practice it is better to find out the breaking stress
on the steel bar by looking up the results of experiments in which similar
steel bars were broken.
The importance of the model probably cannot be overstressed. PCT treats
theory in exactly the same fashion as do other "hard" sciences. If the
model fails to match observed reality, then there is a MAJOR problem.
In principle, again. But in practice, there are far too many variables
at play in most real situations, and one has to combine them in intuitively
reasonable ways that are not inherent in the model itself. I think Bill
Powers' oft-expressed hope that in centuries to come PCT will be able to
model complex situations exactly and predictively is wishful thinking.
Unlike ALL other psychological work, PCT is NEVER a matter of opinion.
Much of the discussion that takes place here is a matter for discussion
but the tested model IS NOT! It either does match reality or it does not
and if not then PCT is WRONG as a theory.
No. The model tested might just have the wrong parameter values, or have
a linear output function when a nonlinear one would have fitted, or
there might have been only one ECS when two were needed for a good fit,
or ... None of these failures of the model to match the data would
suggest that PCT is wrong as a theory. They would only say that the
particular parameter values or control network configurations tested
were inappropriate for the situation.
Advances in experimental proof of PCT are a very slow process
and probably always will be slow. The point in PCT is that where direct
experimental processes have been possible, the theory has been right even
when the experimenters were wrong! It is the only theory of behaviour
that can make that claim.
There's some truth in that!
Mathematicians have often created models that have no relationship to any
known reality (and often pride themselves upon the fact that most
mathematics consists of systems that have only incidental relationships
to reality as we know it). There is absolutely nothing wrong with this
and indeed much can be learned in such processes but the difference is
that these mathematicians ARE FULLY AWARE that what they are doing has no
requirement to match the real world.
Yes, that's true. But what has always amazed me about mathematics is how
often a mathematician has worked on a problem that might have intrigued
him partly BECAUSE it had no relation to the real world, and later along
comes someone with a real-world problem that is solved by that abstract
mathematics. It's THE mystery of the world, to me. But of course it
goes the other way, too. Fourier developed Fourier transforms as a
part of trying to understand the diffusion of heat through solids, for
example. And sometimes the two ways meet; a "practical" scientist develops
some mathematics to solve a problem, and discovers that the abstract
mathematician has beaten him to it at least part way: nobody was much
concerned with the work of Poincaré or Julia on fractals, until Mandelbrot
was trying to understand the statistics of noise bursts on phone lines.
Persistence with an open mind is worth a great deal here.
Yes. "Behaviour is the control of perception" is easy to say, but its
ramifications may take years to really embed themselves in your mind so
that you don't slip into contradictory ways of thinking that lead to
phrases like "decisions about actions to be taken," which our language
leads us readily to produce without even realizing we have done it.
===================
Dag Forssell (940623 1030)
A wonderful post, Martin. Thinking of _society_ as an artifact in the
same way as _language_ is helpful to me. The only material difference
I see is that with society, we have much more and highly localized
variation -- dialects.
Why, thank you. I do agree, heartily. We have different social conventions
in different roles (and to some degree different languages). We have
some conventions we use only within our family group, and we have no
idea whether other families have the same conventions. Those conventions
are not what we use at work, when walking in the street, at a sporting
entertainment... Different social dialects, indeed. Different societies
in the sense of a society being an artifactual set of conventions that
work together. And in each society, we play a different role--we control
different sets of perceptions that relate to the conventions of the society
in which we perceive ourselves at any moment.
It can be life-threatening to play the wrong role for the moment, or to be
ignorant of the conventions of the society in which you are.
Enough. I'm going home.
Martin