control vs. pseudo-control

[Hans Blom, 950529]

(Rick Marken (950524.1000))

... Hans' program exhibits no "model-based" control at all

(Rick Marken (950524.2145))

... but I was also wrong about Hans' model -- it does control

Thank you. So does it or doesn't it? Let's see...

... Hans' model does control but it controls poorly. Most important,
what control this system does exhibit is a result of control of the
perceptual representation of xt.

The "poorly" needs to be better defined. And, of course, all control
ultimately depends on information that is somehow available about "the
world out there".

What is new in my demo, in comparison to the "standard" PCT controllers,
is:

1) an internal model of the regularities of "the world out there" can be
   built, so that for instance a square wave pattern reference level can
   be accurately tracked with essentially no error and zero response
   delay;

2) this internal model is also available to control (Bill Powers:
   "pseudo-control") in periods where no sensory information at all is
   available;

3) this internal model may become so trustworthy that it is more relied
   on than the perceptions; this, as well as 2) above, was my
   counter-example where the generalization "control of perception"
   breaks down and must be replaced by "control of (by?) internal
   representations".

Nothing of this is new, of course. 1) Tracking a regular ("predictable")
pattern with approximately zero delay has frequently been observed in
studies of human operator response speed. 2) An example: because humans
have forward-looking eyes, the complete visual field cannot be surveyed
at any
single moment of time. Yet, especially in inimical environments, it is
important to always have an accurate "map"/model of what goes on behind
our backs. This "map" must be periodically updated, but after each update
it may serve us for some time -- or not, if not updated soon enough. 3)
An example: we have learned to rely on Newton's (or Einstein's) laws more
than on our inaccurate perceptions re positions/speeds/accelerations of
falling bodies. In fact, Newton himself trusted "regularities" more than
his (rather poor) observations; curve fitting isn't new...

(Bill Powers (950525.0530 MDT)) replying to Rick Marken (950524.2145)

One problem is that when you set the reference signal to a constant
value, the Kalman filter has no basis for calculating partial
derivatives and can't adapt properly until it has some experience
with the faster disturbance.

You touch a crucial point: learning occurs only when there is error.
Error has to be introduced somehow (e.g. by system noise/"unmodelled
dynamics" or by a varying reference level) for learning to occur.
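
A minimal sketch of this point in Python (an illustration only, not the
actual Kalman-filter adaptation of the demo; the names a_est, b_est,
c_est and the fixed gradient-style gain are assumptions):

  # Illustrative parameter update for the model x[k+1] = c + a*x[k] + b*u[k].
  # The correction is proportional to the prediction error, so when nothing
  # unexpected happens (error = 0) the estimates do not change: no error,
  # no learning.
  def update_parameters(a_est, b_est, c_est, x_prev, u_prev, y_new, gain=0.05):
      prediction = c_est + a_est * x_prev + b_est * u_prev
      error = y_new - prediction
      a_est += gain * error * x_prev
      b_est += gain * error * u_prev
      c_est += gain * error
      return a_est, b_est, c_est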

I believe the net result is that hidden inside this adaptive model with
a Kalman filter is an ordinary control system that works through the
external world in the usual way once adaptation is complete.

That depends upon what you consider an "ordinary" control system. In the
demo, control works as follows. When adaptation is complete, a=at, b=bt,
c=ct, and pax=0. The "world" acts according to

  xt [k+1] = ct + at * xt [k] + bt * u [k] + system noise

The model offers the prediction

  x [k+1] = ct + at * x [k] + bt * u [k]

The system noise, being zero on average, is disregarded. Set x [k+1] =
xopt, specifying that we want x [k+1] to be at xopt in one step. Remember
also that model variable x [k] is known; it is our best estimate thus far
of the "world" variable xt. In fact, if the observation is noise-free, y
= xt = x. The first (y = xt) because the observation is noise-free, the
second (x = y) because it is KNOWN to the model that the observation is
noise-free.

  xopt = ct + at * x [k] + bt * u [k]

Now solve for u [k]:

  u [k] = (xopt - ct - at * x [k]) / bt
        = (xopt - ct - at * y [k]) / bt,

the latter line for noise-free observations. This is what the control
routine, in all its simplicity, does.
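
A minimal sketch of this control step in Python (names follow the
equations above; the guard against a near-zero estimate of b is an added
assumption, not part of the demo):

  # One-step control law: choose u[k] so that the internal model predicts
  # x[k+1] = xopt, i.e. solve  xopt = c + a*x[k] + b*u[k]  for u[k].
  def control_output(xopt, x_k, a_est, b_est, c_est):
      if abs(b_est) < 1e-9:
          raise ValueError("estimate of b too small to solve for u")
      return (xopt - c_est - a_est * x_k) / b_est

With a noise-free observation, x_k can simply be set to the latest
observation y[k], which gives the second line of the equation above.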

The primary difference from the simple control system is that if
present-time inputs are lost, the controller can continue to produce the
same pattern of outputs and thus continue to drive the external system
in the appropriate way, as long as there are no independent disturbances
and the external parameters remain the same.

In the above control algorithm, no direct reference is made to the
observation y. Whenever it is missing, we can still generate new values
for x [k+1], x [k+2], etc, using the prediction equation

  x [i+1] = c + a * x [i] + b * u [i] for any i

and we can still solve for u [i]. Thus it is the internal model that
generates "pseudo"-observations x [i] that serve as a basis for control.
The quality of the control depends on 1) the extent to which the world's
dynamics are left unmodelled and hence cannot be predicted, and 2) the
accuracy of the current estimates of the parameters a, b and c.
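
A minimal sketch of running the loop blind in Python (an illustration
under stated assumptions: the parameter estimates are held fixed and no
disturbance acts on the world while the observation is missing):

  # When the observation y is missing, the model's own prediction serves as
  # the next internal estimate (a "pseudo-observation"), and the same
  # one-step control law is applied to it.
  def run_blind(x, xopt, a_est, b_est, c_est, steps):
      outputs = []
      for _ in range(steps):
          u = (xopt - c_est - a_est * x) / b_est  # same control law as above
          x = c_est + a_est * x + b_est * u       # pseudo-observation x[i+1]
          outputs.append(u)
      return outputs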

Under a loose definition of the word "control", the system can be said
to continue controlling without present-time sensory feedback. It does
not continue controlling under the meaning of control that we use in
PCT, where an essential part of the definition is resistance to
disturbances.

If you define "control" as equivalent to "control of perceptions" or,
equivalently, to resistance to disturbances (which can of course only be
resisted if they are observable, then my demo does not "control". It also
controls when there are no perceptions or when perceptions are corrupted
by random or "arbitrary" variations. So, given my "loose" definition of
the word "control" (which I take from the control literature), can we
agree that we indeed have found a counter-example of "control of percept-
ions" that functions well under some (albeit maybe severely restricted
:slight_smile: circumstances?

We can now see that the primary effect of Hans' model is to enable the
controlling system to continue the same pattern of output variations in
the absence of direct sensory feedback from the real controlled
variable.

Not just in the complete ABSENCE of direct sensory feedback; also when
direct sensory feedback is inferior in quality compared to the feedback
provided by the internal model.

The same result can, I believe, be achieved by a hierarchical control
system.

But this statement is in direct conflict with your axiom that control is
"control of perceptions". In my opinion, and as the demo shows, control
should not be "control of perceptions" when those perceptions are
untrustworthy.

Maybe we can salvage the term "control of perception" if we take
"internal observations" of model variables such as x to be "internal"
perceptions. This may not be so funny as it sounds. In order to know about
the world, we consult both our (momentary) perceptions of that world AND
our accumulated knowledge about the world. What else would we have memory
for?

However, before we put too much emphasis on this ability to operate
blind we should ask whether it has an important role in explaining human
behavior (which is, after all, the primary purpose of PCT).

Again, it is not only the ability to operate blind that I find
interesting. It is the ability to somehow store knowledge about the world
that we
can use in our control tasks IN ADDITION TO our direct perceptions.

It is one thing to design an adaptive control system; it is another to
show that it is a model of real human behavior.

I consider it superfluous to demonstrate that (real-time) adaptation/
learning exists in organisms with a nervous system. It remains to be
shown HOW adaptation works in such organisms. I maintain that all kinds
of (real-time) adaptation have common underlying features, and that my
demo shows some of those, albeit in a very crude way. That brings us to a
more philosophical point:

There is a tendency in computer-science circles to design a system that
operates in some clever way, and then look around for behaviors that can
be interpreted (and often overinterpreted) as fitting it. This is the
wrong way around. We must start with observations of real human behavior
in circumstances where we can measure accurately what is going on, and
then search for a model, clever or simple, that will reproduce the
behavior under all reasonable variations in circumstances that we can
think of.

This tendency exists not only in computer-science circles (which, by the
way, I do not consider myself to belong to), but in all of science. There
simply is no other way. I have accepted by now that "observations" are
always model-driven: there is so very, very much to observe that we must
set limits to what we WILL observe, what we will describe (in language or
in formulas), what we deem important and what not. And what we can observe
is restricted, as you note, by what we can accurately measure, i.e. in
terms of what is already known to us. This embeds observations within a
culture: the type of observations that we can make will always depend on
whichever instruments we have and on our pre-existing knowledge of what
is observation-worthy.

Science, whether we like it or not, is mostly hypothesis- or theorem-
driven: someone invents a theory about how things relate (in terms of
measurables and ought-to-be measurables) and then must proceed to
demonstrate that this particular theory works well. Theoretical physics
is, I think, much more a driving force than observational physics. It is
a stroke of genius to invent "invisible" concepts like "acceleration",
"black hole" or "continental drift". We have no inkling yet, I think, in
which terms to describe what we intuitively feel to be the most important
aspects of "real human behavior".

(Rick Marken (950525.0900)) to Bill Powers (950525.0530 MDT)

... as Hans himself noted in his post, there is no control of xt at all
when the model is blind (y is essentially all noise).

I have noted no such thing. When the model is blind, there is control of
x (see above: the controller attempts to bring x toward xopt). And
inasmuch as x is an accurate representation of xt, xt is controlled as well.
If you define "control" as being identical with "control of perception",
there can of course be no control if there is no perception. But you must
realize that you then use the term "control" in a very idiosyncratic way,
which, moreover, makes the P in the title BCP tautological, to say the
least. So what do you mean when you say:

... So Hans' model can control. But it controls only when it has
available a perceptual representation (y) of the controlled variable:
it can't control blind.

One thing that my demo was meant to show is that there may be an
"internal equivalent" x to the perception y that can be used when y is
missing, and that, as long as the internal representation is accurate,
control can proceed even without observations. So is this control or not?
We seem to have a terminology problem. Perhaps Bill's "pseudo-control" is
a good idea. Then we can rephrase

So I was wrong to conclude that Hans' model cannot control. But that
mistake occurred because I assumed that Hans was saying that his model
could control xt without perceiving it.

into something like

Hans' model can pseudo-control but not control xt without perceiving it.

How is that?

But there is something more where I said

... "control of perception" does not apply to model-based control if
the perception is disturbed and the model adjustment algorithm is
given the information that it is.

The "perception" is y = xt + noise/disturbance vt. I propose that it
would be better to attempt to control xt than y. This can be done by a
'filter' operation. This 'filter' is based upon the information that is
accumulated by somehow "processing" observations and fitting them into
the internal model, which can subsequently filter out (some of) the
noise. Of course, xt is not directly accessible, but x might be xt's
accurate internal representation.
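
A minimal sketch of such a 'filter' step in Python (the fixed blending
gain is an assumption made purely for illustration; a Kalman filter
would compute the gain from the estimated noise statistics):

  # Blend the model's prediction with the noisy observation y = xt + vt.
  # The gain decides how much the observation is trusted relative to the
  # internal model; the result is the internal estimate x of xt.
  def filtered_estimate(x_prev, u_prev, y, a_est, b_est, c_est, gain=0.3):
      prediction = c_est + a_est * x_prev + b_est * u_prev
      return prediction + gain * (y - prediction)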

... It looks like there is control of a real world variable (xt) while
the perception (y) of that variable is ignored.

Yes, LOOKS LIKE phrases it well. Of course it is x that is controlled and
not xt. But x might represent xt better than y does.

The missing post from Bill Powers (950523.2115 MDT), which came to us by
way of Rick Marken (950525.1500) [thanks, Rick!], deserves more thought
than I have time for at this moment. I'll come back to it later.

Greetings,

Hans

[Bill Leach 950529.22:13 U.S. Eastern Time Zone]

[Hans Blom, 950529]

The "poorly" needs to be better defined. And, of course, all control
ultimately depends on information that is somehow available about "the
world out there".

I don't particularly feel like retrieving Rick's post, but he most
certainly did give "poorly" quantitative values, that is, specific
numbers indicating the degree of control.

square wave pattern

Probably due only to my own ignorance of the field, but I have always
thought that "square wave pattern" functions were pretty much limited to
contrived experiments and otherwise do not exist in the real world (and I
am using the loose definition of square wave, not the mathematical one).

I have a "bunch" of trouble with "3) An example: we have learned to rely
on Newton's ...".

When I design some sort of control system then yes, absolutely I rely
upon mechanics to calculate the relevant parameters (required capability
and the like). But as a control system myself, I do not believe that my
knowledge of either Newton or Einstein has any effect upon my control
with respect to moving objects that I encounter in my environment.

Bill P.

One problem is that when you set the reference signal to a constant
value, the Kalman filter has no basis for calculating partial
derivatives and can't adapt properly until it has some experience
with the faster disturbance.

Hans

You touch a crucial point: learning occurs only when there is error.
Error has to be introduced somehow (e.g. in system noise/"unmodelled
dynamics" or in a varying reference level) for learning to be able to
occur.

Again, I may well be speaking from my ignorance but I don't see the
relevance of Bill's statement based upon the model schematic that Rick
posted.

It seems to me that if the reference is varied _AND_ the adaptive
algorithm is "clever" enough, then the model could well be improved based
upon the difference between the model output and "y". However, even with
a constant reference, a disturbance shows up as a difference between the
model output and "y" (affected by the Kalman filter, of course).

The system noise, being zero on average, is disregarded. Set x [k+1] =
xopt, specifying that we want x [k+1] to be at xopt in one step.
Remember also that model variable x [k] is known; it is our best
estimate thus far of the "world" variable xt. In fact, if the
observation is noise-free, y = xt = x. The first (y = xt) because the
observation is noise-free, the second (x = y) because it is KNOWN to the
model that the observation is noise-free.

Though I am sure that you know this, this paragraph is true ONLY if there
is no disturbance to xt (even if the observation is noise free).

Bill P.

The primary difference from the simple control system is that if
present-time inputs are lost, the controller can continue to produce
the same pattern of outputs and thus continue to drive the external
system in the appropriate way, as long as there are no independent
disturbances and the external parameters remain the same.

Hans

In the above control algorithm, no direct reference is made to the
observation y. Whenever it is missing, we can still generate new values
for x [k+1], x [k+2], etc, using the prediction equation ...

BUT HANS!! You are ignoring the essence of what Bill has said! The fact
that the controller is controlling an internal signal passed through a
model such that the result matches a reference is NOT denied, so you
don't need to defend it.

The fact is still true that the model based controller does not control a
CEV against disturbance (or even model error) without a perceptual
signal.

This is the essence of almost all of the "disagreements" concerning your
example. I don't believe that there has really ever been a serious
argument concerning what your model will do under specific limited
conditions.

These disagreements have centered upon two problems (I believe). The
first is that in uncontrolled or "unlimited" environments, that is, the
sort of thing that people and animals encounter regularly, model-based
control without perception is highly unlikely (with the fine example
provided by Avery a couple of days ago possibly being the limiting case).

That some form of model based control may exist for momentary
interruption of perception is probably not an unreasonable assertion for
future testing.

If you define "control" ...

It seems that the loose definition of control is a serious impediment to
CSG-L discussions.

When I set a reference for "making more money" by moving funds from a
Bond to a Mutual fund, I don't personally doubt that it would be
reasonable to even talk in terms of my use of "models" to both make my
"action decision" and actually execute the intended actions. However,
the whole loop is still a negative feedback controller and the perception
under control is making more money than could be achieved from the Bond
fund. That I may lose money or make other errors does not, I think,
change the concept that it really IS negative feedback control.

It also controls when there are no perceptions or when perceptions are
corrupted by random or "arbitrary" variations.

Two problems with this. The first is that it does not control (again in
the sense that it resists disturbance to the CEV). The second is that
for any cases of "corrupted perceptions" that I can think of, human
control fails.

I don't accept either the "heat shimmer" or "water spear fisher" examples
in that I believe that both are examples of learned compensation based
upon experience. That is, the control system generates an offset for the
reference based upon the additional perceptual input.

So, given my "loose" definition of the word "control" (which I take from
the control literature), can we agree that we indeed have found a
counter-example of "control of percept- ions" that functions well under
some (albeit maybe severely restricted :slight_smile: circumstances?

I personally, again, do agree that we do attempt to control things that
we cannot necessarily perceive and that we "sorta use a model" to do
this. The simple perception "buy a gallon of milk" certainly involves the
generation of a whole bunch of references to control perceptions that in
reality may or may not result in achieving the original perception. Many
of these are model-based, or at least "world model"-based, references
(such as: I fully expect the store to be located today where it was
located yesterday).

Bill P.

The same result can, I believe, be achieved by a hierarchical control
system.

Hans

But this statement is in direct conflict with your axiom that control is
"control of perceptions".

I agree with you here, Hans, but personally will let Bill "off the hook"
since he has been very up front about his doubts and limits regarding
memory operation.

There has been a reasonable amount of discussion about memory and
imagination in the past. As soon as the loops become entirely internal,
the discussion can become very confusing but when the loops might be both
we can get into a nearly impossible situation without labeled diagrams.

In my opinion, and as the demo shows, control should not be "control of
perceptions" when those perceptions are untrustworthy.

This is an interesting statement in itself. I seriously doubt that _ANY_
living system will ever "control" without using perception unless the
perception is just plain not available, and even then it will attempt to
replace the missing perception with any other available perceptions...

I am reminded of when I was driving from the Power Burst Facility to the
"main road" to Central Facilities in a quality government pickup truck
around midnight several years ago. Considering the time, traffic, view,
almost absolutely straight road, and such I was traveling somewhat above
the posted 45 miles/hour when the hood of the truck decided to wrap
itself over the top of the cab.

At probably close to 70 MPH a sudden complete absence of forward vision
is rather "disturbing" (both meanings intended). Thinking about it now, I
realize that a number of references changed a bit abruptly, and the
original "ancillary" reference for keeping the truck in the approximate
center of the lane both changed to keeping the truck on the road
somewhere (hopefully with the dirtier side down) _and_ became one of, if
not the, most important references in the whole process.

Even today, almost 20 years later, I remember that even the fibers of the
seat suddenly "were noticed" as I frantically searched for visual
indication that I was "staying on the road" (one learned not to "jam" on
the brakes on those vehicles as such an action usually resulted in a
sudden radical departure from the previous path).

In a sense, it seems reasonable that some of my actions were "model
based"; for example, I remember pulling the steering wheel slightly to
the left immediately as I began braking, even as I was looking for
anything out the driver's side window with which to gauge the vehicle's
behaviour (which of course included trying to discern the left edge of
the roadway).

"Model based thinking" itself might also have been in part responsible
for my frantic search for visual clues (i.e., realizing that at anywhere
near 88 ft/sec the model would not have to be very far off for disaster
to occur).

Maybe we can salvage the term "control of perception" if we take
"internal observations" of model variables such as x to be "internal"
perceptions. This may not be so funny as it sounds. In order to know ...

I don't think it is funny at all. It is indeed my understanding of the
designation for output of imagination "circuits".

Again, it is not only the ability to operate blind that I find
interesting. It is the ability to somehow store knowledge about the world
that we can use in our control tasks IN ADDITION TO our direct
perceptions.

If you accept that in a normally functioning organism the lower-level
systems are controlling perceptions and that the model (if any) is
supplying perceptions to higher-level systems, then I don't have any
trouble with the idea. These higher-level systems would have to be
creating references for lower-level systems that are controlling
perception in the usual sense, and if error for those systems does not
exist, then the higher-level system is satisfied.

Science Philosophy

I find myself in agreement with you here. The scientific breakthrough
comes not from refinement of observation or more data; it comes when
someone challenges existing understanding with a new view and the new
view turns out to be supported by the evidence.

While it is often known data anomalies that "prompt" the search for a
new view (such as Max Planck's theory of quantized energy and the
black-body anomaly), it is also true that sometimes the evidence is not
present first (I believe that was essentially the case for the prediction
of the existence of the neutrino - not many people were bothered by the
non-quantized behaviour of beta decay at the time).

Science is both the stepwise advancement created by better data
acquisition and analysis AND discontinuous "leaps" of intuition (that
stand the test of verification against observation).

One thing that my demo was meant to show is that there may be an
"internal equivalent" x to the perception y that can be used when y is
missing, and that, as long as the internal representation is ...

The problem may be terminology, but I think that it is important to
remember that in PCT what is always controlled is the perception. Thus,
even if the living system is attempting to control a perception with the
exclusive use of a model, the perception is NOT controlled.

When I use a "world model" to achieve a goal (to use the popular
phraseology) but have no perceptual input relative to the goal itself, then
I am NOT _controlling_ WITH RESPECT to the goal.

You will probably really "light a fire" with this one:

... "control of perception" does not apply to model-based control if
the perception is disturbed and the model adjustment algorithm is
given the information that it is.

The "perception" is y = xt + noise/disturbance vt. I propose that it
would be better to attempt to control xt than y. This can be done by a
'filter' operation. This 'filter' is based upon the information that is
accumulated by somehow "processing" observations and fitting them into
the internal model, which can subsequently filter out (some of) the
noise. Of course, xt is not directly accessible, but x might be xt's
accurate internal representation.

The last sentence straightens out the first sentence in a sense, but
overall this whole paragraph is confusing. Let's start with the important
fundamental idea that what is ACTUALLY DESIRED by the control system is
the control of the PERCEPTION of xt. The living system may even verbally
express a desire to control xt, but that is never correct from any
viewpoint. The living control system is ALWAYS trying no more than to
control its own perception of xt.

I find myself having a great deal of trouble with the concept of noisy
perceptions from the standpoint of needing model based control.

I know from experience that in many "noisy" situations I have been able
to maintain excellent control, both in situations where a "secondary"
perception was very noisy and when the "primary" perception was noisy (or
even when both were). Though without any real research support, I believe
that in effect I was able to ignore "high frequency" disturbance (for
example flying a plane in rough weather). In these situations, I am not
personally satisfied that there was any need for "model based" control to
reduce the effects of the noise. I see a model coming into play when I
set a specific course to reach a destination for which I have no direct
perception.

There were other comments on this but after typing, cutting, retyping and
cutting a few more times without satisfactory expression of thoughts I
just deleted them (flu symptoms and the late hour may be significant).

[Martin Taylor 950530 20:30]

Bill Leach 950529.22:13 U.S. Eastern Time Zone

I don't accept either the "heat shimmer" or "water spear fisher" examples
in that I believe that both are examples of learned compensation based
upon experience.

Assuredly they are learned. I'm not clear what you "don't accept," but
I'm making a guess in order to make a point. If my comment doesn't apply
to you, let it pass, because it may help someone else.

That is, the control system generates an offset for the
reference based upon the additional perceptual input.

If I read you aright, you are saying that the learning is in the output
function, and that the relevant perceptions are not altered by the learning.
What you "don't accept" is that the perceptual signals change.

It is always a little dangerous to rely too much on subjective impressions
of what one perceives, but conscious perception presumably is based on the
perceptual signals that we (may) control, and not on output signals or
reference signals as such. If that is the case, then this kind of learning
should have no effect on what people say they perceive.

In the 1930's and for the next three or four decades, several people studied
what happens when one wears distorting spectacles of various kinds. The
general finding is that if the wearer is active in the world, the distortion
seems to go away over time, but if the wearer is only moved passively
around (as in a wheelchair), the perceived distortion goes away slowly
if at all. These distortions might be rainbow spectra and bent vertical
lines with everything displaced to the left (prism spectacles), inversions
of the visual field up and down or left and right, etc. One can get quite
illogical perceptions, such as (with up-down inverting spectacles) of a
man's face looking right-side up, with the smoke from the cigarette going
downward, even though it passes from the mouth up in front of the forehead.

Subjectively, it is the perception that changes. There may well be changes
in the output functions as well, but as far as I know nobody has looked at
the situation from a PCT viewpoint, and I don't know what studies one might
do to distinguish perceptual from output effects.

Somewhere, there exists a fascinating movie of Seymour Papert (MIT) learning
to ride a bicycle while wearing left-right inverting spectacles. Initially
he just falls off as soon as he starts, but after a while he can ride with
no problem UNTIL HE TAKES THE GLASSES OFF, and then he falls off immediately.
After a further period, he can happily ride while putting the glasses on and
off. Then, unknown to him, Jim Taylor (J.G., with whom he was collaborating)
took the inverting lenses out of the glasses. Then, as soon as he put the
glasses on, he fell off the bike! No distortion whatever in the visual
field. One can certainly account for these observed results by appealing
to changing reference signals altering the outputs, but Jim said that Papert
said he saw the world as normal after he had learned, and as inverted when
at the last stage he put on the spectacles with no distortion. I don't see
a reason for Jim (or Papert) to lie about it, or for the various others
who have written that their perceptions changed in unexpected ways while
renormalizing.

If this isn't related to what you "don't accept," I hope it might nevertheless
be a little illuminating.

It's fun, at the very least. And it does seem related to what I think
Hans has been talking about. But maybe not.

Martin

[Bill Leach 950531.20:45 U.S. Eastern Time Zone]

[Martin Taylor 950530 20:30]

I do need to comment because you bring up an excellent point.

I have also heard of the "lenses" study before (which raises interesting
questions about certain "visual anomalies" that are identified as
afflictions).

I am not a spear fisherman (though I have tried) and I don't recall ever
trying to shoot across a fire. I have done a reasonable amount of
"pigeon busting", a little game hunting and can attest to the fact that
at least in my own case, if my perception does not include a "lead" I
will miss (obviously we are talking only about objects with some amount
of motion perpendicular to the projectile path).

Shooting skeet may be the better example since it is a rather
"consistent" operation (at least as consistent as riding a bicycle).
Based both upon my own experience and comments by shooters who were
vastly superior to myself, I can only conclude that none of us ever
reached a point where the perception was a "dead on" aim (that was
actually true even for "birds" without lateral motion).

Since in my rather abortive attempts at spear fishing I was guided by an
accomplished fisherman who directed me "to throw the spear _x_ inches
offset from the image of where the fish appears", I conclude that the
fisherman also did not "see" the fish at the "proper" location.

As to "control of output"... I am glad that you raised that issue
because that is specifically not what I intended.

In the fishing example, what I suspect happens once someone is actually
proficient at the task is that the perceptions of the fish include many
separate perceptions.

The accomplished spear fisherman (fisherperson? UGH!) probably develops
perceptual control systems that adjust the reference for the target.
Perceptions probably include visual angle (or at least apparent distance
across the surface of the water) and estimated depth, which may be
derived through such things as apparent relative motion across the
background or even relative size versus motion of the fish.

I don't remember discussing how changes in the environment (deeper
river, faster flow, etc.) might influence any particular spear fisherman
(if at all). For example, does the ability to see objects on the bottom
influence the skill or not? (In my case all of the fishing was done in
exceptionally clear water no deeper than about 5 feet, and I don't count
anyway since I never speared one of the puppies... er, fishies.)

Subjectively, it is the perception that changes. There may well be
changes in the output functions as well, but as far as I know nobody
has looked at the situation from a PCT viewpoint, and I don't know what
studies one might do to distinguish perceptual from output effects.

Probably the simplest way would be to ask.

If this isn't related to what you "don't accept," I hope it might
nevertheless be a little illuminating.

Yep, definitely interesting. It would appear that either or both can or
will change.

-bill