HPTC and Tacit Knowledge

[From Rick Marken (2013.02.10.1030)]

Martin Taylor (2013.02.10,09.24)–

MT: No matter what the science, if it is science, it must accept the possibility that its suppositions might need radical alteration, just as Galileo destroyed Aristotle’s theory of motion and Einstein superseded Newton.

RM: Yes, but Galileo and Einstein did this based on observations that posed problems for the existing models. Galileo did these observations himself; Einstein knew of these observations from the literature. But in both cases the proposed changes came after the tests, not vice versa. [By the way, my birthday is right between Galileo’s and Einstein’s and the same as Paul Krugman’s. Coincidence? I don’t think so ;-)]

MT: To say that there are other possibilities is not to say that the current set of beliefs is wrong. The greater the variety of other possibilities to be tried and found wanting, the stronger the likelihood that the current beliefs are near being correct.

RM: I think this statement really clarifies the difference between us in how we think about doing science. I think it’s done this way:

  1. Observe.

  2. Develop a theory, X, to account for observations.

  3. Test theory X. Over and over again.

  4. When theory X cannot account for the results of a test, modify theory X so that it accounts for all previous results as well as the one that X couldn’t handle. Call the new theory X’.

  5. Go to step 2 with X = X’
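For concreteness, the loop in steps 1-5 can be sketched in code. This is purely a hypothetical illustration: the toy observations, the polynomial "theory", and all function names are invented for the example, not drawn from the discussion.

```python
# Hypothetical sketch of the theory-refinement loop in steps 1-5.
# A "theory" here is just a list of polynomial coefficients fitted
# to toy observations.
import numpy as np

def predict(theory, x):
    # evaluate the polynomial (coefficients lowest order first)
    return sum(c * x**i for i, c in enumerate(theory))

def fits(theory, data, tol=0.5):
    # does theory X account for every observation so far?
    return all(abs(predict(theory, x) - y) <= tol for x, y in data)

def revise(theory, data):
    # crude X -> X': allow one more degree of freedom and refit
    xs, ys = zip(*data)
    return list(np.polyfit(xs, ys, len(theory))[::-1])

data = [(x, x**2) for x in range(5)]    # 1. Observe.
theory = [0.0, 1.0]                     # 2. Develop a theory X (here y = x).
for _ in range(5):                      # 3. Test theory X, over and over.
    if not fits(theory, data):          # 4. X fails a test...
        theory = revise(theory, data)   #    ...so modify it into X'.
                                        # 5. Continue testing with X = X'.
```

After one revision the linear theory becomes a quadratic that accounts for all of the observations, which is the sense in which the loop "closes" back on an earlier step.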

You apparently think it’s done this way:

  1. Observe.

  2. Develop one preferred theory, X, and alternatives, Y,Z…, to account for observations.

  3. Test theory Y, Z…

  4. When one of the alternative theories can account for the results of a test, adopt that theory.

  5. Return to step 2 with the new theory as the preferred one.

This seems to be consistent with your statement that “The greater the variety of other possibilities to be tried and found wanting, the stronger the likelihood that the current beliefs are near being correct”. It seems like a poor approach – it doesn’t allow you to directly test the “preferred” theory. The main problem I have with your approach, though, is that you have never reported any results from step 3. This means that, in practice, your approach really involves only the first two steps above. You have never really shown evidence that another possibility has been “tried and found wanting”.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.02.10.14.30]

I think the difference between us is in that you think linearly, whereas I think we should be treating the relationship between theory and science as a closed loop. You are right that I am primarily a theorist. I wasn’t always, as my many published experimental papers will attest. But where I worked, in a Defence Laboratory, I developed a very productive working relationship with someone who, like you, was very practically oriented. Data and theory working together are much more productive than either working alone.
Think of “science” as a formalization of HPCT. There is an
environment on which we act. The (presumably) same environment
influences our sensory systems. Some of our actions influence some
of our perceptions in consistent ways. We reorganize so that we can
best find consistencies in the way our actions can influence our
perceptions so that we can minimize error overall, though probably
our reorganization works more in modular fashion than globally
(another open question in HPCT).
Now reword that to talk of “science”. There is an environment on
which we act and on which we can make observations and measurements.
Some of our actions influence our observations in consistent ways.
We theorize so that we can best find consistencies in the way our
actions affect our observations and measurements, minimizing
anomalies between theories and between theories and observations,
though our theories usually apply more locally than globally.
Your approach (step 3) of sticking with a theory X so long as it
explains all the results is fine, if you have come across a good
theory X by some happy chance. How do you do that without having
first gone through what you characterize as my approach? Did theory
X spring fully formed from your mind, as Aphrodite from the brow of
Zeus?
In point of fact, you violate your step 4 in what you describe as
your approach, when theory X is “HPCT using the “standard” type of
control unit”. We all know of the characteristic deviations between
this control model and human performance, but instead of following
your “step 4”, you say “It fit well enough for me – isn’t it
perfect”.
Quite so. Does this not suggest to you that my approach is not as
you characterize it to be?
I am not at all averse to your hill-climbing approach. It certainly
complements the “genetic algorithm” approach you designate as mine.
So long as you don’t get caught on a local maximum, as physics
almost was around 1900, the hill-climbing approach is just fine. But
I am averse to recognizing that there is a hill and refusing to try
to climb it, and I am averse to refusing to look across the valley
to see whether there might be higher hills to climb.
Experimentalists can’t get very far without theorists. The converse
is not always true.
Martin

···


[From Rick Marken (2013.02.10.2000)]

Martin Taylor (2013.02.10.14.30)–

MT: I think the difference between us is in that you think linearly, whereas I think we should be treating the relationship between theory and science as a closed loop.

RM: I just described science as a closed loop process, thus:

  1. Observe.

  2. Develop a theory, X, to account for observations.

  3. Test theory X. Over and over again.

  4. When theory X cannot account for the results of a test, modify theory X so that it accounts for all previous results as well as the one that X couldn’t handle. Call the new theory X’.

  5. Go to step 3 with X = X’

Going from step 5 back to step 3 (I had it as step 2 but 3 is better) closes the loop.

MT: Your approach (step 3) of sticking with a theory X so long as it explains all the results is fine, if you have come across a good theory X by some happy chance.

RM: Yes, I came across PCT by happy chance.

MT: How do you do that without having first gone through what you characterize as my approach? Did theory X spring fully formed from your mind, as Aphrodite from the brow of Zeus?

RM: Well, I wasn’t the one who came up with PCT but however it was come up with, it was come up with and so far it works great.

MT: In point of fact, you violate your step 4 in what you describe as your approach, when theory X is “HPCT using the “standard” type of control unit”. We all know of the characteristic deviations between this control model and human performance, but instead of following your “step 4”, you say “It fit well enough for me – isn’t it perfect”.

RM: Actually, I don't "know of the characteristic deviations between this control model [PCT/HPCT I presume] and human performance". Please tell me what they are; and tell me what changes to the PCT model should be (or have been) made to account for these deviations. Finding deviations from PCT would be very exciting; I can’t believe I haven’t noticed them.

MT: Experimentalists can't get very far without theorists. The converse is not always true.

RM: I don’t think either can do without the other. I think you have to be both theorist and experimentalist in order to be a scientist. An experimentalist who is not a theorist is just making disconnected observations; a theorist who is not an experimentalist is a religionist.

If one’s talents run more toward theory than experimentation then at least the theorist should be able to show how any theory he develops accounts for existing data and what data it predicts. If one’s talents run more toward experimentation than theory then at least the experimenter should know how a theory works and, thus, what can be done to test its predictions.

I don’t think either skill is really more important than the other; you have to be able to do both to do science, even if you are better at doing one than the other.

Best

Rick

···


Richard S. Marken PhD

rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.02.11.09.10]

[From Rick Marken (2013.02.10.2000)]

Martin Taylor (2013.02.10.14.30)–

MT: Your approach (step 3) of sticking with a theory X so long as it explains all the results is fine, if you have come across a good theory X by some happy chance.

RM: Yes, I came across PCT by happy chance.

MT: How do you do that without having first gone through what you characterize as my approach? Did theory X spring fully formed from your mind, as Aphrodite from the brow of Zeus?

RM: Well, I wasn't the one who came up with PCT but however it was come up with, it was come up with and so far it works great.

I used "you" in the generic sense, not meaning "you -- Rick". And I agree that it “works great”. I stipulated that in the message to which you were responding. The point was that it is usually the small anomalies in a theory that “works great” that lead to greater understanding.

MT: In point of fact, you violate your step 4 in what you describe as your approach, when theory X is “HPCT using the “standard” type of control unit”. We all know of the characteristic deviations between this control model and human performance, but instead of following your “step 4”, you say “It fit well enough for me – isn’t it perfect”.

RM: Actually, I don't "know of the characteristic deviations between this control model [PCT/HPCT I presume] and human performance". Please tell me what they are; and tell me what changes to the PCT model should be (or have been) made to account for these deviations. Finding deviations from PCT would be very exciting; I can’t believe I haven’t noticed them.

Nor can I.

Have you never looked closely at the fitted tracks at places where the target changes location or velocity abruptly?

Actually, as I said before, I wasn't talking about deviations from PCT or even from the HPCT model, but from the leaky-integrator output function. When I discussed the characteristic anomalies with Bill a few years ago, I had the impression that the little deviations between human and model in low-level tracking tasks were well known and ill-understood. And if I knew what changes to the output function would resolve the anomalies, I would not have mentioned them as an open question within the HPCT structure.

MT: Experimentalists can't get very far without theorists. The converse is not always true.

  RM: I don't think either can do without the other.
It depends what you mean by "go far". If you mean "gain general acceptance", I agree with you. If you mean “advance understanding of some aspect of science” I don’t. Think of relativity, which was a novel way of interpreting the idea that what matters is perception. Einstein realized that this was important and theorized a whole lot of things that would never have been thought of in the “nearly finished” physics of 1900. There were no data to suggest relativity might be a more accurate way of describing the world than the prevailing “absolute universe” view, so it didn’t catch on generally for nearly 20 years, until an out-of-left-field prediction turned out to be correct. Then, with data that preferred relativity to the Newton-Maxwell view, it did gain general acceptance. But the discrepancy that was predicted exactly by Einstein was observationally tiny, and the observation was sufficiently hard to make that probably nobody would have looked for it if he had not predicted it years earlier.

In respect of comparing theories, let's go back to a couple of possible alternatives to the HPCT structure, and ask which of your many experiments and demos would give different data under these different alternative possibilities. I think that the answer is “none”, but I expect you to correct me if I am wrong.

The alternatives are based on the HPCT structure, each with a slight modification (your preferred “hill-climbing” approach).

Possibility 1: Lateral feedback loops are permitted within a level, perceptual signal to perceptual input or output signal to reference input at the same level, rather than having lateral connections limited to “the imagination loop” connection between an output and a perceptual input at the same level. Such connections can theoretically create sharpened perceptions or decisive outputs that are hard to achieve with a strictly tree-structured perceptual or output configuration. Experiments would be needed to show that such lateral connections cannot exist, or theory is needed to argue that if they existed, reorganization would eliminate them. I know of no such experiments or theory.
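How lateral connections might "sharpen" perceptions can be illustrated with the classic lateral-inhibition computation. The following toy sketch is only an illustration of that general idea; the signal values and the inhibition weight are invented, and nothing here is a claim about actual HPCT circuitry.

```python
# Toy illustration of lateral inhibition among perceptual signals at
# one level: each signal is reduced by a fraction of its neighbours'
# activity, which sharpens a broad peak of activation. All values
# are invented for the example.

def lateral_inhibition(signals, k=0.4):
    n = len(signals)
    sharpened = []
    for i, s in enumerate(signals):
        neighbours = [signals[j] for j in (i - 1, i + 1) if 0 <= j < n]
        sharpened.append(max(0.0, s - k * sum(neighbours)))
    return sharpened

broad_peak = [0.2, 0.6, 1.0, 0.6, 0.2]
sharp = lateral_inhibition(broad_peak)
# the centre of the peak now stands out more against its flanks
```

With the inhibition weight k at 0.4, the flanking signals are suppressed proportionally more than the central one, so the ratio of centre to flank grows. That is the kind of "sharpening" that is hard to get from a strictly tree-structured perceptual configuration.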

Possibility 2: Rather than there being a category level, above which all perceptual types are based in their specific kind of logical relationship, there is a category boundary that can be visualised as lying vertically alongside all the ascending analogue levels. There is no well-defined “top level” type on either the analogue or the logical side of the category boundary (though the number of levels is necessarily finite). Such a configuration allows for categories of intensities, transitions, sequences, etc. without “level jumping” perceptual input structures. Such categories do occur in my own subjective perception (remember that Bill’s own subjective perception is the basis on which his HPCT levels have been developed).

I rather think that all the available experimental data would give each of these possibilities (and probably many more) exactly the same cachet of “working great”, because with those studies they would all give the same predictions. There’s nothing particularly novel about these possibilities, but it might be possible to develop experiments to discriminate among them, if anyone was interested in seeing whether HPCT is a unique “working great” configuration of PCT. Maybe it has been done already, but not to my knowledge.

Martin

Hi Martin,

I’m sorry I didn’t understand you right. To be honest, I didn’t wholly understand what your answer was about.

My assumption that you are maybe proposing a novelty was your sentence :

MT :

It’s not easy for me to see how, but if that’s not it, then there must be another place in some revised HPCT circuitry where motor and sensory inputs are used in coordination.

HB :

It must be that I really misunderstood everything. Whatever. Please accept my apologies.

Best,

Boris

···

----- Original Message -----

From:
Martin Taylor

To: CSGNET@LISTSERV.ILLINOIS.EDU

Sent: Sunday, February 10, 2013 4:01 PM

Subject: Re: HPTC and Tacit Knowledge

[Martin Taylor 2013.02.10,09.24]
On 2013/02/10 7:32 AM, boris_upc wrote:

And as I see the debates on CSGnet, Martin is trying all the time to improve PCT, but Rick somehow tries to stop him.

I don’t see myself as trying to improve PCT, so much as trying to follow the implications of understanding PCT.

If PCT is correct, as I think it must be, what are the implications? As I said (I believe over on the ECACS forum) I always look first to see whether an observation or experiment can be interpreted, or better, predicted, within HPCT. Usually it can. Even the dendrite study you referenced can be interpreted within HPCT, since HPCT does require an energy system and some way of time-multiplexing what perceptions are controlled out of the myriads that could be controlled at any one moment.

I don’t think ANYONE understands the full implications of the HPCT structure. And even within that structure there are lots of questions. For example, how are patterns of higher-level outputs converged to form reference values for the control units at the next lower level? That question has been simmering on CSGnet ever since I first joined 20 years ago. It is finessed in the simulations by making the reference input a simple summation of the converging output values from a higher level, but this cannot be the whole answer at all levels of the hierarchy, even if it is the correct answer at the lowest levels. If you have your feet, a bicycle, and a car as affordances for getting you to where you want to go, an output that produces 40% car and 60% bicycle doesn’t leave you with any means of transport.

A related question is the role of associative memory. In B:CP Bill suggested a possibility – that rather than reference signals being derived directly from the outputs at higher levels, they come from the outputs of associative memories that are addressed by the higher-level control unit outputs. The effect is to set the lower-level control units with reference values that would regenerate a perception that existed when control had been successful on a previous occasion. Such a system would produce a “car” reference or a “bicycle” reference, but would never produce “40% car and 60% bicycle”.
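The "40% car and 60% bicycle" point can be made concrete with a small sketch. Everything below (the options, the weights, the memory contents) is invented for illustration; it is not a description of real HPCT circuitry.

```python
# Two ways a lower-level reference value could be derived from
# converging higher-level outputs. All values are invented.

higher_outputs = {"car": 0.4, "bicycle": 0.6}   # converging outputs

# Scheme A: the reference input is a simple blend (weighted sum) of
# the converging outputs. For discrete affordances like these, the
# result is a 40%/60% mixture that is no usable means of transport.
blend = dict(higher_outputs)

# Scheme B (the B:CP suggestion described above): the higher-level
# outputs address an associative memory whose entries are reference
# states that worked on previous occasions. The retrieved reference
# is always one whole remembered state, never a blend.
memory = {"car": "drive the car", "bicycle": "ride the bicycle"}
address = max(higher_outputs, key=higher_outputs.get)  # strongest output
reference = memory[address]
```

The design difference is that Scheme B always yields a "car" reference or a "bicycle" reference, whereas Scheme A can only blend.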

I think that Bill’s suggestion has many possible ramifications that not only solve the question of how reference values are derived from higher-level outputs, but also address some other issues, such as time-binding and context-dependent learning independent of the reorganization system that forms part of HPCT. But one can’t study these aspects of HPCT (they aren’t additions to HPCT) without being able to figure out theoretical predictions that could be tested against experiments designed to reject them. That track has hardly been followed, even by Bill.

HPCT does not require that output functions of control units be leaky integrators. Within HPCT, output functions could be any function at all, and could be functions that vary over time (for example, under control of a “Gain” signal, as has been tried in some experiments) or that have different characteristics in different control units or levels of perception. Leaky integrators work well in the low-level tracking studies, but they show characteristic patterns of error when compared against real tracking data when the target has abrupt changes of position or velocity. What form of output function might track equally well in the global sense while eliminating these tiny but consistent patterns of mismatch with human performance? Just as in physics, in PCT it is the small consistent discrepancies between theory and observation that usually lead to big insights. Here’s a place where such an insight is needed.
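A leaky-integrator output function of the kind mentioned here is easy to simulate. The sketch below is a minimal single control loop with arbitrary parameter values chosen for illustration, not a model fitted to any tracking data.

```python
# Minimal single control loop whose output function is a leaky
# integrator: the output accumulates gain * error but continuously
# "leaks" back toward zero. Gain, leak rate, and time step are
# arbitrary illustrative values.

def track(target, gain=50.0, leak=0.05, dt=0.01, steps=400):
    output = 0.0               # cursor position produced by the loop
    trace = []
    for t in range(steps):
        error = target(t) - output        # reference minus perception
        output += dt * (gain * error - leak * output)
        trace.append(output)
    return trace

# target position steps abruptly from 0 to 1 at t = 100
trace = track(lambda t: 0.0 if t < 100 else 1.0)
```

Immediately after the abrupt step the model output relaxes smoothly toward the new target; it is exactly in this region, right after abrupt changes of target position or velocity, that the model/human mismatches described above would show up.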

None of what I have said, nor of what I might add in the same vein, in any way suggests deficiencies in HPCT. They are questions within HPCT. The point I raised in my earlier message was that HPCT isn’t the only possible structure of control systems that would legitimately be called PCT. No matter what the science, if it is science, it must accept the possibility that its suppositions might need radical alteration, just as Galileo destroyed Aristotle’s theory of motion and Einstein superseded Newton. To say that there are other possibilities is not to say that the current set of beliefs is wrong. The greater the variety of other possibilities to be tried and found wanting, the stronger the likelihood that the current beliefs are near being correct.

Martin

Hi Rick,

I’m really surprised about your “mature” answer :)). Maybe I didn’t understand you right either, as it seems to me that our standpoints aren’t as far apart as I thought. I’ll leave you to your good work. But maybe you could consider one more time your thought:

RM earlier :

The problem is that research results that are obtained without an understanding that the system under study may be a control system are difficult to make sense of in terms of closed loop models. The main missing variable is, of course, the controlled variable. So we can interpret these results in terms of PCT but it is always just wild guesses (about the controlled variable(s)) unless the researchers themselves have collected the data in the context of a control model of the system under study.

HB:

Maybe not everything is as it seems to be. Maybe there is some “hidden” potential knowledge which is not seen at first look :))

About Carver and Scheier, I agree with you about the first book: *Attention and Self-Regulation: A Control-Theory Approach to Human Behavior*. It only seems to me that it was 1981, not 1982.

Bill and I spent quite some time analyzing their 1998 book *On the Self-Regulation of Behavior*. Bill was somewhat satisfied with their explanation of the hierarchy, where Bill is the only serious reference. But I thought that they had stolen a generic diagram from him, which is clearly the same as his. The problem was that they didn’t mention him as a reference.

So I don’t think that their studies don’t recognize the nature of a goal-attaining system – a control system. I think that they are recognizing the control hierarchy, and they “incorporate” emotions in an interesting way into the hierarchy, as a control system that is “following” the magnitude of error in the comparator. I think that their main problem was that they wanted to “steal” the diagram. Bill was really surprised when he noticed that they “forgot” him as the main reference. So I exchanged some words with them. They didn’t manage to prove to me where else that diagram could have been taken from, although they mentioned some authors, among them also Wiener. But Bill gave me a really good suggestion. He asked me to ask them where they could find an input function in Wiener’s diagram. In the end they didn’t manage to prove anything to me, and they broke off the conversation. So we have to settle the problem somewhere else.

Best,

Boris

···

----- Original Message -----

From:
Richard Marken

To: CSGNET@LISTSERV.ILLINOIS.EDU

Sent: Sunday, February 10, 2013 6:58 PM

Subject: Re: HPTC and Tacit Knowledge

[From Rick Marken (2013.02.10.1000)]

On Sun, Feb 10, 2013 at 4:32 AM, boris_upc boris.hartman@masicom.net wrote:

HB: Rick, I think that you are missing the point Martin is trying to tell you.
If I understood Martin right …
…Martin is probably trying to tell us that PCT is a general control unit (Bill’s) and one of the structural blocks that is used in specific organizations of PCT units (Bill’s HPCT) which appear in specific parts of the nervous system (the behavioral part).
Now the question, as I understand everything, is: do all parts of the nervous system behave as PCT predicts, and are all parts following the logic of Bill’s HPCT? Or can all parts of the organism be described with “PCT tools” and different HPCT, which could mean different organizations of PCT units?

RM: Yes, I agree that that’s pretty much what Martin is trying to say. And I agree with it; there are other possible ways for control systems to be organized than the one Bill proposed, which we are calling HPCT. All I’m trying to do is encourage people to test the existing proposal – HPCT – which was not just pulled out of thin air but is based on a lot of data and careful reasoning (documented in B:CP) before proposing all the other possibilities.

My beef with Martin is and has always been over how best to move the science of PCT forward. I think the best course is testing; Martin thinks it’s theorizing. I know I’m not going to change Martin’s mind about this; I just want to put in my two cents about it whenever I see the opportunity and Martin’s post about PCT and HPCT being different just provided that opportunity. I don’t think there is anything wrong with Martin’s approach, by the way. Indeed, it’s probably the one that most people find most interesting, not least because you can do it in the comfort of your own armchair. I just think it’s a waste of time. But then what isn’t?:wink:

HB: So it’s somehow obvious to me that Martin doesn’t doubt PCT, but doubts the specific HPCT that makes up the construction of the organism (“real machine”).

RM: I know that Martin doesn’t doubt PCT. That’s the problem (from my perspective). I think it’s important to doubt PCT and subject those doubts to empirical test. Simply speculating about alternative ways the nervous system might be organized does not count (for me) as a worthwhile way to doubt the current theory.

BH: So it seems to me that Martin is trying to tell us something else: that maybe there are parts of the nervous system which could have some other principles or mechanisms of working, different from HPCT,

RM: Yes, I too think that’s what Martin is saying. But we already know that. I’m just saying that it makes more sense to test the existing model before going off and speculating about alternatives. If the tests show that a revision of the model is needed then that is the time to try one of these other possibilities.

HB: My personal opinion is that PCT, if it wants to be a general theory of the nervous system or even a general theory of how organisms work, should be able to “incorporate” all results of experiments and specific PCT testing.

RM: The problem is that research results that are obtained without an understanding that the system under study may be a control system are difficult to make sense of in terms of closed loop models. The main missing variable is, of course, the controlled variable. So we can interpret these results in terms of PCT but it is always just wild guesses (about the controlled variable(s)) unless the researchers themselves have collected the data in the context of a control model of the system under study.

BH: P.S. Oh, just so I don’t forget the question about Carver and Scheier. What makes you think, Rick, that they are “input-output thinkers” or, as it’s put in the “never-ending” story here on CSGnet, “S-R”? Could you show me where in their books or their articles you made your “S-R” conclusion about them?

RM: In their first book (I forget the title) that came out in 1982, they gave a wonderful description of Powers’ theory but then went on to describe research, all of which was based on the causal model. So they themselves may not think in input-output terms, but their research is done in input-output terms; and such research (as I noted above) neglects the possibility that any observed relationship between independent and dependent variable is mediated by controlled variable(s). So it was great that Carver and Scheier saw the merits of PCT early on; they just ended up understanding PCT in the context of the causal model of psychology. So they talk a lot about goals and goal attainment; they just don’t do their studies in a way that recognizes the nature of a goal-attaining system – a control system.

Best

Rick

Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2013.02.11.1810)]

Martin Taylor (2013.02.11.09.10)–

MT: ...The point was that it is usually the small anomalies in a theory that “works great” that lead to greater understanding.

RM: I think you must mean “small deviations of theoretical predictions from data”, not “small anomalies in a theory”. The “anomaly” in a theory that results in small deviations from the data may actually be quite large. This is what happened with relativity, no? As I understand it, the deviations of Newtonian predictions from data were quite small; the theoretical change (from Newton’s to Einstein’s equations) that corrected this “anomaly” was huge.

RM: Actually, I don't "know of the characteristic deviations between this control model [PCT/HPCT I presume] and human performance". Please tell me what they are;

MT: Have you never looked closely at the fitted tracks at places where the target changes location or velocity abruptly?

Actually, as I said before, I wasn't talking about deviations from PCT or even from the HPCT model, but from the leaky-integrator output function. When I discussed the characteristic anomalies with Bill a few years ago, I had the impression that the little deviations between human and model in low-level tracking tasks were well known and ill-understood. And if I knew what changes to the output function would resolve the anomalies, I would not have mentioned them as an open question within the HPCT structure.

RM: I doubt that these small deviations are of the same character as the small deviations that led to relativity. If you look at it in terms of predictable variance I think you will see that the PCT model is accounting for all the variance in behavior (of the output and cursor in a tracking task) that can be predicted. I know of no evidence that the “little deviations” of which you speak represent anything other than noise. One piece of evidence that they are not noise would be data that shows these deviations to be exactly the same on repeated tracking trials with the same disturbance. If you could present some evidence like this suggesting that the deviations of data from model are systematic, I would be very interested in seeing it.
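The test proposed here can be sketched in code: compute the model-minus-human residuals on repeated trials with the same disturbance, and check whether the residuals correlate across trials. Near-zero correlation is what noise would produce; a consistently positive correlation would indicate a systematic deviation. The data below are synthetic, invented purely to illustrate the computation.

```python
# Sketch of the repeated-trials test: correlate residuals (human
# track minus model prediction) across two trials with the same
# disturbance. Synthetic data only.
import random

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

random.seed(0)
n = 200
# a hypothetical systematic model failure: a brief consistent lag
systematic = [0.3 if 50 <= i < 60 else 0.0 for i in range(n)]

def residuals(systematic_part):
    # each trial's residuals = any systematic part plus fresh noise
    return [s + random.gauss(0, 0.1) for s in systematic_part]

r_systematic = correlation(residuals(systematic), residuals(systematic))
r_noise = correlation(residuals([0.0] * n), residuals([0.0] * n))
# r_systematic should come out well above r_noise, which hovers near zero
```

With real data one would substitute the actual residual series from repeated tracking trials; the arithmetic is the same.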

MT: Einstein realized that this was important and theorized a whole lot of things that would never have been thought of in the “nearly finished” physics of 1900. There were no data to suggest relativity might be a more accurate way of describing the world than the prevailing “absolute universe” view, so it didn’t catch on generally for nearly 20 years, until an out-of-left-field prediction turned out to be correct.

RM: I think you are telling this story to suggest that theorists can get along without experimentalists. You imply that Einstein just started theorizing about relativity out of nowhere. But Einstein was much smarter than that. Behind this theorizing were some experimental observations having to do with light moving in space (e.g., Michelson-Morley) that didn’t fit the Newtonian model. These small (but troublesome) deviations of observation from theory were being handled by what I believe were basically kludges (e.g., the Lorentz transformation). So Einstein’s theory was developed to handle these “anomalous” observations (anomalous in the context of the Newtonian model). Einstein developed his theories to account for data (all the existing physics data, including the anomalous results). He didn’t do the experiments himself but he certainly did his theorizing with the experimental data in mind. Theorists cannot get along without experimentalists, and vice versa. Theorists who work in the absence of data are called philosophers or novelists or theologians. They are not scientists.

MT: In respect of comparing theories, let's go back to a couple of
possible alternatives to the HPCT structure, and ask which of your
many experiments and demos would give different data under these
different alternative possibilities. I think that the answer is
“none”, but I expect you to correct me if I am wrong.

The alternatives are based on the HPCT structure, each with a slight
modification (your preferred “hill-climbing” approach).

Possibility 1: Lateral feedback loops are permitted within a level...

Possibility 2: Rather than there being a category level...
I rather think that all the available experimental data would give
each of these possibilities (and probably many more) exactly the
same cachet of “working great”, because with those studies they
would all give the same predictions.

RM: This strikes me as a somewhat bizarre way to go about testing theories. And I’m not even convinced that these verbal descriptions of models would actually produce behavior that matches the behavior of real people. How about implementing just the “lateral feedback loops” model as a computer program and showing us some predictions of how it would behave in a control task?

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.02.11.23.17]

Yes, thanks for the correction.

Well, you are good at programming tracking studies. Why don’t you
just try a few in which the location or the velocity of the target
changes abruptly. See for yourself what happens to the match between
an optimized model and the human track immediately after the abrupt
change. As I said, when Bill and I discussed these consistent
deviations between theory and data a few years ago, I got the
impression that they were well known but ill understood.

Martin

···

[From Rick Marken (2013.02.11.1810)]

        Martin Taylor

(2013.02.11.09.10)–

        MT: ...The point was
that it is usually the small anomalies in a theory that
“works great” that lead to greater understanding.

      RM: I think you must mean "small deviations of theoretical
predictions from data" not “small anomalies in a theory”. The
“anomaly” in a theory that results in small deviations from
the data may actually be quite large. This is what happened
with relativity, no? As I understand it, the deviations of
Newtonian predictions from data were quite small; the
theoretical change (from Newton’s to Einstein’s equations)
that corrected this “anomaly” was huge.

          RM: Actually, I don't "know of the
characteristic deviations between this control model
[PCT/HPCT I presume] and human performance". Please tell
me what they are;

        MT: Have you never
looked closely at the fitted tracks at places where the
target changes location or velocity abruptly?

        Actually, as I said before, I wasn't talking about
deviations from PCT or even from the HPCT model, but from
the leaky-integrator output function. When I discussed the
characteristic anomalies with Bill a few years ago, I had
the impression that the little deviations between human and
model in low-level tracking tasks were well known and
ill-understood. And if I knew what changes to the output
function would resolve the anomalies, I would not have
mentioned them as an open question within the HPCT
structure.
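The leaky-integrator output function MT refers to is the standard output stage in PCT tracking models: the output moves toward gain times error at a rate set by a slowing factor. A minimal sketch of such a compensatory loop, with illustrative gain and slowing values (not parameters fitted to any subject's data):

```python
# Minimal sketch of a compensatory tracking loop with a
# leaky-integrator output function (the standard PCT form).
# The gain k and slowing factor s are illustrative values,
# not fitted parameters.
def run_tracking(disturbance, reference=0.0, k=40.0, s=0.02):
    """Simulate one run; cursor = output + disturbance."""
    output = 0.0
    cursor_track, output_track = [], []
    for d in disturbance:
        cursor = output + d                  # environment
        error = reference - cursor           # comparator
        output += s * (k * error - output)   # leaky integrator
        cursor_track.append(cursor)
        output_track.append(output)
    return cursor_track, output_track

# A constant disturbance is almost fully cancelled by the output:
cursor, output = run_tracking([10.0] * 500)
```

With k = 40 the steady-state cursor error against a constant disturbance is d/(1 + k), about 2% of the disturbance; against a time-varying disturbance, the slowing factor produces the small lags whose residuals are under discussion here.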

      RM: I doubt that these small deviations are of the same
character as the small deviations that led to relativity. If
you look at it in terms of predictable variance I think you
will see that the PCT model is accounting for all the variance
in behavior (of the output and cursor in a tracking task) that
can be predicted. I know of no evidence that the “little
deviations” of which you speak represent anything other than
noise.

[From Rick Marken (2013.02.12.1330)]

Martin Taylor (2013.02.11.23.17)

      RM: I doubt that these small deviations are of the same
character as the small deviations that led to relativity. If
you look at it in terms of predictable variance I think you
will see that the PCT model is accounting for all the variance
in behavior (of the output and cursor in a tracking task) that
can be predicted. I know of no evidence that the “little
deviations” of which you speak represent anything other than
noise.

MT: Well, you are good at programming tracking studies. Why don't you
just try a few in which the location or the velocity of the target
changes abruptly. See for yourself what happens to the match between
an optimized model and the human track immediately after the abrupt
change. As I said, when Bill and I discussed these consistent
deviations between theory and data a few years ago, I got the
impression that they were well known but ill understood.

RM: OK, what I did was two runs of a compensatory tracking task using the same disturbance. I added the outputs (O) and cursor (C) tracks together on the two runs. Based on the correlation between O on trials 1 and 2 (r.12 = .9976), and using the following formula for predictable variance, r.p = r.12/(r.12 + .5(1 - r.12)), I calculate that the predictable variance in the sum of outputs on the two trials is 99.88%. A simple control model with best-fitting gain and slowing accounts for 99.82% of the variance, so the model accounts for just about all the predictable variance in handle movements. Doing the same for cursor movements (C), only 37% of the variance in the sum of cursor movements on the two trials is predictable, and the model accounts for only 11% of the variance in cursor movements. The RMS error for predicting the actual output from the model output is only 1% of the total possible error; the RMS error for predicting actual C from model C is 12% of the total possible error.
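The predictable-variance formula above can be checked directly; a short sketch using the between-trial correlation reported in the text:

```python
# Sketch of the predictable-variance calculation used above.
# r12 is the correlation between the same measure on two trials
# run with the same disturbance.
def predictable_variance(r12):
    return r12 / (r12 + 0.5 * (1.0 - r12))

# RM's reported between-trial correlation for the outputs:
print(round(100 * predictable_variance(0.9976), 2))  # → 99.88
```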

So it looks like there is room for some theoretical work to improve the fit of the model to the cursor movements. If you can give me some suggestions for how to do that I’d appreciate it.

But I would rather have you exercise your theoretical talents on a problem I was unable to solve so I never tried to publish my PCT model of a coordinated movement task that is described here:

http://www.mindreadings.com/Coordination.html

The model works pretty well in accounting for the coordinated movements that produce symmetric and asymmetric movements of the flags. But as you’ll see if you read through the whole piece, I was unable to get the model to make the spontaneous changes from asymmetric to symmetric movements. I didn’t want to kludge anything up to make that happen; I thought (and still think) that the reversal should “fall out” of the normal operation of the model as the speed of rotation of the flags is increased (that’s when it happens). It must have something to do with how the relative movements of the flags are perceived, but I can’t solve it. If you can solve it we might have a very nice paper to publish.

If you are game, I can send you my versions of the program, which I’ve written in both Visual Basic and Java. The VB program is in Excel so it might be easiest for you to use. I haven’t done anything with this since 2004, so it might take me a while to figure out what the heck the program does. But it really would be nice to see if you could come up with something that would allow the PCT model to match that one aspect of the data that I can’t quite match. Then I could be Michelson to your Einstein;-)

Best regards

Rick

···

Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.02.12.23.08]

When I brought this up, I used it as an example of an ongoing
puzzle, though your experiment is not the one that I expected you to
try. When I first came across the “abrupt change moment” effect, I
asked Bill whether he had a solution. As I remember, he said he had
often noticed it but hadn’t really worried about trying to solve it
because it seemed a very small thing in a system that overall worked
very well, which is true. I had no solution then, and I have none
now, which is why it remains a puzzle.
I came to ask Bill about this because I had been investigating
another phenomenon that it would be nice if you could verify or
contradict. The basic theory says that the parameters of a control
loop are independent of the waveform of the disturbance. I was
finding that the parameters of the optimized model differed
according to whether the disturbance waveform was built by adding
sinusoids, triangular waves, or square waves, and that seemed to
present another puzzle that I didn’t mention in my earlier messages.
It’s a long time ago and I don’t remember the details, though I
could probably find them with some difficulty. But maybe you could
very quickly run a few trials and see whether you get the same
effect (and whether you see the “abrupt change moment” consistent
difference between model and human when you use the triangle and
square-based disturbances).
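The three disturbance constructions MT describes can be sketched as sums of components with the same fundamental frequencies. The frequencies, amplitudes, and exact construction below are illustrative assumptions, not MT's original waveforms:

```python
import math

# Sketch: disturbances built from the same low frequencies as sums
# of sinusoidal, triangular, or square components. Frequencies and
# amplitudes are illustrative assumptions, not MT's original waves.
def make_disturbance(n_samples, freqs, shape="sine"):
    wave = []
    for i in range(n_samples):
        t = i / n_samples                      # one run = one unit of time
        v = 0.0
        for f in freqs:
            phase = (f * t) % 1.0              # position within the cycle
            if shape == "sine":
                v += math.sin(2 * math.pi * phase)
            elif shape == "triangle":
                v += 4 * abs(phase - 0.5) - 1  # triangle wave in [-1, 1]
            else:                              # "square"
                v += 1.0 if phase < 0.5 else -1.0
        wave.append(v)
    return wave

freqs = [1, 2, 3]                              # cycles per run
sine_d = make_disturbance(1000, freqs, "sine")
square_d = make_disturbance(1000, freqs, "square")
```

The point of the comparison would be to fit the same control model to tracking runs against each waveform and see whether the best-fitting gain and slowing parameters differ.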
All I can do is guess. Here is my first guess, which I may well
repudiate in the morning.
My presupposition is that under neither condition do the subjects
control rotation. In the “symmetry” condition they control the
orientation of the imaginary line between the flags, keeping the
flags “level” (equidistant from the subject). Under the “antiphase”
condition they cannot control this. What they control (remember,
this is just a guess) is the relative angle between the two arms of
the rotating flags, with a reference that they be parallel.
Comparing two angles when both are changing is a higher level
(slower, presumably) perception than what amounts to a simple
pursuit tracking experiment with the two flags. You did an
experiment showing the effects of control at different levels as the
speed of presentation changes. I suggest that the reversion may be
happening because there comes a speed at which the rotating parallel
perception cannot be controlled, while the simple “level” tracking
perception can.
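The two candidate controlled variables in this guess can be written as perceptual functions of the display geometry. This is only a sketch of the conjecture; the coordinate conventions and names are assumed, not taken from RM's program:

```python
import math

# Sketch of the two candidate controlled perceptions in MT's guess,
# computed from flag positions and arm angles. The geometry and
# names here are illustrative assumptions.
def line_orientation(x1, y1, x2, y2):
    """Orientation of the imaginary line between the two flags;
    zero means the flags are 'level'."""
    return math.atan2(y2 - y1, x2 - x1)

def relative_angle(theta1, theta2):
    """Unsigned angle between the two rotating arms; zero means
    the arms are parallel (the conjectured reference)."""
    diff = (theta2 - theta1) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

# Parallel arms give zero error against a 'parallel' reference:
print(relative_angle(0.3, 0.3))  # → 0.0
```

In a model, each perception would feed its own comparator; the conjecture is that the relative-angle loop, being higher level and presumably slower, loses control first as rotation speed increases.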
It’s just a guess, and needs some kind of experiment to test it. One
possible experiment might be to have the flags on arms of different
lengths, with pivot points at different distances from the subject,
which would make the “symmetry” condition a nearer match in
perceptual level to the “antiphase” condition, since the subject
could not use the orientation of the line between the flags as a
controlled perception, but would have to use some perception related
to the relative angles of the two arms. I suppose it would be easier
to do it with a digital simulation these days. In simulation, you
could present or hide the flag arms and the pivot point. You would
still have to have the under-table input devices, though, so it’s
not something you could do with a simple mouse or trackpad.
I don’t think this experimental suggestion is a very good one, but
it’s my best for the moment, as is my guess about the reason for the
spontaneous change of control behaviour.
Martin

···

[From Rick Marken (2013.02.12.1330)]

        Martin Taylor

(2013.02.11.23.17)

          RM: I doubt that these small deviations are of the same
character as the small deviations that led to relativity.
If you look at it in terms of predictable variance I think
you will see that the PCT model is accounting for all the
variance in behavior (of the output and cursor in a
tracking task) that can be predicted. I know of no
evidence that the “little deviations” of which you speak
represent anything other than noise.

        MT: Well, you are good
at programming tracking studies. Why don’t you just try a
few in which the location or the velocity of the target
changes abruptly.

      RM: OK, what I did was two runs of a compensatory tracking
task using the same disturbance. I added the outputs (O) and
cursor (C) tracks together on the two runs. …

      So it looks like there is room for some theoretical work to
improve the fit of the model to the cursor movements. If you
can give me some suggestions for how to do that I’d appreciate
it.

      But I would rather have you exercise your theoretical talents
on a problem I was unable to solve so I never tried to publish
my PCT model of a coordinated movement task that is described
here:

      http://www.mindreadings.com/Coordination.html

  The model works pretty well in accounting for the coordinated
movements that produce symmetric and asymmetric movements of the
flags. But as you’ll see if you read through the whole piece, I
was unable to get the model to make the spontaneous changes from
asymmetric to symmetric movements.

[From Rick Marken (2013.02.13.1740)]

Martin Taylor (2013.02.12.23.08)–

      RM: But I would rather have you exercise your theoretical talents
on a problem I was unable to solve so I never tried to publish
my PCT model of a coordinated movement task that is described
here:

      http://www.mindreadings.com/Coordination.html

  The model works pretty well in accounting for the coordinated
movements that produce symmetric and asymmetric movements of the
flags. But as you’ll see if you read through the whole piece, I
was unable to get the model to make the spontaneous changes from
asymmetric to symmetric movements.

MT: All I can do is guess. Here is my first guess, which I may well
repudiate in the morning.

My presupposition is that under neither condition do the subjects
control rotation. In the “symmetry” condition they control the
orientation of the imaginary line between the flags, keeping the
flags “level” (equidistant from the subject). Under the “antiphase”
condition they cannot control this. What they control (remember,
this is just a guess) is the relative angle between the two arms of
the rotating flags, with a reference that they be parallel.
Comparing two angles when both are changing is a higher level
(slower, presumably) perception than what amounts to a simple
pursuit tracking experiment with the two flags. You did an
experiment showing the effects of control at different levels as the
speed of presentation changes. I suggest that the reversion may be
happening because there comes a speed at which the rotating parallel
perception cannot be controlled, while the simple “level” tracking
perception can.

RM: This is great, Martin! Just what I wanted: a suggestion regarding an alternative controlled variable!

If I can work up the energy to work on this again I’ll try it out.

Thanks.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com