Measuring Input-Output Characteristics of Components of a Closed Loop: Redux

[From Bruce Abbott (2014.03.18.1330 EDT)]

Adam Matic 2014.03.17 1645 cet –

Bruce Abbott (2014.03.18.0945 EDT)

BA: But as Fechner demonstrated, one can use difference thresholds to construct an actual input function relating the intensity of a sound wave to its perceived loudness. (By the way, in that function, the zero-point falls at the absolute threshold for intensity – that’s how the absolute threshold factors into the mathematical function.)
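As a concrete sketch of BA’s point: under Fechner’s logarithmic law, a loudness scale built up from difference thresholds is zero exactly at the absolute threshold. (The constant k and the threshold value below are arbitrary illustration choices, not measured values.)

```python
import math

def fechner_loudness(intensity, absolute_threshold, k=1.0):
    """Fechner's law: perceived magnitude grows as the log of intensity
    relative to the absolute threshold, so the zero-point of the scale
    falls at the absolute threshold itself."""
    if intensity <= absolute_threshold:
        return 0.0  # below threshold: nothing is perceived
    return k * math.log(intensity / absolute_threshold)

# the zero-point falls at the absolute threshold:
print(fechner_loudness(1e-12, 1e-12))             # 0.0
# ten times the threshold intensity:
print(round(fechner_loudness(1e-11, 1e-12), 3))   # 2.303
```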

AM:

What the subject is controlling is the difference between two intensities, as in the method of adjustment. Would you say that the various difference thresholds are related to the gain of this difference controlling system?

I mean, if the gain of this system is low, then it won’t find small differences. If it is high, it will find very small differences.

BA: It could relate to gain, but the resolving power of a perceptual system is affected by other factors as well, such as how much “noise” is present in the system and, in the case of the visual system, the type and density of photoreceptors (somewhat like pixel size in a digital camera).

BA: I was going to use the same example to distinguish between controlled versus uncontrolled perceptions. Where the target appears on the screen is not under your control; the computer program “decides” that. What is under your control is the cursor position, which you control by moving the mouse.

AM:

I don’t think that position of the cursor on the screen is a controlled variable.

BA: Let me ask you this: can you put the cursor in whatever vertical position you wish (within the limits determined by the program)? If so, then you are controlling the position of the cursor.

“Position of the cursor on the screen” is a computer variable defined in the coordinate system of the screen, as is the “position of the target on the screen”, as is the “difference between positions of target and cursor”. Those are quantities in the model of the environment. The perceptual signal of the model is inputGain*positionDifference, and only this quantity is being controlled by our model control system. The system varies the position of the handle; the handle position, together with the disturbance variables, determines the cursor position, which in turn determines the position-difference variable. All the variables in the last sentence are parts of the environment of the model control system.

BA: And just how is the model controlling that difference? By moving the simulated handle (the actions) to move the (computed) position of the cursor closer to the (computed) position of the target. It doesn’t just directly change a positional difference. The program computes target position without accounting for the user’s actions – it simply applies a computed disturbance value to target position, iteration by iteration. It changes the difference between target position and cursor position only by moving the cursor in accordance with the model’s simulated actions. In other words, it reduces the error between (computed) target position and (computed) cursor position by controlling the position of the cursor.
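The loop Bruce describes can be sketched in a few lines – a minimal leaky-integrator control model in the style of standard PCT tracking demos; the gain and slowing values are invented for illustration, not taken from TrackAnalyze:

```python
def run_tracking_model(disturbance, gain=50.0, slowing=100.0):
    """One elementary control unit: each iteration the program applies
    the disturbance to the target, the model perceives the cursor-target
    difference, and the error drives the simulated handle, which is the
    only way the model can move the cursor."""
    handle = 0.0
    cursor_log = []
    for d in disturbance:
        target = d                    # disturbance applied to target position
        cursor = handle               # cursor follows the simulated handle
        perception = cursor - target  # the perceived difference
        error = 0.0 - perception      # reference for the difference is zero
        handle += (gain * error - handle) / slowing  # leaky-integrator output
        cursor_log.append(cursor)
    return cursor_log

# With a constant disturbance the cursor converges on the target:
trace = run_tracking_model([10.0] * 300)
print(round(trace[-1], 2))   # 9.8 (= 10 * gain/(gain+1), the usual small tracking error)
```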

That would make retinal position of the target stick a controlled and perceived variable as much as retinal position of the cursor stick. Hypothetically, both are being controlled by some eye-focusing-on-object system, or by a different type of position-difference-producing function. Either way, they are both controlled variables. Only the hypothesis that the controlled variable is the difference between their positions is experimentally supported.

BA: Adam, it’s a bit disconcerting to me that you haven’t entered into a discussion with me relating to the points I’ve raised; I have no idea whether you agree or disagree with them or even whether you have understood them correctly. Thus it is hard to know what else to say that might help you to understand the difference between controlled and uncontrolled perceptions.

In my previous post I noted that we do perceive both the target line and cursor line as separate objects on the screen, each with its own set of properties – position, length, thickness, color, etc. By design, the only property of those objects that the user is able to control for the purpose of tracking the target is the vertical position of the cursor – through the action of moving the mouse in the appropriate direction. That’s all there is to it. The user does not control the vertical position of the target.

I’ve also noted that other aspects of these objects may be under the user’s control, such as whether they are visible or not (you can look at them or not), or where the image of these objects is focused on the retina. But that is irrelevant to the fact that, for purposes of keeping the cursor aligned with the target, the user can only do that by moving the mouse and thereby moving the cursor. But in response you point out that retinal position is a controlled variable, etc., as if that were a telling argument against the view I’m defending, which is that the user has control only over cursor position as the means of keeping the cursor aligned with the target.

Bruce

[Martin Taylor 2014.03.18.11.35]

That's one example. The paragraph you quoted from me is rather wider in scope. But let’s pursue your example.

I would make one change to this, and make one comment. The change is that I would alter “the possible controlled variable” to “a variable possible to control”.

The comment is that the word “threshold” says something about control systems that might use that particular perception. If the range of the input is below threshold, control is impossible, which makes it likely that the function relating d-o to error has a flat region around zero – a “tolerance zone”. Tolerance zones are useful in allowing many control systems to function without conflict in spite of restriction in the available degrees of freedom.
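Martin’s tolerance-zone idea can be sketched as an error function with a flat region; the shape and the edge behaviour below are one guess at what “flat region around zero” could mean, not a model from the literature:

```python
def error_with_tolerance(reference, perception, tolerance):
    """Error function with a 'tolerance zone': perceptual differences
    smaller than the tolerance produce zero error, hence no corrective
    output, so many systems can share scarce degrees of freedom."""
    e = reference - perception
    if abs(e) <= tolerance:
        return 0.0
    # outside the zone, error picks up from the zone's edge
    return e - tolerance if e > 0 else e + tolerance

print(error_with_tolerance(0.0, 0.5, tolerance=1.0))   # 0.0 (inside the zone)
print(error_with_tolerance(0.0, 3.0, tolerance=1.0))   # -2.0
```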
I don’t see that it is even a search for a controlled variable, let alone a test for one. The experiment gives the same result (or close; long ago I published a paper explaining some subtle differences) whether the tested perception is being controlled or not.

I am confused about what could be confusing about the concept of a perception that you are not at this moment controlling, so I can’t help you with that. To me it seems like such an obvious concept. I look out my window at the neighbour’s house wall. I perceive that it is made of brick. I am not trying to change it into a concrete wall. In fact I am not trying to make it “not brick” or to keep it brick if he were to decide to change it. I am not controlling my perception that his house wall is made of brick. Does that confuse you? It seems straightforward to me.

Two perceptions are under control here. The perceived difference in cursor and reference is controlled, and the means of controlling the difference perception is to control the cursor position perception. The difference perception control unit outputs a value that is the reference for the cursor position controller. It’s the same as the car-in-lane perception so often trotted out as an example; the position of the car in its lane is controlled, the output being fed as a reference value for the controller of steering wheel angle.

Who knows? What matters for the analysis is that we CAN perceive the difference in position. Rick thinks he has shown that what we actually perceive is the angle of an imaginary line connecting cursor and reference, rather than the actual position difference. It doesn’t matter which, for the control analysis, though as a question of how perceptual systems work, it is a very interesting question. It’s just a different question.

The last sentence seems totally unconnected with the rest of the paragraph. Could you clarify how you make this leap?
Martin

···
[From Adam Matic 2014.03.17]

MT: Sure, but these are all part of the environment of the control unit that controls for equality between the tested perception and the intended output answer. If they are good controllers, they are irrelevant to the question at issue. All they do is ensure that the answer output is the one for which the relationship perceiver’s output created the reference value. If they affect the perception of following stimuli, that’s irrelevant to the one that just happened.

AM: Let me backtrack a second.

The issue is whether we’ve learned something about input functions from the experiment with sound frequency thresholds.

What we get, following either of the mentioned procedures of measuring absolute thresholds, is a curve, a list of limits for respective frequencies. That is it. That is not an input function. It can be viewed as useful information about control, since we found what is the possible controlled variable - i.e. for each frequency it is any sound above the threshold intensity.

That is why I think this experiment is no more than the TCV.

AM: The other problem is with how the higher levels of perception are constructed from signals from lower levels. The perception of “relationship between finger and sound” is constructed from perceptions of finger position and intensity of tone, so intensity of tone can’t be an uncontrolled perception - perhaps its reference is changed after raising the finger. Or gain?

MT: Why can’t it be an uncontrolled perception? Lots of the inputs to any higher-level perception are uncontrolled. Why do you have a problem with that?

AM: I have problems with this because I don’t know how to make a model that would pick and choose different perceptions to control or not control, so I don’t know how that would work, and I’m confused about what “an uncontrolled perception” even means.

In a standard tracking task like the TrackAnalyse, one of the sticks is moving as the reference, one of them is moving as the cursor. The reference stick is not ‘under control’. It is not even perceived on this level. The cursor stick is also not ‘under control’ on this level. We have found out that what is under control can be approximated by a function: difference = cursorPosition - referencePosition. That is an approximation of what is under control, and a very good one, since we get good RMS and correlation measures.
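The two fit measures Adam mentions can be computed directly – a plain-Python sketch; TrackAnalyse presumably has its own implementation:

```python
import math

def fit_measures(model_trace, human_trace):
    """RMS difference and Pearson correlation between a model's cursor
    trace and a subject's - the usual way of judging how well
    'difference = cursorPosition - referencePosition' approximates the
    controlled variable."""
    n = len(model_trace)
    rms = math.sqrt(sum((m - h) ** 2 for m, h in zip(model_trace, human_trace)) / n)
    mean_m = sum(model_trace) / n
    mean_h = sum(human_trace) / n
    cov = sum((m - mean_m) * (h - mean_h) for m, h in zip(model_trace, human_trace))
    var_m = sum((m - mean_m) ** 2 for m in model_trace)
    var_h = sum((h - mean_h) ** 2 for h in human_trace)
    return rms, cov / math.sqrt(var_m * var_h)

rms, r = fit_measures([0.0, 1.0, 2.0, 3.0], [0.1, 1.1, 2.1, 3.1])
print(round(rms, 2), round(r, 3))   # 0.1 1.0 - small RMS, perfect correlation
```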

Is there really a perception of cursor position as a neural magnitude and another perception of ‘reference stick position’, and then a neuron that takes a difference between those two? There could be, and that would mean the retina has some sort of coordinate system and tracks positions of objects relative to something else.

There could also be a function that directly perceives this difference as the length of an invisible stick that comes between the cursor and the reference. Perhaps this perception of difference is related to the time that it takes the eyes to focus from one stick to another. Perhaps it is related to eye muscle effort in moving from one position to another. Or both of those things. I don’t know; the visual system is very complex, as is the auditory system. I don’t think we can just say, ‘this is clearly an uncontrolled perception’.

Adam

[From Adam Matic, 2014.03.17]

Bruce Abbott (2014.03.18.1330 EDT)

BA: It could relate to gain, but the resolving power of a perceptual system is affected by other factors as well, such as how much “noise” is present in the system and, in the case of the visual system, the type and density of photoreceptors (somewhat like pixel size in a digital camera).

AM:
Ok, so if it is related to gain, and also to other properties of the whole control system, then can we make sure somehow which part is related to just the input function, and which part is related to other components of the control system?

AM:

I don't think that position of the cursor _on the screen_ is a controlled variable.

BA: Let me ask you this: can you put the cursor in whatever vertical position you wish (within the limits determined by the program)? If so, then you are controlling the position of the cursor.

BA: And just how is the model controlling that difference? By moving the simulated handle (the actions) to move the (computed) position of the cursor closer to the (computed) position of the target. It doesn’t just directly change a positional difference. The program computes target position without accounting for the user’s actions – it simply applies a computed disturbance value to target position, iteration by iteration. It changes the difference between target position and cursor position only by moving the cursor in accordance with the model’s simulated actions. In other words, it reduces the error between (computed) target position and (computed) cursor position by controlling the position of the cursor.

That would make _retinal position_ of the target stick a controlled and perceived variable as much as _retinal position_ of the cursor stick. Hypothetically, both are being controlled by some eye-focusing-on-object system, or a different type of position-difference-producing function. Nevertheless, they are both controlled variables. Experimentally supported is only the hypothesis that the controlled variable is this difference between their position.

BA: Adam, it’s a bit disconcerting to me that you haven’t entered into a discussion with me relating to the points I’ve raised; I have no idea whether you agree or disagree with them or even whether you have understood them correctly. Thus it is hard to know what else to say that might help you to understand the difference between controlled and uncontrolled perceptions.

In my previous post I noted that we do perceive both the target line and cursor line as separate objects on the screen, each with its own set of properties – position, length, thickness, color, etc. By design, the only property of those objects that the user is able to control for the purpose of tracking the target is the vertical position of the cursor – through the action of moving the mouse in the appropriate direction. That’s all there is to it. The user does not control the vertical position of the target.

I’ve also noted that other aspects of these objects may be under the user’s control, such as whether they are visible or not (you can look at them or not), or where the image of these objects is focused on the retina. But that is irrelevant to the fact that, for purposes of keeping the cursor aligned with the target, the user can only do that by moving the mouse and thereby moving the cursor. But in response you point out that retinal position is a controlled variable, etc., as if that were a telling argument against the view I’m defending, which is that the user has control only over cursor position as the means of keeping the cursor aligned with the target.

AM:
Well, I told you I was confused. :)
I will have to think hard about this. You raise some interesting points and I don't know if this is just a mismatch in words or a different understanding of the mechanics.
I'm trying to show why I think that this concept of uncontrolled perceptions can't really be used to state that some part of behavior is open-loop.
Adam

[From Adam Matic 2014.03.18. 2040]

MT: Two perceptions are under control here. The perceived difference in cursor and reference is controlled, and the means of controlling the difference perception is to control the cursor position perception. The difference perception control unit outputs a value that is the reference for the cursor position controller. It's the same as the car-in-lane perception so often trotted out as an example; the position of the car in its lane is controlled, the output being fed as a reference value for the controller of steering wheel angle.

AM:
I should have said cursor and target, as Bruce later did. The point was that what is being controlled are retinal variables, so nothing we do in research will give us an output of an uncontrolled variable.

MT: The last sentence seems totally unconnected with the rest of the paragraph. Could you clarify how you make this leap?

AM:
It seems straightforward to me, not like a leap. I'm not very good with words, I suppose. I'm also not very good at understanding what other people think about these complex processes when I read what they say, since I might very well be misunderstanding what you and Bruce are saying.
I forgot what the issue was. Does it matter what SR research found? I don't care. There might be some interesting things if one looks hard enough, so if someone cares and looks that's fine by me.
Adam

[Martin Taylor 2014.03.18.15.53]

Control happens at all levels of perception, and at all levels of perception more perceptions are uncontrolled than are controlled.

Still, I’m not clear what actions would allow one to control a retinal variable, or at what level of the hierarchy retinal variables might be situated. Are you thinking of rod and cone outputs, ganglion outputs (there are lots of types, I gather), optic nerve signals, or what? Whichever you are thinking of, there are millions to hundreds of millions of them, and at most you could control only tens of those.

You say you are not good at words, though I am not sure I would agree. However, I can’t figure out what you mean by “the output of a variable”. A variable has a value. A function or an operation has an output for a given input. Do you mean “nothing we do in research will allow us to measure the value of an uncontrolled variable”, or “…to detect the output of a function involved in constructing the value of an uncontrolled variable”, or something else?
Martin


[Martin Taylor 2014.03.18.15.17]

MT: From one measurement, it is impossible to tell anything more than that the entire component (which includes the perceptual and the output function and all the more peripheral input functions and output control units) has some measured property. You must do a variety of measurements. For example, you could change the output mechanism while keeping the presentation the same, or you could vary the presentation while keeping the output the same. If your measurement doesn’t change when you change one of the sub-components – for example changing from asking the subject to push a button to raising a finger to speaking – then you might be justified in saying your measurement is largely of what didn’t change.

MT: I think there’s more confusion than you recognize. I don’t think anyone here has suggested that any part of behaviour is open loop. Nor would they, if they understood much of PCT.

When a subject is in a psychophysical experiment, for example, the presentation of a “stimulus” is a disturbance to some perception relating to following instructions. Error will persist in that control unit until the subject emits a response. The question is what response should be emitted. Here again there is control, now of a relationship perception. At some level being studied the input is perceived, and in imagination different outputs are tried until one matches the perceived input. When a match is found, that choice is used to implement the output of the control unit that has error until a response has been made.

What is open-loop is the relation between choice of stimulus and choice of response. Was the tone in the first or second noise burst that time? Whichever the subject chooses, it won’t affect the fact that on this occasion it was actually in the first interval. There’s no way the choice of response can influence the choice of that particular stimulus. Depending on the particular experiment, it might influence the choice of the next stimulus, but it doesn’t have to. If it doesn’t, then the perception being studied is uncontrolled by the subject, though it is controlled by the experimenter, either directly or by the instructions given to a machine.
Martin
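Martin’s open-loop point is easy to demonstrate in a toy two-interval experiment – the procedure below is a generic 2AFC sketch, not any particular lab’s program: the stimulus sequence comes out the same no matter what the subject answers.

```python
import random

def run_2afc(n_trials, respond, seed=0):
    """Two-interval forced choice: the program picks the tone's interval
    before the answer is given, so the response cannot influence that
    trial's stimulus."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        interval = rng.choice([1, 2])  # chosen independently of any response
        answer = respond(interval)     # subject's choice comes after the fact
        results.append((interval, answer))
    return results

# Identical stimulus sequences for two very different "subjects":
seq_always_1 = [s for s, _ in run_2afc(10, respond=lambda stimulus: 1)]
seq_always_2 = [s for s, _ in run_2afc(10, respond=lambda stimulus: 2)]
print(seq_always_1 == seq_always_2)   # True - the response loop is open
```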


[From Adam Matic 2014.03.19 0700cet]
AM:
Ok, so I got some sleep and decided to go back a few posts.

Bruce Abbott (2014.03.14.1305 EDT)
BA: Keep in mind that most perceptions are not controlled perceptions; for example, you may see many people walking along a street. The retinas of your eyes convert the patterns of illumination produced by this scene into neural signals that are analyzed at higher levels within the brain to produce perceptions of the people, street, other objects. You are not controlling the expressions on those people’s faces, nor the style of clothing they are wearing, to mention just a couple aspects of the scene that you are not controlling. It is possible to test the perceptual system to determine what basic properties of the image go into the analysis that produces those recognizable objects, their apparent motions, colors, shading, and so on. Such tests can be done by giving participants control over those aspects, but it is not necessary to do so. Thus I disagree with your statement above that investigating perceptual functions is “like a search for the controlled variable” –unless by “like” you mean that it shares certain features of the Test. To control a perception you must be able to perceive it; investigations of perceptual functions, whether or not under control during the investigation, can help to establish exactly how perceptions are created by the relevant sensory and brain systems.

AM:
This is the first mention of uncontrolled perceptions on this topic, and you give examples of them in the visual system.
I see many problems with this explanation of how the visual system works.
The visual system is as much perceptual as it is motoric. It is as much perceptual and motoric as the traditionally called motor system (kinesthetic sense and muscle movement system). There is, in theory, a similar hierarchical structure of controlled perceptions in both of these.
I see no uncontrolled perception in the Arm Demo, neither version 1 nor 2, nor in CoordinationDemo, ArmTrackTarget, or the Little Man. Nowhere. At all times, at all levels, as I'm sure you can appreciate, since you are working on a version of the Little Man, all perceptions are - controlled.
How exactly a person chooses what to control is beyond current models, as you mentioned. In the Choose Control demo, we see what the output looks like when the person is controlling one of the variables. We don't find out anything about the input functions of other variables that are not under control.

BA:
In my previous post I noted that we do perceive both the target line and cursor line as separate objects on the screen, each with its own set of properties – position, length, thickness, color, etc. By design, the only property of those objects that the user is able to control for the purpose of tracking the target is the vertical position of the cursor – through the action of moving the mouse in the appropriate direction. That’s all there is to it. The user does not control the vertical position of the target.
BA:
I’ve also noted that other aspects of these objects may be under the user’s control, such as whether they are visible or not (you can look at them or not), or where the image of these objects is focused on the retina. But that is irrelevant to the fact that, for purposes of keeping the cursor aligned with the target, the user can only do that by moving the mouse and thereby moving the cursor. But in response you point out that retinal position is a controlled variable, etc., as if that were a telling argument against the view I’m defending, which is that the user has control only over cursor position as the means of keeping the cursor aligned with the target.

AM:
I agree, of course, that the user is not controlling the position of the target. That is not the issue.
I am trying to illustrate why I think there are no uncontrolled perceptions in the lower levels of the visual system. It is your hypothesis that some of these variables are uncontrolled. I disagree. All visual variables start with the retina, and are controlled by varying eye position, head position, body position..
They are controlled at all times.
Adam

[From Adam Matic 2014.03.19 0707 cet]

···

Martin Taylor 2014.03.18.15.53
Control happens at all levels of perception, and at all levels of
perception more perceptions are uncontrolled than are controlled.
Still, I’m not clear what actions would allow one to control a
retinal variable, or at what level of the hierarchy retinal
variables might be situated. Are you thinking of rod and cone
outputs, ganglion outputs (there are lots of types, I gather), optic
nerve signals, or what? Whichever you are thinking of, there are
millions to hundreds of millions of them, and at most you could
control only tens of those.

AM:

What I mean by ‘retinal variable’ is any variable that is constructed from signals coming from the retina. So, retinal intensities, sensations from combinations of intensities, configurations, transitions… I think all of those are controlled at all times by the visual system.

For an explicit illustration, look at Rick’s simulation of Catching Fly Balls. What the runner is controlling is the vertical and lateral velocity of the optical projection of the ball on the retina.
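As a sketch of the kind of ‘retinal variable’ involved – the geometry below is a flat-ground simplification supplied for illustration, not Rick’s actual code – the vertical optical velocity is the rate of change of the ball’s elevation angle as seen from the runner’s position:

```python
import math

def optical_velocity(ball_now, ball_before, runner_x, dt):
    """Rate of change of the ball's elevation angle seen from the runner.
    ball_now / ball_before are (x, y) positions of the ball at two moments
    dt apart; the runner stands at runner_x on the ground (y = 0)."""
    def elevation(ball):
        x, y = ball
        return math.atan2(y, x - runner_x)
    return (elevation(ball_now) - elevation(ball_before)) / dt

# A rising, approaching ball sweeps upward across the visual field:
w = optical_velocity(ball_now=(10.5, 11.0), ball_before=(10.0, 10.0),
                     runner_x=0.0, dt=0.1)
print(round(w, 2))   # 0.23 rad/s - the signal the runner keeps at its reference
```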

MT: At some level being studied the input is perceived, and in imagination different outputs are tried until one matches the perceived input. When a match is found, that choice is used to implement the output of the control unit that has error until a response has been made.

AM:

Martin, you talk of imagination and matching and implementing output as if those were facts, and not a hypothesis to test. There are no working models you can base your claims on. The same is true of uncontrolled perceptions.

I have never seen a model with even a single uncontrolled perception.

There might be such models, but before one is constructed and verified, any talk about what happens inside the human system is strictly hypothetical.

Adam

[Martin Taylor 2014.03.19.09.55]

[From Adam Matic 2014.03.19 0700cet]

I am trying to illustrate why I think there are no uncontrolled perceptions in the lower levels of the visual system. It is your hypothesis that some of these variables are uncontrolled. I disagree. All visual variables start with the retina, and are controlled by varying eye position, head position, body position..

They are controlled at all times.

Let me test your assertion by asking a question.

Suppose we label each individual retinal rod and cone with a label, say lat-long on the eyeball inner surface. So we could have a cone labelled 15663-34153 or something like that. Let us accept your assertion that the outputs of all the rods and cones "are controlled by varying eye position, head position, body position".

At some moment t0, cone 15663-34153 has an output that is 3 units above its reference value. The near neighbour cone 15664-34152 has an output 4 units below its reference value. Another nearby cone 15613-34462 has an output equal to its reference value, and so on for all hundred million rods and cones. All of these "are controlled by varying eye position, head position, body position".

The question is: "How do these hundred million control systems bring their individual controlled perceptions to their reference values simultaneously?"

Martin
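Martin’s arithmetic can be put in linear-algebra terms (a toy sketch with invented numbers): if every retinal signal depends on just three action variables – eye, head, and body position – the sensitivity matrix from actions to signals has rank at most 3, so at most three independent retinal perceptions could be brought to arbitrary reference values at once, however many millions of signals there are.

```python
import random

def rank(matrix, eps=1e-9):
    """Count linearly independent rows by Gauss-Jordan elimination."""
    m = [row[:] for row in matrix]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(len(m)):
            if i != r:
                factor = m[i][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# 8 "retinal signals", each some function of only 3 action variables:
rng = random.Random(1)
sensitivity = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
print(rank(sensitivity))   # 3 - no more than 3 signals are independently controllable
```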

[From Bruce Abbott (2014.03.19.1005 EDT)]

Adam Matic 2014.03.19 0700cet –

AM:

Ok, so I got some sleep and decided to go back a few posts.

Bruce Abbott (2014.03.14.1305 EDT)

BA: Keep in mind that most perceptions are not controlled perceptions; for example, you may see many people walking along a street. The retinas of your eyes convert the patterns of illumination produced by this scene into neural signals that are analyzed at higher levels within the brain to produce perceptions of the people, street, other objects. You are not controlling the expressions on those people’s faces, nor the style of clothing they are wearing, to mention just a couple aspects of the scene that you are not controlling. It is possible to test the perceptual system to determine what basic properties of the image go into the analysis that produces those recognizable objects, their apparent motions, colors, shading, and so on. Such tests can be done by giving participants control over those aspects, but it is not necessary to do so. Thus I disagree with your statement above that investigating perceptual functions is “like a search for the controlled variable” –unless by “like” you mean that it shares certain features of the Test. To control a perception you must be able to perceive it; investigations of perceptual functions, whether or not under control during the investigation, can help to establish exactly how perceptions are created by the relevant sensory and brain systems.

AM:

This is the first mention of uncontrolled perceptions on this topic, and you give examples of them in the visual system.

I see many problems with this explanation of how the visual system works.

BA: That was not an explanation of how the visual system works – at least not a complete one. I left out all the details that are not relevant to the point I was making, which is that, when looking at an object, you are generally not in control of the properties of the objects in the visual scene. Yes, you are in control of the looking – you can choose to view this or that object – and there are control systems that will adjust the focus and brightness of the resulting perception. But those facts are not relevant to the question of whether you are controlling the color of a passerby’s shirt or whether the sky is cloudy or clear.

AM: The visual system is as much perceptual as it is motoric. It is as much perceptual and motoric as the traditionally called motor system (kinesthetic sense and muscle movement system). There is, in theory, a similar hierarchical structure of controlled perceptions in both of these.

BA: Yes, that’s true. (By the way, that fact was first pointed out by John Dewey (1896) in his paper The Reflex Arc Concept in Psychology.) But I don’t see any relevance of that point to the fact that, at any given moment, numerous variables are not under our control. Perceptions of those variables are created by the sensory/perceptual systems and aspects of those perceptions, such as whether the image of a person is in focus, are controlled, but not every aspect is controlled. I can’t just will your voice to change pitch, your hair to turn gray, or the pants I perceive you to be wearing to turn into a skirt. Or if I can, someone should see that I get proper medical treatment as I seem to be hallucinating!

AM: I see no uncontrolled perception in the Arm Demo (neither version 1 nor 2), nor in CoordinationDemo, ArmTrackTarget, the Little Man… Nowhere. At all times, at all levels, as I’m sure you can appreciate since you are working on a version of the Little Man, all perceptions are controlled.

We must have an important difference in the way we define “control.” After admitting (below) that you don’t control the position of the target in the TrackAnalyze demo, you then assert that all perceptions are controlled. That blanket assertion would have to include the position of the target in the TrackAnalyze demo, would it not? And there are also numerous variables that are not controlled in those other demos you mention – the shape of Little Man’s chest, for example.

AM: How exactly a person chooses what to control is beyond current models, as you mentioned. In the Choose Control demo, we see what the output looks like when the person is controlling one of the variables. We don’t find out anything about input functions of other variables that are not under control.

BA: So now there are variables that are not under control? What am I missing in my understanding of your position?

BA: As to your second sentence, we don’t find out anything about the input functions of variables that are not under control because the demo was not designed to determine anything about those input functions. You can learn something about the input function of the variable you are controlling in ChooseControl (namely, that you can perceive that aspect of the object). If you concluded from this that, generally speaking, you can’t learn something about an input function of a variable unless the variable is under control, then you would be committing a logical fallacy known as “affirming the consequent.” In fact you can learn about input functions of variables that are not under control, as I’ve illustrated previously with psychophysical examples.

BA:

In my previous post I noted that we do perceive both the target line and cursor line as separate objects on the screen, each with its own set of properties – position, length, thickness, color, etc. By design, the only property of those objects that the user is able to control for the purpose of tracking the target is the vertical position of the cursor – through the action of moving the mouse in the appropriate direction. That’s all there is to it. The user does not control the vertical position of the target.

BA:

I’ve also noted that other aspects of these objects may be under the user’s control, such as whether they are visible or not (you can look at them or not), or where the image of these objects is focused on the retina. But that is irrelevant to the fact that, for purposes of keeping the cursor aligned with the target, the user can do so only by moving the mouse and thereby moving the cursor. But in response you point out that retinal position is a controlled variable, etc., as if that were a telling argument against the view I’m defending, which is that the user has control only over cursor position as the means of keeping the cursor aligned with the target.

AM:

I agree, of course, that the user is not controlling the position of the target. That is not the issue.

BA: But it is precisely the issue we’ve been arguing about!

I am trying to illustrate why I think there are no uncontrolled perceptions in the lower levels of the visual system. It is your hypothesis that some of these variables are uncontrolled. I disagree. All visual variables start with the retina, and are controlled by varying eye position, head position, body position.

You fail to distinguish which aspects of those perceptions are under control. The position of the image on the retina is controlled. That says nothing about other aspects of the image that you perceive. Many of them are not under your control (even at the level of the retina), which invalidates your claim (see below).

They are controlled at all times.

BA: Based on this, I believe that the word “they” is at the heart of our apparent disagreement. It’s too broad a term. “They” seems to include all aspects of the visual input, but it’s easy to demonstrate that some aspects are not controlled. If I have you look at a circle drawn on paper, you turn your eyes to center the image of the circle on your retinal fovea, bring the image into focus by changing the shape of the crystalline lens, and adjust the intensity of the light illuminating the retina by changing the size of your pupils. You do not control the shape of the circle’s image on the retina – that is determined by optical physics. That image will be analyzed in several stages, beginning in the retina itself, ultimately to produce a perception of the circle. Unless your visual system is damaged or defective, it will not produce the perception of a square, a triangle, or a Scottish Terrier dog.

Remember, “controlled” is not synonymous with “processed.” The visual system processes the stimulation produced by the circle’s image, and some of that processing is done by control systems – as for focus, brightness, position on the retina, and so on. But if vision is to be useful, the scene presented by the perceptual system must bear a reasonably close relationship to what I presume to be the real world of matter and energy. If a circle is on the paper, you should be perceiving a circle. In this example, you are not actively controlling its shape, you are merely perceiving that shape.

Bruce

[Martin Taylor 2014.03.19.10.04]

[From Adam Matic 2014.03.19 0707 cet]

Is the runner controlling the configuration of arena stands, the pattern of clouds against which the ball image crosses (I can attest from personal experience that these matter a lot when catching a fly ball, having nearly been killed by trying to catch a high fly against a clear blue sky), the velocity of another fielder’s image across the retina, etc. etc.?

Not true. It's a structural model based on the hypothesis that HPCT is correct as modified up to LCSIII. Of course, a model based on a hypothesis is by its nature a hypothesis, so you are correct in that sense.

So far as I can see, there are only three reasons for including an uncontrolled perception in a model: (1) the model is of pursuit tracking, (2) the model is examining possible interference effects of an uncontrolled perception on the control of some other perception, and (3) the uncontrolled perception is part of the input to a perceptual function that creates a perception being controlled. There may be other reasons, but I can’t think of them off the top of my head. I know of no examples of (2), and when control is being modelled, (3) is usually ignored because the nature of the perceptual function that creates the controlled perception is of little interest to the modeller.

For (1), I suggest you go back in this thread. I grant you that the thread doesn’t actually contain a model of pursuit tracking that you can see, but it is well described. The subject is instructed to maintain at some reference value a certain controlled perception of a relationship between (a) an uncontrolled perception of some property of a target and (b) a controlled perception of some property of a cursor. Pursuit tracking tasks have been modelled many times in different ways.

There are plenty of them. Rick makes them frequently to model different tasks, and they are incorporated in the fits for the tracking demos online and in LCSIII. Or you could check out http://www.mmtaylor.net/PCT/CSG2005/CSG2005bFittingData.ppt for two more complex examples that I used to test a couple of hypotheses about the mechanism of pursuit tracking.

And one comment: in science, nothing is ever "verified". The best that can be done is to increase or decrease the credibility of a hypothesis.

Martin
···
Martin Taylor 2014.03.18.15.53

Control happens at all levels of perception, and at all levels of perception more perceptions are uncontrolled than are controlled. Still, I’m not clear what actions would allow one to control a retinal variable, or at what level of the hierarchy retinal variables might be situated. Are you thinking of rod and cone outputs, ganglion outputs (there are lots of types, I gather), optic nerve signals, or what? Whichever you are thinking of, there are millions to hundreds of millions of them, and at most you could control only tens of those.

AM:

What I mean by 'retinal variable' is any variable that is constructed from signals coming from the retina. So, retinal intensities, sensations from combinations of intensities, configurations, transitions… I think all of those are controlled at all times by the visual system.

For an explicit illustration, look at Rick's simulation of Catching Fly Balls. What the runner is controlling is the vertical and lateral velocity of the optical projection of the ball on the retina.

MT: At some level being studied the input is perceived, and in imagination different outputs are tried until one matches the perceived input. When a match is found, that choice is used to implement the output of the control unit that has error until a response has been made.

AM:

Martin, you talk of imagination and matching and implementing output as if those were facts, and not a hypothesis to test.

There are no working models you can base your claims on. The same is true of uncontrolled perceptions.

I have never seen a model with even a single uncontrolled perception.

There might be such models, but before one is constructed and verified, any talk about what happens inside the human system is strictly hypothetical.

[From Adam Matic 2014.03.19. 1650]

Bruce, as you say, I think we do have a fundamental difference in how we understand ‘control of perception’.

All the perceptual variables that are present in the little man demo are controlled variables. There is no perceptual variable ‘shape of the little man’s chest’, and thus there are no ‘uncontrolled perceptions’. There are uncontrolled aspects of the environment, but we are never in control of the environment, only in control of our perception of the environment.

Adam

[From Adam Matic 2014.03.17 1700 cet]

···

Martin Taylor 2014.03.19.10.04

Is the runner controlling the configuration of arena stands, the

pattern of clouds against which the ball image crosses (I can attest
from personal experience that these matter a lot when catching a fly
ball, having nearly been killed by trying to catch a high fly
against a clear blue sky), the velocity of another fielder’s image
across the retina, etc. etc.?

AM:
No, he is not. You are talking about aspects of the environment.

Now, here is the important part - the runner is also not in control of “his position in the field” since he is not perceiving his position in the field. The perceptual variable he is controlling is the retinal velocity. He is varying his position in the field in order to control this retinal variable.

Adam

[From Bruce Abbott (2014.03.19.1220 EDT)]

Adam Matic 2014.03.19. 1650 –

Bruce, as you say, I think we do have a fundamental difference in how we understand ‘control of perception’.

All the perceptual variables that are present in the little man demo are controlled variables. There is no perceptual variable ‘shape of the little man’s chest’, and thus there are no ‘uncontrolled perceptions’. There are uncontrolled aspects of the environment, but we are never in control of the environment, only in control of our perception of the environment.

BA: Ah, there’s another source of confusion. I’m talking about what the user perceives and you are talking about what the program computes. The Little Man program does not need to include computations for perceptual variables that are not being controlled, so, as you say, it does not include any uncontrolled perceptions, where perception is taken to mean a variable emerging from a perceptual input function. In contrast, the user of the program does perceive the Little Man figure on the screen and can control only some aspects of that figure, by manipulating the target position and the viewpoint. Little Man’s control systems respond to the change of target position by moving head, eyes, and arm joints to keep the head, eyes, and finger tip pointing at the target. Other aspects of Little Man, such as the shape of his chest as seen from a given point of view, are determined by the program but are not produced by control systems within the program or within the user.

Some demos do compute a perception that is not controlled by the program. For example, the tracking demo does compute the perception of the target position as a function of the disturbance waveform that is being applied to target position. Target position is determined S-R fashion from these computations; the program contains no control system for target position.
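[The arrangement Bruce describes can be sketched as a simple simulation. This is a hypothetical illustration, not the actual demo source; the function name, gains, and disturbance waveform are all assumptions. Target position is computed S-R fashion from the disturbance (no control system), while cursor position is varied by a control system keeping the cursor-target difference near a reference.]

```python
# Hypothetical sketch, not actual demo code: target is S-R from the
# disturbance; cursor is varied by a control loop. Values illustrative.
import math

def run_tracking(steps=2000, dt=0.01, gain=50.0, reference=0.0):
    cursor = 0.0
    errors = []
    for n in range(steps):
        disturbance = math.sin(n * dt)   # disturbance waveform
        target = disturbance             # S-R: no control system for target
        perception = cursor - target     # perceptual signal (the difference)
        error = reference - perception   # reference minus perception
        cursor += gain * error * dt      # output integrates the error
        errors.append(abs(error))
    return sum(errors[-100:]) / 100      # mean recent absolute error
```

[With these illustrative values the mean recent error comes out small: the cursor tracks the moving target closely even though the target itself is driven entirely, open-loop, by the disturbance.]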

With respect to your last sentence above, it is not true that “we are never in control of the environment.” If that were true, we wouldn’t last long. It is true that the only things I can know about the environment are those presented to me by my perceptual systems. However, if jumping back onto the (perceived) curb to avoid (the perception of) being hit by a (perceived) car did not prevent me from actually being hit by the very real car behind those perceptions, I’d soon be dead. Admittedly, that’s an article of faith (perhaps the only reality that exists is the one in our minds – er, my mind, since you would exist only in my mind), but I wouldn’t be willing to test that proposition by jumping in front of a speeding car. The phrase “it’s all perception” refers to what we actually experience, as opposed to directly experiencing Reality. It is not a denial that Reality exists.

Bruce

[From Rick Marken (2014.03.19.0940)]

···

Bruce Abbott (2014.03.19.1005 EDT)

BA: …You can learn something about the input function of the variable you are controlling in ChooseControl (namely, that you can perceive that aspect of the object). If you concluded from this that, generally speaking, you can’t learn something about an input function of a variable unless the variable is under control, then you would be committing a logical fallacy known as “affirming the consequent.” In fact you can learn about input functions of variables that are not under control, as I’ve illustrated previously with psychophysical examples.

RM: I agree that we can learn about the input functions that produce perceptions whether or not those perceptions are currently under control. But I must have missed your illustration of how this is done in psychophysical experiments. I can’t think of any psychophysical procedure that doesn’t involve asking the subject either to directly control the perceptual variable under study (method of adjustment, method of limits, etc.) or to directly control a perception that is disturbed by the perceptual variable under study (magnitude estimation). I guess I’m thinking that it’s not possible to learn about the characteristics of the input function of any perceptual variable in psychophysical experiments without that variable being under control; if it’s just a disturbance to a variable that’s under control, then you have the behavioral illusion problem (as you do in magnitude estimation experiments; see http://www.mindreadings.com/BehavioralIllusion.pdf).

Best

Rick


Richard S. Marken PhD
www.mindreadings.com
It is difficult to get a man to understand something, when his salary depends upon his not understanding it. – Upton Sinclair
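[The behavioral illusion Rick mentions can be illustrated with a toy simulation. This is my own hedged sketch, not taken from the linked paper; the linear feedback function, the loop gain, and the function names are assumptions. The point it shows: when control is good, the output Qo settles near the inverse of the environmental feedback function applied to the disturbance, so an S-R analysis of disturbance versus output reveals the environment, not the organism.]

```python
# Hypothetical sketch of the behavioral illusion (values illustrative):
# whatever the organism-side loop gain, Qo tracks -d/feedback_gain,
# i.e., the inverse of the environmental feedback function.
import math

def track(feedback_gain, steps=4000, dt=0.001, loop_gain=200.0):
    qo = 0.0
    trace = []
    for n in range(steps):
        d = math.sin(n * dt * 2.0)        # slowly varying disturbance
        p = feedback_gain * qo + d        # controlled quantity
        qo += loop_gain * (0.0 - p) * dt  # output integrates the error
        trace.append((d, qo))
    return trace
```

[For each feedback gain k tried, the final Qo sits near -d/k; a plot of d against Qo would recover the (inverted) environment function, telling us nothing about the organism's input function.]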

[From Adam Matic 2014.03.19]

···

Bruce Abbott (2014.03.19.1220 EDT)

Some demos do compute a perception that is not controlled by the program. For example, the tracking demo does compute the perception of the target position as a function of the disturbance waveform that is being applied to target position. Target position is determined S-R fashion from these computations; the program contains no control system for target position.

AM:

This is the way I see it:

The controlled variable is the difference, so neither the ‘cursor position’ nor the ‘target position’ are controlled variables. Cursor position is an aspect of the environment and is the output function that is varied to control the perceptual variable.

With respect to your last sentence above, it is not true that “we are never in control of the environment.” If that were true, we wouldn’t last long. It is true that the only things I can know about the environment are those presented to me by my perceptual systems. However, if jumping back onto the (perceived) curb to avoid (the perception of) being hit by a (perceived) car did not prevent me from actually being hit by the very real car behind those perceptions, I’d soon be dead. Admittedly, that’s an article of faith (perhaps the only reality that exists is the one in our minds – er, my mind, since you would exist only in my mind), but I wouldn’t be willing to test that proposition by jumping in front of a speeding car. The phrase “it’s all perception” refers to what we actually experience, as opposed to directly experiencing Reality. It is not a denial that Reality exists.

AM:

We do affect the environment. We do vary it. We are just not in control of it, if we take the strict meaning of “to control”.

Adam

[Martin Taylor 2014.03.19.12.45]

I am talking about _perceptions_ of aspects of the environment. All of them are perceptions the runner might control, but when catching a ball normally does not control.

How would you model “varying [one variable] in order to control [a second variable]” without actually controlling [one variable]? It might work open loop, but as Bill never tired of reminding us, open-loop control doesn’t work very well if there are disturbances. In multi-level control, varying one variable in order to control another is usually done by sending a continuously changing reference value to the controller of the second variable.

You can keep the retinal velocity of the image of a ball at zero without moving anywhere. The ball may fall on the other side of the field. Or you can keep it at some non-zero velocity until the ball leaves the field of view. Neither is much use when the objective is to catch the ball.
Martin
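[Martin's description of multi-level control, in which one variable is varied in order to control another by sending a continuously changing reference to a lower-level controller, can be sketched as a two-level cascade. This is a hypothetical illustration only; the structure, gains, and the constant disturbance are assumptions, not a model posted in this thread.]

```python
# Hypothetical two-level sketch: the upper system controls perceived
# position by sending a changing reference to a lower system that
# controls perceived velocity. All values are illustrative.
def cascade(steps=5000, dt=0.001, pos_ref=1.0):
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        disturbance = 0.3            # constant push acting on velocity
        pos_error = pos_ref - pos    # upper level: position error
        vel_ref = 5.0 * pos_error    # upper output = lower reference
        vel_error = vel_ref - vel    # lower level: velocity error
        vel += 50.0 * vel_error * dt + disturbance * dt
        pos += vel * dt              # environment integrates velocity
    return pos
```

[Note that the lower loop is never shut off: it keeps controlling velocity at whatever reference the upper loop specifies, and that is how the disturbance gets opposed while position settles at its reference.]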

···

[From Adam Matic 2014.03.17 1700 cet]

Martin Taylor 2014.03.19.10.04

Is the runner controlling the configuration of arena stands, the pattern of clouds against which the ball image crosses (I can attest from personal experience that these matter a lot when catching a fly ball, having nearly been killed by trying to catch a high fly against a clear blue sky), the velocity of another fielder’s image across the retina, etc. etc.?

AM:

No, he is not. You are talking about aspects of the environment.

Now, here is the important part - the runner is also not in control of “his position in the field” since he is not perceiving his position in the field. The perceptual variable he is controlling is the retinal velocity. He is varying his position in the field in order to control this retinal variable.

[Martin Taylor 2014.03.19.12.56]

You did say you had a problem with words, but it is hard for me to interpret “X” as an aspect of the environment, followed immediately by “X” as a function. When “X” is a perceived position, I don’t understand how it can be a function rather than a variable that is the output value from a perceptual function. And though it is true that functions can be varied, they usually are not. More normally the operation of a function is altered by varying parameters, not by changing the function.

I am also confused by your use of “the” in “THE controlled variable is the difference” followed by “so” nothing else is being controlled, as though in a hierarchical structure all the control systems below THE controlled variable were shut off.

But I guess you are in process of devising an alternative to the kinds of control system structures considered by Powers. That’s a very laudable enterprise, particularly if your results fit what organisms do better than the Powers structure does. Go to it. I’ll be interested in your progress. But my comments will continue to be based on comparing your ideas with those of Powers.
Martin

···

[From Adam Matic 2014.03.19]

Bruce Abbott (2014.03.19.1220 EDT)

Some demos do compute a perception that is not controlled by the program. For example, the tracking demo does compute the perception of the target position as a function of the disturbance waveform that is being applied to target position. Target position is determined S-R fashion from these computations; the program contains no control system for target position.

AM:

This is the way I see it:

The controlled variable is the difference, so neither the ‘cursor position’ nor the ‘target position’ are controlled variables. Cursor position is an aspect of the environment and is the output function that is varied to control the perceptual variable.

[From Adam Matic 2014.03.17 1830]

Martin Taylor 2014.03.19.12.45
How would you model "varying [one variable] in order to control [a second variable]" without actually controlling [one variable]? It might work open loop, but as Bill was never tired of reminding us, open-loop doesn't work very well if there are disturbances. In multi-level control, varying one variable in order to control another is usually done by sending a continuously changing reference value to the controller of the second variable.

AM:
In every control loop, Qo is varied, not controlled. So, I would say every control loop controls p by varying Qo, or several Qo's.
No open loop.

Adam
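[Adam's point, that in a single loop p is controlled while Qo is merely varied, can be sketched in a few lines. This is a hedged illustration; the loop structure and values are assumptions, not code from any demo.]

```python
# Sketch: the perceptual signal p is controlled (held near the reference)
# while Qo is varied, ending up mirroring the disturbance. Illustrative.
def single_loop(disturbances, gain=100.0, dt=0.01, reference=0.0):
    qo = 0.0
    trace = []
    for d in disturbances:
        p = qo + d                         # environment: p depends on Qo and d
        qo += gain * (reference - p) * dt  # output varies Qo to reduce error
        trace.append((p, qo))
    return trace
```

[With a steady disturbance of 1.0, p is brought to the reference of 0 while Qo drifts to -1.0: p is the controlled quantity, Qo is simply what gets varied to achieve that.]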

[From Adam Matic 2014.03.19 1900CET]

> MT: You did say you had a problem with words, but it is hard for me to interpret “X” as an aspect of the environment, followed immediately by “X” as a function. When “X” is a perceived position, I don’t understand how it can be a function rather than a variable that is the output value from a perceptual function. And though it is true that functions can be varied, they usually are not. More normally the operation of a function is altered by varying parameters, not by changing the function.

> MT: I am also confused by your use of "the" in "THE controlled variable is the difference" followed by “so” nothing else is being controlled, as though in a hierarchical structure all the control systems below THE controlled variable were shut off.

> MT: But I guess you are in process of devising an alternative to the kinds of control system structures considered by Powers. That’s a very laudable enterprise, particularly if your results fit what organisms do better than the Powers structure does. Go to it. I’ll be interested in your progress. But my comments will continue to be based on comparing your ideas with those of Powers.

AM:
Well, I guess we are going to continue not understanding each other's position no matter how many words we use.

And I do get sloppy using words, I'll give you that.

Next up, robots! :smiley:

Best,

Adam