Prediction (Re: short non-technical summary)

[Rick Marken 2018-07-15_12:05:55]

···

[Bruce Nevin 2018-07-12_09:59:50 ET]

BN: ‘Prediction’ has become a loaded term in PCT discourse, apt to be a trigger for dismissal without further examination, but I agree with you, this looks worthy of closer consideration.

RM: Not at all. “Prediction” is clearly something people do so it’s a behavior that PCT should be able to explain. And this explanation will be in terms of the perceptual variables that are being controlled when people are doing the behavior that we see as “predicting”.

RM: Prediction is a problem for PCTers only when it is thought to be a necessary component of a control system. The control systems in the PCT model work sans prediction (of the future state of the controlled input or of disturbances to that input). There are situations where the fit of the PCT model to behavior can be improved by incorporating prediction into the model. For example, the behavior of tracking a “predictable” target, such as a target moving in a sinusoidal trajectory, is improved by incorporating prediction of future target position into the model. But this doesn’t mean that prediction is actually involved in the controlling done by the person doing the tracking. There is evidence that tracking a predictable target involves control of a higher level perception (an in-phase relationship between the harmonic motion of target and cursor), this explanation being consistent with the hierarchical control model of PCT. But this is still an area where research could clarify things.
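[Editor's note: a control loop that works sans prediction, as RM describes, can be sketched in a few lines. The loop form, gain, and slowing values below are illustrative assumptions, not anything specified in the thread; the only point is that output is driven entirely by present error, with no look-ahead at the target.]

```python
import math

# Illustrative sketch: an error-driven control loop tracking a sinusoidal
# target with no prediction of the target's future position or of
# disturbances. Gain, slowing, and time-step values are assumptions.
dt = 0.01        # simulation time step (s)
gain = 100.0     # output gain
slowing = 1.0    # output slowing factor (s)
cursor = 0.0     # controlled perception (cursor position)
output = 0.0
abs_errors = []

for step in range(2000):
    t = step * dt
    target = math.sin(2 * math.pi * 0.2 * t)  # "predictable" target
    error = target - cursor                   # reference minus perception
    # leaky-integrator output driven only by the *present* error
    output += (gain * error - output) * (dt / slowing)
    cursor = output                           # environment feedback function
    abs_errors.append(abs(error))

mean_err = sum(abs_errors) / len(abs_errors)
print(f"mean absolute tracking error: {mean_err:.4f}")
```

Despite never computing where the target will be next, the loop keeps the tracking error small; adding a predictor could improve the fit to human data, but, as RM says, good control does not require it.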

BN: “Every want is a prophesy.”

RM: In PCT, every want is a reference signal, which is best viewed as a demand (for a particular state of a perceptual variable), not a prophesy.


BN: In the same way, an expectation that no one understands PCT correctly, taken as a stable part of the perceived environment, can become a part of the environmental feedback function for controlling other perceptions, and its destabilization may actually be resisted.

RM: Actually, such an expectation becomes a reference specification, so that one with that “expectation” is actually controlling for having people not understand PCT. Or so said Bill about my expectation that someone who had been on CSGNet for years would never really get it. So I no longer expect that of anyone. I just see if what people say about PCT seems to be correct.

Best

Rick

Looking back over the past almost 30 years I am wondering if this might sometimes have been an instance of collective control. I have no doubt that it has been sometimes an instance of individual control, because I have seen it in myself. Perceptions controlled by means of this perceived environmental stability (“others do not understand PCT correctly”) include perceiving myself (and perceiving myself being perceived) as part of the in-group rather than those “others”, and probably yes some of the ‘invigorating’ body states associated with conflict, so you are certainly not alone in that, Rick.

Warren has some remarkable skills in this area, which I admire and which I want to observe more closely in the hope of developing more refined and accurate perceptual input functions (‘recognizers’) for them, so that maybe I might improve my control of like perceptions. PCT learning theory applied – what a concept!

/Bruce

On Thu, Jul 12, 2018 at 9:19 AM Bartley Madden bartjm43@gmail.com wrote:

I’ll try to make the Evanston conference.

Here is an excellent summary of a trend in neuroscience to treat the brain as a prediction machine that minimizes error.

https://www.quantamagazine.org/to-make-sense-of-the-present-brains-may-predict-the-future-20180710/

Screams out PCT.

Is anyone in PCT working with this stuff??

Bart


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

On Fri, Apr 27, 2018 at 8:53 PM, Bruce Nevin bnhpct@gmail.com wrote:

Ah, yes, I used the address that gmail pulled out of its memory, and I should have copied the one from Dag’s fwd.

The conference is earlier in October, the 12th and 13th. Hope you can make it. Good luck with your worthy legislative agenda.

/Bruce

On Fri, Apr 27, 2018 at 8:33 PM, Dag Forssell dag@livingcontrolsystems.com wrote:

Bart,

I forwarded your email but not your gmail address. Bruce Nevin must have
had an old yahoo address. Interestingly enough, you replied using the
yahoo address.

With this mail, Bruce gets your gmail address.

I am glad I referred you to Bruce. I wish you could benefit from my
presentation. It is carefully structured to explain to a lay audience –
anyone – and get the significance of PCT across.

At 04:43 PM 4/27/2018, Bart Madden wrote:

Bruce

This is helpful. Thank you.

I am not sure if I can make the PCT conference.

I am committed to a major campaign now to pass my Free To Choose Medicine proposal in the U.S.

I know that I’ll be going to make presentations to large groups in late Oct and Nov … don’t know exact dates yet.

FTCM has some “control” aspects to it …

An early version of these ideas was translated into Japanese and apparently played an important role in Japan’s 2014 passage of FTCM-type legislation for regenerative medicine drugs. This and other new developments are reported in my new 3rd edition FTCM book:

https://www.amazon.com/Free-Choose-Medicine-Better-Sooner/dp/1934791679/ref=sr_1_1?ie=UTF8&qid=1524872193&sr=8-1&keywords=free+to+choose+medicine

Finally, please use my gmail address noted above.

Bart


On Fri, 4/27/18, Bruce Nevin bnhpct@gmail.com wrote:

 Subject: short non-technical summary
 To: “Bart Madden” bartmadden@yahoo.com
 Cc: “Dag Forssell” dag@livingcontrolsystems.com
 Date: Friday, April 27, 2018, 6:05 PM

Hi, Bart,

As Dag has told you, he passed your email on to me. You say you need a one-sentence non-technical statement about PCT. Your first cut:

Human behavior is best understood, not as responses to stimuli, but as taking actions to control the perceptions of variables that are important in keeping us on track to achieve our goals.

The best way to say it is the best way for your reader to understand it, and that depends on who you’re talking to. This talks to someone who presupposes that behavior is responses to stimuli. But for your intended “nontechnical” audience that might not be so. The typical response to me is “well, isn’t that kind of obvious?”

Your sentence above has the technical words responses, stimuli, perceptions, variables, and (though the reader doesn’t know it yet) control.

Where does the limit to one sentence come from? Whether one sentence or six, “nontechnical” calls for simple, direct, familiar language, and trying to shrink-wrap too much in one bundle makes that harder to do.

Here are five sentences, for example: To state the blindingly obvious, when something isn’t the way we want it to be, we act so as to fix that. Can you think of any human purposes that don’t follow that general rule? But all too often we get at cross purposes. We can even get at cross purposes with ourselves. With PCT, learn how to see the purposes behind behavior and resolve conflicts.

I’m not proposing that you use this. You need your own words that say what your book is about in a direct, non-technical way.

I hope this is helpful, Bart. Are you planning on coming to the conference at Northwestern in October?

/Bruce


[Martin Taylor 2018.07.15.15.53]

[Rick Marken 2018-07-15_12:05:55]

Huzzah! I love it when Rick posts something with which I entirely agree. The appearance to an external observer/experimenter of prediction can be produced internally in at least three independent ways, of which Rick mentioned two, the third being the Powers “Artificial Cerebellum”.

  RM: There is evidence that tracking a predictable target involves control of a higher level perception (an in-phase relationship between the harmonic motion of target and cursor), this explanation being consistent with the hierarchical control model of PCT. But this is still an area where research could clarify things.

Yes, indeed!

Martin

[Bruce Nevin 2018-07-15_16:17:11 ET]

Thank you, Rick and Martin. I am glad to have my concern corrected.


[Rupert Young (2018.07.17 16.20)]

[Rick Marken 2018-07-15_12:05:55]

I think the distinction between a prediction and a goal (want) is an important one to make and is a crux of the difference between PCT and CCT (computational control theory; the conventional approach). However, the difference is quite subtle, and I find it tricky to put into words. So how can we unequivocally describe that difference? [I think part of the reason is that “prediction” is used in very loose terms; sometimes it can mean a future value derived from a computational model, sometimes it can mean preparing before an event (getting out keys before opening a door).] The conceptual difference is important, I think, because it leads on to fundamentally different architectures of behavioural systems: definition of outputs from computational models v. feedback control of input variables.

A while back, at home, I went to pick up a can of cola from the table to throw in the bin. As I did so the can slipped out of my fingers and fell to the floor, spilling its contents. I hadn’t applied enough force in my grasp to hold on to it. I had thought it was empty, but it was still half-full.

So, my interpretation of this was that I had set too low a force reference for the grasping system. Now, once I realised the can wasn’t empty I may have changed the reference for the grasp force, but, as living systems are not able to change outputs instantaneously, there was insufficient time to increase the perceived force in the half-second that the can was slipping. I also assume that there is a (learned) relationship (memory?) between a can being empty and the grasp reference, as there is between a non-empty can and a different value for the grasp reference. If I had thought the can was half-full the initial grasp reference would have been higher and I would have squeezed harder to begin with.

Does this sound like a valid interpretation in terms of PCT?

CCT people, I think, would interpret this as making a “prediction” about what force would be required to pick up an empty can. And that the system has learned a “model” of the relationship between can-fullness and force.

So, how would we describe this example as not being a prediction?

A goal (want) specifies what the end result should be, but not how to achieve it. A prediction specifies how to achieve something (but fails due to unknown disturbances). Is that the difference?

Is the terminology of an “expectation” in PCT sufficient to distinguish it from a “prediction”?

Regards,

Rupert

[Bruce Nevin 2018-07-17_15:16:04 ET]

This is nice, Rupert.

RY: my interpretation of this was that I had set too low a force reference for the grasping system. Now, once I realised the can wasn’t empty I may have changed the reference for the grasp force, but, as living systems are not able to change outputs instantaneously, there was insufficient time to increase the perceived force in the half-second that the can was slipping. I also assume that there is a (learned) relationship (memory?) between a can being empty and the grasp reference, as there is between a non-empty can and a different value for the grasp reference. If I had thought the can was half-full the initial grasp reference would have been higher and I would have squeezed harder to begin with.

RY: Does this sound like a valid interpretation in terms of PCT?

It does to me.

RY: CCT people, I think, would interpret this as making a “prediction” about what force would be required to pick up an empty can. And that the system has learned a “model” of the relationship between can-fullness and force.

RY: So, how would we describe this example as not being a prediction?

RY: A goal (want) specifies what the end result should be, but not how to achieve it. A prediction specifies how to achieve something (but fails due to unknown disturbances). Is that the difference?

RY: Is the terminology of an “expectation” in PCT sufficient to distinguish it from a “prediction”?

Rupert, what I was playing with is how easy it is to do a Necker’s Cube flip from one way of talking about this to the other. A reference signal is a prediction when control is successful and succeeding reference signals in a Sequence or Program control system depend upon the prior references continuing to be controlled successfully. You might say that the predicting is being done at the Sequence or Program level. Ad hoc interruptions occur at those levels. I’ll just toss that empty can in the bin while I’m going over here to … oops!

How often is our control at Category/Relationship and lower levels not in the course of ongoing Sequence and Program control? So there, I think, is where the experience of expectation arises.

That’s one part of your question, where ways of talking about it can flip from one Necker’s cube orientation to the other. The other part has to do with how a simulation or working model is implemented, where the verbal rubber meets the road of demonstration.

There’s a similar Necker’s cube flip around the notion of the brain constructing a model of the world. In PCT, the ‘model’ is not a symbolic representation; the ‘model’ is immanent in what perceptual input functions exist, how their perceptual signals are connected to comparators, what error outputs feed into the reference inputs of those comparators, the weighting of these diverse inputs and outputs, the density at each synapse of receptors for this or that neurochemical, and doubtless much more. This immanent ‘model’ of the world is very different from computational notions of frames and representations. That kind of model supports “definition of outputs”, as you say.

So that’s where it seems to me appropriate to engage them, not on their ground of spinning nice word-pictures in the air, where the words and word-linked concepts are too easily interconvertible. Where does their mode of implementation by defining outputs fail? When we see BD ‘dogs’ trotting up a uniform hillside, is that actually a relatively controlled environment, one that an uninformed military officer will assume can be generalized?


[Rupert Young (2018.07.21 10.05)]

[Bruce Nevin 2018-07-17_15:16:04 ET]


BN: Rupert, what I was playing with is how easy it is to do a Necker’s Cube flip from one way of talking about this to the other. A reference signal is a prediction when control is successful and succeeding reference signals in a Sequence or Program control system depend upon the prior references continuing to be controlled successfully. You might say that the predicting is being done at the Sequence or Program level. Ad hoc interruptions occur at those levels. I’ll just toss that empty can in the bin while I’m going over here to … oops!

Yes, I think you are right. It does seem easy (and valid) to talk about it either way. But is there actually an underlying fundamental difference? I think PCT would say there is. So, if there is any literature that discusses the difference between a goal and a prediction, that would be most welcome.

Regards,
Rupert


[Rupert Young (2018.07.21 10.40)]

Hey, Rick!

I’d be very interested in your views on this.

[Rupert Young (2018.07.17 16.20)]

Does this sound like a valid interpretation in terms of PCT? Does this situation require a learned association between an empty can and an initial grasp reference (a memorised value of the reference)? So, this association is a sort of “model” of what to do in a particular situation (or rather what to perceive). [I’m going to call it a “bond” as I don’t like the word “model”, due to its conventional robotics connotation as a model of the external world.]

Initially a system doesn’t have this bond, which invokes a particular value of a reference. But it is learned with experience. The consequence of learning this bond is that the system is more efficient, in that it doesn’t have to start from scratch when performing a task. In the can case, if I didn’t assume anything about the weight of the can I would try to pick it up in a much slower way until I was exerting the appropriate force. If it is empty, and I assume so, then I can pick it up in a much quicker way. Though in this case my assumption was wrong. [I guess control of speed may be involved here, which I didn’t include above.]

Does this make sense?

Regards,
Rupert


[Rick Marken 2018-07-22_10:06:27]

···

[Rupert Young (2018.07.21 10.40)]

Hey, Rick!

I’d be very interested in your views on this.

RM: Yes, me too;-)

RY: A while back, at home, I went to pick up a can of cola from the table to throw in the bin. As I did so the can slipped out of my fingers and fell to the floor, spilling its contents. I hadn’t applied enough force in my grasp to hold on to it. I had thought it was empty, but it was still half-full.

RY: So, my interpretation of this was that I had set too low a force reference for the grasping system. Now, once I realised the can wasn’t empty I may have changed the reference for the grasp force, but, as living systems are not able to change outputs instantaneously, there was insufficient time to increase the perceived force in the half-second that the can was slipping. I also assume that there is a (learned) relationship (memory?) between a can being empty and the grasp reference, as there is between a non-empty can and a different value for the grasp reference. If I had thought the can was half-full the initial grasp reference would have been higher and I would have squeezed harder to begin with.

RM: I have certainly done similar things. My take on this is a bit different than yours. I would say that you were controlling for an event perception: throwing away an empty can. This event is created by varying a sequence of lower level references for perceptions of grasping, lifting and tossing. These references are varied by the event control system in a way that is consistent with producing that event. So the grasping reference is varied to bring about a perception of the amount of pressure that would be appropriate for lifting an empty can; then the lifting reference specifies upward movement of the arm. But in this case the can drops due to the grasp being insufficient. So the event control system never gets to the point of varying the tossing reference and, thus, doesn’t get the specified perception of the can being thrown away. Indeed, a high level system “calls off” control of the “throwing empty can away” event and starts a new event – clean up the mess.
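[Editor's note: this event-level account might be caricatured in a few lines. The step names, weights, and failure rule below are hypothetical, invented only to show the structure: the event system steps through lower-level references, and a higher-level system calls off the event and starts a new one when a step fails.]

```python
# Hypothetical sketch (names and thresholds assumed, not from the post):
# an event-control system varies lower-level references in sequence
# (grasp, lift, toss); a higher-level system calls off the event and
# starts a new one ("clean up the mess") when the grasp proves insufficient.
def run_event(can_weight, grasp_reference):
    for step in ("grasp", "lift", "toss"):
        if step == "lift" and grasp_reference < can_weight:
            return "clean up the mess"   # event called off mid-sequence
    return "empty can thrown away"

print(run_event(can_weight=0.2, grasp_reference=2.0))  # believed empty: ok
print(run_event(can_weight=5.0, grasp_reference=2.0))  # half-full: slips
```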

RM: The error in the event control system was created, I would argue, because you were controlling for a perception of the wrong event – throwing away an empty can. You might have been controlling for this event because producing the perception of this event was part of control of a program called “cleaning up”. And controlling for “cleaning up” involves controlling for events like “throwing away empty cans”, not “throwing away full cans”. The can was probably in a place, and/or there at a time, when it could be seen as an empty can – trash – and, thus, a disturbance to “cleaning up”.

RM: So I wouldn’t say the error was a result of an incorrect prediction of the forces that should be used to lift the can. Rather, I would say it was a result of controlling for the “wrong” higher level perception (throwing away an empty can) because controlling for that event was the “right” thing to do as part of control of a still higher level perception (“cleaning up”).

RM: This discussion of prediction in control led me to realize that, if prediction is involved at all in control then, per PCT, it is involved on the input and not the output side of a control system. In essence, “prediction” refers to computing what the future state of a variable will be based on its present and past states: that is, x’(t+1) = f(x(t), x(t-1),…x(t-n)), where x’(t+1) is the predicted future state of variable x and x(t), x(t-1),…x(t-n) are the present and prior states of x. In PCT, there is no prediction of output; output is simply driven by error, and this results in very good control without prediction, which is usually not feasible anyway; indeed, prediction can actually interfere with error driven control. Since reference signals are the outputs of higher level systems, in PCT there is no prediction in the setting of reference signals either; reference signals are not predicted, nor are they predictions. Reference signals are error driven specifications for the states of perceptions.
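[Editor's note: the formula x’(t+1) = f(x(t),…x(t-n)) can be instantiated with a concrete choice of f. The linear extrapolation below is an illustrative assumption, not something specified in the post; it just makes the "future state from present and past states" idea executable.]

```python
# Illustrative instance of x'(t+1) = f(x(t), ..., x(t-n)): a predicted
# future value computed from present and past states. The choice of f
# (linear extrapolation from the two most recent samples) is an assumption.
def predict_next(history):
    """Extrapolate one step ahead from the two most recent samples."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

print(predict_next([0, 1, 2, 3]))  # constant slope extrapolates to 4
```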

RM: However, I think something like prediction could be seen as being involved in the computation of higher level perceptions. These higher level perceptions – such as sequences, events and programs – are defined over time; sometimes over rather long periods of time. So the process of perceiving the state of such perceptual variables occurs over time, and each step in the process of producing such a perception could be seen as the basis for a prediction of the next step.

RM: For example, in my “Hierarchical Behavior of Perception” demo (http://www.mindreadings.com/ControlDemo/Hierarchy.html), in order to control the sequence perception, keeping it “small, medium, large”, you have to be able to perceive the state of the sequence that is occurring. So the perceptual function has to perceive that a small object has occurred and then “predict” that a medium object will occur. If that happens then this might be the sequence you want, but the perceptual function then has to see that a large object has occurred next. As long as the sequence of objects actually occurs, the perceptual function produces an output that means that the “small, medium, large” sequence is occurring. Powers describes a neural circuit that can perceive such sequences in the “Control of Sequence” chapter of B:CP.
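[Editor's note: a toy version of such a sequence-perceiving function, written as a step-counting sketch rather than as Powers’ actual neural circuit, might look like the following. Each match advances the function’s internal progress, which is what makes each step look like a “prediction” of the next element.]

```python
# Hypothetical sketch of a "small, medium, large" sequence perceiver, in the
# spirit of (but not identical to) the circuit Powers describes in B:CP.
TARGET = ["small", "medium", "large"]

def perceive_sequence(events):
    """Return 1.0 if the target sequence has just completed, else 0.0."""
    progress = 0           # how much of the sequence has been seen so far
    signal = 0.0
    for e in events:
        if e == TARGET[progress]:
            progress += 1  # each match "predicts" the next element
            if progress == len(TARGET):
                signal = 1.0
                progress = 0
        else:
            progress = 1 if e == TARGET[0] else 0
            signal = 0.0
    return signal

print(perceive_sequence(["small", "medium", "large"]))  # sequence occurred
print(perceive_sequence(["small", "large", "medium"]))  # sequence broken
```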

RM: I don’t think circuits like the one Powers describes for sequence perception actually involve prediction in the sense that I described above (where a predicted future state of a variable is derived from its present and past states), but I think that we consciously experience higher level perceptions that develop over time as involving prediction. This is my experience when I play racquetball, where shots often ricochet off two or three walls before bouncing on the floor. I am now able to get to the place where a ricochet shot is going to land so that I am in position to take my shot. If I become conscious of what I’m doing (instead of just staying in the zen of it) then it seems to me like I am making this amazing prediction of where a complexly ricocheting shot will end up. But I think what is actually happening is that I have learned how to perceive what is basically a sequence perception (of the temporal sequence of angles of a ricocheting ball) and control for being located at the end of that sequence.

RM: Anyway, that’s my take on it. I would say that the way to develop robots that seem to have amazing prediction abilities is to develop robots that have amazing perceptual abilities – the ability to perceive and thus control the higher level perceptions like sequences, programs and principles that we know that people control.

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[Rupert Young (2018.07.22 20.45)]

[Rick Marken 2018-07-22_10:06:27]

Thanks!

Sure. But the question I am trying to get to is, why was the grasping reference set such that the grasp was insufficient? What is the relation between the empty-can event and the grasping reference that was used? How did that relation come about?

I didn't say it was a prediction either, in case you thought so.

Yep, certainly "prediction" would not apply to output. But I don't think it needs to apply to input either, within PCT. Instead, that place is taken by memory, which is stored values of the perceptions, and the values of their references, that the system needs to perform a task. So, to sweeten my tea I remember that I need to control my perception of adding three sugars. Or, to play a note piano, I have (memorised) a different reference value (for the pressure to perceive) than that to play a note forte.

Is this not the role of (associative) memory within PCT?

RM: This is not *predicting* what the input will be, but specifying what you want the input to be.

Yes, but is that not memory? In other words, knowledge gained through experience, in this case, that when the ball is hit in a certain way off one wall it will end up bouncing off the third wall. So, you control your position according to your memory/knowledge of what balls do.

Quite!

Regards,
Rupert

···

[Rupert Young (2018.07.21 10.40)]

Hey, Rick!

I’d be very interested in your views on this.

RM: Yes, me too;-)

RY: A while back, at home, I went to pick up a can of cola from the table to throw in the bin. As I did so the can slipped out of my fingers and fell to the floor spilling its contents. I hadn’t applied enough force in my grasp to hold on to it. I had thought it was empty, but it was still half-full.

RY: So, my interpretation of this was that I had set too low a force reference for the grasping system. Now, once I realised the can wasn’t empty I may have changed the reference for the grasp force, but, as living systems are not able to change outputs instantaneously, there was insufficient time to increase the perceived force in the half-second that the can was slipping. I also assume that there is a (learned) relationship (memory?) between a can being empty and the grasp reference, as there is between a non-empty can and a different value for the grasp reference. If I had thought the can was half-full the initial grasp reference would have been higher and I would have squeezed harder to begin with.

RM: I have certainly done similar things. My take on this is a bit different than yours. I would say that you were controlling for an event perception: throwing away an empty can. This event is created by varying a sequence of lower level references for perceptions of grasping, lifting and tossing. These references are varied by the event control system in a way that is consistent with producing that event. So the grasping reference is varied to bring about a perception of the amount of pressure that would be appropriate for lifting an empty can; then the lifting reference specifies upward movement of the arm. But in this case the can drops due to the grasp being insufficient.

So the event control system never gets to the point of varying the tossing reference and, thus, doesn’t get the specified perception of the can being thrown away. Indeed, a high level system “calls off” control of the “throwing empty can away” event and starts a new event – clean up the mess.

RM: The error in the event control system was created, I would argue, because you were controlling for a perception of the wrong event – throwing away an empty can.

You might have been controlling for this event because producing the perception of this event was part of control of a program called “cleaning up”. And controlling for “cleaning up” involves controlling for events like “throwing away empty cans”, not “throwing away full cans”. The can was probably in a place, and/or there at a time, when it could be seen as an empty can – trash – and, thus, a disturbance to “cleaning up”.

RM: So I wouldn't say the error was a result of an incorrect prediction of the forces that should be used to lift the can. Rather, I would say it was a result of controlling for the “wrong” higher level perception (throwing away an empty can) because controlling for that event was the “right” thing to do as part of control of a still higher level perception (“cleaning up”).

RM: This discussion of prediction in control led me to realize that, if prediction is involved at all in control then, per PCT, it is involved on the input and not the output side of a control system. In essence, “prediction” refers to computing what the future state of a variable will be based on its present and past states: that is, x’(t+1) = f(x(t), x(t-1),…x(t-n)), where x’(t+1) is the predicted future state of variable x and x(t), x(t-1),…x(t-n) are the present and prior states of x. In PCT, there is no prediction of output; output is simply driven by error and this results in very good control without prediction, which is usually not feasible anyway; indeed, prediction can actually interfere with error-driven control. Since reference signals are the outputs of higher level systems, in PCT there is no prediction in setting reference signals either; reference signals are not predicted, nor are they predictions. Reference signals are error-driven specifications for the states of perceptions.


[Rick Marken 2018-07-22_14:31:04]

[Rupert Young (2018.07.22 20.45)]

RM: My take on this is a bit different than yours. I would say that you were controlling for an event perception: throwing away an empty can. This event is created by varying a sequence of lower level references for perceptions of grasping, lifting and tossing. These references are varied by the event control system in a way that is consistent with producing that event. So the grasping reference is varied to bring about a perception of the amount of pressure that would be appropriate for lifting an empty can; then the lifting reference specifies upward movement of the arm. But in this case the can drops due to the grasp being insufficient.

RY: Sure. But the question I am trying to get to is, why was the grasping reference set such that the grasp was insufficient?

RM: My theory (which is just a theory; we'd have to test this) was that you were controlling for the event "throwing away an empty can." I was assuming that the grasping phase of that event required a perception of a looser grasp than what would have been required had the grasping been part of a different event, such as "lifting the (possibly full) can".

RM: The error in the event control system was created, I would argue, because you were controlling for a perception of the wrong event -- throwing away an empty can.

RY: What is the relation between the empty-can event and the grasping reference that was used? How did that relation come about?

RM: By "empty can event" I presume you are referring to the fact that the can (which was not actually empty) slipped from your grasp as you began to lift it. The relation of the grasping reference to that slip was that the lifting phase of producing the event perception "throwing away an empty can" was initiated while the grasp was too weak, the weakness of the grasp being a result of the fact that you were controlling for a perception of "throwing away an empty can" rather than "throwing away a filled can".
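
RM: The slip itself can be put in simple static terms (the masses, grip force, and friction coefficient below are illustrative assumptions, not measurements): the grasp reference appropriate to the "empty can" version of the event supplies too little friction to support a can that is actually half full.

```python
# Illustrative numbers only: a grasp reference tuned for the "throwing
# away an empty can" event supplies too little friction for a can that
# is actually half full, so the lift begins while the grip is too weak.

MU = 0.6          # assumed friction coefficient, fingers on aluminium
G = 9.81          # gravitational acceleration, m/s^2

def holds(grip_force_n, mass_kg):
    """True if friction from the grip can support the can's weight."""
    return MU * grip_force_n >= mass_kg * G

grip_for_empty = 0.5                        # N, reference set for an empty can
empty_mass, half_full_mass = 0.015, 0.18    # kg (assumed)

print(holds(grip_for_empty, empty_mass))      # True: the intended event
print(holds(grip_for_empty, half_full_mass))  # False: the can slips
```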

RM: This discussion of prediction in control led me to realize that, if prediction is involved at all in control then, per PCT, it is involved on the input and not the output side of a control system...

RY: Yep, certainly "prediction" would not apply to output. But I don't think it need to apply to input either, within PCT.

RM: Yes. The input function model I was thinking of -- the model of sequence perception on pp. 144-145 of B:CP (2nd Edition) -- does not involve prediction in the sense of forecasting a future state of a variable based on its present and past states.

RY: Instead, that place is taken by memory, which is stored values of the perceptions, and the values of their references, that the system needs to perform a task.

RM: Yes, the model of a sequence perception function on pp. 144-145 of B:CP (2nd Edition) involves memory in the form of reverberating neural circuits. And certainly memory is involved in varying the references to lower level systems appropriately to produce the elements of the sequence.

RM: I don't think circuits like the one Powers describes for sequence perception actually involve prediction in the sense I described above (where a predicted future state of a variable is derived from its present and past states), but I think that we consciously experience higher level perceptions that develop over time as involving prediction. This is my experience when I play racquetball, where shots often ricochet off two or three walls before bouncing on the floor. I am now able to get to the place where a ricochet shot is going to land so I am in position to take my shot. If I become conscious of what I'm doing (instead of just staying in the zen of it) then it seems to me like I am making this amazing prediction of where a complexly ricocheting shot will end up. But I think what is actually happening is that I have learned how to perceive what is basically a sequence perception (of the temporal sequence of angles of a ricocheting ball) and control for being located at the end of that sequence.

RY: Yes, but is that not memory?

RM: The perceptual function that produces the controlled perception (of the shape of a sequence of ricochets) includes memory circuits (as in the sequence perception function on pp. 144-145 of B:CP (2nd Edition)). Indeed, I'm sure any perceptual variable that is defined over time (like words, golf swings, musical phrases) and the output functions that vary the references for the lower perceptions that go into producing these higher level perceptions (such as the variations in references for the pressure used when pressing piano keys to control the time varying dynamics of a musical phrase) involve memory.

RY: In other words, knowledge gained through experience, in this case, that when the ball is hit in a certain way off one wall it will end up bouncing off the third wall. So, you control your position according to your memory/knowledge of what balls do.

RM: I think the knowledge gained from experience is knowledge of what higher level perceptual variables to control in order to control for the perception of yourself in position to hit the ball. The memory is embedded in the higher level perceptual functions that produce the perceptual variables to be controlled (in this case the perception of your relationship to the sequence of ricochets that will unfold) and in the output functions that vary the lower level references appropriately as the means of keeping the higher level perception under control.

RM: Anyway, that's my take on it. I would say that the way to develop robots that seem to have amazing prediction abilities is to develop robots that have amazing perceptual abilities -- the ability to perceive and thus control the higher level perceptions like sequences, programs and principles that we know that people control.

RY: Quite!

RM: Yes, it will not be an easy job developing such perceptual functions but I think that's where the money is, so to speak!
Best
Rick


Hi Rick, I like your take on this!


[Rupert Young (2018.07.26 20.20)]

[Rick Marken 2018-07-22_14:31:04]

These aren’t addressing my question, so perhaps I am not asking it well; let me rephrase. Let’s forget about whether it slipped. If I am controlling for an empty-can event I require a grasp reference of 10, say. If I am controlling for a full-can event I require a grasp reference of 15, say. So, depending upon the event, a different reference would be set. There would seem to be a relationship between the type of event and a grasp reference. Is this because we have learned an associative memory relationship between the higher-level goal and the lower-level reference? That is, the lower-level reference is set through memory rather than as a direct function of the higher-level output.
Regards,
Rupert


Fred Nickols (2018.07.26.1523 ET)

I’ve been following this and I have a related instance that might help. It is the reverse of Rupert’s example. I actually did it myself once and I’ve read about it as a prank. It goes like this:

You go to the refrigerator, open the door and grasp what you believe/expect to be a full quart of milk. Instead, the carton is almost empty and you bang the carton against the top of the refrigerator.

A plain language explanation for this is that you exert the upward force necessary to lift what you believe to be a full quart of milk and it is way too much.

Was I predicting how heavy the milk carton would be? Maybe. Was I exerting force consistent with what I was expecting or believed to be the case? I think so.

In PCT terms, I would probably say something like this: I set a reference level for the upward force I exerted based on (1) my recollection and expectation of how heavy the carton would be and (2) my memories of how much force is required for that weight.

I’m probably way off but that’s the way it looks to me.

···

Regards,

Fred Nickols

Managing Partner

Distance Consulting LLC

“Assistance at A Distance”


[Rick Marken 2018-07-27_21:20:52]

···

On Thu, Jul 26, 2018 at 12:17 PM, Rupert Young csgnet@lists.illinois.edu wrote:

[Rupert Young (2018.07.26 20.20)]

RY: These aren't addressing my question, so perhaps I am not asking it well; let me rephrase.

RY: Let's forget about whether it slipped. If I am controlling for an empty-can event I require a grasp reference of 10, say. If I am controlling for a full-can event I require a grasp reference of 15, say. So, depending upon the event, a different reference would be set. There would seem to be a relationship between the type of event and a grasp reference. Is this because we have learned an associative memory relationship between the higher-level goal and the lower-level reference? That is, the lower-level reference is set through memory rather than as a direct function of the higher-level output.

RM: I don’t think of learning as involving specific associations between higher level goals and lower level references. I think we learn, through reorganization, the functional relationships between error in higher level systems and the outputs of these systems that are the references for lower level systems that produce good control. So an error in the “throw away an empty can” event control system first leads to increasing the lower level reference for the tightness of “grasping the can”; but that error doesn’t lead to as much of an increase in the “grasping the can” tightness reference as the same error would lead to in the “throw away a full can” event control system. I guess you could call this associative memory – where particular values of error are being “associated” with particular values of output. But I like thinking of it in terms of learning functions rather than associations.
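
RM: A minimal sketch of the distinction (the gains and error value are assumed for illustration; the 10 and 15 are your example numbers): the two event systems share the same lower-level grasp system, but reorganization has left them with different output functions – here just different gains – that convert the same error into different grasp references.

```python
# Sketch of the point above: each event control system converts its own
# error into a reference for the lower-level grasp system through a
# learned output *function* (here just a gain), rather than by looking
# up a stored goal-to-reference association. Gains are assumed values.

def grasp_reference(event_error, output_gain):
    """Lower-level grasp reference as a function of higher-level error."""
    return output_gain * event_error

event_error = 1.0   # same amount of error in either event control system

# Reorganization has left the two systems with different output gains:
ref_empty = grasp_reference(event_error, output_gain=10.0)  # "empty can" event
ref_full = grasp_reference(event_error, output_gain=15.0)   # "full can" event
print(ref_empty, ref_full)  # 10.0 15.0
```

The same error produces different grasp references because the learned function differs, not because a goal-to-reference pair was stored and recalled.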

Best

Rick


[Bruce Nevin 2018-07-28_11:07:51 ET]

Rick Marken 2018-07-27_21:20:52 –

I’m puzzled, and may be misunderstanding you.

RM: an error in the “throw away an empty can” event control system first leads to increasing the lower level reference for the tightness of “grasping the can”; but that error doesn’t lead to as much of an increase in the tightness of the “grasping the can” reference as the same error would lead to in the “throw away a full can” event control system.

This sounds like you are positing one control system for throwing away an empty can and another for throwing away a full can.

This implies a proliferation of control systems for ‘throw away X’ where X is an infinite (indefinably large) number of things that might be grasped and thrown away; and a further infinity of control systems for doing other things with that infinite set of things. Obviously, that’s not what you mean. So I’m puzzled.

Rupert Young (2018.07.17 16.20) –

I went to pick up a can of cola from the table to throw in the bin. As I did so the can slipped out of my fingers and fell to the floor spilling its contents. I hadn’t applied enough force in my grasp to hold on to it. I had thought it was empty, but it was still half-full.

In B:CP, Bill proposed that perceptual signals stored in memory in the past become reference values in the present. He did not elaborate this proposal, and I think could not. Rupert’s example involves memory of pressure sensations in the fingers of the hand that is grasping the drink can.

If I pick up an empty can, too tight a grasp crushes it. A paper cup is flimsy compared to a can. If I pick up a full paper cup I use less grasping pressure than if I pick up a full drink can, even though the effort of lifting each is the same; and the difference in grasping effort between an empty cup and a full cup is less than the difference in effort between grasping an empty and a full drink can, even though the lifting effort for an empty/full cup is about the same as that for a correspondingly empty/full can.

(There are other differences depending on perceived circumstances, e.g. I pick up a full cup by the rim if it’s hot.)

Where does the brain/body store memories of the pressure sensations that are controlled while grasping different objects for different purposes, so that appropriate references are set in the present?
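Bill's B:CP proposal can be made concrete, very crudely, as an associative store keyed by the perceived situation, with the remembered perceptual signal replayed as a present-time reference. A toy sketch; every object name and pressure value here is invented for illustration:

```python
# Toy sketch (not from B:CP): remembered pressure signals, keyed by the
# perceived object and its perceived state, are retrieved as present-time
# reference values for the grasp control system. All values are invented.

grasp_memory = {
    ("drink_can", "empty"): 4.0,   # light grasp; too tight crushes the can
    ("drink_can", "full"):  9.0,
    ("paper_cup", "empty"): 2.0,   # flimsier object, less pressure overall
    ("paper_cup", "full"):  3.5,
}

def grasp_reference(obj, perceived_state):
    """Replay a stored perceptual signal as the current grasp reference."""
    return grasp_memory[(obj, perceived_state)]

# Bruce's observation: the empty-vs-full spread is smaller for the cup
# than for the can, even though lifting effort scales the same way.
cup_spread = grasp_reference("paper_cup", "full") - grasp_reference("paper_cup", "empty")
can_spread = grasp_reference("drink_can", "full") - grasp_reference("drink_can", "empty")
assert cup_spread < can_spread
```

Where such a table would live in the nervous system, and how its entries are addressed by higher-level goals, is exactly the question left open.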

···

On Thu, Jul 26, 2018 at 12:17 PM, Rupert Young csgnet@lists.illinois.edu wrote:

[Rupert Young (2018.07.26 20.20)]

RY: These aren't addressing my question, so perhaps I am not asking it
well, let me rephrase.

RY: Let's forget about whether it slipped. If I am controlling for an
empty-can event I require a grasp reference of 10, say. If I am
controlling for a full-can event I require a grasp reference of 15,
say. So, depending upon the event a different reference would be
set. There would seem to be a relationship between the type of event
and a grasp reference. Is this because we have learned an
associative memory relationship between the higher-level goal and
the lower-level reference? That is, the lower-level reference is set
through memory rather than a direct function of the higher-level
output.

RM: I don’t think of learning as involving learning of specific associations between higher level goals and lower level references. I think we learn, through reorganization, the functional relationships between error in higher level systems and the outputs of these systems that are the references for lower level systems that produce good control. So an error in the “throw away an empty can” event control system first leads to increasing the lower level reference for the tightness of “grasping the can”; but that error doesn’t lead to as much of an increase in the tightness of “grasping the can” reference as the same error would lead to in the “throw away a full can” event control system. I guess you could call this associative memory – where particular values of error are being “associated” with particular values of output. But I like thinking of it in terms of learning functions rather than associations.
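Rick's distinction between a stored association and a learned output function can be sketched as two event control systems that differ only in the gain of their learned output functions. The gains and error value below are invented, chosen so the resulting references match the 10 and 15 in Rupert's example:

```python
# Sketch: the lower-level grasp reference is a learned *function* of the
# higher-level event error, not a stored (error, reference) association.
# The two event systems share the function form and differ only in gain.

def output_function(error, gain):
    """Higher-level output, which becomes the lower-level grasp reference."""
    return gain * error

EMPTY_CAN_GAIN = 2.0   # learned: a weak grasp suffices for the empty-can event
FULL_CAN_GAIN = 3.0    # learned: the full-can event needs a tighter grasp

event_error = 5.0      # the event is not yet completed, so error persists

grasp_ref_empty = output_function(event_error, EMPTY_CAN_GAIN)  # 10.0
grasp_ref_full = output_function(event_error, FULL_CAN_GAIN)    # 15.0
```

The same error value yields different references because the learned functions differ, which is the sense in which "association" is replaced by "function" here.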

Best

Rick

Regards,

Rupert


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

          RM: By "empty can event" I presume you are referring to the fact that the can (which was not actually empty) slipped from your grasp as you began to lift it. The relation of the grasping reference to that slip was that the lifting phase of producing the event perception "throwing away an empty can" was initiated while the grasp was too weak, the weakness of the grasp being a result of the fact that you were controlling for a perception of "throwing away an empty can" rather than "throwing away a filled can".

[Rupert Young (2018.07.28 15.15)]

[Rick Marken 2018-07-27_21:20:52]

I don’t see that it is sufficient for the functional relationship to
be between the error and the outputs that are the references (this
may not be what you are saying). After all why would an error of
value X in one situation lead to a different reference from value X
in a different situation? It would seem that the relationship would
need to be between the error of a particular system and the
lower-level. I.e. there is a different relationship between the
error in the empty-can system and the grasping reference and
between the error in the full-can system and the grasping
reference. This would still seem to indicate that there is a
relationship (association) between empty-can and a
particular value of the grasping reference. Whether this is “memory”
or “reorganisation” I don’t know. Though I suspect these may turn
out to be the same thing.
Frankly, though, I think it was dreadfully remiss of Bill not to
leave behind a comprehensive blueprint of the learning and memory
system, which means I am going to have to think for myself!
Regards,
Rupert

···
        RM: I don't think of learning as involving learning of specific associations between higher level goals and lower level references. I think we learn, through reorganization, the functional relationships between error in higher level systems and the outputs of these systems that are the references for lower level systems that produce good control. So an error in the “throw away an empty can” event control system first leads to increasing the lower level reference for the tightness of “grasping the can”; but that error doesn’t lead to as much of an increase in the tightness of “grasping the can” reference as the same error would lead to in the “throw away a full can” event control system. I guess you could call this associative memory – where particular values of error are being “associated” with particular values of output. But I like thinking of it in terms of learning functions rather than associations.


[Rupert Young (2018.07.28 18.10)]

Sounds about right. Though, if it is a memory, I'd say it is a
memory of the feeling of picking it up; a memory of what to
perceive. I also think the initial reference only need be a fairly
rough value that would allow some adjustment, as the estimate of the
object wouldn't be exact; and doesn't need to be, in PCT. Though if the
initial reference was far off then you'd drop it or hit the roof.
Rupert
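Rupert's point that a rough initial reference suffices, because the loop adjusts on-line, can be sketched as a slip-controlling integrator that tightens the grip whenever slip is sensed. The gain and force values are invented:

```python
# Sketch: a rough initial grip force is corrected on-line by a loop that
# controls perceived slip at a reference of zero. All values are invented.

def simulate_grasp(initial_force, required_force, steps=50, gain=0.5):
    """Return the final grip force after the slip-control loop has run."""
    force = initial_force
    for _ in range(steps):
        slip = max(0.0, required_force - force)  # perceived slip (reference = 0)
        force += gain * slip                     # tighten in proportion to error
    return force

# A rough initial reference of 8 still converges on the needed force of 12;
# in reality, if the initial value is too far off, the can drops (or hits
# the roof) before the loop can catch up.
final_force = simulate_grasp(8.0, 12.0)
```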

···

On 26/07/2018 20:28, Fred Nickols
(fwnickols@gmail.com via csgnet Mailing List) wrote:

Fred Nickols (2018.07.26.1523 ET)
In PCT terms, I would probably say something like this: I
set a reference level for the upward force I exerted based on
(1) my recollection and expectation of how heavy the carton
would be and (2) my memories of how much force is required for
that weight.

I’m probably way off but that’s the way it looks to me.


fwnickols@gmail.com

[Rick Marken 2018-08-01_10:28:15]

[Bruce Nevin 2018-07-28_11:07:51 ET]
Rick Marken 2018-07-27_21:20:52 --

BN: I'm puzzled, and may be misunderstanding you.

RM: an error in the "throw away an empty can" event control system first leads to increasing the lower level reference for the tightness of "grasping the can"; but that error doesn't lead to as much of an increase in the tightness of "grasping the can" reference as the same error would lead to in the "throw away a full can" event control system.

BN: This sounds like you are positing one control system for throwing away an empty can and another for throwing away a full can.

RM: Yes, and that seems like a rather inelegant way to conceive of it. I think it would be better to think of "throw away empty can" as one possible state of the reference for an event perception that is the means to controlling a still higher level perception -- which might be called "cleaning up", which probably would be considered a program perception. My aim was simply to try to propose a model to explain the "error" Rupert described in terms of the "normal" operation of a control hierarchy.
Best
Rick

This implies a proliferation of control systems for 'throw away X' where X is an infinite (indefinably large) number of things that might be grasped and thrown away; and a further infinity of control systems for doing other things with that infinite set of things. Obviously, that's not what you mean. So I'm puzzled.

Rupert Young (2018.07.17 16.20) --

I went to pick up a can of cola from the table to throw in the bin. As I did so the can slipped out of my fingers and fell to the floor spilling its contents. I hadn't applied enough force in my grasp to hold on to it. I had thought it was empty, but it was still half-full.

In B:CP, Bill proposed that perceptual signals stored in memory in the past become reference values in the present. He did not elaborate this proposal, and I think could not. Rupert's example involves memory of pressure sensations in the fingers of the hand that is grasping the drink can.
If I pick up an empty can, too tight a grasp crushes it. A paper cup is flimsy compared to a can. If I pick up a full paper cup I use less grasping pressure than if I pick up a full drink can, even though the effort of lifting each is the same; and the difference in grasping effort between an empty cup and a full cup is less than the difference in effort between grasping an empty and a full drink can, even though the lifting effort for an empty/full cup is about the same as that for a correspondingly empty/full can.
(There are other differences depending on perceived circumstances, e.g. I pick up a full cup by the rim if it's hot.)
Where does the brain/body store memories of the pressure sensations that are controlled while grasping different objects for different purposes, so that appropriate references are set in the present?

/Bruce

[Rick Marken 2018-07-27_21:20:52]

[Rupert Young (2018.07.26 20.20)]

RM: By "empty can event" I presume you are referring to the fact that the can (which was not actually empty) slipped from your grasp as you began to lift it. The relation of the grasping reference to that slip was that the lifting phase of producing the event perception "throwing away an empty can" was initiated while the grasp was too weak, the weakness of the grasp being a result of the fact that you were controlling for a perception of "throwing away an empty can" rather than "throwing away a filled can".

RY: These aren't addressing my question, so perhaps I am not asking it well, let me rephrase.

RY: Let's forget about whether it slipped. If I am controlling for an empty-can event I require a grasp reference of 10, say. If I am controlling for a full-can event I require a grasp reference of 15, say. So, depending upon the event a different reference would be set. There would seem to be a relationship between the type of event and a grasp reference. Is this because we have learned an associative memory relationship between the higher-level goal and the lower-level reference? That is, the lower-level reference is set through memory rather than a direct function of the higher-level output.

RM: I don't think of learning as involving learning of specific associations between higher level goals and lower level references. I think we learn, through reorganization, the functional relationships between error in higher level systems and the outputs of these systems that are the references for lower level systems that produce good control. So an error in the "throw away an empty can" event control system first leads to increasing the lower level reference for the tightness of "grasping the can"; but that error doesn't lead to as much of an increase in the tightness of "grasping the can" reference as the same error would lead to in the "throw away a full can" event control system. I guess you could call this associative memory -- where particular values of error are being "associated" with particular values of output. But I like thinking of it in terms of learning functions rather than associations.
Best
Rick

Regards,
Rupert

--
Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

···

On Sat, Jul 28, 2018 at 12:24 AM Richard Marken csgnet@lists.illinois.edu wrote:

On Thu, Jul 26, 2018 at 12:17 PM, Rupert Young csgnet@lists.illinois.edu wrote:

[Rick Marken 2018-08-03_18:08:53]

[Rupert Young (2018.07.28 15.15)]

RM: I don't think of learning as involving learning of specific associations between higher level goals and lower level references. I think we learn, through reorganization, the functional relationships between error in higher level systems and the outputs of these systems that are the references for lower level systems that produce good control. So an error in the "throw away an empty can" event control system first leads to increasing the lower level reference for the tightness of "grasping the can"; but that error doesn't lead to as much of an increase in the tightness of "grasping the can" reference as the same error would lead to in the "throw away a full can" event control system. I guess you could call this associative memory -- where particular values of error are being "associated" with particular values of output. But I like thinking of it in terms of learning functions rather than associations.

RY: I don't see that it is sufficient for the functional relationship to be between the _error_ and the outputs that are the references (this may not be what you are saying).

RM: I think it is what I am saying.

RY: After all why would an error of value X in one situation lead to a different reference from value X in a different situation?

RM: I was imagining a system that was controlling for a "throw out the can" event. The error I was talking about was the error in the event control system that sets the references for the components of the "throw out the can" control system. That error exists as long as that event is not completed. If the can were actually light, the event perception would have been readily produced and there would be no failure to lift the can. When the can is actually heavy, the event control system fails because the "lift can" component of the "throw out the can" event fails to occur.
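The failure mode described here, a lift component launched with a grasp reference suited to the wrong event, can be sketched as a simple threshold check. The friction model and all numbers are invented:

```python
# Sketch: the lift succeeds only if grip friction can support the can's
# actual weight at the moment the lift phase is initiated.

def lift_succeeds(grasp_force, can_weight, friction=1.0):
    """The can stays in hand only if friction * grip matches its weight."""
    return friction * grasp_force >= can_weight

EMPTY_CAN_GRASP_REF = 10.0  # set by the "throw away an empty can" system
FULL_CAN_GRASP_REF = 15.0   # set by the "throw away a full can" system
FULL_CAN_WEIGHT = 15.0

# Controlling for the wrong event: grasp set for an empty can, can is full.
assert not lift_succeeds(EMPTY_CAN_GRASP_REF, FULL_CAN_WEIGHT)
# Controlling for the right event: grasp set for a full can.
assert lift_succeeds(FULL_CAN_GRASP_REF, FULL_CAN_WEIGHT)
```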

RY: It would seem that the relationship would need to be between the error of a particular _system_ and the lower-level. I.e. there is a different relationship between the error in the empty-can system and the grasping reference and between the error in the full-can system and the grasping reference. This would still seem to indicate that there is a relationship (association) between empty-can and a particular value of the grasping reference. Whether this is "memory" or "reorganisation" I don't know. Though I suspect these may turn out to be the same thing.

RM: I believe what you call the "empty can system" is my system that controls for perceiving the event "throw out the can". The event control system sets a reference for grasping the can lightly because a light grasp is part of what is done to produce this event. If the can is actually full then when the system tries to produce this event it will fail. It fails because it was controlling for what turned out to be the wrong event; you don't throw out cans that are filled with liquid.

RY: Frankly, though, I think it was dreadfully remiss of Bill not to leave behind a comprehensive blueprint of the learning and memory system, which means I am going to have to think for myself!

RM: Bill left a pretty good blueprint for the learning system (E. coli reorganization) but only a sketch of how memory might work. But my model of the error of failing to lift a full can doesn't rely on any assumptions about how learning and memory work. It is simply an event control system that is applied in what turned out to be the wrong situation.
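The E. coli reorganization blueprint mentioned here can be sketched as a random-direction parameter search that keeps stepping while intrinsic error falls and "tumbles" to a new random direction when it stops falling. The error function, step size, and target value are invented for illustration:

```python
import random

# Sketch of E. coli-style reorganization: change a parameter in a randomly
# chosen direction; keep going while error decreases, tumble when it does not.

def reorganize(error_fn, param=0.0, steps=200, step_size=0.05):
    random.seed(1)                    # fixed seed for a reproducible illustration
    direction = random.choice([-1, 1])
    prev_error = error_fn(param)
    for _ in range(steps):
        param += direction * step_size
        err = error_fn(param)
        if err >= prev_error:         # error got worse (or no better): tumble
            direction = random.choice([-1, 1])
        prev_error = err
    return param

# Intrinsic error is smallest when the tuned parameter is near 3.0
# (an invented target); the search drifts there without any gradient.
tuned = reorganize(lambda g: abs(g - 3.0))
```

The point of the analogy is that no gradient is computed: improvement is merely detected, and that is enough to bias an otherwise random walk toward lower intrinsic error.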
Best
Rick

···

Regards,
Rupert

--
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery


RY: Frankly, though, I think it was dreadfully remiss of Bill not to leave behind a comprehensive blueprint of the learning and memory system, which means I am going to have to think for myself!

RM: Bill left a pretty good blueprint for the learning system (E. coli reorganization) but only a sketch of how memory might work. But my model of the error of failing to lift a full can doesn’t rely on any assumptions about how learning and memory work. It is simply an event control system that is applied in what turned out to be the wrong situation.

HB: My opinion, Rick, is that the course of thinking is not bad. But I also think that you are wrong, Rick, as you simply don't understand how organisms function. You know the diagram on p. 191… I see that Rupert also thinks that PCT has to be upgraded. The number of »upgraders« is obviously growing.

Boris

···

From: Richard Marken (rsmarken@gmail.com via csgnet Mailing List) [mailto:csgnet@lists.illinois.edu]
Sent: Saturday, August 04, 2018 3:09 AM
To: csgnet
Subject: Re: Prediction (Re: short non-technical summary)

[Rick Marken 2018-08-03_18:08:53]

[Rupert Young (2018.07.28 15.15)]

RM: I don’t think of learning as involving learning of specific associations between higher level goals and lower level references. I think we learn, through reorganization, the functional relationships between error in higher level systems and the outputs of these systems that are the references for lower level systems that produce good control. So an error in the “throw away an empty can” event control system first leads to increasing the lower level reference for the tightness of “grasping the can”; but that error doesn’t lead to as much of an increase in the tightness of “grasping the can” reference as the same error would lead to in the “throw away a full can” event control system. I guess you could call this associative memory – where particular values of error are being “associated” with particular values of output. But I like thinking of it in terms of learning functions rather than associations.

RY: I don’t see that it is sufficient for the functional relationship to be between the error and the outputs that are the references (this may not be what you are saying).

RM: I think it is what I am saying.

RY: After all why would an error of value X in one situation lead to a different reference from value X in a different situation?

RM: I was imagining a system that was controlling for a “throw out the can” event. The error I was talking about was the error in the event control system that sets the references for the components of the “throw out the can” control system. That error exists as long as that event is not completed. If the can were actually light, the event perception would have been readily produced and there would be no failure to lift the can. When the can is actually heavy, the event control system fails because the “lift can” component of the “throw out the can” event fails to occur.

RY: It would seem that the relationship would need to be between the error of a particular system and the lower-level. I.e. there is a different relationship between the error in the empty-can system and the grasping reference and between the error in the full-can system and the grasping reference. This would still seem to indicate that there is a relationship (association) between empty-can and a particular value of the grasping reference. Whether this is “memory” or “reorganisation” I don’t know. Though I suspect these may turn out to be the same thing.

RM: I believe what you call the “empty can system” is my system that controls for perceiving the event “throw out the can”. The event control system sets a reference for grasping the can lightly because a light grasp is part of what is done to produce this event. If the can is actually full then when the system tries to produce this event it will fail. It fails because it was controlling for what turned out to be the wrong event; you don’t throw out cans that are filled with liquid.

RY: Frankly, though, I think it was dreadfully remiss of Bill not to leave behind a comprehensive blueprint of the learning and memory system, which means I am going to have to think for myself!

RM: Bill left a pretty good blueprint for the learning system (E. coli reorganization) but only a sketch of how memory might work. But my model of the error of failing to lift a full can doesn’t rely on any assumptions about how learning and memory work. It is simply an event control system that is applied in what turned out to be the wrong situation.

Best

Rick

Regards,
Rupert

Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
–Antoine de Saint-Exupery