PCT by another name: Generative language framework?

[From Rupert Young (2017.12.05 12.30)]

I was reading the section "A generative framework of language comprehension and production in healthy individuals" of the attached paper (pp. 2-5) and it was very reminiscent of PCT. Particularly this sentence: "So, for example, if we had predicted that our voice would sound loud, but the auditory feedback we receive indicates that it sounds soft, then a relatively large prediction error will be passed up the generative model leading us to speak louder."

Is this just PCT by another name? And they are using "prediction" in
place of "reference"?

Rupert

fnhum-09-00643.pdf (448 KB)

Hi Rupert, they are using the term 'prediction' to mean 'predicted given the selected actions', which isn't what the term means in everyday language, and it makes for both a contrived way of describing perceptual control (when it does describe it at all, as it may here) and for many misunderstandings. But this is what all the hierarchical frameworks besides PCT do, isn't it - the free energy principle, predictive coding, etc.? Karl Friston even told me he thought PCT and predictive coding might be mathematically identical. I don't think we can ever know until they are directly compared. And if they are, then as PCT has been specified since 1960, shouldn't they just give up, accept that the intellectual credit is with Bill, and do PCT research?
Warren


[From Fred Nickols (2017.12.05.0934 ET)]

Close, perhaps, but not quite - at least in my estimation. Had it read as
follows, I would agree:

"So, for example, if we wanted our voice to sound loud but the auditory
feedback we receive indicates that it sounds soft, then an error between
what we hear and what we want to hear, will result in speaking louder."

I think the sticking point for me is in fact the use of "prediction." That signals to me a model of predict, compute, act, assess, and back to predict again in case of error. To me, that's not PCT at all. That said, I'll try reading the attachment in its entirety.

Fred Nickols


[Martin Taylor 2017.12.05.11.45]

[From Rupert Young (2017.12.05 12.30)]

Is this just PCT by another name? And they are using "prediction" in place of "reference"?

If you keep all CSGnet messages, as I do, you might re-read [Martin Taylor 2017.09.15.17.52], reposted 2017.09.19.17.27, which treats the similarities and differences between the Predictive Coding Theory (PreCoT) and the Perceptual Control Theory (PerCoT) circuitry for a control loop, including the imagination connection and the connections between levels. If not, I imagine it is in Dag's archives, or I could send a copy.

Martin

[From Rupert Young (2017.12.05 17.00)]

[Martin Taylor 2017.12.05.11.45]

I can see one "reposted 2017.09.19.17.25" entitled "What is revolutionary about PCT?", is that it?

Rupert

[Martin Taylor 2017.12.05.13.08]

[From Rupert Young (2017.12.05 17.00)]

I can see one "reposted 2017.09.19.17.25" entitled "What is revolutionary about PCT?", is that it?

No. The subject line is Re: Slatestarcodex, and the repost date ends ".27". I guess you don't have it, so here's the relevant part of the text of it. "PreCoT" = Predictive Coding Theory; "PerCoT" = Perceptual Control Theory.

================quote===========

A few weeks ago I offered an alternative circuit for the inter-level connections in PerCoT. I don't remember whether I said so at the time, but it was based on one by Seth and Friston to demonstrate PreCoT. It isn't exactly the same as their diagram, but it is my interpretation of their diagram in PerCoT visual language. I used this figure to show that different physical (neural) connections can provide the same result. In it, the error is routed both up and down (through the output function that provides the next lower layer's reference values).

![nlanhkjcpjgdpklb.png|762x666](upload://3UWLkUZ263dwP3Ocv1gpWP7QJ5C.png)

In the "standard" PerCoT connection (top row), what goes up to the

next level is the perceptual value. In the PreCoT-based circuit what
goes up is the error and the reference, from which the Perceptual
Function can produce the perceptual signal (if that loop requires
it). The left column shows this for the “standard” no-tolerance,
no-imagination condition. The middle column shows how the two
circuits treat tolerance, while the right column shows how they both
treat imagination.

The PreCoT-based circuit can do whatever the standard PerCoT circuit can do, but it can do other things as well, such as inject either reference or error individually back into the upgoing perceptual circuitry. The PreCoT circuit also automatically allows for blending of appropriately weighted sensory and imagination input, as Bruce (Nevin?) was arguing for many months ago. I consider both to be advantages, because we certainly can (at least consciously) perceive reference and error separately from the current perception, and the current perception does use partial input from imagination if the sensory input isn't very clear or is interrupted…

I'm not guaranteeing that my interpretation of PreCoT would be accepted as correct by one immersed in that theory, but I don't think it can be too terribly wrong. I haven't really studied PreCoT beyond reading a few articles. If it is anything like correct, the important difference between PreCoT and PerCoT isn't in what the circuitry does functionally, so much as in the labels attached to the components by the proponents of the two theories.

There is one very big difference not captured in the above diagram, however, and that is in the internal nature of the output functions, which in PreCoT need to perform complex computations in real time to work out what actions to perform to minimize what they call "prediction error", whereas PerCoT requires the output function mainly to provide Gain and, if the environmental feedback path does not provide it, time-binding in the form of integration. In PerCoT, the equivalent of the complicated calculations is largely contained in the reorganized structure of the hierarchy, which determines what actions tend to work best in influencing a controlled perception.

=======end quote=======

Martin
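For concreteness, the equivalence claimed in the quoted passage can be written out from the signal definitions it gives (a sketch, not part of the original post):

```latex
% One level of the hierarchy: reference r, perception p, error e,
% in the PCT comparator convention:
e = r - p

% Standard PerCoT connection: p itself ascends to the next level.
% PreCoT-based connection: r and e ascend instead, and the receiving
% perceptual function can recover the identical signal:
p = r - e
```

So the two wirings carry the same information between levels; they differ only in which combination of the signals travels up, which is presumably why the figure can show them producing the same result.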

[From Rupert Young (2017.12.07 10.00)]

[Martin Taylor 2017.12.05.13.08]

With reference to figure d, or an example, what is it trying to predict, and what is involved in the complex computations PreCoT needs to perform to minimize "prediction error"?

Rupert

[Martin Taylor 2017.12.07.10.15]

[From Rupert Young (2017.12.07 10.00)]

With reference to figure d, or an example, what is it trying to predict, and what is involved in the complex computations PreCoT needs to perform to minimize "prediction error"?

It isn't trying to predict anything. It's a circuit that is mathematically equivalent to (a), connecting the levels a bit differently, but having the same result for the same reason.

The complex calculations are for the different joint angles and muscle tensions in the arm, for example, that are needed to get the hand around a glass and then bring it to the mouth. They cannot take into account disturbances without remaking the calculations in real time. At least, that's what I gather from the little PreCoT literature I have seen.

Martin
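To illustrate the "gain plus integration" side of that contrast with something runnable, here is a minimal sketch of a single PerCoT loop (all names and constants are invented for illustration, not taken from any published model): the output function is nothing but an error-driven leaky integrator, yet the loop cancels a step disturbance it has no model of.

```python
# Minimal sketch of one PerCoT control loop: the output function is
# only gain plus leaky integration; there is no inverse model of the
# environment or of the disturbance.

def run_pct_loop(reference=1.0, gain=50.0, leak=0.05, dt=0.01, steps=2000):
    output = 0.0
    perception = 0.0
    for t in range(steps):
        disturbance = 0.5 if t > steps // 2 else 0.0  # unmodelled step push
        perception = output + disturbance             # environment feedback path
        error = reference - perception                # comparator: e = r - p
        output += dt * (gain * error - leak * output) # leaky integrating output
    return perception, reference - perception

if __name__ == "__main__":
    p, e = run_pct_loop()
    print(f"final perception = {p:.3f}, final error = {e:.4f}")
```

The point of the sketch is the last line of the loop: nothing in it computes joint angles or any other plan; the integrated error simply keeps pushing the output until the perception matches the reference, so the disturbance is absorbed automatically.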

That's one of the main problems I have - the whole field seems to be a misnomer…


[From Rupert Young (2017.12.07 17.40)]

[Martin Taylor 2017.12.07.10.15]

With reference to figure d, or an example, what is it trying to predict, and what is involved in the complex computations PreCoT needs to perform to minimize "prediction error"?

It isn't trying to predict anything. It's a circuit that is mathematically equivalent to (a), connecting the levels a bit differently, but having the same result for the same reason.

Looking at Predictive coding - Wikipedia (https://en.wikipedia.org/wiki/Predictive_coding) it seems that it is about predicting what the sensory input should be. So it has a generative model of what the sensory inputs should be to achieve a task.

If the reality (actual sensory inputs) doesn't match, then there is "prediction error", in which case the model is updated probabilistically so that it predicts better in the future. Qs: If it is not the predictions which are enabling you to perform a task, then what is? How can you carry out a task if there is prediction error?

If the reality does match, and there is no prediction error, then the prediction signals are used rather than the sensory input. Q: Does this require computation of the actions required to realise the sensory inputs?
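A common textbook rendering of that update, sketched here for concreteness (drawn from general predictive-coding expositions, not from the attached paper or anything in this thread), is gradient descent on squared prediction error, with two routes to reducing the same error:

```latex
% Generative model: predicted input \hat{s} = g(\mu), internal state \mu.
\varepsilon = s - g(\mu), \qquad F = \tfrac{1}{2}\,\varepsilon^{2}

% Perceptual route: update the internal state by gradient descent on F.
\dot{\mu} = -\kappa_{\mu}\,\partial F / \partial\mu = \kappa_{\mu}\,\varepsilon\,g'(\mu)

% Active route: change the sensory input itself through action a.
\dot{a} = -\kappa_{a}\,\partial F / \partial a = -\kappa_{a}\,\varepsilon\,\partial s / \partial a
```

On that reading, the answer to the questions above would be that the same error drives both routes: it updates the model, and it drives action on the world, so acting is itself part of minimizing prediction error; and note that the active route does require knowing how action affects the input (the term ∂s/∂a).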

So, the difference with PerCoT seems to be that PreCoT is trying to model what sensory inputs are necessary to perform a task, whereas PerCoT affects the sensory input in order to make it match a desired value. As Martin says, the former would not account for disturbances. Q: Is the difference between a prediction and a goal that the former requires a computational mapping of the action necessary to realise the prediction, whereas with the latter action is varied until the goal is realised?

It would be useful to model a sample problem with both methods: the only one I know of within PreCoT is the mountain car problem.

The complex calculations are for the different joint angles and muscle tensions in the arm, for example, that are needed to get the hand around a glass and then bring it to the mouth. They cannot take into account disturbances without remaking the calculations in real time. At least, that's what I gather from the little PreCoT literature I have seen.

So, the calculations of the actions based upon the sensory predictions?

Now, what's all this got to do with generative language frameworks? I guess they work on the same principle?

Rupert
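Rupert's prediction-versus-goal question can also be put in runnable form. A hypothetical sketch (all names and numbers invented for illustration): an open-loop controller executes an action computed once from an inverse model that ignores the disturbance, while a closed-loop controller simply varies its action until the goal value is perceived.

```python
# Hypothetical contrast between a precomputed (open-loop) action and
# error-driven (closed-loop) action. The 'plant' maps action to
# perception; the disturbance is unmodelled in both cases.

GOAL, STEPS, DT, GAIN = 1.0, 600, 0.01, 40.0

def plant(action, t):
    disturbance = 0.3 if t > STEPS // 2 else 0.0  # push the inverse model ignores
    return action + disturbance

action_open = GOAL   # inverse model: action = goal holds only without disturbance
action_closed = 0.0  # starts ignorant; relies entirely on feedback

for t in range(STEPS):
    p_open = plant(action_open, t)                  # never corrected
    p_closed = plant(action_closed, t)
    action_closed += DT * GAIN * (GOAL - p_closed)  # vary action until goal is perceived

print(f"open-loop final perception:   {p_open:.3f} (goal {GOAL})")
print(f"closed-loop final perception: {p_closed:.3f} (goal {GOAL})")
```

The open-loop run ends 0.3 away from the goal, while the closed-loop run ends on it without ever representing the disturbance - the behavioural difference Martin attributes to remaking the calculations in real time versus negative feedback.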

[Martin Taylor 2017.12.07.14.16]

[From Rupert Young (2017.12.07 17.40)]

Looking at Predictive coding - Wikipedia it seems that it is about predicting what the sensory input should be. So it has a generative model of what the sensory inputs should be to achieve a task.

What is the "it" in your sentence? Syntactically, you seem to be

referring to PreCot rather than to the figure you were asking about,
which would make your message a bit of a non-sequitur.
Pragmatically, your “it” should refer to my diagram, about which you
initially asked. The diagram has nothing to do with PreCot except
that the idea for the diagram was inspired by a paper on which Karl
Friston was a co-author. That’s the only way it connects with
PreCoT. If any of the rest of your message is about the circuit
diagrams that you initially questioned, I’ll be happy to try to
answer. But don’t look to me for expertise on PreCoT.

If you want to know about PreCoT from a PerCoT viewpoint, maybe Warren can answer best, because he has had a lot of dealings with Karl Friston. Or you could look at, for example, slatestarcodex.com.

I'll take the liberty of copying Warren's post from September 16, in which he introduced Slatestarcodex to CSGnet (Warren hates putting ID tags on his messages, so I can't point you to it in the archives any better than that).

I've had a word with these Slate Star people…

http://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/#comment-547161

Here below…

Hi everyone, I really believe that this thread is based on two false premises. First, that understanding depression is the way to help people recover from mental health problems. Depression is just one of many manifestations of chronic distress. We need to understand the principles of what underlies all of these because the categorisation of psychological distress into disorders is a red herring. All of the evidence now points to there being a core process underlying all mental health difficulties. Some of these studies are published in the highest impact psychiatry journals so it's not just me saying it! Second, we need to ask ourselves – what is more fundamental to well-being? Predicting what will happen to you? Or controlling what will happen to you? Regardless of what you predict, if you can take control then your predictions are just that, and may not be reality, or you can make them your reality if you want to. This is control. So we need a unified theory of how control works. The fundamental mechanism of control – in all control systems – is negative feedback – not prediction. Under some circumstances prediction can improve control, but it needs updating as regularly as possible. That is negative feedback and it is the route to recovery and adaptation. Darwin and Wallace saw this a century ago by drawing an analogy between the Watt governor (the negative feedback control system of a steam engine) and evolution by natural selection.

Perceptual Control Theory is the implementation of negative feedback in the nervous system, exploiting its capacity to extract regularities from the environment, organise them in a hierarchy and use actions to make current experience match memories of desired past environmental regularities. Conflict between control systems is the most pernicious reason for loss of control, and mental health problems persist when people are unwilling or unable to sustain their attention on the system governing the conflict long enough to allow spontaneous change to reduce error through reorganisation. Biases in prediction are observed to reduce during this process but this is largely epiphenomenal, and idiosyncratic to the person. The universal principle is to restore control through reorganisation of conflicted control systems.

We use Method of Levels as a universal therapy to help people manage this. See http://www.methodoflevels.com.au. We need to use mathematical/computational theories (of which PCT and PP and natural selection are examples) to push these pivotal changes rather than pondering within our familiar frames of reference…

[From Rupert Young (2017.12.07 20.40)]

[Martin Taylor 2017.12.07.14.16]

What is the "it" in your sentence? [...]

Well, my initial question was "[are they] using "prediction" in place of "reference"?" in the context of generative language frameworks. It was you that brought up PreCoT and sent me the message, with the diagrams, which was a comparison of PerCoT and PreCoT. So, you may understand why I thought the diagram was about PreCoT. And, following from that, the "it" in my sentence referred to PreCoT as I thought you had changed the discussion to that.

Now I am somewhat confused as to what the "Slatestarcodex" message had to do with my initial query.

Anyway. Maybe Warren can shed light on the discussion.

Rupert

[Martin Taylor 2017.12.07.17.31]

It's all too easy to talk at cross-purposes in e-mail, isn't it. We each think the other is working from the same basic assumptions as we are, and sometimes it's not true.

[From Rupert Young (2017.12.07 20.40)]

Well, my initial question was "[are they] using "prediction" in place of "reference"?" in the context of generative language frameworks.

Yes, that question is what got me to read the paper "Active interoceptive inference and the emotional brain", by Anil K. Seth and Karl J. Friston, Phil. Trans. R. Soc. B 371: 20160007, http://dx.doi.org/10.1098/rstb.2016.0007, which is where I got the basic structure of my diagram. I'm not sure, but I remember that Warren posted a link to it. My answer now to that question is "If they are, they are not doing so intentionally." Prediction is an expectation of what ought to be seen if everything has been correctly computed, whereas reference is what is forced to be seen. But the circuit connection is the same.

It was you that brought up PreCoT and sent me the message, with the diagrams, which was a comparison of PerCoT and PreCoT. So, you may understand why I thought the diagram was about PreCoT. And, following from that, the "it" in my sentence referred to PreCoT as I thought you had changed the discussion to that.

Sorry about being opaque about it. I had assumed I had been clear that all the diagrams were PerCoT diagrams.

Now I am somewhat confused as to what the "Slatestarcodex" message had to do with my initial query.

I haven't looked at the whole thing, but the link brings you to a long discussion of PreCoT, from which I learned quite a bit about what people think it does and can do. Warren and Rick entered the discussion to compare it (unfavourably) with PerCoT, which makes it potentially useful if you want to understand a little about PreCoT and how it differs from PerCoT. I got the impression that the whole blog might be on PreCoT, but I never tested that hypothesis.

Anyway. Maybe Warren can shed light on the discussion.

I hope so. I'm more interested in PerCoT.

Martin

[From Rupert Young (2017.12.08 10.30)]

[Martin Taylor 2017.12.07.17.31]

It's all too easy to talk at cross-purposes in e-mail, isn't it. We each think the other is working from the same basic assumptions as we are, and sometimes it's not true.

The curse of CSGNET!

Prediction is an expectation of what ought to be seen if everything has been correctly computed, whereas reference is what is forced to be seen. But the circuit connection is the same.

I presume you mean that the labels of the endpoints of the connection are the same, but that the actual functions at the endpoints, which produce the signals that travel along the connection, are not. And that PreCoT requires more complex functions than PerCoT.

I hope so. I'm more interested in PerCoT.

Me too, but I'd like to implement models of both for a particular problem so we (I) can understand better what the differences are.

Rupert