Are goals predictions?

[From Rupert Young (2013.11.17 17.30 UT)]

Following on from Erling's and my other posts on Hawkins' book, here is
a striking passage from it (p89, attached).

" "Prediction" means that the neurons involved in sensing
your door become active in advance of them actually receiving
sensory input. When the sensory input does arrive, it is compared
with what was expected. As you approach the door, your
cortex is forming a slew of predictions based on past experience.
As you reach out, it predicts what you will feel on your
fingers, when you will feel the door, and at what angle your
joints will be when you actually touch the door. As you start to
push the door open, your cortex predicts how much resistance
the door will offer and how it will sound. When your predictions
are all met, you'll walk through the door without consciously
knowing these predictions were verified. But if your
expectations about the door are violated, the error will cause
you to take notice. Correct predictions result in understanding.
The door is normal. Incorrect predictions result in confusion
and prompt you to pay attention. The door latch is not where
it's supposed to be. The door is too light. The door is off center.
The texture of the knob is wrong. We are making continuous
low-level predictions in parallel across all our senses"

This could almost be a description of goals (references) from a PCT
perspective, particularly given the sentence "When the sensory input
does arrive, it is compared with what was expected."

So, is the difference (between goals and predictions) just one of
semantics, or is there a fundamental reason why references should not be
thought of as predictions?

PCT is clear in the behavioural situation of achieving an existing goal,
but, for me, it is not so obvious how PCT addresses the situation where
perception precedes the goal. For example, thinking back to the chapter
on sequences, it is clear how control works when wanting to speak the
word "juice". But what about when reading something, when you don't have
a reference for the next word that will be perceived (because you don't
know what it is yet)? I can see how a theory of "prediction" would have
a plausible explanation for this (probability of next word based upon
context), but how can such comprehension be framed in PCT terms?
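One way to make the PCT side of the question concrete is a minimal simulated control loop. This is only an illustrative sketch, not anything from Hawkins or B:CP; the function name, gain, slowing factor, and disturbance values are all invented for the demonstration. The point is that the reference specifies what the perception should be, rather than forecasting what it will be:

```python
# Minimal PCT control loop: the reference r is a specification for the
# perception p, not a forecast of it. Gain, slowing factor, disturbance
# values, and step count are all invented for this illustration.

def run_loop(reference, disturbance, gain=500.0, slowing=0.002, steps=200):
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance            # feedback path + disturbance
        error = reference - perception               # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output
    return perception

# Whatever the disturbance, the perception is brought near the reference
# by continuous error correction, with no forecasting anywhere in the loop.
print(run_loop(reference=10.0, disturbance=3.0))
print(run_loop(reference=10.0, disturbance=-7.0))
```

Nothing in the loop anticipates the disturbance; the match between perception and reference is produced by ongoing error correction, which is one PCT reading of "comparison with what was expected".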

p88-89.pdf (363 KB)


--

Regards,
Rupert

[From: Richard Pfau (2013.11.19 1622 EST)]

Regarding: [From Rupert Young (2013.11.17 17.30 UT)]

Could Hawkins’ use of the term “prediction” be considered instead as an example of “priming”?

For example, rather than the Hawkins statement “We are making continuous low-level predictions in parallel across all our senses.” (On Intelligence, p. 89), we could say “Our existing neural networks are continuously being primed in parallel across all our senses.”

That is, (a) our environment through our senses is continuously priming neural networks that have developed in the past based on experience in similar contexts and situations, such that (b) when sensory impulses are received that are quite different from the neural networks primed, (c) what we in PCT call an error signal is produced that (d) may stimulate action aimed at reducing the difference/error signal.

Personally, at this point, I am more comfortable thinking of “priming” occurring than Hawkins’ “prediction” – perhaps because of the cultural baggage involved. And so, instead of Hawkins’ statement “When listening to a song, you are predicting the next note”, I would say “When listening to a song, you are primed (i.e., stimulated) to experience the next note.”

Similarly, in each of his “key properties of HTM prediction” mentioned in Numenta’s paper “Hierarchical Temporal Memory” [Version 0.2.1, September 12, 2011 – referred to by Erling Jorgensen (2013.11.18 0015EST)], the term “prediction” can easily be replaced by the term and concept of “priming”. And so, “1) Prediction is continuous” can be rephrased “1) Priming is continuous”; “2) Prediction occurs in every region at every level of the hierarchy” can be restated “2) Priming occurs in every region at every level of the hierarchy”; “3) Predictions are context sensitive” can be restated “3) Priming is context sensitive”; and so on, per his other key properties on pp. 16-17.

Relatedly, thinking of priming as a way of stimulating neural networks associated with reference signals seems consistent with Powers’ statement [referred to by Erling Jorgensen (2013.11.17 2305EDT)] that “We will assume from now on that all reference signals are retrieved recordings of past perceptual signals” (B:CP, 2005, p. 219; 1973, p. 217) – with priming being a way of stimulating/retrieving recordings of past perceptual signals and/or sensitizing the neural networks associated with those past signals so that they are stimulated more easily.

With Regards,

Richard Pfau


[From: Shannon Williams (2013.11.19 1642 CST)]

Regarding: [From: Richard Pfau (2013.11.19 1622 EST)]

Beautifully stated!

When listening to a song, you are primed (i.e., stimulated) to experience the next note.

This priming can be extended to expected behavior from other individuals. It describes so much!!


[From Erling Jorgensen (2013.11.21 1200EST)]

Richard Pfau (2013.11.19 1622 EST) --
Could Hawkins use of the term "prediction" be considered instead as
examples of "priming"?

Priming may be a helpful analogy, as long as we consider it as priming from
above, rather than priming from below. To think of it as priming from above
may come close to Powers' notion of a fragment of a past perception serving
as an address signal for _retrieving the rest_ of a constellation of
perceptions.

I am uncomfortable with how you describe priming, because it seems to put
the agency back out in the environment. For instance, your description:

That is, (a) our environment through our senses is continuously priming
neural networks that have developed in the past based on experience in
similar contexts and situations, such that (b) when sensory impulses are
received that are quite different from the neural networks primed, (c)
what we in PCT call an error signal is produced that (d) may stimulate
action aimed at reducing the difference/error signal.

This certainly sounds like neural networks getting primed by the
environment & then being compared to further sensory input from the
environment. I believe this loses the idea of goals or references being
generated from within or from above.

I do think priming from above captures some of the flavor of addressing
that Powers was trying to explain in talking about reference signals.

All the best,
Erling

[From: Richard Pfau (2013.11.22 1043 EST)]

Ref: [From Erling Jorgensen (2013.11.21 1200EST)]

>I do think priming from above captures some of the flavor of addressing
>that Powers was trying to explain in talking about reference signals.

Yes, priming from “above” (e.g., via active goals) also seems to occur as well as priming from “below” (by sensing the environment) – for example with priming from above making one more sensitive to perceptual signals related to a goal that one is consciously doing things to achieve (i.e., to reduce the error signals involved).

For those interested, such top-down and bottom-up priming seems consistent with the thinking of Lord and Levy who state that “we propose that instantiating a referent in a control system activates related cognitive structures.” They also suggest that “the pursuit of one referent directly inhibits the activation of competing referents and their associated knowledge.” Regarding such negative priming, they state that “We maintain that this capacity to inhibit competing cognitions is the primary mechanism by which one maintains focus on a particular task, problem, or line of thought until it is completed” (Robert G. Lord and Paul E. Levy, “Moving from Cognition to Action: A Control Theory Perspective,” Applied Psychology: An International Review, 43 (3), 1994, p. 350).

With Regards,

Richard Pfau


[From Rick Marken (2013.11.22.0850)]

Sorry I haven’t been engaged much in this thread but that’s because it doesn’t seem to have much to do with PCT. The answer to the title question “Are goals predictions?” is simple: no. Goals (reference signals in PCT) are specifications for the states of perceptual variables.

As far as “priming” goes, I think that’s a concept best left in the closet containing other animistic concepts, like “affordance”, that just get in the way of explanations based on working models.

Best regards

Rick


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rupert Young (2013.11.23 18.00 UT)]

Rick Marken (2013.11.22.0850) --

RM: Sorry I haven't been engaged much in this thread but that's because it doesn't seem to have much to do with PCT. The answer to the title question "Are goals predictions?" is simple: no. Goals (reference signals in PCT) are specifications for the states of perceptual variables.

If I had asked the question "are predictions goals", you would, I predict, have answered in the affirmative. That is, that the way the term "prediction" is often used is better explained by the term "goal", and that there is no actual prediction going on.

This is all relevant to PCT, I would say, if one is interested in promoting and explaining PCT, which, unfortunately, means having to explain why the all-pervading acceptance of the concept of prediction is invalid.

Part of the problem, I think, is in the vague definition of, and loose usage of, the term prediction, as demonstrated by my usage of "predict" in my first sentence, when I meant "guess".

However, for me, it would be useful to understand how control systems operate in different behavioural situations and exactly why prediction is the wrong way of thinking. Here are a few different situations:
1. something changes in your peripheral vision and you move your head to look at it
2. while driving you control your position on the road
3. in an art gallery you move your position until you can recognise what is in a picture
Now, the first two represent situations where a reference level pre-exists and you act to minimise error. But there is a difference between the two. #1, it seems, is an (unconscious) system that is always "active". #2 is only temporarily active, when required; not when sitting on the couch. But what can "inactive" mean? Is the reference signal zero? Or do all the control-system components have to be switched off? Neither of these requires the concept of prediction.
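One hedged way to picture the "active vs inactive" question is to keep the loop wired up but gate its output gain: with gain zero the system still perceives, yet produces no action. The class, numbers, and the gain-gating idea itself are my assumptions for the sake of illustration, not settled PCT:

```python
# Hedged sketch of "active" vs "inactive" control systems: the loop stays
# wired, but the output gain is gated. gain = 0 leaves perception intact
# while producing no action. Classes and numbers are invented for this
# illustration, not taken from B:CP.

class ControlSystem:
    def __init__(self, reference, gain):
        self.reference = reference
        self.gain = gain              # 0.0 models an "inactive" system
        self.output = 0.0

    def step(self, disturbance, slowing=0.002):
        perception = self.output + disturbance           # input function
        error = self.reference - perception              # comparator
        self.output += slowing * (self.gain * error - self.output)
        return perception

active = ControlSystem(reference=5.0, gain=500.0)
inactive = ControlSystem(reference=5.0, gain=0.0)
for _ in range(500):
    p_active = active.step(disturbance=2.0)
    p_inactive = inactive.step(disturbance=2.0)

print(p_active)    # near the reference, 5.0
print(p_inactive)  # just the disturbance, 2.0
```

On this sketch, "inactive" need not mean the components are switched off: the system keeps computing a perception and an error, but the error produces no output.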

The third concerns comprehension, and is similar to reading text or viewing random-dot stereograms, in that we don't know what is to be perceived until we perceive it, so there is no goal (reference) specific to that perception. Does this mean that a control system for this perception becomes activated? Or just the input function? Is a full control system necessary for comprehension? In this last situation, particularly, prediction is invoked as being necessary for efficiency by pruning the possibilities.

My thinking on some of this is somewhat confused, and I welcome any insights to restore clarity.

Regards,
Rupert

I think priming is a better term than prediction, but, I guess, the
question is whether it adds anything to the standard terminology. I
am not sure it does, though I am interested in the concept of
“activation” of control systems which seems to be captured by the
use of “priming”. In my response to Rick I do query whether systems can be thought of
as active or inactive.
The above paper looks interesting; is it available online?


Regards,
Rupert

[Martin Taylor 2013.11.23.14.40]

Mathematically, it is not invalid, so you would have a bit of a
problem explaining why it is invalid in PCT, even though PCT is a
valid description of how biological nature works. If the equivalent
bandwidth of the disturbance is W, a certain amount of prediction is
valid out to a time T = 1/(2W) in the future.

Now if you want to say that "prediction" means "precisely describing
some future state with no possibility of error", then, of course,
that kind of prediction is impossible. By “prediction”, I mean “to
specify the future state of a variable more precisely than can be
done by knowing its statistical properties averaged over a time long
compared to its rate of variation”. Quantitative specification of
this kind of prediction depends on the statistics of the variable
being predicted.
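Reading the formula as T = 1/(2W) with W in hertz, a quick numeric sketch shows the scaling; the sample bandwidths below are arbitrary:

```python
# Prediction horizon T = 1/(2W) for a disturbance of equivalent
# bandwidth W, per Martin's formula. Example bandwidths are arbitrary.

def prediction_horizon(bandwidth_hz):
    """Time (s) out to which prediction can beat long-run statistics."""
    return 1.0 / (2.0 * bandwidth_hz)

for w_hz in (0.5, 2.0, 10.0):
    print(f"W = {w_hz:4.1f} Hz -> T = {prediction_horizon(w_hz):.3f} s")
# A slowly varying disturbance (small W) is predictable further ahead
# than a rapidly varying one (large W).
```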

Martin

[From Adam Matic 2013.11.17]

On Intelligence is very interesting. There have been some improvements since that book was published; you will probably be interested in this paper: http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf and other resources from http://numenta.org/

There are also a bunch of videos on their youtube channel: http://www.youtube.com/user/numenta

From what I’ve understood, hierarchical temporal memory (HTM) systems, which are an application of Jeff’s theories, are good at categorizing objects. He might be talking about motor behavior in examples, but HTMs recognize sequences and that is it. They don’t “behave”. From the paper linked above:

"We expect that a motor output could be added to each HTM region within the
currently existing framework since generating motor commands is similar to
making predictions. However, all the implementations of HTMs to date have been
purely sensory, without a motor component"

Even without the key concept of feedback control, I think he is onto something. Input functions, as I understand PCT, are essentially input-output systems, and HTMs can be placed in a closed loop. Some parts of HTM theory might be compatible with PCT. I think even his neuron models are not the digital on-off neurons, but analog ones, but I haven’t seen any code for them, so I can’t say for sure.

About prediction - I think prediction is a side effect of categorization in the input functions. If you are hearing a sequence you have heard before, say a song or a word, then it might feel like you are ‘predicting’ what you will hear next, but you are just remembering what has happened before. It’s not prediction from complex formulas, and I don’t think Hawkins differentiates between ways of predicting.
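A toy sketch of that idea (my own construction, not Hawkins' HTM algorithm): "predicting" the next element of a familiar sequence can be implemented as nothing more than recalling what most often followed the current element in past experience. The class name and training sequence are invented for the demo:

```python
# Toy "prediction as remembering": the next item in a familiar sequence
# is whatever most often followed the current item before. This is a
# deliberate simplification, not Hawkins' HTM.

from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        self.next_after = defaultdict(list)   # item -> past successors

    def learn(self, sequence):
        for a, b in zip(sequence, sequence[1:]):
            self.next_after[a].append(b)

    def recall_next(self, item):
        """Most frequent past successor of `item`, or None if unseen."""
        followers = self.next_after.get(item)
        if not followers:
            return None
        return max(set(followers), key=followers.count)

memory = SequenceMemory()
memory.learn(list("the cat sat on the mat"))
print(memory.recall_next("a"))   # -> 't': every past "a" was followed by "t"
print(memory.recall_next("z"))   # -> None: no memory, no "prediction"
```

The felt "prediction" here is just retrieval; there is no forward model or formula anywhere in the code.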

I’m not sure these memories need to enter the loop as reference signals. Perhaps they are structural, in the connections between neurons, perhaps related to the reorganization system. Would be nice to make models that do things like that…


Adam

[From Matti Kolu (2013.11.17.2340 CET)]

Adam Matic 2013.11.17 --

I think even his neuron models are
not the digital on-off neurons, but analog ones, but I haven't seen any code
for them, so I can't say for sure.

I don't know anything about the underlying implementation, but he
specifically mentions that "it's not a binary thing" when he talks
about error/anomaly detection in this video, at about 51:40 and
onwards:
http://www.youtube.com/watch?v=J33B-tEtPjA

Video reference: Series: "UC Berkeley Graduate Council Lectures"
[12/2012] [Science] [Show ID: 24412], published on Youtube by
"University of California Television (UCTV)", with the title
"Intelligence and Machines: Creating Intelligent Machines by Modeling
the Brain with Jeff Hawkins".

Matti

[From Erling Jorgensen (2013.11.17 2305EDT)]

Rupert Young (2013.11.17 17.30 UT)

You're right, Rupert. The passage on "prediction" that you quote from p.89
of Jeff Hawkins' book _On Intelligence_ is very striking. As you say:

This could almost be a description of goals (references) from a PCT
perspective, particularly given the sentence "When the sensory input
does arrive, it is compared with what was expected."

Passages like this keep me thinking that Hawkins' approach can be
compatible with PCT, despite what he does when talking about things like
motor commands. I almost wonder (with you) whether we should think "goals"
when Hawkins says "predictions."

I even inserted a margin correction in my copy of Hawkins' book, on the
very next page from what you cite. On page 90, he quotes Rodolpho Llinas
as making the following statement: "The capacity to *predict* the outcome
of future events--critical to successful movement--is, most likely, the
ultimate and most common of all global brain functions" (emphasis added).

Notice how the passage shifts if we replaced the one word *predict* and
inserted instead the word *control* (robustly defined, as per Bill Powers'
B:CP, 1st ed., p. 283).

You ask:

PCT is clear in the behavioural situation of achieving an existing goal, but, for me, it is not so obvious how PCT addresses the situation where perception precedes the goal.

I find it helpful here to recall what Powers says about references
being "retrieved recordings of past perceptual signals" (B:CP, 1st ed., p.
217). If we're looking to control a perception to be a value different
than the current one, it seems we would draw upon a past perception to
become the preferred value.

In the situation you raise, what if we simply make the reference a pass-
through, to accept the value of the perception we are currently receiving?
I think this is what Bill means by his "Passive Observation Mode" (p.220),
although I get a little confused by his notion of "switches." He does note
that this is one of the standard ways that we learn many things, utilizing
the ability to record perceptions in memory. To then switch to a control
mode becomes a matter of addressing the perception from memory, as a
reference for understanding something different from the current stream of
perception.
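Erling's "pass-through" reading of Passive Observation Mode can be put into a toy simulation. This is only a sketch of the idea (the function names, gain values, and the mode "switch" here are my own hypothetical choices, not Powers' model): while observing, the reference simply tracks and records the current perception, so error stays at zero; switching to control mode replays a recorded perception as the goal.

```python
# Toy sketch (hypothetical, not from B:CP): a one-level loop whose reference
# either passes the current perception through (passive observation, which
# also records it to memory) or replays a remembered perception as a goal.

def run(mode_switches, disturbances, gain=5.0, dt=0.1):
    """Simulate a leaky-integrator controller; returns perception history."""
    memory = []          # recorded past perceptual signals
    output = 0.0
    history = []
    for mode, d in zip(mode_switches, disturbances):
        perception = output + d          # environment feedback + disturbance
        if mode == "observe":
            reference = perception       # pass-through: zero error, just watch
            memory.append(perception)    # record for later use as a goal
        else:                            # "control": replay a memory as goal
            reference = memory[-1]
        error = reference - perception
        output += gain * error * dt      # integrate error into action
        history.append(perception)
    return history
```

In a run that observes for a while and then switches to control under a changed disturbance, the error is zero throughout the observation phase, and afterwards the loop drives the perception back toward the last recorded value despite the disturbance.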

To get back to your key question:

So, is the difference (between goals and predictions) just one of semantics, or is there a fundamental reason why references should not be thought of as predictions?

It seems the notion of "retrieving from memory" could be a bridge concept,
between "goals" and "predictions." A prediction has to begin by retrieving
something from memory, and perhaps it then gets utilized or implemented as
a goal in the PCT sense of the word. I haven't worked through Hawkins'
multitude of citations of "prediction" to see if this idea would work under
his schema.

All the best,

Erling

[From Erling Jorgensen (2013.11.17 2355EST)]

Matti Kolu 2013.11.17.2340 CET

Regarding the question of Hawkins' neuron models...

I don't know anything about the underlying implementation, but he
specifically mentions that "it's not a binary thing" when he talks
about error/anomaly detection in this video, at about 51:40 and
onwards:
http://www.youtube.com/watch?v=J33B-tEtPjA

Video reference: Series: "UC Berkeley Graduate Council Lectures"

I tried to sample my way through another of Hawkins' video presentations:
<http://www.youtube.com/watch?v=48r-IeYOvG4> given at the University of
British Columbia - Vancouver, in March 2010. It was entitled, "Hierarchical
Temporal Memory: How a theory of neocortex may lead to truly intelligent
machines." I would drag the scroll bar ahead to whenever the PowerPoint
slide changed, & then listen to a portion to see if I could get the gist of
what he was proposing.

I remember him saying, perhaps during the Question & Answer session at the
end, that they did not run the model at the level of simulated neurons, but
seemingly via software that reproduced the functional characteristics that
they had ascribed to how the layers of neurons were functioning.

Seems like several of us are intrigued by what he is doing, although I
sometimes find it hard going to understand all the aspects.

All the best,

Erling

[From Erling Jorgensen (2013.11.18 0015EST)]

Adam Matic 2013.11.17

On Intelligence is very interesting. There have been some improvements since that book was published, you will probably be interested in this paper: http://numenta.org/resources/HTM_CorticalLearningAlgorithms.pdf ...

Thanks, Adam, for the link to the paper you mentioned to Rupert. There's a
lot to digest there, but I agree, it seems the folks at Numenta are
expanding quite a bit on what Hawkins initially presented in the 2004
book.

From what I've understood, hierarchical temporal memory (HTM) systems, which are an application of Jeff's theories, are good at categorizing objects. He might be talking about motor behavior in examples, but HTMs recognize sequences and that is it. They don't "behave".

This is my sense, too. This is why I think they may fit our way of
thinking about "perceptual input functions," as a part of a closed loop.
The 3/2010 presentation that I mentioned in replying to Matti seemed to
offer three layers of perceptual processing:

a) Learning common spatial patterns, which they called a "Spatial Pooler."
b) Learning common sequences, which they called "Sequence Memory."
c) Forming stable representations of sequences, which they call a "Temporal Pooler."

While it may be a superficial resemblance, I can't help but notice a
conceptual similarity to Bill Powers' chapters 9, 10, & 11, in Behavior:
The Control of Perception, with its treatment of Configurations,
Transitions, and Sequences, respectively.

I appreciate your reflection about prediction, and whether "remembering" is
a better way to understand it:

If you are hearing a sequence you have heard before, say a song or a word, then it might feel like you are 'predicting' what you will hear next, but you are just remembering what has happened before. It's not prediction from complex formulas

I, too, would like to see whether the Hierarchical Temporal Memory models
could be placed within a closed loop control schema.
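Adam's "prediction as remembering" point can be made concrete with a toy sequence memory. To be clear, this is my own minimal sketch (a first-order transition table), not Numenta's actual HTM algorithm:

```python
# Minimal sketch of "prediction as remembering" (NOT the Numenta HTM
# algorithm): a first-order transition table that, after hearing a
# sequence, "predicts" the next element simply by recalling what has
# followed the current element before.

from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(set)  # element -> elements seen next
        self.previous = None

    def hear(self, element):
        """Store the transition from the previous element, then advance."""
        if self.previous is not None:
            self.transitions[self.previous].add(element)
        self.previous = element

    def predict(self, element):
        """Recall which elements have followed `element` before."""
        return self.transitions[element]

m = SequenceMemory()
for note in "do-re-mi-do-re-mi".split("-"):
    m.hear(note)
```

After hearing "do re mi do re mi", recalling what followed "do" yields {"re"}: the "prediction" is nothing but retrieval of a stored transition, with no formula involved.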

All the best,

Erling


178-179.pdf (357 KB)

···

[From Rupert Young (2013.12.01 21.00 UT)]

[Martin Taylor 2013.11.23.14.40]

Mathematically, it is not invalid, so you would have a bit of a problem explaining why it is invalid in PCT, even though PCT is a valid description of how biological nature works. If the equivalent bandwidth of the disturbance is W, a certain amount of prediction is valid out to a time T = 1/(2W) in the future.

Now if you want to say that "prediction" means "precisely describing some future state with no possibility of error", then, of course, that kind of prediction is impossible. By "prediction", I mean "to specify the future state of a variable more precisely than can be done by knowing its statistical properties averaged over a time long compared to its rate of variation". Quantitative specification of this kind of prediction depends on the statistics of the variable being predicted.

[From Rupert Young (2013.11.23 18.00 UT)]

This is all relevant to PCT, I would say, if one is interested in promoting and explaining PCT, which, unfortunately, means having to explain why the all-pervading acceptance of the concept of prediction is invalid.
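Martin's bandwidth figure can be illustrated numerically. The following is only a sketch under my own assumptions (a single sinusoid at W as a stand-in for a band-limited disturbance, and naive linear extrapolation as the predictor), not anything from Martin's post: prediction error stays small for lead times well inside T = 1/(2W), and the prediction becomes useless at that horizon.

```python
import math

# Hypothetical illustration: a band-limited disturbance (one sinusoid at
# W = 0.5 Hz) is smooth enough that linear extrapolation from two recent
# samples stays accurate for lead times short compared with
# T = 1/(2W) = 1 s, and degrades badly at that horizon.

W = 0.5                      # bandwidth in Hz
def disturbance(t):          # band-limited signal: no energy above W
    return math.sin(2 * math.pi * W * t)

def extrapolate(t_now, lead, dt=0.01):
    """Predict disturbance(t_now + lead) by linear extrapolation."""
    slope = (disturbance(t_now) - disturbance(t_now - dt)) / dt
    return disturbance(t_now) + slope * lead

def worst_error(lead, steps=200):
    """Worst prediction error over several cycles of the disturbance."""
    return max(abs(extrapolate(i * 0.05, lead) - disturbance(i * 0.05 + lead))
               for i in range(steps))

short = worst_error(0.1)   # lead time well inside 1/(2W)
long_ = worst_error(1.0)   # lead time at 1/(2W)
```

With these numbers the worst error at a 0.1 s lead is a few percent of the signal amplitude, while at a 1 s lead it exceeds the amplitude itself, so the "prediction" conveys nothing beyond the signal's statistics.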

Yes, the issue is what is meant by prediction. Btw, how are you defining it in the former?

The problem with Hawkins' book is that he doesn't give a definition of what he means by prediction, and uses the term in such a loose way that it is rendered virtually meaningless.

He even uses it to explain the behaviour of an E. coli-type organism, and of trees (attached):

"Imagine a one-cell animal living in a pond. The cell has a flagellum that lets it swim. On the surface of the cell are molecules that detect the presence of nutrients. Since not all areas of the pond have the same concentration of nutrients, there is a gradual change in value, or gradient, of nutrients from one side of the cell to the other. As it swims across the pond, the cell can detect the shift. This is a simple form of structure in the world of the one-cell animal. The cell exploits its chemical awareness by swimming toward places with higher concentrations of nutrients. We could say that this simple organism is making a prediction. It is predicting that by swimming in a certain way it will find more nutrients. Is there memory involved in this prediction? Yes, there is. The memory is in the DNA of the organism. The one-cell animal did not learn, in its lifetime, how to exploit this gradient. Rather, the learning occurred over evolutionary time and is stored in the animal's DNA. If the structure of the world changed suddenly, this particular one-cell animal could not learn to adapt. It could not alter its DNA or the resulting behavior. For this species, learning can occur only through evolutionary processes over many generations."

"A tree makes a prediction when it sends its roots down into the soil and its branches and leaves up toward the sky. The tree is predicting where it will find water and minerals based on the experience of its ancestors."

As I said previously: insane!

Incidentally, there is only one brief mention of goals in the whole book, where he says, "Goal-oriented behavior is the holy grail of robotics. It is built into the fabric of the cortex." But he provides no explanation of how it is built in, what he means by it, or how it fits with his own theories.

Regards,

Rupert
···

On Sunday, December 1, 2013, Rupert Young wrote:

Richard S. Marken PhD
www.mindreadings.com
The only thing that will redeem mankind is cooperation.

                                               -- Bertrand Russell