World Model (was Re: What I'm controlling for...)

  • we can imagine things that we have never perceived before.

We cannot perceive things that we’ve never perceived before… consider the experiment with the cats grown in controlled visual environments. Can we imagine something that we’ve never perceived before? I doubt it.

The thing about imagination is that it’s a double-edged sword. On the one hand, imagination might just be a waste of time. If you’re only imagining perceptions a la carte (e.g., a unicorn), then this has no influence on any environmental variable whatsoever. So there’s no CEV. But then there’s math, which is all about language (codes). With math, the CEV crosses into the mind.

···

Martin Taylor (2015.05.06.15.18)–

MT: Anyway, as I have said many times, I previously thought of the “World Model” as a consequence of the existing theory, not as an addendum requiring a new construct. To me, it was just the way the imagination connection would have to work. The only new thing was that Bill in 1993 hadn’t seen it that way, but in 2010 he seemed to be coming around to this way of looking at it (though Rick disagrees).

RM: What I disagreed with was the idea that the quote from Bill that Boris posted had anything to do with imagination. I don’t know what you mean when you say that you thought the “World Model” was “just the way the imagination connection would have to work”. The only thing I know about the way the imagination connection works is as it’s described in B:CP (Figure 15.3). The imagination connection is the state of two switches which, when thrown, connect the descending reference (from memory) to the ascending perceptual signal. But Bill never came up with any proposals regarding the mechanisms involved in “throwing the switches” – why they are thrown (and un-thrown). This seems like the kind of mechanism that is needed to explain Rupert’s observation – that we can imagine things that we have never perceived before. A unicorn isn’t the best example of this, perhaps, because most people have perceived them in drawings and cartoons; but the first person to come up with the idea of a unicorn certainly had to construct it, using existing perceptual functions, via imagination. There is nothing currently in PCT that explains how we do this – how we construct imagined perceptions that we have never perceived before. This must involve combining imagined versions of perceptions that we have had – like horses and rhinoceroses – but there is nothing in PCT that says how this is done. Your concept of a World Model may be this mechanism if, as you say, the World Model is the way the imagination connection works. If you do propose a mechanism to add to the PCT model that explains the workings of the imagination connection, it would be great if you could also propose a way of testing that proposal. That would make it more interesting to me, given my interest in research.
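A minimal sketch of that two-switch idea, purely for illustration (the names and the single scalar signals are invented here; this is not Bill’s Figure 15.3 or working PCT code):

    # One iteration of a single control unit with an "imagination connection".
    # When the switches are thrown, the descending reference (from memory) is
    # routed back up as the perceptual signal and nothing is sent down to act
    # on the world; otherwise the unit controls normally.
    def control_unit(reference, world_input, perceive, imagination_on, gain=1.0):
        if imagination_on:
            perception = reference                     # reference replayed as perception
            output = 0.0                               # no output reaches the environment
        else:
            perception = perceive(world_input)         # ascending perceptual signal
            output = gain * (reference - perception)   # error drives descending output
        return perception, output

Note that the sketch leaves open exactly the question raised above: nothing in it says when or why imagination_on gets thrown.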

Best

Rick


Richard S. Marken

www.mindreadings.com
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

[From Rick Marken (2015.05.11.1015)]

···

On Sat, May 9, 2015 at 12:54 PM, PHILIP JERAIR YERANOSIAN pyeranos@ucla.edu wrote:

RM: - we can imagine things that we have never perceived before.

PY: We cannot perceive things that we’ve never perceived before…consider the experiment with the cats grown in controlled visual environments.

RM: This is a good example of why it’s hard to deal with a model only in terms of words. Saying we “cannot perceive things we’ve never perceived before” is both true and false in terms of the PCT model. To understand why, you first have to know that perceptions in PCT are perceptual variables. What we are perceiving at any moment are the current states of many different perceptual variables. So, for example, as I write I am listening to music and hearing (among other things) sequences of notes – musical phrases. Each phrase is a different state of a perceptual variable (according to PCT) – a sequence variable in the PCT hierarchy. Some of these phrases – some of these states of the sequence perceptual variable – I have heard before but some I have not. But I am still able to perceive the never-before-heard phrases in the sense that I can experience them as musical phrases – specific states of a sequence perception. So I can perceive what I have never perceived before presumably because I have developed a perceptual function that produces a perceptual signal – a sequence perceptual variable – that varies along with variations in the note (acoustical frequency) sequence aspect of the environment.
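To make that concrete, here is a toy version of the idea (the names and weights are invented; this is not code from any PCT demo): a “sequence” perceptual function whose output is defined even for phrases the listener has never heard before.

    # A crude sequence perceptual function: it collapses the most recent run of
    # notes into one number, a stand-in for a scalar perceptual signal. Any
    # phrase, familiar or novel, yields some value of this variable.
    def sequence_perception(notes, weights=(8.0, 4.0, 2.0, 1.0)):
        recent = notes[-len(weights):]
        return sum(w * n for w, n in zip(weights, recent))

    print(sequence_perception([60, 62, 64, 65, 67]))   # 949.0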

RM: But your example of the cats grown in a controlled visual environment is a situation where it does make sense to say that we “cannot perceive things we’ve never perceived before”. Cats raised in such an environment apparently do not develop the perceptual functions that allow them to perceive certain aspects of the environment. Actually, there is a good example of this in human language acquisition. Apparently, the Japanese language does not make use of the phonemic distinction between what we hear as “R” and “L”. These two phonemes are two states of a perceptual variable. Since this R/L distinction is not used in the Japanese language, native speakers of Japanese apparently don’t develop the perceptual function that allows them to make this distinction. So adult native speakers of Japanese cannot perceive the R/L distinction that we make in English; they truly cannot perceive what they have not perceived before.

PY: Can we imagine something that we’ve never perceived before? I doubt it.

RM: But we know that people unquestionably can imagine what they have never perceived before. Artists, composers, scientists, and writers do this all the time. No one had perceived the 9th symphony before Beethoven imagined it. His imagination was clearly constructed from elements that he had perceived before (like the folk theme of the final Choral movement), but no one had ever before imagined anything like the combination of perceptions that we hear as the 9th. Rupert’s question about the unicorn was, to me, a question about how novel imaginings like the 9th – creations based on what we have perceived before – are created. And I have no idea. There is no mechanism proposed in PCT for how this works so this is open theoretical territory. But there is no question that we can imagine what we have never perceived before, sometimes for good (as in the creation of great works of art) and sometimes for ill (as in the creation of fear through demagoguery).

Best

Rick


I think there is a fundamental problem with viewing perceptions as a
“model of the current state of the world”. A perception is the result of a function of a set of inputs (which
may have a basis in the outer world), and the structure of the
perceptual systems has reorganised such that intrinsic goals are
realised. I hope you would agree that this is, in simple terms, a
reasonable description of the nature of perceptions in the context
of PCT. Also that perceptions are new creations, in that the signals
have never existed in the outer world; that without perceptual
functions perceptions do not exist.
It does not follow from this that any perception, or the complete set, is a model of the state of the current world. Although I realise that with some (low level) perceptions it might appear that this is the case, I think it is a misconception of what perceptions are, and is ascribing to them a conceptualisation which reaches beyond their inherent nature, and also leads to the erroneous concept of “World Models”. Perceptions as a “World View” might be closer to the mark, but even then this gives the wrong impression that they are representative of an objective real world.
That we can have perceptions (hunger, fear, arrogance) that are not
properties of the external world suggests that this is not the case.
How we conceptualise PCT is extremely important as erroneous conceptualisation can lead to erroneous modelling and implementation. In my view the concept of “World Models” is such a case, resulting in a misleading conceptualisation of PCT, and we should expunge it from use.
Perhaps you are using the term “World Model” to mean the (internal)
environment (“world”) in which it perceives and operates? But then
it wouldn’t be a model. Do you mean that we need an explicit world
model in addition to the existing structure of the hierarchy? What is an “explicit” world model if not a replica? Well, I think “World Model” is a misleading and unnecessary term.
Memories are models of perceptions, and to higher level systems it doesn’t make any difference (and they don’t know) whether a perception is real-time or from memory. So those systems operate in the same way and don’t need a separate process or “model” for offline control.
The difference is that one is external (and intended to be a
replica) and one internal (which you are saying is not a replica, I
think).
The simulation models we (Rick) build (the computer programs)
consist of two sub-models, one which models the perceptual system
and one which models the physical world (a “World Model”). The
latter is required to provide the feedback path and is an explicit
model of the external world. Both are replicas of what they model.
In my real world robots the latter (a “World Model”) is not required as the real world is available. You are saying, it appears, that real perceptual systems also contain a model of the physical world. Then I am not quite clear on what you are saying, or why you are
calling it a “World Model”; elsewhere you have said a model is “a
dynamic functioning system that produces results like those of the
thing modelled”, “The World Model you have built up by long
experience controlling perceptions in the real world does much the
same” and “Both [PCT models and world models] are models of the
dynamics.”
The way I see it is that we control perceptions in real time with a
particular structure that has developed to improve the quality of
control, ultimately so that intrinsic error is low; and those
perceptions can be complex, multitudinous and diverse; transitions,
sequences, programs etc. And those perceptions are signals that do
not exist as external world variables (though some may be functions
of such) and may not even “represent” things that do, have ever or
even could exist outside of perceptual functions. Thus calling it
(the structure?) a “World Model” doesn’t seem appropriate to me as
they are very different things and don’t work in the same way (at
least haven’t been shown to be so). My point was that perceptions don’t exist without perceptual
functions (perceivers) and are not properties of the external world.
They may even be functions of arbitrary totally unrelated elements
of the world such as the alignment of a lamppost in Timbuctoo and a
star a billion light years away. Perceptions are subjective
constructions that are not world properties.
Sounds like you are saying it is not a model of THE world then?
Rupert

···

[From Rupert Young (2015.05.13 21.30)]

([Martin Taylor 2015.05.06.15.18]

    Though perhaps you mean perceptions are models of the outer world?

  Not necessarily. I have said that I consider the complete set of perceptions at any moment to be a model of the current state of the world, but the question you raised in the previous iteration of this exchange has led me to restrict this, because some perceptions are clearly not of the outer world. You provide examples later in the message to which I am now responding.

  A “World Model” in the sense I have been using it is a process model, which, given the state of the world and some imagined action on it, produces the flow of perceptions that would (as imagined) occur in that world. My problem now is that I had previously presumed the “world” in question to be the one to which the organism had reorganized, but you made me realize that the modelled world could have any imagined properties, and if those properties don’t correspond to the ones for which we reorganized, then using the existing structure of the hierarchy won’t work, and we would need to entertain the possibility that an explicit world model can be built and retained and used somewhere in the brain. Maybe you would think of such a model as a “replica”; I wouldn’t, because I think “replica” has connotations that wouldn’t be appropriate.
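A rough sketch of that “process model” reading, for illustration only (the function names and the single-action interface are invented, not taken from the description above or from any PCT code):

    # Toy "World Model" as a process model: given a world state and an imagined
    # action, roll the imagined world forward and return the flow of perceptions
    # that would (as imagined) occur. No output acts on the real environment.
    def imagine(initial_state, imagined_action, transition, perceive, steps=10):
        perceptions = []
        state = initial_state
        for _ in range(steps):
            state = transition(state, imagined_action)   # imagined world dynamics
            perceptions.append(perceive(state))          # same perceptual functions
        return perceptions

    # Example: a door whose angle closes a little on each imagined push.
    door = {"angle": 90.0}
    print(imagine(door,
                  imagined_action=-10.0,
                  transition=lambda s, a: {"angle": max(0.0, s["angle"] + a)},
                  perceive=lambda s: s["angle"]))        # 80.0, 70.0, ... 0.0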

          But there is a replica in the sense that the imagined actions on the outer world result in the perceptions that those actions would produce if executed in the outer world (though usually on a much shorter time-scale).

        This, it seems to me, could be called a replica of perceptual control but not of the outer world. I could imagine myself closing a door and imagine actions that would result in a closed door. But these “imagined actions” themselves are perceptions (perceptions of actions). There is no need to invoke “World Models” to explain this.

      So how do you imagine a door closing so as to produce the same perception as a closing door would have, if you don’t have a model (which I claim to be in the form of all the connections you have reorganized as you have learned to control such things as a perception of a door closing)? What kind of magic produces this perception? A “World Model” produces perceptions.

    So does an associative memory system (already part of the theory), so why need a “World Model”?

  Don’t confuse implementation with function. Associative memory has been very much in my underlying thinking of how a World Model might be implemented.

        No, I am thinking of dynamic processes as in PCT. Again you appear to be talking about two different types of models here. External models where the purpose is to replicate internal dynamic processes. And internal models of external processes. I do not see the case for the latter.

      They aren’t different. When you actually control in a tracking experiment showing the task on a screen and using a mouse to move a cursor on screen, you don’t act randomly, flailing at the air, yelling and rolling your eyes before happening on the mouse and discovering that it influences the cursor and helps you control. You don’t reorganize to control every perception. You have done that already. You have a model inside you of what it takes to move that cursor on screen, and that model includes finding a mouse, holding it, and moving it. You don’t even have to imagine it, though you may. You are using a model you have already built.

    Sure, your system has reorganised to select the appropriate goals (perceptions), but I don’t see any support for the case that this constitutes a “World Model” in that it is replicating actual dynamic processes of the world.

  I’m not clear what you are getting at in this comment. The “World Model” you use in going straight to the appropriate control action at the many levels needed to select “mouse”, get your hand on it, and move it usefully doesn’t replicate the dynamic processes of the world. It produces the perceptions that you need in order to be using the mouse, and those perceptions do depend on the dynamics of the world.

        The property of “tasting like lemonade” is a property of the perceiving system, not of the external world. It can’t be said to model the world if there is nothing in the external world to model.

      Can you get the taste of lemonade from what’s in a glass from which you are drinking, if what is in the glass is gasoline? Can you really say there’s nothing in the properties of the glass contents that our perceptual functions convert from sensory input into the taste of lemonade? I think that’s a ridiculous proposition, or one that, if you are to be consistent about it, must apply to every one of our perceptions. There is nothing in the sensory input that corresponds to, say, colour, or even light intensity. To say that there is nothing in the external world corresponding to “perception X” is a solipsistic copout. And I say this knowing that Bill Powers made that claim.

    How about if all animals were suddenly to die, or if we went back a few billion years before animals existed: would you still say that there was a property that exists which corresponds to “perception X” (or any other perception)?

  Does a tree make a noise if it falls in an empty forest? The answer is the same. It depends on whether you are talking about imagined sound waves (or whatever environmental properties are involved) or on the existence of perceptual functions that create specific perceptions from those sound waves or property sets. Do you have the same “taste of lemonade” as I do? There’s no way I could tell. But I could do a Test for the Controlled Variable along the lines of the coin game, and try different chemical structures and/or material processing procedures, and see whether you said “Lemonade” for the same liquid as I would say had the taste of lemonade. I couldn’t do that with a fossilized animal, or even a dog.
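The logic of that Test, reduced to a bare-bones sketch (the helper names are invented placeholders; this is not the coin game procedure itself):

    # Apply disturbances to a candidate variable and check whether their
    # expected effects fail to show up, which is the core of the Test for the
    # Controlled Variable.
    def test_controlled_variable(read_variable, apply_disturbance, disturbances,
                                 tolerance=0.1):
        controlled = True
        for d in disturbances:
            expected_if_uncontrolled = read_variable() + d
            apply_disturbance(d)
            # if the disturbance shows up in full, the variable is not controlled
            if abs(read_variable() - expected_if_uncontrolled) < tolerance:
                controlled = False
        return controlled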

    Another thought; if a model is a model of the world then it could only produce things that the world could produce, so a perception-producing “world model” would need to account for perceptions that do not and could not exist in the real world, such as imaginary concepts and events, especially during dream time.

  Yes, that is exactly the problem I was posing in my last message, to which I have hearkened back a few times in this message.

    Whatever is producing these perceptions is not a model of the world,

  Really???

    but an associative memory system is quite consistent with this, I would say.

  Why the word “but”?

  Couldn’t you say: “Whatever is producing these perceptions is a model of a non-existent world, in the construction of which the actions of associative memories play a significant part”?

[Martin Taylor 2015.05.13.18.02]

  I think there is a fundamental problem with viewing perceptions as a “model of the current state of the world”.

What else could they possibly be? One perception is based on one function of the sensory consequences of the current state of the entire external world, is it not? How else could you describe the set of all perceptions based on the senses other than as a model of the world?

  A perception is the result of a function of a set of inputs (which may have a basis in the outer world), and the structure of the perceptual systems has reorganised such that intrinsic goals are realised. I hope you would agree that this is, in simple terms, a reasonable description of the nature of perceptions in the context of PCT. Also that perceptions are new creations, in that the signals have never existed in the outer world; that without perceptual functions perceptions do not exist.

No problem, except for a possible confusion that you may or may not be making between the perceptual functions that are part of the model of the way the world works (what I have been calling the World Model) and the perceptions that constitute the model of the way the world is.

  It does not follow from this that any perception, or the complete set, is a model of the state of the current world. Although I realise that with some (low level) perceptions it might appear that this is the case, I think it is a misconception of what perceptions are, and is ascribing to them a conceptualisation which reaches beyond their inherent nature, and also leads to the erroneous concept of “World Models”. Perceptions as a “World View” might be closer to the mark, but even then this gives the wrong impression that they are representative of an objective real world.

No. We just assume that there exists an objective real world, because that’s the alternative to solipsism, and if you and the rest of what I think I perceive are simply figments of my mind, the situation is not very interesting. So I do assume you exist, and work from there. Starting from the assumption that there exists a real world that influences our senses, there is no error in taking our perceptions to constitute a model of the way the world is, and our control structures as a model of the way the world works.

  That we can have perceptions (hunger, fear, arrogance) that are not properties of the external world suggests that this is not the case.

Yes, we do have some perceptions of internal states as well, but I don’t see how this leads to the conclusion that you seem to be controlling for. In fact, I still don’t know what that really is, other than to deny that what is in the structure of our control system and the current values of the signals in it are unrelated to what is outside us.

  How we conceptualise PCT is extremely important as erroneous conceptualisation can lead to erroneous modelling and implementation. In my view the concept of “World Models” is such a case, resulting in a misleading conceptualisation of PCT, and we should expunge it from use.

I have seen where you are coming from with this comment, from the example that you linked (http://www.perceptualrobots.com/2014/09/18/taros-2014-iet-public-lecture/). Yes, a misconceptualization can lead to great error, just as misconceptualizations of Darwinian evolution have done. That’s not a problem for the theory. It’s a PR problem. I guess if you live in a world in which people believe that Darwin’s theory means a dog-eat-dog world, you might think that dog-eat-dog libertarianism or capitalism was the natural way to a better world, but that doesn’t mean that you should give up on using the phrase “survival of the fittest”. Neither do I think that misuse of the kind of explicit model that accurately describes a mechanical structure to say that solving the equations for such a model is the way we control is a reason for denying that our actual control structure, whatever it may be, is a model of the way the world works so far as we have yet learned it.
No. I keep saying (and you keep refusing to acknowledge that I have
said) that the structure of the hierarchy IS the model of the actual
world.
What’s the point of a replica? You would just have to model that
instead, wouldn’t you? In an infinite regress. Either the replica of
the replica of the replica would at each level of recursion become
of lower and lower fidelity to the original or it wouldn’t. In
neither case would it be much use for anything except admiring how
cleverly it was made. ?? I would have thought that memories ARE perceptions. Or are we
really going to get into a replica recursion proposal?
Whether they do or whether they don’t is an issue. I know that
sometimes I don’t know whether what I think is a memory is of
something that really happened or of something I dreamed or cobbled
together from fragments of different things that really happened,
but I do seem (consciously) to be aware as to whether I am seeing
the here and now or remembering something old. Of course, conscious
perception and perception in the wider PCT sense are not the same
thing, but Bill thought, and I agree, that you can’t be consciously
aware of something that is not in the repertoire of PCT signals. I
am aware of uncertainty about some things (such as what you are
trying to achieve in this thread), so somewhere in the hierarchy is
a signal representing uncertainty about that perception.
Of course, I agree that no scalar signal “knows” anything but its
value. Whatever “knows” whether a perception is real-time or from memory has to come from a different perceptual function. What is “one” and what is “the other” here? I don’t think I
mentioned two. You did, but I was trying to show there are not two,
but only one – the reorganized structure of the hierarchy.
Yep. Those are like the “model” to which you linked.
Partial replicas, but yes. They exemplify what I was talking about
when I said that your questions made me rethink whether in fact
there might be explicit models in our brain to deal with
counterfactual worlds. You assume a world that might be what happens
inside the head and build your replica, which is one of a myriad of
possible replicas of worlds that may or may not exist. Those models
are built inside your head before they become software or hardware,
and they are not – or rather, are not entirely – implemented as
the structure of your control hierarchy.
It’s funny, I had the impression that you started this thread by
criticizing me for thinking that there was such a model in one’s
head, when at the time I did not think there was. Now, after a few
iterations, I have come around to the view about which you wrongly
complained.
Real perceptual systems produce perceptions that are (they don’t
“contain”) a model of the current state of the world. Real
perceptual systems are a component of (they don’t “contain”) a model
of the way the world works.
You have expressed it correctly, so I don’t understand why you are
not quite clear about it. After “elsewhere” you put it quite well,
and explain “why [I am] calling it a World Model”. So where’s the
problem?
Replace “some may” by “all except those with imagination components
are”.
Again you go into the philosopher’s trick (The tree falling in the
forest). Every possible relationship among components in the world
exists, whether anyone currently perceives it or not. That’s just as
reasonable a proposition as that none of them do, or that only those
perceived by someone do. Take your pick. No-one can prove you wrong.
True. So what’s the point?
True, though I don’t like the word “subjective” in this context, as
it smacks of conscious thought – but what’s the point? The
relationship between the lamppost and the star exists, whether
anyone perceives it or not.
What is “it” in this question? You seem to be referring back to my
mention of fantasy worlds, which are definitely not THE world, since
there can be lots of them, all different, as any inventor knows.
In the earlier message I was just saying that I had come around to
recognize that you CAN construct an indefinite number of models of
fantasy worlds, and that those world models are not embodied in the
control hierarchy, but are manipulable perceptual data that include
perceptions of how the real world works. You CAN have an explicit
model of how the real world works (it’s called science, in some
circles, Faith in others). But you don’t have to, since your
reorganized hierarchy is the only world model you need for survival,
whether you are a human, a wolf, a tree, or a bacterium.
Making lots of different models of how a fantasy world might work is
called creativity, but I think it likely that at their lowest levels
all these fantasy-world models are constructed from imagined
perceptions that depend on how the real world works so far as our
reorganization has allowed us to control in that real world. I don’t
know what animals make such models, but it seems as though some do,
for example crows and elephants.
Martin


This is interesting because the discussion is converging on a little section in my chapter for LCS4. Martin has helped me consider a very technical role for the cerebellum within the output function of the hierarchy.

My initial idea had been a little different. What if the cerebellum holds a model of the world that is not perceived and can never be perceived? What if we interact with the cerebellum like we interact with the world? It learns, from completely outside the perceptual hierarchy, how to emulate the feedback functions and disturbances of the environment. It emulates the properties of the environment that we never perceive. A lot of the time it is pointless because the environment is there. But when there are gaps, it fills in, and could even time shift the whole emulation a few milliseconds forward to smooth control.
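To make the gap-filling part of the idea concrete, here is a deliberately crude sketch (the class name and the linear, one-gain world are invented; it is not a claim about what the cerebellum actually does):

    # A minimal emulator of a linear feedback function: it is trained while the
    # real perceptual input is available and can stand in for that input during
    # a gap, or be run slightly ahead of real time.
    class FeedbackEmulator:
        def __init__(self, learning_rate=0.05):
            self.gain = 0.0                      # learned output-to-input gain
            self.learning_rate = learning_rate

        def learn(self, output, observed_input):
            error = observed_input - self.gain * output
            self.gain += self.learning_rate * error * output

        def predict(self, output):
            return self.gain * output            # emulated (imagined) input

While the environment is available you would call learn(); during a gap you would use predict() in place of the missing sensory input.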

This idea could be seen as heresy from a PCT perspective except maybe for the fact that this system is separate from the PCT hierarchy, so its ‘rules’ don’t apply to the PCT hierarchy, just to it.

What do you think Rupert and Martin?

Warren


[From Rick Marken (2015.05.14.1920)]

···

On Thu, May 14, 2015 at 1:03 AM, Warren Mansell csgnet@lists.illinois.edu wrote:

W: This is interesting because the discussion is converging on a little section in my chapter for LCS4. Martin has helped me consider a very technical role for the cerebellum within the output function of the hierarchy.

RM: There is a nice discussion of the possible role of the cerebellum in behavior in Ch. 9 of B:CP. The neurophysiological evidence is presented in Figure 9.1. The neurophysiology suggests that the cerebellar cortex is involved in the control of third-level perceptions, in this case configurations of mainly kinesthetic (muscle force) perceptions, like those involved in grasping. As Bill says (p. 118 in the 2nd edition), “A function of the cerebellum is clearly to manipulate many vector-effort reference signals at once, thus to create coordinated patterns of otherwise independent efforts. To control is to sense: The cerebellum must perceive those same efforts”. So the cerebellum varies its outputs (via the Purkinje cells) as references for lower-level vector-effort control systems (sensation level) as the means of controlling perceptions of configurations of these vector efforts.
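As a toy illustration of that arrangement (the numbers and function names are invented; this is not Figure 9.1 or a cerebellar model), a higher-level system can control a configuration of lower-level effort perceptions purely by adjusting the references it sends down:

    # Two-level sketch: the higher system's output is not an action on the
    # world but a set of reference adjustments for lower-level effort systems.
    def configuration_perception(efforts):
        return sum(efforts) / len(efforts)            # toy perceptual function

    def step(config_ref, effort_refs, efforts, hi_gain=0.5, lo_gain=0.8):
        config_error = config_ref - configuration_perception(efforts)
        effort_refs = [r + hi_gain * config_error for r in effort_refs]
        efforts = [e + lo_gain * (r - e) for r, e in zip(effort_refs, efforts)]
        return effort_refs, efforts

    refs, efforts = [0.0, 0.0], [0.0, 0.0]
    for _ in range(20):
        refs, efforts = step(10.0, refs, efforts)
    print(round(configuration_perception(efforts), 2))   # close to 10.0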

RM: I don’t understand why you say there would be a “role for the cerebellum within the output function of the hierarchy”. There are output functions at all levels of the hierarchy. Are you saying that the output functions for the control systems at the level of the cerebellum have some special properties? Or are you saying something else?

RM: By the way, reading Ch. 9 in B:CP and looking at Figure 9.1 should put to rest anyone’s notion that PCT doesn’t have a strong basis in neurophysiology or that Bill didn’t know the relevant neurophysiology pretty darn well. I think Bill knew more physiology and more conscientiously anchored his theory in physiology than those, like Feldman, who can supposedly “help out” PCT with their physiological knowledge.

WM: My initial idea had been a little different. What if the cerebellum holds a model of the world that is not perceived and can never be perceived? What if we interact with the cerebellum like we interact with the world? It learns, from completely outside the perceptual hierarchy, how to emulate the feedback functions and disturbances of the environment. It emulates the properties of the environment that we never perceive. A lot of the time it is pointless because the environment is there. But when there are gaps, it fills in, and could even time shift the whole emulation a few milliseconds forward to smooth control.

WM: This idea could be seen as heresy from a PCT perspective except maybe for the fact that this system is separate from the PCT hierarchy, so its ‘rules’ don’t apply to the PCT hierarchy, just to it.

RM: PCT is not a religion so your idea is certainly not heresy. But I wonder why you proposed it? Is there some phenomenon that is explained by this idea that is not explained by the existing PCT model? I’m all for adding to or changing the PCT model as necessary to account for observations that cannot be accounted for by the model in its current state. But I think any proposed additions/changes should be testable. The only thing that I consider to be “heresy” is suggesting changes/additions to PCT without proposing how those changes/additions can be tested.

RM: For me, the charm of PCT is that you can demonstrate to yourself that it “works”; that you can test it (in demos and experiments) and see that it accounts for the real behavior observed in these situations very precisely. Bill never theorized just for theorizing’s sake. Theorizing is only 1/2 of the scientific enterprise; the other half is testing the theories. Bill was great at both and the excitement of PCT for me is that it involves both; theory and test. Leave out the test and PCT becomes a religion; leave out the theory and it’s “dust bowl empiricism”. Combine them and, to paraphrase the other Rick, the one in Casablanca, it’s the beginning of a beautiful friendship.

Best

Rick



Richard S. Marken

www.mindreadings.com
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

Thanks Rick! I had overlooked this section of B:CP! I will take a look in more detail!

Warren


HB: I see PCT competence. And I see you used Bill as a reference.

RM: By the way, reading Ch. 9 in B:CP and looking at Figure 9.1 should put to rest anyone’s notion that PCT doesn’t have a strong basis in neurophysiology or that Bill didn’t know the relevant neurophysiology pretty darn well. I think Bill knew more physiology and more conscientiously anchored his theory in physiology than those, like Feldman, who can supposedly “help out” PCT with their physiological knowledge.

HB: I agree. And I would add that Bill didn't only understand physiology and neurophysiology, but also simplified it and applied it in his own PCT way, which in my opinion makes both scientific disciplines more understandable. Genius.

Best,

Boris


···

From: Richard Marken (rsmarken@gmail.com via csgnet Mailing List) [mailto:csgnet@lists.illinois.edu]
Sent: Friday, May 15, 2015 4:22 AM
To: csgnet@lists.illinois.edu
Subject: Re: World Model (was Re: What I’m controlling for…)

[From Rick Marken (2015.05.14.1920)]

On Thu, May 14, 2015 at 1:03 AM, Warren Mansell csgnet@lists.illinois.edu wrote:

W: This is interesting because the discussion is converging on a little section in my chapter for LCS4. Martin has helped me consider a very technical role for the cerebellum within the output function of the hierarchy.

RM: There is a nice discussion of the possible role of the cerebellum in behavior in Ch. 9 of B:CP. The neurophysiological evidence is presented in Figure 9.1. The neurophysiology suggests that the cerebellar cortex is involved in the control of third level perceptions, in this case configurations of mainly kinesthetic (muscle force) perceptions, like those involved in grasping. As Bill says (p. 118 in the 2nd edition), "A function of the cerebellum is clearly to manipulate many vector-effort reference signals at once, thus to create coordinated patterns of otherwise independent efforts. To control is to sense: The cerebellum must perceive those same efforts". So the cerebellum varies its outputs (via the Purkinje Cells) as references for lower level vector effort control systems (sensation level) as the means of controlling perceptions of configurations of these vector efforts.

RM: I don’t understand why you say there would be a “role for the cerebellum within the output function of the hierarchy”. There are output functions at all levels of the hierarchy. Are you saying that the output functions for the control systems at the level of the cerebellum have some special properties? Or are you saying something else?

RM: By the way, reading Ch. 9 in B:CP and looking at Figure 9.1 should put to rest anyone's notion that PCT doesn't have a strong basis in neurophysiology or that Bill didn't know the relevant neurophysiology pretty darn well. I think Bill knew more physiology and more conscientiously anchored his theory in physiology than those, like Feldman, who can supposedly "help out" PCT with their physiological knowledge.

WM: My initial idea had been a little different. What if the cerebellum holds a model of the world that is not perceived and can never be perceived? What if we interact with the cerebellum like we interact with the world? It learns, from completely outside the perceptual hierarchy, how to emulate the feedback functions and disturbances of the environment. It emulates the properties of the environment that we never perceive. A lot of the time it is pointless because the environment is there. But when there are gaps, it fills in, and could even time shift the whole emulation a few milliseconds forward to smooth control.
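For what Warren describes, a toy sketch may help fix ideas; this is my own construction under assumed numbers, not the LCS4 proposal itself. A control loop loses its perceptual input for part of every cycle, and an "emulator", trained only on the input/output samples it has seen, supplies a substitute perception during the gaps.

```python
# Minimal sketch (assumed numbers, not Warren's model): an emulator learned from
# experience stands in for the perceptual signal whenever real feedback drops out.
import random

true_gain = 3.0          # unknown property of the environment's feedback function
est_gain = 1.0           # the emulator's learned estimate of that property
ref, out, p = 5.0, 0.0, 0.0
dt = 0.01

for step in range(20000):
    feedback_available = (step % 200) < 160     # 20% of the time the senses "blink"

    real_p = true_gain * out + 0.5 * random.uniform(-1, 1)   # environment + noise
    if feedback_available:
        p = real_p
        # emulator learns the feedback function from the samples it can see (LMS)
        est_gain += 0.001 * (real_p - est_gain * out) * out
    else:
        p = est_gain * out                      # emulator fills the gap

    out += (ref - p) * dt * 5.0                 # ordinary integrating output function

print(round(est_gain, 2), round(true_gain * out, 2))   # est_gain ~3.0, controlled value ~5.0
```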

WM: This idea could be seen as heresy from a PCT perspective except maybe for the fact that this system is separate from the PCT hierarchy so its 'rules' don't apply to the PCT hierarchy, just to it.

RM: PCT is not a religion so your idea is certainly not heresy. But I wonder why you proposed it? Is there some phenomenon that is explained by this idea that is not explained by the existing PCT model? I'm all for adding to or changing the PCT model as necessary to account for observations that cannot be accounted for by the model in its current state. But I think any proposed additions/changes should be testable. The only thing that I consider to be "heresy" is suggesting changes/additions to PCT without proposing how those changes/additions can be tested.

RM: For me, the charm of PCT is that you can demonstrate to yourself that it "works"; that you can test it (in demos and experiments) and see that it accounts for the real behavior observed in these situations very precisely. Bill never theorized just for theorizing's sake. Theorizing is only 1/2 of the scientific enterprise; the other half is testing the theories. Bill was great at both and the excitement of PCT for me is that it involves both; theory and test. Leave out the test and PCT becomes a religion; leave out the theory and it's "dust bowl empiricism". Combine them and, to paraphrase the other Rick, the one in Casablanca, it's the beginning of a beautiful friendship.

Best

Rick

What do you think Rupert and Martin?

Warren

Richard S. Marken

www.mindreadings.com
Author of Doing Research on Purpose.

Now available from Amazon or Barnes & Noble

[Martin Taylor 2015.05.09.11.08]

Yes, that's the connection Bill showed. But what he neglected in that figure and the discussion was that the reference value doesn't
come from just one higher-level output. Maybe it doesn’t matter,
maybe it does. I happen to think (at this moment and for the last
long while) that it does, because each of the various higher-level
systems is controlling a different perception using the one (and
others), so the result of the imagining is impossible to determine
from any single higher-level contribution to the reference input (as
opposed to Bill’s idea that it would be what the single higher-level
unit shown in Fig 15.3 asks for). But that’s really independent of
the “World Model” problem.
No, but there’s a pretty standard switching mechanism, which it is
hard to imagine a neural network not implementing unless there is
some connection principle that avoids it – the flip-flop, which is
at the heart of the Von Neumann computer. On the perceptual side,
the reversing figure illusions (Vase-two people, Necker Cube,
old-young lady) seem as though they are the result of flip-flop
action. I have just taken it for granted that there’s no problem
with the switches unless someone shows that neurons don’t connect
that way even though it’s the most natural thing to expect.
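A minimal sketch of the kind of flip-flop Martin has in mind, as I read him (the equations and constants are my own assumptions, not a claim about real neural wiring): two mutually inhibiting leaky units, each with some self-excitation, hold whichever state the last input pulse put them in.

```python
# Minimal sketch (invented constants): two cross-inhibiting leaky units form a
# flip-flop.  An input pulse to either unit switches which one stays active, and
# the chosen state persists after the pulse ends -- the kind of bistability an
# "imagination switch" or a Necker-cube reversal would seem to require.
import math

def squash(x):
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

def step(a, b, input_a, input_b, dt=0.01):
    """One integration step: self-excitation plus cross-inhibition."""
    a += dt * (-a + squash(1.2 * a - 2.0 * b + input_a))
    b += dt * (-b + squash(1.2 * b - 2.0 * a + input_b))
    return a, b

a, b = 0.5, 0.5
for t in range(3000):                        # "set" pulse to unit A, then silence
    a, b = step(a, b, 3.0 if t < 1000 else 0.0, 0.0)
print(round(a, 2), round(b, 2))              # ~1.0 0.0 : A stays on after the pulse

for t in range(3000):                        # "reset" pulse to unit B, then silence
    a, b = step(a, b, 0.0, 3.0 if t < 1000 else 0.0)
print(round(a, 2), round(b, 2))              # ~0.0 1.0 : the flip-flop has switched
```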
Probably from a verbal description of a narwhal or a rhinoceros.
And yes, all these imaginary worlds are built from components
previously perceived.
Agreed.
There are two questions, which might turn out to be the same. One is
that of creativity. An experiment would presumably have to ask
someone to control for perceiving something they had never perceived
or been told of. I can’t think how to design such an experiment, and
if one did, how one would identify the different levels of
perception involved. The other is planning in the world in which things behave as we have
known them to do, the world in which we reorganized the control
structure we now have. That ought to be easier, since it is
something we all do every day. Since the issue is whether we can
choose among different possible ways to control a perception, not
whether we can control a perception or what the parameters are of
the control loop, the experiment would have to set up several
plausible means to control a perception the subject controls either
to please the experimenter (as in the usual tracking study) or for
some obvious reason. The experiments in which a bird or animal fills
a tube with water so as to reach a peanut, or an elephant moves a
box to get at something placed high, might be examples.
Got to go now. I’ll be away for about a week, but might have moments
to read mail.
Martin


[From Rupert Young (2015.05.17 21.00)]

(Martin Taylor 2015.05.13.18.02]

This might seem valid, but is, I think, missing out important parts
of the story. This seems to imply that perceptions are objective
representations of objective states of the world, in that they
represent states that are independent of the perceiver. In reality
perceptions are functions of external states and our own effects,
which may be output or perspectives, and are not independent of the
perceiver. Would you, for example, say hunger or arrogance are
states of the external world, independent of the perceiver? Or how
about perceptions that there is no god but allah, of the beauty of
Kylie Minogue or the view that some groups of people are worthless
vermin and should be exterminated (I am reading “Sophie’s Choice” at
the moment). Perceptions are, at all levels, in the eye of the
beholder and, so, subjective.

Above, where you say "[perceptions are a] model of the current state
of the world” and a “perception is based on one function of the
sensory consequences of the current state of the entire external
world", you are saying different things. The distinction being what is
meant by “model” and “function”. To me a model means something that
is the same as the thing being modelled, in terms of components and
processes, to an approximation, whereas a function is something that
transforms one set of things into something new, and is not the
same. With functions the individual (input) states are lost and we
only have the output of the function (from a sum you can’t get the
constituent values).
So, it may be valid to say that perceptions are *functions* of
the state of the world; it is not valid to say that perceptions are *models* of the state of the world.
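A trivial illustration of the point about functions and sums (the weights and inputs are invented): a weighted-sum perceptual function maps many different input states onto the same perceptual signal, so the constituent values cannot be recovered from it.

```python
# Invented numbers: many distinct external states produce the same perception.
weights = (0.5, 0.25, 0.25)

def perceive(inputs):
    # one perceptual function: a fixed weighted sum of its sensory inputs
    return sum(w * x for w, x in zip(weights, inputs))

print(perceive((2.0, 1.0, 1.0)))   # 1.5
print(perceive((1.0, 2.0, 2.0)))   # 1.5 -- a different external state, same perception
print(perceive((3.0, 0.0, 0.0)))   # 1.5 -- the inputs cannot be read back from the sum
```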
Additionally, and crucially, how can the “model” (which, you are
saying, is internal to the perceptual system) acquire information to
model disturbances in the external world? If it doesn’t do this then
it can’t really be considered a model of the world as disturbances
are a fundamental part of the state of the external world.
How about perceptions are subjective functions of the combination of
the (unknown) state of the objective world and our own subjective
effects (output or perspective).
If PCT shows us anything it shows us that living systems operate, to
maintain internal states at desired values, without needing
to know (or having access to) the “current state of the world”. The
inputs to the system from outside are not the current state of the
world but a combination of the effects of our own output and
external variables, which by definition (disturbances) are unknown
to us. Those internal states are maintained at the desired values
even though the current (external) state of the world varies,
perhaps substantially.
Perceptions provide a new dimension that would otherwise not
exist without the perceiver. As they are self-constructed internal
states we are able to control them in a way that we are not able to
control external states. If we were able to model the external world
then we wouldn’t need perceptions. This has been a major approach
within AI, but has insurmountable practical (and conceptual)
difficulties.
Consider a simple example of a cruise control system. It
perceives "speed" and varies output to keep the perceived speed at
X, even though the car is going up hill and down dale. What could we say
that the system is modelling of the state of the external world?
The incline of the hills? No. The aerodynamics of the car? No. The
dynamics of the equations of motion of objects? No. etc, etc,
etc. Are any of these modelled in the perceptual system? No. There
is no need, because of the way perceptual control systems work; that
is their "magic"! Changing the internal parameters of the perceptual function
doesn’t help either, that just affects the quality of control. The
only thing we could say, I think, is that the perception “models”
the actual speed of the car, that is the only “information”
available to the system. But that is captured by the word “perceive”
so introducing another term is superfluous. I am happy to be
convinced but so far I don’t see any justification, or necessity.
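A runnable version of the cruise-control point, with made-up numbers (this is a generic sketch, not Rupert's robot code): the controller perceives only speed and acts on the error, yet holds speed near its reference over unknown hills without containing any model of slope, drag, or equations of motion.

```python
# Generic sketch (invented numbers): the controller knows nothing but the
# perceived speed, its reference, and the error between them.
import math

dt, mass = 0.1, 1000.0
ref, speed = 30.0, 30.0
throttle, integral = 0.0, 0.0

for t in range(20000):
    # the environment (never represented anywhere inside the controller)
    slope = 0.05 * math.sin(t * dt / 30.0)               # unknown rolling hills
    force = 2000.0 * throttle - 0.8 * speed - mass * 9.8 * slope
    speed += (force / mass) * dt

    # the controller: perceive speed, compare, act -- nothing else
    error = ref - speed
    integral += 0.5 * error * dt
    throttle = 2.0 * error + integral                    # may go negative (braking)

print(round(speed, 1))   # ~30.0, up hill and down dale
```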
I assume the same so I’m not sure why you keep referring to
solipsism. I am saying perceptions are subjective perspectives on
the world so could not be said to model the way the world is. You
seem to be saying that perceptions are models of some objective
truth. How does a perception that whiskey tastes nice constitute the
way the world is?
All perceptions are internal states. The perception that someone is
arrogant is a perception (subjective internal state) of the
current state of the (external) world. As I understand your theory
you are saying that this perception is a model of an actual state in
the external world. Did you mean “related”? I am not denying that
our control systems are related to what is outside, but that they
could be considered a “model” of the state of the world. In fact, I
would say that perceptual control is a way that living systems have
got around the absence of such a model.
Quite! “survival of the fittest” has been misinterpreted to mean
survival of the strongest rather than survival of those that most
fit their environment. Likewise it is a misinterpretation to
conclude that perceptions are models of the external world from
perceptions being functions of variables in the external
world.
I acknowledge what you are saying but don’t agree, so am trying to
work out if you mean something different from the meaning I receive.
Which, as far as I can see, is a conjectured conceptualisation
without support.
It is you who is proposing an explicit model, but not saying why it
is not a replica. Yes, memories are perceptions, but also, according to PCT,
“retrieved recordings [models/replicas] of past perceptual signals”,
but not models of the external world.
Oh no!
I don’t see any rationale for this conclusion. It is a
misconceptualisation of perceptual systems. I think we are going
around in circles.
It seems like a contradiction: "don't replicate the dynamic
processes of the world" and “Both [PCT models and world models] are
models of the dynamics.”
All perceptual functions are functionally the same, they transform a
set of inputs into an output, whether the perceptions are imaginary
or not. As we go up the levels they are less dependent upon current
external variables; mixed up as they are with imagined perceptions
(memory). What we call them is an externally imposed classification.
We could regard all as imagination perceptions in the sense that
they are new creations, of signals that do not exist outside of the
perceptual system.
There’s no trick. Those relationships are relative to the perceiver
and require its presence, and don’t exist without it.
No, the *perception* of the alignment is relative to the
perceiver; it is the perception of the alignment of the lamppost and
the star, AND the eye. It cannot exist without the perceiver. It is
subjective, relative to the perceiver.
“It” is the model, which here you are saying is a model of a
non-existent world, hence not a model of THE world.
These are also perceptions (system level?). Perhaps, you are using
the term “model” to refer to a particular set of perceptions related
to how the world (may) work, rather than ones that actually model
it?
The reorganized hierarchy is a structure that enables you to control
intrinsic goals. It doesn’t follow that it models the world.
I don’t think we’re going to agree on this.
Rupert

···
    I think there is a fundamental problem with viewing perceptions

as a “model of the current state of the world”.

  What else could they possibly be? One perception is based on one

function of the sensory consequences of the current state of the
entire external world, is it not?


  How

else could you describe the set of all perceptions based on the
senses other than as a model of the world?


    It does not follow from this that any perception,

or the complete set, is a model of the state of the current
world. Although I realise that with some (low level) perceptions
it might appear that this is the case I think it is a
misconception of what perceptions are, and is ascribing to them
a conceptualisation which reaches beyond their inherent nature,
and also leads to the erroneous concept of “World Models”.
Perceptions as a “World View” might be closer to the mark, but
even then this gives the wrong impression that they are
representative of an objective real world.

  No. We just assume that there exists an objective real world,

because that’s the alternative to solipsism, and if you and the
rest of what I think I perceive are simply figments of my mind,
the situation is not very interesting. So I do assume you exist,
and work from there. Starting from the assumption that there
exists a real world that influences our senses, there is no error
in taking our perceptions to constitute a model of the way the
world is, and our control structures as a model of the way the
world works.

    That we can have perceptions (hunger, fear,

arrogance) that are not properties of the external world
suggests that this is not the case.

  Yes, we do have some perceptions of internal states as well, but I

don’t see how this leads to the conclusion that you seem to be
controlling for. In fact, I still don’t know what that really is,
other than to deny that what is in the structure of our control
system and the current values of the signals in it are unrelated
to what is outside us.


    How we conceptualise PCT is extremely important as erroneous

conceptualisation can lead to erroneous modelling and
implementation. In my view the concept of “World Models” is such
a case, of resulting in misleading conceptualisation of PCT and
we should expunge from use.

  I have seen where you are coming from with this comment, from the

example that you linked <http://www.perceptualrobots.com/2014/09/18/taros-2014-iet-public-lecture/>.
Yes, a misconceptualization can lead to great error, just as
misconceptualizations of Darwinian evolution have done. That’s not
a problem for the theory. It’s a PR problem. I guess if you live
in a world in which people believe that Darwin’s theory means a
dog-eat-dog world, you might think that dog-eat-dog libertarianism
or capitalism was the natural way to a better world, but that
doesn’t mean that you should give up on using the phrase “survival
of the fittest”. Neither do I think that misuse of the kind of
explicit model that accurately describes a mechanical structure to
say that solving the equations for such a model is the way we
control is a reason for denying that our actual control structure,
whatever it may be, is a model of the way the world works so far
as we have yet learned it.


      A "World Model" in the sense I have been using it is a process

model, which, given the state of the world and some imagined
action on it, produces the flow of perceptions that would (as
imagined) occur in that world. My problem now is that I had
previously presumed the “world” in question to be the one to
which the organism had reorganized, but you made me realize
that the modelled world could have any imagined properties,
and if those properties don’t correspond to the ones for which
we reorganized, then using the existing structure of the
hierarchy won’t work, and we would need to entertain the
possibility that an explicit world model can be built and
retained and used somewhere in the brain. Maybe you would
think of such a model as a “replica”; I wouldn’t, because I
think “replica” has connotations that wouldn’t be appropriate.

    Perhaps you are using the term "World Model" to mean the

(internal) environment (“world”) in which it perceives and
operates? But then it wouldn’t be a model. Do you mean that we
need an explicit world model in addition to the existing
structure of the hierarchy?

  No. I keep saying (and you keep refusing to acknowledge that I

have said) that the structure of the hierarchy IS the model of the
actual world.

    What is an "explicit" world model if not a replica?
  What's the point of a replica? You would just have to model that

instead, wouldn’t you? In an infinite regress. Either the replica
of the replica of the replica would at each level of recursion
become of lower and lower fidelity to the original or it wouldn’t.
In neither case would it be much use for anything except admiring
how cleverly it was made.

        So does an associative memory system (already

part of the theory) so why need a “World Model”.

      Don't confuse implementation with function. Associative memory

has been very much in my underlying thinking of how a World
Model might be implemented.

    Well, I think "World Model" is a misleading and unnecessary

term. Memories are models of perceptions

  ?? I would have thought that memories ARE perceptions. Or are we

really going to get into a replica recursion proposal?

  It's funny, I had the impression that you started this thread by

criticizing me for thinking that there was such a model in one’s
head, when at the time I did not think there was. Now, after a few
iterations, I have come around to the view about which you wrongly
complained.

    In my real world robots the latter (a "World

Model") is not required as real world is available. You are
saying, it appears, that real perceptual systems also contain a
model of the physical world.

  Real perceptual systems produce perceptions that are (they don't

“contain”) a model of the current state of the world. Real
perceptual systems are a component of (they don’t “contain”) a
model of the way the world works.

        Sure, your system has reorganised to select the

appropriate goals (perceptions), but I don’t see any support
for the case that this constitutes a “World Model” in that
it is replicating actual dynamic processes of the world.

      I'm not clear what you are getting at in this comment. The

“World Model” you use in going straight to the appropriate
control action at the many levels needed to select “mouse”,
get your hand on it, and move it usefully don’t replicate the
dynamic processes of the world. They produce the perceptions
that you need in order to be using the mouse, and those
perceptions do depend on the dynamics of the world.

    Then I am not quite clear on what you are saying, or why you are

calling it a “World Model”; elsewhere you have said a model is
“a dynamic functioning system that produces results like those
of the thing modelled”, “The World Model you have built up by
long experience controlling perceptions in the real world does
much the same” and “Both [PCT models and world models] are
models of the dynamics.”

  You have expressed it correctly, so I don't understand why you are

not quite clear about it. After “elsewhere” you put it quite well,
and explain “why [I am] calling it a World Model”. So where’s the
problem?

    The way I see it is that we control perceptions in

real time with a particular structure that has developed to
improve the quality of control, ultimately so that intrinsic
error is low; and those perceptions can be complex,
multitudinous and diverse; transitions, sequences, programs etc.
And those perceptions are signals that do not exist as external
world variables (though some may be functions of such)

  Replace "some may" by "all except those with imagination

components are".

    and may not even "represent" things that do, have

ever or even could exist outside of perceptual functions.

  Again you go into the philosopher's trick (The tree falling in the

forest). Every possible relationship among components in the world
exists, whether anyone currently perceives it or not. That’s just
as reasonable a proposition as that none of them do, or that only
those perceived by someone do. Take your pick. No-one can prove
you wrong.

    They may even be functions of arbitrary totally

unrelated elements of the world such as the alignment of a
lamppost in Timbuctoo and a star a billion light years away.
Perceptions are subjective constructions that are not world
properties.

  True, though I don't like the word "subjective" in this context,

as it smacks of conscious thought – but what’s the point? The
relationship between the lamppost and the star exists, whether
anyone perceives it or not.


      Whatever

is producing these perceptions is not a model of the world,

      Really???
        but an associative memory system is quite

consistent with this, I would say.

      Why the word "but"?



      Couldn't you say: "Whatever is producing these perceptions is

a model of a non-existent world, in the construction of which
the actions of associative memories play a significant part"?

    Sounds like you are saying it is not a model of THE world then?
  What is "it" in this question? You seem to be referring back to my

mention of fantasy worlds, which are definitely not THE world,
since there can be lots of them, all different, as any inventor
knows.

  In

the earlier message I was just saying that I had come around to
recognize that you CAN construct an indefinite number of models of
fantasy worlds, and that those world models are not embodied in
the control hierarchy, but are manipulable perceptual data that
include perceptions of how the real world works. You CAN have an
explicit model of how the real world works (it’s called science,
in some circles, Faith in others).

  But

you don’t have to, since your reorganized hierarchy is the only
world model you need for survival, whether you are a human, a
wolf, a tree, or a bacterium.

[From Rupert Young (2015.05.13 21.30)]

([Martin Taylor 2015.05.06.15.18]

      ....

http://www.perceptualrobots.com/2014/09/18/taros-2014-iet-public-lecture/

[From Philip 2015.05.17]

RY: In reality perceptions are functions of external states and our own effects.

I think the way Bill would have said this (I remember he said it somewhere in the beginning of LCS III) is that what we observe as behavior (the experimental measurement - the perception of the experimenter) is the sum of actions and random disturbances.

RY: To me a model means something that is the same as the thing being modelled.

A model is essentially a simulation of behavior.

RY: With functions the individual (input) states are lost and we only have the output of the function (from a sum you can’t get the constituent values).

It depends on how you conceptualize the process of measurement. Consider the Lebesgue integral. This is how it’s often described:

Imagine a cashier who is in charge of counting the coins at a bank at the end of the day. Assume the coins come in denominations of 1, 2, and 5. Say he receives the coins in the following order:

5, 2, 1, 2, 2, 1, 5, 2, 1, 2, 5

He has two different ways to count. The first is to count the coins as and when they come:

5 + 2 + 1 + 2 + 2 + 1 + 5 + 2 + 1 + 2 + 5 = 28

The second way is as follows. He has 3 boxes, one box for each denomination (the first for coins with denomination 1 and the last for coins with denomination 5). At the end of the day, he counts the coins in each box:

3 x 1 + 5 x 2 + 3 x 5 = 28

The first method is the Riemann way of summing, the second method is the Lebesgue way of summing.
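The cashier example can be run directly; the coins and the total of 28 come from the text above, and nothing else is assumed.

```python
# Philip's cashier example, run both ways (coins and the total of 28 as above).
from collections import Counter

coins = [5, 2, 1, 2, 2, 1, 5, 2, 1, 2, 5]

# The "Riemann" way: add the coins in the order they arrive.
riemann_total = sum(coins)

# The "Lebesgue" way: sort the coins into boxes by denomination,
# then add (count in each box) x (denomination).
boxes = Counter(coins)                                    # 3 ones, 5 twos, 3 fives
lebesgue_total = sum(value * count for value, count in boxes.items())

print(riemann_total, lebesgue_total)   # 28 28
```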

intentional dynamics.pdf (1.22 MB)


[From Rick Marken (2015.05.18.0840)]

···
MT: What else could they possibly be? One perception is based on one function of the sensory consequences of the current state of the entire external world, is it not?

Rupert Young (2015.05.17 21.00)

RY: This might seem valid, but is, I think, missing out important parts of the story. This seems to imply that perceptions are objective representations of objective states of the world, in that they represent states that are independent of the perceiver.

RM: I’m with Martin on this. Saying that a perception is a function of sensory input does not imply that perceptions represent an objective state of the world. Again I give the perception of the taste of lemonade as an example. That perception is a particular function of the sensory effects of various chemicals on the taste buds. There is no objective “taste of lemonade” out there in the world. A different perceptual function would produce a different perception of the same sensory inputs. The sensory function that produces the perception of lemonade can be considered a model of the “objective” cause of that aspect of our sensory input. A different function of the same sensory input could be considered a different model of the objective cause of that input.
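A toy version of the lemonade point (the names and weights are invented for illustration): the same sensory inputs, passed through two different perceptual functions, yield two different perceptual signals; neither is "out there", and each is one possible model of what is causing the inputs.

```python
# Invented weights: two perceptual functions of the very same sensory inputs.
sensed = {"sweet": 0.6, "sour": 0.8, "cold": 0.9}        # chemical/thermal effects

def lemonade_ness(s):        # one perceptual function of the inputs
    return 0.5 * s["sweet"] + 0.4 * s["sour"] + 0.1 * s["cold"]

def refreshing_ness(s):      # a different function of the same inputs
    return 0.2 * s["sweet"] + 0.1 * s["sour"] + 0.7 * s["cold"]

print(round(lemonade_ness(sensed), 2), round(refreshing_ness(sensed), 2))  # 0.71 0.83
```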

RY: In reality perceptions are functions of external states and our own effects, which may be output or perspectives, and are not independent of the perceiver.

RM: I think it’s more correct to say that the states of perceptual variables depend on the states of the external variables and our own feedback effects. Our effects on perceptions don’t affect the nature of those perceptions; they affect the state of perceptual variables. For example, the contraction of the iris is our effect on one perceptual variable: brightness (of course it affects focus and “depth of field” too). The greater the contraction, the lower the perception of brightness. So, again, it’s important to remember that what we call “perceptions” are often the current states of “perceptual variables”. Maybe it would have been better if Bill had called his book: Behavior: The Control of Perceptual Variables. But I think it’s good enough to try to keep in mind that when we talk about perceptions we are talking about the states of perceptual variables.
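A sketch of the iris example with invented numbers: the perceptual function for brightness stays the same, while our own output (iris contraction) changes only the state of that perceptual variable.

```python
# Invented toy model: same perceptual function, different states of the variable.
def brightness(light_intensity, pupil_area):
    return light_intensity * pupil_area          # the function never changes

light = 100.0
print(brightness(light, pupil_area=0.5))         # 50.0 -- relaxed iris
print(brightness(light, pupil_area=0.2))         # 20.0 -- contracted iris: lower state,
                                                 # same perceptual variable
```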

RM: This discussion about perceptions being or not being models would be more interesting (to me, at least) if you could explain the practical consequences of thinking of them in one way rather than the other. Right now it seems more like a 'number of angels dancing on the head of a pin' kind of debate. Rupert, you have built robots that control all kinds of interesting perceptual variables. Does conceiving of perceptions as models (or not) have any practical impact on how you design these robots?

Best regards

Rick

  How

else could you describe the set of all perceptions based on the
senses other than as a model of the world?

    It does not follow from this that any perception,

or the complete set, is a model of the state of the current
world. Although I realise that with some (low level) perceptions
it might appear that this is the case I think it is a
misconception of what perceptions are, and is ascribing to them
a conceptualisation which reaches beyond their inherent nature,
and also leads to the erroneous concept of “World Models”.
Perceptions as a “World View” might be closer to the mark, but
even then this gives the wrong impression that they are
representative of an objective real world.

  No. We just assume that there exists an objective real world,

because that’s the alternative to solipsism, and if you and the
rest of what I think I perceive are simply figments of my mind,
the situation is not very interesting. So I do assume you exist,
and work from there. Starting from the assumption that there
exists a real world that influences our senses, there is no error
in taking our perceptions to constitute a model of the way the
world is, and our control structures as a model of the way the
world works.

    That we can have perceptions (hunger, fear,

arrogance) that are not properties of the external world
suggests that this is not the case.

  Yes, we do have some perceptions of internal states as well, but I

don’t see how this leads to the conclusion that you seem to be
controlling for. In fact, I still don’t know what that really is,
other than to deny that what is in the structure of our control
system and the current values of the signals in it are unrelated
to what is outside us.

    How we conceptualise PCT is extremely important as erroneous

conceptualisation can lead to erroneous modelling and
implementation. In my view the concept of “World Models” is such
a case, of resulting in misleading conceptualisation of PCT and
we should expunge from use.

  I have seen where you are coming from with this comment, from the

example that you linked <http://www.perceptualrobots.com/2014/09/18/taros-2014-iet-public-lecture/

  >.

Yes, a misconceptualization can lead to great error, just as
misconceptualizations of Darwinian evolution have done. That’s not
a problem for the theory. It’s a PR problem. I guess if you live
in a world in which people believe that Darwin’s theory means a
dog-eat-dog world, you might think that dog-eat-dog libertarianism
or capitalism was the natural way to a better world, but that
doesn’t mean that you should give up on using the phrase “survival
of the fittest”. Neither do I think that misuse of the kind of
explicit model that accurately describes a mechanical structure to
say that solving the equations for such a model is the way we
control is a reason for denying that our actual control structure,
whatever it may be, is a model of the way the world works so far
as we have yet learned it.

      A "World Model" in the sense I have been using it is a process

model, which, given the state of the world and some imagined
action on it, produces the flow of perceptions that would (as
imagined) occur in that world. My problem now is that I had
previously presumed the “world” in question to be the one to
which the organism had reorganized, but you made me realized
that the modelled world could have any imagined properties,
and if those properties don’t correspond to the ones for which
we reorganized, then using the existing structure of the
hierarchy won’t work, and we would need to entertain the
possibility that an explicit world model can be built and
retained and used somewhere in the brain. Maybe you would
think of such a model as a “replica”; I wouldn’t, because I
think “replica” has connotations that wouldn’t be appropriate.

    Perhaps you are using the term "World Model" to mean the

(internal) environment (“world”) in which it perceives and
operates? But then it wouldn’t be a model. Do you mean that we
need an explicit world model in addition to the existing
structure of the hierarchy?

  No. I keep saying (and you keep refusing to acknowledge that I

have said) that the structure of the hierarchy IS the model of the
actual world.

    What is an "explicit" world model if not a replica?
  What's the point of a replica? You would just have to model that

instead, wouldn’t you? In an infinite regress. Either the replica
of the replica of the replica would at each level of recursion
become of lower and lower fidelity to the original or it wouldn’t.
In neither case would it be much use for anything escapt admiring
how cleverly it was made.

        So does an associative memory system (already

part of the theory) so why need a “World Model”.

      Don't confuse implementation with function. Associative memory

has been very much in my underlying thinking of how a World
Model might be implemented.

    Well, I think "World Model" is a misleading and unnecessary

term. Memory are models of perceptions

  ?? I would have thought that memories ARE perceptions. Or are we

really going to get into a replica recursion proposal?
It’s funny, I had the impression that you started this thread by
criticizing me for thinking that there was such a model in one’s
head, when at the time I did not think there was. Now, after a few
iterations, I have come around to the view about which you wrongly
complained.

    In my real world robots the latter (a "World

Model”) is not required as the real world is available. You are
saying, it appears, that real perceptual systems also contain a
model of the physical world.

  Real perceptual systems produce perceptions that are (they don't

“contain”) a model of the current state of the world. Real
perceptual systems are a component of (they don’t “contain”) a
model of the way the world works.

        Sure, your system has reorganised to select the

appropriate goals (perceptions), but I don’t see any support
for the case that this constitutes a “World Model” in that
it is replicating actual dynamic processes of the world.

      I'm not clear what you are getting at in this comment. The

“World Model” you use in going straight to the appropriate
control action at the many levels needed to select “mouse”,
get your hand on it, and move it usefully doesn’t replicate the
dynamic processes of the world. They produce the perceptions
that you need in order to be using the mouse, and those
perceptions do depend on the dynamics of the world.

    Then I am not quite clear on what you are saying, or why you are

calling it a “World Model”; elsewhere you have said a model is
“a dynamic functioning system that produces results like those
of the thing modelled”, “The World Model you have built up by
long experience controlling perceptions in the real world does
much the same” and “Both [PCT models and world models] are
models of the dynamics.”

  You have expressed it correctly, so I don't understand why you are

not quite clear about it. After “elsewhere” you put it quite well,
and explain “why [I am] calling it a World Model”. So where’s the
problem?

    The way I see it is that we control perceptions in

real time with a particular structure that has developed to
improve the quality of control, ultimately so that intrinsic
error is low; and those perceptions can be complex,
multitudinous and diverse; transitions, sequences, programs etc.
And those perceptions are signals that do not exist as external
world variables (though some may be functions of such)

  Replace "some may" by "all except those with imagination

components are".

    and may not even "represent" things that do, have ever, or even could exist outside of perceptual functions.

  Again you go into the philosopher's trick (The tree falling in the

forest). Every possible relationship among components in the world
exists, whether anyone currently perceives it or not. That’s just
as reasonable a proposition as that none of them do, or that only
those perceived by someone do. Take your pick. No-one can prove
you wrong.

    They may even be functions of arbitrary totally

unrelated elements of the world such as the alignment of a
lamppost in Timbuctoo and a star a billion light years away.
Perceptions are subjective constructions that are not world
properties.

  True, though I don't like the word "subjective" in this context,

as it smacks of conscious thought – but what’s the point? The
relationship between the lamppost and the star exists, whether
anyone perceives it or not.

      Whatever

is producing these perceptions is not a model of the world,

      Really???
        but an associative memory system is quite

consistent with this, I would say.

      Why the word "but"?



      Couldn't you say: "Whatever is producing these perceptions is

a model of a non-existent world, in the construction of which
the actions of associative memories play a significant part"?

    Sounds like you are saying it is not a model of THE world then?
  What is "it" in this question? You seem to be referring back to my

mention of fantasy worlds, which are definitely not THE world,
since there can be lots of them, all different, as any inventor
knows.
In the earlier message I was just saying that I had come around to recognize that you CAN construct an indefinite number of models of fantasy worlds, and that those world models are not embodied in the control hierarchy, but are manipulable perceptual data that include perceptions of how the real world works. You CAN have an explicit model of how the real world works (it’s called science, in some circles, Faith in others). But you don’t have to, since your reorganized hierarchy is the only world model you need for survival, whether you are a human, a wolf, a tree, or a bacterium.
Would you, for example, say hunger or arrogance are
states of the external world, independent of the perceiver? Or how
about perceptions that there is no god but allah, of the beauty of
Kylie Minogue or the view that some groups of people are worthless
vermin and should be exterminated (I am reading “Sophie’s Choice” at
the moment). Perceptions are, at all levels, in the eye of the
beholder and, so, subjective.

Above, where you say “[perceptions are a] model of the current state of the world” and that a “perception is based on one function of the sensory consequences of the current state of the entire external world”, you are saying two different things. The distinction lies in what is meant by “model” and “function”. To me a model means something that is the same as the thing being modelled, in terms of components and processes, to an approximation, whereas a function is something that transforms one set of things into something new, and is not the same. With functions the individual (input) states are lost and we only have the output of the function (from a sum you can’t get the constituent values).

So, while it may be valid to say that perceptions are *functions* of the state of the world, it is not valid to say that perceptions are *models* of the state of the world.
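To make that distinction concrete, here is a toy sketch (not from the thread; the function, weights and values are invented for illustration) of a perceptual signal as a weighted sum of sensory inputs. Because such a function is many-to-one, very different world states can produce the same signal, which is why a perception can be a function of the world without being a replica of it.

```python
# Toy illustration: a perceptual function as a weighted sum of sensory inputs
# (the weights and values are arbitrary). The function is many-to-one, so quite
# different states of the world yield the identical perceptual signal; the
# signal cannot be inverted to recover, let alone replicate, the state that
# produced it.

def perceive(inputs, weights=(0.5, 0.25, 0.25)):
    """One perceptual signal computed from several sensory inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

state_a = (10.0, 10.0, 10.0)   # one possible state of the world
state_b = (16.0, 2.0, 6.0)     # a quite different state of the world

print(perceive(state_a))       # 10.0
print(perceive(state_b))       # 10.0 -- same perception, different worlds
```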

Additionally, and crucially, how can the "model" (which, you are

saying, is internal to the perceptual system) acquire information to
model disturbances in the external world? If it doesn’t do this then
it can’t really be considered a model of the world as disturbances
are a fundamental part of the state of the external world.

How about this: perceptions are subjective functions of the combination of the (unknown) state of the objective world and our own subjective effects (output or perspective)?

If PCT shows us anything it shows us that living systems operate, to

maintain internal states at desired values, without needing
to know (or having access to) the “current state of the world”. The
inputs to the system from outside are not the current state of the
world but a combination of the effects of our own output and
external variables, which by definition (disturbances) are unknown
to us. Those internal states are maintained at the desired values
even though the current (external) state of the world varies,
perhaps substantially.

Perceptions provide a new dimension that would otherwise not

exist without the perceiver. As they are self-constructed internal
states we are able to control them in a way that we are not able to
control external states. If we were able to model the external world
then we wouldn’t need perceptions. This has been a major approach
within AI, but has insurmountable practical (and conceptual)
difficulties.

If we consider a simple example of a cruise control system: it perceives “speed” and varies output to keep the perceived speed at X, even though the car is going up hill and down dale. What could we say that the system is modelling of the state of the external world? The incline of the hills? No. The aerodynamics of the car? No. The dynamics of the equations of motion of objects? No. Etc., etc., etc. Are any of these modelled in the perceptual system? No. There is no need, because of the way perceptual control systems work; that is their “magic”! Changing the internal parameters of the perceptual system doesn’t help either; that just affects the quality of control. The only thing we could say, I think, is that the perception “models” the actual speed of the car, as that is the only “information” available to the system. But that is captured by the word “perceive”, so introducing another term is superfluous. I am happy to be convinced but so far I don’t see any justification, or necessity.
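As a rough illustration of that argument, here is a minimal cruise-control sketch (illustrative code only, not anything from the thread; the gains, drag term and hill profile are arbitrary assumptions). The loop perceives only speed, yet it holds that perception near its reference while a hill disturbance it never represents acts on the car.

```python
# Minimal cruise-control sketch (gains, drag coefficient and hill profile are
# arbitrary assumptions). The controller perceives only speed and adjusts the
# throttle to keep that perception at its reference. It contains no
# representation of hills, aerodynamics or equations of motion; those act only
# as an unmodelled disturbance on the environment side of the loop.

import math

def simulate(seconds=200.0, dt=0.1, reference=30.0, gain=2.0):
    speed = 0.0        # environmental variable (m/s)
    throttle = 0.0     # system output
    perception = 0.0
    for step in range(int(seconds / dt)):
        t = step * dt
        hill = 1.5 * math.sin(0.05 * t)   # unknown disturbance (slope effect)
        perception = speed                # perceptual function: p = f(world)
        error = reference - perception    # comparator
        throttle += gain * error * dt     # integrating output function
        # environment: acceleration from throttle, drag and the hill disturbance
        accel = 0.5 * throttle - 0.5 * speed - hill
        speed += accel * dt
    return perception

print(f"perceived speed: {simulate():.2f} (reference 30.0)")  # stays close to 30
```

Nothing in the controller corresponds to the hill; the disturbance is simply opposed through the normal operation of the loop.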

I assume the same so I'm not sure why you keep referring to

solipsism. I am saying perceptions are subjective perspectives on
the world so could not be said to model the way the world is. You
seem to be saying that perceptions are models of some objective
truth. How does a perception that whiskey tastes nice constitute the
way the world is?

All perceptions are internal states. The perception that someone is

arrogant is a perception (subjective internal state) of the
current state of the (external) world. As I understand your theory
you are saying that this perception is a model of an actual state in
the external world. Did you mean “related”? I am not denying that our control systems are related to what is outside; what I am denying is that they could be considered a “model” of the state of the world. In fact, I
would say that perceptual control is a way that living systems have
got around the absence of such a model.

Quite! "survival of the fittest" has been misinterpreted to mean

survival of the strongest rather than survival of those that most
fit their environment. Likewise it is a misinterpretation to
conclude that perceptions are models of the external world from
perceptions being functions of variables in the external
world.
I acknowledge what you are saying but don’t agree, so am trying to
work out if you mean something different from the meaning I receive.
Which, as far as I can see, is a conjectured conceptualisation without support.

It is you who is proposing an explicit model, but not saying why it

is not a replica.

Yes, memories are perceptions, but also, according to PCT,

“retrieved recordings [models/replicas] of past perceptual signals”,
but not models of the external world.

Oh no!

I don't see any rationale for this conclusion. It is a

misconceptualisation of perceptual systems. I think we are going
around in circles.

It seems like a contradiction, " don't replicate the dynamic

processes of the world" and “Both [PCT models and world models] are
models of the dynamics.”

All perceptual functions are functionally the same: they transform a set of inputs into an output, whether the perceptions are imaginary or not. As we go up the levels they are less dependent upon current external variables, mixed up as they are with imagined perceptions (memory). What we call them is an externally imposed classification. We could regard them all as imagination perceptions in the sense that they are new creations: signals that do not exist outside of the perceptual system.
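A toy sketch of that claim (illustrative only; the values, weights and switch arrangement are assumptions, not a statement of the actual PCT mechanism): the perceptual function applies the same transform whether a given input arrives from a lower-level, sensory-derived signal or is switched to an imagined or remembered value.

```python
# Toy sketch (illustrative only; not a claim about the actual PCT switching
# mechanism). A perceptual function applies the same transform whether an
# input arrives from a lower-level, sensory-derived signal or is switched to
# an imagined/remembered value.

def perceptual_function(inputs, weights=(0.5, 0.25)):
    """Same functional form regardless of where the inputs come from."""
    return sum(w * x for w, x in zip(weights, inputs))

sensed = (2.0, 5.0)              # values arriving from lower-level perceptions
imagined = (3.0, 8.0)            # remembered/imagined values for the same inputs
use_imagination = (False, True)  # per-input "imagination connection" switches

signals = tuple(i if sw else s
                for s, i, sw in zip(sensed, imagined, use_imagination))

print(perceptual_function(sensed))    # 0.5*2.0 + 0.25*5.0 = 2.25
print(perceptual_function(signals))   # 0.5*2.0 + 0.25*8.0 = 3.0 (one input imagined)
```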

There's no trick. Those relationships are relative to the perceiver

and require its presence, and don’t exist without it.

No, the *perception* of the alignment is relative to the

perceiver; it is the perception of the alignment of the lamppost and
the star, AND the eye. It cannot exist without the perceiver. It is
subjective, relative to the perceiver.

"It" is the model, which here you are saying is a model of a 

non-existent world, hence not a model of THE world.

These are also perceptions (system level?). Perhaps, you are using

the term “model” to refer to a particular set of perceptions related
to how the world (may) work, rather than ones that actually model
it?

The reorganized hierarchy is a structure that enables you to control

intrinsic goals. It doesn’t follow that it models the world.

I don't think we're going to agree on this.



Rupert


[From Rupert Young (2015.05.21 21.00)]

Yes, just what I've been saying. Though it doesn't seem to tally with what Martin said, “” And what you said above is not the same as what Martin said, as he includes “”.
Well, what you (and Martin) mean by “model” is what I have been trying to determine. Why use “model” rather than “function”? What does it add to the theory?
Before I got into AI I was a Land Surveyor for a dozen years. I used to wander around with my theodolite, tape measure and level and take precise measurements of the real world. From those I would construct maps, which are 3-dimensional models of the world. That is, objective replicas of the real world. That is what a “model” means to me. In that context it doesn’t make sense to talk of models having the same inputs (unless you are talking about authenticity).
Likewise the models you build are also intended to be objective
replicas of the structure and processes of perceptual systems, and I
don’t think that is what you mean by your use of “model” above.
Though Martin is saying there is only one type of model.
When I returned to University to study AI I was dismayed to find
that building 3-D objective world models was actually thought of as
a valid approach, with the robots being used as a mobile measuring
instrument. It seemed that the role of perception was seen precisely
as a way of building objective models of the world.

Sure.
I think the history of AI has been a litany of misconceived methodology, leading to dead ends and slow progress. The techniques and methodology employed are dependent upon how the paradigm is conceived. Examples of approaches have included knowledge-based systems, search, classification, pattern recognition, neural networks, genetic algorithms and model mapping (as mentioned above). So, if you think intelligent systems are about knowledge you may end up with expert systems, and if you think they are about pattern recognition you end up trying to extract information from images. Whichever of these approaches is taken, they will all have one thing in common: they will not encompass perceptual control (except by accident).
So, if an AI researcher thinks perception is about modelling the
current state of the world they may well spend all their time
working out how to build objective replicas of the world, and never
get to perceptual control. In my view, from this perspective, we
should always start with perceptual control and not introduce
concepts which don’t add anything to the current theory, or could
mislead. I see the inclusion of “models” as such a case.
The model mapping approach is still seen as valid and is used in robotic vacuum cleaners, as with this quote: “This unique 360° vision system uses complex mathematics, probability theory, geometry and trigonometry to map and navigate a room”, from http://diydrones.com/profiles/blogs/new-dyson-robot-vacuum-has-360-degree-slam-camera-tech (see also https://www.youtube.com/watch?v=KbOqsp3oUQI).
This is all very sophisticated (taking 202 man years), but, I think,
neglects fundamental aspects of intelligence. I would call it AA
(Advanced Automation) rather than AI (Artificial Intelligence). And, of course, one fundamental omission is control of goals.
Shouldn’t these vacuum cleaner systems embody their raison d’etre of
cleaning as an actual goal, which this example certainly doesn’t?!
I see significant problems with this “perception as world model” concept, with how it is used in AI anyway. One principal issue is that it severely restricts the role of perception to only emulating, or extracting, what is out there in the real world, rather than creating new, subjective (internal) states, which can also include abstractions and imagination. So, in the PCT context, one may want to say that imagination and memory are a model of the world as a loose metaphor, but that is not a fundamental or general (or necessary) principle of perceptual control systems; the principle is simply that perceptions (or states of perceptual variables) are functions of external variables and our own feedback effects.
Regards,
Rupert
Regards,
Rupert

···

[From Rick Marken (2015.05.18.0840)]

                MT: What else could they

possibly be? One perception is based on one function
of the sensory consequences of the current state of
the entire external world, is it not?
Rupert Young
(2015.05.17 21.00)
RY: This might seem valid, but is, I think,
missing out important parts of the story. This seems to
imply that perceptions are objective representations of
objective states of the world, in that they represent
states that are independent of the perceiver.

          RM: I'm with Martin on this. Saying that a perception

is a function of sensory input does not imply that
perceptions represent an objective state of the world.
Again I give the perception of the taste of lemonade as an
example. That perception is a particular function of the
sensory effects of various chemicals on the taste buds.
There is no objective “taste of lemonade” out there in the
world.

  Every possible relationship among components in the world exists, whether anyone currently perceives it or not.

          A different perceptual function would produce a

different perception of the same sensory inputs. The
sensory function that produces the perception of lemonade
can be considered a model of the “objective” cause of that
aspect of our sensory input. A different function of the
same sensory input could be considered a different model
of the objective cause of that input.


            RY: In reality

perceptions are functions of external states and our own
effects, which may be output or perspectives, and are
not independent of the perceiver.

          RM: ... But I think it's good enough to try to keep in

mind that when we talk about perceptions we are talking
about the states of perceptual variables.

          RM: This discussion about perceptions being or not

being models would be more interesting (to me, at least)
if you could explain the practical consequences of
thinking of them in one way rather than the other. Right
now it seems more like a “number of angels dancing on the head of a pin” kind of debate. Rupert, you have built
robots that control all kinds of interesting perceptual
variables. Does conceiving of perceptions as models (or
not) have any practical impact on how you design these
robots?


[From Rupert Young (2015.07.24 12.00)]

[Martin Taylor 2015.07.01.09.40]

I finally got around to reading your messages and Rick's. The problem with constructing a proper reply at this point is that when I read what you wrote about perception and modelling, I agree with it. When I read what I wrote about those topics, I agree with it. When I read your critiques of what I wrote, I don't see why you think there is disagreement.

Well, I'm reluctant to get sucked back into this vortex of despair. But I can't help myself.

The term "model" comes from a different conceptual space, and has many possible denotations and a wider range of connotations. Powers liked to use it as a way of constructing something that performs like the thing modelled, which would be your "replica". But when you make a sculptural mould, that, too, is a model. It is the inverse of the shape moulded, and it enables replicas of the original to be made, replicas which could be modified to create possible variants of the original. I see the reorganized hierarchy as that kind of model, not of a physical shape, but of the workings of the world. It's a function of functions, and it is a model.

Let's try a simple example, of opening doors. You are likely to have a control system for the goal of force to apply (the perception of opposing muscle tensions) that will result in the door opening. You probably also need, at least, a higher system for controlling the perception of whether the door is actually opening (the perception of the rate of change of opening). The output from the higher level sets the reference for the lower level.

That connection between the two may start off "loose" but will change through reorganisation according to what doors you are used to. So if you tend to come across 10 kg doors then the connection (gain perhaps) will reorganise such that the general error response of the system is minimised.

This organisation, I think, would be what you are calling a model 'of the world'.

I think what we can say is that a structure has developed that is consistent with good control in the real world. But as there is nothing in the system that actually models the world, such as the mass of the door, it is not valid to call it a model 'of the world'. There is no direct correspondence between entities in the control system and entities in the world.

Though we could say, perhaps, that the structure is a model of the perceptions (or produces the perceptions) that are required for good control in relation to the world.

A further reason for saying that it is not a model of the world is that it can handle situations which it has not come across before. For example, if you go on holiday to Brobdingnag, where the doors are 20 kg, initially you wouldn't push against the door with sufficient force to open it, but after a while, due to error building up in your higher, rate-of-opening system, the reference for your "force-applied" perception will increase to the point where the door does open. So here the system handles a situation which was not part of the world it had met before, so how could the system be said to be modelling the 'world'?
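For concreteness, here is a sketch of the two-level arrangement described above (illustrative code only; the gains, friction value and door masses are arbitrary assumptions). The higher loop controls the perceived rate of opening and sets the reference for the lower force loop; nothing in either loop represents the door's mass, yet a heavier door simply produces persistent higher-level error that drives the force reference up until the door opens.

```python
# Sketch of the two-level arrangement described above (parameters are
# illustrative assumptions). The higher loop controls the perceived rate of
# opening and sets the reference for the lower force loop. Nothing in either
# loop represents the door's mass: a heavier door just produces persistent
# higher-level error, which drives the force reference up until the door moves.

def open_door(door_mass, seconds=8.0, dt=0.01,
              rate_ref=1.0, hi_gain=40.0, lo_gain=5.0, friction=20.0):
    force_ref = 0.0   # reference sent down to the lower (force) loop
    force = 0.0       # perceived/applied force
    rate = 0.0        # perceived rate of door opening
    for _ in range(int(seconds / dt)):
        # higher loop: perceived opening rate vs. desired rate
        hi_error = rate_ref - rate
        force_ref += hi_gain * hi_error * dt   # integrating output -> lower reference
        # lower loop: perceived force vs. its reference
        lo_error = force_ref - force
        force += lo_gain * lo_error * dt
        # environment: the door only moves once force exceeds its resistance
        accel = max(force - friction, 0.0) / door_mass - 2.0 * rate
        rate = max(rate + accel * dt, 0.0)
    return rate

print(round(open_door(door_mass=10.0), 2))   # familiar door: close to rate_ref
print(round(open_door(door_mass=20.0), 2))   # heavier door: still ends up opening
```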

Also there seems to be a logical flaw in the whole concept of the HPCT-type control system being a model of the world, in that it could only model things which exist independently of the system itself; that is, it would be restricted to objective aspects of the world. This is plainly not the case, as has been discussed before; love, fear, justice, taste, honesty etc.

I see the beauty and wonder, and power, of PCT being that it deals with subjective aspects (perceptions) which are most definitely not of the world, enabling living systems to control internal perspectives far beyond the limitations of the external world.

[Rick Marken (2015.05.18.0840)]
RM: Right now it seems more like a "number of angels dancing on the head of a pin" kind of debate.

Just trying to live up to the great tradition set by yourself and Martin :-)

Regards,
Rupert

[From philip]

Look at this powerful alien sniper rifle I found, down by the bay! (where the watermelons grow)

The Von Neumann universal constructor is an abstract device capable of constructing all constructible artifacts of an environment, as described by John von Neumann via his kinematic (robotic) and tessellation (cellular automata) models.

I can’t wait to shoot a couple rounds!

I have a strong feeling that Bill (along with 99.999% of humanity) was technically unfamiliar with Von Neumann’s work. Maybe it really is a small world after all, or maybe everybody is just so spaced out. But it’s obvious that there is almost no discussion of theoretical computer science in PCT.
I will try to provide quality instruction about some of these things. I know there are a lot of dedicated teachers here; maybe we can all “take a seat” and appreciate the opportunity to learn something important.

Let’s have a look here at a model of a mutating Von Neumann self-reproducing automaton.


Caption: A demonstration of the ability of von Neumann’s machine to support inheritable mutations. (1) At an earlier timestep, a mutation was manually added to the second generation machine’s tape. (2) Later generations both display the phenotype of the mutation (a drawing of a flower) and pass the mutation on to their children, since the tape is copied each time. **This example illustrates how von Neumann’s design allows for complexity growth (in theory), since the tape could specify a machine that is more complex than the one making it.** [emphasis is mine]
PY: I want to point out that it says the tape specifies the machine description, but it’s really the programmer who specifies its description and adds the mutation. The machine is not controlling its input (the tape it receives). This tape is perceptually controlled by the programmer. Programming is control of perception. We will discuss this extensively!
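For readers unfamiliar with the construction, here is a toy sketch of the tape-copying idea in the caption (it is not von Neumann's cellular-automaton design; the class and names are invented for illustration): reproduction builds a child from the tape and then copies the tape verbatim, so a mutation written onto one generation's tape is inherited by all later generations.

```python
# Toy sketch of the tape-copying idea in the caption above. It is NOT von
# Neumann's cellular-automaton construction; the class and names here are
# illustrative. Reproduction builds a child from the tape and copies the tape
# verbatim, so a mutation written onto one generation's tape is inherited by
# every later generation.

class Machine:
    def __init__(self, tape):
        self.tape = tape                  # description of the machine (genotype)
        self.phenotype = "".join(tape)    # what gets "built" from the tape

    def reproduce(self):
        return Machine(list(self.tape))   # the tape is copied, mutations and all

gen1 = Machine(list("CONSTRUCTOR"))
gen2 = gen1.reproduce()
gen2.tape.append("+FLOWER")               # mutation added manually to gen 2's tape
gen3 = gen2.reproduce()
gen4 = gen3.reproduce()

print(gen1.phenotype)   # CONSTRUCTOR
print(gen2.phenotype)   # CONSTRUCTOR -- gen 2 itself was built before the edit
print(gen3.phenotype)   # CONSTRUCTOR+FLOWER -- inherited via the copied tape
print(gen4.phenotype)   # CONSTRUCTOR+FLOWER
```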

Bill Powers:
It seems quite likely that even at the level of DNA, life is in control
of its own evolution: it alters its own genes as a way of becoming immune to selection pressures, even when this entails changing itself from one species into another

PY: I used to think that was true. A few years back, before I knew about Powers, I asked my bioengineering professor, and he told me that the evidence does not
seem to corroborate this. What he said basically amounts to “DNA is not perceptually controlled”. And he knows all about everything. I’m only now realizing why my professor was probably right. Hmm…how do we start discussing this?..
I’ve attached a wonderful essay from Bill in 2009, in which he talks about models. Notice how he is talking about having his model match the reality.
Bill’s PCT is all about the reality behind the controlling done by living things. Great! Why do none of the models reproduce? For example, the cursor tracking demo does not reproduce and evolve. Are we forgetting about something here? Can someone get me a copy of the blueprints, pronto! You know what a blueprint looks like…hopefully. It looks like someone is very serious about building something very complex.
So far, I see nobody is getting their own concept of what they think constitutes a model through anybody else’s thick skull. But I will force myself upon thee. I’ve attached a second paper, by Von Neumann, which everyone should read. I don’t give a single dime how busy you are writing LCS IV! This is DEFINITELY going to be on the final.

Modeling.pdf (73.2 KB)

VonNeumann.pdf (1.53 MB)


Hi Philip,

I am enjoying the integration of these fields and the works of great, often under-recognised thinkers, as I am sure, as you explain, they can complement and/or enhance how we currently conceive of PCT.

Warren
