World Model

[Martin Taylor 2015.07.24.10.55]

[From Rupert Young (2015.07.24 12.00)]

[Martin Taylor 2015.07.01.09.40]

I finally got around to reading your messages and Rick's. The problem with constructing a proper reply at this point is that when I read what you wrote about perception and modelling, I agree with it. When I read what I wrote about those topics, I agree with it. When I read your critiques of what I wrote, I don't see why you think there is disagreement.

Well, I'm reluctant to get sucked back into this vortex of despair. But I can't help myself.

I'm reminded of a recurrent question in my mind. When someone says to a child "Behave yourself" I ask myself "Who else would she behave?"

The term "model" comes from a different conceptual space, and has many possible denotations and a wider range of connotations. Powers liked to use it for a construction that performs like the thing modelled, which would be your "replica". But when you make a sculptural mould, that, too, is a model. It is the inverse of the shape moulded, and it enables replicas of the original to be made, replicas which could be modified to create possible variants of the original. I see the reorganized hierarchy as that kind of model, not of a physical shape, but of the workings of the world. It's a function of functions, and it is a model.

Let's try a simple example, of opening doors. You are likely to have a control system for the goal of force to apply (the perception of opposing muscle tensions) that will result in the door opening.

That doesn't sound very consistent with standard HPCT as I understand it. In my understanding, you have a reference to perceive the door opening, and you have probably reorganized to have two main ways to do it, by getting the door to swing and by sliding it. Both indeed do require the application of force, but so far as I am aware, the level of force provided as a reference to the muscles will depend on the current state of the perception of the rate of door opening -- not opening fast enough, increase the reference value or opening too fast, reduce the force.

You probably also need, at least, a higher system for controlling the perception of whether the door is actually opening (the perception of the rate of change of opening). The output from the higher level sets the reference for the lower level.

Yes, if you add "dynamically" before "sets the reference", but that seems inconsistent with what you said above.
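The two-level arrangement under discussion -- a higher loop that perceives the door opening and dynamically sets the reference for a lower loop controlling perceived force -- can be sketched in a few lines. This is only a toy illustration, not anything from the thread: the function name, the gains, the 10 kg mass, and the friction figure are all invented, and the lower "force" loop is reduced to a simple lag.

```python
def simulate_door(mass=10.0, friction=8.0, dt=0.01, steps=3000):
    """Two-level hierarchy: the higher loop perceives how far the door is
    open and continuously sets the reference for a lower loop that
    controls perceived applied force."""
    x = 0.0           # perceived door opening (0 = closed, 1 = fully open)
    v = 0.0           # opening rate
    force = 0.0       # lower system's output (the muscle push)
    open_ref = 1.0    # higher-level reference: "door open"
    k_hi, k_lo = 5.0, 20.0
    for _ in range(steps):
        force_ref = k_hi * (open_ref - x)          # higher loop's output
        force += k_lo * (force_ref - force) * dt   # lower loop tracks its reference
        v += (force - friction * v) / mass * dt    # door dynamics (environment)
        x += v * dt
    return x
```

Because the force reference is recomputed from the perceived opening at every instant, the loop settles with the door fully open without any precomputed force value -- control of input, not of output.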

That connection between the two may start off "loose" but will change through reorganisation according to what doors you are used to. So if you tend to come across 10 kg doors then the connection (gain perhaps) will reorganise such that the general error response of the system will minimise.

It's possible to be a bit more precise here. Since every reference value depends on many outputs from higher-level control units, and every perceptual input function receives many inputs from lower-level perceptual signals, we are dealing with context and associative memory. (I agree with Powers on this, at least, and to me it seems almost to be a requirement of HPCT, but I have no proof of it). What you are saying can be restated in this context as "If every door you have encountered has been a swinging door of 10 kg, with equally lubricated hinges, then perception of a closed door for which you have a reference 'open' will result in planning (by control in imagination) to apply the same force as always has been used to open a door". But even if this is so, when you come up against a lighter or heavier door, would you not expect normal control to kick in? What you imply seems to be the control of output, not the control of input that is fundamental to PCT.

This organisation, I think, would be what you are calling a model 'of the world'.

I think what we can say is that a structure has developed that is consistent with good control in the real world. But as there is nothing in the system that actually models the world, such as the mass of the door, it is not valid to call it a model 'of the world'. There is no direct correspondence between entities in the control system and entities in the world.

We are back to wordplay here. You define a model as an element-by-element replica of something. The system is definitely not that kind of "model". I have not (until reading your writings) usually limited "model" in this way, though my range of meaning for "model" certainly includes your more limited range. In my use of the word, what is described above is indeed a model. It's a model in the sense that the complementary strands of a DNA double helix are models of each other. It doesn't replicate the world, it mirrors the world -- perhaps "inverts" the world might be better than "mirrors", though neither really has the right connotations of dynamic complementarity.
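The DNA analogy can be made concrete in a few lines (base-pairing only, ignoring strand directionality):

```python
# Watson-Crick base pairing: each base determines its complement.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """The complementary strand: a 'model' in the mirroring sense above."""
    return "".join(PAIR[base] for base in strand)

s1 = "ATGCGT"
s2 = complement(s1)
print(s2)                    # TACGCA -- nothing like a copy of s1
print(complement(s2) == s1)  # True -- yet as a template it regenerates s1
```

The complement shares no element with the original, but it carries everything needed to reproduce it: complementarity, not replication.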

An additional reason for saying that it is not a model of the world is that it can handle situations which it has not come across before. For example, if you go on holiday to Brobdingnag, where the doors are 20 kg, initially you wouldn't push against the door with sufficient force to open it, but after a while, due to error building up in your higher, rate-of-opening system, the reference for your "force-applied" perception will increase to the point where the door does open. So here the system handles a situation which was not part of the world it had met before, so how could the system be said to be modelling the 'world'?

I fail to see the issue, in two ways. Firstly, the model is correct in that pushing the 20kg doors does open them if you push hard enough. Secondly, the model is of the world encountered in its construction, not of Brobdingnag (coincidentally, I am reading Gulliver's Travels in the original for the first time, and am currently in Brobdingnag).
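The Brobdingnag scenario can be sketched by giving the higher system an integrating output, so that persistent error in the perceived opening makes the force reference build up. Again a toy with invented numbers: the lower force loop is assumed to track its reference perfectly, and a "breakaway" threshold stands in for the heavier door's initial resistance.

```python
def time_to_open(breakaway, mass, k_i=4.0, friction=8.0, dt=0.01, t_max=60.0):
    """Return the time at which the door reaches 90% open.
    The higher system's error integrates into the force reference, so a
    door that is not yet moving makes the reference grow over time."""
    x = v = force_ref = 0.0
    t = 0.0
    while t < t_max:
        force_ref += k_i * (1.0 - x) * dt          # error accumulates
        if v == 0.0 and force_ref < breakaway:
            a = 0.0                                # door sticks until pushed hard enough
        else:
            a = (force_ref - friction * v) / mass
        v = max(0.0, v + a * dt)
        x += v * dt
        t += dt
        if x >= 0.9:
            return t
    return t_max

home = time_to_open(breakaway=5.0, mass=10.0)          # familiar doors
brobdingnag = time_to_open(breakaway=20.0, mass=20.0)  # heavier, stickier doors
```

The same unchanged structure opens both doors; the heavier one simply takes longer, because the reference has to build up further before the door moves.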

Also there seems to be a logical flaw in the whole concept of the HPCT-type control system being a model of the world, in that it could only model things which exist independently of the system itself; that is, it would be restricted to objective aspects of the world. This is plainly not the case, as has been discussed before: love, fear, justice, taste, honesty, etc.

I don't follow that logic. Yes, I accept that no model can model itself entirely, since that would imply that the model that models itself also models itself modelling itself ... ad infinitum. But there's no reason why the model should not include any variables of the kind you mention. They are perceptions, after all. Subjectively, when you imagine certain situations, do you not also imagine experiencing perceptions such as those you mention? Is not your imagination a "replica-type" model of the world you are imagining?

I see the beauty and wonder, and power, of PCT being that it deals with subjective aspects (perceptions) which are most definitely not of the world, enabling living systems to control internal perspectives far beyond the limitations of the external world.

What are "the limitations of the external world"? External to what? Is one of the "limitations" that the world does not contain "jealousy" if you see the woman you want to be your girlfriend enjoying the company of another man? And yet you can imagine that you might perceive that emotion, can you not? So I ask again: "External to what?".

Why might imagination produce a replica-type model if the World Model is not a replica? Because the complementary nature of the Model produces a replica when it is used in imagination, just as it does when controlling in the world it models, and just as strand 2 of DNA provides the model for a replica of strand 1 with which it was originally paired.

[Rick Marken (2015.05.18.0840)]
RM: Right now it seems more like a "number of angels dancing on the head of a pin" kind of debate.

Just trying to live up to the great tradition set by yourself and Martin :-)

There are two questions here. One is what actually happens, and that's not a "dancing angels" issue. The other is what words are the best to communicate ideas about what is happening, because in the end we all (I presume) want to use the debate as a way of getting nearer to a complete theory of human (or biological or robotic) functioning.

If "Model" has a meaning to some people (including Rupert) that makes it an unsuitable way to communicate the ideas, then it's not efficient to use the word, as it would convey the intended idea to only a subset of the readership, and that doesn't help advance the theory. But I don't know of a better word. Some, in the history of science, have claimed that fuzzy words aren't useful at all, and that only clean-edged, well-defined mathematics ensures proper communication. (In the 1970s, I had a friend who claimed the programming language APL was the only language in which scientific ideas should be discussed.) I don't buy the mathematics idea, for reasons I won't go into here, but people who espouse that position do have a point.

I thought of "mold" as a replacement for "model", but to me that has a static connotation, and the World Model as I conceive it is a highly dynamic process that produces either the desired effects in the real world or dynamic replicas in imagination of the real or any fantasy world. Unless someone comes up with a better word, I intend to continue using the word "model" and hope that readers will understand the context in which I use it.

Martin

[From Rupert Young (2015.08.02 20.00)]

[Martin Taylor 2015.07.24.10.55]

Let's try a simple example, of opening doors. You are likely to have a control system for the goal of force to apply (the perception of opposing muscle tensions) that will result in the door opening.

That doesn't sound very consistent with standard HPCT as I understand it. In my understanding, you have a reference to perceive the door opening, and you have probably reorganized to have two main ways to do it, by getting the door to swing and by sliding it. Both indeed do require the application of force, but so far as I am aware, the level of force provided as a reference to the muscles will depend on the current state of the perception of the rate of door opening -- not opening fast enough, increase the reference value or opening too fast, reduce the force.

Did you write that before reading my next sentence, where I mention exactly that, and then forget to erase it?

You probably also need, at least, a higher system for controlling the perception of whether the door is actually opening (the perception of the rate of change of opening). The output from the higher level sets the reference for the lower level.

Yes, if you add "dynamically" before "sets the reference", but that seems inconsistent with what you said above.

As far as I can see we are saying pretty much the same thing.

This organisation, I think, would be what you are calling a model 'of the world'.

I think what we can say is that a structure has developed that is consistent with good control in the real world. But as there is nothing in the system that actually models the world, such as the mass of the door, it is not valid to call it a model 'of the world'. There is no direct correspondence between entities in the control system and entities in the world.

We are back to wordplay here. You define a model as an element-by-element replica of something. The system is definitely not that kind of "model". I have not (until reading your writings) usually limited "model" in this way, though my range of meaning for "model" certainly includes your more limited range. In my use of the word, what is described above is indeed a model. It's a model in the sense that the complementary strands of a DNA double helix are models of each other. It doesn't replicate the world, it mirrors the world -- perhaps "inverts" the world might be better than "mirrors", though neither really has the right connotations of dynamic complementarity.

That still sounds like a replica to me, just inverted.

An additional reason for saying that it is not a model of the world is that it can handle situations which it has not come across before. For example, if you go on holiday to Brobdingnag, where the doors are 20 kg, initially you wouldn't push against the door with sufficient force to open it, but after a while, due to error building up in your higher, rate-of-opening system, the reference for your "force-applied" perception will increase to the point where the door does open. So here the system handles a situation which was not part of the world it had met before, so how could the system be said to be modelling the 'world'?

I fail to see the issue, in two ways. Firstly, the model is correct in that pushing the 20kg doors does open them if you push hard enough. Secondly, the model is of the world encountered in its construction, not of Brobdingnag (coincidentally, I am reading Gulliver's Travels in the original for the first time, and am currently in Brobdingnag).

I would call it a structure (not model) that is able to adapt to situations not previously encountered. I see a "model" as something that can only cope with something it models.

Also there seems to be a logical flaw in the whole concept of the HPCT-type control system being a model of the world, in that it could only model things which exist independently of the system itself; that is, it would be restricted to objective aspects of the world. This is plainly not the case, as has been discussed before: love, fear, justice, taste, honesty, etc.

I don't follow that logic. Yes, I accept that no model can model itself entirely, since that would imply that the model that models itself also models itself modelling itself ... ad infinitum. But there's no reason why the model should not include any variables of the kind you mention. They are perceptions, after all. Subjectively, when you imagine certain situations, do you not also imagine experiencing perceptions such as those you mention? Is not your imagination a "replica-type" model of the world you are imagining?

I think we could say that imagination is a model of perceptions, but perceptions are not models of the world. Perceptions are subjective perspectives, some of which may include environmental aspects, but can also be independent of the world.

I see the beauty and wonder, and power, of PCT being that it deals with subjective aspects (perceptions) which are most definitely not of the world, enabling living systems to control internal perspectives far beyond the limitations of the external world.

What are "the limitations of the external world"? External to what? Is one of the "limitations" that the world does not contain "jealousy" if you see the woman you want to be your girlfriend enjoying the company of another man? And yet you can imagine that you might perceive that emotion, can you not? So I ask again: "External to what?".

External to the perceptual system. Jealousy is entirely a characteristic of the perceiving system, not of the world external to that system, even if one of the inputs to the perceptual function is "girlfriend with another man".

Why might imagination produce a replica-type model if the World Model is not a replica? Because the complementary nature of the Model produces a replica when it is used in imagination, just as it does when controlling in the world it models, and just as strand 2 of DNA provides the model for a replica of strand 1 with which it was originally paired.

I don't understand the question; I may have answered it already.

[Rick Marken (2015.05.18.0840)]
RM: Right now it seems more like a "number of angels dancing on the head of a pin" kind of debate.

Just trying to live up to the great tradition set by yourself and Martin :-)

There are two questions here. One is what actually happens, and that's not a "dancing angels" issue. The other is what words are the best to communicate ideas about what is happening, because in the end we all (I presume) want to use the debate as a way of getting nearer to a complete theory of human (or biological or robotic) functioning.

If "Model" has a meaning to some people (including Rupert) that makes it an unsuitable way to communicate the ideas, then it's not efficient to use the word, as it would convey the intended idea to only a subset of the readership, and that doesn't help advance the theory. But I don't know of a better word. Some, in the history of science, have claimed that fuzzy words aren't useful at all, and that only clean-edged, well-defined mathematics ensures proper communication. (In the 1970s, I had a friend who claimed the programming language APL was the only language in which scientific ideas should be discussed.) I don't buy the mathematics idea, for reasons I won't go into here, but people who espouse that position do have a point.

I thought of "mold" as a replacement for "model", but to me that has a static connotation, and the World Model as I conceive it is a highly dynamic process that produces either the desired effects in the real world or dynamic replicas in imagination of the real or any fantasy world. Unless someone comes up with a better word, I intend to continue using the word "model" and hope that readers will understand the context in which I use it.

Yes, we seem to be using the word in very different ways. From my point of view, from the AI usage (and what seems to me the standard dictionary meaning, of replica), I want to avoid accusations that PCT works in the same way as conventional theories, with behaviour generated from explicit models of the world. From that perspective it would be a misleading communication of the ideas. I think we can manage without it.

Rupert

[Martin Taylor 2015.08.02.14.46]

[From Rupert Young (2015.08.02 20.00)]

[Martin Taylor 2015.07.24.10.55]

Let's try a simple example, of opening doors. You are likely to have a control system for the goal of force to apply (the perception of opposing muscle tensions) that will result in the door opening.

That doesn't sound very consistent with standard HPCT as I understand it. In my understanding, you have a reference to perceive the door opening, and you have probably reorganized to have two main ways to do it, by getting the door to swing and by sliding it. Both indeed do require the application of force, but so far as I am aware, the level of force provided as a reference to the muscles will depend on the current state of the perception of the rate of door opening -- not opening fast enough, increase the reference value or opening too fast, reduce the force.

Did you write that before reading my next sentence, where I mention exactly that, and then forget to erase it?

No, because I understood the above and the following to be saying contradictory, not complementary, things. As a consequence of your comment, I understand you not to have intended the apparent contradiction. The above seemed to imply control of output, which was what I was objecting to.

You probably also need, at least, a higher system for controlling the perception of whether the door is actually opening (the perception of the rate of change of opening). The output from the higher level sets the reference for the lower level.

Yes, if you add "dynamically" before "sets the reference", but that seems inconsistent with what you said above.

As far as I can see we are saying pretty much the same thing.

I have thought that, apart from the choice of words, in particular the word "model", we have indeed been saying very much the same thing for a long time. It seems to me that the argument has only been about words, and your final paragraph of this message explains why. So I skip to there.

I thought of "mold" as a replacement for "model", but to me that has a static connotation, and the World Model as I conceive it is a highly dynamic process that produces either the desired effects in the real world or dynamic replicas in imagination of the real or any fantasy world. Unless someone comes up with a better word, I intend to continue using the word "model" and hope that readers will understand the context in which I use it.

Yes, we seem to be using the word in very different ways. From my point of view, from the AI usage (and what seems to me the standard dictionary meaning, of replica), I want to avoid accusations that PCT works in the same way as conventional theories, with behaviour generated from explicit models of the world. From that perspective it would be a misleading communication of the ideas. I think we can manage without it.

Now, that makes your reasoning clear to me -- or at least I have a new idea of where you are coming from.

I have been using "model" in the sense Bill Powers argued for, roughly as follows: *A model is a structure that produces in any particular test environment the same results as are observed when the thing modelled is tested in that environment.* In that sense, the PCT "models" used to fit experimental tracking data are models, though I would not think anyone imagines them to be replicas of the neural wetware. Likewise the reorganized hierarchy models the workings of the environment, though nobody would claim that there are little "forces", "balance beams", "torques", or "electrostatic repulsion" in the reorganized hierarchy, any more than (to use your example) there is "jealousy" in the environment, despite the fact that one can perceive it in oneself or another person.
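The Powers sense of "model" just stated can be illustrated with a toy tracking experiment. Everything here is invented for illustration: a simulated "participant" whose control loop has a hidden gain, and a candidate model fitted to one run and then tested in an environment it never encountered.

```python
import random

def track(gain, disturbance, dt=0.01):
    """A one-level control loop keeping a cursor at zero against a disturbance."""
    out, cursor = 0.0, []
    for d in disturbance:
        c = d + out                   # cursor position = disturbance + output
        out += gain * (0.0 - c) * dt  # output integrates the error
        cursor.append(c)
    return cursor

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

rng = random.Random(1)

def random_walk(n):
    """A slowly varying disturbance pattern."""
    d, x = [], 0.0
    for _ in range(n):
        x += rng.gauss(0.0, 0.1)
        d.append(x)
    return d

run_a, run_b = random_walk(1000), random_walk(1000)

observed = track(3.0, run_a)    # the "participant": its gain of 3.0 is hidden
candidates = [0.5 + 0.1 * k for k in range(56)]
fitted = min(candidates, key=lambda g: mse(track(g, run_a), observed))
```

The fitted structure is no replica of the "participant", yet it reproduces the participant's behaviour in a test environment (run_b) that neither has seen -- which is all the Powers definition asks of a model.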

You are concerned about "model" being used in a way that implies the control of output. The desired output would be based on the internal analysis of a "model" or "replica" of a chunk of the environment. In this tradition, in order to act, the appropriate muscle tensions must be computed by analyzing some internal replica of the thing to be acted upon, and that thing is a "model".

So you have a legitimate concern, a concern of a type that has been raised about several of the words used in a particular sense in PCT: that people who have learned a different tradition in which the same words are used for similar but not the same technical purposes may impute their own rather than the PCT meaning to the word (as you have been doing with "model" and expect the AI community to do). I accept that concern.

If, however, we stop using "model" in the Powers sense, which to me is a sense that seems to convey the intended impression at least to a naive reader, if not to an AI researcher, what word should we use instead? As I said in the paragraph above that I requoted, I haven't found a better word. Can you find one?

Martin

[Kent McClelland 2015.08.02.16.06]

Martin and Rupert,

Your discussion about meanings of the word ‘model’ reminded me of a little essay I wrote for students in a class on PCT. The essay concerns the meaning of ‘model’, as Bill used the word in Chapter 2 of B:CP. The example in the essay is a simple-minded one, but perhaps it’s relevant to your discussion.

Kent

Two Kinds of Model Airplanes.pdf (41 KB)


Very interesting, Kent. Useful, too. Thanks.

Fred Nickols

Managing Partner

Distance Consulting LLC

Be sure you measure what you want.

Be sure you want what you measure.


[From Rick Marken (2015.08.02.2050)]


On Sun, Aug 2, 2015 at 2:11 PM, Fred Nickols csgnet@lists.illinois.edu wrote:

FN: Very interesting, Kent. Useful, too. Thanks.

RM: I agree. Very nice job, Kent!

Best

Rick

[Kent McClelland 2015.08.02.16.06]

Martin and Rupert,

Your discussion about meanings of the word ‘model’ reminded me of a little essay I wrote for students in a class on PCT. The essay concerns the meaning of ‘model’, as Bill used the word in Chapter 2 of B:CP. The example in the essay is a simple-minded
one, but perhaps it’s relevant to your discussion.

Kent

On Aug 2, 2015, at 3:13 PM, Martin Taylor csgnet@lists.illinois.edu wrote:

[Martin Taylor 2015.08.02.14.46]

[From Rupert Young (2015.08.02 20.00)]

(Martin Taylor 2015.07.24.10.55]

Let’s try a simple example, of opening doors. You are likely to have a control system for the goal of force to apply (the perception of opposing muscle tensions) that will result in the door opening.

That doesn’t sound very consistent with standard HPCT as I understand it. In my understanding, you have a reference to perceive the door opening, and you have probably reorganized to have two main ways to do it, by getting the door to swing and by sliding it. Both indeed do require the application of force, but so far as I am aware, the level of force provided as a reference to the muscles will depend on the current state of the perception of the rate of door opening – not opening fast enough, increase the reference value or opening too fast, reduce the force.

Did you write that before reading my next sentence, where I mention exactly that, and then forget erase it?

No, because I understood the above and the following to be saying contradictory, not complementary, things. As a consequence of your comment, I understand you not to have intended the apparent contradiction. The above seemed to imply control of output, which was what I was objecting to.

You probably also need, at least, a higher system for controlling the perception of whether the door is actually opening (the perception of the rate of change of opening). The output from the higher level sets the reference for the lower level.

Yes, if you add “dynamically” before “sets the reference”, but that seems inconsistent with what you said above.

As far as I can see we are saying pretty much the same thing.

I have thought that apart from the choice of words, in particular the word “model”, we have indeed been saying very much the same thing for a long time. It seems to me that the argument has only been about words, and your final paragraph of this message explains why. So I skip to there.

I thought of “mold” as a replacement for “model”, but to me that has a static connotation, and the World Model as I conceive it is highly dynamic process that produces either the desired effects in the real world or dynamic replicas in imagination of the real or any fantasy world. Unless someone comes up with a better word, I intend to continue using the word “model” and hope that readers will understand the context in which I use it.

Yes, we seem to be using the word in very different ways. From my point of view, from the AI usage (and what seems to me the standard dictionary meaning, of replica), I want to avoid accusations of PCT working in the same way as conventional theories of behaviour as being generated from explicit models of the world. From that perspective it would be a misleading communication of the ideas. I think we can manage without it.

Now, that makes your reasoning clear to me – or at least I have a new idea of where you are coming from.
I have been using “model” in the sense Bill Powers argued for, roughly as follows: *A model is a structure that produces in any particular test environment the same results as are observed when the thing modelled is tested in that environment.* In that sense, the PCT “models” used to fit experimental tracking data are models, though I would not think anyone imagines them to be replicas of the neural wetware. Likewise the reorganized hierarchy models the workings of the environment, though nobody would claim that there are little “forces”, “balance beams”, “torques”, or “electrostatic repulsions” in the reorganized hierarchy, any more than (to use your example) there is “jealousy” in the environment, even though one can perceive it in oneself or another person.
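Powers’s sense of “model” can be illustrated with a toy tracking experiment (entirely my own construction, with made-up numbers): a simple integrating control loop stands in for the subject, and the “model” is another copy of that loop whose gain is fitted so that it reproduces the subject’s cursor trace in the same disturbance environment.

```python
import math

def run(gain, disturbance):
    """Integrating controller keeping a cursor at reference 0."""
    pos, out, trace = 0.0, 0.0, []
    for d in disturbance:
        out += gain * (0.0 - pos) * 0.1   # output integrates the error
        pos = out + d                     # cursor = output + disturbance
        trace.append(pos)
    return trace

dist = [math.sin(t / 10.0) for t in range(200)]
subject = run(4.0, dist)                  # stands in for observed data

# Fit the model's gain by minimising squared error against the "data".
best_sse, best_gain = min(
    (sum((a - b) ** 2 for a, b in zip(run(g / 10, dist), subject)), g / 10)
    for g in range(1, 100))
```

The fitted gain recovers the “subject’s” gain exactly here only because subject and model share a structure; with real tracking data the fit is approximate, which is the point: the model reproduces results in the test environment without being a replica of the wetware.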

You are concerned about “model” being used in a way that implies the control of output: the desired output would be based on the internal analysis of a “model” or “replica” of a chunk of the environment. In this tradition, in order to act, the appropriate muscle tensions must be computed by analyzing some internal replica of the thing to be acted upon, and that replica is the “model”.
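The contrast being drawn can be sketched in a few lines (a hypothetical linear environment and numbers of my own): the “model-based” tradition computes output by inverting an internal replica of the environment, so an unmodelled disturbance makes it miss, while a closed perceptual-control loop needs no replica and cancels the disturbance.

```python
def environment(out, disturbance):
    """The real feedback function (unknown to both controllers)."""
    return 2.0 * out + disturbance

GOAL, DISTURBANCE = 10.0, 3.0

# Model-based: invert an internal replica that assumes no disturbance.
planned_out = GOAL / 2.0                                   # solve 2 * out = GOAL
open_loop_result = environment(planned_out, DISTURBANCE)   # misses the goal

# Perceptual control: act on the perceived error, no replica needed.
out = 0.0
for _ in range(100):
    perception = environment(out, DISTURBANCE)
    out += 0.1 * (GOAL - perception)
closed_loop_result = environment(out, DISTURBANCE)         # reaches the goal
```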

So you have a legitimate concern, a concern of a type that has been raised about several of the words used in a particular sense in PCT: that people who have learned a different tradition in which the same words are used for similar but not the same technical purposes may impute their own rather than the PCT meaning to the word (as you have been doing with “model” and expect the AI community to do). I accept that concern.

If, however, we stop using “model” in the Powers sense, which to me is a sense that seems to convey the intended impression at least to a naive reader, if not to an AI researcher, what word should we use instead? As I said in the paragraph above that I requoted, I haven’t found a better word. Can you find one?

Martin


[From Rupert Young (2015.08.05 13.00)]


Yes, interesting. This concerns a theory being based upon either the internal workings of a system or the external observations of the system. PCT takes the former approach and is an “internal” model in the sense that it models (replicates) the internal workings of the systems under study (living systems) and not the externally based observations of those systems.

Though that is not what we have been discussing in this thread. We have been discussing whether the PCT-based internal workings of living systems can be regarded as a model of the external world. I think not.

Rupert

On 02/08/2015 22:17, "McClelland, Kent" (via csgnet Mailing List) wrote:

[Kent McClelland 2015.08.02.16.06]

Martin and Rupert,

Your discussion about meanings of the word ‘model’ reminded me of a little essay I wrote for students in a class on PCT. The essay concerns the meaning of ‘model’, as Bill used the word in Chapter 2 of B:CP. The example in the essay is a simple-minded one, but perhaps it’s relevant to your discussion.

Kent

MCCLEL@Grinnell.EDU

[From Rupert Young (2015.08.05 13.00)]

I am hesitant to disturb an apparent consensus, but I still see some elements slightly differently, I think.

[Martin Taylor 2015.08.02.14.46]

I have been using “model” in the sense Bill Powers argued for, roughly as follows: *A model is a structure that produces in any particular test environment the same results as are observed when the thing modelled is tested in that environment.*

That seems fine when we are talking about the simulations we build that are intended to model PCT, and I don’t have any great problem with it. Though it doesn’t seem to capture something that we are interested in: that the model should model the internal workings of the thing modelled, rather than just the results. I.e. it should be an “internal model” in the sense brought up by Kent in his recent post.

In that sense, the PCT “models” used to fit experimental tracking data are models, though I would not think anyone imagines them to be replicas of the neural wetware.

Well no, not an exact replica, but it is intended to be a replica of the internal processes (perceptual control) to an approximation.

Likewise the reorganized hierarchy models the workings of the environment,

Now this is where I differ, as I see you are now talking about something different: using “model” in a different sense. In the first sense, above, the model is intended to model the process of perceptual control.

If you say “the reorganized hierarchy models the workings of the environment”, and are using “model” in the same sense, then the implication is (as the reorganised hierarchy is based on perceptual control) that the workings of the environment are actually processes of perceptual control. I’m sure that’s not what you mean, but it is why I think the use of “model” in this case is not appropriate.

If, however, we stop using “model” in the Powers sense, which to me is a sense that seems to convey the intended impression at least to a naive reader, if not to an AI researcher, what word should we use instead? As I said in the paragraph above that I requoted, I haven’t found a better word. Can you find one?

I think it is fine if we are talking about the simulations that we build, where the models are intended to replicate the processes of living systems, but not when we say that the internal processes (or structure) of living systems model the external world.

Rupert

[From Bruce Abbott (960823.1030 EST)]

Bill Leach (960823.0015) --

Bruce Abbott (960822.1040 EST)

You quoted "know" which is very important since at the control loop
level the system does not know anything about the disturbance, what it
"knows" is what is happening to the perception and nothing else.

Yes.

The MCT model does attempt to develop such a model and, to the extent
that it succeeds, can maintain control for brief (and sometimes
longer) periods when its perceptual input is interrupted --
something of which the PCT model is incapable.

Bruce, I think that there is a serious problem with part of the assertion above. Every example that I have seen presented that "argues" for a model does succeed with the argument for the need for a model, but not one has convinced me that we EVER control in the absence of perceptual input. On the contrary, it seems pretty convincing to me that we only control for perceptions currently unavailable by controlling current perception based upon an explicit model (i.e.: if I drive to the store -- controlling current perception the whole time -- I can get some milk, achieving control of a perception based upon a model).

I'm thinking of "perceptual" in this instance as the normal sensory channels
through which the system is kept informed of the current state of the CEV.
When this is lost, control is maintained (if it is) via an internally
generated perception based on the "world model" function. As I understand
it, the MCT system does not work quite this way, as control is _always_
maintained via the internal world-model perception whether there is external
input or not; in that system the sensed state of the CEV is only used to
correct the model. (Hans, please correct me if I'm wrong.)
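The arrangement described above can be sketched as a toy simulation (my own construction with invented constants, not Hans's actual model): sensed input, when available, corrects an internal estimate of the disturbance; during a sensory blackout, control continues on the internally generated perception.

```python
def control(disturbance, blackout, steps=300, ref=10.0):
    """Control a CEV (real = out + disturbance) with a world-model fallback."""
    out, d_hat, errors = 0.0, 0.0, []
    for t in range(steps):
        real = out + disturbance          # actual state of the CEV
        if t in blackout:
            perception = out + d_hat      # internally generated perception
        else:
            perception = real             # normal sensory channel...
            d_hat = real - out            # ...which also corrects the model
        out += 0.2 * (ref - perception)   # act on whichever perception
        errors.append(abs(ref - real))
    return errors

# Sensing is lost from step 100 to 199, yet control keeps improving,
# because the constant disturbance was already captured in d_hat.
errs = control(5.0, blackout=set(range(100, 200)))
```

A loop with the same fallback but no sensing at all would drift as soon as the disturbance changed, which matches the point that the model is only as good as its last correction.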

Rick Marken (960822.1050) --

Bruce Abbott (960822.1040 EST)

How the system is set up to respond to disturbance IS its model of the
environment

Oy, vey. Another definition of "model". Look, can't we just agree
that the MCT and PCT models are different? One difference is the
existence of an equation (call it a "world model" or "Jim" or whatever)
in MCT that does _not_ exist in PCT. There is no analog of "Jim" in PCT.

It's not "another definition of 'model'"; it's the same one everyone else (e.g., Leach, Taylor) has been using; this is the "implicit model" you've been hearing so much about lately! (:-> But you're right, there is no analog of "Jim" in PCT. I said so in my post.

There is no model of the environment (in the sense of an equation that is
designed to approximate the feedback function) in the PCT. In this sense,
the PCT model knows nothing about the environment. The PCT model knows
_only_ the effects of its own actions (it controls a perception of these
effects) and it is able to adjust its parameters (via reorganization) so
that it is able to control the effects of its actions in the context of
whatever (unknown) environment it happens to be in.

True and not true. There is no explicit model of the environment in the PCT
model (true). "The PCT model knows _only_ the effects of its own actions .
. " (false). It doesn't know anything about its own actions. "and it is
able to adjust its parameters . . ." (false). A reorganizing system may be
able to adjust the parameters of the PCT system, but the system can't adjust
itself. You're slipping into HPCT, not PCT.

That animals "do the same thing under similar circumstances" is not the
kind of data that would motivate me to consider a new control architecture.

It seems to have been this sort of thinking about human behavior that led
Bill P. to construct HPCT; are you now asserting that the capabilities of
living control systems are not relevant when considering what a model of
such systems should be able to do? How odd.

The problem with MCT is that it does not explain _anything_ that is not
more simply and clearly explained by PCT.

That's a bald assertion with no supporting evidence. PCT (even HPCT) does a
lot of hand-waving (to use a currently popular term) about switching to
internally-generated perceptions (memory), but thus far there are no working
models that actually implement this suggestion, or at least none of which I
am aware. (Do you have one up your sleeve?) On the other hand, MCT has an
explicit mechanism and a working demonstration of the principle. So
currently it's your religious faith that HPCT _will_ be able to handle such
data, set against a working computer model of MCT. How about developing the
PCT equivalent so we can evaluate it?

I have nothing against MCT
except that it's being proposed as an alternative or superset or
supplement to PCT for nothing other than religious reasons. I feel like
I'm supposed to say "I believe in MCT; I really do believe. Halleluja".
I only say such things under threat of inquisitorial torture;-)

It's being proposed as one method for realizing HPCT's "memory switch."
You're not being asked to say "I believe in MCT," you're being asked to
examine its principles to see whether it might offer some guidance in
dealing with the sort of problem that the MCT model was designed to solve.
It's not a matter of belief or disbelief, it's a matter of scientific enquiry.

Hans listed a number of phenomena everyone is familiar with, for which his
MCT approach provides at least a beginning toward understanding. (No one is
claiming that it's THE solution.) You asked him to provide this list
(thinking, I suspect, that he wouldn't be able to do so) and then dismissed
it when it appeared as irrelevant. I thought the list was impressive; if
you think PCT can deal with these phenomena you should explain how, in
detail, point by point. In particular, I'd like to see you present a
computer model that implements a PCT system that accomplishes these things.

By the way, Rick, what are Bill's "wonderful models of this
[reorganization] process" of which you speak? What is their principle
of operation?

At the meeting Bill showed me his stick figure arm model that points
at a target (not the "Little Man"; just an arm). The arm starts
"flailing about" like a baby; it slowly tunes itself up so that it's
pointing smartly and accurately. I think Bill calls this the "Artificial
Cerebellum". It works by tuning the transfer functions of the various
control systems.
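The "flailing about, then tuning itself up" described above can be caricatured with Powers-style E. coli reorganization (a minimal sketch of my own, not the actual Artificial Cerebellum, which tunes whole transfer functions): a random step in a loop gain is kept while squared tracking error keeps falling and redirected when it rises.

```python
import random

def tracking_error(gain, steps=100):
    """Total squared error of a one-gain loop chasing reference 1.0."""
    pos, sse = 0.0, 0.0
    for _ in range(steps):
        pos += gain * (1.0 - pos) * 0.1
        sse += (1.0 - pos) ** 2
    return sse

random.seed(1)
gain, step = 0.5, 0.5                 # start "flailing" with a poor gain
err = tracking_error(gain)
for _ in range(200):
    candidate = gain + step
    new_err = tracking_error(candidate)
    if new_err < err:                 # error falling: keep this direction
        gain, err = candidate, new_err
    else:                             # error rising: "tumble" randomly
        step = random.uniform(-0.5, 0.5)
```

After a couple of hundred trials the gain has climbed to where the error is essentially zero; nothing in the loop knows anything about the environment beyond the consequences of its own parameter changes.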

Sounds wonderful -- how does it do this "tuning"? Oh, and that's one model.
I thought you said "models" -- what are the others?

Regards,

Bruce

[From Bruce Gregory (960823.1145 EDT)]

(Bruce Abbott 960823.1030 EST)

How the system is set up to respond to disturbance IS its model of the
environment.

It's not "another definition of 'model'"; it's the same one everyone else
(e.g., Leach, Taylor) has been using; this is the "implicit model" you've
been hearing so much about lately! (:->

To call this a model does not seem to add anything helpful to
understanding the process. The wall reacts to my
leaning on it by generating an equal and opposite force
(Newton's third law). I suppose that I could say that this
reaction is the wall's model of the environment, but why would
I want to? What does this terminology illuminate that otherwise
might be obscure?

Regards,

Bruce

<[Bill Leach (960823.1136)]

[From Bruce Gregory (960823.1145 EDT)]

To call this a model does not seem to add anything helpful to
understanding the process. The wall reacts to my ...

That is a good point, Bruce, as was Bill's similar assertion.

I will agree in part that such use of the term might be confusing; however:

1) We readily refer to a level in HPCT as "holding" the "world models".

2) I think that it is very important to recognize that even these
   models that most of us accept as having a "real existence" within
   the system (i.e.: milk is obtainable from a place that we call a
   store, etc.) could very well be of the nature that Martin has been
   describing.

I am quite willing to accept the idea that even "models" that we humans
raised in a "common" cultural environment "share in common" are most
likely unique in each implementation. It seems to me that while MCT work
on models may indeed be able to assist in understanding the functional
nature of such models, I am almost willing to bet that the nature of the
knowledge gained will be vastly different from what anyone in the MCT
field would expect.

One who keeps in mind that the model (simulation) of a system is NOT
really the system will be prepared to see what others might not see.


--
bill leach
b.leach@worldnet.att.net
ars KB7LX