twilight of the idols

[Martin Taylor 2011.11.03.23.03]

[From Bill Powers (2011.11.03.1910 MDT)]

  Rick Marken (2011.11.03.1210) --
    RM: Thanks. But I don’t see how this is relevant to Martin’s question. I
    _think_ Martin is suggesting that perceptual signals that are the
    output of a perceptual function (which has a vector input) could
    themselves be vectors: the perceptual signal would be a vector.
  BP: Yes, that's the implication: output vector = matrix times input
vector.

    RM: I was just saying that that may be true but I can’t think of how to use
    such a signal in a control model and besides there is no data that
    seems to require that kind of change in model architecture.

  BP: There's no mystery -- we just treat the matrix operation as the sum
of a lot of scalar operations, which is how matrix algebra is done
anyway. The matrix notation is a lot more compact than writing out all
the details, but you get the identical result either way. Did you look at
those two figures, 3.11 and 3.12, in B:CP?

    RM: Your multicontroller is a whole bunch of control systems each
    controlling a scalar perceptual variable. I don't believe this is the
    architectural alternative Martin was suggesting.
  BP: Sure it is. Look at the source code: all the operations that would
appear in a matrix and vector treatment are there, but just spelled out
in detail rather than relying on the systematic cycling of indices and so
on that makes the matrix treatment easier to keep track of without making
mistakes.

  Suppose we have three linear equations in three unknowns, x1, x2, and x3,
the values of the three expressions being y1, y2, and y3. We can write

  y1 := a11*x1 + a12*x2 + a13*x3
  y2 := a21*x1 + a22*x2 + a23*x3
  y3 := a31*x1 + a32*x2 + a33*x3

  There are all the operations that go into these three equations. If we
want to treat the equations separately, each one is simply a scalar
equation. But we can also write

  Y = A * X

  where **Y** and **X** are vectors and **A** is the matrix of
coefficients. Now we can do matrix algebra with this single equation, but
it means exactly the same thing as the three equations above. If you
expand the matrix equation you get the three scalar equations back.
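[That expand-and-compare step can be checked directly. A minimal sketch in plain Python, with arbitrary coefficient values chosen only for illustration:]

```python
# Scalar form: the three equations written out term by term.
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
x = [0.5, -1.0, 2.0]

y1 = a[0][0]*x[0] + a[0][1]*x[1] + a[0][2]*x[2]
y2 = a[1][0]*x[0] + a[1][1]*x[1] + a[1][2]*x[2]
y3 = a[2][0]*x[0] + a[2][1]*x[1] + a[2][2]*x[2]

# Matrix form: Y = A * X, the same operations cycled over indices.
y = [sum(a[i][j] * x[j] for j in range(3)) for i in range(3)]

print(y == [y1, y2, y3])  # prints True: identical result either way
```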

  In a somewhat different form, those could be three equations for three
control systems sharing a common environment, each sensing the same set
of environmental variables and acting on all of them. The y’s would be
perceptual signals. That collection of control systems could be written
in matrix notation in a similar way, with Y being a vector standing for
the three perceptual signals. X would be the vector of three reference
signals, and there would also be vectors of three disturbances and three
environmental variables. Three, that is, if we want a fully-determined
system.

  I happen to prefer the scalar way of writing (and programming) the
equations, because I get a better picture of what’s going on when I can
see all the details. The matrix notation hides everything interesting.
But sometimes, when dealing with fairly large and complicated systems, I
try to use matrix algebra simply because it’s been boiled down to some
simple rules which, if I follow them carefully, pretty much guarantee
that I won’t make mistakes. Often the choice is a toss-up -- when I get
confused about subscripts I’m as likely to make a mistake in implementing
the matrix algebra as I am in using the scalar version. But the matrix
algebra can be very useful, as in that program I just posted in which the
number of systems involved is adjustable and can be fairly large.

  In short, the scalar/vector choice is simply a choice between
representations, and has nothing to do with the underlying physical
architecture.
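[The same equivalence holds for the control equations themselves. A minimal sketch -- assumed gains, coupling weights, and references, not any particular posted program -- runs three integrating control loops once with the equations spelled out as scalar operations and once with the arithmetic cycled over indices, matrix-style:]

```python
def run_matrix(steps=200, k=0.1):
    """Matrix-style: the same arithmetic, cycled over indices."""
    n = 3
    ref = [1.0, -2.0, 0.5]          # reference signals
    out = [0.0] * n                 # integrating outputs
    # each perceptual signal is a weighted sum of all the outputs
    w = [[1.0, 0.2, 0.1],
         [0.1, 1.0, 0.2],
         [0.2, 0.1, 1.0]]
    for _ in range(steps):
        p = [sum(w[i][j] * out[j] for j in range(n)) for i in range(n)]
        e = [ref[i] - p[i] for i in range(n)]
        out = [out[i] + k * e[i] for i in range(n)]
    return p

def run_scalar(steps=200, k=0.1):
    """Scalar-style: every operation written out in detail."""
    r1, r2, r3 = 1.0, -2.0, 0.5
    o1 = o2 = o3 = 0.0
    for _ in range(steps):
        p1 = 1.0*o1 + 0.2*o2 + 0.1*o3
        p2 = 0.1*o1 + 1.0*o2 + 0.2*o3
        p3 = 0.2*o1 + 0.1*o2 + 1.0*o3
        o1 += k * (r1 - p1)
        o2 += k * (r2 - p2)
        o3 += k * (r3 - p3)
    return [p1, p2, p3]

print(run_matrix() == run_scalar())  # prints True: the model is unchanged
```

[Both versions drive the three perceptions to the three references; the notation differs, operation for operation the model does not.]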

  Ok, Taylor and Kennaway, I await your judgement of all this with
trepidation.

I have no issue with the mathematical side of this, nor with your
final comment that it “is simply a choice between representations”.
That’s all independent of the question that I was originally puzzled
by, and that I am beginning to solidify in my mind, as you may have
seen in my last message to Rick.

The possibility of a largely distributed representation of many or
most controlled variables ties in with quite a lot of other things
that at first glance may seem quite independent, about which I have
been thinking and corresponding with other people off-line. That’s
why I keep trying to rephrase my question so as to help make it more
clear both to me and to those who may read my meanderings. I don’t
plan to introduce those other topics here, at least not yet. I just
want to be satisfied as to whether there is any engineering or
physiological gotcha that prohibits a controlled variable being
represented in a distributed manner across the brain. The components
of the distribution may well be scalar, but the issue is whether it
is necessary for any or all of those scalar quantities to correspond
to what our conscious perception would consider a unitary property
of the environment.

Martin

[Martin Taylor 2011.11.03.23.19]

···

On 2011/11/3 10:59 PM, Richard Marken wrote:

[From Rick Marken (2011.11.03.2000)]

  .....

I thought Martin was saying that the perceptual signal could itself be
a vector, as in

input            f(x)            p
             ___________
x1 -------> |           | -------> p1
x2 -------> |           | -------> p2
x3 -------> |           | -------> ...
...         |           | -------> pn
xn -------> |___________|

p1,p2...pn would then be a vector input to the comparator. And it's at
this point that I have no idea how this would be turned into an error
to drive output.

But if this is not what Martin meant; all he meant is that f(x), the
perceptual function, could be a matrix operation with a single
perceptual output (as in my spreadsheet) then "never mind" ;-)

Your spreadsheet does not have a single perceptual output. It has a vector of three (six?) outputs at each level.

Martin

[From Rick Marken (2011.11.04.0650)]

Martin Taylor (2011.11.03.23.19) --

Your spreadsheet does not have a single perceptual output. It has a vector
of three (six?) outputs at each level.

I suppose you could call it a vector but it doesn't function as a vector:

1. The output of every perceptual function is a scalar perceptual signal.

2. The scalar perceptual signal is what enters each comparator.

3. Copies of the scalar perceptual signal are sent to higher level
systems. This set of signals could be considered a vector except:

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Oops, sent too soon:

[From Rick Marken (2011.11.04.0650)]

Martin Taylor (2011.11.03.23.19) --

MT: Your spreadsheet does not have a single perceptual output. It has a vector
of three (six?) outputs at each level.

RM: I suppose you could call it a vector but it doesn't function as a vector:

1. The output of every perceptual function is a scalar perceptual signal.

2. The scalar perceptual signal is what enters each comparator.

3. Copies of the scalar perceptual signal are sent to higher level
systems. This set of signals could be considered a vector except:
   a) all the components of the vector are always the same value
(because it's the same perceptual signal.)
   b) the components of the vector have different functions. One
component is an input to the comparator, the others are inputs to
different higher level perceptual functions
   c) the components of this "vector" are never operated on as a
vector, so it doesn't function as a vector.

But if sending multiple copies of the same signal to higher level
systems is your idea of a matrix representation of perception then I
guess we've already got it in PCT. But I think that's the wrong way to
conceptualize what's going on. In PCT it's scalar perceptual variables
that are controlled by individual control systems (because that's what
goes into the comparator) and copies of those controlled perceptions
are also sent to higher level systems as inputs to their perceptual
functions.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.04.10.12]

Oops, sent too soon:

[From Rick Marken (2011.11.04.0650)]

Martin Taylor (2011.11.03.23.19) --

MT: Your spreadsheet does not have a single perceptual output. It has a vector
of three (six?) outputs at each level.

RM: I suppose you could call it a vector but it doesn't function as a vector:

1. The output of every perceptual function is a scalar perceptual signal.

Right.

2. The scalar perceptual signal is what enters each comparator.

Right.

  3. Copies of the scalar perceptual signal are sent to higher level
systems. This set of signals could be considered a vector except:
    a) all the components of the vector are always the same value
(because it's the same perceptual signal.)

At each level you have several separate control systems, each controlling a scalar variable. I forget how many your spreadsheet actually had, but let's say six, because that's the number that springs to mind. So at each level, six perceptual signals are sent to each perceptual input at the next higher level. Those six are NOT the same perceptual signal. They are constructed from the same inputs, but each of these six perceptual input functions is different, so the perceptual signals are different.

    b) the components of the vector have different functions. One
component is an input to the comparator, the others are inputs to
different higher level perceptual functions

Huh? The same perceptual signal is sent to the comparator as is sent to the next higher level. If it isn't, then your spreadsheet doesn't represent the standard HPCT hierarchy. A scalar variable has only one component.

But if sending multiple copies of the same signal to higher level
systems is your idea of a matrix representation of perception then I
guess we've already got it in PCT.

I think you are responding to Bill here, so I will let him answer that if it needs an answer.

But I think that's the wrong way to
conceptualize what's going on. In PCT it's scalar perceptual variables
that are controlled by individual control systems (because that's what
goes into the comparator) and copies of those controlled perceptions
are also sent to higher level systems as inputs to their perceptual
functions.

Right. So what was your message about? Was it supposed to have some relevance to my question? Apart from mistaking the distribution of a scalar value for a vector, you describe only the basic HPCT hierarchy that you modelled and that formed the initial reason I asked myself the question that I have posed to the CSGnet community: "Is it necessary for control of what is consciously seen as a unitary property of the environment (such as an object's location) to be represented as a single scalar quantity in one brain location, or could it be represented in a distributed fashion as a vector whose elements are represented separately?" Put another way: "If the initial analysis of the visual field consists of the outputs of oriented detectors and local contrast detectors, mapped to different brain locations, is it _necessary_ that these be coalesced into scalar perceptual variables for the corresponding environmental states to be controlled?"

I don't like using the word "controlled" there, but my thought is that even though one controls only perceptions, nevertheless it is what happens in the environment that determines those perceptions.

Your spreadsheet was a good model for what I am asking about. If you consider the environmental inputs (disturbances) to have been created not arbitrarily, as in the actual spreadsheet, but as functions of less than six environmental variables -- let's say three --, the vector of top-level perceptions will have fewer than six degrees of freedom -- actually three. The question is whether the three perceptual patterns corresponding to the three environmental variables could be controlled without the creation of three scalar perceptions that would represent the three environmental variables?
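[Martin's thought experiment can be tried numerically. This is a sketch with made-up mixing coefficients, not Rick's actual spreadsheet: six "perceptual" signals are each built as a fixed linear function of only three environmental variables, and a plain Gaussian-elimination rank check confirms the six-vector carries only three degrees of freedom.]

```python
import random

# Six signals, each a fixed linear function of just THREE underlying
# environmental variables (coefficients are arbitrary placeholders).
random.seed(1)
mix = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(6)]

def sample():
    env = [random.uniform(-1, 1) for _ in range(3)]  # three env variables
    return [sum(m[k] * env[k] for k in range(3)) for m in mix]

rows = [sample() for _ in range(20)]  # 20 time samples of the 6-vector

def rank(mat, tol=1e-9):
    """Matrix rank by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    r = 0
    for col in range(len(m[0])):
        pivot = max(range(r, len(m)), key=lambda i: abs(m[i][col]))
        if abs(m[pivot][col]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank(rows))  # 3: six signals, but only three degrees of freedom
```

[Whether those three degrees of freedom must be coalesced into three scalar perceptual signals before control is possible is exactly the question left open here.]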

Is the restriction of control to scalar perceptual variables a conceptual convenience (leading to your "I don't see how it could be otherwise"), a theoretical necessity, or a fact of nature?

Martin

···

On 2011/11/4 9:50 AM, Richard Marken wrote:

[From Bill Powers (2011.11.04.1100 MDT)]

Rick Marken (2011.11.03.2000)

···

RM: I thought Martin was saying that the perceptual signal could
itself be a vector. But if this is not what Martin meant; all he meant is
that f(x), the perceptual function, could be a matrix operation with a
single perceptual output (as in my spreadsheet) then “never mind” ;-)

BP: Not with a single perceptual output: a multiple perceptual output.
Each output perceptual signal is a function of all the input perceptual
signals, so we could substitute a set of perceptual input functions for
the matrix, each input function receiving all the input signals, and
computing a single output signal. That’s what I was talking about in
chapter 3 of B:CP.

What Martin is wondering is whether an experienced perception might
actually be a collection of multiple perceptual signals, a vector rather
than a scalar signal. This is tempting because clearly a conscious
experience is made up of many different perceptual signals. The problem I
see is that there is really no difference between a set of perceptual
signals coming out of a matrix and a set of perceptual signals generated
by systems that have nothing to do with each other. Whatever difference
there may be between the two sets of signals, there has to be a
perceptual input function to compute and output a function of those
signals before that difference makes a difference. We could be conscious
of either set of signals, those coming from one matrix or from separate
sources. So that isn’t the answer.

Best,

Bill P.

[From Rick Marken (2011.11.04.1015)]

Martin Taylor (2011.11.04.10.12)--

MT: At each level you have several separate control systems, each controlling a
scalar variable. I forget how many your spreadsheet actually had, but let's
say six, because that's the number that springs to mind. So at each level,
six perceptual signals are sent to each perceptual input at the next higher
level. Those six are NOT the same perceptual signal. They are constructed
from the same inputs, but each of these six perceptual input functions is
different, so the perceptual signals are different.

RM: That's true. The perceptual signals from the perceptual functions
of each control system (there are six) are all different. Those six
signals are a true vector input to the perceptual functions at the
next higher level. But there each of the higher level perceptual
functions transforms that vector input into a scalar perceptual signal.
This is the perception that is controlled at the next higher level.
Perceptions (the outputs of the perceptual functions in the model) are
always scalars; the inputs to these functions are vectors, at levels 2
and 3. The perceptions in the spreadsheet are always scalars; the
inputs to the perceptual functions are usually vectors. The input to a
perceptual function is not what we call a perception; the output is.
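[Rick's vector-in, scalar-out distinction fits in a few lines. The weights and lower-level signal values here are arbitrary placeholders, not the spreadsheet's:]

```python
# A higher-level perceptual input function: it takes the vector of
# lower-level perceptual signals and emits ONE scalar signal.
def perceptual_function(weights, lower_signals):
    return sum(w * p for w, p in zip(weights, lower_signals))

lower = [2.0, 1.0, -3.0, 4.0, 0.0, 1.0]   # six level-1 signals (a vector)
p21 = perceptual_function([0.5] * 6, lower)

# p21 is the scalar perception the level-2 system controls; the vector
# of inputs is never itself the controlled variable.
print(p21)  # prints 2.5
```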

RM: b) the components of the vector have different functions. One
component is an input to the comparator, the others are inputs to
different higher level perceptual functions

MT: Huh? The same perceptual signal is sent to the comparator as is sent to the
next higher level.

RM: Yes, that's correct.

MT: If it isn't, then your spreadsheet doesn't represent the
standard HPCT hierarchy.

RM: Good, then I'm standard;-)

MT: Right. So what was your message about?

RM: It was about all that stuff I was talking about;-)

MT: Was it supposed to have some
relevance to my question?

RM: It was supposed to have relevance to what you said about my
spreadsheet hierarchy model.

MT: Apart from mistaking the
distribution of a scalar value for a vector

RM: Oy.

MT: , you describe only the basic HPCT hierarchy that
you modelled and that formed the initial reason I asked myself the question
that I have posed to the CSGnet community: "Is it necessary for control of
what is consciously seen as a unitary property of the environment (such as
an object's location) to be represented as a single scalar quantity in one
brain location, or could it be represented in a distributed fashion as a
vector whose elements are represented separately?" Put another way: "If the
initial analysis of the visual field consists of the outputs of oriented
detectors and local contrast detectors, mapped to different brain locations,
is it _necessary_ that these be coalesced into scalar perceptual variables
for the corresponding environmental states to be controlled?"

RM: Well then why keep talking about my spreadsheet model as an
example of what you are talking about? I already answered the question
you ask in the last sentence: the answer was basically "I don't know
the answer. But the way I do the modeling now accounts for the data I
-- and others -- collect so it's just not an interesting question to
me".

MT: Your spreadsheet was a good model for what I am asking about.

RM: I really can't see why.

MT: If you consider the environmental inputs (disturbances) to have been created
not arbitrarily, as in the actual spreadsheet, but as functions of less than six
environmental variables -- let's say three --, the vector of top-level
perceptions will have fewer than six degrees of freedom -- actually three.

RM: Try it and see what happens.

MT: The question is whether the three perceptual patterns corresponding to the
three environmental variables could be controlled without the creation of
three scalar perceptions that would represent the three environmental
variables?

RM: Try it! Then both of us would have a better idea of what you're
talking about. You know how to use a spreadsheet, don't you? You just
pucker up and punch equations into cells;-)

MT: Is the restriction of control to scalar perceptual variables a conceptual
convenience (leading to your "I don't see how it could be otherwise"), a
theoretical necessity, or a fact of nature?

RM: Until you show us a model that controls perceptual vectors or
whatever it is you have in mind then the PCT model is not a conceptual
convenience since there was no "less convenient" alternative from
which it was selected.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.11.04.1125 MDT)]

Rick Marken (2011.11.04.1015) --

> MT: Is the restriction of control to scalar perceptual variables a conceptual convenience (leading to your "I don't see how it could be otherwise"), a theoretical necessity, or a fact of nature?

RM: Until you show us a model that controls perceptual vectors or
whatever it is you have in mind then the PCT model is not a conceptual
convenience since there was no "less convenient" alternative from
which it was selected.

You guys are going around and around about a non-problem. All vectors are composed of scalars. The control equations are exactly the same whether you use matrix notation or expand it into scalar notation. The model is not changed in the slightest.

What Martin is puzzling about is not the PCT model or its representation, but the nature and properties of consciousness. The experience of position in 3D space includes multiple dimensions of representation, perhaps x,y,and z or r, theta and phi, or any other three straight or curved dimensions. That has nothing to do with consciousness; it's just however the neural networks came to be organized to produce signals for controlling position, and for consciousness to experience.

In consciousness we have lots of neural signals representing different aspects of the world, all together in one big experiential field. And this is where we run into a barrier. Martin speaks of a "unitary" experience of spatial position. What is that? Is it a neural signal in a higher level of perception that consciousness attends to, or is it something in consciousness itself, apart from neural signals, that generates this sense of unity? And how can consciousness have such detailed attributes that are not neural in nature? Is consciousness just another level of perception? Or, as I have conjectured semi-seriously, is consciousness outside of the three- or four-dimensional universe, observing reality by means of the brain? Like those avatars on the moon Pandora?

Obviously, we're not prepared to model consciousness. All we can do is collect information about its apparent attributes, not even knowing for sure whether we're talking about neural signals. Maybe it's just a property of a few million cells in the "end-brain," the limbic system. Or maybe that part of the brain is the multi-pin connector into which consciousness plugs to get information from the rest of the brain. Or, as some people seem to want to say, maybe there's no such thing as consciousness, which makes me wonder how they would know that, and who would know it if it were true.

Best,

Bill P.

[Martin Taylor 2011.11.11.04.15.05]

  [From Bill Powers (2011.11.04.1125 MDT)]




  Rick Marken (2011.11.04.1015) --

    > MT: Is the restriction of control to scalar perceptual variables a
conceptual convenience (leading to your “I don’t see how it could be
otherwise”), a theoretical necessity, or a fact of nature?

    RM: Until you show us a model that controls perceptual vectors or
whatever it is you have in mind then the PCT model is not a conceptual
convenience since there was no "less convenient" alternative from
which it was selected.

  You guys are going around and around about a non-problem. All
vectors are composed of scalars. The control equations are exactly
the same whether you use matrix notation or expand it into scalar
notation. The model is not changed in the slightest.

Quite so. I hope Rick will eventually come round to seeing this.
  What Martin is puzzling about is not the PCT model or its
representation, but the nature and properties of consciousness.

Not really. It's rather the other way round. We know what we
consciously perceive. Consciously we do see unitary objects. There
just is A glass on THE table. We don’t see (unless we look for them)
a mess of edges, orientations, light and dark patches of different
colours, and so forth. We can learn to see them consciously, but
that’s not what we normally do. What we normally do is think “I want
that glass to be over there”, and we move it. That is control of our
conscious perception of the location of the glass. I am asking
whether the operation of the control system necessarily has this
same property of unitariness.

Think of the other end of the process, the fairly low-level
components of the perception of the location of the glass. As I
said, we don’t usually become conscious of them individually, but at
an early stage of visual processing all we have is a rapidly varying
cortical map of oriented edges, patches brighter than their
surrounds or darker than their surrounds, separately located maps of
red-green and of blue-yellow contrasts, and so forth. Those signals
that somehow come together to allow us to consciously see a glass
and a table and the relation between them are scattered in lots of
different places in the brain. They form a very large vector, and
there’s nothing really to distinguish the elements that come
together to form the glass from those that come together to form the
wood grain on the table or the whisky in the glass.

At a much higher processing level, the "what" and the "where" of the
various consciously perceptible attributes of the environment (e.g.,
the glass, the table, the positions of objects, etc.) are processed
in different brain areas. Despite this, we consciously see a “glass
here” and a “table there”. The relationship between the two objects
and their two locations is trivially apparent. It’s a unitary
perception in consciousness, despite the spatial distribution and
complexity of the brain processes involved.

My question is whether the various controllable perceptions are
necessarily unitary _as they are when they come to consciousness_.
Is there experimental evidence or theoretical argument to suggest
that there MUST be a single scalar variable somewhere in the brain
that corresponds to the relative location of the glass on the table?

Martin
···

On 2011/11/4 1:55 PM, Bill Powers wrote:

[From Rick Marken (2011.11.04.1320)]

Martin Taylor (2011.11.11.04.15.05)--

Bill Powers (2011.11.04.1125 MDT)--

Rick Marken (2011.11.04.1015) --

MT: Is the restriction of control to scalar perceptual variables a
conceptual convenience (leading to your "I don't see how it could be
otherwise"), a theoretical necessity, or a fact of nature?

RM: Until you show us a model that controls perceptual vectors or
whatever it is you have in mind then the PCT model is not a conceptual
convenience since there was no "less convenient" alternative from
which it was selected.

BP: You guys are going around and around about a non-problem. All vectors are
composed of scalars. The control equations are exactly the same whether you
use matrix notation or expand it into scalar notation. The model is not
changed in the slightest.

This is like having a conversation in a house of mirrors. Who is
arguing about whether vectors are composed of scalars or not? I
thought this was about whether perceptual signals could be vectors in
a control model, where a vector would be the controlled variable. I'm
saying that I don't see how it can be; Martin seems to continue
wondering whether it can. I don't think anyone is saying that a set of
perceptual signals from different lower level systems can't be
represented as a vector: [p1,p2,p3...pn]. I said that I represent them
that way in my spreadsheet, where they are the set of _inputs_ to a
perceptual function. But each individual control system controls a
scalar perception. A system that controls a perception, p2.1 that is a
function of the vector input -- p2.1 = f([p1,p2,p3...pn]) -- is
controlling a scalar variable, p2.1. It is not controlling the vector
[p1,p2,p3...pn]. I have a feeling that Martin believes that the
higher level system that received [p1,p2,p3...pn] as input is
controlling [p1,p2,p3...pn].

MT: Quite so. I hope Rick will eventually come round to seeing this.

I find that kind of offensive Martin. Are you saying that I don't know
that vectors are composed of scalars? Why do you say stuff like that?
Do you really think I don't understand that? If I didn't understand it
then how do you imagine I was able to build models using matrix
algebra (and without using it)? Insulting me won't make your argument
(whatever it is) stronger. That's the Fox News tactic; if you haven't
got the facts, make fun of your opponent. Badly done, Martin. Very
badly done (with apologies to Jane Austen).

BP: What Martin is puzzling about is not the PCT model or its representation,
but the nature and properties of consciousness.

MT: Not really.

Nice try, Bill.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.11.04.1555 MDT)]

Martin Taylor 2011.11.11.04.15.05 --

What Martin is puzzling about is
not the PCT model or its representation, but the nature and properties of
consciousness.

Not really. It’s rather the other way round. We know what we consciously
perceive. Consciously we do see unitary objects. There just is A glass on
THE table. We don’t see (unless we look for them) a mess of edges,
orientations, light and dark patches of different colours, and so forth.
We can learn to see them consciously, but that’s not what we normally do.
What we normally do is think “I want that glass to be over
there”, and we move it. That is control of our conscious
perception of the location of the glass. I am asking whether the
operation of the control system necessarily has this same property of
unitariness.

BP: I’d say it’s three control systems that move the glass in three
dimensions. Consciousness isn’t an output function, it’s an input
function of some kind. It receives information, and as far as I can tell
that’s all that it does. We are aware of neural signals, but it’s the
neural control hierarchy that creates signals to be aware of,
representing different levels of organization, and it’s the neural
hierarchy that does the controlling. I don’t know how much influence
consciousness has on that.

MT: Think of the other end of
the process, the fairly low-level components of the perception of the
location of the glass. As I said, we don’t usually become conscious of
them individually, but at an early stage of visual processing all we have
is a rapidly varying cortical map of oriented edges, patches brighter
than their surrounds or darker than their surrounds, separately located
maps of red-green and of blue-yellow contrasts, and so forth. Those
signals that somehow come together to allow us to consciously see a glass
and a table and the relation between them are scattered in lots of
different places in the brain. They form a very large vector, and there’s
nothing really to distinguish the elements that come together to form the
glass from those that come together to form the wood grain on the table
or the whisky in the glass.

BP: I’d guess that the higher-level systems do that – distinguishing
objects from backgrounds and so on…

MT: At a much higher processing
level, the “what” and the “where” of the various
consciously perceptible attributes of the environment (e.g., the glass,
the table, the positions of objects, etc.) are processed in different
brain areas. Despite this, we consciously see a “glass here”
and a “table there”. The relationship between the two objects
and their two locations is trivially apparent. It’s a unitary perception
in consciousness, despite the spatial distribution and complexity of the
brain processes involved.

BP: This is the core of the phenomenon of consciousness, or I would say
awareness (I define consciousness as the combination of awareness and
something to be aware of, the content of consciousness). If we liken
consciousness or awareness to some kind of input function, it has a
property that no other input function in the hierarchy has: it can
encompass more than one kind of perception at the same time. The
“glass” over “here” and the “table” over
“there”. And the colors and textures and shapes in addition to
positions, all at once, up to at least “seven plus or minus
two.”

But what kind of system this is an input function for, I have no idea.

MT: My question is whether the various controllable perceptions are
necessarily unitary _as they are when they come to consciousness_. Is
there experimental evidence or theoretical argument to suggest that
there MUST be a single scalar variable somewhere in the brain that
corresponds to the relative location of the glass on the table?

BP: All we know about that is that at least two dimensions of positioning
are needed, and that requires controlling two independently variable
quantities. It doesn’t matter if we think of those dimensions in matrix
or scalar terms or what two dimensions we think of; two independent
control processes have to happen in any case. But we can be conscious of
both dimensions at once.

This quest tails off into confusion and mist for me – I think I have to
stick with problems that seem more amenable to solving. We’ll know if the
time comes that we simply can’t go any further without finding
answers.

Best,

Bill P.

···

[From Rick Marken (2011.11.04.1700)]

Bill Powers (2011.11.04.1555 MDT)--

MT: At a much higher processing level, the "what" and the "where" of the
various consciously perceptible attributes of the environment (e.g., the
glass, the table, the positions of objects, etc.) are processed in different
brain areas. Despite this, we consciously see a "glass here" and a "table
there". The relationship between the two objects and their two locations is
trivially apparent. It's a unitary perception in consciousness, despite the
spatial distribution and complexity of the brain processes involved.

BP: This is the core of the phenomenon of consciousness, or I would say
awareness (I define consciousness as the combination of awareness and
something to be aware of, the content of consciousness). If we liken
consciousness or awareness to some kind of input function, it has a property
that no other input function in the hierarchy has: it can encompass more
than one kind of perception at the same time. The "glass" over "here" and
the "table" over "there". And the colors and textures and shapes in addition
to positions, all at once, up to at least "seven plus or minus two."

Thanks, Bill, now I understand what Martin was asking about. And if
that wasn't what he was asking about, I still like how you were
talking about it;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.11.04.23.04]

[From Rick Marken (2011.11.04.1320)]

Martin Taylor (2011.11.11.04.15.05)--

Bill Powers (2011.11.04.1125 MDT)--

Rick Marken (2011.11.04.1015) --

MT: Is the restriction of control to scalar perceptual variables a
conceptual convenience (leading to your "I don't see how it could be
otherwise"), a theoretical necessity, or a fact of nature?

RM: Until you show us a model that controls perceptual vectors or
whatever it is you have in mind then the PCT model is not a conceptual
convenience since there was no "less convenient" alternative from
which it was selected.

BP: You guys are going around and around about a non-problem. All vectors are
composed of scalars. The control equations are exactly the same whether you
use matrix notation or expand it into scalar notation. The model is not
changed in the slightest.
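Bill's equivalence claim can be checked directly in a few lines; the weights and inputs below are made-up illustrative values:

```python
import numpy as np

# The matrix form Y = A * X and the expanded scalar form compute
# exactly the same result; only the notation differs.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0]])   # hypothetical weights a11..a33
x = np.array([2.0, -1.0, 0.5])    # input signals x1..x3

# Scalar form: y_i = a_i1*x1 + a_i2*x2 + a_i3*x3, one equation per row.
y_scalar = np.array([sum(A[i, j] * x[j] for j in range(3)) for i in range(3)])

# Matrix form: Y = A * X.
y_matrix = A @ x

assert np.allclose(y_scalar, y_matrix)   # identical either way
```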

This is like having a conversation in a house of mirrors. Who is
arguing about whether vectors are composed of scalars or not? I
thought this was about whether perceptual signals could be vectors in
a control model, where a vector would be the controlled variable. I'm
saying that I don't see how it can be; Martin seems to continue
wondering whether it can.

That's because I don't understand how "I don't see how it can be" is either a theoretical proof that it cannot be or an experimental demonstration that it isn't.

Both you and Bill have never addressed the question at issue. Bill talks about the problem of consciousness, which is not relevant to my questions _except_ insofar as we consciously perceive the world as constituted of unitary things, whereas we know (if you believe the neurophysiologists) that the different attributes of these "unitary things" are processed in different parts of the brain.

Initially I was led into this line of enquiry by Bill's expression of difficulty in understanding how place-mapping can be integrated into a PCT vision of how perceptions are generated, which is at heart a magnitude-mapped vision (magnitude of a neural firing frequency, for example). My first facile thought, which I sent to CSGnet, was that it shouldn't be a problem for PCT, because the perceptions that were controlled were at a much higher level of analysis, at which the controlled properties had been composed into scalar-valued properties, as we consciously see them. We can't control the millions of magnitudes being streamed from the various little detectors associated with different retinal positions, magnitudes that can change with bandwidths of tens of Hz. But we can control a few, more integrated, perceptions that are disturbed at bandwidths much lower.

Thinking about this a little more, I realized that one of the precepts of PCT is that controlled perceptions have nothing to do with conscious perceptions, except insofar as we probably are able to become conscious of any perception we control. To me, this was a liberating realization, because it meant that it was quite conceivable that what is controlled may not be represented in the brain in the unitary way we perceive it. It seemed to me quite possible that "consciousness", whatever it might be, might have the equivalent of perceptual input functions that were not part of the control hierarchy. These inputs to consciousness might themselves be responsible for the apparent scalar nature of the controlled perceptions, while what was actually being controlled might sometimes be a vector of elements that were not composed into a scalar within the control hierarchy.

Now this possibility introduced a complexity that seemed unnecessary, and I normally reject apparently unnecessary complication, but it did seem to account for a couple of phenomena that kind of hide under the table much of the time. One is doing something that feels "intuitively" right (and proves to have been right) without being able to explain why you do it. I can't speak for anyone else, but I find that happens to me fairly often. Sometimes I can retroactively produce what Bruce Gregory calls a "story" to explain why I did what I did, but I'm inclined to think that if what was controlled was a vector of perceptions that together constitute a perception of a situation, and there was no corresponding single scalar perception in the control hierarchy, the conscious effects would probably be similar.

The other phenomenon that seems to flow from considering the possibility that controlled perceptions can be distributed is situation-dependent learning, a.k.a. associative memory. A single scalar value is a poor tool for addressing a large memory pool, but a vector of scalar values is a good one. Thinking about Bill's good idea that the reference values for control units might be remembered perceptions addressed by the output of a higher-level perception, and recognizing that each reference input is produced by a multitude of higher-level outputs, I realized that these many higher-level outputs would form a vector that would have the large addressing capability needed to produce a reference value from a huge associative memory, and moreover, that the resulting reference value would be influenced by the situational context. Furthermore, since these same outputs from higher levels apply to the reference inputs of many lower-level control units, what is controlled at the lower level is inherently a vector, whether or not the associative memory idea is correct.
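Martin's addressing point can be made concrete with a hypothetical sketch (this is an illustration of the idea, not a claim about actual neural mechanisms; all names and sizes are invented):

```python
import numpy as np

# A vector of higher-level outputs can pick one remembered reference
# value out of a large pool by best match, where a single scalar could
# only index a one-dimensional range.
rng = np.random.default_rng(0)
situations = rng.normal(size=(1000, 8))   # stored "situation" vectors
references = rng.normal(size=1000)        # remembered reference values

def recall(cue):
    """Return the stored reference whose situation vector best matches cue."""
    best = np.argmin(np.linalg.norm(situations - cue, axis=1))
    return references[best]

# A slightly perturbed version of a stored situation still retrieves the
# reference value that was learned in that context.
cue = situations[42] + 0.01 * rng.normal(size=8)
recalled = recall(cue)
```

Because retrieval keys on the whole vector, the recalled reference is inherently context-sensitive, which is the property Martin is pointing at.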

For either of these explanations to work, it must be possible for a controlled perception to be a vector, despite that in consciousness no controlled perception is a vector. So I have been asking the CSGnet community over and again whether anyone knows of any theoretical reason or experimental evidence that a controlled perception must be a scalar quantity.

Does this make it seem any less mystical? Can the question be answered?

Martin

(Gavin Ritz 2011.11.05.17.52NZT)

[Martin Taylor 2011.11.11.04.23.04]

[From Rick Marken (2011.11.04.1320)]

Martin Taylor (2011.11.11.04.15.05)--

Bill Powers (2011.11.04.1125 MDT)--

Rick Marken (2011.11.04.1015) --

So I have been asking the CSGnet community over and again whether
anyone knows of any theoretical reason or experimental evidence that a
controlled perception must be a scalar quantity.

Does this make it seem any less mystical? Can the question be answered?

Martin

First of all, nice city you live in, one of the great cities of the
world. I spent a few days there a few months ago.

Next, the issue of quantity as scalar or vector is not such an issue; a
more important issue is whether it's an intensive variable or an
extensive variable.

This is an important
issue.

A controlled variable will more likely be an extensive variable. But
that's not certain, because it may be an intensive variable under some
conditions, though I have never given it any thought.

The internal references (inside) will probably be intensive variables
(not sure either), and the relationship between the two sets is like a
choice and determination problem of conceptual mathematics. So the
conceptual maps would be f of g equals the identity of set A (internal
references) and g of f equals the identity of set B (controlled
variables), where f and g are the controlling functions.

Regards

Gavin

···

[From Rick Marken (2011.11.05.0930)]

Martin Taylor (2011.11.11.04.23.04)--

RM This is like having a conversation in a house of mirrors. Who is
arguing about whether vectors are composed of scalars or not? I
thought this was about whether perceptual signals could be vectors in
a control model, where a vector would be the controlled variable. I'm
saying that I don't see how it can be; Martin seems to continue
wondering whether it can.

MT: That's because I don't understand how "I don't see how it can be" is either
a theoretical proof that it cannot be or an experimental demonstration that it isn't.

RM: It's not meant to be either a theoretical proof or experimental
demonstration that it [using a vector as a controlled variable] can't
be done. I'm just saying I don't see how it can be done. That doesn't
mean I think it can't be done or that doing it wouldn't be a wonderful
contribution to the development of PCT. All I'm saying is that I don't
know how you can model a control system with a vector as the
controlled variable. I would love to see how it's done. I'm not going
to try developing it myself because a) I'm not that smart b) I've got
20 other things I'm currently working on and c) what I am doing with
the PCT modeling doesn't seem to require such a change in the model's
architecture; it's doing fine as is.

MT: Both you and Bill have never addressed the question at issue. Bill talks
about the problem of consciousness, which is not relevant to my questions
_except_ insofar as we consciously perceive the world as constituted of
unitary things, whereas we know (if you believe the neurophysiologists) that
the different attributes of these "unitary things" are processed in
different parts of the brain.

RM: I think neurophysiological findings should _constrain_, not guide,
functional models, like PCT, which is being developed within the
constraints of what is known of the relevant neurophysiology.
Neurophysiological findings are themselves influenced by assumptions
about how the nervous system works. For example, the finding that some
neurons (afferents) carry signals into and some out of (efferents) the
CNS led to the still persistent idea that the nervous system is an
input-output system. PCT suggests that all efferents other than those
that enter muscles and glands are reference, not output, neurons.
This aspect of the PCT architecture is consistent with the
neurophysiological findings, it's just not subject to the conventional
interpretation of those findings. I think the same applies to the
"different paths" findings you mention. The PCT model is consistent
with (constrained by) the finding that different perceptual attributes
(variables) are processed by different parts of the brain; it just
doesn't subject itself to the conventional interpretation of those
findings if those interpretations don't help the functional model
account for the behavioral data.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.11.05.1053 MST)]

Martin Taylor 2011.11.11.04.23.04

···

MT: Both you and Bill have never addressed the question at issue.
Bill talks about the problem of consciousness, which is not relevant to
my questions except insofar as we consciously perceive the world as
constituted of unitary things, whereas we know (if you believe the
neurophysiologists) that the different attributes of these “unitary
things” are processed in different parts of the brain.
BP: I said something about this in my last post. But I think there’s
a problem with attaching “unitary” to consciousness. The very
point I made was that consciousness, as an input function to some
unknown system, can do something that no PCT input function can do:
examine a field of experience in which there are many different
perceptions. They do not look like one single – unitary – thing, but
like many separate things. There is only one unitary field of experience
encompassing all these things, but there are multiple things in it, in
different places or states. This is not like perceiving a blue color or a
relationship like “above”, which are indeed unitary and can
vary only in the degree to which the single experience
exists.

My basic conjecture about awareness is that we are aware only of
perceptual signals. Those signals are provided by the perceptual input
functions in the hierarchy. Awareness can indeed receive some number of
these signals at the same time, not just from one level but from any of
them, bottom to top. The intensity of pain from a stubbed toe can steal
attention away from pondering the nature of
God.

MT: Thinking about this a little more, I realized that one of the
precepts of PCT is that controlled perceptions have nothing to do with
conscious perceptions, except insofar as we probably are able to become
conscious of any perception we control. To me, this was a liberating
realization, because it meant that it was quite conceivable that what is
controlled may not be represented in the brain in the unitary way we
perceive it.
BP: I draw exactly opposite conclusion from the same observations.
In order for a perception to be controlled, it must exist as a single
signal entering a comparator along with a single reference signal. And
what we perceive consciously is very specifically NOT unitary, but
multiple – that being the chief characteristic that distinguishes
consciousness from hierarchical perceptual input functions. We can be
conscious of both controlled and uncontrolled scalar perceptual signals.
Those signals continue to exist, and those under control continue to be
controlled, whether we are conscious of them or not (that assertion
requires some experimental
investigation).

MT: It seemed to me quite possible that “consciousness”,
whatever it might be, might have the equivalent of perceptual input
functions that were not part of the control hierarchy. These inputs to
consciousness might themselves be responsible for the apparent scalar
nature of the controlled perceptions, while what was actually being
controlled might sometimes be a vector of elements that were not composed
into a scalar within the control hierarchy.
BP: Again, exactly the opposite of what I conclude. Does that
special set of input functions somehow sense reality directly rather than
through sensory neural signals? The inputs to consciousness remain
separate from each other, which is the only way we could experience a
multiplicity of different perceptions at the same time. The scalar nature
of controlled perceptions is the “ground truth” of perception;
only a single signal can be controlled relative to a single reference
magnitude. I don’t think of consciousness as “having”
perceptual input functions, but as “being” a set of perceptual
input functions. And what we are conscious of is specifically the set of
scalar perceptual signals in the hierarchy, which are the inputs to the
input functions we call
consciousness.

MT:Now this possibility introduced a complexity that seemed
unnecessary, and I normally reject apparently unnecessary complication,
but it did seem to account for a couple of phenomena that kind of hide
under the table much of the time. One is doing something that feels
“intuitively” right (and proves to have been right) without
being able to explain why you do it.

BP: I think that phenomenon usually can be traced to a higher-order
control system which is setting a reference signal that determines what
will be considered “right.” If awareness is operating from the
viewpoint of the lower system exclusively, there will be no way to know
why a certain perception leads to a sense of error, while a different
state of that perception seems just right. The reference signal from the
higher system is not sensed by the lower system – that is, it doesn’t
become a perceptual signal.
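Bill's description of a higher-order system setting the lower system's reference can be sketched as a minimal two-level loop (all gains and the higher-level perceptual function are assumed values for illustration):

```python
# Two-level hierarchy: the higher system's output becomes the lower
# system's reference signal, but the lower system never perceives that
# reference as a signal of its own.
def run(high_ref=4.0, steps=500):
    r_low = 0.0   # higher system's output = lower system's reference
    p_low = 0.0   # lower-level perceptual signal
    for _ in range(steps):
        p_high = 2.0 * p_low                 # higher perception, a function of the lower one
        r_low += 0.05 * (high_ref - p_high)  # higher loop: integrating output
        p_low += 0.3 * (r_low - p_low)       # lower loop: controls only p_low
    return p_low, p_high

p_low, p_high = run()
# the lower perception settles where the higher system's error is zero,
# though the lower system has no perception of why that state is "right"
```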

As to such intuitions feeling and proving right, if a reference signal is
the source, then it is most likely that one will find it being matched by
a perceptual signal – the error goes away. Also, we tend to forget all
those many cases in which something that feels intuitively right turns
out to be factually wrong. But even so, if they’re right half of the
time, that’s an excellent batting average, isn’t
it?

MT:I can’t speak for anyone else, but I find that happens to me
fairly often.

BP: Fairly often? Would that mean that as many as half of your intuitions
prove to have been right? If that many, isn’t the logical conclusion that
they’re related randomly to
truth?

MT: Sometimes I can retroactively produce what Bruce Gregory
calls a “story” to explain why I did what I did, but I’m
inclined to think that if what was controlled was a vector of perceptions
that together constitute a perception of a situation, and there was no
corresponding single scalar perception in the control hierarchy, the
conscious effects would probably be similar.
BP: What makes the difference between a vector of perceptions that
all contribute to a measure of some aspect of a situation, and a vector
made of unrelated perceptual signals? If you can characterise that
difference, you have defined a perceptual input function. If not, then
the characteristic is being imagined.

MT: The other phenomenon that seems to flow from considering the
possibility that controlled perceptions can be distributed is
situation-dependent learning, a.k.a associative memory.

BP: I’ll skip that one.

MT: Furthermore, since these same outputs from higher levels apply to
the reference inputs of many lower-level control units, what is
controlled at the lower level is inherently a vector, whether or not the
associative memory idea is correct.

BP: You’re saying that any random collection of signals is a vector –
which is true, of course, but trivial. In order for such a collection to
be a meaningful vector, there must be some regular relationship holding
among the signals. To detect whether that relationship is present
requires a perceptual input function: just a list of the signals is
insufficient. Without the perceptual input function, even if there is
such a regular relationship it will never have any regular effects. There
will be nothing to control. There might be a regular relationship between
any two perceptions – the phase of the moon and the sunspot distribution
– but unless there is something that detects and reports the state of
that relationship, nobody will ever know about
it.

MT: For either of these explanations to work, it must be possible for
a controlled perception to be a vector, despite that in consciousness no
controlled perception is a vector. So I have been asking the CSGnet
community over and again whether anyone knows of any theoretical reason
or experimental evidence that a controlled perception must be a scalar
quantity.

Does this make it seem any less mystical? Can the question be
answered?

BP: All controlled perceptions are vectors, of course, because that’s only a
way of representing a collection of signals and it changes nothing about
the way the system works. All controlled vectors, by the same token, are
collections of controlled scalar signals. You can’t control a vector
without simultaneously controlling the scalars of which it is composed,
or some specific function of the scalars. Behind the vector and scalar
ways of seeing the system, the very same control system is doing the very
same things. It makes no sense to say a set of signals is
“really” a vector or “really” a set of scalar
variables. It’s whichever one you want it to be.
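Bill's closing point can be shown in one line of simulation: a "vector" control loop is literally a collection of scalar loops. The gain and references below are assumed:

```python
import numpy as np

# One vector line of code is three scalar control loops; the notation
# changes nothing about what the system does.
ref = np.array([1.0, -2.0, 0.5])   # three scalar references, or one reference vector
p = np.zeros(3)                    # three scalar perceptual signals, or one vector
for _ in range(100):
    p += 0.3 * (ref - p)           # three scalar loops written in vector notation

# controlling the "vector" just is controlling each scalar component
```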

Best,

Bill P.

[Martin Taylor 2011.11.05.13.24]

[From Rick Marken (2011.11.05.0930)]

Martin Taylor (2011.11.11.04.23.04)--
MT: Both you and Bill have never addressed the question at issue. Bill talks
about the problem of consciousness, which is not relevant to my questions
_except_ insofar as we consciously perceive the world as constituted of
unitary things, whereas we know (if you believe the neurophysiologists) that
the different attributes of these "unitary things" are processed in
different parts of the brain.

RM: I think neurophysiological findings should _constrain_, not guide,
functional models, like PCT, which is being developed within the
constraints of what is known of the relevant neurophysiology.

Quite so. The neurophysiologists often find that neurons and neural structures can do things that experimental psychologists had said ought to happen, but had been told that neurons couldn't do.

Neurophysiological findings are themselves influenced by assumptions
about how the nervous system works. For example, the finding that some
neurons (afferents) carry signals into and some out of (efferents) the
CNS led to the still persistent idea that the nervous system is an
input-output system. PCT suggests that all efferents other than those
that enter muscles and glands are reference, not output, neurons.
This aspect of the PCT architecture is consistent with the
neurophysiological findings, it's just not subject to the conventional
interpretation of those findings. I think the same applies to the
"different paths" findings you mention. The PCT model is consistent
with (constrained by) the finding that different perceptual attributes
(variables) are processed by different parts of the brain; it just
doesn't subject itself to the conventional interpretation of those
findings if those interpretations don't help the functional model
account for the behavioral data.

You lost me in the second half of that paragraph. You have said (in earlier messages) that you don't see how a perception that is a vector could be controlled, but here you say that the PCT model is consistent with place-coded attribute analysis. That is precisely the problem that gave Bill pause and that led me to ask my questions. You resolve the issue by a simple assertion that the PCT model is consistent with this, whereas Bill asked himself whether it was. I guess he will be happy to know that his question was not necessary.

I agree that any realistic PCT model MUST be consistent with these findings. I don't see how you make the leap from that almost tautology to saying that the current model IS consistent, while asserting (previously) that the PCT model says that each controlled perception that corresponds to some state of the environment is represented neurally by the magnitude of some scalar quantity. I'm not saying that the scalar-magnitude-only model for perceptual signals is inconsistent with the neurophysiological data. I'm just saying that it's a bit of an intellectual leap of faith to assert that it is consistent.

Martin

[From Rick Marken (2011.11.05.1300)]

Martin Taylor (2011.11.05.13.24)--

MT: I agree that any realistic PCT model MUST be consistent with these findings.
I don't see how you make the leap from that almost tautology to saying that
the current model IS consistent, while asserting (previously) that the PCT
model says that each controlled perception that corresponds to some state of
the environment is represented neurally by the magnitude of some scalar
quantity. I'm not saying that the scalar-magnitude-only model for perceptual
signals is inconsistent with the neurophysiological data. I'm just saying
that it's a bit of an intellectual leap of faith to assert that it is
consistent.

I don't know Martin. It seems consistent to me.

I have no idea why I get into these discussions with you, especially
when I don't really understand what they are about. I'm outta here. I
agree with everything Bill said in his recent post [Bill Powers
(2011.11.05.1053 MST)]. In future conversations with you I'll let Bill
speak for me since you just use me as a foil anyway to make it seem
like Bill agrees with you and that I'm the odd man out. If it's just
you and Bill perhaps things Bill says like "I draw exactly opposite
conclusion from the same observations." and "Again, exactly the
opposite of what I conclude" will have a better chance of sinking in.

Bye

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.11.05]

[From Rick Marken (2011.11.05.1300)]

Martin Taylor (2011.11.05.13.24)--
MT: I agree that any realistic PCT model MUST be consistent with these findings.
I don't see how you make the leap from that almost tautology to saying that
the current model IS consistent, while asserting (previously) that the PCT
model says that each controlled perception that corresponds to some state of
the environment is represented neurally by the magnitude of some scalar
quantity. I'm not saying that the scalar-magnitude-only model for perceptual
signals is inconsistent with the neurophysiological data. I'm just saying
that it's a bit of an intellectual leap of faith to assert that it is
consistent.

I don't know Martin. It seems consistent to me.

Well, maybe I misunderstood you. Sorry if I did. It's a side-issue anyway. The question isn't whether each perceptual signal corresponding to an environmental attribute could be a single scalar, as we conventionally assume, but whether it is _necessarily_ so. The neurophysiological evidence of distributed processing seems to me to suggest that it might not be, but I guess that it's no more than a suggestion. Probably I should have said that it was a leap of faith to suggest that the scalar-magnitude-only model is the only consistent model.

I have no idea why I get into these discussions with you, especially
when I don't really understand what they are about.

You could ask about the aspects you don't understand. I'm obviously not the clearest writer, especially when I'm fumbling about trying to understand a problem that is only gradually becoming clearer to me because of my different attempts to make myself understood.

  I'm outta here. I
agree with everything Bill said in his recent post [Bill Powers
(2011.11.05.1053 MST)]. In future conversations with you I'll let Bill
speak for me since you just use me as a foil anyway to make it seem
like Bill agrees with you and that I'm the odd man out. If it's just
you and Bill perhaps things Bill says like "I draw exactly opposite
conclusion from the same observations." and "Again, exactly the
opposite of what I conclude" will have a better chance of sinking in.

If Bill and I draw opposite conclusions from the same data, there are three possibilities: (1) one or both of us is using faulty logic, (2) the data are inadequate to define a conclusion, or (3) we are not actually talking about the same thing. In the present case, I think (3) is true, though (1) and (2) remain possibilities.

I'm in the process of drafting a response to Bill. It may be done this evening, but I will say now that I don't contradict anything he says except for denying that what he says is the _only_ possible truth in several instances. In other words, I am generally saying that I agree that what he says could be true, but that I need proof or demonstration that it is.

Martin

[From Bill Powers (2011.11.05.1520 MDT)]

Martin Taylor 2011.11.05.13.24 --

MT: I'm not saying that the scalar-magnitude-only model for perceptual signals is inconsistent with the neurophysiological data. I'm just saying that it's a bit of an intellectual leap of faith to assert that it is consistent.

All that's needed in order to be consistent with neurophysiological data is to show that a neural signal can vary only in the dimension of magnitude because of the all-or-none nature of a neural impulse. Given that (almost) all impulses in any given axon have the same amplitude and shape, the only way for them to vary is in the rate at which they occur. The rate can vary rapidly or slowly, in regular or irregular patterns, but there is no other way for them to vary. Hence, neural signals are scalar -- one-dimensional.

If you like, you can then look for groups of signal pathways that carry signals at the same time and from the same general source to the same general destination, and call them vectors. Since you define vectors that way, they are vectors. That does not give them any new properties or any new relationships to external variables. Each signal is still the output of one neuron, and it terminates at synapses on one or more other neurons. The parallel pathways may carry similar information, as in the connection between a set of spinal motor neurons and a common muscle at the destination. Or they may carry different information from independent sources.

If the information carried by some axons in a vector is related to information carried in others, for example in the signals representing color from cones in the fovea, that information remains implicit until the related signals reach a perceptual input function. At that point, in the case of color signals, they can be computationally combined so the maximum perceptual signal (at the next level) is generated for a narrow range of magnitudes in the three color channels.

Color signals from each cone in the fovea obviously do not expand into a million or so signal pathways for different scalar color signals. Instead, the initial color signals are carried all the way to the back of the brain as a vector, where they reach synapses in the primary visual cortex V1 (don't be impressed, I'm using the Web). Color vision arises there, where the separate scalar intensity signals enter perceptual input functions that extract the color information as new signals. The details get murky there, but we clearly have the signals in one vector being combined to produce new scalar signals, which is what perceptual input functions in PCT are supposed to do. I don't see any evidence that color is seen before the signals synapse in the visual cortex. The mere existence of the three color signals does not provide the experience of color. They have to be neurally combined into signals of the next order.
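One hypothetical form for the input function Bill describes, a next-level scalar signal that is maximal for a narrow range of magnitudes in the three color channels, is Gaussian tuning. The tuned values and width here are invented for illustration:

```python
import math

# A scalar "color" signal that peaks when the three cone-channel
# magnitudes fall within a narrow tuned range.
def color_input_function(l, m, s, tuned=(0.8, 0.5, 0.1), width=0.05):
    """Scalar output, maximal when (l, m, s) matches the tuned combination."""
    d2 = sum((c - t) ** 2 for c, t in zip((l, m, s), tuned))
    return math.exp(-d2 / (2 * width ** 2))

on_peak = color_input_function(0.8, 0.5, 0.1)    # matches the tuned range
off_peak = color_input_function(0.1, 0.1, 0.9)   # a different cone combination
```

The three scalar intensity signals carry the color information implicitly; only the combining function produces a new scalar signal of the next order, which is the point Bill is making.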

It still seems unlikely to me that awareness can get anything out of a vector of perceptual signals that is not in the individual signals. It's the brain that has to do the extracting of different levels of perceptual signals. Of course if awareness is actually a kind of neural input function at a level we haven't identified yet, it could perform the construction of a new level of perception in the usual way. No spooky conjectures necessary. But that would make awareness even less understandable, with its ability to expand and contract its range and to select information from any level in the nervous system.

We seem to be lacking a very large chunk of vital information here.

Best,

Bill P.