twilight of the idols

To wholegroup from Bill P.

("Wholegroup" is a new mailbox nickname I'm starting in Eudora. It will have all the above addresses in it, though I'll delete anyone's if they ask. Saves using "reply to all" every time, which is easy to forget.)

HY: Few bothered to examine the firing of these neurons under other conditions, because the tuning curve experiments appear to be the most scientifically rigorous. Needless to say, the question of what the animal is 'seeing' is entirely neglected. It's really quite outrageous: all the perceptual neuroscientists talk incessantly about 'coding' and 'decoding' and so on, when really all they are doing is replacing any scientific explanation with the results of a simple tuning curve experiment, letting the cells they carefully selected answer the question of vision. See, this cell could be a detector for 'x' when the animal is anesthetized, so that explains how animals can see 'x'. Lord, they still think they are studying receptors...

BP: Thanks for that, Henry. I am encouraged. It never occurred to me that people weren't challenging these results all over the place. Isn't that what we're supposed to do when we say we're doing science? I've been taking the "tuning curve" idea as a well-established fact, but as soon as I read your post another possibility popped up.

The basic perceptual model in PCT is one in which a perceptual input function receives multiple input signals from lower-order perceptual functions and applies some computation to the set to generate the next level of perceptual signal. Generally many different input functions may receive copies of the same lower signal, but of course use different computations so the same signal can contribute differently to several (maybe many) perceptions at the higher level.
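This arrangement can be sketched in a few lines (a toy illustration with made-up weights; nothing here is a claim about the actual neural computation):

```python
# Two higher-level perceptual input functions receive copies of the same
# lower-order signals but apply different weightings, so the same inputs
# contribute differently to two higher-level perceptions.
lower_signals = [0.8, 0.2, 0.5]   # outputs of lower-order perceptual functions

def input_function(weights, signals):
    # one simple candidate computation: a weighted sum
    return sum(w * s for w, s in zip(weights, signals))

p1 = input_function([1.0, -1.0, 0.0], lower_signals)  # one higher-level perception
p2 = input_function([0.5, 0.5, 1.0], lower_signals)   # another, from the same inputs
print(p1, p2)
```

The point of the sketch is only that one set of lower signals can feed many different higher functions at once.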

If you just stuck multiple electrodes into a visual nucleus where there were many perceptual input functions and then presented some simple scene to the eyes, what (according to the PCT model) would the electrodes show you? You would see many neurons responding to different degrees. If you changed the scene, the degree of the responses would change: some would increase, some would decrease.

This would tell you that whatever it is about the scene that you're changing, some perception of it at one level contributes to a number of different perceptions at the next level up. If the scene were a set of parallel lines, we might think of perceptions of spacing. Shading. Slant. Perspective. Deleted section of a paragraph. Cross section. Mineral content in a geological map. Political subdivision. All these perceptions could change when a set of parallel lines is presented in different orientations. The change in multiple neuronal responses doesn't tell you what variable is being represented; it tells you only that the lines contribute something to detecting it at some level in the perceptual hierarchy. Maybe somewhere, buried in the neurons preceding (signal-wise) the ones being measured, we might find one or two signals, say X and Y coordinate signals, which are providing directional information to all the cells we're looking at, with different weightings. Wouldn't that account for the H&W observations?
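Bill's conjecture at the end of this paragraph can be put in toy numbers (all weights invented for illustration): if each recorded cell merely applies its own weighting to two shared direction signals, sweeping the line orientation yields a smooth tuning curve whose peak differs from cell to cell, with no dedicated line detectors anywhere.

```python
import math

def cell_response(theta, preferred):
    # the cell just weights two shared "coordinate" signals; the weights
    # happen to equal cos/sin of its apparent preferred angle
    wx, wy = math.cos(preferred), math.sin(preferred)
    return max(0.0, wx * math.cos(theta) + wy * math.sin(theta))

thetas = [math.radians(d) for d in range(0, 180, 10)]  # orientations swept
peaks = []
for preferred in (0.0, math.radians(60), math.radians(120)):
    curve = [cell_response(t, preferred) for t in thetas]
    peaks.append(round(math.degrees(thetas[curve.index(max(curve))])))
print(peaks)  # each "cell" peaks at its own orientation, H&W-style
```

Each simulated cell shows an orientation-tuned response even though all three are built from the same two underlying X/Y signals.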

HY: So the answers to your questions are all 'don't knows', though if you ask a visual neuroscientist he will try to overwhelm you with details.

BP earlier: what does the magnitude of the response from one of these cells represent, and more to the point, to what system would those signals be an input? And what would the destination system do with those signals? Is there some principle of mapping or conformal transformation or something else esoteric that we need to know about to understand how this thing works?

HY: It's actually pretty easy to falsify the conclusions of all the tuning curve experiments, but I will refrain from that experiment, not only because I'm occupied now with setting up new things, but also because there are a few labs already doing it. And I don't mean to attack visual neuroscience, as most other fields in neuroscience are even worse. At least H&W collected a lot of replicable data, and Wiesel's deprivation experiments were excellent and started the productive field of plasticity in sensory cortical areas. So they did the simple experiments and deserved the Nobel. Their followers are the problem.

The Purves textbook you have should have decent, though fairly mainstream, chapters on this topic, as 3 of the authors--Purves, Katz, and Fitzpatrick--have studied vision. Katz used to work with Wiesel. Purves, an outsider who is interested in real vision (as opposed to tuning curves), actually agrees with me on H&W, but he is in a minority. You've watched his lecture on visual illusions.

BP: Well, I am much comforted, though we haven't heard the rebuttals yet and still could be quite wrong. How fast is that light at the end of the tunnel approaching? Faster than we're moving toward it?

Best,

Bill

···

At 12:05 PM 10/29/2011 -0400, Henry Yin wrote:

[From Rick Marken (2011.10.30.0945)]

To wholegroup from Bill P.

I'm getting kind of confused here. I received this via CSGNet with no
"wholegroup" cc. So I presume my response is going just to CSGNet.
What's the "wholegroup" thing supposed to accomplish? Why don't we
just get everyone who has an interest in PCT (especially those who
understand it and/or have a willingness to learn it) on CSGNet and
ignore (or delete, I think I can do that as a list manager) the noise.

("Wholegroup" is a new mailbox nickname I'm starting in Eudora. It will have
all the above addresses in it, though I'll delete anyone's if they ask.
Saves using "reply to all" every time, which is easy to forget.)

It doesn't show up when I "reply all".

HY: Few bothered to examine the firing of these neurons under other
conditions, because the tuning curve experiments appear to be the most
scientifically rigorous. Needless to say, the question of what the animal is
'seeing' is entirely neglected...

BP: Thanks for that, Henry. ..

The basic perceptual model in PCT is one in which a perceptual input
function receives multiple input signals from lower-order perceptual
functions and applies some computation to the set to generate the next level
of perceptual signal...

This would tell you that whatever it is about the scene that you're
changing, some perception of it at one level contributes to a number of
different perceptions at the next level up....

This is why I use (in my PCT seminar) the Hubel & Wiesel single cell
work as evidence for the PCT model. It's not that I think there are
actually horizontal and vertical line detectors and such. It's that
the results show that single cells respond differentially to different
patterns of input on the sensory surface. The "receptive field" is
essentially equivalent to the PCT perceptual function. The only
difference is that H&W (and apparently their students) see the
receptive field as a yes/no detector of particular patterns of input.
In PCT we see the receptive field (perceptual function) as an analog
of a perceptual variable. It looks like Henry and Bruce Abbott are
designing a study to test the PCT analog model of the receptive field.
This is really great news and I look forward to hearing how it goes.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.10.30.1420 MDT)]

Rick Marken (2011.10.30.0945) --

RM: I'm getting kind of confused here. I received this via CSGNet with no
"wholegroup" cc. So I presume my response is going just to CSGNet.

BP: Yes, but I will get it because of that. The cc list is all people who are not on CSGnet. What I didn't realize is that reply-to-all would work only if the people are in the cc field rather than the TO field. Fooey.

RM: What's the "wholegroup" thing supposed to accomplish? Why don't we
just get everyone who has an interest in PCT (especially those who
understand it and/or have a willingness to learn it) on CSGNet and
ignore (or delete, I think I can do that as a list manager) the noise.

BP: I think "ignore" is the best option. That way we don't have to make the decision as to who is worthy and who isn't.

RM: This is why I use (in my PCT seminar) the Hubel & Wiesel single cell
work as evidence for the PCT model. It's not that I think there are
actually horizontal and vertical line detectors and such. It's that
the results show that single cells respond differentially to different
patterns of input on the sensory surface.

BP: The problem with the H&W data is that you have different cells responding as if they are tuned to give maximum response for lines in a particular direction, while other cells nearby respond most to other directions. That makes it look like a physically different input function for each direction instead of one function with a signal indicating the direction by its frequency. Actually I'm not yet convinced that this is not real. We also have similar problems in the visual mapping areas where apparently position in space is represented by the location of the responding cell in a visual mapping area. As an object moves, as I understand it, the location of its representation moves around in the neural map.

I don't think we're through with this problem yet. We still have to explore how that visual-mapping method would work.

RM: The "receptive field" is essentially equivalent to the PCT perceptual function. The only difference is that H&W (and apparently their students) see the receptive field as a yes/no detector of particular patterns of input.

BP: I don't get that impression. The magnitude of the cell response falls off on either side of the maximum point as the direction changes. It's not just on and off. At best we have a different perceptual input function for each direction discriminated, but their ranges overlap.

But maybe all these cells are just part of the inner workings of the same perceptual input function. I'm still waiting for the insight that will clear all this up.

Bill

[From Rick Marken (2011.10.30.1445)]

Bill Powers (2011.10.30.1420 MDT)--

RM: The "receptive field" is essentially equivalent to the PCT perceptual
function. The only difference is that H&W (and apparently their students)
see the receptive field as a yes/no detector of particular patterns of
input.

BP: I don't get that impression. The magnitude of the cell response falls
off on either side of the maximum point as the direction changes. It's not
just on and off. At best we have a different perceptual input function for
each direction discriminated, but their ranges overlap.

I know. I meant that the receptive field concept -- of a neural
network that maps an array of sensory inputs into the firing rate of a
single cell - is similar to the concept of a PCT perceptual function.
The H&W idea of what the data means is a yes/no detector in the sense
that they see the single cell firing rate as an indication of the
degree to which the input fits the field's "template". The idea is
like the Selfridge demons; the outputs of each receptive field (single
cell firing) say how much the input matches the field's template; the
biggest output is taken to be what is "really" out there. But I agree
with all your caveats about why the firing rates might vary with
differing pattern inputs. The receptive fields may be perceptual
functions looking for who knows what and lines of differing
orientation, say, happen to produce differing magnitudes of output.
The cell might be perceiving sin(x+y+z) and we find it increases and
then decreases with x but it actually is an analog measure of
sin(x+y+z).
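Rick's sin(x+y+z) example is easy to make concrete (arbitrary toy values): a cell computing an analog function of several variables looks like a "tuned detector" for whichever single variable the experimenter happens to sweep.

```python
import math

def cell(x, y, z):
    # the cell's actual perceptual function: an analog measure of sin(x+y+z)
    return math.sin(x + y + z)

y, z = 0.3, 0.2                       # hidden variables the experiment holds fixed
xs = [i * 0.1 for i in range(31)]     # sweep only x
responses = [cell(x, y, z) for x in xs]

# the response rises and then falls with x, so x looks like the "tuned"
# variable, but the peak is really just where x + y + z comes closest to pi/2
best_x = xs[responses.index(max(responses))]
print(best_x)
```

The experimenter, varying only x, would report an x-tuned cell; the cell is measuring something else entirely.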

Receptive fields helped me understand perceptual functions in PCT. But
maybe that's just a result of my successful way of misunderstanding
things;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.10.31.0945 MDT)]

Hi, Rick –

I’ve put my “whole group” list into the CC field (all but
CSGnet which is in the TO field) – see if that works for “reply to
all”. I’ve included your post here.

To everybody: I know that everyone on the cc list probably has too many
email list subscriptions, but really, the way to do this most easily is
to subscribe to CSGnet. Unless someone has a better idea.

Rick Marken (2011.10.30.1445) --

RM: I meant that the receptive field concept – of a neural network that
maps an array of sensory inputs into the firing rate of a single cell –
is similar to the concept of a PCT perceptual function. The H&W idea of
what the data means is a yes/no detector in the sense that they see the
single cell firing rate as an indication of the degree to which the
input fits the field’s “template”. The idea is like the Selfridge
demons; the outputs of each receptive field (single cell firing) say
how much the input matches the field’s template; the biggest output is
taken to be what is “really” out there. But I agree with all your
caveats about why the firing rates might vary with differing pattern
inputs. The receptive fields may be perceptual functions looking for
who knows what, and lines of differing orientation, say, happen to
produce differing magnitudes of output. The cell might be perceiving
sin(x+y+z) and we find it increases and then decreases with x, but it
actually is an analog measure of sin(x+y+z).

BP: Yes, that’s how I was thinking of it, too. However, let’s keep
looking into the idea that the H&W data is correct, because entirely
aside from the problem of receptive fields, we know that neural maps
exist in the brain, and that some measures (like position in a visual
field) that we treat as continuous analog variables (as in X-Y
coordinates) may really be represented by which cells in the map are
firing (though how that information gets to other subsystems remains
unexplained). That doesn’t affect the functional model, as far as I know,
but it may provide easy ways to carry out complicated functions that
would be hard to implement with the analog model. Of course maybe we’re
just talking about two views of the same elephant.

Best,

Bill

[Martin Taylor 2011.10.30.12.59]

This would tell you that whatever it is about the scene that you're
changing, some perception of it at one level contributes to a number of
different perceptions at the next level up....

This is why I use (in my PCT seminar) the Hubel & Wiesel single cell
work as evidence for the PCT model. It's not that I think there are
actually horizontal and vertical line detectors and such. It's that
the results show that single cells respond differentially to different
patterns of input on the sensory surface. The "receptive field" is
essentially equivalent to the PCT perceptual function. The only
difference is that H&W (and apparently their students) see the
receptive field as a yes/no detector of particular patterns of input.
In PCT we see the receptive field (perceptual function) as an analog
of a perceptual variable. It looks like Henry and Bruce Abbott are
designing a study to test the PCT analog model of the receptive field.
This is really great news and I look forward to hearing how it goes.

Summary: I wouldn't worry too much about tuning curves and the like when considering neurophysiological implications for PCT, but instead, I would just take the results as interesting if you are interested in them. I can't see any _extra_ problem to PCT from a realization that different orientations are signalled by different neural paths as compared to the long-existing realization that brightnesses in different retinal points are signalled by different paths, or that the retinal point corresponding to any specific environmental direction changes radically and rapidly. Those signals aren't the perceptions you control. What you control has been reintegrated somewhere much higher in the perceiving brain.

------random observations------

A long time ago, probably in the middle or late 1960s, I came across a neural network learning study by (I think) Christian von der Maltzberg or some similar name, in Kybernetik. I haven't found it recently, but someone else may be able to if those clues are not due to a memory mixup with some other study.

Be that as it may, the study exposed a learning neural network to natural scenes moving over the artificial retina, rather than to the laboratory grids, lines and patches. The finding that fascinated me, and why it stuck in my memory, was that the network developed exactly the kind of detectors the neurophysiologists were finding -- on-centre/off-surround and its inverse, oriented edge detectors at all orientations (oriented line detectors are just pairs of those, and I don't remember whether the study found oriented line detectors), and similar stuff (including line-end detectors, I seem to remember but won't guarantee).

In my graduate school days (late 1950s), we had argued, for purely perceptual reasons that I now forget, that the early stages of visual analysis probably included oriented detectors, and we treated the Hubel and Wiesel findings just as corroboration. I remember us being surprised that their findings were taken to be important and novel.

In respect of PCT, remember that although all behaviour is the control of perception, not all perception is controlled by behaviour. In particular, one would expect the early stages of visual and auditory perception (at least) to be reorganized to take advantage of correlations that exist in the visual and auditory environment, whether that organization is developed over evolutionary time or by individual experience. Controllable perceptions tend to be of coherent structures in the environment, rather than of the outputs of individual low-level perceptual elements such as receptors or edge-detectors.

It is also not a bad idea to remember that "what" and "where" seem to be processed in different brain areas, and that experimental biases in the visual environment for newborn kittens are followed by complementary biases in neural sensitivities.

Martin

[From Rick Marken (2011.10.31.1720)]

Martin Taylor (2011.10.30.12.59)

BP (I think)This would tell you that whatever it is about the scene that you're
changing, some perception of it at one level contributes to a number of
different perceptions at the next level up....

RM: This is why I use (in my PCT seminar) the Hubel & Wiesel single cell
work as evidence for the PCT model. It's not that I think there are
actually horizontal and vertical line detectors and such. It's that
the results show that single cells respond differentially to different
patterns of input on the sensory surface. The "receptive field" is
essentially equivalent to the PCT perceptual function.

MT: Summary: I wouldn't worry too much about tuning curves and the like
when considering neurophysiological implications for PCT, but instead, I
would just take the results as interesting if you are interested in them.

RM: Gee, I didn't know that that's what I said. I meant to say that I
would take these results as providing a nice physiological model of
the perceptual functions in PCT.

Best

Rick

···


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.01.00.23]

MT: Summary: I wouldn't worry too much about tuning curves and the like
when considering neurophysiological implications for PCT, but instead, I
would just take the results as interesting if you are interested in them.

RM: Gee, I didn't know that that's what I said. I mean to say that I
would take these results as providing a nice physiological model of
the perceptual functions in PCT.

Yes, I was not commenting on anything you said, with which I largely agreed. I just thought it worth saying that these findings don't really change anything problematic about PCT, since it has always been accepted that the raw sensory information is sent through spatially distinct neural channels that change over time. Bill was making it seem like a new problem.

Martin

[Martin Taylor 2011.10.30.12.59]

MT: Summary: I wouldn’t worry too much about tuning curves and the like
when considering neurophysiological implications for PCT, but instead,
I would just take the results as interesting if you are interested in
them. I can’t see any extra problem to PCT from a realization that
different orientations are signalled by different neural paths as
compared to the long-existing realization that brightnesses in
different retinal points are signalled by different paths, or that the
retinal point corresponding to any specific environmental direction
changes radically and rapidly. Those signals aren’t the perceptions you
control. What you control has been reintegrated somewhere much higher
in the perceiving brain.

BP: It’s possible that the position of a stimulus in a map simply
preserves information for a higher level to handle in a way more like
the PCT model. In other words, we’re seeing a relatively low-level
perceptual signal even though it’s located more centrally in the brain.
So I agree. That’s a possibility.

Bill

[Martin Taylor 2011.11.01.10.39]

[Martin Taylor 2011.10.30.12.59]

MT: Summary: I wouldn’t worry too much about tuning curves and the like
when considering neurophysiological implications for PCT, but instead,
I would just take the results as interesting if you are interested in
them. I can’t see any extra problem to PCT from a realization that
different orientations are signalled by different neural paths as
compared to the long-existing realization that brightnesses in
different retinal points are signalled by different paths, or that the
retinal point corresponding to any specific environmental direction
changes radically and rapidly. Those signals aren’t the perceptions you
control. What you control has been reintegrated somewhere much higher
in the perceiving brain.

BP: It’s possible that the position of a stimulus in a map simply
preserves information for a higher level to handle in a way more like
the PCT model. In other words, we’re seeing a relatively low-level
perceptual signal even though it’s located more centrally in the brain.
So I agree. That’s a possibility.

This thread has reopened for me a question that has been in the back
of my mind for a very long time: What is there about PCT that requires
any specific controlled perception to be a scalar value?

This may seem an odd question, since consciously one does perceive
many controlled perceptions to be single-valued. An object is “here”
and you want it “there” and the error is so many centimeters. But I’m
not thinking of that. I’m looking at the structure of the hierarchy.

Consider a single-valued (scalar) perceptual variable being controlled
by a conventional ECU (Elementary Control Unit). Call that unit ECU_0.
The output of ECU_0 is distributed to the reference inputs of several
ECUs at the next lower level. But those units do not “know” that their
references are coming from any specific place, or that there is any
relationship among them. Indeed, each of them could be receiving
reference inputs from several different ECUs at the same level as
ECU_0. The pattern of those reference inputs, not the output of ECU_0
alone, is what determines the reference values for the ECUs below
ECU_0. Conceptually, this is the same as controlling a vector-valued
perception with a vector-valued reference.

But now at the level below ECU_0, the ECUs are controlling scalar
variables as before. Nothing has changed. But the argument can apply
equally to any one of them. None provide unique reference values to
the control units below them. Together they are conceptually
equivalent to controllers of vector-valued perceptions.

The question is "In standard HPCT, every control unit controls one
scalar-valued perception; is this restriction simply a convenience
in thinking about the problem, or is there some underlying
theoretical or physiological reason why all control must be of
scalar-valued perceptions?"

An associated question is: "Is the conscious perception that we
control single-valued perceptions a property of consciousness or of
the control system itself?"

Yet another associated question is: "Theoretically it would seem
that control of perceptions above category level should be scalar,
if not binary-valued; does this apply at the analogue levels before
any categorization has been done?"

You might call these musings, but I don't have answers to them.
Furthermore, if the answer is that there is no necessary restriction
of control to scalar-valued properties, the problem of control using
property values scattered across the brain might go away.

Martin
···

On 2011/11/1 10:02 AM, Bill Powers wrote:

[From Rick Marken (2011.11.01.1415)]

Martin Taylor (2011.11.01.10.39)--

MT: This thread has reopened for me a question that has been in the back of my
mind for a very long time: What is there about PCT that requires any
specific controlled perception to be a scalar value?

This may seem an odd question, since consciously one does perceive many
controlled perceptions to be single-valued.

RM: I don't think that has anything to do with the fact that we use
scalar variables (rather than vectors, I presume) as the signals
(perceptual and error) in a control loop. It's because control systems
control scalar variables. I don't see how this could be implemented
any other way, at least not easily.
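A minimal sketch of what Rick means (toy gains and values, not a model of any particular system): every signal in the loop (perception, error, output) is a single number.

```python
reference = 10.0
disturbance = 3.0            # constant environmental push
gain, slowing = 5.0, 0.1

output = 0.0
for _ in range(200):
    perception = output + disturbance   # environment: simple addition
    error = reference - perception      # scalar error
    output += slowing * gain * error    # integrating output function

print(round(perception, 2))  # the scalar perception settles at the reference
```

Despite the disturbance, the loop drives the single-valued perception to the reference; nothing vector-valued is needed at this level.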

MT: Consider a single-valued (scalar) perceptual variable being controlled by a
conventional ECU (Elementary Control Unit). Call that unit ECU_0. The output
of ECU_0 is distributed to the reference inputs of several ECUs at the next
lower level. But those units do not "know" that their references are coming
form any specific place, or that there is any relationship among them.
Indeed, each of them could be receiving reference inputs from several
different ECUs at the same level as ECU_0.

RM: Then they would be sending reference signals to themselves. And I
think that would mess things up. You are right that lower level
systems don't know (or care) from whence they get their reference
input. But in the HPCT architecture they always get their reference
input (which can be a sum of the outputs of several control systems)
from higher level systems.

MT: The pattern of those reference
inputs, not the output of ECU_0 alone. is what determines the reference
values for the ECUs below ECU_0. Conceptually, this is the same as
controlling a vector-valued perception with a vector-valued reference.

RM: I think a diagram might help. Even better would be a diagram + a
working simulation. I could cobble together the simulation pretty
quickly if you just tell me how this vector-based system is supposed
to work.

MT: But now at the level below ECU_0, the ECUs are controlling scalar variables
as before. Nothing has changed. But the argument can apply equally to any
one of them. None provide unique reference values to the control units below
them. Together they are conceptually equivalent to controllers of
vector-valued perceptions.

RM: That sounds interesting but I don't fully understand. Again, a
diagram and simulation would help me a lot.

MT: The question is "In standard HPCT, every control unit controls one
scalar-valued perception; is this restriction simply a convenience in
thinking about the problem, or is there some underlying theoretical or
physiological reason why all control must be of scalar-valued perceptions?"

RM: For me it's the only way I can imagine it being done. But if you
can produce a working vector based simulation of a control system (or
hierarchy of control systems) that would be great.

MT: An associated question is: "Is the conscious perception that we control
single-valued perceptions a property of consciousness or of the control
system itself?"

RM: Right now I'd say it's a property of the control system; the
outputs of perceptual functions are assumed to be scalar neural
currents.

MT: Yet another associated question is: "Theoretically it would seem that
control of perceptions above category level should be scalar, if not
binary-valued; does this apply at the analogue levels before any
categorization has been done?"

RM: Seems like an empirical question to me.

MT: You might call these musings, but I don't have answers to them. Furthermore,
if the answer is that there is no necessary restriction of control to
scalar-valued properties, the problem of control using property values
scattered across the brain might go away.

RM: I didn't know there was such a problem. I guess I've been living
in a fool's paradise.

Best

Rick

···


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Hello, all –

[Martin Taylor
2011.11.01.10.39]

MT: This thread has reopened for
me a question that has been in the back of my mind for a very long time:
What is there about PCT that requires any specific controlled perception
to be a scalar value?

This may seem an odd question, since consciously one does perceive many
controlled perceptions to be single-valued. An object is “here”
and you want it “there” and the error is so many centimeters.
But I’m not thinking of that. I’m looking at the structure of the
hierarchy.

Consider a single-valued (scalar) perceptual variable being controlled by
a conventional ECU (Elementary Control Unit). Call that unit ECU_0. The
output of ECU_0 is distributed to the reference inputs of several ECUs at
the next lower level. But those units do not “know” that their
references are coming from any specific place, or that there is any
relationship among them. Indeed, each of them could be receiving
reference inputs from several different ECUs at the same level as ECU_0.
The pattern of those reference inputs, not the output of ECU_0 alone, is
what determines the reference values for the ECUs below ECU_0.
Conceptually, this is the same as controlling a vector-valued perception
with a vector-valued reference.

BP: It would be if the lower system’s reference signal were something
more than the sum or average of all the different reference signals
reaching the same comparator. In the present model, there is no reference
“vector” except in the trivial sense that you can make a list
of the reference signals. They don’t interact with each other and there
is nothing that constrains their values except the channel capacity of
each signal. In my models, lower systems do receive reference inputs from
many different higher-order systems. But they are simply added up – they
don’t have any effects that could be traced back to any one higher-order
system.
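
BP's point here, that a lower system's reference is just the sum of the reference signals reaching its comparator, can be sketched in a few lines. This is a minimal illustration with made-up numbers, not code from any PCT demo:

```python
# A lower-level comparator simply sums the reference contributions
# arriving from several higher-order systems. The values are made up.

higher_outputs = [2.0, -0.5, 1.5]   # outputs of three higher-order ECUs

# The reference is the plain sum: nothing records which higher system
# contributed what, so no "vector" structure survives at the comparator.
reference = sum(higher_outputs)

perception = 2.2                     # current perceptual signal
error = reference - perception       # what drives the lower system's output

print(reference)            # 3.0
print(round(error, 3))      # 0.8
```

Any effect traceable to one particular higher-order system is lost in the addition, which is the sense in which the reference "vector" is only a list.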

In the Todorov paper, and Steve Scott’s, I notice a lot of matrix
algebra, which certainly reminds one of signals as vectors. But the
matrix algebra is just a computational convenience; in the real system,
all of the operations that are summarized in the matrix notation must
actually be carried out in full detail by the system in question. There
is no magical built-in matrix algebra function in the nervous system,
like the ones in MatLab.

In fact, I use matrix algebra in the “ArmControlReorg” program,
Demo 8-1 in LCS3. The reason is not that there are vector variables in
the model, but simply to make sure I kept track properly of all the
control equations in all 14 dimensions of the model. All the
multiplications, divisions, additions, and subtractions that would have
to be done in the 14 separate control systems are still done, but they’re
done in an organized way by shared functions and according to the rules
of matrix algebra that helped me stop making mistakes.

My main reason for not using the vector concept is simply that one neural
signal can carry only one dimension of information: how much. You might
say that variations in the how-muchness constitute other dimensions, but
the mere existence of such variations isn’t enough; they have to be
recognized first and represented as an explicit signal somewhere else.
Only actual signals can have actual effects. The rate of rise or fall of
signal amplitude, for example, would have to be detected, with the output
of the detector being once again a single signal that can indicate only
the magnitude of the rate of rise or fall.

The same goes for relationships between signals. One signal may vary
twice as fast as another, but that fact has no consequences until the
signals are input to a computing function that generates a signal
representing the state of the relationship. Nothing implicit matters;
only the explicit can have real effects.

I hope any mathematicians reading this are not holding their sides and
roaring with laughter. As you know, I’m not comfortable in the upper
realms of ungrounded abstraction.

MT: But now at the level below
ECU_0, the ECUs are controlling scalar variables as before. Nothing has
changed. But the argument can apply equally to any one of them. None
provide unique reference values to the control units below them. Together
they are conceptually equivalent to controllers of vector-valued
perceptions.

BP: Implicitly, perhaps. Explicitly, no. If you prefer to handle them
using vector and matrix notation, that’s fine, and as I said I’ve done
that here and there. But nothing in the model is changed by doing that.
All that’s affected is the way you think about it.

You do make me uneasy, though. It may be that my “scalar”
stance is the very thing that is preventing me from grasping how
higher-order perceptual input functions work. So be it. Somebody else
will get the glory for working that out. I can’t do it.

Best,

Bill

···

At 03:51 PM 11/1/2011 -0400, Martin Taylor wrote:

[Martin Taylor 2011.11.01.17.35]

[From Rick Marken (2011.11.01.1415)]

Martin Taylor (2011.11.01.10.39)--
MT: This thread has reopened for me a question that has been in the back of my
mind for a very long time: What is there about PCT that requires any
specific controlled perception to be a scalar value?

This may seem an odd question, since consciously one does perceive many
controlled perceptions to be single-valued.

RM: I don't think that has anything to do with the fact that we use
scalar variables (rather than vectors, I presume) as the signals
(perceptual and error) in a control loop. It's because control systems
control scalar variables. I don't see how this could be implemented
any other way, at least not easily.

You are presuming the answer to the question, whereas what I am looking for is a demonstration or proof that it must be so.

MT: Consider a single-valued (scalar) perceptual variable being controlled by a
conventional ECU (Elementary Control Unit). Call that unit ECU_0. The output
of ECU_0 is distributed to the reference inputs of several ECUs at the next
lower level. But those units do not "know" that their references are coming
from any specific place, or that there is any relationship among them.
Indeed, each of them could be receiving reference inputs from several
different ECUs at the same level as ECU_0.

RM: Then they would be sending reference signals to themselves.

How so? All the control units at level 0 get reference values only from units at level 1, and so forth. I don't see where there could be a loop-back such as you propose.

  And I
think that would mess things up. You are right that lower level
systems don't know (or care) from whence they get their reference
input. But in the HPCT architecture they always get their reference
input (which can be a sum of the outputs of several control systems)
from higher level systems.

MT: The pattern of those reference
inputs, not the output of ECU_0 alone, is what determines the reference
values for the ECUs below ECU_0. Conceptually, this is the same as
controlling a vector-valued perception with a vector-valued reference.

RM: I think a diagram might help. Even better would be a diagram + a
working simulation. I could cobble together the simulation pretty
quickly if you just tell me how this vector-based system is supposed
to work.

You have one already -- your three-level three unit excel spreadsheet. It's a good example of what I am talking about. The top-level reference is a pattern of three values -- a vector. The system controls a three-element vector perception. There is no top-level scalar perception being controlled through these three levels. What is controlled is a pattern perception.

My question is whether it is _necessary_ that such vector perceptions be controlled only by way of their individual vector elements.

I imagine you were taught in your perception classes about integral and separable variables. Integral variables interact, in the way that colour hue does, whereas separable variables don't, in the way that length and width don't. Colour can be described as a three-variable vector, but until Newton and his successors started doing scientific experiments with colour, nobody imagined that all colours could be described by three numbers, and even now, there are many different three-number sets that can be used to describe a colour. And when you have done that, it won't describe the perceived colour, which depends greatly on context. What people would do when trying to control colour would be to say "a little more pink... no, a bit of beige... perhaps lighter and a bit bluer...", which doesn't sound like they were controlling a scalar variable.

MT: But now at the level below ECU_0, the ECUs are controlling scalar variables
as before. Nothing has changed. But the argument can apply equally to any
one of them. None provide unique reference values to the control units below
them. Together they are conceptually equivalent to controllers of
vector-valued perceptions.

RM: That sounds interesting but I don't fully understand. Again, a
diagram and simulation would help me a lot.

MT: The question is "In standard HPCT, every control unit controls one
scalar-valued perception; is this restriction simply a convenience in
thinking about the problem, or is there some underlying theoretical or
physiological reason why all control must be of scalar-valued perceptions?"

RM: For me it's the only way I can imagine it being done. But if you
can produce a working vector based simulation of a control system (or
hierarchy of control systems) that would be great.

MT: An associated question is: "Is the conscious perception that we control
single-valued perceptions a property of consciousness or of the control
system itself?"

RM: Right now I'd say it's a property of the control system; the
outputs of perceptual functions are assumed to be scalar neural
currents.

MT: Yet another associated question is: "Theoretically it would seem that
control of perceptions above category level should be scalar, if not
binary-valued; does this apply at the analogue levels before any
categorization has been done?"

RM: Seems like an empirical question to me.

MT: You might call these musings, but I don't have answers to them. Furthermore,
if the answer is that there is no necessary restriction of control to
scalar-valued properties, the problem of control using property values
scattered across the brain might go away.

RM: I didn't know there was such a problem. I guess I've been living
in a fool's paradise.

And you haven't been reading Bill P's recent writings, starting with his conceptual problem with oriented line detectors.

Imagine your Excel spreadsheet expanded in width to represent at the top level a pattern of so much line at 72 degrees, and so much at 127 degrees, such-and-such a position on the blue-yellow continuum, thus and so location with respect to a dark-light edge oriented at 24 degrees, ...., all of which could be seen consciously as "this object under that lighting that near the table edge". You could _make_ a unitary perception out of the pattern, and consciously we probably do. But is it necessary in order that we properly control the pattern/object?

I don't know if I'm asking a sensible question, or one that is answerable by experiment. I hope it makes sense to you now, and to me in the morning.

Martin

Martin

···

On 2011/11/1 5:15 PM, Richard Marken wrote:

[From Bill Powers (2011.11.01.1845 MDT)]

Martin Taylor 2011.11.01.17.35 --

MT to RM: You have one already -- your three-level three unit excel spreadsheet. It's a good example of what I am talking about. The top-level reference is a pattern of three values -- a vector. The system controls a three-element vector perception. There is no top-level scalar perception being controlled through these three levels. What is controlled is a pattern perception.

BP: That's a vector only in the trivial sense of a list of arbitrary magnitudes. The pattern is in your perceptual system, not Rick's spreadsheet. In the spreadsheet there are simply three unrelated reference signals at the top level, any of which can be set to any value regardless of the others. There is no "pattern" unless you pick one and set those signals to match it, in which case it is your pattern-controller at the top level.

MT: My question is whether it is _necessary_ that such vector perceptions be controlled only by way of their individual vector elements.

I imagine you were taught in your perception classes about integral and separable variables. Integral variables interact, in the way that colour hue does, whereas separable variables don't, in the way that length and width don't. Colour can be described as a three-variable vector, but until Newton and his successors started doing scientific experiments with colour, nobody imagined that all colours could be described by three numbers, and even now, there are many different three-number sets that can be used to describe a colour. And when you have done that, it won't describe the perceived colour, which depends greatly on context. What people would do when trying to control colour would be to say "a little more pink... no, a bit of beige... perhaps lighter and a bit bluer...", which doesn't sound like they were controlling a scalar variable.

BP: That's a good example. Could this be an example of the mapping phenomenon that's been bugging me? Is this happening in a "color space" with three dimensions, the location in which is set by the magnitudes of the three variable intensities? This is very much like what happens in taste-space, too, with four variable intensities of taste signals.

The thing is that with color, you can construct any color by adjusting the three intensities, and you can build a perceptual input function as a weighted summation that will give a maximum signal for only one combination of intensities (normalized to a constant sum). That would be the current PCT way of representing the colors. There would be one input function per color, which doesn't sound very practical, does it? This screen can show 16 million colors.
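
One way to build an input function that peaks for exactly one combination of the three intensities is sketched below. The cosine-style normalization is my own implementation choice, in the spirit of the weighted summation Bill describes, not code from any PCT program:

```python
import math

def color_detector(weights, rgb):
    """Scalar perceptual signal that is maximal (1.0) only when the
    input intensities are in the same proportions as the weights."""
    dot = sum(w * x for w, x in zip(weights, rgb))
    mag = (math.sqrt(sum(w * w for w in weights))
           * math.sqrt(sum(x * x for x in rgb)))
    return dot / mag if mag else 0.0

orange = (1.0, 0.5, 0.0)   # the one combination this detector is "tuned" to

print(round(color_detector(orange, (2.0, 1.0, 0.0)), 6))  # 1.0 (matches tuning)
print(round(color_detector(orange, (0.0, 0.0, 1.0)), 6))  # 0.0 (orthogonal color)
```

As the text says, one such function per discriminable color is what makes this scheme look impractical.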

However, I wrote a color-matching program for David Goldstein in a different way, more like the mapping approach (not on purpose, I was just following my nose without any deep thinking). As you move the cursor from left to right in a square field, the red intensity (of the whole square) decreases from maximum toward zero and the green intensity increases from zero toward maximum; as you move the cursor down from the top, the blue intensity increases from zero to maximum. This seems to allow matching of any color I could find, even purple and orange and brown (the mouse wheel controlled the average brightness).

This method creates a color by positioning a point in two dimensions inside a square. The resulting three color signals exit the retina and enter the midbrain, and after some magic occurs, we see a unitary color: the whole square is one color. So do those three signals somehow locate a point in a color space or volume in a brain map? And how does that connect to the fact that we see one uniform color over a square area instead of just one point? I could have separated the color display, just showing a color patch outside the square where the mouse pointer was being moved. That might have lessened the confusion, but it doesn't solve the problem. How can the whole patch seem to be of one uniform color? How would position on a color map get attached to a whole geometric area?
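
The cursor-to-color mapping described above can be sketched like this. It is a reconstruction from the prose; the original program was written in Delphi, and the exact scaling is guessed:

```python
def cursor_to_rgb(x, y, brightness=1.0):
    """Map cursor position in a unit square to three color intensities:
    left to right, red falls from max to 0 while green rises from 0 to
    max; top to bottom, blue rises from 0 to max. The mouse wheel in
    the original program set the overall brightness."""
    r = (1.0 - x) * brightness
    g = x * brightness
    b = y * brightness
    return (r, g, b)

print(cursor_to_rgb(0.0, 0.0))  # (1.0, 0.0, 0.0)  top-left: pure red
print(cursor_to_rgb(1.0, 1.0))  # (0.0, 1.0, 1.0)  bottom-right: green + blue
```

Two position coordinates (plus brightness) thus reach any displayable mixture, which is what makes the mapping picture attractive compared with one detector per color.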

I know this is at a level of detail that doesn't interest you much, but it seems to me that there must be some principle here that would be very useful to understand. It would be a new way of generating perceptions, perhaps not as signals per se but simply as some other aspect of neural activity -- maybe "slow potentials." It could be that true vector algebra would be appropriate, but I just don't know enough to say. The difficulty lies in how to get the "vector" to matter to the perceiving system rather than just to an observer looking at the brain from outside.

Best,

Bill P.

[From Rick Marken (2011.11.02.2200)]

Martin Taylor (2011.11.01.22.15)–

MT: So, rather than pursuing these delightful speculations, I would like

to return to my ill-defined question about whether it is necessary
to restrict consideration of control to the control of scalar
variables in individual control units.

I don't think it's at all necessary to restrict consideration of control to the control of scalar variables. I think it is only necessary to restrict our consideration to models that predict the data accurately. All the models we have that do that represent perceptions as scalar variables. So I see no need to try to develop a non-scalar model (I wouldn't know how to do it anyway) since the scalar ones work better than anything else. But if you can develop a non-scalar model that can account for the data better than a scalar model then that would be great. Not only would you have developed a better model but then I could see what the hell a non-scalar model is ;-)

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.11.03.1015 MDT)]

[From Rick Marken
(2011.11.02.2200)]

Martin Taylor (2011.11.01.22.15)–

MT: So, rather than pursuing these delightful speculations, I would
like to return to my ill-defined question about whether it is necessary
to restrict consideration of control to the control of scalar variables
in individual control units.

RM: I don’t think it’s at all necessary to restrict consideration of
control to the control of scalar variables. I think it is only necessary
to restrict our consideration to models that predict the data accurately.
All the models we have that do that represent perceptions as scalar
variables. So I see no need to try to develop a non-scalar model (I
wouldn’t know how to do it anyway) since the scalar ones work better than
anything else. But if you can develop a non-scalar model that can account
for the data better than a scalar model then that would be great. Not
only would you have developed a better model but then I could see what
the hell a non-scalar model is ;-)
BP: See Figures 3.11 and 3.12 in B:CP.
Fig. 3.11 is shown as the way the nervous system appears to be organized.
Many input signals come into a sensory function (nucleus), and many come
out of it. That is a vector representation, in which the whole input
vector is transformed into the output vector by a transformation-matrix
process.
Fig. 3.12 shows how I chose to represent the relationship between inputs
and outputs. Each output is an explicit function, normally a
different function, of all the inputs. The repetition of all the
inputs where they join to produce one of the outputs is necessary because
in general the signals interact; this arrangement preserves the
interactions.
These two diagrams represent exactly the same physical network and
exactly the same set of input-output transformations. But the
representation I chose seems to me to show more clearly how the
vector transformation works, and when I make models, this is the
representation that best helps me to understand what is going on – how
to understand conflict, for example, or the “tuning” of
different perceptual input functions. Fig. 3.12 shows the function in
terms of scalar variables, but it’s the same function shown in
3.11.
In truth, the second diagram, contrary to the text, is the one that shows
the real organization of the nervous system. As I said yesterday, the
matrix representation is only a convenience for the analyst. There is no
actual matrix algebra in the nervous system, except for that pattern of
processes and rules that allows an analyst’s brain to employ this way of
doing things with pencil and paper, by using canned programs, or by
writing a program. Matrix algebra is something the brain does, a
behavior, not a primitive neural function.

Here is a zipped version of “Multicontrol”, with source code
for Delphi 7 and including the executable file MulticontrolPrj.exe
(Windows XP). It was written to demonstrate that with N random
input weights for each of N control system controlling N environmental
variables, simply making the output matrix into the transpose of the
input matrix, with an underlying integral output function, is sufficient
to get a stable set of N control systems. This program allows switching
back and forth between a sine-wave and a cosine wave of reference signal
settings, just to show how control is working. You can also set the
number of control systems to values from 10 to 500. It’s not possible to
get purely independent control of each input variable because the random
weighting of the inputs does not guarantee that an exact solution exists.
So after the program has run for 5 or 10 minutes (with 500 systems), the
perceptual variables will be close to but not exactly matching the
reference variables. Needless to say, the program runs with scalar
variables, although some matrix operations are used for convenience.
There is no reorganization in this demo; with reorganization a much more
exact solution occurs and convergence is much faster. Here’s the link to
my dropbox:

http://dl.dropbox.com/u/35647848/MultiControl.zip

Best,

Bill P.
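
The core of the Multicontrol demonstration Bill describes can be sketched as follows. N, gain, seed, and step count are my choices; the original is Delphi 7 with N adjustable up to 500:

```python
import random

# N control systems with random input weights; the output matrix is the
# transpose of the input matrix, with an integral output function. As in
# the demo, control gets close to the references but not exact, because
# random weights do not guarantee that an exact solution exists.

random.seed(1)
N, gain, dt = 10, 0.02, 1.0
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
v = [0.0] * N                                   # environmental variables
r = [random.uniform(-1, 1) for _ in range(N)]   # reference signals

for _ in range(5000):
    p = [sum(W[i][j] * v[j] for j in range(N)) for i in range(N)]  # perceptions
    e = [r[i] - p[i] for i in range(N)]                            # errors
    # integral output distributed through the transpose of the input matrix
    for j in range(N):
        v[j] += gain * dt * sum(W[i][j] * e[i] for i in range(N))

final_p = [sum(W[i][j] * v[j] for j in range(N)) for i in range(N)]
err = max(abs(r[i] - final_p[i]) for i in range(N))
print(f"largest remaining error: {err:.4f}")
```

Everything here is scalar arithmetic; the double loops are exactly the operations that matrix notation would summarize.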

[From Rick Marken (2011.11.03.1210)]

Bill Powers (2011.11.03.1015 MDT)--

RM: I don't think it's at all necessary to restrict consideration of control
to the control of scalar variables. I think it is only necessary to restrict
our consideration to models that predict the data accurately...

BP: Here is a zipped version of "Multicontrol", with source code for Delphi 7
and including the executable file MulticontrolPrj.exe (Windows XP). It was
written to demonstrate that with N random input weights for each of...

Thanks. But I don't see how this is relevant to Martin's question. I
_think_ Martin is suggesting that perceptual signals that are the
output of a perceptual function (which has a vector input) could
itself be a vector: the perceptual signal would be a vector. I was
just saying that that may be true but I can't think of how to use
such a signal in a control model and besides there is no data that
seems to require that kind of change in model architecture.

Your multicontroller is a whole bunch of control systems each
controlling a scalar perceptual variable. I don't believe this is the
architectural alternative Martin was suggesting.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.03.16.27]

[From Rick Marken (2011.11.03.1210)]

Bill Powers (2011.11.03.1015 MDT)--
RM: I don't think it's at all necessary to restrict consideration of control
to the control of scalar variables. I think it is only necessary to restrict
our consideration to models that predict the data accurately...
BP: Here is a zipped version of "Multicontrol", with source code for Delphi 7
and including the executable file MulticontrolPrj.exe (Windows XP). It was
written to demonstrate that with N random input weights for each of...

Thanks. But I don't see how this is relevant to Martin's question. I
_think_ Martin is suggesting that perceptual signals that are the
output of a perceptual function (which has a vector input) could
itself be a vector: the perceptual signal would be a vector. I was
just saying that that may be true but I can't think of how to use
such a signal in a control model and besides there is no data that
seems to require that kind of change in model architecture.

Doesn't Bill's description of the multi-input multi-output neuron at least suggest the possibility of a vector perceptual signal in real biology?

I'm not surprised you have difficulty interpreting my question, since I don't have it properly sorted out in my own mind, at least not in its full ramification, of which the vector-valued perceptual signal is one aspect.

Anyway, that's not the core of what I was talking about, which was derived from two things: (1) a consideration of the standard HPCT hierarchy, and (2) the fact that different properties of visual objects are represented in different parts of the brain or in different neural channels (e.g. oriented line detectors, or at a higher level "what" and "where").

Your spreadsheet is quite sufficient to illustrate my original question in its simplest form. The question can be rephrased (I think I haven't put it this way earlier -- apologies if it is actually a repeat).

Suppose that single-degree-of-freedom changes in a particular environmental property result in coordinated changes in the perceptual variables at all levels of the spreadsheet. Can that property be controlled if there exists no single control unit that controls a perception whose function reflects the coordinated changes in the environmental property? To make this idea concrete, just think of an object moving from light to shade. All the surfaces change brightness by a similar ratio. Or for a more complex example, imagine an object of complex shape rotating around an axis. For a small rotation angle, most of the surfaces change their shapes slightly and in a coordinated way, and their locations relative to the global outline of the object also change in a related way. In this latter case, the question is whether, in order to control the rotation angle, it is _necessary_ that there be a single control unit for which the perceptual variable is rotation angle.

We may _consciously_ perceive a unitary rotation angle, but what is consciously perceived is not necessarily what is controlled. What is controlled may be a pattern rather than a scalar -- or is that necessarily a false statement?

I think Bill's message suggests that it may be impossible to tell the answer by experiment, and one would have to map the model structure onto the physiological structure to distinguish the possibilities.

Martin

[From Bill Powers (2011.11.03.1910 MDT)]

Rick Marken (2011.11.03.1210) –

RM: Thanks. But I don’t see how
this is relevant to Martin’s question. I

think Martin is suggesting that perceptual signals that are the

output of a perceptual function (which has a vector input) could

itself be a vector: the perceptual signal would be a vector.

BP: Yes, that’s the implication: output vector = matrix times input
vector.

RM: I was just saying that that
may be true but I can’t think of how to use

such a signal in a control model and besides there is no data that

seems to require that kind of change in model
architecture.

BP: There’s no mystery – we just treat the matrix operation as the sum
of a lot of scalar operations, which is how matrix algebra is done
anyway. The matrix notation is a lot more compact than writing out all
the details, but you get the identical result either way. Did you look at
those two figures, 3.11 and 3.12, in B:CP?

RM: Your multicontroller is a
whole bunch of control systems each

controlling a scalar perceptual variable. I don’t believe this is
the

architectural alternative Martin was suggesting.

BP: Sure it is. Look at the source code: all the operations that would
appear in a matrix and vector treatment are there, but just spelled out
in detail rather than relying on the systematic cycling of indices and so
on that makes the matrix treatment easier to keep track of without making
mistakes.
Suppose we have three linear equations in three unknowns, x1, x2, and x3,
the values of the three expressions being y1, y2, and y3. We can
write
y1 := a11*x1 + a12*x2 + a13*x3
y2 := a21*x1 + a22*x2 + a23*x3
y3 := a31*x1 + a32*x2 + a33*x3
There are all the operations that go into these three equations. If we
want to treat the equations separately, each one is simply a scalar
equation. But we can also write
Y = A * X

where Y and X are vectors and A is the matrix of coefficients. Now we can
do matrix algebra with this single equation, but it means exactly the same
thing as the three equations above. If you expand the matrix equation you
get the three scalar equations back.
In a somewhat different form, those could be three equations for three
control systems sharing a common environment, each sensing the same set
of environmental variables and acting on all of them. The y’s would be
perceptual signals. That collection of control systems could be written
in matrix notation in a similar way, with Y being a vector
standing for the three perceptual signals. X would be the vector
of three reference signals, and there would also be vectors of
three disturbances and three environmental variables. Three, that is, if
we want a fully-determined system.
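
The equivalence Bill describes is easy to check numerically. The coefficients below are arbitrary illustrative values:

```python
# Expanding the matrix product Y = A * X gives back the three scalar
# equations unchanged; both forms perform the identical arithmetic.

A = [[1.0, 2.0, 0.5],
     [0.0, 1.5, 1.0],
     [2.0, 0.5, 3.0]]
X = [1.0, 2.0, 3.0]

# matrix form, written as the row-by-column sums a library would do
Y_matrix = [sum(A[i][j] * X[j] for j in range(3)) for i in range(3)]

# scalar form: the three equations written out in full
y1 = A[0][0]*X[0] + A[0][1]*X[1] + A[0][2]*X[2]
y2 = A[1][0]*X[0] + A[1][1]*X[1] + A[1][2]*X[2]
y3 = A[2][0]*X[0] + A[2][1]*X[1] + A[2][2]*X[2]

print(Y_matrix == [y1, y2, y3])  # True
```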

I happen to prefer the scalar way of writing (and programming) the
equations, because I get a better picture of what’s going on when I can
see all the details. The matrix notation hides everything interesting.
But sometimes, when dealing with fairly large and complicated systems, I
try to use matrix algebra simply because it’s been boiled down to some
simple rules which, if I follow them carefully, pretty much guarantee
that I won’t make mistakes. Often the choice is a toss-up – when I get
confused about subscripts I’m as likely to make a mistake in implementing
the matrix algebra as I am in using the scalar version. But the matrix
algebra can be very useful, as in that program I just posted in which the
number of systems involved is adjustable and can be fairly
large.

In short, the scalar/vector choice is simply a choice between
representations, and has nothing to do with the underlying physical
architecture.

Ok, Taylor and Kennaway, I await your judgement of all this with
trepidation.

Best,

Bill P.

[From Rick Marken (2011.11.03.2000)]

> Bill Powers (2011.11.03.1910 MDT)--

RM: Your multicontroller is a whole bunch of control systems each
controlling a scalar perceptual variable. I don't believe this is the
architectural alternative Martin was suggesting.

BP: Sure it is. Look at the source code: all the operations that would
appear in a matrix and vector treatment are there, but just spelled out in
detail rather than relying on the systematic cycling of indices and so on
that makes the matrix treatment easier to keep track of without making
mistakes.

Suppose we have three linear equations in three unknowns, x1, x2, and x3,
the values of the three expressions being y1, y2, and y3. We can write

y1 := a11*x1 + a12*x2 + a13*x3
y2 := a21*x1 + a22*x2 + a23*x3
y3 := a31*x1 + a32*x2 + a33*x3

There are all the operations that go into these three equations. If we want
to treat the equations separately, each one is simply a scalar equation. But
we can also write

Y = A * X

Where Y and X are vectors and A is the matrix of coefficients. Now we can do
matrix algebra with this single equation, but it means exactly the same
thing as the three equations above. If you expand the matrix equation you
get the three scalar equations back...

In short, the scalar/vector choice is simply a choice between
representations, and has nothing to do with the underlying physical
architecture.

RM: If all this is about is the difference between using matrix
algebra and simultaneous equations, then I am dumbfounded. Is that
why Martin keeps referring to my spreadsheet? In fact, my spreadsheet
does use matrix operations to compute the level 2 perceptual signals
and the level 1 reference signals. The matrix approach is just a
simpler way of organizing the computations; I could have done the same
thing with linear equations. If Martin's question is whether control
computations can be done with matrix algebra rather than linear
equations then the answer is clearly "yes". And "who cares?"

I thought Martin was saying that the perceptual signal could itself be
a vector, as in

         input       f(x)         p
          ___________
x1 ----> |          |
x2 ----> |          | -------> p1
x3 ----> |          | -------> p2
 ...     |          |    ...
xn ----> |__________| -------> pn

p1,p2...pn would then be a vector input to the comparator. And it's at
this point that I have no idea how this would be turned into an error
to drive output.

But if this is not what Martin meant, and all he meant is that f(x), the
perceptual function, could be a matrix operation with a single
perceptual output (as in my spreadsheet), then "never mind" ;-)

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com