Vectors and scalars

[From Bill Powers (2011.11.12.0725 MST)]

Note change in subject
field.

Martin Taylor 2011.11.11.23.02 –

BP earlier: Can you imagine any way in which a collection of
independently generated variables can lead to a perception of some
unitary thing without first being represented as signals, and then
passing into a computing function that generates some particular
value?

MT: No.
The question is whether this function is necessarily a component of
the perceptual control hierarchy, or exists only as a process of
consciousness.
Let me repeat myself: Is it necessarily true that what is consciously
perceived as scalar is so perceived because the corresponding controlled
variable is itself a scalar?
BP: What is a “process of consciousness?” I have been
assuming that all perceptual input functions are neural and operate
independently of consciousness. My observations and those of other people
are that consciousness itself simply receives; it does not interpret,
create patterns, or do any of those other things we think of as
perception. It observes, and so far that is all I have been able to
identify as a process of consciousness. The content of consciousness, as
far as I can see, is created and given shape by neural systems.

My main piece of concrete evidence concerning this assumption is that
control of any kind I can think of can go on either consciously or
unconsciously. Since the PCT model assumes that all control, without
exception, is organized to control perceptual signals, unconscious
control implies existence of the same perceptual signals that are
involved in conscious control. Therefore perception itself is not a
conscious process.

That being accepted, we must then conclude that the nature of a
controlled variable is determined by the nature of the neural
computations that generate perceptual signals.

You’re proposing that the nature of neural computations is analogous to
the principles of holography, in which any small piece removed from
anywhere within a hologram contains the same image that the entire
hologram contains, but at lower resolution. I don’t think you can support
this interpretation as representing the way the nervous system works.
There are some superficial resemblances, in that neural signals are in
general redundant, more than one pathway carrying the same kind of
information, and “a neural signal” being (as I suggested in
B:CP) a summation over many pathways. But that summation is just a
summation, not a combination of phase-related interference patterns that
are excited by some kind of coherent radiation. Isolating, for example,
the neural signals representing the left arm does not preserve the
information about the distance between the two arms or hands; in fact no
localized neural signals I know about represent anything but a small
portion of the totality of experience. It’s not possible to reconstruct a
present-time representation of the whole of experience from any
randomly-selected set of signals in the brain. Not, that is, that I have
ever heard of.

In B:CP I mentioned Pribram’s hologram postulate. I said that if the
brain is analogous to holograms, they would have to be small localized
holograms. I also added that while the brain might be like a hologram, it
is also possible (conveying, I hope, somewhat MORE possible) that the
brain is NOT like a hologram.

Anyway, the crux of this discussion doesn’t have to involve complex
propositions about holograms and such, Karl Pribram notwithstanding.
Basically, a vector is nothing more or less than a collection of
variables, each of which has a single magnitude at a given time and is
thus a scalar variable. Inside the nervous system, the main variables we
talk about are neural signals, repetitive trains of action potentials
moving from one place to another through the nervous system. Those
signals are the only way in which perceptions can be related to any prior
or subsequent activities in the brain, glands, or muscles. So any signal
in the nervous system that has a physical effect on a neuron or anything
else must be a scalar variable. I think that’s almost an axiom.

The only thing that ties together the individual variables in a vector
(other than an observer who is mentally – and arbitrarily –
grouping them, as you seem to be suggesting) is some physical interaction
between them or between the processes generating them. And that is
exactly what the PCT model of perception is designed to handle. Any
systematic relationship between the magnitudes of the variables can be
measured by a suitable perceptual input function. Multiple perceptual
input functions receiving the same input vector can extract signals
representing different aspects of the vector, an aspect being some
systematic relationship among variables. I went through that in my post
to Erling Jorgensen, including the observation that the maximum number of
independently-controllable aspects is the number of different variables
in the vector. Redundancy, of course, reduces that number.
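BP's point here is concrete enough to sketch numerically. The following is a minimal illustration of my own construction (the matrix sizes and random weights are arbitrary assumptions, not anything from the post): several perceptual input functions, each a weighted sum, receive the same sensory vector, and the number of independently variable scalar signals they can extract is bounded by the rank of the environment-to-sensor mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Environment: 3 independent scalar variables; 5 redundant sensory
# pathways, each a linear mixture of those 3 variables.
mixing = rng.normal(size=(5, 3))      # pathways x environmental variables
env = rng.normal(size=3)
sensory_vector = mixing @ env         # the input vector

# Five perceptual input functions, each a weighted sum -> one scalar signal.
weights = rng.normal(size=(5, 5))     # one row of weights per input function
perceptual_signals = weights @ sensory_vector

# Five scalar signals exist, but at most 3 of them can vary independently:
# redundancy in the pathways reduces the number of independently
# controllable aspects to the rank of the overall mapping.
print(np.linalg.matrix_rank(weights @ mixing))   # 3
```

Any linear "aspect" extracted this way is itself an ordinary scalar signal, which is the point of the paragraph above.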

The main point here is that the “aspects” in PCT are more than
mere conscious groupings of otherwise unrelated variables. Each aspect
comes to be measured as a scalar perceptual signal just like any other
scalar perceptual signal. Each such signal can have the same kinds of
physical effects that any neural signal can have. Since those are the
only effects the brain has on anything we can observe, observable
controlled variables must all be scalar perceptual
signals.

MT: What we perceive as environmental attributes such as objects
enter our neural system as patterns distributed widely across different
pathways, and most of those pathways are influenced by several different
things we consciously perceive as distinct. This is analogous to a
holographic representation, in which each object is represented in a
widely distributed area of the hologram, and each point of the hologram
is influenced by many different objects.

I disagree: the analogy is too incomplete to use. It is not the case that
signals in any small set of arbitrarily selected pathways can be used to
reconstruct the whole pattern contained in all the pathways. And anyway,
the pattern is not just “detected” in the manner of Gibson’s
“picking up” of information. It is constructed, and
there are many alternate constructions possible, and actual, both across
and within people (the Necker cube, for
example).

MT: Using this wording, we come once again to the question of whether
any or all controlled perceptions (outside of consciousness) that
correspond to environmental attributes could be represented as vectors at
each level within the perceptual control hierarchy. And for yet another
rephrasing, is it possible that some or all of the scalar controlled
variables at any one level within the hierarchy are influenced by several
environmental attributes at that level? We know that even in
consciousness perceptions are often quite
context-dependent.

Of course they are; that is already part of PCT and the idea of
many-to-one perceptual input functions. Change some of the inputs that
contribute to a given perception, and that perception and others are
likely to change magnitude, or disappear, or
appear.

BP earlier: For any function of multiple signals to have any real
effect, there must be a physical function that computes a new variable
that has real physical existence. Only variables that actually exist as
physical representations can have physical effects.
MT: What is a “real effect” in this context? Is it that
control works in the environment or that there is a conscious perception
of a single variable? And what do you mean by “real physical
existence”?

A “real effect” I define as the effect a neural signal can have
on a target neuron, or a muscle, or a gland, or anything else physically
affected by action potentials.

MT: Is this reality one perceived by the actor or one perceived by
the analyst when something the analyst perceives as an environmental
attribute is influenced by the actor?

It’s the one in our physical and neurological
models.

MT: My view is a little different. It comes from the ability to
control. If varying the magnitude of a vector influences something
consciously seen to be a single attribute, then the vector has unitary
significance.

BP: You’re simply putting properties of neural perceptual input functions
into something you call consciousness, making it into another part of the
neural hierarchy. That is begging the question of the nature of
consciousness. I do not assume that consciousness shares any physical
properties with the neural hierarchy. Extracting the aspect of
consciousness I call awareness, all I can say about it now is that it
receives information from neural signals. What it does with the
information and what happens next I can’t say. My own experiences extend
only to the mere fact of totally passive observation without
interpretation. Everything I can experience is in the object of
observation, not in the observer. I can’t perceive the observer as I
perceive other things. I simply know that I observe. Descartes got it
backward. I observe my thinking; therefore I am an observer. I am not the
thinking. If that’s what he meant, it didn’t survive
translation.

MT: If it is not possible, then there must be some general theorem
relating to reorganization that would show how what is holoform in the
sensory periphery is translated into an idioform representation before any
control level of perception. How could we find such a
theorem?

BP: First, demonstrate that there is such a thing as a holoform. Then I
might play the holoform-idioform game. Maybe. Let’s not just go on piling
one premise on top of another without stopping to demonstrate anything.
Make sure the increasingly ornate castle doesn’t rest on air, or
sand.

Bill P.

[Martin Taylor 2011.11.12.13.14]

[From Bill Powers (2011.11.12.0725 MST)]

  Note change in subject

field.

Yes. It's not a very helpful change, as I see it. There's never been
an issue about the relationship between vectors and scalars.

Your responses throughout this interchange have given two
impressions: (1) that you think I have some disagreement with the
standard HPCT picture, and (2) that I don’t understand that you
believe that your subjective conscious experience gives the plain
truth about how the control structure must be constructed.

Neither is true. I have no reason to doubt that the standard HPCT
structure performs as you say; and I do understand that you believe
your subjective experience is evidence for what goes on outside of
consciousness. I understand that you don’t like thinking of the
possibility that some or all environmental attributes might be
represented internally as vectors. The only problem is that I don’t
consider your personal preferences to be definitive evidence. I
don’t like the idea either, but I don’t find my discomfort to be
scientific evidence.

My question, which so far as I can determine you have not yet
addressed, has been rephrased in many different ways. Here is yet
another: Without reference to the hardware implementation, is it
possible to distinguish by experiment or simulation whether any
environmental variable is internally represented by one of these two
possibilities: (a) a single scalar signal, where “single” includes
parallel redundant pathways, or (b) a vector whose elements are also
influenced by other environmental variables?

Supplemental question: If it is possible to distinguish these two
possibilities for some one controlled perception, is it possible to
make the distinction for all controlled perceptions or to define
classes of controlled perceptions for which the distinction can be
made?

        MT: If it is not possible, then there must be some general
theorem relating to reorganization that would show how what is
holoform in the sensory periphery is translated into an idioform
representation before any control level of perception. How could we
find such a theorem?

  BP: First, demonstrate that there is such a thing as a holoform.
Simple: The visual sensors report the brightness of a particular
point in a scene, relative to the immediate past brightness of that
visual direction and possibly to the brightness of neighbouring
points in the scene. The brightness of any part of an object depends
on the directional reflectivity of that part of the object and on
the light impinging on the object. The object is represented as a
vector over very many sensor outputs, each of which reports values
corresponding to at least two environmental attributes. That is a
holoform.
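The brightness argument can be made concrete with a toy rendering (my illustration, with made-up numbers, not Martin Taylor's): each sensor output is the product of two environmental attributes, surface reflectance and illumination, so no single output identifies either attribute on its own.

```python
import numpy as np

n_points = 8
reflectance = np.linspace(0.2, 0.9, n_points)   # attribute 1: the object's surface
illumination = np.full(n_points, 2.0)           # attribute 2: the light
illumination[4:] = 0.5                          # a shadow covers half the object

# Each sensor output confounds the two attributes in a single number:
# a brighter light and a lighter surface change it identically.
sensor_outputs = reflectance * illumination

# The object is present in the sensory array only as the whole vector
# of outputs, each influenced by at least two environmental attributes.
print(sensor_outputs.round(2))
```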

  Then I might play the holoform-idioform game. Maybe. Let’s not
just go on piling one premise on top of another without stopping to
demonstrate anything. Make sure the increasingly ornate castle
doesn’t rest on air, or sand.

I dispute "increasingly ornate". The question hasn't changed since I
first brought it up. I pose it in different ways in the hope that
one of them might elicit a useful response.

For many days now, I have been trying to get you to show me that the
idea DOES rest on air or sand. I am personally uncomfortable with
the notion that controlled perceptions might be vector-valued
representations of things we consciously perceive as scalar and
unitary. But my own discomfort is no more persuasive to me than is
your discomfort with the idea.

I have not been able to find an argument to show that the controlled
perception corresponding to some environmental variable we see as
unitary must be represented by a scalar signal in the hardware,
either by my own thinking or by reading your messages. At first I
thought that by posing the question I would myself see a proof of
the necessity for there to be a scalar signal somewhere that
represents each environmental variable whose perception is
controlled. Then I thought that if I posed the question on CSGnet,
you or Richard or someone would provide such a proof, but that has
not happened. So I am reluctantly being drawn more and more to the
conclusion that there is no easily discoverable way to determine
whether controlled perceptions are or must be represented in the
hardware as scalar-valued signals. If there is such an easily
discoverable proof, one of us would probably have found it by now.

It is clear that the analyst must see controlled perceptions as
scalar-valued quantities. However, those quantities apparently
could be the magnitude of a vector of one element (a scalar), of a
few elements, or of many elements. I hope this statement is
incorrect, but I want to be shown that it is incorrect because I
can’t prove it for myself. A mathematical proof would be ideal, but
a logical proof in words might suffice. The following is
insufficient:

          BP earlier: For any function of multiple signals to have
any real effect, there must be a physical function that computes a
new variable that has real physical existence. Only variables that
actually exist as physical representations can have physical effects.

          MT: What is a “real effect” in this context? Is it that
control works in the environment or that there is a conscious
perception of a single variable? And what do you mean by “real
physical existence”?

  A "real effect" I define as the effect a neural signal can have
on a target neuron, or a muscle, or a gland, or anything else
physically affected by action potentials.

For multiple influences to have an effect on a single value, there
is no need for any function to compute how those influences combine
to create the total influence, unless you believe, with Wolfram,
that the universe computes everything. If I push an object with 10
lb force and you, opposing, pull it with 5 lb force, I can compute
that the net force is 5 lb, but that doesn’t mean that somewhere in
the universe there is a new variable with “real physical existence”
creating the 5 lb push.

Martin
···

On 2011/11/12 11:01 AM, Bill Powers wrote:

[From Bill Powers (2011.11.12.1235 MST)]

Martin Taylor 2011.11.12.13.14

···

MT: I have no reason to doubt that the standard HPCT structure
performs as you say; and I do understand that you believe your subjective
experience is evidence for what goes on outside of consciousness. I
understand that you don’t like thinking of the possibility that some or
all environmental attributes might be represented internally as vectors.

It’s not exactly that I don’t like it; I simply can’t imagine how to
construct a model in which perceptions are represented as collections of
variables without ever being boiled down to individual perceptions that
can change magnitude. When I think I’m about to understand it, I realize
that I’m just thinking of the way perceptual input functions are
conceived now in my understanding of PCT.

If you can explain how this could possibly work, now would be a good time
to do so. Your example of the forces adding together to give a resultant
of 5 pounds is a good analog of a perceptual input function that outputs
a scalar signal representing a weighted sum of input scalars. In that
sense, yes, the universe computes the resultant: it’s an analog computer,
though, not a set of rule-driven symbol manipulations.
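The force example translates directly into the weighted-sum form of a perceptual input function. A minimal sketch (mine, not from the thread; the function name is hypothetical):

```python
def perceptual_input(inputs, weights):
    """One scalar perceptual signal: a weighted sum of scalar input signals."""
    return sum(w * x for w, x in zip(weights, inputs))

# The push/pull example: a 10 lb push and an opposing 5 lb pull combine,
# by simple analog summation rather than symbol manipulation, into a
# net 5 lb force.
net_force = perceptual_input([10.0, 5.0], [1.0, -1.0])
print(net_force)   # 5.0
```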

Best,

Bill P.

···

Sent from my LG phone
Bill Powers <powers_w@FRONTIER.NET> wrote:

[From Bill Powers (2011.11.12.1455 MST)]

Hello jannim, whoever you are. Your message didn't appear to contain any comments from you, though I couldn't swear to it. It appeared on my screen as one continuous stream of text with no paragraphs or other organization. Below is a sample to show you my problem:

Best,

Bill P.

···

At 04:39 PM 11/12/2011 -0500, jannim@comcast.net wrote:

Sent from my LG phone Bill Powers <powers_w@FRONTIER.NET> wrote: >[From Bill Powers (2011.11.12.0725 MST)] >Note change in subject field. > >>Martin Taylor 2011.11.11.23.02 -- > > >>BP earlier: Can you imagine any way in which a collection of >>independently generated variables can lead to a perception of some >>unitary thing without first being represented as signals, and then >>passing into a computing function that generates some particular value? >>> >>MT: No. >>The question is whether this function is necessarily a component of >>the perceptual control hierarchy, or exists only as a process of consciousness. >>Let me repeat myself: Is it necessarily true that what is >>consciously perceived as scalar is so perceived because the >>corresponding controlled variable is itself a scalar? >BP: What is a "process of consciousness?" I have been assuming that >all perceptual input functions are neural and operate independently >of consciousness. My observations and those of other people are that >consciousness itself simply receives; it does not interpret, create >patterns, or do any of those other things we think of as perception. >It observes, and so far that is all I have been able to identify as a >process of consciousness. The content of consciousness, as far as I >can see, is created and given shape by neural systems. > >My main piece of concrete evidence concerning this assumption is that >control of any kind I can think of can go on either consciously or >unconsciously. Since the PCT model assumes that all control, without >exception, is organized to control perceptual signals, unconscious >control implies existence of the same perceptual signals that are >involved in conscious control. Therefore perception itself is not a >conscious process.

[From Rick Marken (2011.11.12.0845 PST)]

Bill Powers (2011.11.12.1235 MST)--

MT: ...I understand that you
don't like thinking of the possibility that some or all environmental
attributes might be represented internally as vectors.

BP: It's not exactly that I don't like it; I simply can't imagine how to
construct a model in which perceptions are represented as collections of
variables without ever being boiled down to individual perceptions that can
change magnitude. When I think I'm about to understand it, I realize that
I'm just thinking of the way perceptual input functions are conceived now in
my understanding of PCT.

If you can explain how this could possibly work, now would be a good time to
do so. Your example of the forces adding together to give a resultant of 5
pounds is a good analog of a perceptual input function that outputs a scalar
signal representing a weighted sum of input scalars. In that sense, yes, the
universe computes the resultant: it's an analog computer, though, not a set
of rule-driven symbol manipulations.

RM: I believe I said almost exactly the same thing back when I was
participating in this discussion. So it's nice to see it finally come
back to where I had it back then. Let's hope that you are more
successful than I was at getting such an explanation.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.14.12.55]

[From Rick Marken (2011.11.12.0845 PST)]

  Bill Powers (2011.11.12.1235 MST)--
MT: ...I understand that you
don't like thinking of the possibility that some or all environmental
attributes might be represented internally as vectors.

BP: It's not exactly that I don't like it; I simply can't imagine how to
construct a model in which perceptions are represented as collections of
variables without ever being boiled down to individual perceptions that can
change magnitude. When I think I'm about to understand it, I realize that
I'm just thinking of the way perceptual input functions are conceived now in
my understanding of PCT.

If you can explain how this could possibly work, now would be a good time to
do so. Your example of the forces adding together to give a resultant of 5
pounds is a good analog of a perceptual input function that outputs a scalar
signal representing a weighted sum of input scalars. In that sense, yes, the
universe computes the resultant: it's an analog computer, though, not a set
of rule-driven symbol manipulations.

RM: I believe I said almost exactly the same thing back when I was
participating in this discussion. So it's nice to see it finally come
back to where I had it back then. Let's hope that you are more
successful than I was at getting such an explanation.

I thought it was I who has been looking for an explanation of how it _cannot_ work! I haven't seen any explanation more cogent than "I don't see how it could work". I have no problem with your inability to see how it could work, but it would be nice if such protestations were accompanied by a reason why you don't see how it could work. What do you see as preventing it from working that I don't see?

Last night as I was going to bed, I thought I had such an explanation, but on further consideration, it failed. To illustrate the line along which I was thinking, here is my failed explanation.

---------Faulty explanation-------

I considered only the top level of a hierarchy, whether it be a simulation or a biological system. If the question can be resolved for the top level, then it is resolved for every level (or so I believe). The top level consists of a number of elementary control units, each of which controls a scalar perceptual signal to maintain a fixed reference value. I start by assuming an initial condition in which perceptual signal is at that reference value and the error value is zero in every such control unit.

In the standard HPCT view, each of those scalar perceptual signals represents some identifiable element of the exterior environment. For each, the reference value is fixed at some predetermined value. This value is typically assumed to be zero, but that assumption isn't necessary. I waived it in my faulty explanation. When any element of the exterior environment changes (is disturbed), the error value becomes non-zero in the corresponding elementary control unit, and only in that unit. The unit produces output to compensate for the disturbance.

In the variant HPCT view (let's call it VHPCT), none of those scalar perceptual signals represents an identifiable element of the exterior environment. When any element of the exterior environment changes, the value of the scalar perceptual signal changes in every top-level elementary control unit. The change is vector valued. Since now all of the top-level elementary control units have a non-zero value for the error signal, all of them will be producing their corresponding output values.

In my faulty thinking, I assumed that this multitude of outputs would necessarily generate a confusion of effects, especially when several different elements of the environment are disturbed simultaneously. I thought that this confusion would make it unlikely in VHPCT that any particular disturbance to one element in the external environment could be countered in isolation, without disturbing other elements of the environment.

At that point, I thought I had an explanation of why the representation of a controllable perception of an environmental entity must be a scalar.

-------End faulty explanation-----

Thinking through this explanation a little more deeply, I realized that at the level below the top level of the simulation or of the biological hierarchy (call it NLD -- next level down), there's no necessary difference between the set of reference values generated by the distributed output of a single scalar top-level control unit and the set of reference values generated by a vector of such outputs from control units responsive to a distributed perceptual vector. The so-called "confusion" would be identical, and each system could influence any particular environmental entity or attribute with the same independence of influence on other environmental entities -- and hence with equal mutual interference or potential for conflict (the "confusion").

For the standard HPCT structure to work, reorganization must have created a perceptual function that creates a single scalar value for some perceptible attribute of the environment. For mutual interference and potential conflict to be avoided, reorganization must ensure that all these perceptual functions are orthogonal (in the way x+y and x-y are orthogonal), and that the output of any one control unit has no side-effects on environmental entities that influence other perceptual signal values at that level.

For the VHPCT structure to work with the same lack of mutual interference and conflict potential, the vectors representing the perceptions of the different environmental entities or attributes must be orthogonal, but reorganization does not need to produce any corresponding specialized perceptual functions or control units. Also, the NLD vector of reference signal values depends on weights produced by reorganization, and these could result in the same set of reference signal values as would be produced by the single scalar control unit of the standard HPCT structure under corresponding conditions of environmental disturbance.

Hence my explanation fails. I have been, and am still, hoping that someone will produce a better one.
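The equivalence Martin concludes with can be illustrated with a toy simulation (my own sketch, not anything posted to the thread: the two-unit loop, the 30-degree rotation, the gains, and the disturbances are all invented for illustration). Two integrating control units null disturbances to two environmental variables equally well whether their perceptual functions pick out the variables one-to-one (the standard HPCT case) or as a rotated mixture (the VHPCT case):

```python
import math

def simulate(P, steps=2000, dt=0.01, gain=50.0):
    """Run a two-unit control loop whose perceptual function is the
    2x2 matrix P (row i = input weights of unit i). Each unit's output
    is distributed back to the environment through the transpose of P,
    so each loop acts along its own perceptual direction. Returns the
    final environmental state (q1, q2)."""
    d = (3.0, -1.0)                # constant disturbances on q1, q2
    o = [0.0, 0.0]                 # pure-integrator outputs
    r = [0.0, 0.0]                 # fixed reference values
    for _ in range(steps):
        # environment: q = disturbance + distributed output (P^T o)
        q = [d[0] + P[0][0]*o[0] + P[1][0]*o[1],
             d[1] + P[0][1]*o[0] + P[1][1]*o[1]]
        for i in (0, 1):
            p = P[i][0]*q[0] + P[i][1]*q[1]   # scalar perceptual signal
            o[i] += gain * (r[i] - p) * dt    # integrate the error
    return q

# Standard HPCT: each unit perceives one environmental variable.
identity = [[1.0, 0.0], [0.0, 1.0]]
# "VHPCT": the same perceptions rotated 30 degrees in 2-space.
c, s = math.cos(math.pi/6), math.sin(math.pi/6)
rotated = [[c, -s], [s, c]]

q_std = simulate(identity)
q_vec = simulate(rotated)
# Both organizations drive both environmental variables to the
# reference-implied state (0, 0) despite the disturbances.
print(q_std, q_vec)
```

The design choice that makes the two cases equivalent is the one Martin identifies: as long as the output distribution matches the (orthogonal) perceptual weighting, the "confusion" of many simultaneously active units cancels out.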

Having noted that my explanation failed, I also noticed something else. The standard HPCT structure can be seen as a special case of the VHPCT structure, and there are intermediate possibilities.

How so? Consider the extreme case in which the top-level control units are actually orthogonal, in that what one perceives is independent of what any other perceives, and the output of one does not disturb the perception of another. In standard HPCT, this is accomplished by the construction of the perceptual input functions and the distributions of outputs to NLD reference inputs. In VHPCT, it is accomplished by the distribution of NLD perceptual values to the top-level perceptual input functions and by the distribution of output signals to NLD reference inputs.

The effect in either case is to produce a set of orthogonal perceptual signal vectors at the top level, and the same set of reference values at the NLD. Using the VHPCT structure, the HPCT signal values could be produced by appropriate distribution of the NLD perceptual signals, so as to create a top-level perceptual vector whose weights, referenced to environmental entities and attributes, were all of the form {..., 0, 0, 1, 0, 0, ...}.

I have hitherto spoken of the vector representation of environmental entities and attributes as though the distinction between vector and scalar representation was all-or-none, a binary choice -- either the perception of any entity or attribute is isolated to a single scalar signal path or it is distributed across all paths and all paths are shared by all entities. But looking at it as suggested above, we see that intermediate forms are possible, in which the vector representation of an entity is distributed across only a few of the many available perceptual signals. Such a representation would be called "coarse coding" in standard engineering practice. It is a way of getting precise, smooth, and robust representation of the value of a variable (and is the way we see a finely graded realm of different colours using the overlap in the acceptance spectra of our red, green, and blue cones).
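Coarse coding as described above can be sketched in a few lines (a hypothetical toy with made-up channel centers and widths, loosely analogous to the overlapping cone-spectra example): three broadly tuned channels jointly pin down a scalar far more finely than any single channel's width would suggest.

```python
import math

def encode(x, centers=(0.0, 0.5, 1.0), width=0.4):
    """Coarse-code the scalar x in [0, 1] as the responses of three
    broadly tuned, overlapping Gaussian channels."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def decode(resp, step=0.001):
    """Recover x by finding the candidate whose coded response best
    matches resp (simple template matching, for illustration only)."""
    best, best_err = 0.0, float("inf")
    for i in range(int(1 / step) + 1):
        x = i * step
        err = sum((a - b) ** 2 for a, b in zip(encode(x), resp))
        if err < best_err:
            best, best_err = x, err
    return best

x = 0.314
x_hat = decode(encode(x))
# Recovery is precise to the search grid, although each channel
# is broad (width 0.4) compared with the recovered resolution.
print(x, x_hat)
```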

The failure of my explanation, and my further consideration of its implications, leads me to look more favourably on VHPCT. That worries me a little.

What would be really nice would be a theoretical or simulation demonstration that reorganization tends to converge on a system in which each controlled perception is a scalar that represents a single environmental entity or attribute, rather than maintaining the holoform character of the initial sensory input.

Martin

[From Rick Marken (2011.11.12.1210)]

Martin Taylor (2011.11.14.12.55)--

BP: If you can explain how this could possibly work, now would be a good time
to do so...

RM: I believe I said almost exactly the same thing back when I was
participating in this discussion.

MT: I thought it was I who has been looking for an explanation of how it
_cannot_ work!

Ah, yes, that's why I got out of the discussion. You want an
explanation of how something that doesn't exist cannot exist. I'm
afraid that such an explanation would be way above my pay grade;-)

MT: The failure of my explanation, and my further consideration of its
implications, leads me to look more favourably on VHPCT. That worries me a
little.

What worries me even more is why I re-entered this thread;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.13.15.19]

[From Rick Marken (2011.11.12.1210)]

MT: I thought it was I who has been looking for an explanation of how it
_cannot_ work!

Ah, yes, that's why I got out of the discussion. You want an
explanation of how something that doesn't exist cannot exist.

Are you so sure it doesn't exist? How?

At least satisfy my lesser request, and say what it is about the possibility that I can't see that leads you -- you personally -- to say that you don't see how it could work.

Martin

[From Rick Marken (2011.11.12.1325)]

MT: I thought it was I who has been looking for an explanation of how it
_cannot_ work!

RM: Ah, yes, that's why I got out of the discussion. You want an
explanation of how something that doesn't exist cannot exist.

MT: Are you so sure it doesn't exist? How?

It's kind of like god. I have seen no evidence that it exists so,
unless some evidence of its existence turns up, I'll stick with the
null hypothesis;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.11.13.1530 MST)]

Martin Taylor 2011.11.14.12.55 –

---------Faulty explanation-------

I considered only the top level
of a hierarchy, whether it be a simulation or a biological system. If the
question can be resolved for the top level, then it is resolved for every
level (or so I believe).

The top level consists of a number of elementary control units, each of
which controls a scalar perceptual signal to maintain a fixed reference
value. I start by assuming an initial condition in which perceptual
signal is at that reference value and the error value is zero in every
such control unit.

In the standard HPCT view, each of those scalar perceptual signals
represents some identifiable element of the exterior environment.

This assumption is unverifiable. The nearest we could get would be to
find some correspondence between a controlled perception and a variable
in another model of the world, such as physics – which is also made of
perceptions. It is unlikely that any system concept would correspond to
an identifiable element of the exterior world.
I use the term “aspect” in place of “identifiable
element.” The latter term suggests that there are objective elements
of the exterior world, and all the perceptual system has to do is to
identify them. “Aspect,” on the other hand, implies only a way
of looking at reality.

···

==========================================================================
Origin: 1350–1400; Middle English < Latin aspectus appearance, visible form, the action of looking at, equivalent to aspec- (variant stem of aspicere to observe, look at; a-5 + -spicere, combining form of specere to see) + -tus suffix of v. action

At the levels where I have explored reorganization, a sensation is
assumed to be a weighted sum of intensity signals. By the vector thesis,
the weighted sum appears to be unnecessary for controlling a sensation:
the separate intensity signals would be sufficient. To allow independent
control of different sensations, under the vector approach, it would be
necessary somehow to pick some regularity out of all the intensity
signals that covary in proportions like those in a particular weighted
sum, and control their relative proportions without ever computing them.
As soon as you introduce any kind of computation, we’re back in the
perceptual input function model with scalar variables. As far as I can
see, that is.
There’s something I don’t understand about this discussion. Why is it so
important to you to have only vectors in the system? I ask especially
because you haven’t said how you think that would actually be
accomplished – there’s a lot of magic lurking in the background. As the
PCT model stands, there are still many things unexplained, such as
qualia, but the model of perception itself, aside from the question of
consciousness, seems fairly well in hand. The representation of vector
characteristics as scalars seems simple enough in principle even if we
don’t know what kind of computations would be involved. But there seems
to be something unsatisfactory to you about that idea. What is
it?
The way it seems to me right now is that the vector concept lacks any
mechanism to distinguish patterns – it’s just a collection of variables.
Of course we, looking at examples, can see patterns, but there’s nothing
in the model that can see them. That, to me, is a big gap in the
vector-only idea of perception.

By the way, a vector is not by itself “orthogonal”.
Orthogonality is conferred by computing sets of functions of the
components of the vector such that the functions are related through
orthogonal trajectories. Except for using that term, that’s what chapters
7 and 8 of LCS3 are about. Different vectors in N-space can be orthogonal
to each other, but not by themselves. This happens when the components
vary in such a way as to change the magnitude of one vector without
changing other vectors in the same space. To say “in the same
space” is to say that the same components are part of all vectors in
that space, in different proportions.
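Bill's point here, that orthogonality is a relation between vectors built from the same components rather than a property of one vector, can be illustrated with the x+y / x-y example from earlier in the thread (a minimal sketch of my own; the numbers are arbitrary):

```python
def dot(u, v):
    """Inner product; zero means the two directions are orthogonal."""
    return sum(a * b for a, b in zip(u, v))

# Two perceptual functions over the same components x and y,
# written as weight vectors in (x, y) space:
w1 = [1.0, 1.0]   # p1 = x + y
w2 = [1.0, -1.0]  # p2 = x - y

# Each has zero projection on the other.
print(dot(w1, w2))  # 0.0

# Moving the components along the w1 direction changes p1's
# magnitude but leaves p2 untouched, even though both perceptions
# share both components.
x, y = 0.2, -0.7
for _ in range(5):
    x, y = x + 0.1, y + 0.1
    p1, p2 = x + y, x - y
    print(round(p1, 3), round(p2, 3))  # p1 grows; p2 stays at 0.9
```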

Best,

Bill P.

[Martin Taylor 2011.11.13.23.02]

[From Bill Powers (2011.11.13.1530 MST)]

  Martin Taylor 2011.11.14.12.55 --



  ---------Faulty explanation-------
  I considered only the top level of a hierarchy, whether it be a simulation or a biological system. If the question can be resolved for the top level, then it is resolved for every level (or so I believe).

  The top level consists of a number of elementary control units, each of which controls a scalar perceptual signal to maintain a fixed reference value. I start by assuming an initial condition in which perceptual signal is at that reference value and the error value is zero in every such control unit.

  In the standard HPCT view, each of those scalar perceptual signals represents some identifiable element of the exterior environment.

  This assumption is unverifiable. The nearest we could get would be to find some correspondence between a controlled perception and a variable in another model of the world, such as physics – which is also made of perceptions. It is unlikely that any system concept would correspond to an identifiable element of the exterior world.

  I use the term "aspect" in place of "identifiable element." The latter term suggests that there are objective elements of the exterior world, and all the perceptual system has to do is to identify them. "Aspect," on the other hand, implies only a way of looking at reality.

Good. I prefer "aspect" but stopped using it a few messages ago because the implications you mention seemed to be getting lost. The aspects I was interested in were those that could be influenced by one's actions, the environmental correlates of controllable perceptions, so I did not need to consider triangulation.

  At the levels where I have explored reorganization, a sensation is assumed to be a weighted sum of intensity signals. By the vector thesis, the weighted sum appears to be unnecessary for controlling a sensation: the separate intensity signals would be sufficient. To allow independent control of different sensations, under the vector approach, it would be necessary somehow to pick some regularity out of all the intensity signals that covary in proportions like those in a particular weighted sum, and control their relative proportions without ever computing them. As soon as you introduce any kind of computation, we're back in the perceptual input function model with scalar variables. As far as I can see, that is.

I'm afraid I don't see the connection between the two halves of your second-last sentence.

Overall, I agree with the thrust of this paragraph, but I sense an underlying viewpoint that is encapsulated in "it would be necessary to pick out." Who does the picking, and why would it be necessary? As I imagine it, what would be necessary would be for the different interconnection weights between levels to adjust in the way reorganization is assumed to adjust weights in both standard HPCT and VHPCT, converging toward an organization in which control is adequate to keep intrinsic variables within their genetically determined viable ranges.

If a VHPCT variant truly represents what happens in a living organism, there would be no need for anyone or any function to compute correlations or demonstrate orthogonality. An analyst might, when teasing apart what reorganization had produced, but that would not mean that the organism contained such a computational mechanism, any more than we expect the universe to compute that a 5 lb force opposing a 10 lb force will produce the same acceleration as would a 5 lb force acting alone, and then to apply that 5 lb force.

  There's something I don't understand about this discussion. Why is it so important to you to have only vectors in the system?

It isn't. As one who would prefer a system simple to analyze, I would prefer the VHPCT hypothesis to be eliminated from consideration, but not by Rick's ostrich trick.

As I explained a couple of times, the possibility that controllable perceptions might be represented as vectors in the hardware initially troubled me and I tried to prove to myself that it could not be true. Having failed to prove to myself its impossibility, I asked the CSGnet community if such a proof could be found. Having failed in that also, I find that I must give it equal status with the scalar representation "standard theory", in the sense that neither can be dismissed as being better attested by evidence than the other.

  I ask especially because you haven't said how you think that would actually be accomplished – there's a lot of magic lurking in the background.

We may be coming from different backgrounds here, because I would say that the degree of "magic" is fairly similar. It hinges on the properties of reorganization. The VHPCT magic requires only that reorganization produce a structure that controls a variety of perceptual representations of aspects that can be influenced in the environment. Standard HPCT adds the bit of magic that additionally, the perceptions are composed into individual signal paths, each corresponding to some identifiable aspect of the environment. (For another way of saying the same thing, see the last part of this message.)

It may be true that reorganization naturally performs this added bit of magic, in which case I think I would no longer need to be concerned with VHPCT as a possible alternative to standard HPCT. Have you ever studied reorganization in a simulated environment that has sufficient richness in its relationships to make multi-level control of simultaneously varying perceptions appreciably superior to single-level control? I know you have developed simulated systems with multilevel control, but so far as I know, they have assumed the HPCT structure rather than allowing it to emerge. And I know you have simulated multi-level reorganization, but have you done both together in a rich complex environment?

  The way it seems to me right now is that the vector concept lacks any mechanism to distinguish patterns – it's just a collection of variables. Of course we, looking at examples, can see patterns, but there's nothing in the model that can see them. That, to me, is a big gap in the vector-only idea of perception.

Does the notion that reorganization does the same job in VHPCT as in standard HPCT remove this gap for you? It does for me.

  By the way, a vector is not by itself "orthogonal". Orthogonality is conferred by computing sets of functions of the components of the vector such that the functions are related through orthogonal trajectories. Except for using that term, that's what chapters 7 and 8 of LCS3 are about. Different vectors in N-space can be orthogonal to each other, but not by themselves. This happens when the components vary in such a way as to change the magnitude of one vector without changing other vectors in the same space. To say "in the same space" is to say that the same components are part of all vectors in that space, in different proportions.

Quite so. Did you write this to convince yourself, or to explain to lurkers what I took for granted anyone reading this interchange seriously would know?

To continue the explanation for the benefit of lurkers, what we are talking about is a family of vectors in N-space, where N is the number of elementary control units at a level controlling scalar perceptual signals. Each member of the family varies in magnitude as a consequence of changes in some particular aspect of the environment. VHPCT and standard HPCT are identical in this.

In order to specify a vector in N-space, one must specify a basis, which is a set of orthogonal vectors such as the left-right, front-back, up-down directions we often use in the 3-space of everyday life. (The basis vectors don't have to be mutually orthogonal, but for convenience orthogonality is often assumed.) There's nothing mathematically special about those three directions except that they are orthogonal (their projections on each other have zero magnitude). Their specialness derives from life in a world defined by gravity: horizons are on average left-right, heavy things and light balloons fall and rise up-down, and things come and go front-back. "Up to the right and a bit back" is not a natural direction to choose as a basis vector, though mathematically it is as good a start as any other.

In the 3-space of colour, it is less easy to specify a set of three natural directions to serve as a basis. One sees this difficulty in the different ways one is allowed to specify colour on a computer. Nevertheless, three numbers is all you need to specify a colour once you define the basis.

In N-space, in the general case it is even less easy to find a natural basis set of directions. However, when we are talking about perceptions at one level of an HPCT hierarchy, it seems natural to use the perceptual input functions of the N elementary control systems at that level, even though they are not guaranteed to be mutually orthogonal. For ease of description just now, let us assume they are orthogonal, in that the inputs to all N perceptual input functions can be simultaneously varied in such a way that any one perceptual signal can be varied by itself without changing any of the others.

In an HPCT hierarchy, every perceptible change in the environment will cause a change in the pattern of values of the N perceptual signals. That change in pattern defines a direction in the N-space for which the individual perceptual input functions form a basis. The direction is defined by a list of numbers {…, 3, -2, 5, …}, otherwise known as a vector.

In standard HPCT, whenever an aspect of the world perceived as unitary changes, the change vector at the corresponding perceptual level has the form {…, 0, 0, M, 0, 0, …}, where M represents the magnitude of the change. If a different aspect changes, the same region of the vector might look like {…, 0, K, 0, 0, 0, 0, …}, a different vector element being non-zero. The two vectors are orthogonal. The only difference for VHPCT is that under the same conditions, the change vector would have more than one non-zero element. Changes in different independent unitary aspects of the environment would still produce orthogonal vectors of change in the perceptual N-space.
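The claim that orthogonality of change vectors survives the move from HPCT to VHPCT is just the statement that an orthonormal change of basis preserves inner products. A two-dimensional sketch of my own (M = 5, K = 3, and the rotation angle are arbitrary choices for illustration):

```python
import math

def rotate(v, theta):
    """Rotate a 2-vector by theta: a stand-in for an arbitrary
    orthonormal change of basis in N-space."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Standard-HPCT change vectors: one non-zero element each.
hpct_a = [5.0, 0.0]   # aspect A changed by M = 5
hpct_b = [0.0, 3.0]   # aspect B changed by K = 3

# The "VHPCT" picture: the same changes seen in a rotated basis.
# Now every element is non-zero, yet orthogonality is preserved.
theta = 0.7
vhpct_a = rotate(hpct_a, theta)
vhpct_b = rotate(hpct_b, theta)

print(dot(hpct_a, hpct_b), dot(vhpct_a, vhpct_b))  # both ~0
```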

One can look at a change of basis as a rotation of the space. Likewise, one can look at the perceptual change vectors in a level of the VHPCT hierarchy as just a rotation of what the change vectors would be in a standard HPCT hierarchy.

The orientation of the basis with respect to manipulable aspects of the environment is fixed in HPCT. If there is a manipulable perceptible aspect, there is one perceptual function that is altered by changes in that aspect. In VHPCT the orientation of the basis is arbitrary, but the demands on the control machinery seem to be identical. So yet another way of wording my question could be: "In our gravity-based everyday world, there are 3 natural directions to the basis space for describing movement; is there an analogous natural set of perceptual directions toward which reorganization tends to develop the complete hierarchical control structure?"

Martin
···

[From Rick Marken (2011.11.14.0900)]

Martin Taylor (2011.11.13.23.02)--

MT: Good. I prefer "aspect" but stopped using it a few messages ago because the
implications you mention seemed to be getting lost. The aspects I was
interested in were those that could be influenced by one's actions, the
environmental correlates of controllable perceptions, so I did not need to
consider triangulation.

RM: I can't think of any perception (aspect of the external
environment) that is not influenced by one's actions. The fact that
sensors are situated on a moveable frame (our bodies) suggests that
everything we perceive is influenced by what we are doing (or not
doing) at any instant. We exist in a feedback loop and there is no way
to separate what we get (perception) from what we do (action/non
action).

BP: There's something I don't understand about this discussion. Why is it so
important to you to have only vectors in the system?

MT: It isn't. As one who would prefer a system simple to analyze, I would prefer
the VHPCT hypothesis to be eliminated from consideration, but not by Rick's
ostrich trick.

There is no need for me to bury my head to avoid the VHPCT hypothesis
because there is no there there. The VHPCT hypothesis is just a bunch
of assertions that you are making. You are trying to make it sound
like VHPCT is some kind of alternative to the PCT architecture -- an
alternative that is implied (demanded?) by some neurophysiological
data. But your description of the VHPCT architecture has been very
vague and, therefore, it has been difficult (impossible) for me to
understand it as an alternative to the PCT architecture. It just
sounds to me like you are trying to make some kind of mountain out of
something less than a mole hill.

As I said awhile ago, I would be very interested in seeing what the
VHPCT model _is_, assuming there is some there there. It would be
easier for me to understand what the VHPCT model is if you could build
a working implementation of it, either as a set of equations or,
better, as a computer program. How about writing the VHPCT model of a
simple two-dimensional tracking task. If the controlled variable is
two dimensional I presume that would mean at least two elements in the
Vector component of the VHPCT model, so a model of a two dimensional
tracking task should make it possible to see the difference between
the VHPCT and PCT models.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2011.11.15.08.06]

[From Rick Marken (2011.11.14.0900)]

Martin Taylor (2011.11.13.23.02)--
MT: Good. I prefer "aspect" but stopped using it a few messages ago because the
implications you mention seemed to be getting lost. The aspects I was
interested in were those that could be influenced by one's actions, the
environmental correlates of controllable perceptions, so I did not need to
consider triangulation.

RM: I can't think of any perception (aspect of the external
environment) that is not influenced by one's actions. The fact that
sensors are situated on a moveable frame (our bodies) suggests that
everything we perceive is influenced by what we are doing (or not
doing) at any instant. We exist in a feedback loop and there is no way
to separate what we get (perception) from what we do (action/non
action).

Interesting way of looking at it. Are you saying that I can influence my perception of the roughness of the lake surface I am now regarding by moving to the other end of my window? (I have been at my cottage for a few days now, preparing for winter).

BP: There's something I don't understand about this discussion. Why is it so
important to you to have only vectors in the system?

MT: It isn't. As one who would prefer a system simple to analyze, I would prefer
the VHPCT hypothesis to be eliminated from consideration, but not by Rick's
ostrich trick.

There is no need for me to bury my head to avoid the VHPCT hypothesis
because there is no there there. The VHPCT hypothesis is just a bunch
of assertions that you are making.

I regard it as a bunch of questions I am asking. Maybe you can go back in my messages and pick up an assertion or two, but I don't remember making any relevant to the issue I raised. I've made mathematical assertions, such as that the VHPCT concept of the representation of a perception can be seen as a rotation of the HPCT concept in N-space, but I don't think such assertions are very radical.

  You are trying to make it sound like VHPCT is some kind of alternative to the PCT architecture

I'm trying to make it as clear as I can that the VHPCT architecture IS the HPCT architecture. Initially I did this by referring to your spreadsheet as an example of the VHPCT architecture.

If I was trying to make it sound as though VHPCT is an alternative to the PCT architecture, I wouldn't use a simple HPCT structure as an example, would I? I can certainly think of plausible alternative PCT architectures that are not standard HPCT (which is why I always use the "H" when talking about the standard hierarchic architecture), but if I want to talk about an alternative, I will specify where it differs from HPCT, while still being strict PCT.

  -- an alternative that is implied (demanded?) by some neurophysiological data.

"Suggested" would be a better word.

  But your description of the VHPCT architecture has been very vague and, therefore, it has been difficult (impossible) for me to understand it as an alternative to the PCT architecture.

The possibility that one might see it as vague was the reason I used your spreadsheet as a concrete example, to illustrate by example that I am asking a question about the behaviour of the standard architecture, not about the architecture itself.

As I said awhile ago, I would be very interested in seeing what the
VHPCT model _is_, assuming there is some there there. It would be
easier for me to understand what the VHPCT model is if you could build
a working implementation of it, either as a set of equations or,
better, as a computer program.

You already have your spreadsheet, so that part has been done. The difficulty with using it to answer my question is that the spreadsheet doesn't reorganize in order to maintain any intrinsic variables near their reference levels.

How about writing the VHPCT model of a
simple two-dimensional tracking task. If the controlled variable is
two dimensional I presume that would mean at least two elements in the
Vector component of the VHPCT model, so a model of a two dimensional
tracking task should make it possible to see the difference between
the VHPCT and PCT models.

Any of your models of 2-D tracking would be what you are asking for, if they incorporated a reorganization system so that the representation of the tracked variables was not at the whim of the programmer.

Let me simply ask my question for the umpty-zillionth time, using slightly different words yet again: Assuming that an HPCT hierarchy is the correct architecture for biological control systems, is there any evidence from theoretical analysis or practical example in biology or simulation that a scalar-valued controlled perception corresponding to something in the environment MUST be embodied in a scalar-valued signal somewhere in the HPCT hierarchy?

At first sight, any example of the Test would seem to answer the question. You make a model that incorporates control of a variable you guess might be the controlled variable and see how well it fits the subject's actual data. The model's structure includes your guessed variable as a scalar quantity, and if you find a guess for which the model data is a very good fit to the subject's data, you say "either this or something very like it is what the subject is tracking." Since the model incorporates a scalar value for the controlled perception, you assume that what the subject is controlling is a scalar value.

In VHPCT, all of that would be the same. The only difference is how this scalar value is represented inside the subject. Is this value the magnitude of a signal you could find by probing the appropriate neural pathway, or is it the magnitude of a vector whose individual elements are represented over several neural pathways? In VHPCT the number of elements of the vector could be anything from one to trillions. In standard HPCT it can only be one.
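
A toy sketch of the distinction (illustrative only; the encoding, the names, and the numbers are my assumptions, not part of either model): the same scalar can be carried on one pathway or spread across many as the signed magnitude of a vector along a fixed direction, and a downstream function that projects onto that direction cannot tell the difference.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(s, u):
    # Represent scalar s as a vector along a fixed unit direction u.
    return s * u

def decode(v, u):
    # Recover the scalar as the signed magnitude of v along u.
    return float(np.dot(u, v))

s = 3.7  # the controlled scalar value
for n in (1, 5, 1000):           # number of neural pathways carrying it
    u = rng.normal(size=n)
    u /= np.linalg.norm(u)       # fixed unit direction across the pathways
    v = encode(s, u)             # representation distributed over n pathways
    assert abs(decode(v, u) - s) < 1e-9  # readout is the same scalar in every case
```

With n = 1 this is the single-pathway signal of standard HPCT; with larger n it is a vector spread over many pathways, and nothing downstream of the readout changes.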

So the Test does not address the question. It provides an analyst's view. It finds a scalar variable that must be very like a scalar value that is controlled, but it does not indicate anything about how that value is represented in the subject.

Does the VHPCT structure and the question I am asking make more sense now? Nothing in the VHPCT structure violates the standard HPCT structure. All it does is remove one assertion that has always been implicit in HPCT, always assumed to be almost axiomatic, in the same way that the Euclidean "parallel lines extended infinitely will never meet" was assumed to be a necessary axiom about real space until the mid-nineteenth century. I'm asking whether this axiom of HPCT is necessary in theory or is a fact in biology, and whether there is any evidence one way or the other.

Martin


On 2011/11/14 12:01 PM, Richard Marken wrote:

[Martin Taylor 2011.11.15.09.00]

[From Bill Powers (2011.11.14.0735 MST)]

  Martin Taylor 2011.11.13.23.02



  It's almost feeling natural to type mSt instead of mDt. I shall give the third finger left hand a reinforcer. Any suggestions?

Personally, I consider the time stamp only as a convenient way of referring to a message when one wants to look back in the archives. Some of my time stamps refer to times some days before the message was actually committed to CSGnet, but they are guaranteed to be unique.

        MT: I prefer "aspect" but stopped using it a few messages ago because the implications you mention seemed to be getting lost. The aspects I was interested in were those that could be influenced by one’s actions, the environmental correlates of controllable perceptions, so I did not need to consider triangulation.

OK, aspect it is.

          BP earlier: As soon as you introduce any kind of computation, we’re back in the perceptual input function model with scalar variables. As far as I can see, that is.

MT: I’m afraid I don’t see the connection between the two halves of your second-last sentence.

BP: Let’s try this again – maybe it will be a shortcut.

  How is your picture of the result of reorganization different from Fig. 3-11 in B:CP?

My picture of VHPCT reorganization is, by definition, identical to that of HPCT. If you were to invent or discover a different method of reorganization, or alter the influence of different kinds of intrinsic variable on the reorganization process, those inventions and discoveries would apply equally in VHPCT.

...
  BP: The present model doesn't compute the weightings needed to achieve orthogonality between different functions of the same set of input elements, nor does it compute any correlations. The orthogonality simply grows out of reorganization, for the simple reason that the multiple control systems involved control best when the assorted functions sense and affect orthogonal functions of the common set of input variables.

No differences here between standard HPCT and VHPCT.
      There’s something I don’t understand about this discussion. Why is it so important to you to have only vectors in the system?

MT: It isn’t. As one who would prefer a system simple to analyze, I would prefer the VHPCT hypothesis to be eliminated from consideration, but not by Rick’s ostrich trick.

As I explained a couple of times, the possibility that controllable perceptions might be represented as vectors in the hardware initially troubled me and I tried to prove to myself that it could not be true. Having failed to prove to myself its impossibility, I asked the CSGnet community if such a proof could be found. Having failed in that also, I find that I must give it equal status with the scalar representation “standard theory”, in the sense that neither can be dismissed as being better attested by evidence than the other.

  BP: I have shown above, I think, that a representation of the relationship between any two vectors can be transformed into the form I chose for representing the control hierarchy. This shows that (a) a vector representation of the hierarchy is possible, and (b) that it is equivalent to the functional representation I use. What more do you want?

If I read this correctly, you are now saying that it is inherently impossible to distinguish between standard and V versions of HPCT. That is a definite answer to my question, just as was your assertion to the contrary a few days ago. I have come, reluctantly, to give more credence to this answer than I did when I first asked the question, but I don’t trust either very far. If your current answer is correct, I suppose we have to include the V version of HPCT in further consideration of the possible behaviours of the HPCT architecture.

...
  But multi-level control is not just an alternative to single-level control. To control a logical variable, you can’t use weighted sums.

This brings out a point that I often have wanted to make. There is no value, mathematically speaking, in any multi-level control that uses only weighted sums as the perceptual input functions of the higher levels. All you accomplish is a rotation of the representation space. You don’t get anything new, as you can when you incorporate nonlinear processes in the perceptual input functions. Perhaps you gain convenience, perhaps not. (Maybe I should also enter a caveat here about reference input functions as well, but we aren’t talking about them here.)
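
The point about weighted sums can be checked directly. In this sketch (the matrices and input values are chosen only for illustration), two stacked levels of pure weighted sums collapse into a single weighted sum, while a nonlinearity between the levels breaks that collapse:

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5, 3.0])       # first-level perceptual signals

W1 = np.array([[ 1.0, 0.5,  0.0, 0.0],    # weighted-sum input functions, level 2
               [ 0.0, 1.0, -0.5, 0.0],
               [ 0.0, 0.0,  1.0, 0.5],
               [-0.5, 0.0,  0.0, 1.0]])
W2 = np.array([[1.0, 1.0, 0.0, 0.0],      # weighted-sum input functions, level 3
               [0.0, 1.0, 1.0, 0.0],
               [0.0, 0.0, 1.0, 1.0],
               [1.0, 0.0, 0.0, 1.0]])

# Two stacked linear levels are exactly one linear level (a change of basis):
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# A nonlinearity between the levels cannot be absorbed into the weights:
p_nonlinear = W2 @ np.maximum(W1 @ x, 0.0)  # e.g. rectification between levels
assert not np.allclose(p_nonlinear, (W2 @ W1) @ x)
```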

...
        BP earlier: The way it seems to me right now is that the vector concept lacks any mechanism to distinguish patterns – it’s just a collection of variables. Of course we, looking at examples, can see patterns, but there’s nothing *in the model* that can see them. That, to me, is a big gap in the vector-only idea of perception.

      Does the notion that reorganization does the same job in VHPCT as in standard HPCT remove this gap for you? It does for me.

    But reorganization creates the functional relationships you are ignoring. It does the same job in VHPCT and in HPCT, if I understand your distinctions correctly, which may not be true. The vectors are NOT just collections of variables: they are functionally related.

One can imagine any kind of pattern in the perceptual signals, and possibly control it, but whether controlling that pattern has any
influence on intrinsic variables will depend on how the side-effects
of controlling that pattern affect the intrinsic variables. Only
when controlling that pattern is more likely to keep the intrinsic
variables within safe limits than to drive them out of their safe
zone will reorganization tend to maintain that particular structure.
So yes, one might choose to control some function of the height of
skirt hemlines and one’s balance between cash and stocks – and
people do just that – but we tend to call people who control such
functions “superstitious”. They may not easily be reorganized out of
existence, because they usually don’t much influence intrinsic
variables one way or another, but on balance, they should be more
likely to go away than to survive. That is as true in VHPCT as it is
in standard HPCT.

On reorganization demonstrations...

Have you experimented with reorganization when the set of intrinsic variables that influence the rate of reorganization includes more
than just quality of control? I don’t have your books with me, but I
don’t remember you mentioning such an experiment here or in what I
have read. Maybe I am wrong, but it seems to me that you can test
reorganization using control quality as your single intrinsic
variable only if you define ex officio the quantity or quantities
being controlled at the top level.

Come to think of it, it might be possible to use that last observation in testing my question.

We could set up a system to reorganize so that it controls a small number of scalar values that correspond to states of an environment
the experimenter can disturb, and by fiat provide the reference
levels for those values to the top level of the reorganizing
structure. To make the experiment worthwhile, these states should be
functions of several quantities as seen at the sensor level.
Normally, you would provide each reference value separately to the individual top-level control units. For this experiment, the reference values would instead be provided as the magnitudes of sets of orthogonal vectors. One example would provide the reference values in the normal way, as individual quantities to the individual top-level control units (the identity basis); other examples would provide the reference values as sets of mutually orthogonal vectors distributed across the top-level units.

If, after reorganization, the system with the individually specified reference values outperforms the other examples when you disturb the
corresponding scalar environmental quantities, or if it reorganizes
to criterion appreciably faster than the vector-valued reference
examples, my question would be addressed. Such a result would
strongly suggest that we should dismiss VHPCT from further
consideration. On the other hand, if all the examples perform
similarly, such a result would say that VHPCT should continue to be
considered as a viable form of PCT for further analysis and
experiment.
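
A sketch of what "providing the reference values as magnitudes of orthogonal vectors" could mean (the rotation matrix and numbers here are my own illustrative assumptions): the same scalar targets are re-expressed in a rotated orthogonal basis, and because rotation preserves lengths, the two presentations carry the same total error magnitude at every moment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Scalar environmental targets, presented in two ways:
r = np.array([10.0, -5.0, 3.0, 0.0])          # individually, one per top-level unit
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # a set of mutually orthogonal unit vectors
r_distributed = Q @ r                         # the same targets as vector magnitudes
                                              # spread across the top-level units

# For any current perceptual state p, the two presentations are related by the
# same rotation, so the total error magnitude is identical:
p = rng.normal(size=n)
assert np.isclose(np.linalg.norm(r - p), np.linalg.norm(r_distributed - Q @ p))
```

Whether reorganization converges equally fast under the two presentations is exactly the empirical question; the identity above only shows that the comparison would start from equivalent conditions.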

Does that make sense?

Martin

On 2011/11/14 12:02 PM, Bill Powers wrote:

[From Bill Powers (2011.11.15.0759 MST)]

Martin Taylor 2011.11.15.08.06


Rick Marken (RM) commented: But your description of the VHPCT
architecture has been very vague and, therefore, it has been difficult
(impossible) for me to understand it as an alternative to the PCT
architecture.
MT: The possibility that one might see it as vague was the reason I
used your spreadsheet as a concrete example, to illustrate by example
that I am asking a question about the behaviour of the standard
architecture, not about the architecture itself.
BP: This isn’t really a clarification, because the spreadsheet is an
HPCT model, which you seem to be saying is not a VHPCT model. But you
haven’t shown us what a VHPCT model would look like, so how are we to
tell the difference?
RM: As I said awhile ago, I would be very interested in seeing what the VHPCT model is, assuming there is some there there. It would be easier for me to understand what the VHPCT model is if you could build a working implementation of it, either as a set of equations or, better, as a computer program.
MT: You already have your spreadsheet, so that part has been done.
BP: Pardon me if I draw false conclusions from superficial appearances, but you appear to be avoiding the task of designing and programming a VHPCT model that works, so we can study it and learn what you mean. This is not fair, because all of my ideas have been illustrated, to the best of my ability, by working models that allow anyone to test them, and I’ve provided the source code so anyone can see how the models work. If you can’t do that, I’m entitled to claim (until proven wrong) that you don’t actually have a clear concept to communicate. If that’s the case, you’re just bringing up random alternatives as they pop into your head with no idea whether they’re viable. If there is some systematic principle in the background that leads you to think that the vector-only system is a real possibility, you haven’t revealed it. You also haven’t shown us how it would look as a model.
RM: How about writing the VHPCT model of a simple two-dimensional
tracking task. If the controlled variable is two dimensional I presume
that would mean at least two elements in the Vector component of the
VHPCT model, so a model of a two dimensional tracking task should make it
possible to see the difference between the VHPCT and PCT models.
MT: Any of your models of 2-D tracking would be what you are asking
for, if they incorporated a reorganization system so that the
representation of the tracked variables was not at the whim of the
programmer.

BP: So incorporate a reorganization system and demonstrate that what you
say is true. This discussion has gone as far as it can with nothing but
allegations on your side and no reason to accept that they are true. If
you can’t yourself generate a demonstration of what you’re talking about,
are you really talking about
anything?

MT: Let me simply ask my question for the umpty-zillionth time, using
slightly different words yet again: Assuming that an HPCT hierarchy is
the correct architecture for biological control systems, is there any
evidence from theoretical analysis or practical example in biology or
simulation that a scalar-valued controlled perception corresponding to
something in the environment MUST be embodied in a scalar-valued signal
somewhere in the HPCT hierarchy?
At first sight, any example of the Test would seem to answer the
question. You make a model that incorporates control of a variable you
guess might be the controlled variable and see how well it fits the
subject’s actual data. The model’s structure includes your guessed
variable as a scalar quantity, and if you find a guess for which the
model data is a very good fit to the subject’s data, you say “either
this or something very like it is what the subject is tracking.”
Since the model incorporates a scalar value for the controlled
perception, you assume that what the subject is controlling is a scalar
value.
In VHPCT, all of that would be the same. The only difference is how
this scalar value is represented inside the subject. Is this value the
magnitude of a signal you could find by probing the appropriate neural
pathway, or is it the magnitude of a vector whose individual elements are
represented over several neural pathways? In VHPCT the number of elements
of the vector could be anything from one to trillions. In standard HPCT
it can only be one.
BP: Please try to restrain your impatience with us slow and ignorant
dolts who can’t understand how right you are. That shoe belongs on the
other foot. Do you have any demonstration that what you call a vector
control system without scalar representations can even exist? Is there
any way to build any real system in which the controllable variables of
interest are not scalars?

You’re raising the bar now, asking that we solve the problem of how
perceptions that have objective effects on intrinsic variables can come
to be created by perceptual input functions. I’ve already admitted that I
can’t program real perceptual input functions in a hierarchy that extends
above the level of sensations, so you know that you’re asking the
impossible. We can take any one type of perception and represent it with
the canonical model, but that’s done by skipping the step of showing how
the perceptual input function is organized. To model a realistic
hierarchy we have to know what the real function is, and that is beyond
us right now.

But that doesn’t let you off the hook. You’re the one making claims and
asking questions that you seem to think make sense. It’s up to you to
show that you’re not just tossing disorganized thoughts out for viewing.
It’s fine to have disorganized thoughts that just pop up – that’s called
reorganization. But until you can select some of those thoughts and test
them, there’s no reason to take them seriously, is there? Reorganization
is a private matter; all that the rest of the world cares about is what
you end up selecting out of the random products, discarding the
rest.

MT: So the Test does not address the question. It provides an
analyst’s view. It finds a scalar variable that must be very like a
scalar value that is controlled, but it does not indicate anything about
how that value is represented in the subject.

Does the VHPCT structure and the question I am asking make more sense
now? Nothing in the VHPCT structure violates the standard HPCT structure.
All it does is remove one assertion that has always been implicit in
HPCT, always assumed to be almost axiomatic, in the same way that the
Euclidean “parallel lines extended infinitely will never meet”
was assumed to be a necessary axiom about real space until the
mid-nineteenth century. I’m asking whether this axiom of HPCT is
necessary in theory or is a fact in biology, and whether there is any
evidence one way or the other.
BP: No, it doesn’t make more sense because you still haven’t said
what you’re proposing as an alternative. You’re not just removing one
assertion about scalars; you’re introducing the assertion that real
systems can work without scalars. How would a model work if it had no
scalar representations in it? You’re just alluding to the answer without
giving it. I can’t even imagine how to do it. Can you? I don’t think so.

If I could imagine a model without scalar representations in it, I
could at least compare its behavior with that of the model with scalar
representations. But the latter is all I have to look at. If you want me
to consider your proposition, then please tell me what it is. SHOW me
what it is. Don’t just keep pointing to something that remains offstage
and out of sight.

Remember that you said that a set of parallel redundant signals is still
a scalar.

To change to a more productive mode, here is a section from my
ArmControlReorg demonstration: If you want to test your ideas, feel free
to use this way of doing it. You will notice that this extract begins
with some operations using matrices and vectors.

=============================================================================

["MaxMax" is 14]

type
  VectorType = array[1..MaxMax] of Single;
  CoeffType = array[1..MaxMax] of VectorType;

procedure MM(var M1, M2, M3: CoeffType);
// Multiply two matrices to get a third matrix.
var
  I, J, K: Integer;
begin
  for J := 1 to MaxMatrix do
    for I := 1 to MaxMatrix do
    begin
      M3[J, I] := 0.0;
      for K := 1 to MaxMatrix do
        M3[J, I] := M3[J, I] + M1[I, K] * M2[K, J];
    end;
end;

procedure MV(var M: CoeffType; var V1, V2: VectorType);
// Premultiply vector by square matrix: V2 = M * V1
var
  I, J: Integer;
begin
  for I := 1 to MaxMatrix do
  begin
    V2[I] := 0.0;
    for J := 1 to MaxMatrix do
      V2[I] := V2[I] + M[I, J] * V1[J];
  end;
end;

procedure VS(var A, B, C: VectorType);
// Vector subtraction, clipped to +/-2000: C = A - B
var
  I: Integer;
begin
  for I := 1 to MaxMatrix do
  begin
    C[I] := A[I] - B[I];
    if C[I] > 2000 then C[I] := 2000
    else if C[I] < -2000 then C[I] := -2000;
  end;
end;

procedure VA(var A, B, C: VectorType);
// Vector addition: C = A + B
var
  I: Integer;
begin
  for I := 1 to MaxMatrix do
    C[I] := A[I] + B[I];
end;

procedure VI(var A, B: VectorType; K: Double);
// Vector integration: B = B + 0.005*(K*A - B) * dt
var
  I: Integer;
begin
  for I := 1 to MaxMatrix do
    B[I] := B[I] + 0.005 * (K * A[I] - B[I]) * dt;
end;

[INTERVENING PROCEDURES OMITTED…]

procedure TestOrtho(var Ortho: Double);

// ------ Make Reference and Disturbance Patterns -----------

procedure MakeSmoothPattern;

procedure MakeDisturbances;
// Each call provides the next smoothed disturbance value for each control system

procedure ClearDisturbances;
// Each call supplies a zero disturbance value for each control system

// ----------- Initialize Input and Output Matrices --------

[CONTROL SYSTEM OPERATION AND REORGANIZATION]

procedure DoControlLoops;
var
  I, J: Integer;
  Emax: Double;
begin
  if Reorg then // add corrections to coefficients ("swim in a straight line")
    for I := 1 to MaxMatrix do // for i-th control system
    begin
      Emax := Sqrt(LastErSq[I]);
      if Emax > 100 then Emax := 100;
      for J := 1 to MaxMatrix do // for j-th weight
        Wo[I, J] := Wo[I, J] + Emax * GainReorg * InputCorrect[I, J];
        {Note misnaming -- that should be called OutputCorrect}
    end;

  // One iteration of all control systems
  MV(Wi, Qi, P);   { p = Wi * qi -- matrix times vector }
  VS(R, P, E);     { e = r - p -- vector subtract }
  VI(E, O, Ko);    { o = o + 0.005*(Ko*e - o)*dt -- vector integrate }
  MV(Wo, O, Temp); { temp = Wo * o -- output matrix times output vector }
  if Disturbances then MakeDisturbances else ClearDisturbances;
  VA(Temp, D, Qi); { qi = temp + d -- vector add disturbance }

  // Keep joint positions within physical limits
  for I := 1 to MaxMatrix do
    with QiLimits[I] do
    begin
      if Qi[I] < QiMin then Qi[I] := QiMin
      else if Qi[I] > QiMax then Qi[I] := QiMax;
      if R[I] < QiMin then R[I] := QiMin
      else if R[I] > QiMax then R[I] := QiMax;
    end;

  // Load values for displaying the arm
  ShldR := Round(Qi[1]);
  ShldP := -Round(Qi[2]);
  ShldY := Round(Qi[3]);
  ElbowP := Round(Qi[4]);
  ForeR := Round(Qi[5]);
  WrY := Round(Qi[6]);
  WrP := Round(Qi[7]);
  Th1R := Round(Qi[8]);
  Th1P := Round(Qi[9]);
  Th2P := Round(Qi[10]);
  Th3P := Round(Qi[11]);
  F1P := Round(Qi[12]);
  F2P := Round(Qi[13]);
  F3P := Round(Qi[14]);

  TotalError := 0.0;
  SumOut := 0.0;
  for I := 1 to MaxMatrix do // for each control system
  begin
    ErSq[I] := Sqr(E[I]);
    TotalError := TotalError + ErSq[I]; // for overall gain reorganization
    SumOut := SumOut + Sqr(O[I]);
  end;
  TotalError := Sqrt(TotalError / MaxMatrix) {/ YScale};
  SumOut := Sqrt(SumOut / MaxMatrix);
  if MaxErSq <> 0 then
    PctTotalError := 100 * TotalError / MaxErSq
  else
    PctTotalError := 0.0;

  if ReorgOK then
  begin
    for I := 1 to MaxMatrix do // for each control system
    begin
      if ErSq[I] >= LastErSq[I] then // "tumble" that control system
        for J := 1 to MaxMatrix do // for each weight: change direction
          InputCorrect[I, J] := 2.0 * (0.5 - Random);
      LastErSq[I] := ErSq[I];
    end;
    ReorgOK := False; // turn off error detection
  end;
end;

=============================================================================

Each control system has an integrating output function, the output from
which is fanned out through output weights Wo[system,joint] to add to the
effect on all the environmental variables (Qi: joint angles). Each
control system senses the magnitudes of all the joint angles through an
input weighting matrix Wi[system, joint]. The input matrix is initialized
to be the unity matrix in which the diagonal is all ones and all other
entries are zero. This was done to make it easy to see where
reorganization of the input could be applied. It would also be possible
to use random weightings to see what would happen.
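
The reorganizing step in DoControlLoops is the E. coli method: keep changing the weights in the same random direction while squared error is falling, and "tumble" to a new random direction when it rises. Stripped of the control loop, the method can be sketched like this (a static quadratic stands in for the accumulated squared control error; the target weights and step size are illustrative, not taken from the demo):

```python
import numpy as np

rng = np.random.default_rng(3)

target = np.array([2.0, -1.0, 0.5])     # weights that would give zero error (hypothetical)
w = np.zeros(3)                          # current output weights
direction = rng.uniform(-1, 1, size=3)   # current "swim" direction
step = 0.01
last_err = np.inf

for _ in range(20000):
    err = np.sum((w - target) ** 2)      # stand-in for accumulated squared error
    if err >= last_err:                  # error not improving: "tumble"
        direction = rng.uniform(-1, 1, size=3)
    last_err = err
    w += step * direction                # otherwise keep swimming in a straight line

assert np.sum((w - target) ** 2) < 0.05  # drifts to near the error minimum
```

No gradient is ever computed; directions that happen to reduce error are simply kept longer than directions that don't.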

You will notice that all the control systems (red text) contain the usual scalar variables: p, r, e, and o. I don’t know how I would rewrite this code to eliminate them. But you can show me. It would be considerably harder to eliminate the scalar variables Qi, which are the joint angles.
If you want to ask how the input function knows it is supposed to be
sensing joint angles instead of, for example, flavors of ice cream, I
don’t know. I’m just assuming the relevant sensors exist and are used.
How would a vector-only model handle that?

If I’m still missing your point, perhaps the above details will help you
to make it more understandable.

Best,

Bill P.

[From Bill Powers (2011.11.15.1020 MST)]

Martin Taylor 2011.11.15.09.00


BP earlier: How is your picture of the result of reorganization
different from Fig. 3-11 in B:CP?
MT: My picture of VHPCT reorganization is, by definition, identical
to that of HPCT. If you were to invent or discover a different method of
reorganization, or alter the influence of different kinds of intrinsic
variable on the reorganization process, those inventions and discoveries
would apply equally in VHPCT.
BP: Then what are we arguing about? Identical models?

BP earlier: The present model doesn’t compute the weightings needed
to achieve orthogonality between different functions of the same set of
input elements, nor does it compute any correlations. The orthogonality
simply grows out of reorganization, for the simple reason that the
multiple control systems involved control best when the assorted
functions sense and affect orthogonal functions of the common set of
input variables.
MT: No differences here between standard HPCT and VHPCT.

BP earlier: There’s something I don’t understand about this
discussion. Why is it so important to you to have only vectors in the
system?
MT: It isn’t. As one who would prefer a system simple to analyze, I
would prefer the VHPCT hypothesis to be eliminated from consideration,
but not by Rick’s ostrich trick.
As I explained a couple of times, the possibility that controllable
perceptions might be represented as vectors in the hardware initially
troubled me and I tried to prove to myself that it could not be true.
Having failed to prove to myself its impossibility, I asked the CSGnet
community if such a proof could be found. Having failed in that also, I
find that I must give it equal status with the scalar representation
“standard theory”, in the sense that neither can be dismissed
as being better attested by evidence than the other.

I think you are confusing yourself. To say that “controllable
perceptions might be represented as vectors in the hardware” is to
play with the word “represent”. In PCT, variables are
represented as analogs: other variables with a magnitude that covaries
with the variable being represented. The perception of light intensity is
the frequency of neural impulses that covaries with the intensity of
light falling on a light receptor. It is a scalar because neural signals
can change only in frequency. The magnitude of one kind of physical
variable is used as a measure of the magnitude of another kind.

A vector, being a collection of separate signals, can represent at most
as many dimensions of the environment as there are elements of the
vector. Each element would be the output of one sensory cell. Each
sensory cell’s signal represents the summation of all
externally-influenced effects on it. So a vector perception is simply a
collection of separate scalar representations.

There are constraints on how this set of representations can change, due
to the fact that any one sensor is affected by many external variables.
The problem is similar to that of tomography, in which many different
sets of sensor signals have to be sorted out to yield a 2D cross-section
picture of the intervening mass.

But each sensory signal remains a scalar. The collection of scalars has
to be processed by some sort of computation before a new set of signals
can be generated representing different dimensions of the external world.
In the CAT scan, the processing of the lowest order of scalars results in
a new set of scalars, the brightnesses of pixels in a 2-D map. There are
many of these, so we still have a vector, but each element is a separate
scalar representation of a new dimension of the external
world.

BP earlier: I have shown above, I think, that a representation of the
relationship between any two vectors can be transformed into the form I
chose for representing the control hierarchy. This shows that (a) a
vector representation of the hierarchy is possible, and (b) that it is
equivalent to the functional representation I use. What more do you want?
MT: If I read this correctly, you are now saying that it is
inherently impossible to distinguish between standard and V versions of
HPCT. That is a definite answer to my question, just as was your
assertion to the contrary a few days ago.

I never asserted the contrary. You’re the one proposing VHPCT as an
alternative to HPCT, not I. I’ve been asserting that you haven’t shown
that there is any difference, at least not in a way I could
understand. I’m prepared to try to understand whatever difference you
propose, if I can agree with it. You haven’t proposed any model
yet.

MT: I have come, reluctantly, to give more credence to this
answer than I did when I first asked the question, but I don’t trust
either very far. If your current answer is correct, I suppose we have to
include the V version of HPCT in further consideration of the possible
behaviours of the HPCT architecture.
BP: If my answer is correct there is only one model. There is no
difference in architectures. There isn’t and never was any V version of
HPCT.

BP Earlier: But multi-level control is not just an alternative to
single-level control. To control a logical variable, you can’t use
weighted sums.
MT: This brings out a point that I often have wanted to make. There
is no value, mathematically speaking, in any multi-level control that
uses only weighted sums as the perceptual input functions of the higher
levels. All you accomplish is a rotation of the representation space. You
don’t get anything new, as you can when you incorporate nonlinear
processes in the perceptual input functions. Perhaps you gain
convenience, perhaps not. (Maybe I should also enter a caveat here about
reference input functions as well, but we aren’t talking about them
here).

Yes, precisely, which is why different levels in HPCT use different kinds
of input functions. I don’t rule out “fine structure” within
any levels I’ve proposed. Configurations are sometimes composed, it
seems, of smaller configurations and are parts of larger ones. But a
transition, the next level, is not any kind or size of configuration, and
neither is a sensation, the next lower level. I defined the levels so
they are, as far as I can see, different dimensions of
experience.

BP earlier: But reorganization creates the functional relationships
you are ignoring. It does the same job in VHPCT and in HPCT, if I
understand your distinctions correctly, which may not be true. The
vectors are NOT just collections of variables: they are functionally
related.
MT: One can imagine any kind of pattern in the perceptual signals,
and possibly control it, but whether controlling that pattern has any
influence on intrinsic variables will depend on how the side-effects of
controlling that pattern affect the intrinsic variables.

You’re using “one” in two different ways. I’m saying that the
analyst can observe patterns in a perceptual vector that the observed
system neither senses nor uses (assuming that the analyst could observe
variables corresponding to the vector
elements).

MT: On reorganization demonstrations…
Have you experimented with reorganization when the set of intrinsic
variables that influence the rate of reorganization includes more than
just quality of control? I don’t have your books with me, but I don’t
remember you mentioning such an experiment here or in what I have read.
Maybe I am wrong, but it seems to me that you can test reorganization
using control quality as your single intrinsic variable only if you
define ex officio the quantity or quantities being controlled at the
top level.

Not at all. In the armcontrolreorg code I sent you, the controlled
variable of the reorganization system is simply the sum of squares of
many samples of an error signal in a control system (reorganization is
local in this model). There is no other criterion for quality of
control.

MT: Come to think of it, it might be possible to use that last
observation in testing my question.
We could set up a system to reorganize so that it controls a small
number of scalar values that correspond to states of an environment the
experimenter can disturb, and by fiat provide the reference levels for
those values to the top level of the reorganizing structure. To make the
experiment worthwhile, these states should be functions of several
quantities as seen at the sensor level. Normally, you would provide each
reference value separately to the individual top-level control units, but
for this experiment the reference values would be provided as the
magnitudes of a set of orthogonal vectors. One example would be to
provide the reference values in the normal way, as individual
quantities to the individual top-level control units; other examples
would provide them as sets of mutually orthogonal vectors distributed
across the top-level units.
BP: There is no need for the top-level reference values to vary
orthogonally.

Even with random input weightings, reorganization (of output weights) can
result in independent control of each higher-level variable with any
distribution of reference signals. Non-orthogonality shows up as
interactions among systems at the same level, mutual disturbances. All
that does is require larger amounts of output to achieve control, since
some of the output is used to nullify the effects of disturbances from
other systems at the same level. Actual perfect conflict is achieved only
when the input matrix has a determinant of exactly zero, so no solution
of the simultaneous equations exists. The tolerable limit occurs when the
amount of output required is more than can be produced by one or more
conflicting systems. Reference signals can be set to any values if the
tolerable limit of required output is not reached.

All this, other than tolerable limits which aren’t defined, is
illustrated in Demo 7-2, ThreeSys. Set the determinant to the lowest
value, and see how large the output excursions
become.
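The determinant argument can be checked numerically. The sketch below is mine, not the Demo 7-2 code: two input functions have weight vectors separated by an angle theta, and as theta shrinks the determinant of the input matrix approaches zero while the environmental states needed to satisfy both references grow without bound:

```python
import numpy as np

def required_states(angle_deg, r=(0.0, 1.0)):
    """Solve for the environmental states that satisfy both reference
    values when the two input weight vectors are angle_deg apart.
    Returns (states, determinant of the input matrix)."""
    theta = np.radians(angle_deg)
    M = np.array([[1.0, 0.0],
                  [np.cos(theta), np.sin(theta)]])
    return np.linalg.solve(M, np.array(r)), np.linalg.det(M)

for angle in (90, 45, 10, 1):
    q, det = required_states(angle)
    print(f"angle={angle:2d}  det={det:.4f}  max |state| = {np.abs(q).max():.1f}")
```

At 90 degrees (orthogonal input functions) the required states are modest; at 1 degree the determinant is nearly zero and the required magnitudes are enormous, which is the "larger and larger outputs" regime, with exact zero giving no solution at all.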

MT: If, after reorganization, the system with the individually
specified reference values outperforms the other examples when you
disturb the corresponding scalar environmental quantities, or if it
reorganizes to criterion appreciably faster than the vector-valued
reference examples, my question would be addressed. Such a result would
strongly suggest that we should dismiss VHPCT from further consideration.
On the other hand, if all the examples perform similarly, such a result
would say that VHPCT should continue to be considered as a viable form of
PCT for further analysis and experiment.
Does that make sense?
Since independent reference signals are not required, I think we are
about to conclude that VHPCT never existed as a separate version of HPCT.
It was a red herring. If you demur from that conclusion, it’s up to you
to demonstrate that we must consider it. Show us a working VHPCT system
that is not identically the same as an HPCT system.

Best,

Bill P.

[From Rick Marken (2011.11.15.1630)]

Martin Taylor (2011.11.15.08.06)--

RM: I can't think of any perception (aspect of the external
environment) that is not influenced by one's actions. The fact that
sensors are situated on a moveable frame (our bodies) suggests that
everything we perceive is influenced by what we are doing (or not
doing) at any instant. We exist in a feedback loop and there is no way
to separate what we get (perception) from what we do (action/non-action).

Interesting way of looking at it. Are you saying that I can influence my
perception of the roughness of the lake surface I am now regarding by moving
to the other end of my window?

I am saying that you influence all your perceptions (whether you want
to or not) simply because of the fact that your sensors are housed on
a moving body. How much each perception is influenced differs
depending on what the perception is and what you do to influence it.
In the case of the roughness of the surface of a lake, that will be
influenced by movement away from the lake. The influence isn't very
strong, so you have to get pretty far away from the lake for the
roughness perception to decrease noticeably (I'm sure you've seen how
smooth lakes look from an airplane). Of course, the easiest way to
change your perception of roughness is to close your eyes. The movement
of your eyelids has a very large (and abrupt) effect on visual
perceptions of all kinds.

The rest of your post is being rather well answered by Bill so I will
duck out of this again.

Bye

Best

Rick

···

BP: There's something I don't understand about this discussion. Why is it
so
important to you to have only vectors in the system?

MT: It isn't. As one who would prefer a system simple to analyze, I
would prefer the VHPCT hypothesis to be eliminated from consideration,
but not by Rick's ostrich trick.

There is no need for me to bury my head to avoid the VHPCT hypothesis
because there is no there there. The VHPCT hypothesis is just a bunch
of assertions that you are making.

I regard it as a bunch of questions I am asking. Maybe you can go back in my
messages and pick up an assertion or two, but I don't remember making any
relevant to the issue I raised. I've made mathematical assertions, such as
that the VHPCT concept of the representation of a perception can be seen as
a rotation of the HPCT concept in N-space, but I don't think such assertions
are very radical.
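That mathematical assertion is easy to verify numerically. In this sketch (my illustration, not Martin's notation), an HPCT perception is carried by a single pathway, picked out by a weight vector with one non-zero element; rotating both the signal vector and the weight vector by any orthogonal matrix leaves the scalar value of the perception unchanged, even though it is now spread across every pathway:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
s = rng.normal(size=n)            # signals on n neural pathways
w = np.zeros(n)
w[2] = 1.0                        # HPCT: the perception is one pathway's signal

# Any orthogonal matrix Q acts as a rotation (possibly with a reflection)
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

p_hpct = w @ s                    # scalar carried by a single pathway
p_vhpct = (Q @ w) @ (Q @ s)       # same scalar, distributed over all pathways
assert np.isclose(p_hpct, p_vhpct)
```

Since Q is orthogonal, (Qw)·(Qs) = w·QᵀQ·s = w·s, which is why the two representations are behaviorally indistinguishable from outside.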

You are trying to make it sound
like VHPCT is some kind of alternative to the PCT architecture

I'm trying to make it as clear as I can that the VHPCT architecture IS
the HPCT architecture. Initially I did this by referring to your spreadsheet
as an example of the VHPCT architecture.

If I was trying to make it sound as though VHPCT is an alternative to the
PCT architecture, I wouldn't use a simple HPCT structure as an example,
would I? I can certainly think of plausible alternative PCT architectures
that are not standard HPCT (which is why I always use the "H" when talking
about the standard hierarchic architecture), but if I want to talk about an
alternative, I will specify where it differs from HPCT, while still being
strict PCT.

-- an
alternative that is implied (demanded?) by some neurophysiological
data.

"Suggested" would be a better word.

But your description of the VHPCT architecture has been very
vague and, therefore, it has been difficult (impossible) for me to
understand it as an alternative to the PCT architecture.

The possibility that one might see it as vague was the reason I used your
spreadsheet as a concrete example, to illustrate by example that I am asking
a question about the behaviour of the standard architecture, not about the
architecture itself.

As I said awhile ago, I would be very interested in seeing what the
VHPCT model _is_, assuming there is some there there. It would be
easier for me to understand what the VHPCT model is if you could build
a working implementation of it, either as a set of equations or,
better, as a computer program.

You already have your spreadsheet, so that part has been done. The
difficulty with using it to answer my question is that the spreadsheet
doesn't reorganize in order to maintain any intrinsic variables near their
reference levels.

How about writing the VHPCT model of a
simple two-dimensional tracking task. If the controlled variable is
two dimensional I presume that would mean at least two elements in the
Vector component of the VHPCT model, so a model of a two dimensional
tracking task should make it possible to see the difference between
the VHPCT and PCT models.

Any of your models of 2-D tracking would be what you are asking for, if they
incorporated a reorganization system so that the representation of the
tracked variables was not at the whim of the programmer.

Let me simply ask my question for the umpty-zillionth time, using slightly
different words yet again: Assuming that an HPCT hierarchy is the correct
architecture for biological control systems, is there any evidence from
theoretical analysis or practical example in biology or simulation that a
scalar-valued controlled perception corresponding to something in the
environment MUST be embodied in a scalar-valued signal somewhere in the HPCT
hierarchy?

At first sight, any example of the Test would seem to answer the question.
You make a model that incorporates control of a variable you guess might be
the controlled variable and see how well it fits the subject's actual data.
The model's structure includes your guessed variable as a scalar quantity,
and if you find a guess for which the model data is a very good fit to the
subject's data, you say "either this or something very like it is what the
subject is tracking." Since the model incorporates a scalar value for the
controlled perception, you assume that what the subject is controlling is a
scalar value.

In VHPCT, all of that would be the same. The only difference is how this
scalar value is represented inside the subject. Is this value the magnitude
of a signal you could find by probing the appropriate neural pathway, or is
it the magnitude of a vector whose individual elements are represented over
several neural pathways? In VHPCT the number of elements of the vector could
be anything from one to trillions. In standard HPCT it can only be one.

So the Test does not address the question. It provides an analyst's view. It
finds a scalar variable that must be very like a scalar value that is
controlled, but it does not indicate anything about how that value is
represented in the subject.

Does the VHPCT structure and the question I am asking make more sense now?
Nothing in the VHPCT structure violates the standard HPCT structure. All it
does is remove one assertion that has always been implicit in HPCT, always
assumed to be almost axiomatic, in the same way that the Euclidean "parallel
lines extended infinitely will never meet" was assumed to be a necessary
axiom about real space until the mid-nineteenth century. I'm asking whether
this axiom of HPCT is necessary in theory or is a fact in biology, and
whether there is any evidence one way or the other.

Martin

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.11.14.0735 MST)]

Martin Taylor 2011.11.13.23.02

It’s almost feeling natural to type mSt instead of mDt. I shall give the
third finger of the left hand a reinforcer. Any suggestions?

MT: I prefer “aspect” but stopped using it a few messages
ago because the implications you mention seemed to be getting lost. The
aspects I was interested in were those that could be influenced by one’s
actions, the environmental correlates of controllable perceptions, so I
did not need to consider triangulation.
OK, aspect it
is.

BP earlier: As soon as you introduce any kind of computation, we’re
back in the perceptual input function model with scalar variables. As far
as I can see, that is.
MT: I’m afraid I don’t see the connection between the two halves of
your second-last sentence.
BP: Let’s try this again – maybe it will be a shortcut.

How is your picture of the result of reorganization different from Fig.
3-11 in B:CP?

[Image: B:CP Figure 3-11 (attachment missing)]

And if those weighted connections exist in the box called
“Network”, is not Fig. 3-12 a valid alternate representation
(though, to avoid clutter, not all possible connections are shown)? Both
figures are from B:CP.

[Image: B:CP Figure 3-12 (attachment missing)]

These diagrams could represent one level of what you call either HPCT or
VHPCT, couldn’t they? And the functions carry out any vector operation
you can define, don’t they?

MT: If a VHPCT variant truly represents what happens in a living
organism, there would be no need for anyone or any function to compute
correlations or demonstrate orthogonality. An analyst might, when teasing
apart what reorganization had produced, but that would not mean that the
organism contained such a computational mechanism, any more than we
expect the universe to compute that a 5 lb force opposing a 10 lb force
will produce the same acceleration as would a 5 lb force acting alone,
and then applying that 5 lb force.

BP: Naturally. The present model doesn’t compute the weightings needed to
achieve orthogonality between different functions of the same set of
input elements, nor does it compute any correlations. The orthogonality
simply grows out of reorganization, for the simple reason that the
multiple control systems involved control best when the assorted
functions sense and affect orthogonal functions of the common set of
input variables.

There’s something I don’t
understand about this discussion. Why is it so important to you to have
only vectors in the system?
MT: It isn’t. As one who would prefer a system simple to analyze, I
would prefer the VHPCT hypothesis to be eliminated from consideration,
but not by Rick’s ostrich trick.
As I explained a couple of times, the possibility that controllable
perceptions might be represented as vectors in the hardware initially
troubled me and I tried to prove to myself that it could not be true.
Having failed to prove to myself its impossibility, I asked the CSGnet
community if such a proof could be found. Having failed in that also, I
find that I must give it equal status with the scalar representation
“standard theory”, in the sense that neither can be dismissed
as being better attested by evidence than the other.

BP: I have shown above, I think, that a representation of the
relationship between any two vectors can be transformed into the form I
chose for representing the control hierarchy. This shows that (a) a
vector representation of the hierarchy is possible, and (b) that it is
equivalent to the functional representation I use. What more do you want?

BP earlier: I ask especially
because you haven’t said how you think that would actually be
accomplished – there’s a lot of magic lurking in the
background.
MT: We may be coming from different backgrounds here, because I would
say that the degree of “magic” is fairly similar.

It may be true that reorganization naturally performs this added bit
of magic, in which case I think I would no longer need to be concerned
with VHPCT as a possible alternative to standard HPCT. Have you ever
studied reorganization in a simulated environment that has sufficient
richness in its relationships to make multi-level control of
simultaneously varying perceptions appreciably superior to single-level
control?
But multi-level control is not just an alternative to single-level
control. To control a logical variable, you can’t use weighted sums. But
you need weighted sums to carry out the operations that logic control
requires as an output process. It’s not enough to reason out that you
should sell 100 shares of Microsoft. You then have to press the buttons
to call your broker, and talk to him. The logic system can’t do either
kind of behavior; it can only specify what behavior should be carried
out.

I know you have developed simulated systems with multilevel
control, but so far as I know, they have assumed the HPCT structure
rather than allowing it to emerge. And I know you have simulated
multi-level reorganization, but have you done both together in a rich,
complex environment?

BP:: Look at demo 7-2, ThreeSys. In the lower right corner are radio
buttons allowing you to use “local” or “global”
reorganization modes. In the local mode, each control system’s set of
output weights is separately reorganized on the basis of that control
system’s squared error signal. In the global mode, all nine weights are
reorganized on the basis of the squared error summed over all three
systems. Doesn’t the Local mode correspond to your VHPCT idea, and the
global one to HPCT? The local mode might converge a little faster than
the global one, but they both converge and produce workable control
systems.
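The local mode Bill describes can be sketched as an E. coli-style process. This is my reconstruction under simplifying assumptions (identity input weights, only output weights reorganize), not the ThreeSys source: each system's output weights drift along a random direction, and a system "tumbles" to a new random direction whenever its own accumulated squared error stops improving.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, gain = 3, 0.1, 5.0
r = np.array([1.0, -0.5, 0.3])        # fixed reference signals

def run_episode(O, steps=100):
    """Run the N coupled control loops with output weights O and
    return each system's summed squared error over the episode."""
    o = np.zeros(N)
    sq = np.zeros(N)
    for _ in range(steps):
        v = O.T @ o                   # outputs sum into the environment
        e = r - v                     # identity input weights: p = v
        o = o + dt * gain * e         # integrating output functions
        sq += e * e
    return sq

O = np.zeros((N, N))                  # reorganizable output weights
direction = rng.normal(size=(N, N))   # each system's current "swim" direction
prev = np.full(N, np.inf)
for _ in range(2000):
    O += 0.01 * direction             # keep drifting along current directions
    err = run_episode(O)
    for i in range(N):                # local mode: tumble on own error only
        if err[i] >= prev[i]:
            direction[i] = rng.normal(size=N)
    prev = err

assert run_episode(O).sum() < run_episode(np.zeros((N, N))).sum()
```

Replacing the per-system test with a single comparison of err.sum() against prev.sum(), tumbling all rows of `direction` together, gives the global mode.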

I see no reason why this wouldn’t work with each of multiple levels of
control, but since I don’t know how to write perceptual input functions
for configurations, I haven’t done that.

As to “richness,” you’re asking a lot. I haven’t progressed
above the sensation level in modeling any real perceptual functions. I
have shown that global reorganization still works for systems of N
control systems, each of which senses the sum of N randomly weighted
environmental variables and affects all N of them via N reorganizable
output weights. N can be anything from 10 to 500 systems (or more, I
guess, but it gets pretty slow for large numbers and I haven’t wanted to
wait that long). The reorganizing principle doesn’t care if it’s
operating on 10 systems or 500 systems, and it doesn’t know – or care –
whether the data it’s given represent one complex system or many simpler
ones.

I know that reading inadequately-commented source code by other people is
a crashing bore, but all the LCS3 programs have the source included, and
others have gone through them and understood them and reproduced the
results, so if you really care to understand, it seems possible if not
convenient.

BP earlier: The way it seems to
me right now is that the vector concept lacks any mechanism to
distinguish patterns – it’s just a collection of variables. Of course
we, looking at examples, can see patterns, but there’s nothing in the
model
that can see them. That, to me, is a big gap in the vector-only
idea of perception.

Does the notion that reorganization does the same job in VHPCT as in
standard HPCT remove this gap for you? It does for me.

But reorganization creates the functional relationships you are ignoring.
It does the same job in VHPCT and in HPCT, if I understand your
distinctions correctly, which may not be true. The vectors are NOT just
collections of variables: they are functionally related.

… [re “orthogonal
vector”]

MT: Quite so. Did you write this to convince yourself, or to explain
to lurkers what I took for granted anyone reading this interchange
seriously would know?
BP: I was commenting on your reference to an “orthogonal
vector.” Perhaps that was just a slip of the finger that escaped
your editing, and you meant to say “orthogonal functions of vector
elements.”

Thanks for the useful review of N-space
vectors.

MT: In an HPCT hierarchy, every perceptible change in the environment
will cause a change in the pattern of values of the N perceptual signals.
That change in pattern defines a direction in the N-space for which the
individual perceptual input functions form a basis. The direction is
defined by a list of numbers {…, 3, -2, 5, …}, otherwise known as a
vector.

In standard HPCT, whenever an aspect of the world perceived as
unitary changes, the change vector at the corresponding perceptual level
has the form {…, 0, 0, M, 0, 0, …}, where M represents the magnitude
of the change. If a different aspect changes, the same region of the
vector might look like {…, 0, K, 0, 0, 0, 0, …}, a different vector
element being non-zero. The two vectors are orthogonal. The only
difference for VHPCT is that under the same conditions, the change vector
would have more than one non-zero element. Changes in different
independent unitary aspects of the environment would still produce
orthogonal vectors of change in the perceptual N-space.
BP: You’re speaking here of the standard HPCT vector after ideal
completion of reorganization, where the input vector matrix is such that
orthogonal functions exist, as nearly as possible. In demo8-1,
ArmControlReorg, all the input weights are set to the form you mention,
so the functions are orthogonal (each system senses only its own joint
angle). The output weights begin at zero, and each system’s output
affects all the joint angles through weights that are
reorganized.

I haven’t ventured yet to reorganize the input weights in that demo,
partly because at the levels of control at which I know the kind of input
computations that work, it’s a trivial problem. But I have several other
demos in which the input weights are chosen at random, and the output
weights are simply set so the output matrix (over all systems) is the
transpose of the input matrix. That showed that the transpose will
produce a stable set of control systems that can independently control
their own perceptions, though not without some effects of conflict due to
non-orthogonal input weighting patterns in different input functions.
Pure conflict is very unlikely, but as orthogonality declines, the
outputs have to get larger and larger to maintain independent control.
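The transpose result is easy to reproduce. In this sketch (mine, with assumed gains and step size, not one of the actual demos), each of N systems perceives a random weighted sum of N environmental variables, the outputs feed back through the transpose of the input matrix, and every perception settles at its reference:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, gain = 4, 0.05, 2.0

W = rng.normal(size=(N, N))          # random input weights, one row per system
W /= np.linalg.norm(W, axis=1, keepdims=True)
while np.linalg.cond(W) > 30:        # skip the rare nearly-singular draws,
    W = rng.normal(size=(N, N))      # where outputs must grow without bound
    W /= np.linalg.norm(W, axis=1, keepdims=True)
r = rng.normal(size=N)               # arbitrary reference signals

o = np.zeros(N)                      # output signals
for _ in range(50_000):
    v = W.T @ o                      # outputs act on the environment via W-transpose
    p = W @ v                        # each system perceives its weighted sum
    o = o + dt * gain * (r - p)      # integrating output functions

print("max |p - r| =", np.abs(p - r).max())
```

How fast this settles depends on how near-orthogonal the rows of W are: the loop matrix W·Wᵀ is positive definite whenever W has full rank, which is why the transpose arrangement is stable, but as the rows become less orthogonal the outputs needed for independent control grow, exactly the conflict gradient described above.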

It would be possible (and I’ve tried it), given a set of output weights,
to reorganize the input weights for best control. But the real problem
arises when you try to do both. I haven’t yet found criteria for
reorganizing that separate the effects of output and input weights. And
I’ve shied away from input-weight reorganization for another reason: once
you start down that road, you realize that solving that problem is the
whole ball game. The first person to solve it will have taken the first
long step toward a sentient robot. But it’s a very long and winding road
and I’ll not see the end of it.

MT: The orientation of the basis with respect to manipulable aspects
of the environment is fixed in HPCT. If there is a manipulable
perceptible aspect, there is one perceptual function that is altered by
changes in that aspect. In VHPCT the orientation of the basis is
arbitrary, but the demands on the control machinery seem to be identical.
So yet another way of wording my question could be: "In our
gravity-based everyday world, there are 3 natural directions to the basis
space for describing movement; is there an analogous natural set of
perceptual directions toward which reorganization tends to develop the
complete hierarchical control structure?"

BP: Not what you’d call a tutorial, but useful nonetheless. I’m quite
sure that reorganization is not biased toward any particular basis
orientation, or even toward basis orthogonality (there can be skewed and
curved bases, can’t there?). The only input to the reorganizing algorithm
is a squared error summed over many control systems, and the
“tumbles” simply choose random new directions in perceptual
space which the changes in parameters then follow. So the reorganizing
system doesn’t care what the perceptual signal means, whether the
functions are linear, or whether they’re independent. Are you brave
enough to start down that long road?

Best,

Bill P.

(Attachment 93b115.jpg is missing)

(Attachment 93b2da.jpg is missing)