the frame problem

[From Jeff Vancouver (2005 Aug 19, 11:00 am)]

I am working on a chapter for a book on free will and distributed
cognition with a philosopher (most of the contributors to the book will
be philosophers, including Daniel Dennett who apparently is a bigwig in
the field). My contribution involves control theory and my research into
problems that presumably cannot be understood from a control theory
perspective (that is, I am describing computational models and tests of
them that address these presumably unaddressable problems, muting the
"free will" explanation). A question raised at the conference that
launched this book (and where I presented my research) was whether
control theory could handle the "frame problem." My question is whether
anyone has written on this issue in relation to control theory and if
so, can I read it and possibly cite it in my chapter? I am not
particularly interested in opinions expressed here on the net because
those make poor references. It would need to be something in print.

Thanks,

Jeff Vancouver

Jeffrey B. Vancouver
Associate Professor
Department of Psychology
Ohio University
Athens, OH 45701
740-593-1071

[From Bill Powers (2005.08.19.0938 MDT)]

Jeff Vancouver (2005 Aug 19, 11:00 am) --

A question raised at the conference that launched this book (and where I
presented my research) was whether control theory could handle the "frame
problem."

I looked it up via Google and got this:


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The challenge, then, is to find a way to capture the non-effects of
actions more succinctly in formal logic. What we need, it seems, is some
way of declaring the general rule-of-thumb that an action can be assumed
not to change a given property of a situation unless there is
evidence to the contrary. This default assumption is known as the
common sense law of inertia. The (technical) frame problem can be
viewed as the task of formalising this law.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Is this what you’re talking about? If so, I don’t think control
theory has anything to contribute, except perhaps the statement that most
actions intended to control one variable end up affecting other variables
as well, though not controlling them. I don’t see any reason to assume
that an action will always have only its intended effects – if you do
adopt that “general rule of thumb,” I should think you would be
wrong more often than you are right. Moving a house into the garden is
quite likely to scrape some paint off and thus alter the color of the
house (the example used in the article I saw).

Do a Google search on “the frame problem” (in quotes) and
follow the first link (Stanford) to see what I looked at.

Best,

Bill P.

[From Bruce Nevin (2005 Aug 19, 11:53 am)]

Jeff Vancouver (2005 Aug 19, 11:00 am) --

Unless it must use the jargon of AI/CogSci with explicit references to
the Frame Problem as such, I should think there would be some relevant
references. Anything touching on how the organism's perceptions are
unknowable to the observer, the distinctness of the observer's universe
of perception from that of the observed system, and the discovery and
verification of controlled variables as a way past this impasse, for
example, is a kind of converse of the Frame Problem. Any discussion of
reorganization in learning and in evolution bears on how frames come to
be established. I'm sure people can suggest relevant papers by Bill or
Rick. Gary Cziko's Without Miracles and some parts of Phil's big book
seem relevant.

A couple of general references to provide context for anyone not
familiar with the issues:
http://cogweb.ucla.edu/ep/Glossary.html (scroll down to "Frame
Problem")
http://cogweb.ucla.edu/ep/Sociobiology.html

   /Bruce Nevin

[From Jeff Vancouver (2005.08.19.1434 EST)]

From Bill Powers (2005.08.19.0938 MDT)

Bill, you found the same source I found on the problem. My guess is that
you would not consider this "problem" all that relevant because you have
not pursued the computational modeling complexities involved in
representing the thinking and other non-acting modes for the control
systems (that is not meant to be an insult, we all have our priorities).
That is, it is more of a problem when one has switched gates to memory,
and specifically, what prompts those gates to turn the signals back to the
environment. It is likely to be an issue for M. Taylor's smooth
reorganization process (which I think I like), and certainly not one if
reorganization is simply random. It also seems conceptually not that
difficult to imagine a control theory model (or two) that could address
it. Actually creating such a model, however, will take a great deal of
work, so I was wondering if anyone had (Martin? Rick?).

Jeff

Jeffrey B. Vancouver
Associate Professor
Department of Psychology
Ohio University
Athens, OH 45701
740-593-1071

[From Bill Powers (2005.08.19.1322 MDT)]

Jeff Vancouver (2005.08.19.1434 EST) --

From Bill Powers (2005.08.19.0938 MDT)

Bill, you found the same source I found on the problem. My guess is that
you would not consider this "problem" all that relevant because you have
not pursued the computational modeling complexities involved in
representing the thinking and other non-acting modes for the control
systems (that is not meant to be an insult, we all have our priorities).
That is, it is more of a problem when one has switched gates to memory,
and specifically, what prompts those gates to turn the signals back to the
environment.

I don’t follow the last sentence. What do you mean by gates turning
signals back to the environment? I can’t make any sense of that idea. Did
you say what you meant to say?

As to the “computational modeling complexities” involved in
thinking and other non-acting modes of operation, has anybody come up
with computational models for those things, other than computer programs?
I would consider perception to be a non-acting mode of operation, and I
have to admit that I haven't figured out how those higher-level
perceptual systems work. Has somebody beat me to it?

Best,

Bill P.

[From Jeff Vancouver (2005.08.22.1640 EST)]

[From Bill Powers (2005.08.19.1322 MDT)]

Jeff Vancouver (2005.08.19.1434 EST) –

[old]That is, it is more of a problem when one has switched gates to
memory, and specifically, what prompts those gates to turn the signals back to
the environment.

From Bill Powers (2005.08.19.1322 MDT)

I don’t follow the last sentence. What do you mean by gates turning signals
back to the environment? I can’t make any sense of that idea. Did you say what
you meant to say?

[new] I am referring to what you call the memory switch being in the
vertical position (Chapter 15, p. 220 and thereabouts).

As to the “computational modeling complexities” involved in thinking
and other non-acting modes of operation, has anybody come up with computational
models for those things, other than computer programs? I would consider
perception to be a non-acting mode of operation, and I have to admit that I
haven't figured out how those higher-level perceptual systems work. Has
somebody beat me to it?

[new]I suppose
that all depends on what you mean by the word “work.” Am I to
assume by “computer programs” you are referring to the production
models of Newell and Simon? It is in response to those models that the frame
problem was generated.

Anyway, it is good to know that I am not
reinventing any wheels.

Thanks for the input.

Jeff

[From Bill Powers (2005.08.22.1504 MDT)]

Jeff Vancouver (2005.08.22.1640 EST) --

[new] I am referring to what you call the memory switch being in the
vertical position (Chapter 15, p. 220 and thereabouts).

I think I see. You didn't mean taking a signal on its way up and turning
it back (to the environment where it came from); you meant turning the
switch that was sending a downgoing signal back upward so that it lets the
signal go downward. Hope we have that one straightened out.

[new] I suppose that all depends on what you mean by the word "work." Am
I to assume by "computer programs" you are referring to the production
models of Newell and Simon? It is in response to those models that the
frame problem was generated.

Those were simply computer programs that did some of the same things the
human subjects did (i.e., they operated by similar rules). The
Newell-Simon-Shaw programs were flow charts, not system diagrams; they
were charts of behaviors, not diagrams of a computer's organization. So
they weren't really models in the same sense that a PCT diagram of a
control system is a model.

I still don’t see what the frame problem really is. Is it figuring out
what to do, or is it the epistemological problem of showing that there is
something real out there that corresponds to the perceptions we
experience?

Best,

Bill P.

[From Jeff Vancouver (2005.08.23.1145 EST)]

[From Bill Powers (2005.08.22.1504 MDT)]

I think I see. You didn't mean taking a signal on its way up and turning
it back (to the environment where it came from); you meant turning the
switch that was sending a downgoing signal back upward so that it lets the
signal go downward. Hope we have that one straightened out.

Yes!

I still don’t see what the
frame problem really is. Is it figuring out what to do, or is it the
epistemological problem of showing that there is something real out there that
corresponds to the perceptions we experience?

Two examples might help and also point to the solution. First, consider
yourself in a room looking at a wall in that room. Shift a foot in either
direction and all the stimuli about that wall have changed, yet you do not
really notice it. The wall looks the same. At a very low level, the
differences in the sensations do not become differences in perceptions (or
the low-level perception differences do not propagate up to the higher
input functions). That is, somehow our minds know what changes or
differences to ignore (and presumably what to send up the hierarchy). How
do we do this?

A variation on this example is to consider that some change in stimuli is
important (e.g., a lion comes into view). How do we know, of all the
stimuli that are changing, what the important changing stimuli are? One
answer to this example is that we have evolved mechanisms for
differentiating meaningful from less than meaningful stimulus change
(i.e., those whose input functions read "lion - danger" survived to pass
on that input function), and we developed inhibitors for the
changing-perspective problem.

A second example is the chess game. When
playing chess, the optimal move is theoretically knowable because the rules of
play are fixed. Yet, even Deep Blue does not do the processing required to find
the optimal solution because it is too onerous (too many options to consider). Humans
do even less of this than these kinds of programs. How do we “know”
what to consider, and when to stop considering?

These seem tractable, though difficult, problems for the computational
modeler. Just wondered if anyone had tackled them.

Jeff

[From Bill Powers (2005.08.23.1145 MDT)]

Two examples might help and also point to the solution. First, consider
yourself in a room looking at a wall in that room. Shift a foot in either
direction and all the stimuli about that wall have changed, yet you do not
really notice it. The wall looks the same. At a very low level, the
differences in the sensations do not become differences in perceptions (or
the low-level perception differences do not propagate up to the higher
input functions). That is, somehow our minds know what changes or
differences to ignore (and presumably what to send up the hierarchy). How
do we do this?

A variation on this example is to consider that some change in stimuli is
important (e.g., a lion comes into view). How do we know, of all the
stimuli that are changing, what the important changing stimuli are?

That’s already taken care of in HPCT. A higher system’s
perception is some function of a set of lower-level perceptual signals.
In general, this is a many-to-one transformation, meaning that multiple
input signals produce a single higher-level perceptual signal. Consider a
very simple example with only two input signals and one output
(perceptual) signal:

p2 = k1a*p1a + k1b*p2a

This means that the perceptual signal of the higher order (p2) is
computed as the sum of two signals in the lower order of systems, p1a and
p1b. The two weights k1a and k1b are characteristics of the higher-order
perceptual input function and determine the relative contributions of p1a
and p1b to the perceptual signal p2.

Now we can say that there are ways in which the two lower-order
perceptual signals can change that will change p2, and other ways they
can change that will not change p2. Isn’t this the question you raise
above? Suppose k1a is 2 and k1b is 3. What values can p1a and p1b have
that will leave p2 with a value of, say, 12?

The equation is p1a*2 + p1b*3 = 12. Multiply both sides by 2 to
get

p1a + p1b*6 = 24

and subtract p1b*6 from both sides to get

p1a = 24 - p1b*6

Since p1a and p1b both have to be greater than or equal to 0 (we’re
talking about neural signals here), this means that p1b must be in
the range 0 to 4. For every value of p1b in that range, there is a value
of p1a that will cause p1a*2 + p1b*3 to be equal to 12 –
that is, that will make the higher-order perceptual signal equal to
12.

A plot of all values of p1a and p1b that will leave the value of p2 equal
to 12 will be a straight line (plotted as p1a vertical and p1b
horizontal). Therefore p1a and p1b can both change so as to leave the
value of p2 invariant.

Of course if p2 is a controlled variable, the control system can act to
oppose disturbances of p2 by altering p1a, p1b, or both when disturbances
of p1a, p1b, or both occur. You can model this with Vensim if you
make the output function an integrator so the system will be stable. You
will find that disturbances can move the point on a plot of p1a against
p1b freely along the line where p1a*2 + p1b*3 = reference value, but
the control system will keep the point from moving off that line. Change
the reference level, and it will be a different line along which
disturbances can move the point without resistance.
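
For anyone without Vensim at hand, here is a minimal sketch of the same
simulation in plain Python. Everything below is illustrative only: the
drifting disturbances, the loop gain, and the assumption that the system's
output adds equally to both lower-level signals are my own choices, not
part of the model as stated above.

    import random

    k1a, k1b = 2.0, 3.0        # input-function weights from the example
    r = 12.0                   # reference value for p2
    o = 0.0                    # integrating output
    gain, dt = 2.0, 0.1

    d1a = d1b = 0.0
    for _ in range(2000):
        # slowly drifting disturbances acting on the two lower-level signals
        d1a += random.uniform(-0.05, 0.05)
        d1b += random.uniform(-0.05, 0.05)

        p1a = d1a + o                  # lower-level perceptions: disturbance plus output
        p1b = d1b + o
        p2 = k1a * p1a + k1b * p1b     # higher-level perception

        o += gain * (r - p2) * dt      # integrating output function

    print(round(p2, 2))   # stays near 12 despite the disturbances

Plotting p1a against p1b over such a run shows the point sliding along the
line 2*p1a + 3*p1b = 12 while p2 itself stays pinned near the reference.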

So I really think the problem you describe is a nonproblem. Or better,
the way it’s stated, it’s a badly framed problem (talking about framing
problems!).

Who says we know what the (objectively) important changing stimuli are?
We don’t. We have to find out what they are, by experience or (advantage
of being human) hearing it from someone else (“Stay away from those
brown things with long teeth”). We learn to perceive things by
constructing perceptual input functions specialized to report them in the
form of variable perceptual signals. We develop thousands of input
functions each of which receives not two but hundreds of signals from
lower-order systems, so there are hundreds of dimensions in which the
environment can change either with or without changing any one perceptual
signal.
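
To put a number on that, here is a small sketch in plain Python (the
random weights and inputs are stand-ins for a real input function, purely
for illustration): any change in the inputs that is orthogonal to the
weight vector leaves the perceptual signal exactly unchanged, and with N
inputs there are N - 1 independent directions of that kind.

    import random

    N = 300
    k = [random.uniform(0.0, 1.0) for _ in range(N)]    # input-function weights
    x = [random.uniform(0.0, 10.0) for _ in range(N)]   # lower-level signals

    def p(signals):
        # a simple weighted-sum input function
        return sum(ki * si for ki, si in zip(k, signals))

    before = p(x)

    # change two inputs in a direction orthogonal to the weights:
    # k[0]*delta0 + k[1]*delta1 = 0, so the weighted sum is unaffected
    delta0 = 1.0
    delta1 = -k[0] * delta0 / k[1]
    x[0] += delta0
    x[1] += delta1

    print(round(before, 6), round(p(x), 6))   # the two values agree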

One answer to this example is that we have evolved mechanisms for
differentiating meaningful from less than meaningful stimulus change
(i.e., those whose input functions read "lion - danger" survived to pass
on that input function), and we developed inhibitors for the
changing-perspective problem.

Naah. That’s much too fuzzy a model. What’s “meaningful?”
What's a changing perspective in terms of neural signals? What kind of
operation is “differentiating?” This sort of answer doesn’t
answer anything. I think the HPCT solution is a better direction to take.
At least it’s specific enough to make simple models with, and maybe to
make more complex ones eventually.

A second example is
the chess game. When playing chess, the optimal move is theoretically
knowable because the rules of play are fixed. Yet, even Deep Blue does
not do the processing required to find the optimal solution because it is
too onerous (too many options to consider). Humans do even less of this
than these kinds of programs. How do we "know" what to consider, and when
to stop considering?

It’s not as if there is something there to know and we just have
to find it out. We invent strategies,
and learn them from others, and try them. Some work better than others.
Some good ones take too much computing time, unless you have a computer
to carry them out for you. Nobody has figured out even how to define the
optimal moves, much less compute them on the fly. As soon as they do
define them, chess will cease to be interesting.

Think “reorganization.”

Best,

Bill P.

[From Bjorn Simonsen (2005.08.24,08:45 EST)]

From Bill Powers (2005.08.23.1145 MDT)

Consider a very simple example with only two input signals and
one output (perceptual) signal:

p2 = k1a*p1a + k1b*p2a

I guess you would write:

p2 = k1a*p1a + k1b*p1b (?)

Suppose k1a is 2 and k1b is 3. What values can p1a and p1b have
that will leave p2 with a value of, say, 12?

I guess you would write: Suppose k1a is ½ (half). (?)

bjorn

[From Bill Powers (2005.08.24.0222 MDT)]

Bjorn Simonsen (2005.08.24,08:45 EST) --

Suppose k1a is 2 and k1b is 3. What values can p1a and p1b have
that will leave p2 with a value of, say, 12?

I guess you would write: Suppose k1a is ½ (half). (?)

I don't know what you mean, but in answering I see I made a mistake
deriving the formula.


2*p1a + 3*p1b = 12

p1a + 1.5*p1b = 6

p1a = 6 - 1.5*p1b

To check it out, try some values of p1b:

p1b    p1a    2*p1a + 3*p1b
 4     0          12
 3     1.5        12
 2     3          12
 1     4.5        12
 0     6          12

So that works.
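
The same check takes a few lines of plain Python, offered here only as a
convenience:

    # verify the corrected relation p1a = 6 - 1.5*p1b for 2*p1a + 3*p1b = 12
    for p1b in (4, 3, 2, 1, 0):
        p1a = 6 - 1.5 * p1b
        print(p1b, p1a, 2 * p1a + 3 * p1b)   # last column is 12 in every row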

Best,

Bill P.

[From Jeff Vancouver (2005.08.24.0900 EST)]

[From Bill Powers (2005.08.23.1145 MDT)]

[old] Two examples might help and also point to the solution. First,
consider yourself in a room looking at a wall in that room. Shift a foot
in either direction and all the stimuli about that wall have changed, yet
you do not really notice it. The wall looks the same. At a very low level,
the differences in the sensations do not become differences in perceptions
(or the low-level perception differences do not propagate up to the higher
input functions). That is, somehow our minds know what changes or
differences to ignore (and presumably what to send up the hierarchy). How
do we do this?

[Bill] That’s already taken care of in HPCT. A higher
system’s perception is some function of a set of lower-level perceptual
signals. In general, this is a many-to-one transformation, meaning that
multiple input signals produce a single higher-level perceptual signal.
Consider a very simple example with only two input signals and one output
(perceptual) signal:

p2 = k1a*p1a + k1b*p2a

This means that the perceptual signal of the higher order (p2) is computed as
the sum of two signals in the lower order of systems, p1a and p1b. The two
weights k1a and k1b are characteristics of the higher-order perceptual input
function and determine the relative contributions of p1a and p1b to the
perceptual signal p2.

Of course if p2 is a controlled variable, the control system can act to
oppose disturbances of p2 by altering p1a, p1b, or both when disturbances
of p1a, p1b, or both occur. You can model this with Vensim if you make the
output function an integrator so the system will be stable. You will find
that disturbances can move the point on a plot of p1a against p1b freely
along the line where p1a*2 + p1b*3 = reference value, but the control
system will keep the point from moving off that line. Change the reference
level, and it will be a different line along which disturbances can move
the point without resistance.

So I really think the problem you describe is a nonproblem. Or better, the way
it’s stated, it’s a badly framed problem (talking about framing problems!).

[new] If I understand you correctly, the input function for p2 is set up
to produce the same output no matter the angle at which the object is
looked at. That is, p2 is a book whether I look at it from the top or the
side.

Yet this does not address the question of how the input function arises.
How is it that p2a develops a weight that gives it some impact on p2 and
not p2z? Having a zero weight for an input signal means it will be
ignored, but why does it have a zero weight? One answer is that p2z is
literally too far away from p2's input function: some level of proximity
of the neurons carrying the signals must exist for associations to be made
during a reorganization process. Moreover, as the mind interacts with the
world, dendrites grow such that eventually p2z is close enough that an
association could be made. Said more abstractly, an answer is that not all
possible signals are assessed for possible inclusion in an input function.
The result is that sometimes, particularly early in an organism's career,
important signals are missed. This is just speculation. Other
systems/processes might be involved. Also, it is true that unimportant
signals are included in input functions when they should not be. We call
these superstitions.
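
To make that speculation a little more concrete, here is a toy sketch in
plain Python of a proximity constraint on which signals can enter an input
function. The one-dimensional "positions", the radius, and the random
weights are all invented for illustration; nothing here is a claim about
actual neuroanatomy.

    import random

    # 26 candidate level-1 signals, p1a ... p1z, each with a made-up "position"
    signals = {"p1" + chr(ord("a") + i): random.uniform(0.0, 10.0) for i in range(26)}
    position = {name: random.uniform(0.0, 1.0) for name in signals}
    input_fn_position = 0.5
    radius = 0.2   # in the speculation, this effectively grows as dendrites grow

    weights = {name: 0.0 for name in signals}
    for name in signals:
        if abs(position[name] - input_fn_position) <= radius:
            # near enough: a candidate for a nonzero weight during reorganization
            weights[name] = random.uniform(0.0, 1.0)
        # signals outside the radius (perhaps a distant p1z) keep weight 0: ignored

    p2 = sum(weights[n] * signals[n] for n in signals)
    included = sum(1 for w in weights.values() if w > 0)
    print(included, "of", len(signals), "signals included; p2 =", round(p2, 2))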

[old] A variation on this example is to consider that some change in
stimuli is important (e.g., a lion comes into view). How do we know, of
all the stimuli that are changing, what the important changing stimuli
are?

[Bill] Who says we know
what the (objectively) important changing stimuli are? We don’t. We have to
find out what they are, by experience or (advantage of being human) hearing it
from someone else (“Stay away from those brown things with long
teeth”). We learn to perceive things by constructing perceptual input
functions specialized to report them in the form of variable perceptual
signals. We develop thousands of input functions each of which receives not two
but hundreds of signals from lower-order systems, so there are hundreds of
dimensions in which the environment can change either with or without changing
any one perceptual signal.

[new] You are beginning to get to the issue here. The thing is, if it were
hundreds, that might be fine, but it is millions. Does that change the
nature of the problem? Is it reasonable to suggest that all possible
signals are available for inclusion in an input function? This seems
problematic to me. It seems to me that the hierarchy helps (but I am not
sure exactly how) and that proximity matters (because physics says it has
to).

Meanwhile, has anyone modeled this "advantage of being human" process? I
am not disagreeing with the concept, but I am curious what the structure
of the control systems would be that allows it to happen.

[old] One answer to this example is that we have evolved mechanisms for
differentiating meaningful from less than meaningful stimulus change
(i.e., those whose input functions read "lion - danger" survived to pass
on that input function), and we developed inhibitors for the
changing-perspective problem.

[Bill] Naah.
That's much too fuzzy a model. What's "meaningful?" What's a changing
perspective in terms of neural signals? What kind of operation is
“differentiating?” This sort of answer doesn’t answer anything. I
think the HPCT solution is a better direction to take. At least it’s specific
enough to make simple models with, and maybe to make more complex ones
eventually.

[new] I would not
be so quick to reject this model. It is fuzzy, but so are we. Nor am I
suggesting that such a process accounts for all the aplomb which we seem to
have (see above for description of learning processes). Yet, if I understand
correctly, your HPCT example seems to assume a designed system. I must be wrong
there, yes?

[Bill] It’s not as if there
is something there to know and we just have to find it out. We invent
strategies, and learn them from others, and try them. Some work better than
others. Some good ones take too much computing time, unless you have a computer
to carry them out for you. Nobody has figured out even how to define the
optimal moves, much less compute them on the fly. As soon as they do define
them, chess will cease to be interesting.

[new] Yes, but
how does this happen?

Think “reorganization.”

[new] I need
details. Your HPCT example did not speak to reorganization. The weights were
givens. I am not really expecting that you have all the answers. Indeed, I know
you do not, as you have said many times. I am simply trying to assess the boundary
of our ignorance.

Jeff Vancouver

[From Bill Powers (2005.08.24.0746 MDT)]

Jeff Vancouver (2005.08.24.0900 EST) --

[new] If I understand you correctly, the input function for p2 is set up
to produce the same output no matter the angle at which the object is
looked at. That is, p2 is a book whether I look at it from the top or the
side.

That's the right analogy, although of course seeing a book as a book at
any angle would involve three dimensions and much more complex
transformations than in my example.

Yet this does not address the question of how the input function arises.
How is it that p2a develops a weight that gives it some impact on p2 and
not p2z?

Wait a minute, things got garbled there. P2 is just a variable which can
have any value from zero to some maximum. The symbol doesn’t mean that
the value of p is 2; it means that there is a variable at the second
level, named “p2”. There are two variables at the first level,
both named p1 to show they are in level 1. I used “a” and
“b” to differentiate between them so p1a is one of the
variables at level 1, and p1b is the other variable. All these variables
can have any value between zero and max, so that, for example p1a = 4, or
p2 = 11.

Having a zero
weight for an input signal means it will be ignored, but why does it have
a zero weight?

“Why” is asking for the whole answer to how reorganization
works. I ignore such questions, figuring that we will find out
eventually, and I have enough to do just answering little
questions.
In my example, the weights are named like the variables: “k” is
the general symbol for a constant; k1 is a constant applied to a
perception coming from a first-level system. The constant applied to the
signal p1a is named k1a, and the constant applied to p1b is named k1b.
The values of k1a and k1b are properties of the second-level input
function, telling us how much each lower-level perception contributes to
the value of the second level perception p2. If k1a is 2, that says that
signal p1a is multiplied by 2 and added in to the value of p2. With k1b
being 3, signal p1b is multiplied by 3 and added to the value of p2. So
the total value of signal p2 is 2*p1a + 3*p1b. The constants could have
any other values, of course; I’m just talking about the consequences of
having a particular pair of values, however they got that way.
That formula defines the input function of the higher system. It
determines how the second-level perception p2 will vary as the two lower
signals, p1a and p1b, vary. See my post to Bjorn from last
night.

As to how the weightings (k1a and k1b) come to have the values they do,
this is a matter of learning/reorganization, about which we know very
little in detail. Weightings, as discussed in chapter 3 of BCP, can be
determined by the number of branches into which an axon divides just
before synapsing with dendrites. That branching is called, in neurology,
“arborization.” If an axon branches into three
“processes” as they are called, then for each impulse reaching
the junction three impulses leave it, one in each branch or process.
Since all the branches converge on the same cell, the signal is
effectively multiplied by a factor of 3. Some incoming axons arborize
into several hundred processes just before synapsing with the dendrites
of a receiving cell.

There is still another way in which weightings of a given incoming signal
can be changed. There are vesicles at the end of each process, in which
neurotransmitters are manufactured. The number of vesicles determines how
much neurotransmitter will be released for each neural impulse reaching
the end of the neural process. In addition, there is variable re-uptake
of emitted neurotransmitter molecules, which re-enter the vesicles they
came from after their message has been passed across the synaptic cleft
to the dendrite of the receiving cell. And of course the messenger
molecules in the receiving cell can also vary in concentration, altering
the sensitivity to neurotransmitters.

One answer is that p2z is literally too far away from p2's input function:
some level of proximity of the neurons carrying the signals must exist for
associations to be made during a reorganization process.

This, too, is a basic part of reorganization. A growing axon steers up
the gradient of substances released by the receiving cell, and the
receiving cell releases those substances as a way of altering its own
input of neural signals. Capillary blood vessels grow right along the
same paths as the axons, supplying nutrients.

The way you put it makes no sense to me, however. What is p2z? You seem
to treat it as something physically different from p2. Your model seems
very different from mine.

Moreover, as the mind interacts with the world, dendrites grow such that
eventually p2z is close enough that an association could be made. Said
more abstractly, an answer is that not all possible signals are assessed
for possible inclusion in an input function.

I don't like it. We're not talking about "associations" here,
but about how neural signals are weighted at the inputs to receiving
neurons. Association would be a much more global phenomenon involving
extensive circuitry.

The result is that sometimes, particularly early in an organism's career,
important signals are missed. This is just speculation. Other
systems/processes might be involved. Also, it is true that unimportant
signals are included in input functions when they should not be. We call
these superstitions.

I think we’re getting way ahead of the model here.

[old] A variation on this example is to consider that some change in
stimuli is important (e.g., a lion comes into view). How do we know, of
all the stimuli that are changing, what the important changing stimuli
are?

[Bill]
Who says we know what the
(objectively) important changing stimuli are? We don’t. We have to find
out what they are, by experience or (advantage of being human) hearing it
from someone else (“Stay away from those brown things with long
teeth”). We learn to perceive things by constructing perceptual
input functions specialized to report them in the form of variable
perceptual signals. We develop thousands of input functions each of which
receives not two but hundreds of signals from lower-order systems, so
there are hundreds of dimensions in which the environment can change
either with or without changing any one perceptual signal.

[new] You are beginning to get to the issue here. The thing is, if it were
hundreds, that might be fine, but it is millions. Does that change the
nature of the problem?

No. The difficulty here is thinking of perception as a problem of
recognizing things that are already actually there in real reality. That
is not the problem. The problem is one of taking a large number of
variables at the millions of inputs to the nervous system, and
constructing input functions that will produce signals showing some kind
of orderliness. It’s not as if the world were full of information trying
to get into the nervous system, so filters have to be constructed to keep
out the excess. That's a very old-fashioned idea, and I don't know how
anyone who has looked seriously at epistemology could support it. The
problem is not in deciding what information is important and what is
unimportant; it’s devising means for getting any information at all from
the environment. Active computing is needed to create information out of
all those inputs, so there is something more or less orderly to pass on
to higher systems. If you see a lion, that’s a triumph of perceptual
computing. You don’t also perceive all those other things that you have
decided are too unimportant to perceive. They don’t even exist until
you’ve constructed input functions that can drag them out of the mishmash
of the environment. And this happens at every level of organization. A
perceptual input function that detects the high-level situation we call
"danger" does not make use of the length of the lion's whiskers.
It doesn’t perceive them and then decide they’re unimportant. It just
doesn’t perceive them.

Is it reasonable to suggest that all possible signals are available for
inclusion in an input function? This seems problematic to me. It seems to
me that the hierarchy helps (but I am not sure exactly how) and that
proximity matters (because physics says it has to).

Not all possible signals are available for inclusion in an input
function. Higher-order signals, for example, are travelling away from the
input function and never get to it. Signals reach the input function from
more than one lower level, I think, but most of them are from the
immediately adjacent lower level. Also, since the brain has extent in
space, the signals available to a forming input function must be those
from nearby parts of the brain (with obvious exceptions relating to the
long neural tracts, but those tracts form early when source and
destination are much closer together, and stretch). Optical information,
for example, comes from specialized nuclei in the midbrain and brainstem,
not from the toes.

Meanwhile, has anyone modeled this "advantage of being human" process? I
am not disagreeing with the concept, but I am curious what the structure
of the control systems would be that allows it to happen.

I don’t think there’s anything special except that we have a lot of
levels, more developed at each level than other animals, so we can use
one perception to stand for others. Other animals can do the same things,
just not in such an overelaborate way.

[new] I would not be
so quick to reject this model. It is fuzzy, but so are we. Nor am I
suggesting that such a process accounts for all the aplomb which we seem
to have (see above for description of learning processes). Yet, if I
understand correctly, your HPCT example seems to assume a designed
system. I must be wrong there, yes?

Yes. If it’s a designed system, I want my money
back.

[Bill] It’s
not as if there is something there to know and we just have to find it
out. We invent strategies, and learn
them from others, and try them. Some work better than others. Some good
ones take too much computing time, unless you have a computer to carry
them out for you. Nobody has figured out even how to define the optimal
moves, much less compute them on the fly. As soon as they do define them,
chess will cease to be interesting.

[new] Yes, but how
does this happen?

That's what the whole concept of reorganization is about. I don't know
what you're asking for here in asking "how". Are you asking for
an advance view of the final results of a couple of thousand years of
research?

Think
“reorganization.”

[new] I need
details. Your HPCT example did not speak to reorganization. The weights
were givens.

The weights are among the properties of the system that get reorganized.
Exactly what causes this reorganizing, especially in relation to error
signals as I have proposed, is completely unknown. Go ask somebody
smarter, or buy a time machine if you can’t wait.
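
For what it is worth, the kind of error-driven random reorganization
discussed in the PCT literature (often described with an E. coli analogy)
can at least be sketched in plain Python: keep changing parameters in the
same random direction while error falls, and "tumble" to a new random
direction when it rises. The task below, finding weights that reproduce a
fixed 2-and-3 weighting, is invented purely to give the procedure
something to work on; it is a cartoon, not a model of the nervous system.

    import random

    def chronic_error(w):
        # how badly p = w[0]*x1 + w[1]*x2 misses a target combination 2*x1 + 3*x2
        samples = [(1.0, 2.0), (3.0, 1.0), (2.0, 2.0), (0.5, 4.0)]
        return sum((w[0]*x1 + w[1]*x2 - (2*x1 + 3*x2))**2 for x1, x2 in samples)

    w = [random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)]
    direction = [random.uniform(-1, 1), random.uniform(-1, 1)]
    step = 0.05
    prev = chronic_error(w)

    for _ in range(5000):
        w = [wi + step * di for wi, di in zip(w, direction)]
        e = chronic_error(w)
        if e >= prev:   # error rose: tumble to a new random direction
            direction = [random.uniform(-1, 1), random.uniform(-1, 1)]
        prev = e

    print([round(wi, 2) for wi in w], round(prev, 4))   # weights drift toward roughly (2, 3)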

Best,

Bill P.

[From Jeff Vancouver (2005.08.25.0949 EST)]

[From Bill Powers (2005.08.24.0746 MDT)]

“Why” is asking for the whole answer to how reorganization
works. I ignore such questions, figuring that we will find out eventually, and I
have enough to do just answering little questions.

Fair enough.

The way you put it makes no sense to me, however. What is p2z? You seem to
treat it as something physically different from p2. Your model seems very
different from mine.

Oops, I meant p1z. It is a potential level
1 perception that a level 2 input function could possibly incorporate.

I don't like it. We're not talking about "associations" here, but
about how neural signals are weighted at the inputs to receiving neurons.
Association would be a much more global phenomenon involving extensive
circuitry.

Associations, as often conceived, particularly these days, do not require
extensive circuitry at all (though I cannot speak to all
conceptualizations of the term). They are (for our purposes) like the
weightings (i.e., the "k's") you describe.

No. The difficulty here is thinking of perception as a problem of
recognizing things that are already actually there in real reality. That is not
the problem. The problem is one of taking a large number of variables at the
millions of inputs to the nervous system, and constructing input functions that
will produce signals showing some kind of orderliness.

Yes, exactly.

It’s not as if the world were full of information trying to get into
the nervous system, so filters have to be constructed to keep out the excess.
That's a very old-fashioned idea, and I don't know how anyone who has looked
seriously at epistemology could support it.

I agree. Part of what we plan to say is exactly this. Your second
statement is bothersome because the audience might complain that we are
describing a straw man (no serious thinker, i.e., no one who thinks about
the frame problem, holds the old-fashioned idea). But the
editors/reviewers will weed that out if it is an issue.

Not all possible signals are available for inclusion in an input
function. Higher-order signals, for example, are travelling away from the input
function and never get to it. Signals reach the input function from more than
one lower level, I think, but most of them are from the immediately adjacent
lower level. Also, since the brain has extent in space, the signals available
to a forming input function must be those from nearby parts of the brain (with
obvious exceptions relating to the long neural tracts, but those tracts form
early when source and destination are much closer together, and stretch).
Optical information, for example, comes from specialized nuclei in the midbrain
and brainstem, not from the toes.

It sounds like we are thinking much the
same here as well. I am not sure I agree that perceptual signals only go up the
hierarchy, but I am glad to see you suspect signals might “jump”
levels. But I am not really competent enough to speak to these issues.

I don’t think there’s anything special except that we have a lot of levels,
more developed at each level than other animals, so we can use one perception
to stand for others. Other animals can do the same things, just not in such an
overelaborate way.

I agree with this as well (we were talking
about using verbal interactions with others to learn). I also believe, though,
that if computational models of processes like these could be developed and
shown consistent with observation (particularly in ways that are better than
other models), the science would stand up and take notice.

That's what the whole concept of reorganization is about. I don't know
what you're asking for here in asking "how". Are you asking for an
advance view of the final results of a couple of thousand years of research?

I am not as skeptical as you regarding the
timeline of this research. Progress is being made as we type.

The weights are among the properties of the system that get reorganized.
Exactly what causes this reorganizing, especially in relation to error signals
as I have proposed, is completely unknown. Go ask somebody smarter, or buy a
time machine if you can’t wait.

You are pretty smart.

Thanks for your time on this matter. It is
great that such a forum exists. I wish I had more time to interact with it (and
the resources to go to CSG meetings). Take care.

Jeff