Re: Uncertainty (was second-order and third-order beliefs)
[Martin Taylor 2008.04.02.14.48]
This is in part a test message in an investigation of why a long
message on this topic twice failed to be posted when I submitted it,
and also failed when Rick submitted it on my behalf. This version may
fail to be posted because I use the name Shannon.
[Bill Powers (2008.04.02.1106 MDT)]
Martin Taylor 2008.04.02.10.38 –
Bill:
At any rate, it seems to me that
saying I believe something introduces the possibility of doubt.
“He’s an honest person,” versus “I believe he’s an
honest person”.
Martin:
Yes, I agree. That’s the distinction I would make between the kind
of perception in “I see a chair in this room” and “I
believe there is a chair in the next room” and “I believe
that what I see through the fog is a chair”. They are all
perceptions, but the latter two have less supporting information –
the perceptual input function has ambiguous inputs along with the well
defined ones.
This suggests that all perceptions actually have two attributes: the
value and the uncertainty.
I see what you mean. However, I don’t see the uncertainty as being an
attribute of a perception, but simply as another
perception.
I think you and I and Rick are all agreed on that. Shannon
certainly would say so. For Shannon, all uncertainty is about
something at the other end of a communication channel – in this case the
Perceptual Input Function and what lies between it and the
environmental variable to which the perceptual signal value has an
uncertain relationship. Shannon’s “Information” is all about
the change in the uncertainty about something before and after the
something is observed, whether the uncertainty is about the identity of the
something or about its value on a continuum.
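Shannon’s “information as change in uncertainty” can be sketched in a few lines. This is my illustration, not anything from the thread: the prior and posterior distributions below are invented numbers, and entropy is used as the measure of uncertainty, as Shannon defined it.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Prior uncertainty about which of four objects is in the next room.
prior = [0.25, 0.25, 0.25, 0.25]

# After a foggy glimpse, the observer's posterior favours one object.
posterior = [0.7, 0.1, 0.1, 0.1]

# Information gained by the observation = reduction in uncertainty.
info = entropy(prior) - entropy(posterior)
print(round(entropy(prior), 3))  # 2.0 bits before observing
print(round(info, 3))            # 0.643 bits gained
```

The same arithmetic applies whether the uncertainty is about the identity of the something (as here) or about its value on a discretized continuum.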
I’d say it is the output of a
perceptual input function that takes lower-order perceptions as inputs
and generates an output perceptual signal that is a report on the
degree of uncertainty being perceived.
Uncertainty about what? That’s been the main issue that’s been
troubling me. Just to perceive uncertainty is of no value. You can’t
control such a perception. The only way to control a perception of
uncertainty is to act in a way that relates to the thing in the
environment about which you are uncertain.
To see a relationship between that
perception and the perceptions from which it was derived would require
a third perceptual input function receiving both the uncertainty
signal and the signals from which the uncertainty was derived. That
would generate the perception of a relationship between a sense of
uncertainty and some other perception.
This gets a little baroque, doesn’t it? And it seems to
presuppose a wiring to a specific relation perceiver that has input
functions hard-wired from each perceptual signal and from its
corresponding uncertainty perceptual signal. Then there is an issue as
to how that “reference” perceptual control for the relation
perception comes to affect the actual control of the uncertainty
perception about the perception that is uncertain. In effect, you are
doubling or tripling the number of independent control units in the
system.
Isn’t it simpler to imagine that the relation between perceptual
signal values and their uncertainties is maintained in the connection
pattern through the system, up to the level at which the particular
perception is produced? Such a connection organization would avoid
several problems without, so far as I can see, introducing the novel
ones yours introduces.
At first sight, I don’t like your proposal, because I don’t see
how it could work, and because it seems unnecessarily complicated.
Neither reason is sufficient to say it’s wrong, though.
It would also explain how we can
perceive an unpredictably varying perception without feeling uncertain
about it,
That, we can do, for sure. But unpredictable variation has to do
only with uncertainty about the predicted future value, not with uncertainty
about the present value, which is the value we normally don’t feel
uncertain about. (Caveat: we do feel uncertain about the present
value if the changes in value are fast enough as well as
being unpredictable.)
and how we can have a feeling of
uncertainty without knowing what it is we’re uncertain
about.
I suppose, since uncertainty is a perception, one can have
uncertainty about almost anything, and knowing what we are uncertain
about could be one of them. But I don’t know that I can remember
experiencing it.
My biggest problem is with your proposal
for a vector representation (it’s not really a complex number, is it?
No square roots of -1). This requires the ability to label signals:
this is a signal indicating a perception, and that is a signal
indicating the uncertainty in the perception (or the thing
perceived).
In the system whence I got the notion, it was indeed a complex
number, defined in r–theta (polar) form, where in PCT language theta was the
magnitude, and the greater r, the less the uncertainty about theta. That
representation worked very well in a perceptron-like structure.
I don’t think there is any way to
label neural signals. They are all alike, just like electrical signals
in a circuit.
But in the PCT hierarchy, all signal paths are labelled. They
connect THIS output to THAT input. No more labelling than that is
required.
You’re really reverting to the
“pattern” or “coding” idea of perception, in which
one neural signal pathway can carry signals representing many
different perceptions, with the code indicating which perception it
is.
That isn’t necessary. Supposing the complex number representation
were actually valid, all that would be required for using complex
numbers would be to replace the single connection for value with two
connections.
That idea in turn is conditioned by
the concept that nerves carry “messages” which are really
discrete packages – actually, what I would call category
perceptions.
I don’t know that it need be category perceptions at all.
Apparently the timing pattern of neural impulses relative to their
neighbours is critical, perhaps more so than their frequency. That
gives a two-dimensional analogue representation, which could
accommodate a complex variable, especially since it seems that the
less prominent the target (such as a light flash or, probably, a tone)
the earlier the first relevant impulse.
PCT is at bottom a circuit-based kind of
model, in which signals are just signals, serving only to connect the
output of one function-network to the input of another and conveying
only magnitude information.
Classically, in neuropsychology, that’s called “place
coding”. There used to be a lot of argument as to whether
place-coding or signal-coding was the sole or the most important
method of getting information through the neural system. I am not up
on contemporary neurophysiology, but the little I do know suggests
that both are important. Certainly impulse timing seems to
matter.
No signal considered in isolation
has any meaning; to figure out its meaning you have to know where it
came from, and also what is done with it after it gets to its
destination. And, wasteful or not, the basic architecture is that of
Selfridge’s Pandemonium model, in which each different kind of
perception comes from a physically different input function that
produces it. Only the connectivity matters; the signals are all
alike.
Yes. In a place-coding system it doesn’t matter that one path
resulting in P(V) connects, eventually, to environmental variable V,
and another derives ultimately from some function that outputs
uncertainty “U(V)”. The fact that the signals are on THOSE
wires is what matters. And it would be the same if the Perceptual
Input Function that produces P(V) also produced U(V), and if those two
signals, on separate connectors, both served as input to any
higher-level PIF that used P(V).
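A toy version of that arrangement, under my own illustrative assumptions (the choice of the mean for P(V) and the spread of the inputs for U(V) is not from the discussion; any pair of functions on separate connections would make the same structural point):

```python
import statistics

def perceptual_input_function(inputs):
    """Sketch of a PIF with two output connections: P(V), the perceived
    value, and U(V), an uncertainty signal. Mean and standard deviation
    are stand-ins chosen for illustration only."""
    p_v = statistics.mean(inputs)
    u_v = statistics.stdev(inputs) if len(inputs) > 1 else 0.0
    return p_v, u_v  # two signals, on two separate "wires"

# Well-defined inputs: high agreement, so a small uncertainty signal.
p1, u1 = perceptual_input_function([4.9, 5.0, 5.1])

# Ambiguous inputs ("a chair seen through fog"): same mean, more doubt.
p2, u2 = perceptual_input_function([2.0, 5.0, 8.0])
print(round(p1, 1), u1 < u2)  # 5.0 True
```

No signal here carries a label; what makes one output “the value” and the other “the uncertainty” is only which connection it travels on, which is the place-coding point.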
There is no machinery at the
receiving end that can distinguish different kinds of perceptions in
any one signal path. What arrives by one signal path is simply a
representation of the magnitude of one variable. The variable might
represent uncertainty, or it might represent the taste of
applesauce.
Yep.
But it just tells the destination how
much of it there is, not WHAT it is.
So far, so good.
The identification of WHAT doesn’t
come until the category level – and then, all that says WHAT is there
is the fact that there is more signal in this path than that
path.
Why would you say that? WHAT it is determines which error signal
induces what action outputs, outputs that affect WHICH environmental
variable. That’s as true at the lower levels as it is at the category
level and above. I fail to see what “category level” has to
do with the situation.
In a way, I hope my invocation of Shannon prevents this message from
being posted, because that would resolve a mystery.
Martin